Dataset schema (per row):
  query_id            string, 32 characters (fixed length)
  query               string, 5 to 5.38k characters
  positive_passages   list of passages, 1 to 23 entries
  negative_passages   list of passages, 4 to 100 entries
  subset              string, one of 7 values
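Each row pairs a query string with lists of positive and negative passages, where every passage is an object with docid, text, and title fields, plus a subset label. As a minimal sketch of how such rows could be read, the snippet below assumes the records are stored as JSON Lines in a local file; the file name data.jsonl and the printed summary are illustrative assumptions, not part of the original dataset documentation.

```python
import json

# Assumed storage: one JSON record per line in a local file.
# The path below is a placeholder, not a path given by the dataset.
DATA_PATH = "data.jsonl"

with open(DATA_PATH, encoding="utf-8") as fh:
    for line in fh:
        row = json.loads(line)

        query_id = row["query_id"]            # 32-character identifier
        query = row["query"]                  # query text, often a paper title
        positives = row["positive_passages"]  # list of {"docid", "text", "title"}
        negatives = row["negative_passages"]  # list of {"docid", "text", "title"}
        subset = row["subset"]                # one of 7 subset labels, e.g. "scidocsrr"

        print(query_id, subset, len(positives), len(negatives))
        break  # inspect only the first row
```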
8a6d0febc486850997b673ba4f98b323
Inexperience and experience with online stores: the importance of TAM and trust
[ { "docid": "8e70aea51194dba675d4c3e88ee6b9ad", "text": "Trust is central to all transactions and yet economists rarely discuss the notion. It is treated rather as background environment, present whenever called upon, a sort of ever-ready lubricant that permits voluntary participation in production and exchange. In the standard model of a market economy it is taken for granted that consumers meet their budget constraints: they are not allowed to spend more than their wealth. Moreover, they always deliver the goods and services they said they would. But the model is silent on the rectitude of such agents. We are not told if they are persons of honour, conditioned by their upbringing always to meet the obligations they have chosen to undertake, or if there is a background agency which enforces contracts, credibly threatening to mete out punishment if obligations are not fulfilled a punishment sufficiently stiff to deter consumers from ever failing to fulfil them. The same assumptions are made for producers. To be sure, the standard model can be extended to allow for bankruptcy in the face of an uncertain future. One must suppose that there is a special additional loss to becoming bankrupt a loss of honour when honour matters, social and economic ostracism, a term in a debtors’ prison, and so forth. Otherwise, a person may take silly risks or, to make a more subtle point, take insufficient care in managing his affairs, but claim that he ran into genuine bad luck, that it was Mother Nature’s fault and not his own lack of ability or zeal.", "title": "" }, { "docid": "4cff5279110ff2e45060f3ccec7d51ba", "text": "Web site usability is a critical metric for assessing the quality of a firm’s Web presence. A measure of usability must not only provide a global rating for a specific Web site, ideally it should also illuminate specific strengths and weaknesses associated with site design. In this paper, we describe a heuristic evaluation procedure for examining the usability of Web sites. The procedure utilizes a comprehensive set of usability guidelines developed by Microsoft. We present the categories and subcategories comprising these guidelines, and discuss the development of an instrument that operationalizes the measurement of usability. The proposed instrument was tested in a heuristic evaluation study where 1,475 users rated multiple Web sites from four different industry sectors: airlines, online bookstores, automobile manufacturers, and car rental agencies. To enhance the external validity of the study, users were asked to assume the role of a consumer or an investor when assessing usability. Empirical results suggest that the evaluation procedure, the instrument, as well as the usability metric exhibit good properties. Implications of the findings for researchers, for Web site designers, and for heuristic evaluation methods in usability testing are offered. (Usability; Heuristic Evaluation; Microsoft Usability Guidelines; Human-Computer Interaction; Web Interface)", "title": "" }, { "docid": "4506bc1be6e7b42abc34d79dc426688a", "text": "The growing interest in Structured Equation Modeling (SEM) techniques and recognition of their importance in IS research suggests the need to compare and contrast different types of SEM techniques so that research designs can be selected appropriately. 
After assessing the extent to which these techniques are currently being used in IS research, the article presents a running example which analyzes the same dataset via three very different statistical techniques. It then compares two classes of SEM: covariance-based SEM and partial-least-squares-based SEM. Finally, the article discusses linear regression models and offers guidelines as to when SEM techniques and when regression techniques should be used. The article concludes with heuristics and rule-of-thumb thresholds to guide practice, and a discussion of the extent to which practice is in accord with these guidelines.", "title": "" }, { "docid": "bd13f54cd08fe2626fe8de4edce49197", "text": "Ease of use and usefulness are believed to be fundamental in determining the acceptance and use of various corporate ITs. These beliefs, however, may not explain the user's behavior toward newly emerging ITs, such as the World-Wide-Web (WWW). In this study, we introduce playfulness as a new factor that reflects the user's intrinsic belief in WWW acceptance. Using it as an intrinsic motivation factor, we extend and empirically validate the Technology Acceptance Model (TAM) for the WWW context. © 2001 Elsevier Science B.V. All rights reserved.", "title": "" }, { "docid": "99f7aa4a6e3111d18ccbb527d2a9f312", "text": "This study investigates the development of trust in a Web-based vendor during two stages of a consumer’s Web experience: exploration and commitment. Through an experimental design, the study tests the effects of third party endorsements, reputation, and individual differences on trust in the vendor during these two stages.", "title": "" } ]
[ { "docid": "8b142a381feede01d52b0affc1cf1e46", "text": "A novel optimization design method for CWSP is presented to bring large tolerance by enlarging its performance margin. The proposed OFPD can be used in local optimization procedures and as well as in global ones to design an optimized CWSP.", "title": "" }, { "docid": "d5d6376c1925a44eede25aacb1ef3020", "text": "Commerce on the Internet is still seriously hindered by the lack of a common language for collaborative commercial activities. Although XML (Extendible Markup Language) allows trading partners to exchange semantic information electronically, it does not provide support for document routing. In this paper, we propose the design for an eXchangeable routing language (XRL) using XML syntax. Since XML is becoming a major international standard, it is understood widely. The routing schema in XRL can be used to support flexible routing of documents in the Internet environment. The formal semantics of XRL are expressed in terms of Petri nets and examples are used to demonstrate how it can be used for implementing inter-organizational electronic commerce applications.", "title": "" }, { "docid": "8b7d3410e279f335f3ed5c6d6e9b60bc", "text": "A wideband patch antenna loaded with a planar metamaterial unit cell is proposed. The metamaterial unit cell is composed of an interdigital capacitor and a complementary split-ring resonator (CSRR) slot. A dispersion analysis of the metamaterial unit cell reveals that an increase in series capacitance can decrease the half-wavelength resonance frequency, thus reducing the electrical size of the proposed antenna. In addition, circulating current distributions around the CSRR slot with increased interdigital finger length bring about the TM01 mode radiation, while the normal radiation mode is the TM10 mode. Furthermore, the TM01 mode can be combined with the TM10 mode without a pattern distortion. The hybridization of the two modes yields a wideband property (6.8%) and a unique radiation pattern that is comparable with two independent dipole antennas positioned orthogonally. Also, the proposed antenna achieves high efficiency (96%) and reasonable gain (3.85 dBi), even though the electrical size of the antenna is only 0.24λ0×0.24λ0×0.02λ0.", "title": "" }, { "docid": "fee4a6936f2bf0fa9984f5d042a5ffdd", "text": "Intonation, long thought to be a key to effectiveness in spoken language, is more and more commonly addressed in English language teaching through the use of speech visualization technology. While the use of visualization technology is a crucial advance in the teaching of intonation, such teaching can be further enhanced by connecting technology to an understanding of how intonation functions in discourse. This study examines the intonation of four readers reading out-of-context sentences and then the same sentences as part of coherent discourse-level texts. Two discourse-level uses of intonation, the use of intonational paragraph markers (paratones) and the distribution of tonal patterns, are discussed and implications for teaching intonation are addressed. 2004 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "a0410ab87defa87c1fdead0058f9d7c1", "text": "In this paper we analyze the error performance of free-space optical (FSO) communication over multiple hops. We first develop an error model for a single hop based on visibility, atmospheric attenuation, and geometric spread of the light beam. 
We model atmospheric visibility by Gaussian distributions with mean and variance values to reflect clear and adverse weather conditions. Based on this, we find the end-to-end bit error distribution of the FSO link for single hop and multi-hop scenarios. We present simulation results for decoded relaying, where each hop decodes the signal before retransmitting. We demonstrate that multi-hop FSO communication achieves a significant reduction in the mean bit error rate and also reduces the variance of the bit error rate. We argue that by lowering mean error and error variance, multi-hop operation facilitates an efficient system design and improves the reliability of the FSO link by application of specific coding schemes (such as forward error correction techniques).", "title": "" }, { "docid": "c4a7e413a12e62b66ec7512ac137ed31", "text": "Methylammonium lead iodide (CH3NH3PbI3) (MAPI)-embedded β-phase comprising porous poly(vinylidene fluoride) (PVDF) composite (MPC) films turns to an excellent material for energy harvester and photodetector (PD). MAPI enables to nucleate up to ∼91% of electroactive phase in PVDF to make it suitable for piezoelectric-based mechanical energy harvesters (PEHs), sensors, and actuators. The piezoelectric energy generation from PEH made with MPC film has been demonstrated under a simple human finger touch motion. In addition, the feasibility of photosensitive properties of MPC films are manifested under the illumination of nonmonochromatic light, which also promises the application as organic photodetectors. Furthermore, fast rising time and instant increase in the current under light illumination have been observed in an MPC-based photodetector (PD), which indicates of its potential utility in efficient photoactive device. Owing to the photoresponsive and electroactive nature of MPC films, a new class of stand-alone self-powered flexible photoactive piezoelectric energy harvester (PPEH) has been fabricated. The simultaneous mechanical energy-harvesting and visible light detection capability of the PPEH is promising in piezo-phototronics technology.", "title": "" }, { "docid": "23e5d6ab308be70276468b988213d8f5", "text": "Compared with traditional image classification, fine-grained visual categorization is a more challenging task, because it targets to classify objects belonging to the same species, e.g., classify hundreds of birds or cars. In the past several years, researchers have made many achievements on this topic. However, most of them are heavily dependent on the artificial annotations, e.g., bounding boxes, part annotations, and so on. The requirement of artificial annotations largely hinders the scalability and application. Motivated to release such dependence, this paper proposes a robust and discriminative visual description named Automated Bi-level Description (AutoBD). “Bi-level” denotes two complementary part-level and object-level visual descriptions, respectively. AutoBD is “automated,” because it only requires the image-level labels of training images and does not need any annotations for testing images. Compared with the part annotations labeled by the human, the image-level labels can be easily acquired, which thus makes AutoBD suitable for large-scale visual categorization. Specifically, the part-level description is extracted by identifying the local region saliently representing the visual distinctiveness. The object-level description is extracted from object bounding boxes generated with a co-localization algorithm. 
Although only using the image-level labels, AutoBD outperforms the recent studies on two public benchmark, i.e., classification accuracy achieves 81.6% on CUB-200–2011 and 88.9% on Car-196, respectively. On the large-scale Birdsnap data set, AutoBD achieves the accuracy of 68%, which is currently the best performance to the best of our knowledge.", "title": "" }, { "docid": "6f50b0c96ac5b82a9d031d484c23db84", "text": "Named Entity Recognition (NER) is a key component in NLP systems for question answering, information retrieval, relation extraction, etc. NER systems have been studied and developed widely for decades, but accurate systems using deep neural networks (NN) have only been introduced in the last few years. We present a comprehensive survey of deep neural network architectures for NER, and contrast them with previous approaches to NER based on feature engineering and other supervised or semi-supervised learning algorithms. Our results highlight the improvements achieved by neural networks, and show how incorporating some of the lessons learned from past work on feature-based NER systems can yield further improvements.", "title": "" }, { "docid": "766bc5cee369a729dc310c7134edc36e", "text": "Spatial multiple access holds the promise to boost the capacity of wireless networks when an access point has multiple antennas. Due to the asynchronous and uncontrolled nature of wireless LANs, conventional MIMO technology does not work efficiently when concurrent transmissions from multiple stations are uncoordinated. In this paper, we present the design and implementation of a crosslayer system, called SAM, that addresses the challenges of enabling spatial multiple access for multiple devices in a random access network like WLAN. SAM uses a chain-decoding technique to reliably recover the channel parameters for each device, and iteratively decode concurrent frames with misaligned symbol timings and frequency offsets. We propose a new MAC protocol, called CCMA, to enable concurrent transmissions by different mobile stations while remaining backward compatible with 802.11. Finally, we implement the PHY and MAC layer of SAM using the Sora high-performance software radio platform. Our evaluation results under real wireless conditions show that SAM can improve network uplink throughput by 70% with two antennas over 802.11.", "title": "" }, { "docid": "647163a9539df3a0a9dee1335481dfa5", "text": "In recent columns we showed how linear regression can be used to predict a continuous dependent variable given other independent variables1,2. When the dependent variable is categorical, a common approach is to use logistic regression, a method that takes its name from the type of curve it uses to fit data. Categorical variables are commonly used in biomedical data to encode a set of discrete states, such as whether a drug was administered or whether a patient has survived. Categorical variables may have more than two values, which may have an implicit order, such as whether a patient never, occasionally or frequently smokes. In addition to predicting the value of a variable (e.g., a patient will survive), logistic regression can also predict the associated probability (e.g., the patient has a 75% chance of survival). There are many reasons to assess the probability of a state of a categorical variable, and a common application is classification— predicting the class of a new data point. Many methods are available, but regression has the advantage of being relatively simple to perform and interpret. 
First a training set is used to develop a prediction equation, and then the predicted membership probability is thresholded to predict the class membership for new observations, with the point classified to the most probable class. If the costs of misclassification differ between the two classes, alternative thresholds may be chosen to minimize misclassification costs estimated from the training sample (Fig. 1). For example, in the diagnosis of a deadly but readily treated disease, it is less costly to falsely assign a patient to the treatment group than to the no-treatment group. In our example of simple linear regression1, we saw how one continuous variable (weight) could be predicted on the basis of another continuous variable (height). To illustrate classification, here we extend that example to use height to predict the probability that an individual plays professional basketball. Let us assume that professional basketball players have a mean height of 200 cm and that those who do not play professionally have a mean height of 170 cm, with both populations being normal and having an s.d. of 15 cm. First, we create a training data set by randomly sampling the heights of 5 individuals who play professional basketball and 15 who do not (Fig. 2a). We then assign categorical classifications of 1 (plays professional basketball) and 0 (does not play professional basketball). For simplicity, our example is limited to two classes, but more are possible. Let us first approach this classification using linear regression, which minimizes least-squares1, and fit a line to the data (Fig. 2a). Each data point has one of two distinct y-values (0 and 1), which correspond to the probability of playing professional basketball, and the fit represents the predicted probability as a function of height, increasing from 0 at 159 cm to 1 at 225 cm. The fit line is truncated outside the [0, 1] range because it cannot be interpreted as a probability. Using a probability threshold of 0.5 for classification, we find that 192 cm should be the decision boundary for predicting whether an individual plays professional basketball. It gives reasonable classification performance—only one point is misclassified as false positive, and one point as false negative (Fig. 2a). Unfortunately, our linear regression fit is not robust. Consider a child of height H = 100 cm who does not play professional basketball (Fig. 2a). This height is below the threshold of 192 cm and would be classified correctly. However, if this data point is part of the training set, it will greatly influence the fit3 and increase the classification threshold to 197 cm, which would result in an additional false negative. To improve the robustness and general performance of this classifier, we could fit the data to a curve other than a straight line. One very simple option is the step function (Fig. 2b), which is 1 when greater than a certain value and 0 otherwise. An advantage of the step function is that it defines a decision boundary (185 cm) that is not affected by the outlier (H = 100 cm), but it cannot provide class probabilities other than 0 and 1. This turns out to be sufficient for the purpose of classification—many classification algorithms do not provide probabilities. 
However, the step function also does not differentiate between the more extreme observations, which are far from the decision boundary and more likely to be correctly assigned, and those near the decision boundary for which membership in a b", "title": "" }, { "docid": "70f396f6904d7012e5af1099bbb11e2f", "text": "A 6-day-old, male, Kilis goat kid with complaints of poor sucking reflex, dysuria, and swelling on the scrotal area was referred and it began to urinate when the sac was pressed on. On the clinical examination of the kid, it was observed that the urethral orifice and process narrowed down. Skin laid between anus-scrotum did not close fully on the ventral line. The most important finding was the penile urethral dilatation, which caused the fluctuating swelling on the scrotal region. Phimosis and two ectopic testis were also found on the right and left side in front of the preputium. There were no pathological changes in the hematological and urine analyses. Urethral diverticulum was treated by urethrostomy and hypoplasia of penis was noted during operation. No treatment for hypoplasia penis, phimosis and ectopic testis was performed. Postoperatively, kid healed and urination via urethral fistula without any complications was observed.", "title": "" }, { "docid": "6859cecf9eedf9defdef8862377ca975", "text": "The issue of how to account for the interpretation of ‘only’ has always been exciting and challenging. Over the years many sophisticated proposals have been brought forward, but ‘only’ always managed to strike back by exposing another new and strange property. In this paper we will argue that there is a way to approach the meaning of ‘only’ that can deal with some of its well-known challenges but still is faithful to classical ideas. In section 2 we will start our discussion by introducing the traditional and predominant view on the meaning of ‘only’ – we will call it the focus alternative approach. The main aim of the section will be to argue that this is not the right way to account for the meaning of ‘only’. In section 3 we will then introduce a different approach, proposed by von Stechow (1991) – the background alternatives approach. We will develop a formalization of the latter analysis making use of minimal models and show that there is a close relation between the two contrasting approaches. But even though both approaches share the same driving idea, the background alternatives approach is better capable to deal with the challenges of the meaning of ‘only’. The rest of the paper will support this claim by showing that the approach can account for well-known problems of focus alternative proposals. Of course, we cannot discuss all the puzzles of the meaning of ‘only’ in one paper. We have, therefore, decided to concentrate on two well-known problems that concern pragmatic properties of ‘only’. A closer discussion of the many semantic issues ‘only’ raises has to wait for another occasion. In section 4 we will deal with the question what part of the meaning of ‘only’ belongs to its semantics and what part has to be attributed to pragmatic considerations. The next section deals with the relevance dependence of ‘only’. Finally, in section 6 we will argue that we should account for the inference from ‘Only φ’ to φ as a conversational implicature. This part strongly builds on a proposal made in Schulz (to appear) and van Rooij & Schulz (2004). 
We will see that this Gricean", "title": "" }, { "docid": "1bcf510af5f1d70cb0d9d5e700f76f06", "text": "1 Research Assistant, Department of Accountancy and Corporate Finance, Ghent University, Belgium Mailing address: Department of Accountancy and Corporate Finance, Ghent University, Kuiperskaai 55 E, B-9000 Gent, Belgium; Tel: +32 (0)9/264 35 07; Fax: +32 (0)9/264 35 77; E-mail: Sofie.Balcaen@ugent.be 2 Professor, Department of Accountancy and Corporate Finance, Ghent University, Belgium; Ernst & Young Chair of Growth Management, Vlerick Leuven Gent Management School, Belgium.", "title": "" }, { "docid": "3bb48e5bf7cc87d635ab4958553ef153", "text": "This paper presents an in-depth study of young Swedish consumers and their impulsive online buying behaviour for clothing. The aim of the study is to develop the understanding of what factors affect impulse buying of clothing online and what feelings emerge when buying online. The study carried out was exploratory in nature, aiming to develop an understanding of impulse buying behaviour online before, under and after the actual purchase. The empirical data was collected through personal interviews. In the study, a pattern of the consumers recurrent feelings are identified through the impulse buying process; escapism, pleasure, reward, scarcity, security and anticipation. The escapism is particularly occurring since the study revealed that the consumers often carried out impulse purchases when they initially were bored, as opposed to previous studies. 1 University of Borås, Swedish Institute for Innovative Retailing, School of Business and IT, Allégatan 1, S-501 90 Borås, Sweden. Phone: +46732305934 Mail: malin.sundstrom@hb.se", "title": "" }, { "docid": "f06a01dd29730e91e15d8c2e1d3b084a", "text": "Recent years have seen the development of a multitude of tools for the security analysis of Android applications. A major deficit of current fully automated security analyses, however, is their inability to drive execution to interesting parts, such as where code is dynamically loaded or certain data is decrypted. In fact, security-critical or downright offensive code may not be reached at all by such analyses when dynamically checked conditions are not met by the analysis environment. To tackle this unsolved problem, we propose a tool combining static call path analysis with byte code instrumentation and a heuristic partial symbolic execution, which aims at executing interesting calls paths. It can systematically locate potentially security-critical code sections and instrument applications such that execution of these sections can be observed in a dynamic analysis. Among other use cases, this can be leveraged to force applications into revealing dynamically loaded code, a simple yet effective way to circumvent detection by security analysis software such as the Google Play Store's Bouncer. We illustrate the functionality of our tool by means of a simple logic bomb example and a real-life security vulnerability which is present in hunderd of apps and can still be actively exploited at this time.", "title": "" }, { "docid": "b5b5d6c5768e40a343b672a33f9c3f0c", "text": "In this paper we describe Icarus, a cognitive architecture for physical agents that integrates ideas from a number of traditions, but that has been especially influenced by results from cognitive psychology. We review Icarus’ commitments to memories and representations, then present its basic processes for performance and learning. 
We illustrate the architecture’s behavior on a task from in-city driving that requires interaction among its various components. In addition, we discuss Icarus’ consistency with qualitative findings about the nature of human cognition. In closing, we consider the framework’s relation to other cognitive architectures that have been proposed in the literature. Introduction and Motivation A cognitive architecture (Newell, 1990) specifies the infrastructure for an intelligent system that remains constant across different domains and knowledge bases. This infrastructure includes a commitment to formalisms for representing knowledge, memories for storing this domain content, and processes that utilize and acquire the knowledge. Research on cognitive architectures has been closely tied to cognitive modeling, in that they often attempt to explain a wide range of human behavior and, at the very least, desire to support the same broad capabilities as human intelligence. In this paper we describe Icarus, a cognitive architecture that builds on previous work in this area but also has some novel features. Our aim is not to match quantitative data, but rather to reproduce qualitative characteristics of human behavior, and our discussion will focus on such issues. The best method for evaluating a cognitive architecture remains an open question, but it is clear that this should happen at the systems level rather than in terms of isolated phenomena. We will not claim that Icarus accounts for any one result better than other candidates, but we will argue that it models facets of the human cognitive architecture, and the ways they fit together, that have been downplayed by other researchers in this area. Copyright c © 2006, American Association for Artificial Intelligence (www.aaai.org). All rights reserved. A conventional paper on cognitive architectures would first describe the memories and their contents, then discuss the mechanisms that operate over them. However, Icarus’ processes interact with certain memories but not others, suggesting that we organize the text around these processes and the memories on which they depend. Moreover, some modules build on other components, which suggests a natural progression. Therefore, we first discuss Icarus’ most basic mechanism, conceptual inference, along with the memories it inspects and alters. After this, we present the processes for goal selection and skill execution, which operate over the results of inference. Finally, we consider the architecture’s module for problem solving, which builds on both inference and execution, and its associated learning processes, which operate over the results of problem solving. In each case, we discuss the framework’s connection to qualitative results from cognitive psychology. In addition, we illustrate the ideas with examples from the domain of in-city driving, which has played a central role in our research. Briefly, this involves controlling a vehicle in a simulated urban environment with buildings, road segments, street intersections, and other vehicles. This domain, which Langley and Choi (2006) describe at more length, provides a rich setting to study the interplay among different facets of cognition. Beliefs, Concepts, and Inference In order to carry out actions that achieve its goals, an agent must understand its current situation. Icarus includes a module for conceptual inference that is responsible for this cognitive task which operates by matching conceptual structures against percepts and beliefs. 
This process depends on the contents and representation of elements in short-term and long-term memory. Because Icarus is designed to support intelligent agents that operate in some external environment, it requires information about the state of its surroundings. To this end, it incorporates a perceptual buffer that describes aspects of the environment the agent perceives directly on a given cycle, after which it is updated. Each element or percept in this ephemeral memory corresponds to a particular object and specifies the object’s type, a unique name, and a set of attribute-value pairs that characterize the object on the current time step. Although one could create a stimulus-response agent that operates directly off perceptual information, its behavior would not reflect what we normally mean by the term ‘intelligent’, which requires higher-level cognition. Thus, Icarus also includes a belief memory that contains higher-level inferences about the agent’s situation. Whereas percepts describe attributes of specific objects, beliefs describe relations among objects, such as the relative positions of two buildings. Each element in this belief memory consists of a predicate and a set of symbolic arguments, each of which refers to some object, typically one that appears in the perceptual buffer. Icarus beliefs are instances of generalized concepts that reside in conceptual memory , which contains longterm structures that describe classes of environmental situations. The formalism that expresses these logical concepts is similar to that for Prolog clauses. Like beliefs, Icarus concepts are inherently symbolic and relational structures. Each clause in conceptual memory includes a head that gives the concept’s name and arguments, along with a body that states the conditions under which the clause should match against the contents of short-term memories. The architecture’s most basic activity is conceptual inference. On each cycle, the environmental simulator returns a set of perceived objects, including their types, names, and descriptions in the format described earlier. Icarus deposits this set of elements in the perceptual buffer, where they initiate matching against long-term conceptual definitions. The overall effect is that the system adds to its belief memory all elements that are implied deductively by these percepts and concept definitions. Icarus repeats this process on every cycle, so it constantly updates its beliefs about the environment. The inference module operates in a bottom-up, datadriven manner that starts from descriptions of perceived objects. The architecture matches these percepts against the bodies of primitive concept clauses and adds any supported beliefs (i.e., concept instances) to belief memory. These trigger matching against higher-level concept clauses, which in turn produces additional beliefs. The process continues until Icarus has added to memory all beliefs it can infer in this manner. Although this mechanism reasons over structures similar to Prolog clauses, its operation is closer to the elaboration process in the Soar architecture (Laird et al., 1987). For example, for the in-city driving domain, we provided Icarus with 41 conceptual clauses. On each cycle, the simulator deposits a variety of elements in the perceptual buffer, including percepts for the agent itself (self ), street segments (e.g., segment2), lane lines (e.g., line1), buildings, and other entities. 
Based on attributes of the object self and one of the segments, the architecture derives the primitive concept instance (in-segment self segment2). Similarly, from self and the object line1, it infers the belief (in-lane self line1). These two elements lead Icarus to deduce two nonprimitive beliefs, (centered-in-lane self segment2 line1) and (aligned-with-lane-in-segment self segment2 line1). Finally, from these two instances and another belief, (steering-wheel-straight self), the system draws an even higher-level inference, (driving-well-in-segment self segment2 line1). Other beliefs that encode relations among perceived entities also follow from the inference process. Icarus’ conceptual inference module incorporates a number of key ideas from the psychological literature: • Concepts are distinct cognitive entities that humans use to describe their environment and goals; moreover, they support both categorization and inference; • The great majority of human categories are grounded in perception, making reference to physical characteristics of objects they describe (Barsalou, 1999); • Many human concepts are relational in nature, in that they describe connections or interactions among objects or events (Kotovsky & Gentner, 1996); • Concepts are organized in a hierarchical manner, with complex categories being defined in terms of simpler structures. Icarus reflects each of these claims at the architectural level, which contrasts with most other architectures’ treatment of concepts and categorization. However, we will not claim our treatment is complete. Icarus currently models concepts as Boolean structures that match in an all-or-none manner, whereas human categories have a graded character (Rosch & Mervis, 1975). Also, retrieval occurs in a purely bottomup fashion, whereas human categorization and inference exhibits top-down priming effects. Both constitute important directions for extending the framework. Goals, Skills, and Execution We have seen that Icarus can utilize its conceptual knowledge to infer and update beliefs about its surroundings, but an intelligent agent must also take action in the environment. To this end, the architecture includes additional memories that concern goals the agent wants to achieve, skills the agent can execute to reach them, and intentions about which skills to pursue. These are linked by a performance mechanism that executes stored skills, thus changing the environment and, hopefully, taking the agent closer to its goals. In particular, Icarus incorporates a goal memory that contains the agent’s top-level objectives. A goal is some concept instance that the agent wants to satisfy. T", "title": "" }, { "docid": "034f6044eda34a00c64db60fb4144eb6", "text": "Motivation\nDiffusion-based network models are widely used for protein function prediction using protein network data and have been shown to outperform neighborhood-based and module-based methods. Recent studies have shown that integrating the hierarchical structure of the Gene Ontology (GO) data dramatically improves prediction accuracy. However, previous methods usually either used the GO hierarchy to refine the prediction results of multiple classifiers, or flattened the hierarchy into a function-function similarity kernel. No study has taken the GO hierarchy into account together with the protein network as a two-layer network model.\n\n\nResults\nWe first construct a Bi-relational graph (Birg) model comprised of both protein-protein association and function-function hierarchical networks. 
We then propose two diffusion-based methods, BirgRank and AptRank, both of which use PageRank to diffuse information on this two-layer graph model. BirgRank is a direct application of traditional PageRank with fixed decay parameters. In contrast, AptRank utilizes an adaptive diffusion mechanism to improve the performance of BirgRank. We evaluate the ability of both methods to predict protein function on yeast, fly and human protein datasets, and compare with four previous methods: GeneMANIA, TMC, ProteinRank and clusDCA. We design four different validation strategies: missing function prediction, de novo function prediction, guided function prediction and newly discovered function prediction to comprehensively evaluate predictability of all six methods. We find that both BirgRank and AptRank outperform the previous methods, especially in missing function prediction when using only 10% of the data for training.\n\n\nAvailability and Implementation\nThe MATLAB code is available at https://github.rcac.purdue.edu/mgribsko/aptrank .\n\n\nContact\ngribskov@purdue.edu.\n\n\nSupplementary information\nSupplementary data are available at Bioinformatics online.", "title": "" }, { "docid": "29734bed659764e167beac93c81ce0a7", "text": "Fashion classification encompasses the identification of clothing items in an image. The field has applications in social media, e-commerce, and criminal law. In our work, we focus on four tasks within the fashion classification umbrella: (1) multiclass classification of clothing type; (2) clothing attribute classification; (3) clothing retrieval of nearest neighbors; and (4) clothing object detection. We report accuracy measurements for clothing style classification (50.2%) and clothing attribute classification (74.5%) that outperform baselines in the literature for the associated datasets. We additionally report promising qualitative results for our clothing retrieval and clothing object detection tasks.", "title": "" }, { "docid": "10571a65808fb1253b6ad7f3a43c2e69", "text": "Many studies have shown evidence for syntactic priming during language production (e.g., Bock, 1986). It is often assumed that comprehension and production share similar mechanisms and that priming also occurs during comprehension (e.g., Pickering & Garrod, 2004). Research investigating priming during comprehension (e.g., Branigan, Pickering, & McLean, 2005; Scheepers & Crocker, 2004) has mainly focused on syntactic ambiguities that are very different from the meaning-equivalent structures used in production research. In two experiments, we investigated whether priming during comprehension occurs in ditransitive sentences similar to those used in production research. When the verb was repeated between prime and target, we observed a priming effect similar to that in production. However, we observed no evidence for priming when the verbs were different. Thus, priming during comprehension occurs for very similar structures as priming during production, but in contrast to production, the priming effect is completely lexically dependent.", "title": "" }, { "docid": "473885db83b1a84595be0ad9927c5d3e", "text": "It is well known that over-parametrized deep neural networks (DNNs) are an overly expressive class of functions that can memorize even random data with 100% training accuracy. This raises the question why they do not easily overfit real data. To answer this question, we study deep networks using Fourier analysis. 
We show that deep networks with finite weights (or trained for finite number of steps) are inherently biased towards representing smooth functions over the input space. Specifically, the magnitude of a particular frequency component (k) of deep ReLU network function decays at least as fast as O(k^{-2}), with width and depth helping polynomially and exponentially (respectively) in modeling higher frequencies. This shows for instance why DNNs cannot perfectly memorize peaky delta-like functions. We also show that DNNs can exploit the geometry of low dimensional data manifolds to approximate complex functions that exist along the manifold with simple functions when seen with respect to the input space. As a consequence, we find that all samples (including adversarial samples) classified by a network to belong to a certain class are connected by a path such that the prediction of the network along that path does not change. Finally we find that DNN parameters corresponding to functions with higher frequency components occupy a smaller volume in the parameter.", "title": "" } ]
scidocsrr
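A row like the one above, with one query, its relevant passages, and a larger pool of non-relevant passages, has the shape commonly used to train or evaluate passage-retrieval and reranking models. The function below is a small illustrative sketch, not an official preprocessing step for this dataset, showing one way (query, positive, negative) triples could be derived from a parsed row; the cap on negatives per positive is an arbitrary choice for the example.

```python
from typing import Dict, List, Tuple

def make_triples(row: Dict, max_negs_per_pos: int = 4) -> List[Tuple[str, str, str]]:
    """Build (query, positive_text, negative_text) triples from one parsed row.

    Assumes the row matches the schema listed at the top of this dump;
    max_negs_per_pos is an illustrative cap, not a dataset requirement.
    """
    query = row["query"]
    triples: List[Tuple[str, str, str]] = []
    for pos in row["positive_passages"]:
        for neg in row["negative_passages"][:max_negs_per_pos]:
            triples.append((query, pos["text"], neg["text"]))
    return triples

# With 1 positive passage and at least 4 negatives, this yields 4 triples.
```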
e89289505bece7a8c5ff3cfd0d094cac
A 4.2-W 10-GHz GaN MMIC Doherty Power Amplifier
[ { "docid": "b0c694eb683c9afb41242298fdd4cf63", "text": "We have demonstrated 8.5-11.5 GHz class-E MMIC high-power amplifiers (HPAs) with a peak power-added-efficiency (PAE) of 61% and drain efficiency (DE) of 70% with an output power of 3.7 W in a continuous-mode operation. At 5 W output power, PAE and DE of 58% and 67% are measured, respectively, which implies MMIC power density of 5 W/mm at Vds = 30 V. The peak gain is 11 dB, with an associated gain of 9 dB at the peak PAE. At an output power of 9 W, DE and PAE of 59% and 51 % were measured, respectively. In order to improve the linearity, we have designed and simulated X-band class-E MMIC PAs similar to a Doherty configuration. The Doherty-based class-E amplifiers show an excellent cancellation of a third-order intermodulation product (IM3), which improved the simulated two-tone linearity C/IM3 to >; 50 dBc.", "title": "" } ]
[ { "docid": "e4570b3894a333da2e2bf23bc90f6920", "text": "The malaria parasite's chloroquine resistance transporter (CRT) is an integral membrane protein localized to the parasite's acidic digestive vacuole. The function of CRT is not known and the protein was originally described as a transporter simply because it possesses 10 transmembrane domains. In wild-type (chloroquine-sensitive) parasites, chloroquine accumulates to high concentrations within the digestive vacuole and it is through interactions in this compartment that it exerts its antimalarial effect. Mutations in CRT can cause a decreased intravacuolar concentration of chloroquine and thereby confer chloroquine resistance. However, the mechanism by which they do so is not understood. In this paper we present the results of a detailed bioinformatic analysis that reveals that CRT is a member of a previously undefined family of proteins, falling within the drug/metabolite transporter superfamily. Comparisons between CRT and other members of the superfamily provide insight into the possible role of the protein and into the significance of the mutations associated with the chloroquine resistance phenotype. The protein is predicted to function as a dimer and to be oriented with its termini in the parasite cytosol. The key chloroquine-resistance-conferring mutation (K76T) is localized in a region of the protein implicated in substrate selectivity. The mutation is predicted to alter the selectivity of the protein such that it is able to transport the cationic (protonated) form of chloroquine down its steep concentration gradient, out of the acidic vacuole, and therefore away from its site of action.", "title": "" }, { "docid": "325e33bb763ed78b6b84deeb0b10453f", "text": "The present study was conducted to identify possible acoustic cues of sarcasm. Native English speakers produced a variety of simple utterances to convey four different attitudes: sarcasm, humour, sincerity, and neutrality. Following validation by a separate naı̈ve group of native English speakers, the recorded speech was subjected to acoustic analyses for the following features: mean fundamental frequency (F0), F0 standard deviation, F0 range, mean amplitude, amplitude range, speech rate, harmonics-to-noise ratio (HNR, to probe for voice quality changes), and one-third octave spectral values (to probe resonance changes). The results of analyses indicated that sarcasm was reliably characterized by a number of prosodic cues, although one acoustic feature appeared particularly robust in sarcastic utterances: overall reductions in mean F0 relative to all other target attitudes. Sarcasm was also reliably distinguished from sincerity by overall reductions in HNR and in F0 standard deviation. In certain linguistic contexts, sarcasm could be differentiated from sincerity and humour through changes in resonance and reductions in both speech rate and F0 range. Results also suggested a role of language used by speakers in conveying sarcasm and sincerity. It was concluded that sarcasm in speech can be characterized by a specific pattern of prosodic cues in addition to textual cues, and that these acoustic characteristics can be influenced by language used by the speaker. 2007 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "9c09b8504a4e8ae249314083f89e951e", "text": "Recently, social media sites like Facebook and Twitter have been severely criticized by policy makers, and media watchdog groups for allowing fake news stories to spread unchecked on their platforms. 
In response, these sites are encouraging their users to report any news story they encounter on the site, which they perceive as fake. Stories that are reported as fake by a large number of users are prioritized for fact checking by (human) experts at fact checking organizations like Snopes and PolitiFact. Thus, social media sites today are relying on their users' perceptions of the truthfulness of news stories to select stories to fact check.\n However, few studies have focused on understanding how users perceive truth in news stories, or how biases in their perceptions might affect current strategies to detect and label fake news stories. To this end, we present an in-depth analysis on users' perceptions of truth in news stories. Specifically, we analyze users' truth perception biases for 150 stories fact checked by Snopes. Based on their ground truth and the truth value perceived by users, we can classify the stories into four categories -- (i) C1: false stories perceived as false by most users, (ii) C2: true stories perceived as false by most users, (iii) C3: false stories perceived as true by most users, and (iv) C4: true stories perceived as true by most users.\n The stories that are likely to be reported (flagged) for fact checking are from the two classes C1 and C2 that have the lowest perceived truth levels. We argue that there is little to be gained by fact checking stories from C1 whose truth value is correctly perceived by most users. Although stories in C2 reveal the cynicality of users about true stories, social media sites presently do not explicitly mark them as true to resolve the confusion.\n On the contrary, stories in C3 are false stories, yet perceived as true by most users. Arguably, these stories are more damaging than C1 because the truth values of the the story in former situation is incorrectly perceived while truth values of the latter is correctly perceived. Nevertheless, the stories in C1 is likely to be fact checked with greater priority than the stories in C3! In fact, in today's social media sites, the higher the gullibility of users towards believing a false story, the less likely it is to be reported for fact checking.\n In summary, we make the following contributions in this work.\n 1. Methodological: We develop a novel method for assessing users' truth perceptions of news stories. We design a test for users to rapidly assess (i.e., at the rate of a few seconds per story) how truthful or untruthful the claims in a news story are. We then conduct our truth perception tests on-line and gather truth perceptions of 100 US-based Amazon Mechanical Turk workers for each story.\n 2. Empirical: Our exploratory analysis of users' truth perceptions reveal several interesting insights. For instance, (i) for many stories, the collective wisdom of the crowd (average truth rating) differs significantly from the actual truth of the story, i.e., wisdom of crowds is inaccurate, (ii) across different stories, we find evidence for both false positive perception bias (i.e., a gullible user perceiving the story to be more true than it is in reality) and false negative perception bias (i.e., a cynical user perceiving a story to be more false than it is in reality), and (iii) users' political ideologies influence their truth perceptions for the most controversial stories, it is frequently the result of users' political ideologies influencing their truth perceptions.\n 3. 
Practical: Based on our observations, we call for prioritizing stories to fact check in order to achieve the following three important goals: (i) Remove false news stories from circulation, (ii) Correct the misperception of the users, and (iii) Decrease the disagreement between different users' perceptions of truth.\n Finally, we provide strategies which utilize users' truth perceptions (and predictive analysis of their biases) to achieve the three goals stated above while prioritizing stories for fact checking. The full paper is available at: https://bit.ly/2T7raFO", "title": "" }, { "docid": "ff9ca485a07dca02434396eca0f0c94f", "text": "Clustering is a NP-hard problem that is used to find the relationship between patterns in a given set of patterns. It is an unsupervised technique that is applied to obtain the optimal cluster centers, especially in partitioned based clustering algorithms. On the other hand, cat swarm optimization (CSO) is a new metaheuristic algorithm that has been applied to solve various optimization problems and it provides better results in comparison to other similar types of algorithms. However, this algorithm suffers from diversity and local optima problems. To overcome these problems, we are proposing an improved version of the CSO algorithm by using opposition-based learning and the Cauchy mutation operator. We applied the opposition-based learning method to enhance the diversity of the CSO algorithm and we used the Cauchy mutation operator to prevent the CSO algorithm from trapping in local optima. The performance of our proposed algorithm was tested with several artificial and real datasets and compared with existing methods like K-means, particle swarm optimization, and CSO. The experimental results show the applicability of our proposed method.", "title": "" }, { "docid": "b733ffe2cf4e0ee19b07614075c091a8", "text": "BACKGROUND\nPENS is a rare neuro-cutaneous syndrome that has been recently described. It involves one or more congenital epidermal hamartomas of the papular epidermal nevus with \"skyline\" basal cell layer type (PENS) as well as non-specific neurological anomalies. Herein, we describe an original case in which the epidermal hamartomas are associated with autism spectrum disorder (ASD).\n\n\nPATIENTS AND METHODS\nA 6-year-old boy with a previous history of severe ASD was referred to us for asymptomatic pigmented congenital plaques on the forehead and occipital region. Clinical examination revealed a light brown verrucous mediofrontal plaque in the form of an inverted comma with a flat striated surface comprising coalescent polygonal papules, and a clinically similar round occipital plaque. Repeated biopsies revealed the presence of acanthotic epidermis covered with orthokeratotic hyperkeratosis with occasionally broadened epidermal crests and basal hyperpigmentation, pointing towards an anatomoclinical diagnosis of PENS.\n\n\nDISCUSSION\nA diagnosis of PENS hamartoma was made on the basis of the clinical characteristics and histopathological analysis of the skin lesions. This condition is defined clinically as coalescent polygonal papules with a flat or rough surface, a round or comma-like shape and light brown coloring. Histopathological examination showed the presence of a regular palisade \"skyline\" arrangement of basal cell epidermal nuclei which, while apparently pathognomonic, is neither a constant feature nor essential for diagnosis. 
Association of a PENS hamartoma and neurological disorders allows classification of PENS as a new keratinocytic epidermal hamartoma syndrome. The early neurological signs, of varying severity, are non-specific and include psychomotor retardation, learning difficulties, dyslexia, hyperactivity, attention deficit disorder and epilepsy. There have been no reports hitherto of the presence of ASD as observed in the case we present.\n\n\nCONCLUSION\nThis new case report of PENS confirms the autonomous nature of this neuro-cutaneous disorder associated with keratinocytic epidermal hamartoma syndromes.", "title": "" }, { "docid": "e724db907bb466c108b5322a2df073da", "text": "CRISPR/Cas9 is a versatile genome-editing technology that is widely used for studying the functionality of genetic elements, creating genetically modified organisms as well as preclinical research of genetic disorders. However, the high frequency of off-target activity (≥50%)-RGEN (RNA-guided endonuclease)-induced mutations at sites other than the intended on-target site-is one major concern, especially for therapeutic and clinical applications. Here, we review the basic mechanisms underlying off-target cutting in the CRISPR/Cas9 system, methods for detecting off-target mutations, and strategies for minimizing off-target cleavage. The improvement off-target specificity in the CRISPR/Cas9 system will provide solid genotype-phenotype correlations, and thus enable faithful interpretation of genome-editing data, which will certainly facilitate the basic and clinical application of this technology.", "title": "" }, { "docid": "edccb0babf1e6fe85bb1d7204ab0ea0a", "text": "OBJECTIVE\nControlled study of the long-term outcome of selective mutism (SM) in childhood.\n\n\nMETHOD\nA sample of 33 young adults with SM in childhood and two age- and gender-matched comparison groups were studied. The latter comprised 26 young adults with anxiety disorders in childhood (ANX) and 30 young adults with no psychiatric disorders during childhood. The three groups were compared with regard to psychiatric disorder in young adulthood by use of the Composite International Diagnostic Interview (CIDI). In addition, the effect of various predictors on outcome of SM was studied.\n\n\nRESULTS\nThe symptoms of SM improved considerably in the entire SM sample. However, both SM and ANX had significantly higher rates for phobic disorder and any psychiatric disorder than controls at outcome. Taciturnity in the family and, by trend, immigrant status and a severity indicator of SM had an impact on psychopathology and symptomatic outcome in young adulthood.\n\n\nCONCLUSION\nThis first controlled long-term outcome study of SM provides evidence of symptomatic improvement of SM in young adulthood. However, a high rate of phobic disorder at outcome points to the fact that SM may be regarded as an anxiety disorder variant.", "title": "" }, { "docid": "3e24de04f0b1892b27fc60bb8a405d0d", "text": "A power factor (PF) corrected single stage, two-switch isolated zeta converter is proposed for arc welding. This modified zeta converter is having two switches and two clamping diodes on the primary side of a high-frequency transformer. This, in turn, results in reduced switch stress. The proposed converter is designed to operate in a discontinuous inductor current mode (DICM) to achieve inherent PF correction at the utility. The DICM operation substantially reduces the complexity of the control and effectively regulates the output dc voltage. 
The proposed converter offers several features, such as inherent overload current limit and fast parametrical response, to the load and source voltage conditions. This, in turn, results in an improved performance in terms of power quality indices and an enhanced weld bead quality. The proposed modified zeta converter is designed and its performance is simulated in the MATLAB/Simulink environment. Simulated results are also verified experimentally on a developed prototype of the converter. The performance of the system is investigated in terms of its input PF, displacement PF, total harmonic distortion of ac mains current, voltage regulation, and robustness to prove its efficacy in overall performance.", "title": "" }, { "docid": "1c1830e8e5154566ed03972d300906db", "text": "Filicide is the killing of a child by his or her parent. Despite the disturbing nature of these crimes, a study of filicide classification can provide insight into their causes. Furthermore, a study of filicide classification provides information essential to accurate death certification. We report a rare case of familial filicide in which twin sisters both attempted to kill their respective children. We then suggest a detailed classification of filicide subtypes that provides a framework of motives and precipitating factors leading to filicide. We identify 16 subtypes of filicide, each of which is sufficiently characteristic to warrant a separate category. We describe in some detail the characteristic features of these subtypes. A knowledge of filicide subtypes contributes to interpretation of difficult cases. Furthermore, to protect potential child homicide victims, it is necessary to know how and why they are killed. Epidemiologic studies using filicide subtypes as their basis could provide information leading to strategies for prevention.", "title": "" }, { "docid": "1301030c091eeb23d43dd3bfa6763e77", "text": "A new system for web attack detection is presented. It follows the anomaly-based approach, therefore known and unknown attacks can be detected. The system relies on a XML file to classify the incoming requests as normal or anomalous. The XML file, which is built from only normal traffic, contains a description of the normal behavior of the target web application statistically characterized. Any request which deviates from the normal behavior is considered an attack. The system has been applied to protect a real web application. An increasing number of training requests have been used to train the system. Experiments show that when the XML file has enough information to closely characterize the normal behavior of the target web application, a very high detection rate is reached while the false alarm rate remains very low.", "title": "" }, { "docid": "7f7e7f7ddcbb4d98270c0ba50a3f7a25", "text": "Workflow management systems are traditionally centralized, creating a single point of failure and a scalability bottleneck. In collaboration with Cybermation, Inc., we have developed a content-based publish/subscribe platform, called PADRES, which is a distributed middleware platform with features inspired by the requirements of workflow management and business process execution. These features constitute original additions to publish/subscribe systems and include an expressive subscription language, composite subscription processing, a rulebased matching and routing mechanism, historc, query-based data access, and the support for the decentralized execution of business process specified in XML. 
PADRES constitutes the basis for the next generation of enterprise management systems developed by Cybermation, Inc., including business process automation, monitoring, and execution applications.", "title": "" }, { "docid": "9acb0fe31e4586349475cf52323ef0d6", "text": "Accurate and robust segmentation of small organs in wholebody MRI is difficult due to anatomical variation and class imbalance. Recent deep network based approaches have demonstrated promising performance on abdominal multi-organ segmentations. However, the performance on small organs is still suboptimal as these occupy only small regions of the whole-body volumes with unclear boundaries and variable shapes. A coarse-to-fine, hierarchical strategy is a common approach to alleviate this problem, however, this might miss useful contextual information. We propose a two-stage approach with weighting schemes based on auto-context and spatial atlas priors. Our experiments show that the proposed approach can boost the segmentation accuracy of multiple small organs in whole-body MRI scans.", "title": "" }, { "docid": "6318c9d0e62f1608c105b114c6395e6f", "text": "Myofascial pain associated with myofascial trigger points (MTrPs) is a common cause of nonarticular musculoskeletal pain. Although the presence of MTrPs can be determined by soft tissue palpation, little is known about the mechanisms and biochemical milieu associated with persistent muscle pain. A microanalytical system was developed to measure the in vivo biochemical milieu of muscle in near real time at the subnanogram level of concentration. The system includes a microdialysis needle capable of continuously collecting extremely small samples (approximately 0.5 microl) of physiological saline after exposure to the internal tissue milieu across a 105-microm-thick semi-permeable membrane. This membrane is positioned 200 microm from the tip of the needle and permits solutes of <75 kDa to diffuse across it. Three subjects were selected from each of three groups (total 9 subjects): normal (no neck pain, no MTrP); latent (no neck pain, MTrP present); active (neck pain, MTrP present). The microdialysis needle was inserted in a standardized location in the upper trapezius muscle. Due to the extremely small sample size collected by the microdialysis system, an established microanalytical laboratory, employing immunoaffinity capillary electrophoresis and capillary electrochromatography, performed analysis of selected analytes. Concentrations of protons, bradykinin, calcitonin gene-related peptide, substance P, tumor necrosis factor-alpha, interleukin-1beta, serotonin, and norepinephrine were found to be significantly higher in the active group than either of the other two groups (P < 0.01). pH was significantly lower in the active group than the other two groups (P < 0.03). In conclusion, the described microanalytical technique enables continuous sampling of extremely small quantities of substances directly from soft tissue, with minimal system perturbation and without harmful effects on subjects. The measured levels of analytes can be used to distinguish clinically distinct groups.", "title": "" }, { "docid": "2393fc67fdca6b98695d0940fba19ca3", "text": "Evaluation of network security is an essential step in securing any network. This evaluation can help security professionals in making optimal decisions about how to design security countermeasures, to choose between alternative security architectures, and to systematically modify security configurations in order to improve security. 
However, the security of a network depends on a number of dynamically changing factors such as emergence of new vulnerabilities and threats, policy structure and network traffic. Identifying, quantifying and validating these factors using security metrics is a major challenge in this area. In this paper, we propose a novel security metric framework that identifies and quantifies objectively the most significant security risk factors, which include existing vulnerabilities, historical trend of vulnerability of the remotely accessible services, prediction of potential vulnerabilities for any general network service and their estimated severity and finally policy resistance to attack propagation within the network. We then describe our rigorous validation experiments using real- life vulnerability data of the past 6 years from National Vulnerability Database (NVD) [10] to show the high accuracy and confidence of the proposed metrics. Some previous works have considered vulnerabilities using code analysis. However, as far as we know, this is the first work to study and analyze these metrics for network security evaluation using publicly available vulnerability information and security policy configuration.", "title": "" }, { "docid": "39271e70afb7ea1b1876b57dfab1d745", "text": "This study examined the patterns or mechanism for conflict resolution in traditional African societies with particular reference to Yoruba and Igbo societies in Nigeria and Pondo tribe in South Africa. The paper notes that conflict resolution in traditional African societies provides opportunity to interact with the parties concerned, it promotes consensus-building, social bridge reconstructions and enactment of order in the society. The paper submits further that the western world placed more emphasis on the judicial system presided over by council of elders, kings’ courts, peoples (open place)", "title": "" }, { "docid": "e646f83143a98e5a0b143cb30596d549", "text": "The difference in the performance characteristics of volatile (DRAM) and non-volatile storage devices (HDD/SSDs) influences the design of database management systems (DBMSs). The key assumption has always been that the latter is much slower than the former. This affects all aspects of a DBMS's runtime architecture. But the arrival of new non-volatile memory (NVM) storage that is almost as fast as DRAM with fine-grained read/writes invalidates these previous design choices.\n In this tutorial, we provide an outline on how to build a new DBMS given the changes to hardware landscape due to NVM. We survey recent developments in this area, and discuss the lessons learned from prior research on designing NVM database systems. We highlight a set of open research problems, and present ideas for solving some of them.", "title": "" }, { "docid": "35de3cc0aa21d20074b72d8b85c3a72f", "text": "Fetus-in-fetu (FIF) is a rare entity resulting from abnormal embryogenesis in diamniotic monochorionic twins, being first described by Johann Friedrich Meckel (1800s). This occurs when a vertebrate fetus is enclosed in a normally growing fetus. Clinical manifestations vary. Detection is most often in infancy, the oldest reported age being 47. We report the case of a 4-day-old girl who was referred postnatally following a prenatal fetal scan which had revealed the presence of a multi-loculated retroperitoneal mass lesion with calcifications within. A provisional radiological diagnosis of FIF was made. 
Elective laparotomy revealed a well encapsulated retroperitoneal mass containing among other structures a skull vault and rudimentary limb buds. Recovery was uneventful. Here we discuss the difference between FIF and teratomas, risks of non-operative therapy and the role of serology in surveillance and detection of malignant change.", "title": "" }, { "docid": "c1338abb3ddd4acb1ba7ed7ac0c4452c", "text": "Defect prediction models that are trained on class imbalanced datasets (i.e., the proportion of defective and clean modules is not equally represented) are highly susceptible to produce inaccurate prediction models. Prior research compares the impact of class rebalancing techniques on the performance of defect prediction models. Prior research efforts arrive at contradictory conclusions due to the use of different choice of datasets, classification techniques, and performance measures. Such contradictory conclusions make it hard to derive practical guidelines for whether class rebalancing techniques should be applied in the context of defect prediction models. In this paper, we investigate the impact of 4 popularly-used class rebalancing techniques on 10 commonly-used performance measures and the interpretation of defect prediction models. We also construct statistical models to better understand in which experimental design settings that class rebalancing techniques are beneficial for defect prediction models. Through a case study of 101 datasets that span across proprietary and open-source systems, we recommend that class rebalancing techniques are necessary when quality assurance teams wish to increase the completeness of identifying software defects (i.e., Recall). However, class rebalancing techniques should be avoided when interpreting defect prediction models. We also find that class rebalancing techniques do not impact the AUC measure. Hence, AUC should be used as a standard measure when comparing defect prediction models.", "title": "" }, { "docid": "71b25e3d37ad3a057a5759179403247e", "text": "BACKGROUND\nObesity is a major health problem in the United States and around the world. To date, relationships between obesity and aspects of the built environment have not been evaluated empirically at the individual level.\n\n\nOBJECTIVE\nTo evaluate the relationship between the built environment around each participant's place of residence and self-reported travel patterns (walking and time in a car), body mass index (BMI), and obesity for specific gender and ethnicity classifications.\n\n\nMETHODS\nBody Mass Index, minutes spent in a car, kilometers walked, age, income, educational attainment, and gender were derived through a travel survey of 10,878 participants in the Atlanta, Georgia region. Objective measures of land use mix, net residential density, and street connectivity were developed within a 1-kilometer network distance of each participant's place of residence. A cross-sectional design was used to associate urban form measures with obesity, BMI, and transportation-related activity when adjusting for sociodemographic covariates. Discrete analyses were conducted across gender and ethnicity. The data were collected between 2000 and 2002 and analysis was conducted in 2004.\n\n\nRESULTS\nLand-use mix had the strongest association with obesity (BMI >/= 30 kg/m(2)), with each quartile increase being associated with a 12.2% reduction in the likelihood of obesity across gender and ethnicity. 
Each additional hour spent in a car per day was associated with a 6% increase in the likelihood of obesity. Conversely, each additional kilometer walked per day was associated with a 4.8% reduction in the likelihood of obesity. As a continuous measure, BMI was significantly associated with urban form for white cohorts. Relationships among urban form, walk distance, and time in a car were stronger among white than black cohorts.\n\n\nCONCLUSIONS\nMeasures of the built environment and travel patterns are important predictors of obesity across gender and ethnicity, yet relationships among the built environment, travel patterns, and weight may vary across gender and ethnicity. Strategies to increase land-use mix and distance walked while reducing time in a car can be effective as health interventions.", "title": "" }, { "docid": "d031b76b0363a12c0141785ac875e6a4", "text": "In this paper, we consider a smart power infrastructure, where several subscribers share a common energy source. Each subscriber is equipped with an energy consumption controller (ECC) unit as part of its smart meter. Each smart meter is connected to not only the power grid but also a communication infrastructure such as a local area network. This allows two-way communication among smart meters. Considering the importance of energy pricing as an essential tool to develop efficient demand side management strategies, we propose a novel real-time pricing algorithm for the future smart grid. We focus on the interactions between the smart meters and the energy provider through the exchange of control messages which contain subscribers' energy consumption and the real-time price information. First, we analytically model the subscribers' preferences and their energy consumption patterns in form of carefully selected utility functions based on concepts from microeconomics. Second, we propose a distributed algorithm which automatically manages the interactions among the ECC units at the smart meters and the energy provider. The algorithm finds the optimal energy consumption levels for each subscriber to maximize the aggregate utility of all subscribers in the system in a fair and efficient fashion. Finally, we show that the energy provider can encourage some desirable consumption patterns among the subscribers by means of the proposed real-time pricing interactions. Simulation results confirm that the proposed distributed algorithm can potentially benefit both subscribers and the energy provider.", "title": "" } ]
scidocsrr
99cdb216e60bc17be1564c374d39ccd8
Comparing Performances of Big Data Stream Processing Platforms with RAM3S
[ { "docid": "f35d164bd1b19f984b10468c41f149e3", "text": "Recent technological advancements have led to a deluge of data from distinctive domains (e.g., health care and scientific sensors, user-generated data, Internet and financial companies, and supply chain systems) over the past two decades. The term big data was coined to capture the meaning of this emerging trend. In addition to its sheer volume, big data also exhibits other unique characteristics as compared with traditional data. For instance, big data is commonly unstructured and require more real-time analysis. This development calls for new system architectures for data acquisition, transmission, storage, and large-scale data processing mechanisms. In this paper, we present a literature survey and system tutorial for big data analytics platforms, aiming to provide an overall picture for nonexpert readers and instill a do-it-yourself spirit for advanced audiences to customize their own big-data solutions. First, we present the definition of big data and discuss big data challenges. Next, we present a systematic framework to decompose big data systems into four sequential modules, namely data generation, data acquisition, data storage, and data analytics. These four modules form a big data value chain. Following that, we present a detailed survey of numerous approaches and mechanisms from research and industry communities. In addition, we present the prevalent Hadoop framework for addressing big data challenges. Finally, we outline several evaluation benchmarks and potential research directions for big data systems.", "title": "" } ]
[ { "docid": "11a4536e40dde47e024d4fe7541b368c", "text": "Building Information Modeling (BIM) provides an integrated 3D environment to manage large-scale engineering projects. The Architecture, Engineering and Construction (AEC) industry explores 4D visualizations over these datasets for virtual construction planning. However, existing solutions lack adequate visual mechanisms to inspect the underlying schedule and make inconsistencies readily apparent. The goal of this paper is to apply best practices of information visualization to improve 4D analysis of construction plans. We first present a review of previous work that identifies common use cases and limitations. We then consulted with AEC professionals to specify the main design requirements for such applications. These guided the development of CasCADe, a novel 4D visualization system where task sequencing and spatio-temporal simultaneity are immediately apparent. This unique framework enables the combination of diverse analytical features to create an information-rich analysis environment. We also describe how engineering collaborators used CasCADe to review the real-world construction plans of an Oil & Gas process plant. The system made evident schedule uncertainties, identified work-space conflicts and helped analyze other constructability issues. The results and contributions of this paper suggest new avenues for future research in information visualization for the AEC industry.", "title": "" }, { "docid": "baeddccc34585796fec12659912a757e", "text": "Recurrent neural networks (RNNs) have shown success for many sequence-modeling tasks, but learning long-term dependencies from data remains difficult. This is often attributed to the vanishing gradient problem, which shows that gradient components relating a loss at time t to time t− τ tend to decay exponentially with τ . Long short-term memory (LSTM) and gated recurrent units (GRUs), the most widely-used RNN architectures, attempt to remedy this problem by making the decay’s base closer to 1. NARX RNNs1 take an orthogonal approach: by including direct connections, or delays, from the past, NARX RNNs make the decay’s exponent closer to 0. However, as introduced, NARX RNNs reduce the decay’s exponent only by a factor of nd, the number of delays, and simultaneously increase computation by this same factor. We introduce a new variant of NARX RNNs, called MIxed hiSTory RNNs, which addresses these drawbacks. We show that for τ ≤ 2nd−1, MIST RNNs reduce the decay’s worst-case exponent from τ/nd to log τ , while maintaining computational complexity that is similar to LSTM and GRUs. We compare MIST RNNs to simple RNNs, LSTM, and GRUs across 4 diverse tasks. MIST RNNs outperform all other methods in 2 cases, and in all cases are competitive.", "title": "" }, { "docid": "4cd7f19d0413f9bab1a2cda5a5b7a9a4", "text": "Web-based learning plays a vital role in the modern education system, where different technologies are being emerged to enhance this E-learning process. Therefore virtual and online laboratories are gaining popularity due to its easy implementation and accessibility worldwide. These types of virtual labs are useful where the setup of the actual laboratory is complicated due to several factors such as high machinery or hardware cost. This paper presents a very efficient method of building a model using JavaScript Web Graphics Library with HTML5 enabled and having controllable features inbuilt. 
This type of program is free from any web-browser plug-ins or applications and is also server independent. Proprietary software has always been a bottleneck in the development of such platforms. This approach rules out this issue and is easily applicable. Here the framework has been discussed and neatly elaborated with an example of a simplified robot configuration.", "title": "" }, { "docid": "9e310ac4876eee037e0d5c2a248f6f45", "text": "The self-balancing two-wheel chair (SBC) is an unconventional type of personal transportation vehicle. It has unstable dynamics and therefore requires a special control to stabilize and prevent it from falling and to ensure the possibility of speed control and steering by the rider. This paper discusses the dynamic modeling and controller design for the system. The model of SBC is based on analysis of the motions of the inverted pendulum on a mobile base complemented with equations of the wheel motion and motor dynamics. The proposed control design involves a multi-loop PID control. Experimental verification and prototype implementation are discussed.", "title": "" }, { "docid": "5233286436f0ecfde8e0e647e89b288f", "text": "Each employee’s performance is important in an organization. A way to motivate it is through the application of reinforcement theory, which was developed by B. F. Skinner. One of the most commonly used methods is positive reinforcement, in which one’s behavior is strengthened or increased based on consequences. This paper aims to review the impact of positive reinforcement on the performances of employees in organizations. It can be applied by utilizing extrinsic reward or intrinsic reward. Extrinsic rewards include salary, bonus and fringe benefit while intrinsic rewards are praise, encouragement and empowerment. By applying positive reinforcement in these factors, desired positive behaviors are encouraged and negative behaviors are eliminated. Financial and non-financial incentives have a positive relationship with the efficiency and effectiveness of staffs.", "title": "" }, { "docid": "6038975e7868b235f2b665ffbd249b68", "text": "Existing person re-identification benchmarks and methods mainly focus on matching cropped pedestrian images between queries and candidates. However, it is different from real-world scenarios where the annotations of pedestrian bounding boxes are unavailable and the target person needs to be searched from a gallery of whole scene images. To close the gap, we propose a new deep learning framework for person search. Instead of breaking it down into two separate tasks (pedestrian detection and person re-identification), we jointly handle both aspects in a single convolutional neural network. An Online Instance Matching (OIM) loss function is proposed to train the network effectively, which is scalable to datasets with numerous identities. To validate our approach, we collect and annotate a large-scale benchmark dataset for person search. It contains 18,184 images, 8,432 identities, and 96,143 pedestrian bounding boxes. Experiments show that our framework outperforms other separate approaches, and the proposed OIM loss function converges much faster and better than the conventional Softmax loss.", "title": "" }, { "docid": "301aee8363dffd7ae4c7ac2945a55842", "text": "This work studies the usage of the Deep Neural Network (DNN) Bottleneck (BN) features together with the traditional MFCC features in the task of i-vector-based speaker recognition.
We decouple the sufficient statistics extraction by using separate GMM models for frame alignment, and for statistics normalization and we analyze the usage of BN and MFCC features (and their concatenation) in the two stages. We also show the effect of using full-covariance GMM models, and, as a contrast, we compare the result to the recent DNN-alignment approach. On the NIST SRE2010, telephone condition, we show 60% relative gain over the traditional MFCC baseline for EER (and similar for the NIST DCF metrics), resulting in 0.94% EER.", "title": "" }, { "docid": "9b30a07edc14ed2d1132421d8f372cd2", "text": "Even when the role of a conversational agent is well known users persist in confronting them with Out-of-Domain input. This often results in inappropriate feedback, leaving the user unsatisfied. In this paper we explore the automatic creation/enrichment of conversational agents’ knowledge bases by taking advantage of natural language interactions present in the Web, such as movies subtitles. Thus, we introduce Filipe, a chatbot that answers users’ request by taking advantage of a corpus of turns obtained from movies subtitles (the Subtle corpus). Filipe is based on Say Something Smart, a tool responsible for indexing a corpus of turns and selecting the most appropriate answer, which we fully describe in this paper. Moreover, we show how this corpus of turns can help an existing conversational agent to answer Out-of-Domain interactions. A preliminary evaluation is also presented.", "title": "" }, { "docid": "b7c4d8b946ea6905a2f0da10e6dc9de6", "text": "We develop a broadband channel estimation algorithm for millimeter wave (mmWave) multiple input multiple output (MIMO) systems with few-bit analog-to-digital converters (ADCs). Our methodology exploits the joint sparsity of the mmWave MIMO channel in the angle and delay domains. We formulate the estimation problem as a noisy quantized compressed-sensing problem and solve it using efficient approximate message passing (AMP) algorithms. In particular, we model the angle-delay coefficients using a Bernoulli–Gaussian-mixture distribution with unknown parameters and use the expectation-maximization forms of the generalized AMP and vector AMP algorithms to simultaneously learn the distributional parameters and compute approximately minimum mean-squared error (MSE) estimates of the channel coefficients. We design a training sequence that allows fast, fast Fourier transform based implementation of these algorithms while minimizing peak-to-average power ratio at the transmitter, making our methods scale efficiently to large numbers of antenna elements and delays. We present the results of a detailed simulation study that compares our algorithms to several benchmarks. Our study investigates the effect of SNR, training length, training type, ADC resolution, and runtime on channel estimation MSE, mutual information, and achievable rate. It shows that, in a mmWave MIMO system, the methods we propose to exploit joint angle-delay sparsity allow 1-bit ADCs to perform comparably to infinite-bit ADCs at low SNR, and 4-bit ADCs to perform comparably to infinite-bit ADCs at medium SNR.", "title": "" }, { "docid": "bd06f693359bba90de59454f32581c9c", "text": "Digital business ecosystems are becoming an increasingly popular concept as an open environment for modeling and building interoperable system integration. Business organizations have realized the importance of using standards as a cost-effective method for accelerating business process integration. 
Small and medium size enterprise (SME) participation in global trade is increasing, however, digital transactions are still at a low level. Cloud integration is expected to offer a cost-effective business model to form an interoperable digital supply chain. By observing the integration models, we can identify the large potential of cloud services to accelerate integration. An industrial case study is conducted. This paper investigates and contributes new knowledge on a how top-down approach by using a digital business ecosystem framework enables business managers to define new user requirements and functionalities for system integration. Through analysis, we identify the current cap of integration design. Using the cloud clustering framework, we identify how the design affects cloud integration services.", "title": "" }, { "docid": "84c95e15ddff06200624822cc12fa51f", "text": "A growing body of research has recently been conducted on semantic textual similarity using a variety of neural network models. While recent research focuses on word-based representation for phrases, sentences and even paragraphs, this study considers an alternative approach based on character n-grams. We generate embeddings for character n-grams using a continuous-bag-of-n-grams neural network model. Three different sentence representations based on n-gram embeddings are considered. Results are reported for experiments with bigram, trigram and 4-gram embeddings on the STS Core dataset for SemEval-2016 Task 1.", "title": "" }, { "docid": "0a170051e72b58081ad27e71a3545bcf", "text": "Relational learning is becoming increasingly important in many areas of application. Here, we present a novel approach to relational learning based on the factorization of a three-way tensor. We show that unlike other tensor approaches, our method is able to perform collective learning via the latent components of the model and provide an efficient algorithm to compute the factorization. We substantiate our theoretical considerations regarding the collective learning capabilities of our model by the means of experiments on both a new dataset and a dataset commonly used in entity resolution. Furthermore, we show on common benchmark datasets that our approach achieves better or on-par results, if compared to current state-of-the-art relational learning solutions, while it is significantly faster to compute.", "title": "" }, { "docid": "60ec8f06cdd4bf7cb27565c6d576ff40", "text": "2.5D chips with TSV and interposer are becoming the most popular packaging method with great increased flexibility and integrated functionality. However, great challenges have been posed in the failure analysis process to precisely locate the failure point of each interconnection in ultra-small size. The electro-optic sampling (EOS) based pulsed Time-domain reflectometry (TDR) is a powerful tool for the 2.5D/3D package diagnostics with greatly increased I/O speed and density. The timing of peaks in the reflected waveform accurately reveals the faulty location. In this work, 2.5D chip with known open failure location has been analyzed by a EOS based TDR system.", "title": "" }, { "docid": "5ad696a08b236e200a96589780b2b06c", "text": "The need for increasing flexibility of industrial automation system products leads to the trend of shifting functional behavior from hardware solutions to software components. 
This trend causes an increasing complexity of software components and the need for comprehensive and automated testing approaches to ensure a required (high) quality level. Nevertheless, key tasks in software testing include identifying appropriate test cases that typically require a high effort for (a) test case generation/construction and (b) test case modification in case of requirements changes. Semi-automated derivation of test cases based on models, like UML, can support test case generation. In this paper we introduce an automated test case generation approach for industrial automation applications where the test cases are specified by UML state chart diagrams. In addition we present a prototype application of the presented approach for a sorting machine. Major results showed that state charts (a) can support efficient test case generation and (b) enable automated generation of test cases and code for industrial automation systems.", "title": "" }, { "docid": "e3853e259c3ae6739dcae3143e2074a8", "text": "A new reference collection of patent documents for training and testing automated categorization systems is established and described in detail. This collection is tailored for automating the attribution of international patent classification codes to patent applications and is made publicly available for future research work. We report the results of applying a variety of machine learning algorithms to the automated categorization of English-language patent documents. This procedure involves a complex hierarchical taxonomy, within which we classify documents into 114 classes and 451 subclasses. Several measures of categorization success are described and evaluated. We investigate how best to resolve the training problems related to the attribution of multiple classification codes to each patent document.", "title": "" }, { "docid": "7edd1ae4ec4bac9ed91e5e14326a694e", "text": "These days, educational institutions and organizations are generating huge amount of data, more than the people can read in their lifetime. It is not possible for a person to learn, understand, decode, and interpret to find valuable information. Data mining is one of the most popular method which can be used to identify hidden patterns from large databases. User can extract historical, hidden details, and previously unknown information, from large repositories by applying required mining techniques. There are two algorithms which can be used to classify and predict, such as supervised learning and unsupervised learning. Classification is a technique which performs an induction on current data (existing data) and predicts future class. The main objective of classification is to make an unknown class to known class by consulting its neighbor class. therefore it is called as supervised learning, it builds the classifier by consulting with the known class labels such as k-nearest neighbor algorithm (k-NN), Naïve Bayes (NB), support vector machine (SVM), decision tree. Clustering is an unsupervised learning that builds a model to group similar objects into categories without consulting a class label. The main objective of clustering is find the distance between objects like nearby and faraway based on their similarities and dissimilarities it groups the objects and detects outliers. In this paper Weka tool is used to analyze by applying preprocessing, classification on institutional academic result of under graduate students of computer science & engineering. 
Keywords— Weka, classifier, supervised learning,", "title": "" }, { "docid": "7c13ebe2897fc4870a152159cda62025", "text": "Tuberculosis (TB) remains a major health threat, killing nearly 2 million individuals around this globe, annually. The only vaccine, developed almost a century ago, provides limited protection only during childhood. After decades without the introduction of new antibiotics, several candidates are currently undergoing clinical investigation. Curing TB requires prolonged combination of chemotherapy with several drugs. Moreover, monitoring the success of therapy is questionable owing to the lack of reliable biomarkers. To substantially improve the situation, a detailed understanding of the cross-talk between human host and the pathogen Mycobacterium tuberculosis (Mtb) is vital. Principally, the enormous success of Mtb is based on three capacities: first, reprogramming of macrophages after primary infection/phagocytosis to prevent its own destruction; second, initiating the formation of well-organized granulomas, comprising different immune cells to create a confined environment for the host-pathogen standoff; third, the capability to shut down its own central metabolism, terminate replication, and thereby transit into a stage of dormancy rendering itself extremely resistant to host defense and drug treatment. Here, we review the molecular mechanisms underlying these processes, draw conclusions in a working model of mycobacterial dormancy, and highlight gaps in our understanding to be addressed in future research.", "title": "" }, { "docid": "36c11c29f6605f7c234e68ecba2a717a", "text": "BACKGROUND\nThe main purpose of this study was to identify factors that influence healthcare quality in the Iranian context.\n\n\nMETHODS\nExploratory in-depth individual and focus group interviews were conducted with 222 healthcare stakeholders including healthcare providers, managers, policy-makers, and payers to identify factors affecting the quality of healthcare services provided in Iranian healthcare organisations.\n\n\nRESULTS\nQuality in healthcare is a production of cooperation between the patient and the healthcare provider in a supportive environment. Personal factors of the provider and the patient, and factors pertaining to the healthcare organisation, healthcare system, and the broader environment affect healthcare service quality. Healthcare quality can be improved by supportive visionary leadership, proper planning, education and training, availability of resources, effective management of resources, employees and processes, and collaboration and cooperation among providers.\n\n\nCONCLUSION\nThis article contributes to healthcare theory and practice by developing a conceptual framework that provides policy-makers and managers a practical understanding of factors that affect healthcare service quality.", "title": "" }, { "docid": "a433ebaeeb5dc5b68976b3ecb770c0cd", "text": "1 abstract The importance of the inspection process has been magnified by the requirements of the modern manufacturing environment. In electronics mass-production manufacturing facilities, an attempt is often made to achieve 100 % quality assurance of all parts, subassemblies, and finished goods. A variety of approaches for automated visual inspection of printed circuits have been reported over the last two decades. In this survey, algorithms and techniques for the automated inspection of printed circuit boards are examined.
A classification tree for these algorithms is presented and the algorithms are grouped according to this classification. This survey concentrates mainly on image analysis and fault detection strategies; these also include the state-of-the-art techniques. A summary of the commercial PCB inspection systems is also presented. 2 Introduction Many important applications of vision are found in the manufacturing and defense industries. In particular, the areas in manufacturing where vision plays a major role are inspection, measurements, and some assembly tasks. The order among these topics closely reflects the manufacturing needs. In most mass-production manufacturing facilities, an attempt is made to achieve 100% quality assurance of all parts, subassemblies, and finished products. One of the most difficult tasks in this process is that of inspecting for visual appearance - an inspection that seeks to identify both functional and cosmetic defects. With the advances in computers (including high speed, large memory and low cost), image processing, pattern recognition, and artificial intelligence have resulted in better and cheaper equipment for industrial image analysis. This development has made the electronics industry active in applying automated visual inspection to manufacturing/fabricating processes that include printed circuit boards, IC chips, photomasks, etc. Nello [1] gives a summary of the machine vision inspection applications in electronics industry.", "title": "" }, { "docid": "9f5998ebc2457c330c29a10772d8ee87", "text": "Fuzzy hashing is a known technique that has been adopted to speed up malware analysis processes. However, hashing has not been fully implemented for malware detection because it can easily be evaded by applying a simple obfuscation technique such as packing. This challenge has limited the usage of hashing to triaging of the samples based on the percentage of similarity between the known and unknown. In this paper, we explore the different ways fuzzy hashing can be used to detect similarities in a file by investigating particular hashes of interest. Each hashing method produces independent but related interesting results which are presented herein. We further investigate combination techniques that can be used to improve the detection rates in hashing methods. Two such evidence combination theory based methods are applied in this work in order to propose a novel way of combining the results achieved from different hashing algorithms. This study focuses on file and section Ssdeep hashing, PeHash and Imphash techniques to calculate the similarity of the Portable Executable files. Our results show that the detection rates are improved when evidence combination techniques are used.", "title": "" } ]
scidocsrr
f739aca6dcc42816419fa73850d20acd
A discussion on the validation tests employed to compare human action recognition methods using the MSR Action3D dataset
[ { "docid": "8c70f1af7d3132ca31b0cf603b7c5939", "text": "Much of the existing work on action recognition combines simple features (e.g., joint angle trajectories, optical flow, spatio-temporal video features) with somewhat complex classifiers or dynamical models (e.g., kernel SVMs, HMMs, LDSs, deep belief networks). Although successful, these approaches represent an action with a set of parameters that usually do not have any physical meaning. As a consequence, such approaches do not provide any qualitative insight that relates an action to the actual motion of the body or its parts. For example, it is not necessarily the case that clapping can be correlated to hand motion or that walking can be correlated to a specific combination of motions from the feet, arms and body. In this paper, we propose a new representation of human actions called Sequence of the Most Informative Joints (SMIJ), which is extremely easy to interpret. At each time instant, we automatically select a few skeletal joints that are deemed to be the most informative for performing the current action. The selection of joints is based on highly interpretable measures such as the mean or variance of joint angles, maximum angular velocity of joints, etc. We then represent an action as a sequence of these most informative joints. Our experiments on multiple databases show that the proposed representation is very discriminative for the task of human action recognition and performs better than several state-of-the-art algorithms.", "title": "" }, { "docid": "c474df285da8106b211dc7fe62733423", "text": "In this paper, we propose an effective method to recognize human actions using 3D skeleton joints recovered from 3D depth data of RGBD cameras. We design a new action feature descriptor for action recognition based on differences of skeleton joints, i.e., EigenJoints which combine action information including static posture, motion property, and overall dynamics. Accumulated Motion Energy (AME) is then proposed to perform informative frame selection, which is able to remove noisy frames and reduce computational cost. We employ non-parametric Naïve-Bayes-Nearest-Neighbor (NBNN) to classify multiple actions. The experimental results on several challenging datasets demonstrate that our approach outperforms the state-of-the-art methods. In addition, we investigate how many frames are necessary for our method to perform classification in the scenario of online action recognition. We observe that the first 30% to 40% frames are sufficient to achieve comparable results to that using the entire video sequences on the MSR Action3D dataset.", "title": "" } ]
[ { "docid": "f5532b33092d22c97d1b6ebe69de051f", "text": "Automatic personality recognition is useful for many computational applications, including recommendation systems, dating websites, and adaptive dialogue systems. There have been numerous successful approaches to classify the “Big Five” personality traits from a speaker’s utterance, but these have largely relied on judgments of personality obtained from external raters listening to the utterances in isolation. This work instead classifies personality traits based on self-reported personality tests, which are more valid and more difficult to identify. Our approach, which uses lexical and acoustic-prosodic features, yields predictions that are between 6.4% and 19.2% more accurate than chance. This approach predicts Opennessto-Experience and Neuroticism most successfully, with less accurate recognition of Extroversion. We compare the performance of classification and regression techniques, and also explore predicting personality clusters.", "title": "" }, { "docid": "a42f7e9efc4c0e2d56107397f98b15f1", "text": "Recently, much advance has been made in image captioning, and an encoder-decoder framework has achieved outstanding performance for this task. In this paper, we propose an extension of the encoder-decoder framework by adding a component called guiding network. The guiding network models the attribute properties of input images, and its output is leveraged to compose the input of the decoder at each time step. The guiding network can be plugged into the current encoder-decoder framework and trained in an end-to-end manner. Hence, the guiding vector can be adaptively learned according to the signal from the decoder, making itself to embed information from both image and language. Additionally, discriminative supervision can be employed to further improve the quality of guidance. The advantages of our proposed approach are verified by experiments carried out on the MS COCO dataset.", "title": "" }, { "docid": "8140838d7ef17b3d6f6c042442de0f73", "text": "The two vascular systems of our body are the blood and lymphatic vasculature. Our understanding of the cellular and molecular processes controlling the development of the lymphatic vasculature has progressed significantly in the last decade. In mammals, this is a stepwise process that starts in the embryonic veins, where lymphatic EC (LEC) progenitors are initially specified. The differentiation and maturation of these progenitors continues as they bud from the veins to produce scattered primitive lymph sacs, from which most of the lymphatic vasculature is derived. Here, we summarize our current understanding of the key steps leading to the formation of a functional lymphatic vasculature.", "title": "" }, { "docid": "f0c334e0d626bd5be4e17f08049d573e", "text": "The cost efficiency and diversity of digital channels facilitate marketers’ frequent and interactive communication with their customers. Digital channels like the Internet, email, mobile phones and digital television offer new prospects to cultivate customer relationships. However, there are a few models explaining how digital marketing communication (DMC) works from a relationship marketing perspective, especially for cultivating customer loyalty. 
In this paper, we draw together previous research into an integrative conceptual model that explains how the key elements of DMC frequency and content of brand communication, personalization, and interactivity can lead to improved customer value, commitment, and loyalty.", "title": "" }, { "docid": "4e0735c47fba93e77bc33eee689ed03e", "text": "Word-of-mouth (WOM) has been recognized as one of the most influential resources of information transmission. However, conventional WOM communication is only effective within limited social contact boundaries. The advances of information technology and the emergence of online social network sites have changed the way information is transmitted and have transcended the traditional limitations of WOM. This paper describes online interpersonal influence or electronic word of mouth (eWOM) because it plays a significant role in consumer purchase decisions.", "title": "" }, { "docid": "63262d2a9abdca1d39e31d9937bb41cf", "text": "A structural model is presented for synthesizing binaural sound from a monaural source. The model produces well-controlled vertical as well as horizontal effects. The model is based on a simplified time-domain description of the physics of wave propagation and diffraction. The components of the model have a one-to-one correspondence with the physical sources of sound diffraction, delay, and reflection. The simplicity of the model permits efficient implementation in DSP hardware, and thus facilitates real-time operation. Additionally, the parameters in the model can be adjusted to fit a particular individual’s characteristics, thereby producing individualized head-related transfer functions. Experimental tests verify the perceptual effectiveness of the approach.", "title": "" }, { "docid": "6d0c4e7f69169b98484e9acc3c3ffdd9", "text": "Motion capture is a prevalent technique for capturing and analyzing human articulations. A common problem encountered in motion capture is that some marker positions are often missing due to occlusions or ambiguities. Most methods for completing missing markers may quickly become ineffective and produce unsatisfactory results when a significant portion of the markers are missing for extended periods of time. We propose a data-driven, piecewise linear modeling approach to missing marker estimation that is especially beneficial in this scenario. We model motion sequences of a training set with a hierarchy of low-dimensional local linear models characterized by the principal components. For a new sequence with missing markers, we use a pre-trained classifier to identify the most appropriate local linear model for each frame and then recover the missing markers by finding the least squares solutions based on the available marker positions and the principal components of the associated model. Our experimental results demonstrate that our method is efficient in recovering the full-body motion and is robust to heterogeneous motion data.", "title": "" }, { "docid": "924a9b5ff2a60a46ef3dfd8b40abb0fc", "text": "We extend the conceptual model developed by Amelinckx et al. (2008) by relating electronic reverse auction (ERA) project outcomes to ERA project satisfaction. We formulate hypotheses about the relationships among organizational and project antecedents, a set of financial, operational, and strategic ERA project outcomes, and ERA project satisfaction. We empirically test the extended model with a sample of 180 buying professionals from ERA project teams at large global companies. 
Our results show that operational and strategic outcomes are positively related to ERA project satisfaction, while price savings are not. We also find positive relationships between financial outcomes and project team expertise; operational outcomes and organizational commitment, cross-functional project team composition, and procedural fairness; and strategic outcomes and top management support, organizational commitment, and procedural fairness. An electronic reverse auction (ERA) is ''an online, real-time dynamic auction between a buying organization and a group of pre-qualified suppliers who compete against each other to win the business to supply goods or services that have clearly defined specifications for design, quantity, quality, delivery, and related terms and conditions. These suppliers compete by bidding against each other online over the Internet using specialized software by submitting successively lower priced bids during a scheduled time period'' (Beall et al. 2003). Over the past two decades, ERAs have been used in various industries (Beall et al. 2003, Ray et al. 2011, Wang et al. 2013). ERAs are increasingly popular among buying organizations, although their use sparks controversy and ethical concerns in the sourcing world (Charki et al. 2010). Indeed, the one-sided focus on price savings in ERAs is considered to be at odds with the benefits of long-term cooperative buyer–supplier relationships (Beall et al. 2003, Hunt et al. 2006). However, several researchers have declared that ERAs are here to stay, as they are relatively easy to install and use and have resulted in positive outcomes across a range of offerings and contexts (Beall et al. 2003, Hur et al. 2006). In prior research work on ERAs, Amelinckx et al. (2008) developed a conceptual model based on an extensive review of the electronic sourcing literature and exploratory research involving multiple case studies. The authors identified operational and strategic outcomes that buying organizations can obtain in ERAs, in addition to financial gains. Furthermore, the authors asserted that the different outcomes can be obtained jointly, through the implementation of important organizational and project antecedents, and as such alleviate …", "title": "" }, { "docid": "73edaa7319dcf225c081f29146bbb385", "text": "Sign language is a specific area of human gesture communication and a full-fledged complex language that is used by various deaf communities. In Bangladesh, there are many deaf and dumb people. It becomes very difficult for people who are unable to understand the sign language to communicate with them. In this case, an interpreter can help a lot. So it is desirable to make the computer understand the Bangladeshi sign language so that it can serve as an interpreter. In this paper, a Computer Vision-based Bangladeshi Sign Language Recognition System (BdSL) has been proposed. In this system, separate PCA (Principal Component Analysis) is used for Bengali Vowels and Bengali Numbers recognition. The system is tested for 6 Bengali Vowels and 10 Bengali Numbers.", "title": "" }, { "docid": "f1ef6a16c85a874250148d1863ce3756", "text": "In this paper, a triple band capacitive-fed circular patch antenna with arc-shaped slots is proposed for 1.575 GHz GPS and Wi-Fi 2.4/5.2 GHz communications on unmanned aerial vehicle (UAV) applications. In order to enhance the impedance bandwidth of the antenna, a double-layered geometry is applied in this design with a circular feeding disk placed between two layers.
The antenna covers 2380 - 2508 MHz and 5100 - 6030 MHz for full support of the Wi-Fi communication between UAV and ground base station. The foam-Duroid stacked geometry can further enhance the bandwidths for both GPS and Wi-Fi bands when compared to purely Duroid form. The simulation and measurement results are reported in this paper.", "title": "" }, { "docid": "4db29a3fd1f1101c3949d3270b15ef07", "text": "Human goal-directed action emerges from the interaction between stimulus-driven sensorimotor online systems and slower-working control systems that relate highly processed perceptual information to the construction of goal-related action plans. This distribution of labor requires the acquisition of enduring action representations; that is, of memory traces which capture the main characteristics of successful actions and their consequences. It is argued here that these traces provide the building blocks for off-line prospective action planning, which renders the search through stored action representations an essential part of action control. Hence, action planning requires cognitive search (through possible options) and might have led to the evolution of cognitive search routines that humans have learned to employ for other purposes as well, such as searching for perceptual events and through memory. Thus, what is commonly considered to represent different types of search operations may all have evolved from action planning and share the same characteristics. Evidence is discussed which suggests that all types of cognitive search—be it in searching for perceptual events, for suitable actions, or through memory—share the characteristic of following a fixed sequence of cognitive operations: divergent search followed by convergent search.", "title": "" }, { "docid": "cce477dd5efd3ecbabc57dfb237b72c9", "text": "In this paper we present BabelDomains, a unified resource which provides lexical items with information about domains of knowledge. We propose an automatic method that uses knowledge from various lexical resources, exploiting both distributional and graph-based clues, to accurately propagate domain information. We evaluate our methodology intrinsically on two lexical resources (WordNet and BabelNet), achieving a precision over 80% in both cases. Finally, we show the potential of BabelDomains in a supervised learning setting, clustering training data by domain for hypernym discovery.", "title": "" }, { "docid": "cbdbe103bcc85f76f9e6ac09eed8ea4c", "text": "Using the evidence collection and analysis methodology for Android devices proposed by Martini, Do and Choo (2015), we examined and analyzed seven popular Android cloud-based apps. Firstly, we analyzed each app in order to see what information could be obtained from their private app storage and SD card directories. We collated the information and used it to aid our investigation of each app’s database files and AccountManager data. To complete our understanding of the forensic artefacts stored by apps we analyzed, we performed further analysis on the apps to determine if the user’s authentication credentials could be collected for each app based on the information gained in the initial analysis stages.
The contributions of this research include a detailed description of artefacts, which are of general forensic interest, for each app analyzed.", "title": "" }, { "docid": "80c522a65fafb98886d1d3d848605e77", "text": "We propose a technique for producing ‘visual explanations’ for decisions from a large class of Convolutional Neural Network (CNN)-based models, making them more transparent. Our approach – Gradient-weighted Class Activation Mapping (Grad-CAM), uses the gradients of any target concept (say logits for ‘dog’ or even a caption), flowing into the final convolutional layer to produce a coarse localization map highlighting the important regions in the image for predicting the concept. Unlike previous approaches, Grad- CAM is applicable to a wide variety of CNN model-families: (1) CNNs with fully-connected layers (e.g. VGG), (2) CNNs used for structured outputs (e.g. captioning), (3) CNNs used in tasks with multi-modal inputs (e.g. visual question answering) or reinforcement learning, without architectural changes or re-training. We combine Grad-CAM with existing fine-grained visualizations to create a high-resolution class-discriminative visualization, Guided Grad-CAM, and apply it to image classification, image captioning, and visual question answering (VQA) models, including ResNet-based architectures. In the context of image classification models, our visualizations (a) lend insights into failure modes of these models (showing that seemingly unreasonable predictions have reasonable explanations), (b) outperform previous methods on the ILSVRC-15 weakly-supervised localization task, (c) are more faithful to the underlying model, and (d) help achieve model generalization by identifying dataset bias. For image captioning and VQA, our visualizations show even non-attention based models can localize inputs. Finally, we design and conduct human studies to measure if Grad-CAM explanations help users establish appropriate trust in predictions from deep networks and show that Grad-CAM helps untrained users successfully discern a ‘stronger’ deep network from a ‘weaker’ one even when both make identical predictions. Our code is available at https: //github.com/ramprs/grad-cam/ along with a demo on CloudCV [2] and video at youtu.be/COjUB9Izk6E.", "title": "" }, { "docid": "15a24d02f998f0b515e35ce4c66a6dc1", "text": "Nowadays chronic diseases are the leading cause of deaths in India. These diseases which include various ailments in the form of diabetes, stroke, cardiovascular diseases, mental health illness, cancers, and chronic lung diseases. Chronic diseases are the biggest challenge for India and these diseases are the main cause of hospitalization for elder people. People who have suffered from chronic diseases are needed to repeatedly monitor the vital signs periodically. The number of nurses in hospital is relative low compared to the number of patients in hospital, there may be a chance to miss to monitor any patient vital signs which may affect patient health. In this paper, real time monitoring vital signs of a patient is developed using wearable sensors. Without nurse help, patient know the vital signs from the sensors and the system stored the sensor value in the form of text document. By using data mining approaches, the system is trained for vital sign data. Patients give their text document to the system which in turn they know their health status without any nurse help. 
This system enables high risk patients to be timely checked and enhance the quality of a life of patients.", "title": "" }, { "docid": "84fe6840461b63a5ccf007450f0eeef8", "text": "The canonical Wnt cascade has emerged as a critical regulator of stem cells. In many tissues, activation of Wnt signalling has also been associated with cancer. This has raised the possibility that the tightly regulated self-renewal mediated by Wnt signalling in stem and progenitor cells is subverted in cancer cells to allow malignant proliferation. Insights gained from understanding how the Wnt pathway is integrally involved in both stem cell and cancer cell maintenance and growth in the intestinal, epidermal and haematopoietic systems may serve as a paradigm for understanding the dual nature of self-renewal signals.", "title": "" }, { "docid": "0d723c344ab5f99447f7ad2ff72c0455", "text": "The aim of this study was to determine the pattern of fixations during the performance of a well-learned task in a natural setting (making tea), and to classify the types of monitoring action that the eyes perform. We used a head-mounted eye-movement video camera, which provided a continuous view of the scene ahead, with a dot indicating foveal direction with an accuracy of about 1 deg. A second video camera recorded the subject's activities from across the room. The videos were linked and analysed frame by frame. Foveal direction was always close to the object being manipulated, and very few fixations were irrelevant to the task. The first object-related fixation typically led the first indication of manipulation by 0.56 s, and vision moved to the next object about 0.61 s before manipulation of the previous object was complete. Each object-related act that did not involve a waiting period lasted an average of 3.3 s and involved about 7 fixations. Roughly a third of all fixations on objects could be definitely identified with one of four monitoring functions: locating objects used later in the process, directing the hand or object in the hand to a new location, guiding the approach of one object to another (e.g. kettle and lid), and checking the state of some variable (e.g. water level). We conclude that although the actions of tea-making are 'automated' and proceed with little conscious involvement, the eyes closely monitor every step of the process. This type of unconscious attention must be a common phenomenon in everyday life.", "title": "" }, { "docid": "31d5e64bfc92d0987f17666841e6e648", "text": "BACKGROUND AND PURPOSE\nThe semiquantitative noncontrast CT Alberta Stroke Program Early CT Score (ASPECTS) and RAPID automated computed tomography (CT) perfusion (CTP) ischemic core volumetric measurements have been used to quantify infarct extent. We aim to determine the correlation between ASPECTS and CTP ischemic core, evaluate the variability of core volumes within ASPECTS strata, and assess the strength of their association with clinical outcomes.\n\n\nMETHODS\nReview of a prospective, single-center database of consecutive thrombectomies of middle cerebral or intracranial internal carotid artery occlusions with pretreatment CTP between September 2010 and September 2015. CTP was processed with RAPID software to identify ischemic core (relative cerebral blood flow<30% of normal tissue).\n\n\nRESULTS\nThree hundred and thirty-two patients fulfilled inclusion criteria. Median age was 66 years (55-75), median ASPECTS was 8 (7-9), whereas median CTP ischemic core was 11 cc (2-27). 
Median time from last normal to groin puncture was 5.8 hours (3.9-8.8), and 90-day modified Rankin scale score 0 to 2 was observed in 54%. The correlation between CTP ischemic core and ASPECTS was fair (R=-0.36; P<0.01). Twenty-six patients (8%) had ASPECTS <6 and CTP core ≤50 cc (37% had modified Rankin scale score 0-2, whereas 29% were deceased at 90 days). Conversely, 27 patients (8%) had CTP core >50 cc and ASPECTS ≥6 (29% had modified Rankin scale 0-2, whereas 21% were deceased at 90 days). Moderate correlations between ASPECTS and final infarct volume (R=-0.42; P<0.01) and between CTP ischemic core and final infarct volume (R=0.50; P<0.01) were observed; coefficients were not significantly influenced by the time from stroke onset to presentation. Multivariable regression indicated ASPECTS ≥6 (odds ratio 4.10; 95% confidence interval, 1.47-11.46; P=0.01) and CTP core ≤50 cc (odds ratio 3.86; 95% confidence interval, 1.22-12.15; P=0.02) independently and comparably predictive of good outcome.\n\n\nCONCLUSIONS\nThere is wide variability of CTP-derived core volumes within ASPECTS strata. Patient selection may be affected by the imaging selection method.", "title": "" }, { "docid": "353d9add247202dc1a31f69064c68c5c", "text": "Deep learning technologies, which are the key components of state-of-the-art Artificial Intelligence (AI) services, have shown great success in providing human-level capabilities for a variety of tasks, such as visual analysis, speech recognition, and natural language processing and etc. Building a production-level deep learning model is a non-trivial task, which requires a large amount of training data, powerful computing resources, and human expertises. Therefore, illegitimate reproducing, distribution, and the derivation of proprietary deep learning models can lead to copyright infringement and economic harm to model creators. Therefore, it is essential to devise a technique to protect the intellectual property of deep learning models and enable external verification of the model ownership.\n In this paper, we generalize the \"digital watermarking'' concept from multimedia ownership verification to deep neural network (DNNs) models. We investigate three DNN-applicable watermark generation algorithms, propose a watermark implanting approach to infuse watermark into deep learning models, and design a remote verification mechanism to determine the model ownership. By extending the intrinsic generalization and memorization capabilities of deep neural networks, we enable the models to learn specially crafted watermarks at training and activate with pre-specified predictions when observing the watermark patterns at inference. We evaluate our approach with two image recognition benchmark datasets. Our framework accurately (100%) and quickly verifies the ownership of all the remotely deployed deep learning models without affecting the model accuracy for normal input data. In addition, the embedded watermarks in DNN models are robust and resilient to different counter-watermark mechanisms, such as fine-tuning, parameter pruning, and model inversion attacks.", "title": "" } ]
scidocsrr
18637c14834f664424798541ff9e3d6b
Secure storage system and key technologies
[ { "docid": "21d84bd9ea7896892a3e69a707b03a6a", "text": "Tahoe is a system for secure, distributed storage. It uses capabilities for access control, cryptography for confidentiality and integrity, and erasure coding for fault-tolerance. It has been deployed in a commercial backup service and is currently operational. The implementation is Open Source.", "title": "" } ]
[ { "docid": "271f6291ab2c97b5e561cf06b9131f9d", "text": "Recently, substantial research effort has focused on how to apply CNNs or RNNs to better capture temporal patterns in videos, so as to improve the accuracy of video classification. In this paper, however, we show that temporal information, especially longer-term patterns, may not be necessary to achieve competitive results on common trimmed video classification datasets. We investigate the potential of a purely attention based local feature integration. Accounting for the characteristics of such features in video classification, we propose a local feature integration framework based on attention clusters, and introduce a shifting operation to capture more diverse signals. We carefully analyze and compare the effect of different attention mechanisms, cluster sizes, and the use of the shifting operation, and also investigate the combination of attention clusters for multimodal integration. We demonstrate the effectiveness of our framework on three real-world video classification datasets. Our model achieves competitive results across all of these. In particular, on the large-scale Kinetics dataset, our framework obtains an excellent single model accuracy of 79.4% in terms of the top-1 and 94.0% in terms of the top-5 accuracy on the validation set.", "title": "" }, { "docid": "c6ef33607a015c4187ac77b18d903a8a", "text": "OBJECTIVE\nA systematic review was conducted to identify effective intervention strategies for communication in individuals with Down syndrome.\n\n\nMETHODS\nWe updated and extended previous reviews by examining: (1) participant characteristics; (2) study characteristics; (3) characteristics of effective interventions (e.g., strategies and intensity); (4) whether interventions are tailored to the Down syndrome behavior phenotype; and (5) the effectiveness (i.e., percentage nonoverlapping data and Cohen's d) of interventions.\n\n\nRESULTS\nThirty-seven studies met inclusion criteria. The majority of studies used behavior analytic strategies and produced moderate gains in communication targets. Few interventions were tailored to the needs of the Down syndrome behavior phenotype.\n\n\nCONCLUSION\nThe results suggest that behavior analytic strategies are a promising approach, and future research should focus on replicating the effects of these interventions with greater methodological rigor.", "title": "" }, { "docid": "9244acef01812d757639bd4f09631c22", "text": "This paper describes the results of the first shared task on Multilingual Emoji Prediction, organized as part of SemEval 2018. Given the text of a tweet, the task consists of predicting the most likely emoji to be used along such tweet. Two subtasks were proposed, one for English and one for Spanish, and participants were allowed to submit a system run to one or both subtasks. In total, 49 teams participated in the English subtask and 22 teams submitted a system run to the Spanish subtask. Evaluation was carried out emoji-wise, and the final ranking was based on macro F-Score. Data and further information about this task can be found at https://competitions. codalab.org/competitions/17344.", "title": "" }, { "docid": "521699fc8fc841e8ac21be51370b439f", "text": "Scene understanding is an essential technique in semantic segmentation. Although there exist several datasets that can be used for semantic segmentation, they are mainly focused on semantic image segmentation with large deep neural networks. 
Therefore, these networks are not useful for real time applications, especially in autonomous driving systems. In order to solve this problem, we make two contributions to semantic segmentation task. The first contribution is that we introduce the semantic video dataset, the Highway Driving dataset, which is a densely annotated benchmark for a semantic video segmentation task. The Highway Driving dataset consists of 20 video sequences having a 30Hz frame rate, and every frame is densely annotated. Secondly, we propose a baseline algorithm that utilizes a temporal correlation. Together with our attempt to analyze the temporal correlation, we expect the Highway Driving dataset to encourage research on semantic video segmentation.", "title": "" }, { "docid": "255a707951238ace366ef1ea0df833fc", "text": "During the last decade, researchers have verified that clothing can provide information for gender recognition. However, before extracting features, it is necessary to segment the clothing region. We introduce a new clothes segmentation method based on the application of the GrabCut technique over a trixel mesh, obtaining very promising results for a close to real time system. Finally, the clothing features are combined with facial and head context information to outperform previous results in gender recognition with a public database.", "title": "" }, { "docid": "288383c6a6d382b6794448796803699f", "text": "A transresistance instrumentation amplifier (dual-input transresistance amplifier) was designed, and a prototype was fabricated and tested in a gamma-ray dosimeter. The circuit, explained in this letter, is a differential amplifier which is suitable for amplification of signals from current-source transducers. In the dosimeter application, the amplifier proved superior to a regular (single) transresistance amplifier, giving better temperature stability and better common-mode rejection.", "title": "" }, { "docid": "07817eb2722fb434b1b8565d936197cf", "text": "We recently have witnessed many ground-breaking results in machine learning and computer vision, generated by using deep convolutional neural networks (CNN). While the success mainly stems from the large volume of training data and the deep network architectures, the vector processing hardware (e.g. GPU) undisputedly plays a vital role in modern CNN implementations to support massive computation. Though much attention was paid in the extent literature to understand the algorithmic side of deep CNN, little research was dedicated to the vectorization for scaling up CNNs. In this paper, we studied the vectorization process of key building blocks in deep CNNs, in order to better understand and facilitate parallel implementation. Key steps in training and testing deep CNNs are abstracted as matrix and vector operators, upon which parallelism can be easily achieved. We developed and compared six implementations with various degrees of vectorization with which we illustrated the impact of vectorization on the speed of model training and testing. Besides, a unified CNN framework for both high-level and low-level vision tasks is provided, along with a vectorized Matlab implementation with state-of-the-art speed performance.", "title": "" }, { "docid": "c5dc7a1ff0a3db20232fdff9cfb65381", "text": "We replace the output layer of deep neural nets, typically the softmax function, by a novel interpolating function. And we propose end-to-end training and testing algorithms for this new architecture. 
Compared to classical neural nets with softmax function as output activation, the surrogate with interpolating function as output activation combines advantages of both deep and manifold learning. The new framework demonstrates the following major advantages: First, it is better applicable to the case with insufficient training data. Second, it significantly improves the generalization accuracy on a wide variety of networks. The algorithm is implemented in PyTorch, and the code is available at https://github.com/ BaoWangMath/DNN-DataDependentActivation.", "title": "" }, { "docid": "363a465d626fec38555563722ae92bb1", "text": "A novel reverse-conducting insulated-gate bipolar transistor (RC-IGBT) featuring an oxide trench placed between the n-collector and the p-collector and a floating p-region (p-float) sandwiched between the n-drift and n-collector is proposed. First, the new structure introduces a high-resistance collector short resistor at low current density, which leads to the suppression of the snapback effect. Second, the collector short resistance can be adjusted by varying the p-float length without increasing the collector cell length. Third, the p-float layer also acts as the base of the n-collector/p-float/n-drift transistor which can be activated and offers a low-resistance current path at high current densities, which contributes to the low on-state voltage of the integrated freewheeling diode and the fast turnoff. As simulations show, the proposed RC-IGBT shows snapback-free output characteristics and faster turnoff compared with the conventional RC-IGBT.", "title": "" }, { "docid": "e66ae650db7c4c75a88ee6cf1ea8694d", "text": "Traditional recommender systems minimize prediction error with respect to users' choices. Recent studies have shown that recommender systems have a positive effect on the provider's revenue.\n In this paper we show that by providing a set of recommendations different than the one perceived best according to user acceptance rate, the recommendation system can further increase the business' utility (e.g. revenue), without any significant drop in user satisfaction. Indeed, the recommendation system designer should have in mind both the user, whose taste we need to reveal, and the business, which wants to promote specific content.\n We performed a large body of experiments comparing a commercial state-of-the-art recommendation engine with a modified recommendation list, which takes into account the utility (or revenue) which the business obtains from each suggestion that is accepted by the user. We show that the modified recommendation list is more desirable for the business, as the end result gives the business a higher utility (or revenue). To study possible reduce in satisfaction by providing the user worse suggestions, we asked the users how they perceive the list of recommendation that they received. Differences in user satisfaction between the lists is negligible, and not statistically significant.\n We also uncover a phenomenon where movie consumers prefer watching and even paying for movies that they have already seen in the past than movies that are new to them.", "title": "" }, { "docid": "2b095980aaccd7d35d079260738279c5", "text": "Convolutional Neural Networks (CNNs) have achieved state-of-the-art performance when embedded in large vocabulary continuous speech recognition (LVCSR) systems due to its capability of modeling local correlations and reducing translational variations. 
In all previous related works for ASR, only up to two convolutional layers are employed. In light of the recent success of very deep CNNs in image classification, it is of interest to investigate the deep structure of CNNs for speech recognition in detail. In contrast to image classification, the dimensionality of the speech feature, the span size of input feature and the relationship between temporal and spectral domain are new factors to consider while designing very deep CNNs. In this work, very deep CNNs are introduced for LVCSR task, by extending depth of convolutional layers up to ten. The contribution of this work is two-fold: performance improvement of very deep CNNs is investigated under different configurations; further, a better way to perform convolution operations on temporal dimension is proposed. Experiments showed that very deep CNNs offer a 8-12% relative improvement over baseline DNN system, and a 4-7% relative improvement over baseline CNN system, evaluated on both a 15-hr Callhome and a 51-hr Switchboard LVCSR tasks.", "title": "" }, { "docid": "ce7175f868e2805e9e08e96a1c9738f4", "text": "The development of the Semantic Web, with machine-readable content, has the potential to revolutionize the World Wide Web and its use. In A Semantic Web Primer Grigoris Antoniou and Frank van Harmelen provide an introduction and guide to this emerging field, describing its key ideas, languages, and technologies. Suitable for use as a textbook or for self-study by professionals, the book concentrates on undergraduate-level fundamental concepts and techniques that will enable readers to proceed with building applications on their own and includes exercises, project descriptions, and annotated references to relevant online materials. A Semantic Web Primer is the only available book on the Semantic Web to include a systematic treatment of the different languages (XML, RDF, OWL, and rules) and technologies (explicit metadata, ontologies, and logic and inference) that are central to Semantic Web development. The book also examines such crucial related topics as ontology engineering and application scenarios. After an introductory chapter, topics covered in succeeding chapters include XML and related technologies that support semantic interoperability; RDF and RDF Schema, the standard data model for machine-processible semantics; and OWL, the W3C-approved standard for a Web ontology language that is more extensive than RDF Schema; rules, both monotonic and nonmonotonic, in the framework of the Semantic Web; selected application domains and how the Semantic Web would benefit them; the development of ontology-based systems; and current debates on key issues and predictions for the future.", "title": "" }, { "docid": "5c892e59bed54f149697dbdf4024fbd1", "text": "In this paper, an online tracking system has been developed to control the arm and head of a Nao robot using Kinect sensor. The main goal of this work is to achieve that the robot is able to follow the motion of a human user in real time to track. This objective has been achieved using a RGB-D camera (Kinect v2) and a Nao robot, which is a humanoid robot with 5 degree of freedom (DOF) for each arm. The joint motions of the operator's head and arm in the real world captured by a Kinect camera can be transferred into the workspace mathematically via forward and inverse kinematics, realitically through data based UDP connection between the robot and Kinect sensor. 
The satisfactory performance of the proposed approaches have been achieved, which is shown in experimental results.", "title": "" }, { "docid": "1a8e346b6f2cd1c368f449f9a9474e5c", "text": "Fuzzing is the process of finding security vulnerabilities in input-processing code by repeatedly testing the code with modified inputs. In this paper, we formalize fuzzing as a reinforcement learning problem using the concept of Markov decision processes. This in turn allows us to apply state-of-the-art deep Q-learning algorithms that optimize rewards, which we define from runtime properties of the program under test. By observing the rewards caused by mutating with a specific set of actions performed on an initial program input, the fuzzing agent learns a policy that can next generate new higher-reward inputs. We have implemented this new approach, and preliminary empirical evidence shows that reinforcement fuzzing can outperform baseline random fuzzing.", "title": "" }, { "docid": "b127e63ac45c81ce9fa9aa6240ce5154", "text": "This paper examines the use of social learning platforms in conjunction with the emergent pedagogy of the `flipped classroom'. In particular the attributes of the social learning platform “Edmodo” is considered alongside the changes in the way in which online learning environments are being implemented, especially within British education. Some observations are made regarding the use and usefulness of these platforms along with a consideration of the increasingly decentralized nature of education in the United Kingdom.", "title": "" }, { "docid": "fd69e05a9be607381c4b8cd69d758f41", "text": "The increase in electronically mediated self-servic e technologies in the banking industry has impacted on the way banks service consumers. Despit e a large body of research on electronic banking channels, no study has been undertaken to e xplor the fit between electronic banking channels and banking tasks. Nor has there been rese a ch into how the ‘task-channel fit’ and other factors impact on consumers’ intention to use elect ronic banking channels. This paper proposes a theoretical model addressing these gaps. An explora tory study was first conducted, investigating industry experts’ perceptions towards the concept o f ‘task-channel fit’ and its relationship to other electronic banking channel variables. The findings demonstrated that the concept was perceived as being highly relevant by bank managers. A resear ch model was then developed drawing on the existing literature. To evaluate the research mode l quantitatively, a survey will be developed and validated, administered to a sample of consumers, a nd the resulting data used to test both measurement and structural aspects of the research model.", "title": "" }, { "docid": "92cecd8329343bc3a9b0e46e2185eb1c", "text": "The spondylo and spondylometaphyseal dysplasias (SMDs) are characterized by vertebral changes and metaphyseal abnormalities of the tubular bones, which produce a phenotypic spectrum of disorders from the mild autosomal-dominant brachyolmia to SMD Kozlowski to autosomal-dominant metatropic dysplasia. Investigations have recently drawn on the similar radiographic features of those conditions to define a new family of skeletal dysplasias caused by mutations in the transient receptor potential cation channel vanilloid 4 (TRPV4). 
This review demonstrates the significance of radiography in the discovery of a new bone dysplasia family due to mutations in a single gene.", "title": "" }, { "docid": "bd9f584e7dbc715327b791e20cd20aa9", "text": "We discuss learning a profile of user interests for recommending information sources such as Web pages or news articles. We describe the types of information available to determine whether to recommend a particular page to a particular user. This information includes the content of the page, the ratings of the user on other pages and the contents of these pages, the ratings given to that page by other users and the ratings of these other users on other pages and demographic information about users. We describe how each type of information may be used individually and then discuss an approach to combining recommendations from multiple sources. We illustrate each approach and the combined approach in the context of recommending restaurants.", "title": "" }, { "docid": "fa42192f3ffd08332e35b98019e622ff", "text": "Human immunodeficiency virus 1 (HIV-1) and other retroviruses synthesize a DNA copy of their genome after entry into the host cell. Integration of this DNA into the host cell's genome is an essential step in the viral replication cycle. The viral DNA is synthesized in the cytoplasm and is associated with viral and cellular proteins in a large nucleoprotein complex. Before integration into the host genome can occur, this complex must be transported to the nucleus and must cross the nuclear envelope. This Review summarizes our current knowledge of how this journey is accomplished.", "title": "" }, { "docid": "939b2faa63e24c0f303b823481682c4c", "text": "Visual motion perception is fundamental to many aspects of visual perception. Visual motion perception has long been associated with the dorsal (parietal) pathway and the involvement of the ventral 'form' (temporal) visual pathway has not been considered critical for normal motion perception. Here, we evaluated this view by examining whether circumscribed damage to ventral visual cortex impaired motion perception. The perception of motion in basic, non-form tasks (motion coherence and motion detection) and complex structure-from-motion, for a wide range of motion speeds, all centrally displayed, was assessed in five patients with a circumscribed lesion to either the right or left ventral visual pathway. Patients with a right, but not with a left, ventral visual lesion displayed widespread impairments in central motion perception even for non-form motion, for both slow and for fast speeds, and this held true independent of the integrity of areas MT/V5, V3A or parietal regions. In contrast with the traditional view in which only the dorsal visual stream is critical for motion perception, these novel findings implicate a more distributed circuit in which the integrity of the right ventral visual pathway is also necessary even for the perception of non-form motion.", "title": "" } ]
scidocsrr
fc15c7921e0abe34c8a123cf78699293
The Basic AI Drives
[ { "docid": "15004021346a3c79924733bfc38bbe82", "text": "Self-improving systems are a promising new approach to developing artificial intelligence. But will their behavior be predictable? Can we be sure that they will behave as we intended even after many generations of selfimprovement? This paper presents a framework for answering questions like these. It shows that self-improvement causes systems to converge on an", "title": "" } ]
[ { "docid": "0e218dd5654ae9125d40bdd5c0a326d6", "text": "Dynamic data race detection incurs heavy runtime overheads. Recently, many sampling techniques have been proposed to detect data races. However, some sampling techniques (e.g., Pacer) are based on traditional happens-before relation and incur a large basic overhead. Others utilize hardware to reduce their sampling overhead (e.g., DataCollider) and they, however, detect a race only when the race really occurs by delaying program executions. In this paper, we study the limitations of existing techniques and propose a new data race definition, named as Clock Races, for low overhead sampling purpose. The innovation of clock races is that the detection of them does not rely on concrete locks and also avoids heavy basic overhead from tracking happens-before relation. We further propose CRSampler (Clock Race Sampler) to detect clock races via hardware based sampling without directly delaying program executions, to further reduce runtime overhead. We evaluated CRSampler on Dacapo benchmarks. The results show that CRSampler incurred less than 5% overhead on average at 1% sampling rate. Whereas, Pacer and DataCollider incurred larger than 25% and 96% overhead, respectively. Besides, at the same sampling rate, CRSampler detected significantly more data races than that by Pacer and DataCollider.", "title": "" }, { "docid": "6c0f3240b86677a0850600bf68e21740", "text": "In this article, we revisit two popular convolutional neural networks in person re-identification (re-ID): verification and identification models. The two models have their respective advantages and limitations due to different loss functions. Here, we shed light on how to combine the two models to learn more discriminative pedestrian descriptors. Specifically, we propose a Siamese network that simultaneously computes the identification loss and verification loss. Given a pair of training images, the network predicts the identities of the two input images and whether they belong to the same identity. Our network learns a discriminative embedding and a similarity measurement at the same time, thus taking full usage of the re-ID annotations. Our method can be easily applied on different pretrained networks. Albeit simple, the learned embedding improves the state-of-the-art performance on two public person re-ID benchmarks. Further, we show that our architecture can also be applied to image retrieval. The code is available at https://github.com/layumi/2016_person_re-ID.", "title": "" }, { "docid": "40714e8b4c58666e4044789ffe344493", "text": "The paper presents a novel calibration method for fisheye lens. Five parameters, which fully reflect characters of fisheye lens, are proposed. Linear displacement platform is used to acquire precise sliding displacement between the target image and fisheye lens. Laser calibration method is designed to obtain the precise value of optical center. A convenient method, which is used to calculate the virtual focus of the fisheye lens, is proposed. To verify the result, indoor environment is built up to measure the localization error of omni-directional robot. Image including landmarks is acquired by fisheye lens and delivered to DSP (Digital Signal Processor) to futher process. 
Error analysis to localization of omni-directional robot is showed in the conclusion.", "title": "" }, { "docid": "b0356ab3a4a3917386bfe928a68031f5", "text": "Even when Ss fail to recall a solicited target, they can provide feeling-of-knowing (FOK) judgments about its availability in memory. Most previous studies addressed the question of FOK accuracy, only a few examined how FOK itself is determined, and none asked how the processes assumed to underlie FOK also account for its accuracy. The present work examined all 3 questions within a unified model, with the aim of demystifying the FOK phenomenon. The model postulates that the computation of FOK is parasitic on the processes involved in attempting to retrieve the target, relying on the accessibility of pertinent information. It specifies the links between memory strength, accessibility of correct and incorrect information about the target, FOK judgments, and recognition memory. Evidence from 3 experiments is presented. The results challenge the view that FOK is based on a direct, privileged access to an internal monitor.", "title": "" }, { "docid": "495be81dda82d3e4d90a34b6716acf39", "text": "Botnets such as Conficker and Torpig utilize high entropy domains for fluxing and evasion. Bots may query a large number of domains, some of which may fail. In this paper, we present techniques where the failed domain queries (NXDOMAIN) may be utilized for: (i) Speeding up the present detection strategies which rely only on successful DNS domains. (ii) Detecting Command and Control (C&C) server addresses through features such as temporal correlation and information entropy of both successful and failed domains. We apply our technique to a Tier-1 ISP dataset obtained from South Asia, and a campus DNS trace, and thus validate our methods by detecting Conficker botnet IPs and other anomalies with a false positive rate as low as 0.02%. Our technique can be applied at the edge of an autonomous system for real-time detection.", "title": "" }, { "docid": "3dc4384744f2f85983bc58b0a8a241c6", "text": "OBJECTIVE\nTo define a map of interradicular spaces where miniscrew can be likely placed at a level covered by attached gingiva, and to assess if a correlation between crowding and availability of space exists.\n\n\nMETHODS\nPanoramic radiographs and digital models of 40 patients were selected according to the inclusion criteria. Interradicular spaces were measured on panoramic radiographs, while tooth size-arch length discrepancy was assessed on digital models. Statistical analysis was performed to evaluate if interradicular spaces are influenced by the presence of crowding.\n\n\nRESULTS\nIn the mandible, the most convenient sites for miniscrew insertion were in the spaces comprised between second molars and first premolars; in the maxilla, between first molars and second premolars as well as between canines and lateral incisors and between the two central incisors. The interradicular spaces between the maxillary canines and lateral incisors, and between mandibular first and second premolars revealed to be influenced by the presence of dental crowding.\n\n\nCONCLUSIONS\nThe average interradicular sites map hereby proposed can be used as a general guide for miniscrew insertion at the very beginning of orthodontic treatment planning. Then, the clinician should consider the amount of crowding: if this is large, the actual interradicular space in some areas might be significantly different from what reported on average. 
Individualized radiographs for every patient are still recommended.", "title": "" }, { "docid": "f161b9891e8b1a828b2a177c5f9e6761", "text": "This paper focuses on molten aluminum and aluminum alloy droplet generation for application to net-form manufacturing of structural components. The mechanism of droplet formation from capillary stream break-up provides the allure for use in net-form manufacturing due to the intrinsic uniformity of droplets generated under proper forcing conditions and the high rates at which they are generated. Additionally, droplet formation from capillary stream break-up allows the customization of droplet streams for a particular application. The current status of the technology under development is presented, and issues affecting the microstructure and the mechanical properties of the manufactured components are studied in an effort to establish a relationship between processing parameters and properties. ∗ Corresponding author Introduction High precision droplet-based net-form manufacturing of structural components is gaining considerable academic and industrial interest due to the promise of improved component quality resulting from rapid solidification processing and the economic benefits associated with fabricating a structural component in one integrated operation. A droplet based net-form manufacturing technique is under development at UCI which is termed Precision Droplet-Based Net-Form Manufacturing (PDM). The crux of the technique lies in the ability to generate highly uniform streams of molten metal droplets such as aluminum or aluminum alloys. Though virtually any Newtonian fluid that can be contained in a crucible is suitable for the technology, this work concentrates on the generation and deposition of molten aluminum alloy (2024) droplets that are generated and deposited in an inert environment. Figure 1 is a conceptual schematic of the current status of PDM. Droplets are generated from capillary stream break-up in an inert environment and are deposited onto a substrate whose motion is controlled by a programmable x-y table. In this way, tubes with circular, square, and triangular cross sections have been fabricated such as those illustrated in Figure 2. Tubes have been fabricated with heights as great as 11.0 cm. The surface morphology of the component is governed by the thermal conditions at the substrate. If we denote the solidified component and the substrate the \"effective substrate\", then the newly arriving droplets must have sufficient thermal energy to locally remelt a thin layer (with dimensions on the order of 10 microns or less) of the effective substrate. Remelting action of the previously deposited and solidified material will insure the removal of individual splat boundaries and result in a more homogeneous component. The thermal requirements for remelting have been studied analytically in reference [1]. It was shown in that work that there exists a minimum substrate temperature for a given droplet impingement temperature that results in remelting. The \"bump iness\" apparent in the circular cylinder shown in Figure 2 is due to the fact that the initial substrate temperature was insufficient to initiate the onset of remelting. As the component grows in height by successive droplet deliveries, the effective substrate temperature increases due to the fact that droplets are delivered at rates too high to allow cooling before the arrival of the next layer of droplets. 
Therefore, within the constraints of the current embodiment of the technology, there exists a certain height of the component for which remelting will occur. This height is demarcated at the location where the \"bumpiness\" is eliminated and relative\"smoothness\" prevails, as can be seen in the circular cylinder. As the component grows beyond this height, the remelting depth will continue to increase due to increased heating to the effective substrate. Hence, the component walls will thicken due to slower solidification rates. The objective of Eighth International Conference on Liquid Atomization and Spray Systems, Pasadena, CA, USA, July 2000 ongoing work (not presented here) is to identify the heat flux required for the minimum remelting of the effective substrate, and to develop processing conditions for which this heat flux seen by the substrate remains constant for each geometry desired. In this manner, the fidelity of the microstructure, mechanical properties, and geometry will remain intact. As is evident from Figure 1, the research presented in this work did not employ electrostatic charging and deflection. However, in the final realization of the technology, charging and deflection will be utilized in order to control the droplet density as a function of the component geometry, or to print fine details at high speed and at high precision. The charging and deflection of droplets bears many similarities to the technology of ink-jet printing, except that in the current application of PDM, large lateral areas are printed, thereby requiring significantly higher droplet charges than in ink-jet printing. The high charges on the closely spaced droplets result in mutual inter-droplet interactions that are not apparent in the application of ink-jet printing. Recent experimental and numerical results on the subject of droplet interactions due the application of high electrostatic charges are presented elsewhere [2]. Though not yet utilized for net-form manufacturing of structural components, droplet charging and deflection has been successfully applied to the \"printing\" of electronic components such as BGA's (Ball Grid Arrays). These results can be found in reference [3]. Research on controlled droplet formation from capillary stream break-up over the past decade has enabled ultra-precise charged droplet formation, deflection, and deposition that makes feasible many emerging applications in net-form manufacturing and electronic component fabrication [4-9]. Unlike the Drop-on-Demand mode of droplet formation, droplets can be generated at rates typically on the order of 10,000 to 20,000 droplets per second, from capillary stream break-up and can be electrostatically charged and deflected onto a substrate with a measured accuracy of ± 12.5 μm. Other net-form manufacturing technologies that rely on uniform droplet formation include 3D Printing (3DP) [10-12] and Shape Deposition Manufacturing (SDM) [13-15]. In 3DP, parts are manufactured by generating droplets of a binder material with the Drop-on-Demand mode of generation and depositing them onto selected areas of a layer of metal or ceramic powder. After the binder dries, the print bed is lowered and another layer of powder is spread in order to repeat the process. The process is repeated until the 3-D component is fabricated. Like PDM, the process of SDM relies on uniform generation of molten metal droplets. 
However the droplet generation technique is markedly different than droplet generation from capillary stream Figure 2: Examples of preliminary comp onents fabricated with PDM. The tall square tube shown horizontally is 11.0 cm. Figure 1: Conceptual schematic of cylinder fabrication on a flat-plate substrate with controlled droplet deposition. Molten droplet", "title": "" }, { "docid": "917c703c04ec76bd209c3b6f9e2b868d", "text": "Crowd simulation for virtual environments offers many challenges centered on the trade-offs between rich behavior, control and computational cost. In this paper we present a new approach to controlling the behavior of agents in a crowd. Our method is scalable in the sense that increasingly complex crowd behaviors can be created without a corresponding increase in the complexity of the agents. Our approach is also more authorable; users can dynamically specify which crowd behaviors happen in various parts of an environment. Finally, the character motion produced by our system is visually convincing. We achieve our aims with a situation-based control structure. Basic agents have very limited behaviors. As they enter new situations, additional, situation-specific behaviors are composed on the fly to enable agents to respond appropriately. The composition is done using a probabilistic mechanism. We demonstrate our system with three environments including a city street and a theater.", "title": "" }, { "docid": "7b717d6c4506befee2a374333055e2d1", "text": "This is the pre-acceptance version, to read the final version please go to IEEE Geoscience and Remote Sensing Magazine on IEEE XPlore. Standing at the paradigm shift towards data-intensive science, machine learning techniques are becoming increasingly important. In particular, as a major breakthrough in the field, deep learning has proven as an extremely powerful tool in many fields. Shall we embrace deep learning as the key to all? Or, should we resist a “black-box” solution? There are controversial opinions in the remote sensing community. In this article, we analyze the challenges of using deep learning for remote sensing data analysis, review the recent advances, and provide resources to make deep learning in remote sensing ridiculously simple to start with. More importantly, we advocate remote sensing scientists to bring their expertise into deep learning, and use it as an implicit general model to tackle unprecedented large-scale influential challenges, such as climate change and urbanization. X. Zhu and L. Mou are with the Remote Sensing Technology Institute (IMF), German Aerospace Center (DLR), Germany and with Signal Processing in Earth Observation (SiPEO), Technical University of Munich (TUM), Germany, E-mails: xiao.zhu@dlr.de; lichao.mou@dlr.de. D. Tuia was with the Department of Geography, University of Zurich, Switzerland. He is now with the Laboratory of GeoInformation Science and Remote Sensing, Wageningen University of Research, the Netherlands. E-mail: devis.tuia@wur.nl. G.-S Xia and L. Zhang are with the State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing (LIESMARS), Wuhan University. E-mail:guisong.xia@whu.edu.cn; zlp62@whu.edu.cn. F. Xu is with the Key Laboratory for Information Science of Electromagnetic Waves (MoE), Fudan Univeristy. E-mail: fengxu@fudan.edu.cn. F. Fraundorfer is with the Institute of Computer Graphics and Vision, TU Graz, Austria and with the Remote Sensing Technology Institute (IMF), German Aerospace Center (DLR), Germany. 
E-mail: fraundorfer@icg.tugraz.at. The work of X. Zhu and L. Mou are supported by the European Research Council (ERC) under the European Unions Horizon 2020 research and innovation programme (grant agreement No [ERC-2016-StG-714087], Acronym: So2Sat), Helmholtz Association under the framework of the Young Investigators Group “SiPEO” (VH-NG-1018, www.sipeo.bgu.tum.de) and China Scholarship Council. The work of D. Tuia is supported by the Swiss National Science Foundation (SNSF) under the project NO. PP0P2 150593. The work of G.-S. Xia and L. Zhang are supported by the National Natural Science Foundation of China (NSFC) projects with grant No. 41501462 and No. 41431175. The work of F. Xu are supported by the National Natural Science Foundation of China (NSFC) projects with grant No. 61571134. October 12, 2017 DRAFT ar X iv :1 71 0. 03 95 9v 1 [ cs .C V ] 1 1 O ct 2 01 7 IEEE GEOSCIENCE AND REMOTE SENSING MAGAZINE, IN PRESS. 2", "title": "" }, { "docid": "57c0db8c200b94baa28779ff4f47d630", "text": "The development of the Web services lets many users easily provide their opinions recently. Automatic summarization of enormous sentiments has been expected. Intuitively, we can summarize a review with traditional document summarization methods. However, such methods have not well-discussed “aspects”. Basically, a review consists of sentiments with various aspects. We summarize reviews for each aspect so that the summary presents information without biasing to a specific topic. In this paper, we propose a method for multiaspects review summarization based on evaluative sentence extraction. We handle three features; ratings of aspects, the tf -idf value, and the number of mentions with a similar topic. For estimating the number of mentions, we apply a clustering algorithm. By integrating these features, we generate a more appropriate summary. The experiment results show the effectiveness of our method.", "title": "" }, { "docid": "af8fbdfbc4c4958f69b3936ff2590767", "text": "Analysis of sedimentary diatom assemblages (10 to 144 ka) form the basis for a detailed reconstruction of the paleohydrography and diatom paleoecology of Lake Malawi. Lake-level fluctuations on the order of hundreds of meters were inferred from dramatic changes in the fossil and sedimentary archives. Many of the fossil diatom assemblages we observed have no analog in modern Lake Malawi. Cyclotelloid diatom species are a major component of fossil assemblages prior to 35 ka, but are not found in significant abundances in the modern diatom communities in Lake Malawi. Salinityand alkalinity-tolerant plankton has not been reported in the modern lake system, but frequently dominant fossil diatom assemblages prior to 85 ka. Large stephanodiscoid species that often dominate the plankton today are rarely present in the fossil record prior to 31 ka. Similarly, prior to 31 ka, common central-basin aulacoseiroid species are replaced by species found in the shallow, well-mixed southern basin. Surprisingly, tychoplankton and periphyton were not common throughout prolonged lowstands, but tended to increase in relative abundance during periods of inferred deeper-lake environments. A high-resolution lake level reconstruction was generated by a principle component analysis of fossil diatom and wetsieved fossil and mineralogical residue records. Prior to 70 ka, fossil assemblages suggest that the central basin was periodically a much shallower, more saline and/or alkaline, well-mixed environment. 
The most significant reconstructed lowstands are ~ 600 m below the modern lake level and span thousands of years. These conditions contrast starkly with the deep, dilute, dysaerobic environments of the modern central basin. After 70 ka, our reconstruction indicates sustained deeper-water environments were common, marked by a few brief, but significant, lowstands. High amplitude lake-level fluctuations appear related to changes in insolation. Seismic reflection data and additional sediment cores recovered from the northern basin of Lake Malawi provide evidence that supports our reconstruction.", "title": "" }, { "docid": "7003d59d401bce0f6764cc6aa25b5dd2", "text": "This paper presents a 13 bit 50 MS/s fully differential ring amplifier based SAR-assisted pipeline ADC, implemented in 65 nm CMOS. We introduce a new fully differential ring amplifier, which solves the problems of single-ended ring amplifiers while maintaining the benefits of high gain, fast slew based charging and an almost rail-to-rail output swing. We implement a switched-capacitor (SC) inter-stage residue amplifier that uses this new fully differential ring amplifier to give accurate amplification without calibration. In addition, a new floated detect-and-skip (FDAS) capacitive DAC (CDAC) switching method reduces the switching energy and improves linearity of first-stage CDAC. With these techniques, the prototype ADC achieves measured SNDR, SNR, and SFDR of 70.9 dB (11.5b), 71.3 dB and 84.6 dB, respectively, with a Nyquist frequency input. The prototype achieves 13 bit linearity without calibration and consumes 1 mW. This measured performance is equivalent to Walden and Schreier FoMs of 6.9 fJ/conversion ·step and 174.9 dB, respectively.", "title": "" }, { "docid": "9c183992492880d8b6e1a644e014a72f", "text": "Repeated measures analyses of variance are the method of choice in many studies from experimental psychology and the neurosciences. Data from these fields are often characterized by small sample sizes, high numbers of factor levels of the within-subjects factor(s), and nonnormally distributed response variables such as response times. For a design with a single within-subjects factor, we investigated Type I error control in univariate tests with corrected degrees of freedom, the multivariate approach, and a mixed-model (multilevel) approach (SAS PROC MIXED) with Kenward-Roger's adjusted degrees of freedom. We simulated multivariate normal and nonnormal distributions with varied population variance-covariance structures (spherical and nonspherical), sample sizes (N), and numbers of factor levels (K). For normally distributed data, as expected, the univariate approach with Huynh-Feldt correction controlled the Type I error rate with only very few exceptions, even if samples sizes as low as three were combined with high numbers of factor levels. The multivariate approach also controlled the Type I error rate, but it requires N ≥ K. PROC MIXED often showed acceptable control of the Type I error rate for normal data, but it also produced several liberal or conservative results. For nonnormal data, all of the procedures showed clear deviations from the nominal Type I error rate in many conditions, even for sample sizes greater than 50. Thus, none of these approaches can be considered robust if the response variable is nonnormally distributed. 
The results indicate that both the variance heterogeneity and covariance heterogeneity of the population covariance matrices affect the error rates.", "title": "" }, { "docid": "3a75cf54ace0ebb56b985e1452151a91", "text": "Ubiquitous networks support the roaming service for mobile communication devices. The mobile user can use the services in the foreign network with the help of the home network. Mutual authentication plays an important role in the roaming services, and researchers put their interests on the authentication schemes. Recently, in 2016, Gope and Hwang found that mutual authentication scheme of He et al. for global mobility networks had security disadvantages such as vulnerability to forgery attacks, unfair key agreement, and destitution of user anonymity. Then, they presented an improved scheme. However, we find that the scheme cannot resist the off-line guessing attack and the de-synchronization attack. Also, it lacks strong forward security. Moreover, the session key is known to HA in that scheme. To get over the weaknesses, we propose a new two-factor authentication scheme for global mobility networks. We use formal proof with random oracle model, formal verification with the tool Proverif, and informal analysis to demonstrate the security of the proposed scheme. Compared with some very recent schemes, our scheme is more applicable. Copyright © 2016 John Wiley & Sons, Ltd.", "title": "" }, { "docid": "52357eff7eda659bcf225d0ab70cb8d2", "text": "BACKGROUND\nFlexibility is an important physical quality. Self-myofascial release (SMFR) methods such as foam rolling (FR) increase flexibility acutely but how long such increases in range of motion (ROM) last is unclear. Static stretching (SS) also increases flexibility acutely and produces a cross-over effect to contralateral limbs. FR may also produce a cross-over effect to contralateral limbs but this has not yet been identified.\n\n\nPURPOSE\nTo explore the potential cross-over effect of SMFR by investigating the effects of a FR treatment on the ipsilateral limb of 3 bouts of 30 seconds on changes in ipsilateral and contralateral ankle DF ROM and to assess the time-course of those effects up to 20 minutes post-treatment.\n\n\nMETHODS\nA within- and between-subject design was carried out in a convenience sample of 26 subjects, allocated into FR (n=13) and control (CON, n=13) groups. Ankle DF ROM was recorded at baseline with the in-line weight-bearing lunge test for both ipsilateral and contralateral legs and at 0, 5, 10, 15, 20 minutes following either a two-minute seated rest (CON) or 3 3 30 seconds of FR of the plantar flexors of the dominant leg (FR). Repeated measures ANOVA was used to examine differences in ankle DF ROM.\n\n\nRESULTS\nNo significant between-group effect was seen following the intervention. However, a significant within-group effect (p<0.05) in the FR group was seen between baseline and all post-treatment time-points (0, 5, 10, 15 and 20 minutes). Significant within-group effects (p<0.05) were also seen in the ipsilateral leg between baseline and at all post-treatment time-points, and in the contralateral leg up to 10 minutes post-treatment, indicating the presence of a cross-over effect.\n\n\nCONCLUSIONS\nFR improves ankle DF ROM for at least 20 minutes in the ipsilateral limb and up to 10 minutes in the contralateral limb, indicating that FR produces a cross-over effect into the contralateral limb. 
The mechanism producing these cross-over effects is unclear but may involve increased stretch tolerance, as observed following SS.\n\n\nLEVELS OF EVIDENCE\n2c.", "title": "" }, { "docid": "605a078c74d37007654094b4b426ece8", "text": "Currently, blockchain technology, which is decentralized and may provide tamper-resistance to recorded data, is experiencing exponential growth in industry and research. In this paper, we propose the MIStore, a blockchain-based medical insurance storage system. Due to blockchain’s the property of tamper-resistance, MIStore may provide a high-credibility to users. In a basic instance of the system, there are a hospital, patient, insurance company and n servers. Specifically, the hospital performs a (t, n)-threshold MIStore protocol among the n servers. For the protocol, any node of the blockchain may join the protocol to be a server if the node and the hospital wish. Patient’s spending data is stored by the hospital in the blockchain and is protected by the n servers. Any t servers may help the insurance company to obtain a sum of a part of the patient’s spending data, which servers can perform homomorphic computations on. However, the n servers cannot learn anything from the patient’s spending data, which recorded in the blockchain, forever as long as more than n − t servers are honest. Besides, because most of verifications are performed by record-nodes and all related data is stored at the blockchain, thus the insurance company, servers and the hospital only need small memory and CPU. Finally, we deploy the MIStore on the Ethererum blockchain and give the corresponding performance evaluation.", "title": "" }, { "docid": "03a036bea8fac6b1dfa7d9a4783eef66", "text": "Face recognition from the real data, capture images, sensor images and database images is challenging problem due to the wide variation of face appearances, illumination effect and the complexity of the image background. Face recognition is one of the most effective and relevant applications of image processing and biometric systems. In this paper we are discussing the face recognition methods, algorithms proposed by many researchers using artificial neural networks (ANN) which have been used in the field of image processing and pattern recognition. How ANN will used for the face recognition system and how it is effective than another methods will also discuss in this paper. There are many ANN proposed methods which give overview face recognition using ANN. Therefore, this research includes a general review of face detection studies and systems which based on different ANN approaches and algorithms. The strengths and limitations of these literature studies and systems were included, and also the performance analysis of different ANN approach and algorithm is analysing in this research study.", "title": "" }, { "docid": "da2bc0813d4108606efef507e50100e3", "text": "Prediction is one of the most attractive aspects in data mining. Link prediction has recently attracted the attention of many researchers as an effective technique to be used in graph based models in general and in particular for social network analysis due to the recent popularity of the field. Link prediction helps to understand associations between nodes in social communities. Existing link prediction-related approaches described in the literature are limited to predict links that are anticipated to exist in the future. 
To the best of our knowledge, none of the previous works in this area has explored the prediction of links that could disappear in the future. We argue that the latter set of links is important to know about; such links are at least as important as, and complement, the positive link prediction process when planning for the future. In this paper, we propose a link prediction model which is capable of predicting both links that might exist and links that may disappear in the future. The model has been successfully applied in two different though closely related domains, namely health care and gene expression networks. The former application concentrates on physicians and their interactions while the second application covers genes and their interactions. We have tested our model using different classifiers and the reported results are encouraging. Finally, we compare our approach with the internal links approach and conclude that our approach performs very well in both bipartite and non-bipartite graphs.", "title": "" } ]
scidocsrr
bc772df5bd360e4dcaac189ee483a6b8
RGB-D object modelling for object recognition and tracking
[ { "docid": "d02af961d8780a06ae0162647603f8bb", "text": "We contribute an empirically derived noise model for the Kinect sensor. We systematically measure both lateral and axial noise distributions, as a function of both distance and angle of the Kinect to an observed surface. The derived noise model can be used to filter Kinect depth maps for a variety of applications. Our second contribution applies our derived noise model to the KinectFusion system to extend filtering, volumetric fusion, and pose estimation within the pipeline. Qualitative results show our method allows reconstruction of finer details and the ability to reconstruct smaller objects and thinner surfaces. Quantitative results also show our method improves pose estimation accuracy.", "title": "" } ]
[ { "docid": "02156199912027e9230b3c000bcbe87b", "text": "Voice conversion (VC) using sequence-to-sequence learning of context posterior probabilities is proposed. Conventional VC using shared context posterior probabilities predicts target speech parameters from the context posterior probabilities estimated from the source speech parameters. Although conventional VC can be built from non-parallel data, it is difficult to convert speaker individuality such as phonetic property and speaking rate contained in the posterior probabilities because the source posterior probabilities are directly used for predicting target speech parameters. In this work, we assume that the training data partly include parallel speech data and propose sequence-to-sequence learning between the source and target posterior probabilities. The conversion models perform non-linear and variable-length transformation from the source probability sequence to the target one. Further, we propose a joint training algorithm for the modules. In contrast to conventional VC, which separately trains the speech recognition that estimates posterior probabilities and the speech synthesis that predicts target speech parameters, our proposed method jointly trains these modules along with the proposed probability conversion modules. Experimental results demonstrate that our approach outperforms the conventional VC.", "title": "" }, { "docid": "ec6f53bd2cbc482c1450934b1fd9e463", "text": "Cloud computing providers have setup several data centers at different geographical locations over the Internet in order to optimally serve needs of their customers around the world. However, existing systems do not support mechanisms and policies for dynamically coordinating load distribution among different Cloud-based data centers in order to determine optimal location for hosting application services to achieve reasonable QoS levels. Further, the Cloud computing providers are unable to predict geographic distribution of users consuming their services, hence the load coordination must happen automatically, and distribution of services must change in response to changes in the load. To counter this problem, we advocate creation of federated Cloud computing environment (InterCloud) that facilitates just-in-time, opportunistic, and scalable provisioning of application services, consistently achieving QoS targets under variable workload, resource and network conditions. The overall goal is to create a computing environment that supports dynamic expansion or contraction of capabilities (VMs, services, storage, and database) for handling sudden variations in service demands. This paper presents vision, challenges, and architectural elements of InterCloud for utility-oriented federation of Cloud computing environments. The proposed InterCloud environment supports scaling of applications across multiple vendor clouds. We have validated our approach by conducting a set of rigorous performance evaluation study using the CloudSim toolkit. The results demonstrate that federated Cloud computing model has immense potential as it offers significant performance gains as regards to response time and cost saving under dynamic workload scenarios.", "title": "" }, { "docid": "42303331bf6713c1809468532c153693", "text": "................................................................................................................................................ 
V Table of", "title": "" }, { "docid": "57f1671f7b73f0b888f55a1f31a9f1a1", "text": "The ongoing high relevance of business intelligence (BI) for the management and competitiveness of organizations requires a continuous, transparent, and detailed assessment of existing BI solutions in the enterprise. This paper presents a BI maturity model (called biMM) that has been developed and refined over years. It is used for both, in surveys to determine the overall BI maturity in German speaking countries and for the individual assessment in organizations. A recently conducted survey shows that the current average BI maturity can be assigned to the third stage (out of five stages). Comparing future (planned) activities and current challenges allows the derivation of a BI research agenda. The need for action includes among others emphasizing BI specific organizational structures, such as the establishment of BI competence centers, a stronger focus on profitability, and improved effectiveness of the BI architecture.", "title": "" }, { "docid": "d679fb65265fb48cc53ae771b0f254af", "text": "This paper presents a tunable transmission line (t-line) structure, featuring independent control of line inductance and capacitance. The t-line provides variable delay while maintaining relatively constant characteristic impedance using direct digital control through FET switches. As an application of this original structure, a 60 GHz RF-phase shifter for phased-array applications is implemented in a 32 nm SOI process attaining state-of-the-art performance. Measured data from two phase shifter variants at 60 GHz showed phase changes of 175° and 185°, S21 losses of 3.5-7.1 dB and 6.1-7.6 dB, RMS phase errors of 2° and 3.2°, and areas of 0.073 mm2 and 0.099 mm2 respectively.", "title": "" }, { "docid": "5ca14c0581484f5618dd806a6f994a03", "text": "Many of existing criteria for evaluating Web sites quality require methods such as heuristic evaluations, or/and empirical usability tests. This paper aims at defining a quality model and a set of characteristics relating internal and external quality factors and giving clues about potential problems, which can be measured by automated tools. The first step in the quality assessment process is an automatic check of the source code, followed by manual evaluation, possibly supported by an appropriate user panel. As many existing tools can check sites (mainly considering accessibility issues), the general architecture will be based upon a conceptual model of the site/page, and the tools will export their output to a Quality Data Base, which is the basis for subsequent actions (checking, reporting test results, etc.).", "title": "" }, { "docid": "50603dae3b5131ba4e6d956d57402e10", "text": "Due to the spread of color laser printers to the general public, numerous forgeries are made by color laser printers. Printer identification is essential to preventing damage caused by color laser printed forgeries. This paper presents a new method to identify a color laser printer using photographed halftone images. First, we preprocess the photographed images to extract the halftone pattern regardless of the variation of the illumination conditions. Then, 15 halftone texture features are extracted from the preprocessed images. A support vector machine is used to be trained and classify the extracted features. Experiments are performed on seven color laser printers. 
The experimental results show that the proposed method is suitable for identifying the source color laser printer using photographed images.", "title": "" }, { "docid": "74f8127bc620fa1c9797d43dedea4d45", "text": "A novel system for long-term tracking of a human face in unconstrained videos is built on Tracking-Learning-Detection (TLD) approach. The system extends TLD with the concept of a generic detector and a validator which is designed for real-time face tracking resistent to occlusions and appearance changes. The off-line trained detector localizes frontal faces and the online trained validator decides which faces correspond to the tracked subject. Several strategies for building the validator during tracking are quantitatively evaluated. The system is validated on a sitcom episode (23 min.) and a surveillance (8 min.) video. In both cases the system detects-tracks the face and automatically learns a multi-view model from a single frontal example and an unlabeled video.", "title": "" }, { "docid": "a60752274fdae6687c713538215d0269", "text": "Some soluble phosphate salts, heavily used in agriculture as highly effective phosphorus (P) fertilizers, cause surface water eutrophication, while solid phosphates are less effective in supplying the nutrient P. In contrast, synthetic apatite nanoparticles could hypothetically supply sufficient P nutrients to crops but with less mobility in the environment and with less bioavailable P to algae in comparison to the soluble counterparts. Thus, a greenhouse experiment was conducted to assess the fertilizing effect of synthetic apatite nanoparticles on soybean (Glycine max). The particles, prepared using one-step wet chemical method, were spherical in shape with diameters of 15.8 ± 7.4 nm and the chemical composition was pure hydroxyapatite. The data show that application of the nanoparticles increased the growth rate and seed yield by 32.6% and 20.4%, respectively, compared to those of soybeans treated with a regular P fertilizer (Ca(H2PO4)2). Biomass productions were enhanced by 18.2% (above-ground) and 41.2% (below-ground). Using apatite nanoparticles as a new class of P fertilizer can potentially enhance agronomical yield and reduce risks of water eutrophication.", "title": "" }, { "docid": "149ffd270f39a330f4896c7d3aa290be", "text": "The pathogenesis underlining many neurodegenerative diseases remains incompletely understood. The lack of effective biomarkers and disease preventative medicine demands the development of new techniques to efficiently probe the mechanisms of disease and to detect early biomarkers predictive of disease onset. Raman spectroscopy is an established technique that allows the label-free fingerprinting and imaging of molecules based on their chemical constitution and structure. While analysis of isolated biological molecules has been widespread in the chemical community, applications of Raman spectroscopy to study clinically relevant biological species, disease pathogenesis, and diagnosis have been rapidly increasing since the past decade. The growing number of biomedical applications has shown the potential of Raman spectroscopy for detection of novel biomarkers that could enable the rapid and accurate screening of disease susceptibility and onset. Here we provide an overview of Raman spectroscopy and related techniques and their application to neurodegenerative diseases. We further discuss their potential utility in research, biomarker detection, and diagnosis. 
Challenges to routine use of Raman spectroscopy in the context of neuroscience research are also presented.", "title": "" }, { "docid": "9e0cbbe8d95298313fd929a7eb2bfea9", "text": "We compare two technological approaches to augmented reality for 3-D medical visualization: optical and video see-through devices. We provide a context to discuss the technology by reviewing several medical applications of augmented-reality re search efforts driven by real needs in the medical field, both in the United States and in Europe. We then discuss the issues for each approach, optical versus video, from both a technology and human-factor point of view. Finally, we point to potentially promising future developments of such devices including eye tracking and multifocus planes capabilities, as well as hybrid optical/video technology.", "title": "" }, { "docid": "bd91ef7524a262fb40083d3fb34f8d0e", "text": "Simulators have become an integral part of the computer architecture research and design process. Since they have the advantages of cost, time, and flexibility, architects use them to guide design space exploration and to quantify the efficacy of an enhancement. However, long simulation times and poor accuracy limit their effectiveness. To reduce the simulation time, architects have proposed several techniques that increase the simulation speed or throughput. To increase the accuracy, architects try to minimize the amount of error in their simulators and have proposed adding statistical rigor to their simulation methodology. Since a wide range of approaches exist and since many of them overlap, this paper describes, classifies, and compares them to aid the computer architect in selecting the most appropriate one.", "title": "" }, { "docid": "65d3d020ee63cdeb74cb3da159999635", "text": "We investigated the effects of format of an initial test and whether or not students received corrective feedback on that test on a final test of retention 3 days later. In Experiment 1, subjects studied four short journal papers. Immediately after reading each paper, they received either a multiple choice (MC) test, a short answer (SA) test, a list of statements to read, or a filler task. The MC test, SA test, and list of statements tapped identical facts from the studied material. No feedback was provided during the initial tests. On a final test 3 days later (consisting of MC and SA questions), having had an intervening MC test led to better performance than an intervening SA test, but the intervening MC condition did not differ significantly from the read statements condition. To better equate exposure to test-relevant information, corrective feedback during the initial tests was introduced in Experiment 2. With feedback provided, having had an intervening SA test led to the best performance on the final test, suggesting that the more demanding the retrieval processes engendered by the intervening test, the greater the benefit to final retention. The practical application of these findings is that regular SA quizzes with feedback may be more effective in enhancing student learning than repeated presentation of target facts or taking an MC quiz.", "title": "" }, { "docid": "c45d911aea9d06208a4ef273c9ab5ff3", "text": "A wide range of research has used face data to estimate a person's engagement, in applications from advertising to student learning. 
An interesting and important question not addressed in prior work is if face-based models of engagement are generalizable and context-free, or do engagement models depend on context and task. This research shows that context-sensitive face-based engagement models are more accurate, at least in the space of web-based tools for trauma recovery. Estimating engagement is important as various psychological studies indicate that engagement is a key component to measure the effectiveness of treatment and can be predictive of behavioral outcomes in many applications. In this paper, we analyze user engagement in a trauma-recovery regime during two separate modules/tasks: relaxation and triggers. The dataset comprises of 8M+ frames from multiple videos collected from 110 subjects, with engagement data coming from 800+ subject self-reports. We build an engagement prediction model as sequence learning from facial Action Units (AUs) using Long Short Term Memory (LSTMs). Our experiments demonstrate that engagement prediction is contextual and depends significantly on the allocated task. Models trained to predict engagement on one task are only weak predictors for another and are much less accurate than context-specific models. Further, we show the interplay of subject mood and engagement using a very short version of Profile of Mood States (POMS) to extend our LSTM model.", "title": "" }, { "docid": "accf1445bcf32b7e3c03443bf722a882", "text": "The Chua circuit is among the simplest non-linear circuits that shows most complex dynamical behavior, including chaos which exhibits a variety of bifurcation phenomena and attractors. In this paper, Chua attractor’s chaotic oscillator, synchronization and masking communication circuits were designed and simulated. The electronic circuit oscilloscope outputs of the realized Chua system is also presented. Simulation and oscilloscope outputs are used to illustrate the accuracy of the designed and realized Chua chaotic oscillator circuits. The Chua system is addressed suitable for chaotic synchronization circuits and chaotic masking communication circuits using Matlab® and MultiSIM® software. Simulation results are used to visualize and illustrate the effectiveness of Chua chaotic system in synchronization and application of secure communication.", "title": "" }, { "docid": "28f6751a043201fd8313944b4f79101f", "text": "FLLL 2 Preface This is a printed collection of the contents of the lecture \" Genetic Algorithms: Theory and Applications \" which I gave first in the winter semester 1999/2000 at the Johannes Kepler University in Linz. The reader should be aware that this manuscript is subject to further reconsideration and improvement. Corrections, complaints, and suggestions are cordially welcome. The sources were manifold: Chapters 1 and 2 were written originally for these lecture notes. All examples were implemented from scratch. The third chapter is a distillation of the books of Goldberg [13] and Hoffmann [15] and a handwritten manuscript of the preceding lecture on genetic algorithms which was given by Andreas Stöckl in 1993 at the Johannes Kepler University. Chapters 4, 5, and 7 contain recent adaptations of previously published material from my own master thesis and a series of lectures which was given by Francisco Herrera and myself at the Second Summer School on Advanced Control at the Slovak Technical University, Bratislava, in summer 1997 [4]. Chapter 6 was written originally, however, strongly influenced by A. Geyer-Schulz's works and H. 
Hörner's paper on his C++ GP kernel [18]. I would like to thank all the students attending the first GA lecture in Winter 1999/2000, for remaining loyal throughout the whole term and for contributing much to these lecture notes with their vivid, interesting, and stimulating questions, objections, and discussions. Last but not least, I want to express my sincere gratitude to Sabine Lumpi and Susanne Saminger for support in organizational matters, and Pe-ter Bauer for proofreading .", "title": "" }, { "docid": "1906aa92c26bb95b4cb79b4bfe7e362f", "text": "As Artificial Intelligence (AI) techniques become more powerful and easier to use they are increasingly deployed as key components of modern software systems. While this enables new functionality and often allows better adaptation to user needs it also creates additional problems for software engineers and exposes companies to new risks. Some work has been done to better understand the interaction between Software Engineering and AI but we lack methods to classify ways of applying AI in software systems and to analyse and understand the risks this poses. Only by doing so can we devise tools and solutions to help mitigate them. This paper presents the AI in SE Application Levels (AI-SEAL) taxonomy that categorises applications according to their point of application, the type of AI technology used and the automation level allowed. We show the usefulness of this taxonomy by classifying 15 papers from previous editions of the RAISE workshop. Results show that the taxonomy allows classification of distinct AI applications and provides insights concerning the risks associated with them. We argue that this will be important for companies in deciding how to apply AI in their software applications and to create strategies for its use.", "title": "" }, { "docid": "cac8f1df581628a7e64e779751fafaf0", "text": "The vast majority of Web services and sites are hosted in various kinds of cloud services, and ordering some level of quality of service (QoS) in such systems requires effective load-balancing policies that choose among multiple clouds. Recently, software-defined networking (SDN) is one of the most promising solutions for load balancing in cloud data center. SDN is characterized by its two distinguished features, including decoupling the control plane from the data plane and providing programmability for network application development. By using these technologies, SDN and cloud computing can improve cloud reliability, manageability, scalability and controllability. SDN-based cloud is a new type cloud in which SDN technology is used to acquire control on network infrastructure and to provide networking-as-a-service (NaaS) in cloud computing environments. In this paper, we introduce an SDN-enhanced Inter cloud Manager (S-ICM) that allocates network flows in the cloud environment. S-ICM consists of two main parts, monitoring and decision making. For monitoring, S-ICM uses SDN control message that observes and collects data, and decision-making is based on the measured network delay of packets. Measurements are used to compare S-ICM with a round robin (RR) allocation of jobs between clouds which spreads the workload equitably, and with a honeybee foraging algorithm (HFA). We see that S-ICM is better at avoiding system saturation than HFA and RR under heavy load formula using RR job scheduler. 
Measurements are also used to evaluate whether a simple queueing formula can be used to predict system performance for several clouds being operated under an RR scheduling policy, and show the validity of the theoretical approximation.", "title": "" }, { "docid": "3fda6dfa4aa7973725baa1dd9dc7f542", "text": "This paper presents a novel cognitive management architecture developed within the H2020 CogNet project to manage 5G networks. We also present the instantiation of this architecture for two Operator use cases, namely ‘SLA enforcement’ and ‘Mobile Quality Predictor’. The SLA enforcement use case tackles the SLA management with machine learning techniques, precisely, LSTM (Long Short Term Memory). The second use case, Mobile Quality Predictor, proposes a framework using machine learning to enable an accurate bandwidth prediction for each mobile subscriber in real-time. A problem statement, stakeholders, an instantiation of the cognitive management architecture, a related work as well as an evaluation results are presented for each use case.", "title": "" }, { "docid": "6a3210307c98b4311271c29da142b134", "text": "Accelerating innovation in renewable energy (RE) requires not just more finance, but finance servicing the entire innovation landscape. Given that finance is not ‘neutral’, more information is required on the quality of finance that meets technology and innovation stage-specific financing needs for the commercialization of RE technologies. We investigate the relationship between different financial actors with investment in different RE technologies. We construct a new deal-level dataset of global RE asset finance from 2004 to 2014 based on Bloomberg New Energy Finance data, that distinguishes 10 investor types (e.g. private banks, public banks, utilities) and 11 RE technologies into which they invest. We also construct a heuristic investment risk measure that varies with technology, time and country of investment. We find that particular investor types have preferences for particular risk levels, and hence particular types of RE. Some investor types invested into far riskier portfolios than others, and financing of individual high-risk technologies depended on investment by specific investor types. After the 2008 financial crisis, state-owned or controlled companies and banks emerged as the high-risk taking locomotives of RE asset finance. We use these preliminary results to formulate new questions for future RE policy, and encourage further research.", "title": "" } ]
scidocsrr
9af8e7dc3fea72d4cc8a202a17ebf31e
Personalization Method for Tourist Point of Interest (POI) Recommendation
[ { "docid": "bd9f584e7dbc715327b791e20cd20aa9", "text": "We discuss learning a profile of user interests for recommending information sources such as Web pages or news articles. We describe the types of information available to determine whether to recommend a particular page to a particular user. This information includes the content of the page, the ratings of the user on other pages and the contents of these pages, the ratings given to that page by other users and the ratings of these other users on other pages and demographic information about users. We describe how each type of information may be used individually and then discuss an approach to combining recommendations from multiple sources. We illustrate each approach and the combined approach in the context of recommending restaurants.", "title": "" } ]
[ { "docid": "f78d0dae400b331d6dcb4de9d10ca2f0", "text": "How ontologies provide the semantics, as explained here with the help of Harry Potter and his owl Hedwig.", "title": "" }, { "docid": "2579a6082d157d8b9940b3ca8084f741", "text": "In general, conventional Arbiter-based Physically Unclonable Functions (PUFs) generate responses with low unpredictability. The N-XOR Arbiter PUF, proposed in 2007, is a well-known technique for improving this unpredictability. In this paper, we propose a novel design for Arbiter PUF, called Double Arbiter PUF, to enhance the unpredictability on field programmable gate arrays (FPGAs), and we compare our design to conventional N-XOR Arbiter PUFs. One metric for judging the unpredictability of responses is to measure their tolerance to machine-learning attacks. Although our previous work showed the superiority of Double Arbiter PUFs regarding unpredictability, its details were not clarified. We evaluate the dependency on the number of training samples for machine learning, and we discuss the reason why Double Arbiter PUFs are more tolerant than the N-XOR Arbiter PUFs by evaluating intrachip variation. Further, the conventional Arbiter PUFs and proposed Double Arbiter PUFs are evaluated according to other metrics, namely, their uniqueness, randomness, and steadiness. We demonstrate that 3-1 Double Arbiter PUF archives the best performance overall.", "title": "" }, { "docid": "597b893e42df1bfba3d17b2d3ec31539", "text": "Genetic Programming (GP) is an evolutionary algorithm that has received a lot of attention lately due to its success in solving hard real-world problems. Lately, there has been considerable interest in GP's community to develop semantic genetic operators, i.e., operators that work on the phenotype. In this contribution, we describe EvoDAG (Evolving Directed Acyclic Graph) which is a Python library that implements a steady-state semantic Genetic Programming with tournament selection using an extension of our previous crossover operators based on orthogonal projections in the phenotype space. To show the effectiveness of EvoDAG, it is compared against state-of-the-art classifiers on different benchmark problems, experimental results indicate that EvoDAG is very competitive.", "title": "" }, { "docid": "ba291f7d938f73946969476fdc96f0df", "text": "Networking research often relies on simulation in order to test and evaluate new ideas. An important requirement of this process is that results must be reproducible so that other researchers can replicate, validate, and extend existing work. We look at the landscape of simulators for research in peer-to-peer (P2P) networks by conducting a survey of a combined total of over 280 papers from before and after 2007 (the year of the last survey in this area), and comment on the large quantity of research using bespoke, closed-source simulators. We propose a set of criteria that P2P simulators should meet, and poll the P2P research community for their agreement. We aim to drive the community towards performing their experiments on simulators that allow for others to validate their results.", "title": "" }, { "docid": "a9201c32c903eba5cc25a744134a1c3c", "text": "This paper proposes a new approach to sparsity, called the horseshoe estimator, which arises from a prior based on multivariate-normal scale mixtures. We describe the estimator’s advantages over existing approaches, including its robustness, adaptivity to different sparsity patterns and analytical tractability. 
We prove two theorems: one that characterizes the horseshoe estimator’s tail robustness and the other that demonstrates a super-efficient rate of convergence to the correct estimate of the sampling density in sparse situations. Finally, using both real and simulated data, we show that the horseshoe estimator corresponds quite closely to the answers obtained by Bayesian model averaging under a point-mass mixture prior.", "title": "" }, { "docid": "2c48dfb1ea7bc0defbe1643fa4708614", "text": "Text in natural images is an important source of information, which can be utilized for many real-world applications. This work focuses on a new problem: distinguishing images that contain text from a large volume of natural images. To address this problem, we propose a novel convolutional neural network variant, called Multi-scale Spatial Partition Network (MSP-Net). The network classifies images that contain text or not, by predicting text existence in all image blocks, which are spatial partitions at multiple scales on an input image. The whole image is classified as a text image (an image containing text) as long as one of the blocks is predicted to contain text. The network classifies images very efficiently by predicting all blocks simultaneously in a single forward propagation. Through experimental evaluations and comparisons on public datasets, we demonstrate the effectiveness and robustness of the proposed method.", "title": "" }, { "docid": "4bce72901777783578637fc6bfeb6267", "text": "This study examines the causal relationship between carbon dioxide emissions, electricity consumption and economic growth within a panel vector error correction model for five ASEAN countries over the period 1980 to 2006. The long-run estimates indicate that there is a statistically significant positive association between electricity consumption and emissions and a non-linear relationship between emissions and real output, consistent with the Environmental Kuznets Curve. The long-run estimates, however, do not indicate the direction of causality between the variables. The results from the Granger causality tests suggest that in the long-run there is unidirectional Granger causality running from electricity consumption and emissions to economic growth. The results also point to unidirectional Granger causality running from emissions to electricity consumption in the short-run.", "title": "" }, { "docid": "4468a8d7f01c1b3e6adcf316bdc34f81", "text": "Hyper-connected and digitized governments are increasingly advancing a vision of data-driven government as producers and consumers of big data in the big data ecosystem. Despite the growing interests in the potential power of big data, we found paucity of empirical research on big data use in government. This paper explores organizational capability challenges in transforming government through big data use. Using systematic literature review approach we developed initial framework for examining impacts of socio-political, strategic change, analytical, and technical capability challenges in enhancing public policy and service through big data. We then applied the framework to conduct case study research on two large-size city governments’ big data use. The findings indicate the framework’s usefulness, shedding new insights into the unique government context. 
Consequently, the framework was revised by adding big data public policy, political leadership structure, and organizational culture to further explain impacts of organizational capability challenges in transforming government.", "title": "" }, { "docid": "4f64b2b2b50de044c671e3d0d434f466", "text": "Optical flow estimation is one of the oldest and still most active research domains in computer vision. In 35 years, many methodological concepts have been introduced and have progressively improved performances , while opening the way to new challenges. In the last decade, the growing interest in evaluation benchmarks has stimulated a great amount of work. In this paper, we propose a survey of optical flow estimation classifying the main principles elaborated during this evolution, with a particular concern given to recent developments. It is conceived as a tutorial organizing in a comprehensive framework current approaches and practices. We give insights on the motivations, interests and limitations of modeling and optimization techniques, and we highlight similarities between methods to allow for a clear understanding of their behavior. Motion analysis is one of the main tasks of computer vision. From an applicative viewpoint, the information brought by the dynamical behavior of observed objects or by the movement of the camera itself is a decisive element for the interpretation of observed phenomena. The motion characterizations can be extremely variable among the large number of application domains. Indeed, one can be interested in tracking objects, quantifying deformations, retrieving dominant motion, detecting abnormal behaviors, and so on. The most low-level characterization is the estimation of a dense motion field, corresponding to the displacement of each pixel, which is called optical flow. Most high-level motion analysis tasks employ optical flow as a fundamental basis upon which more semantic interpretation is built. Optical flow estimation has given rise to a tremendous quantity of works for 35 years. If a certain continuity can be found since the seminal works of [120,170], a number of methodological innovations have progressively changed the field and improved performances. Evaluation benchmarks and applicative domains have followed this progress by proposing new challenges allowing methods to face more and more difficult situations in terms of motion discontinuities, large displacements, illumination changes or computational costs. Despite great advances, handling these issues in a unique method still remains an open problem. Comprehensive surveys of optical flow literature were carried out in the nineties [21,178,228]. More recently, reviewing works have focused on variational approaches [264], benchmark results [13], specific applications [115], or tutorials restricted to a certain subset of methods [177,260]. However, covering all the main estimation approaches and including recent developments in a comprehensive classification is still lacking in the optical flow field. This survey …", "title": "" }, { "docid": "7e557091d8cfe6209b1eda3b664ab551", "text": "With the increasing penetration of mobile phones, problematic use of mobile phone (PUMP) deserves attention. In this study, using a path model we examined the relationship between depression and PUMP, with motivations as mediators. 
Findings suggest that depressed people may rely on mobile phone to alleviate their negative feelings and spend more time on communication activities via mobile phone, which in turn can deteriorate into PUMP. However, face-to-face communication with others played a moderating role, weakening the link between use of mobile phone for communication activities and dete-", "title": "" }, { "docid": "5b1241edf4a9853614a18139323f74eb", "text": "This paper presents a W-band SPDT switch implemented using PIN diodes in a new 90 nm SiGe BiCMOS technology. The SPDT switch achieves a minimum insertion loss of 1.4 dB and an isolation of 22 dB at 95 GHz, with less than 2 dB insertion loss from 77-134 GHz, and greater than 20 dB isolation from 79-129 GHz. The input and output return losses are greater than 10 dB from 73-133 GHz. By reverse biasing the off-state PIN diodes, the P1dB is larger than +24 dBm. To the authors' best knowledge, these results demonstrate the lowest loss and highest power handling capability achieved by a W-band SPDT switch in any silicon-based technology reported to date.", "title": "" }, { "docid": "d88059813c4064ec28c58a8ab23d3030", "text": "Routing in Vehicular Ad hoc Networks is a challenging task due to the unique characteristics of the network such as high mobility of nodes, dynamically changing topology and highly partitioned network. It is a challenge to ensure reliable, continuous and seamless communication in the presence of speeding vehicles. The performance of routing protocols depends on various internal factors such as mobility of nodes and external factors such as road topology and obstacles that block the signal. This demands a highly adaptive approach to deal with the dynamic scenarios by selecting the best routing and forwarding strategies and by using appropriate mobility and propagation models. In this paper we review the existing routing protocols for VANETs and categorise them into a taxonomy based on key attributes such as network architecture, applications supported, routing strategies, forwarding strategies, mobility models and quality of service metrics. Protocols belonging to unicast, multicast, geocast and broadcast categories are discussed. Strengths and weaknesses of various protocols using topology based, position based and cluster based approaches are analysed. Emphasis is given on the adaptive and context-aware routing protocols. Simulation of broadcast and unicast protocols is carried out and the results are presented.", "title": "" }, { "docid": "0c0d0b6d4697b1a0fc454b995bcda79a", "text": "Online multiplayer games, such as Gears of War and Halo, use skill-based matchmaking to give players fair and enjoyable matches. They depend on a skill rating system to infer accurate player skills from historical data. TrueSkill is a popular and effective skill rating system, working from only the winner and loser of each game. This paper presents an extension to TrueSkill that incorporates additional information that is readily available in online shooters, such as player experience, membership in a squad, the number of kills a player scored, tendency to quit, and skill in other game modes. This extension, which we call TrueSkill2, is shown to significantly improve the accuracy of skill ratings computed from Halo 5 matches. 
TrueSkill2 predicts historical match outcomes with 68% accuracy, compared to 52% accuracy for TrueSkill.", "title": "" }, { "docid": "7343d29bfdc1a4466400f8752dce4622", "text": "We present a novel method for detecting occlusions and in-painting unknown areas of a light field photograph, based on previous work in obstruction-free photography and light field completion. An initial guess at separating the occluder from the rest of the photograph is computed by aligning backgrounds of the images and using this information to generate an occlusion mask. The masked pixels are then synthesized using a patch-based texture synthesis algorithm, with the median image as the source of each patch.", "title": "" }, { "docid": "2b71cfacf2b1e0386094711d8b326ff7", "text": "In-car navigation systems are designed with effectiveness and efficiency (e.g., guiding accuracy) in mind. However, finding a way and discovering new places could also be framed as an adventurous, stimulating experience for the driver and passengers. Inspired by Gaver and Martin's (2000) notion of \"ambiguity and detour\" and Hassenzahl's (2010) Experience Design, we built ExplorationRide, an in-car navigation system to foster exploration. An empirical in situ exploration demonstrated the system's ability to create an exploration experience, marked by a relaxed at-mosphere, a loss of sense of time, excitement about new places and an intensified relationship with the landscape.", "title": "" }, { "docid": "4592c8f5758ccf20430dbec02644c931", "text": "Taylor & Francis makes every effort to ensure the accuracy of all the information (the “Content”) contained in the publications on our platform. However, Taylor & Francis, our agents, and our licensors make no representations or warranties whatsoever as to the accuracy, completeness, or suitability for any purpose of the Content. Any opinions and views expressed in this publication are the opinions and views of the authors, and are not the views of or endorsed by Taylor & Francis. The accuracy of the Content should not be relied upon and should be independently verified with primary sources of information. Taylor and Francis shall not be liable for any losses, actions, claims, proceedings, demands, costs, expenses, damages, and other liabilities whatsoever or howsoever caused arising directly or indirectly in connection with, in relation to or arising out of the use of the Content.", "title": "" }, { "docid": "53518256d6b4f3bb4e8dcf28a35f9284", "text": "Customers often evaluate products at brick-and-mortar stores to identify their “best fit” product but buy it for a lower price at a competing online retailer. This free-riding behavior by customers is referred to as “showrooming” and we show that this is detrimental to the profits of the brick-and-mortar stores. We first analyze price matching as a short-term strategy to counter showrooming. Since customers purchase from the store at lower than store posted price when they ask for price-matching, one would expect the price matching strategy to be less effective as the fraction of customers who seek the matching increases. However, our results show that with an increase in the fraction of customers who seek price matching, the stores profits initially decrease and then increase. While price-matching could be used even when customers do not exhibit showrooming behavior, we find that it is more effective when customers do showrooming. We then study exclusivity of product assortments as a long-term strategy to counter showrooming. 
This strategy can be implemented in two different ways. One, by arranging for exclusivity of known brands (e.g. Macy’s has such an arrangement with Tommy Hilfiger), or, two, through creation of store brands at the brick-and-mortar store (T.J.Maxx uses a large number of store brands). Our analysis suggests that implementing exclusivity through store brands is better than exclusivity through known brands when the product category has few digital attributes. However, when customers do not showroom, the known brand strategy dominates the store brand strategy.", "title": "" }, { "docid": "91f45641d96b519dd65bf00249571a99", "text": "Tissue perfusion is determined by both blood vessel geometry and the rheological properties of blood. Blood is a nonNewtonian fluid, its viscosity being dependent on flow conditions. Blood and plasma viscosities, as well as the rheological properties of blood cells (e.g., deformability and aggregation of red blood cells), are influenced by disease processes and extreme physiological conditions. These rheological parameters may in turn affect the blood flow in vessels, and hence tissue perfusion. Unfortunately it is not always possible to determine if a change in rheological parameters is the cause or the result of a disease process. The hemorheology-tissue perfusion relationship is further complicated by the distinct in vivo behavior of blood. Besides the special hemodynamic mechanisms affecting the composition of blood in various regions of the vascular system, autoregulation based on vascular control mechanisms further complicates this relationship. Hemorheological parameters may be especially important for adequate tissue perfusion if the vascular system is geometrically challenged.", "title": "" }, { "docid": "dd34e763b3fdf0a0a903b773fe1a84be", "text": "Natural language processing (NLP) is a vibrant field of interdisciplinary Computer Science research. Ultimately, NLP seeks to build intelligence into software so that software will be able to process a natural language as skillfully and artfully as humans. Prolog, a general purpose logic programming language, has been used extensively to develop NLP applications or components thereof. This report is concerned with introducing the interested reader to the broad field of NLP with respect to NLP applications that are built in Prolog or from Prolog components.", "title": "" }, { "docid": "27ba6cfdebdedc58ab44b75a15bbca05", "text": "OBJECTIVES\nTo assess the influence of material/technique selection (direct vs. CAD/CAM inlays) for large MOD composite adhesive restorations and its effect on the crack propensity and in vitro accelerated fatigue resistance.\n\n\nMETHODS\nA standardized MOD slot-type tooth preparation was applied to 32 extracted maxillary molars (5mm depth and 5mm bucco-palatal width) including immediately sealed dentin for the inlay group. Fifteen teeth were restored with direct composite resin restoration (Miris2) and 17 teeth received milled inlays using Paradigm MZ100 block in the CEREC machine. All inlays were adhesively luted with a light curing composite resin (Filtek Z100). Enamel shrinkage-induced cracks were tracked with photography and transillumination. Cyclic isometric chewing (5 Hz) was simulated, starting with a load of 200 N (5000 cycles), followed by stages of 400, 600, 800, 1000, 1200 and 1400 N at a maximum of 30,000 cycles each. 
Samples were loaded until fracture or to a maximum of 185,000 cycles.\n\n\nRESULTS\nTeeth restored with the direct technique fractured at an average load of 1213 N and two of them withstood all loading cycles (survival=13%); with inlays, the survival rate was 100%. Most failures with Miris2 occurred above the CEJ and were re-restorable (67%), but generated more shrinkage-induced cracks (47% of the specimen vs. 7% for inlays).\n\n\nSIGNIFICANCE\nCAD/CAM MZ100 inlays increased the accelerated fatigue resistance and decreased the crack propensity of large MOD restorations when compared to direct restorations. While both restorative techniques yielded excellent fatigue results at physiological masticatory loads, CAD/CAM inlays seem more indicated for high-load patients.", "title": "" } ]
scidocsrr
d15b94152661b013e935f44373d6bc23
The Good, The Bad and the Ugly: A Meta-analytic Review of Positive and Negative Effects of Violent Video Games
[ { "docid": "a52fce0b7419d745a85a2bba27b34378", "text": "Playing action video games enhances several different aspects of visual processing; however, the mechanisms underlying this improvement remain unclear. Here we show that playing action video games can alter fundamental characteristics of the visual system, such as the spatial resolution of visual processing across the visual field. To determine the spatial resolution of visual processing, we measured the smallest distance a distractor could be from a target without compromising target identification. This approach exploits the fact that visual processing is hindered as distractors are brought close to the target, a phenomenon known as crowding. Compared with nonplayers, action-video-game players could tolerate smaller target-distractor distances. Thus, the spatial resolution of visual processing is enhanced in this population. Critically, similar effects were observed in non-video-game players who were trained on an action video game; this result verifies a causative relationship between video-game play and augmented spatial resolution.", "title": "" } ]
[ { "docid": "bbeebb29c7220009c8d138dc46e8a6dd", "text": "Let’s begin with a problem that many of you have seen before. It’s a common question in technical interviews. You’re given as input an array A of length n, with the promise that it has a majority element — a value that is repeated in strictly more than n/2 of the array’s entries. Your task is to find the majority element. In algorithm design, the usual “holy grail” is a linear-time algorithm. For this problem, your post-CS161 toolbox already contains a subroutine that gives a linear-time solution — just compute the median of A. (Note: it must be the majority element.) So let’s be more ambitious: can we compute the majority element with a single left-to-right pass through the array? If you haven’t seen it before, here’s the solution:", "title": "" }, { "docid": "45dbc5a3adacd0cc1374f456fb421ee9", "text": "The purpose of this article is to discuss current techniques used with poly-l-lactic acid to safely and effectively address changes observed in the aging face. Several important points deserve mention. First, this unique agent is not a filler but a stimulator of the host's own collagen, which then acts to volumize tissue in a gradual, progressive, and predictable manner. The technical differences between the use of biostimulatory agents and replacement fillers are simple and straightforward, but are critically important to the safe and successful use of these products and will be reviewed in detail. Second, in addition to gains in technical insights that have improved our understanding of how to use the product to best advantage, where to use the product to best advantage in facial filling has also improved with ever-evolving insights into the changes observed in the aging face. Finally, it is important to recognize that a patient's final outcome, and the amount of product and work it will take to get there, is a reflection of the quality of tissues with which they start. This is, of course, an issue of patient selection and not product selection.", "title": "" }, { "docid": "dd741d612ee466aecbb03f5e1be89b90", "text": "To date, many of the methods for information extraction of biological information from scientific articles are restricted to the abstract of the article. However, full text articles in electronic version, which offer larger sources of data, are currently available. Several questions arise as to whether the effort of scanning full text articles is worthy, or whether the information that can be extracted from the different sections of an article can be relevant. In this work we addressed those questions showing that the keyword content of the different sections of a standard scientific article (abstract, introduction, methods, results, and discussion) is very heterogeneous. Although the abstract contains the best ratio of keywords per total of words, other sections of the article may be a better source of biologically relevant data.", "title": "" }, { "docid": "7f368ea27e9aa7035c8da7626c409740", "text": "The GANs are generative models whose random samples realistically reflect natural images. It also can generate samples with specific attributes by concatenating a condition vector into the input, yet research on this field is not well studied. We propose novel methods of conditioning generative adversarial networks (GANs) that achieve state-of-the-art results on MNIST and CIFAR-10. 
We mainly introduce two models: an information retrieving model that extracts conditional information from the samples, and a spatial bilinear pooling model that forms bilinear features derived from the spatial cross product of an image and a condition vector. These methods significantly enhance log-likelihood of test data under the conditional distributions compared to the methods of concatenation.", "title": "" }, { "docid": "0d6a28cc55d52365986382f43c28c42c", "text": "Predictive analytics embraces an extensive range of techniques including statistical modeling, machine learning, and data mining and is applied in business intelligence, public health, disaster management and response, and many other fields. To date, visualization has been broadly used to support tasks in the predictive analytics pipeline. Primary uses have been in data cleaning, exploratory analysis, and diagnostics. For example, scatterplots and bar charts are used to illustrate class distributions and responses. More recently, extensive visual analytics systems for feature selection, incremental learning, and various prediction tasks have been proposed to support the growing use of complex models, agent-specific optimization, and comprehensive model comparison and result exploration. Such work is being driven by advances in interactive machine learning and the desire of end-users to understand and engage with the modeling process. In this state-of-the-art report, we catalogue recent advances in the visualization community for supporting predictive analytics. First, we define the scope of predictive analytics discussed in this article and describe how visual analytics can support predictive analytics tasks in a predictive visual analytics (PVA) pipeline. We then survey the literature and categorize the research with respect to the proposed PVA pipeline. Systems and techniques are evaluated in terms of their supported interactions, and interactions specific to predictive analytics are discussed. We end this report with a discussion of challenges and opportunities for future research in predictive visual analytics.", "title": "" }, { "docid": "a91add591aacaa333e109d77576ba463", "text": "It has become essential to scrutinize and evaluate software development methodologies, mainly because of their increasing number and variety. Evaluation is required to gain a better understanding of the features, strengths, and weaknesses of the methodologies. The results of such evaluations can be leveraged to identify the methodology most appropriate for a specific context. Moreover, methodology improvement and evolution can be accelerated using these results. However, despite extensive research, there is still a need for a feature/criterion set that is general enough to allow methodologies to be evaluated regardless of their types. We propose a general evaluation framework which addresses this requirement. In order to improve the applicability of the proposed framework, all the features – general and specific – are arranged in a hierarchy along with their corresponding criteria. Providing different levels of abstraction enables users to choose the suitable criteria based on the context. 
Major evaluation frameworks for object-oriented, agent-oriented, and aspect-oriented methodologies have been studied and assessed against the proposed framework to demonstrate its reliability and validity.", "title": "" }, { "docid": "8c79eb51cfbc9872a818cf6467648693", "text": "A compact frequency-reconfigurable slot antenna for LTE (2.3 GHz), AMT-fixed service (4.5 GHz), and WLAN (5.8 GHz) applications is proposed in this letter. A U-shaped slot with short ends and an L-shaped slot with open ends are etched in the ground plane to realize dual-band operation. By inserting two p-i-n diodes inside the slots, easy reconfigurability of three frequency bands over a frequency ratio of 2.62:1 can be achieved. In order to reduce the cross polarization of the antenna, another L-shaped slot is introduced symmetrically. Compared to the conventional reconfigurable slot antenna, the size of the antenna is reduced by 32.5%. Simulated and measured results show that the antenna can switch between two single-band modes (2.3 and 5.8 GHz) and two dual-band modes (2.3/4.5 and 4.5/5.8 GHz). Also, stable radiation patterns are obtained.", "title": "" }, { "docid": "94631c7be7b2a992d006cd642dcc502c", "text": "This paper describes nagging, a technique for parallelizing search in a heterogeneous distributed computing environment. Nagging exploits the speedup anomaly often observed when parallelizing problems by playing multiple reformulations of the problem or portions of the problem against each other. Nagging is both fault tolerant and robust to long message latencies. In this paper, we show how nagging can be used to parallelize several different algorithms drawn from the artificial intelligence literature, and describe how nagging can be combined with partitioning, the more traditional search parallelization strategy. We present a theoretical analysis of the advantage of nagging with respect to partitioning, and give empirical results obtained on a cluster of 64 processors that demonstrate nagging’s effectiveness and scalability as applied to A* search, α β minimax game tree search, and the Davis-Putnam algorithm.", "title": "" }, { "docid": "0e5eb8191cea7d3a59f192aa32a214c4", "text": "Recent neural models have shown significant progress on the problem of generating short descriptive texts conditioned on a small number of database records. In this work, we suggest a slightly more difficult data-to-text generation task, and investigate how effective current approaches are on this task. In particular, we introduce a new, large-scale corpus of data records paired with descriptive documents, propose a series of extractive evaluation methods for analyzing performance, and obtain baseline results using current neural generation methods. Experiments show that these models produce fluent text, but fail to convincingly approximate humangenerated documents. Moreover, even templated baselines exceed the performance of these neural models on some metrics, though copyand reconstructionbased extensions lead to noticeable improvements.", "title": "" }, { "docid": "54b094c7747c8ac0b1fbd1f93e78fd8e", "text": "It is essential for the marine navigator conducting maneuvers of his ship at sea to know future positions of himself and target ships in a specific time span to effectively solve collision situations. This article presents an algorithm of ship movement trajectory prediction, which, through data fusion, takes into account measurements of the ship's current position from a number of doubled autonomous devices. 
This increases the reliability and accuracy of prediction. The algorithm has been implemented in NAVDEC, a navigation decision support system and practically used on board ships.", "title": "" }, { "docid": "0fd61e297560ebb8bcf1aafdf011ae67", "text": "Research is fundamental to the advancement of medicine and critical to identifying the most optimal therapies unique to particular societies. This is easily observed through the dynamics associated with pharmacology, surgical technique and the medical equipment used today versus short years ago. Advancements in knowledge synthesis and reporting guidelines enhance the quality, scope and applicability of results; thus, improving health science and clinical practice and advancing health policy. While advancements are critical to the progression of optimal health care, the high cost associated with these endeavors cannot be ignored. Research fundamentally needs to be evaluated to identify the most efficient methods of evaluation. The primary objective of this paper is to look at a specific research methodology when applied to the area of clinical research, especially extracorporeal circulation and its prognosis for the future.", "title": "" }, { "docid": "e1c04d30c7b8f71d9c9b19cb2bb36a33", "text": "This Guide has been written to provide guidance for individuals involved in curriculum design who wish to develop research skills and foster the attributes in medical undergraduates that help develop research. The Guide will provoke debate on an important subject, and although written specifically with undergraduate medical education in mind, we hope that it will be of interest to all those involved with other health professionals' education. Initially, the Guide describes why research skills and its related attributes are important to those pursuing a medical career. It also explores the reasons why research skills and an ethos of research should be instilled into professionals of the future. The Guide also tries to define what these skills and attributes should be for medical students and lays out the case for providing opportunities to develop research expertise in the undergraduate curriculum. Potential methods to encourage the development of research-related attributes are explored as are some suggestions as to how research skills could be taught and assessed within already busy curricula. This publication also discusses the real and potential barriers to developing research skills in undergraduate students, and suggests strategies to overcome or circumvent these. Whilst we anticipate that this Guide will appeal to all levels of expertise in terms of student research, we hope that, through the use of case studies, we will provide practical advice to those currently developing this area within their curriculum.", "title": "" }, { "docid": "8863a617cee49b578a3902d12841053b", "text": "N Engl J Med 2009;361:1475-85. Copyright © 2009 Massachusetts Medical Society. DNA damage has emerged as a major culprit in cancer and many diseases related to aging. The stability of the genome is supported by an intricate machinery of repair, damage tolerance, and checkpoint pathways that counteracts DNA damage. In addition, DNA damage and other stresses can trigger a highly conserved, anticancer, antiaging survival response that suppresses metabolism and growth and boosts defenses that maintain the integrity of the cell. Induction of the survival response may allow interventions that improve health and extend the life span. 
Recently, the first candidate for such interventions, rapamycin (also known as sirolimus), has been identified.1 Compromised repair systems in tumors also offer opportunities for intervention, making it possible to attack malignant cells in which maintenance of the genome has been weakened. Time-dependent accumulation of damage in cells and organs is associated with gradual functional decline and aging.2 The molecular basis of this phenomenon is unclear,3-5 whereas in cancer, DNA alterations are the major culprit. In this review, I present evidence that cancer and diseases of aging are two sides of the DNA-damage problem. An examination of the importance of DNA damage and the systems of genome maintenance in relation to aging is followed by an account of the derailment of genome guardian mechanisms in cancer and of how this cancer-specific phenomenon can be exploited for treatment.", "title": "" }, { "docid": "e9750bf1287847b6587ad28b19e78751", "text": "Biomedical engineering handles the organization and functioning of medical devices in the hospital. This is a strategic function of the hospital for its balance, development, and growth, and a major focus of the hospital's internal and external reporting. It is based on the management of medical device needs and on the procedures governing biomedical team interventions. Multi-year projects of capital and operating expenditure in medical devices are planned as coherently as possible with the hospital's financial budgets. An information system is an essential tool for monitoring medical device engineering and its relationship with medical services.", "title": "" }, { "docid": "1203f22bfdfc9ecd211dbd79a2043a6a", "text": "After a short introduction to classic cryptography we explain thoroughly how quantum cryptography works. We then present an elegant experimental realization based on a self-balanced interferometer with Faraday mirrors. This phase-coding setup requires neither alignment of the interferometer nor polarization control, and therefore considerably facilitates the experiment. Moreover it features excellent fringe visibility. Next, we estimate the practical limits of quantum cryptography. The importance of the detector noise is illustrated and means of reducing it are presented. With present-day technologies, maximum distances of about 70 km with bit rates of 100 Hz are achievable. PACS: 03.67.Dd; 85.60; 42.25; 33.55.A Cryptography is the art of hiding information in a string of bits meaningless to any unauthorized party. To achieve this goal, one uses encryption: a message is combined according to an algorithm with some additional secret information – the key – to produce a cryptogram. In the traditional terminology, Alice is the party encrypting and transmitting the message, Bob the one receiving it, and Eve the malevolent eavesdropper. For a crypto-system to be considered secure, it should be impossible to unlock the cryptogram without Bob’s key. In practice, this demand is often softened, and one requires only that the system is sufficiently difficult to crack. The idea is that the message should remain protected as long as the information it contains is valuable. There are two main classes of crypto-systems, the public-key and the secret-key crypto-systems: Public key systems are based on so-called one-way functions: given a certain x, it is easy to compute f(x), but difficult to do the inverse, i.e. compute x from f(x). “Difficult” means that the task shall take a time that grows exponentially with the number of bits of the input. 
The RSA (Rivest, Shamir, Adleman) crypto-system for example is based on the factorizing of large integers. Anyone can compute 137 × 53 in a few seconds, but it may take a while to find the prime factors of 28 907. To transmit a message, Bob chooses a private key (based on two large prime numbers) and computes from it a public key (based on the product of these numbers) which he discloses publicly. Now Alice can encrypt her message using this public key and transmit it to Bob, who decrypts it with the private key. Public key systems are very convenient and became very popular over the last 20 years; however, they suffer from two potential major flaws. To date, nobody knows for sure whether or not factorizing is indeed difficult. For known algorithms, the time for calculation increases exponentially with the number of input bits, and one can easily improve the safety of RSA by choosing a longer key. However, a fast algorithm for factorization would immediately annihilate the security of the RSA system. Although it has not been published yet, there is no guarantee that such an algorithm does not exist. Second, problems that are difficult for a classical computer could become easy for a quantum computer. With the recent developments in the theory of quantum computation, there are reasons to fear that building these machines will eventually become possible. If one of these two possibilities came true, RSA would become obsolete. One would then have no choice but to turn to secret-key cryptosystems. Very convenient and broadly used are crypto-systems based on a public algorithm and a relatively short secret key. The DES (Data Encryption Standard, 1977) for example uses a 56-bit key and the same algorithm for coding and decoding. The secrecy of the cryptogram, however, depends again on the calculating power and the time of the eavesdropper. The only crypto-system providing proven, perfect secrecy is the “one-time pad” proposed by Vernam in 1935. With this scheme, a message is encrypted using a random key of equal length, by simply “adding” each bit of the message to the corresponding bit of the key. The scrambled text can then be sent to Bob, who decrypts the message by “subtracting” the same key. The bits of the ciphertext are as random as those of the key and consequently do not contain any information. Although perfectly secure, the problem with this system is that it is essential for Alice and Bob to share a common secret key, at least as long as the message they want to exchange, and use it only for a single encryption. This key must be transmitted by some trusted means or personal meeting, which turns out to be complex and expensive.", "title": "" }, { "docid": "1dcc48994fada1b46f7b294e08f2ed5d", "text": "This paper presents an application-specific integrated processor for an angular estimation system that works with 9-D inertial measurement units. The application-specific instruction-set processor (ASIP) was implemented on a field-programmable gate array and interfaced with a gyro-plus-accelerometer 6-D sensor and with a magnetic compass. Output data were recorded on a personal computer and also used to perform a live demo. During system modeling and design, it was chosen to represent angular position data with a quaternion and to use an extended Kalman filter as the sensor fusion algorithm. For this purpose, a novel two-stage filter was designed: The first stage uses accelerometer data, and the second one uses magnetic compass data for angular position correction. 
This allows flexibility, lower computational requirements, and robustness to magnetic field anomalies. The final goal of this work is to realize an upgraded application-specific integrated circuit that controls the microelectromechanical systems (MEMS) sensor and integrates the ASIP. This will allow the MEMS sensor gyro plus accelerometer and the angular estimation system to be contained in a single package; this system might optionally work with an external magnetic compass.", "title": "" }, { "docid": "222c51f079c785bb2aa64d2937e50ff0", "text": "Security and privacy in cloud computing are critical components for various organizations that depend on the cloud in their daily operations. Customers' data and the organizations' proprietary information have been subject to various attacks in the past. In this paper, we develop a set of Moving Target Defense (MTD) strategies that randomize the location of the Virtual Machines (VMs) to harden the cloud against a class of Multi-Armed Bandit (MAB) policy-based attacks. These attack policies capture the behavior of adversaries that seek to explore the allocation of VMs in the cloud and exploit the ones that provide the highest rewards (e.g., access to critical datasets, ability to observe credit card transactions, etc). We assess through simulation experiments the performance of our MTD strategies, showing that they can make MAB policy-based attacks no more effective than random attack policies. Additionally, we show the effects of critical parameters – such as discount factors, the time between randomizing the locations of the VMs, and variance in the rewards obtained – on the performance of our defenses. We validate our results through simulations and a real OpenStack system implementation in our lab to assess migration times and down times under different system loads.", "title": "" }, { "docid": "cf999fc9b1a604dadfc720cf1bbfafdc", "text": "The characteristics of the extracellular polymeric substances (EPS) extracted with nine different extraction protocols from four different types of anaerobic granular sludge were studied. The efficiency of four physical (sonication, heating, cationic exchange resin (CER), and CER associated with sonication) and four chemical (ethylenediaminetetraacetic acid, ethanol, formaldehyde combined with heating, or NaOH) EPS extraction methods was compared to a control extraction protocol (i.e., centrifugation). The nucleic acid content and the protein/polysaccharide ratio of the EPS extracted show that the extraction does not induce abnormal cellular lysis. Chemical extraction protocols give the highest EPS extraction yields (calculated by the mass ratio between sludges and EPS dry weight (DW)). Infrared analyses as well as an extraction yield over 100% or organic carbon content over 1 g g−1 of DW revealed, nevertheless, a carry-over of the chemical extractants into the EPS extracts. The EPS of the anaerobic granular sludges investigated are predominantly composed of humic-like substances, proteins, and polysaccharides. The EPS content in each biochemical compound varies depending on the sludge type and extraction technique used. 
Some extraction techniques lead to a slightly preferential extraction of some EPS compounds, e.g., CER gives a higher protein yield.", "title": "" }, { "docid": "22719028c913aa4d0407352caf185d7a", "text": "Although the fact that genetic predisposition and environmental exposures interact to shape development and function of the human brain and, ultimately, the risk of psychiatric disorders has drawn wide interest, the corresponding molecular mechanisms have not yet been elucidated. We found that a functional polymorphism altering chromatin interaction between the transcription start site and long-range enhancers in the FK506 binding protein 5 (FKBP5) gene, an important regulator of the stress hormone system, increased the risk of developing stress-related psychiatric disorders in adulthood by allele-specific, childhood trauma–dependent DNA demethylation in functional glucocorticoid response elements of FKBP5. This demethylation was linked to increased stress-dependent gene transcription followed by a long-term dysregulation of the stress hormone system and a global effect on the function of immune cells and brain areas associated with stress regulation. This identification of molecular mechanisms of genotype-directed long-term environmental reactivity will be useful for designing more effective treatment strategies for stress-related disorders.", "title": "" }, { "docid": "44bd4ef644a18dc58a672eb91c873a98", "text": "Reactive oxygen species (ROS) contain one or more unpaired electrons and are formed as intermediates in a variety of normal biochemical reactions. However, when generated in excess amounts or not appropriately controlled, ROS initiate extensive cellular damage and tissue injury. ROS have been implicated in the progression of cancer, cardiovascular disease and neurodegenerative and neuroinflammatory disorders, such as multiple sclerosis (MS). In the last decade there has been a major interest in the involvement of ROS in MS pathogenesis and evidence is emerging that free radicals play a key role in various processes underlying MS pathology. To counteract ROS-mediated damage, the central nervous system is equipped with an intrinsic defense mechanism consisting of endogenous antioxidant enzymes. Here, we provide a comprehensive overview on the (sub)cellular origin of ROS during neuroinflammation as well as the detrimental effects of ROS in processing underlying MS lesion development and persistence. In addition, we will discuss clinical and experimental studies highlighting the therapeutic potential of antioxidant protection in the pathogenesis of MS.", "title": "" } ]
scidocsrr
4534df7a48326def1badb12418df5c36
Internet of things for sleep quality monitoring system: A survey
[ { "docid": "8bcc223389b7cc2ce2ef4e872a029489", "text": "Issues concerning agriculture, countryside and farmers have been always hindering China’s development. The only solution to these three problems is agricultural modernization. However, China's agriculture is far from modernized. The introduction of cloud computing and internet of things into agricultural modernization will probably solve the problem. Based on major features of cloud computing and key techniques of internet of things, cloud computing, visualization and SOA technologies can build massive data involved in agricultural production. Internet of things and RFID technologies can help build plant factory and realize automatic control production of agriculture. Cloud computing is closely related to internet of things. A perfect combination of them can promote fast development of agricultural modernization, realize smart agriculture and effectively solve the issues concerning agriculture, countryside and farmers.", "title": "" } ]
[ { "docid": "b4e1fdeb6d467eddfea074b802558fb8", "text": "This paper proposes a novel and more accurate iris segmentation framework to automatically segment iris region from the face images acquired with relaxed imaging under visible or near-infrared illumination, which provides strong feasibility for applications in surveillance, forensics and the search for missing children, etc. The proposed framework is built on a novel total-variation based formulation which uses l1 norm regularization to robustly suppress noisy texture pixels for the accurate iris localization. A series of novel and robust post processing operations are introduced to more accurately localize the limbic boundaries. Our experimental results on three publicly available databases, i.e., FRGC, UBIRIS.v2 and CASIA.v4-distance, achieve significant performance improvement in terms of iris segmentation accuracy over the state-of-the-art approaches in the literature. Besides, we have shown that using iris masks generated from the proposed approach helps to improve iris recognition performance as well. Unlike prior work, all the implementations in this paper are made publicly available to further advance research and applications in biometrics at-d-distance.", "title": "" }, { "docid": "49dcfa6459c83b20f731c61f3a1ed7cf", "text": "The number of unmanned vehicles and devices deployed underwater is increasing. New communication systems and networking protocols are required to handle this growth. Underwater free-space optical communication is poised to augment acoustic communication underwater, especially for short-range, mobile, multi-user environments in future underwater systems. Existing systems are typically point-to-point links with strict pointing and tracking requirements. In this paper we demonstrate compact smart transmitters and receivers for underwater free-space optical communications. The receivers have segmented wide field of view and are capable of estimating angle of arrival of signals. The transmitters are highly directional with individually addressable LEDs for electronic switched beamsteering, and are capable of estimating water quality from its backscattered light collected by its co-located receiver. Together they form enabling technologies for non-traditional networking schemes in swarms of unmanned vehicles underwater.", "title": "" }, { "docid": "9d979b8cf09dd54b28e314e2846f02a6", "text": "Purpose – The objective of this paper is to analyse whether individuals’ socioeconomic characteristics – age, gender and income – influence their online shopping behaviour. The individuals analysed are experienced e-shoppers i.e. individuals who often make purchases on the internet. Design/methodology/approach – The technology acceptance model was broadened to include previous use of the internet and perceived self-efficacy. The perceptions and behaviour of e-shoppers are based on their own experiences. The information obtained has been tested using causal and multi-sample analyses. Findings – The results show that socioeconomic variables moderate neither the influence of previous use of the internet nor the perceptions of e-commerce; in short, they do not condition the behaviour of the experienced e-shopper. Practical implications – The results obtained help to determine that once individuals attain the status of experienced e-shoppers their behaviour is similar, independently of their socioeconomic characteristics. 
The internet has become a marketplace suitable for all ages and incomes and both genders, and thus the prejudices linked to the advisability of selling certain products should be revised. Originality/value – Previous research related to the socioeconomic variables affecting e-commerce has been aimed at forecasting who is likely to make an initial online purchase. In contrast to the majority of existing studies, it is considered that the current development of the online environment should lead to analysis of a new kind of e-shopper (experienced purchaser), whose behaviour differs from that studied at the outset of this research field. The experience acquired with online shopping nullifies the importance of socioeconomic characteristics.", "title": "" }, { "docid": "5a3b8a2ec8df71956c10b2eb10eabb99", "text": "During a project examining the use of machine learning techniques for oil spill detection, we encountered several essential questions that we believe deserve the attention of the research community. We use our particular case study to illustrate such issues as problem formulation, selection of evaluation measures, and data preparation. We relate these issues to properties of the oil spill application, such as its imbalanced class distribution, that are shown to be common to many applications. Our solutions to these issues are implemented in the Canadian Environmental Hazards Detection System (CEHDS), which is about to undergo field testing.", "title": "" }, { "docid": "29e5d267bebdeb2aa22b137219b4407e", "text": "Social networks are popular platforms for interaction, communication and collaboration between friends. Researchers have recently proposed an emerging class of applications that leverage relationships from social networks to improve security and performance in applications such as email, web browsing and overlay routing. While these applications often cite social network connectivity statistics to support their designs, researchers in psychology and sociology have repeatedly cast doubt on the practice of inferring meaningful relationships from social network connections alone.\n This leads to the question: Are social links valid indicators of real user interaction? If not, then how can we quantify these factors to form a more accurate model for evaluating socially-enhanced applications? In this paper, we address this question through a detailed study of user interactions in the Facebook social network. We propose the use of interaction graphs to impart meaning to online social links by quantifying user interactions. We analyze interaction graphs derived from Facebook user traces and show that they exhibit significantly lower levels of the \"small-world\" properties shown in their social graph counterparts. This means that these graphs have fewer \"supernodes\" with extremely high degree, and overall network diameter increases significantly as a result. To quantify the impact of our observations, we use both types of graphs to validate two well-known social-based applications (RE and SybilGuard). The results reveal new insights into both systems, and confirm our hypothesis that studies of social applications should use real indicators of user interactions in lieu of social graphs.", "title": "" }, { "docid": "6c2a033b374b4318cd94f0a617ec705a", "text": "In this paper, we propose to use Deep Neural Net (DNN), which has been recently shown to reduce speech recognition errors significantly, in Computer-Aided Language Learning (CALL) to evaluate English learners’ pronunciations. 
Multi-layer, stacked Restricted Boltzmann Machines (RBMs) are first trained as nonlinear basis functions to represent speech signals succinctly, and the output layer is discriminatively trained to optimize the posterior probabilities of correct, sub-phonemic “senone” states. Three Goodness of Pronunciation (GOP) scores, including the likelihood-based posterior probability, averaged frame-level posteriors of the DNN output layer “senone” nodes, and the log likelihood ratio of correct and competing models, are tested with recordings of both native and non-native speakers, along with manual grading of pronunciation quality. The experimental results show that the GOP estimated by averaged frame-level posteriors of “senones” correlates with human scores the best. Compared with GOPs estimated with non-DNN (i.e., GMM-HMM) based models, the new approach can improve the correlations relatively by 22.0% or 15.6%, at word or sentence levels, respectively. In addition, the frame-level posteriors, which do not need a decoding lattice and the corresponding forward-backward computations, are suitable for supporting fast, on-line, multi-channel applications.", "title": "" }, { "docid": "1a22f7d1d57a00669f3052f8906ac4fa", "text": "Calcium pyrophosphate (CPP) crystal deposition (CPPD) is associated with ageing and osteoarthritis, and with uncommon disorders such as hyperparathyroidism, hypomagnesemia, hemochromatosis and hypophosphatasia. Elevated levels of synovial fluid pyrophosphate promote CPP crystal formation. This extracellular pyrophosphate originates either from the breakdown of nucleotide triphosphates by plasma-cell membrane glycoprotein 1 (PC-1) or from pyrophosphate transport by the transmembrane protein progressive ankylosis protein homolog (ANK). Although the etiology of apparent sporadic CPPD is not well-established, mutations in the ANK human gene (ANKH) have been shown to cause familial CPPD. In this Review, the key regulators of pyrophosphate metabolism and factors that lead to high extracellular pyrophosphate levels are described. Particular emphasis is placed on the mechanisms by which mutations in ANKH cause CPPD and the clinical phenotype of these mutations is discussed. Cartilage factors predisposing to CPPD and CPP-crystal-induced inflammation and current treatment options for the management of CPPD are also described.", "title": "" }, { "docid": "84ece888e2302d13775973f552c6b810", "text": "We present a qualitative study of hospitality exchange processes that take place via the online peer-to-peer platform Airbnb. We explore 1) what motivates individuals to monetize network hospitality and 2) how the presence of money ties in with the social interaction related to network hospitality. We approach the topic from the perspective of hosts -- that is, Airbnb users who participate by offering accommodation for other members in exchange for monetary compensation. We found that participants were motivated to monetize network hospitality for both financial and social reasons. Our analysis indicates that the presence of money can provide a helpful frame for network hospitality, supporting hosts in their efforts to accomplish desired sociability, select guests consistent with their preferences, and control the volume and type of demand. 
We conclude the paper with a critical discussion of the implications of our findings for network hospitality and, more broadly, for the so-called sharing economy.", "title": "" }, { "docid": "1a22f7d1d57a00669f3052f8906ac4fa", "text": "BACKGROUND\nThere have been previous representative nutritional status surveys conducted in Hungary, but this is the first one that examines overweight and obesity prevalence according to the level of urbanization and in different geographic regions among 6-8-year-old children. We also assessed whether these variations were different by sex.\n\n\nMETHODS\nThis survey was part of the fourth data collection round of World Health Organization (WHO) Childhood Obesity Surveillance Initiative which took place during the academic year 2016/2017. The representative sample was determined by two-stage cluster sampling. A total of 5332 children (48.4% boys; age 7.54 ± 0.64 years) were measured from all seven geographic regions including urban (at least 500 inhabitants per square kilometer; n = 1598), semi-urban (100 to 500 inhabitants per square kilometer; n = 1932) and rural (less than 100 inhabitants per square kilometer; n = 1802) areas.\n\n\nRESULTS\nUsing the WHO reference, prevalence of overweight and obesity within the whole sample were 14.2, and 12.7%, respectively. According to the International Obesity Task Force (IOTF) reference, rates were 12.6 and 8.6%. Northern Hungary and Southern Transdanubia were the regions with the highest obesity prevalence of 11.0 and 12.0%, while Central Hungary was the one with the lowest obesity rate (6.1%). The prevalence of overweight and obesity tended to be higher in rural areas (13.0 and 9.8%) than in urban areas (11.9 and 7.0%). Concerning differences in sex, girls had higher obesity risk in rural areas (OR = 2.0) but boys did not. Odds ratios were 2.0-3.4 in different regions for obesity compared to Central Hungary, but only among boys.\n\n\nCONCLUSIONS\nOverweight and obesity are emerging problems in Hungary. Remarkable differences were observed in the prevalence of obesity by geographic regions. These variations can only be partly explained by geographic characteristics.\n\n\nTRIAL REGISTRATION\nStudy protocol was approved by the Scientific and Research Ethics Committee of the Medical Research Council ( 61158-2/2016/EKU ).", "title": "" }, { "docid": "ab231cbc45541b5bdbd0da82571b44ca", "text": "ABSTRACT Evidence of Sedona magnetic anomaly and brainwave EEG synchronization can be demonstrated with portable equipment on site in the field, during sudden magnetic events. Previously, we have demonstrated magnetic anomaly charts recorded in both known and unrecognized Sedona vortex activity locations. We have also shown a correlation or amplification of vortex phenomena with Schumann Resonance. Adding the third measurable parameter of brain wave activity, we demonstrate resonance and amplification among them. We suggest tiny magnetic crystals, biogenic magnetite, make human beings highly sensitive to ELF field fluctuations. Biological Magnetite could act as a transducer of both low frequency magnetic fields and RF fields.", "title": "" }, { "docid": "3f723663369de329a05ac258d36379eb", "text": "This paper reviews the history of aerosol therapy; discusses patient, drug, and device factors that can influence the success of aerosol therapy; and identifies trends that will drive the science of aerosol therapy in the future. 
Aerosol medication is generally less expensive, works more rapidly, and produces fewer side effects than the same drug given systemically. Aerosol therapy has been used for thousands of years by steaming and burning plant material. In the 50 years since the invention of the pressurized metered-dose inhaler, advances in drugs and devices have made aerosols the most commonly used way to deliver therapy for asthma and COPD. The requirements for aerosol therapy depend on the target site of action and the underlying disease. Medication to treat airways disease should deposit on the conducting airways. Effective deposition of airway particles generally requires particle size between 0.5 and 5 microm mass median aerodynamic diameter; however, a smaller particle size neither equates to greater side effects nor greater effectiveness. However, medications like peptides intended for systemic absorption, need to deposit on the alveolar capillary bed. Thus ultrafine particles, a slow inhalation, and relatively normal airways that do not hinder aerosol penetration will optimize systemic delivery. Aerosolized antimicrobials are often used for the treatment of cystic fibrosis or bronchiectasis, and mucoactive agents to promote mucus clearance have been delivered by aerosol. As technology improves, a greater variety of novel medications are being developed for aerosol delivery, including for therapy of pulmonary hypertension, as vaccines, for decreasing dyspnea, to treat airway inflammation, for migraine headache, for nicotine and drug addiction, and ultimately for gene therapy. Common reasons for therapeutic failure of aerosol medications include the use of inactive or depleted medications, inappropriate use of the aerosol device, and, most importantly, poor adherence to prescribed therapy. The respiratory therapist plays a key role in patient education, device selection, and outcomes assessment.", "title": "" }, { "docid": "448040bcefe4a67a2a8c4b2cf75e7ebc", "text": "Visual analytics has been widely studied in the past decade. One key to make visual analytics practical for both research and industrial applications is the appropriate definition and implementation of the visual analytics pipeline which provides effective abstractions for designing and implementing visual analytics systems. In this paper we review the previous work on visual analytics pipelines and individual modules from multiple perspectives: data, visualization, model and knowledge. In each module we discuss various representations and descriptions of pipelines inside the module, and compare the commonalities and the differences among them.", "title": "" }, { "docid": "0f87cd3209d3cc28b60425eeab37f1a4", "text": "This paper presents low-loss 3-D transmission lines and vertical interconnects fabricated by aerosol jet printing (AJP) which is an additive manufacturing technology. AJP stacks up multiple layers with minimum feature size as small as 20 μm in the xy-direction and 0.7 μm in the z-direction. It also solves the problem of fabricating vias to realize the vertical transition by 3-D printing. The loss of the stripline is measured to be 0.53 dB/mm at 40 GHz. The vertical transition achieves a broadband bandwidth from 0.1 to 40 GHz. The results of this paper demonstrate the feasibility of utilizing 3-D printing for low-cost multilayer system-on-package RF/millimeter-wave front-ends.", "title": "" }, { "docid": "b7851d3e08d29d613fd908d930afcd6b", "text": "Word sense embeddings represent a word sense as a low-dimensional numeric vector. 
While this representation is potentially useful for NLP applications, its interpretability is inherently limited. We propose a simple technique that improves interpretability of sense vectors by mapping them to synsets of a lexical resource. Our experiments with AdaGram sense embeddings and BabelNet synsets show that it is possible to retrieve synsets that correspond to automatically learned sense vectors with Precision of 0.87, Recall of 0.42 and AUC of 0.78.", "title": "" }, { "docid": "74ffa7a819d415ed6381f4128cc04fdd", "text": "The process of identifying the actual meanings of words in a given text fragment has a long history in the field of computational linguistics. Due to its importance in understanding the semantics of natural language, it is considered one of the most challenging problems facing this field. In this article we propose a new unsupervised similarity-based word sense disambiguation (WSD) algorithm that operates by computing the semantic similarity between glosses of the target word and a context vector. The sense of the target word is determined as that for which the similarity between gloss and context vector is greatest. Thus, whereas conventional unsupervised WSD methods are based on measuring pairwise similarity between words, our approach is based on measuring semantic similarity between sentences. This enables it to utilize a higher degree of semantic information, and is more consistent with the way that human beings disambiguate; that is, by considering the greater context in which the word appears. We also show how performance can be further improved by incorporating a preliminary step in which the relative importance of words within the original text fragment is estimated, thereby providing an ordering that can be used to determine the sequence in which words should be disambiguated. We provide empirical results that show that our method performs favorably against the state-of-the-art unsupervised word sense disambiguation methods, as evaluated on several benchmark datasets through different models of evaluation.", "title": "" }, { "docid": "3e817504c0db80831d9edbda60254247", "text": "OBJECTIVES\nThe purpose of this descriptive study was to investigate the current situation of clinical alarms in intensive care unit (ICU), nurses' recognition of and fatigue in relation to clinical alarms, and obstacles in alarm management.\n\n\nMETHODS\nSubjects were ICU nurses and devices from 48 critically ill patient cases. Data were collected through direct observation of alarm occurrence and questionnaires that were completed by the ICU nurses. The observation time unit was one hour block. One bed out of 56 ICU beds was randomly assigned to each observation time unit.\n\n\nRESULTS\nOverall 2,184 clinical alarms were counted for 48 hours of observation, and 45.5 clinical alarms occurred per hour per subject. Of these, 1,394 alarms (63.8%) were categorized as false alarms. The alarm fatigue score was 24.3 ± 4.0 out of 35. The highest scoring item was \"always get bothered due to clinical alarms\". The highest scoring item in obstacles was \"frequent false alarms, which lead to reduced attention or response to alarms\".\n\n\nCONCLUSIONS\nNurses reported that they felt some fatigue due to clinical alarms, and false alarms were also obstacles to proper management. 
An appropriate hospital policy should be developed to reduce false alarms and nurses' alarm fatigue.", "title": "" }, { "docid": "6964d3ac400abd6ace1ed48c36d68d06", "text": "Sentiment Analysis (SA) is indeed a fascinating area of research which has stolen the attention of researchers as it has many facets and more importantly it promises economic stakes in the corporate and governance sector. SA has been stemmed out of text analytics and established itself as a separate identity and a domain of research. The wide ranging results of SA have proved to influence the way some critical decisions are taken. Hence, it has become relevant in thorough understanding of the different dimensions of the input, output and the processes and approaches of SA.", "title": "" }, { "docid": "ee9cb495280dc6e252db80c23f2f8c2b", "text": "Due to the dramatical increase in popularity of mobile devices in the last decade, more sensitive user information is stored and accessed on these devices everyday. However, most existing technologies for user authentication only cover the login stage or only work in restricted controlled environments or GUIs in the post login stage. In this work, we present TIPS, a Touch based Identity Protection Service that implicitly and unobtrusively authenticates users in the background by continuously analyzing touch screen gestures in the context of a running application. To the best of our knowledge, this is the first work to incorporate contextual app information to improve user authentication. We evaluate TIPS over data collected from 23 phone owners and deployed it to 13 of them with 100 guest users. TIPS can achieve over 90% accuracy in real-life naturalistic conditions within a small amount of computational overhead and 6% of battery usage.", "title": "" }, { "docid": "8f0805ba67919e349f2cd506378a5171", "text": "Cycloastragenol (CAG) is an aglycone of astragaloside IV. It was first identified when screening Astragalus membranaceus extracts for active ingredients with antiaging properties. The present study demonstrates that CAG stimulates telomerase activity and cell proliferation in human neonatal keratinocytes. In particular, CAG promotes scratch wound closure of human neonatal keratinocyte monolayers in vitro. The distinct telomerase-activating property of CAG prompted evaluation of its potential application in the treatment of neurological disorders. Accordingly, CAG induced telomerase activity and cAMP response element binding (CREB) activation in PC12 cells and primary neurons. Blockade of CREB expression in neuronal cells by RNA interference reduced basal telomerase activity, and CAG was no longer efficacious in increasing telomerase activity. CAG treatment not only induced the expression of bcl2, a CREB-regulated gene, but also the expression of telomerase reverse transcriptase in primary cortical neurons. Interestingly, oral administration of CAG for 7 days attenuated depression-like behavior in experimental mice. In conclusion, CAG stimulates telomerase activity in human neonatal keratinocytes and rat neuronal cells, and induces CREB activation followed by tert and bcl2 expression. Furthermore, CAG may have a novel therapeutic role in depression.", "title": "" }, { "docid": "444bcff9a7fdcb80041aeb01b8724eed", "text": "The morphologic anatomy of the liver is described as 2 main and 2 accessory lobes. The more recent functional anatomy of the liver is based on the distribution of the portal pedicles and the location of the hepatic veins. 
The liver is divided into 4 sectors, some of them composed of 2 segments. In all, there are 8 segments. According to the anatomy, typical hepatectomies (or “réglées”) are those which are performed along anatomical scissurae. The 2 main technical conceptions of typical hepatectomies are those with preliminary vascular control (Lortat-Jacob's technique) and hepatectomies with primary parenchymatous transection (Ton That Tung's technique). A good knowledge of the anatomy of the liver is a prerequisite for anatomical surgery of this organ. L'anatomie morphologique du foie permet d'individualiser 2 lobes principaux et 2 lobes accessoires. L'anatomie fonctionnelle du foie, plus récemment décrite, est fondée sur la distribution des pédicules portaux et sur la localisation des veines sus-hépatiques. Le foie est divisé en 4 secteurs, eux-mêmes composés en général de 2 segments. Au total, il y a 8 segments. Selon les données anatomiques, les hépatectomies typiques (ou réglées) sont celles qui sont réalisées le long des scissures anatomiques. Les deux conceptions principales des exérèses hépatiques typiques sont, du point de vue technique, les hépatectomies avec contrôle vasculaire préalable (technique de Lortat-Jacob) et les hépatectomies avec abord transparenchymateux premier (technique de Ton That Tung). Une connaissance approfondie de l'anatomie du foie est une condition préalable à la réalisation d'une chirurgie anatomique de cet organe.", "title": "" } ]
scidocsrr
385f6d4010c29fe15ae103b795f138d7
Predicting customer churn in banking industry using neural networks
[ { "docid": "310e525bc7a78da2987d8c6d6a0ff46b", "text": "This tutorial provides an overview of the data mining process. The tutorial also provides a basic understanding of how to plan, evaluate and successfully refine a data mining project, particularly in terms of model building and model evaluation. Methodological considerations are discussed and illustrated. After explaining the nature of data mining and its importance in business, the tutorial describes the underlying machine learning and statistical techniques involved. It describes the CRISP-DM standard now being used in industry as the standard for a technology-neutral data mining process model. The paper concludes with a major illustration of the data mining process methodology and the unsolved problems that offer opportunities for research. The approach is both practical and conceptually sound in order to be useful to both academics and practitioners.", "title": "" } ]
[ { "docid": "e64608f39ab082982178ad2b3539890f", "text": "Hoeschele, Michael David. M.S., Purdue University, May, 2006, Detecting Social Engineering. Major Professor: Marcus K. Rogers. This study consisted of creating and evaluating a proof of concept model of the Social Engineering Defense Architecture (SEDA) as theoretically proposed by Hoeschele and Rogers (2005). The SEDA is a potential solution to the problem of Social Engineering (SE) attacks perpetrated over the phone lines. The proof of concept model implemented some simple attack detection processes and the database to store all gathered information. The model was tested by generating benign telephone conversations in addition to conversations that include Social Engineering (SE) attacks. The conversations were then processed by the model to determine its accuracy to detect attacks. The model was able to detect all attacks and to store all of the correct data in the database, resulting in 100% accuracy.", "title": "" }, { "docid": "c16f21fd2b50f7227ea852882004ef5b", "text": "We study a stock dealer’s strategy for submitting bid and ask quotes in a limit order book. The agent faces an inventory risk due to the diffusive nature of the stock’s mid-price and a transactions risk due to a Poisson arrival of market buy and sell orders. After setting up the agent’s problem in a maximal expected utility framework, we derive the solution in a two step procedure. First, the dealer computes a personal indifference valuation for the stock, given his current inventory. Second, he calibrates his bid and ask quotes to the market’s limit order book. We compare this ”inventory-based” strategy to a ”naive” best bid/best ask strategy by simulating stock price paths and displaying the P&L profiles of both strategies. We find that our strategy has a P&L profile that has both a higher return and lower variance than the benchmark strategy.", "title": "" }, { "docid": "f3860c0ed0803759e44133a0110a60bb", "text": "Using comment information available from Digg we define a co-participation network between users. We focus on the analysis of this implicit network, and study the behavioral characteristics of users. Using an entropy measure, we infer that users at Digg are not highly focused and participate across a wide range of topics. We also use the comment data and social network derived features to predict the popularity of online content linked at Digg using a classification and regression framework. We show promising results for predicting the popularity scores even after limiting our feature extraction to the first few hours of comment activity that follows a Digg submission.", "title": "" }, { "docid": "49215cb8cb669aef5ea42dfb1e7d2e19", "text": "Many people rely on Web-based tutorials to learn how to use complex software. Yet, it remains difficult for users to systematically explore the set of tutorials available online. We present Sifter, an interface for browsing, comparing and analyzing large collections of image manipulation tutorials based on their command-level structure. Sifter first applies supervised machine learning to identify the commands contained in a collection of 2500 Photoshop tutorials obtained from the Web. 
It then provides three different views of the tutorial collection based on the extracted command-level structure: (1) A Faceted Browser View allows users to organize, sort and filter the collection based on tutorial category, command names or on frequently used command subsequences, (2) a Tutorial View summarizes and indexes tutorials by the commands they contain, and (3) an Alignment View visualizes the commandlevel similarities and differences between a subset of tutorials. An informal evaluation (n=9) suggests that Sifter enables users to successfully perform a variety of browsing and analysis tasks that are difficult to complete with standard keyword search. We conclude with a meta-analysis of our Photoshop tutorial collection and present several implications for the design of image manipulation software. ACM Classification H5.2 [Information interfaces and presentation]: User Interfaces. Graphical user interfaces. Author", "title": "" }, { "docid": "84d2e697b2f2107d34516909f22768c6", "text": "PURPOSE\nSchema therapy was first applied to individuals with borderline personality disorder (BPD) over 20 years ago, and more recent work has suggested efficacy across a range of disorders. The present review aimed to systematically synthesize evidence for the efficacy and effectiveness of schema therapy in reducing early maladaptive schema (EMS) and improving symptoms as applied to a range of mental health disorders in adults including BPD, other personality disorders, eating disorders, anxiety disorders, and post-traumatic stress disorder.\n\n\nMETHODS\nStudies were identified through electronic searches (EMBASE, PsycINFO, MEDLINE from 1990 to January 2016).\n\n\nRESULTS\nThe search produced 835 titles, of which 12 studies were found to meet inclusion criteria. A significant number of studies of schema therapy treatment were excluded as they failed to include a measure of schema change. The Clinical Trial Assessment Measure was used to rate the methodological quality of studies. Schema change and disorder-specific symptom change was found in 11 of the 12 studies.\n\n\nCONCLUSIONS\nSchema therapy has demonstrated initial significant results in terms of reducing EMS and improving symptoms for personality disorders, but formal mediation analytical studies are lacking and rigorous evidence for other mental health disorders is currently sparse.\n\n\nPRACTITIONER POINTS\nFirst review to investigate whether schema therapy leads to reduced maladaptive schemas and symptoms across mental health disorders. Limited evidence for schema change with schema therapy in borderline personality disorder (BPD), with only three studies conducting correlational analyses. Evidence for schema and symptom change in other mental health disorders is sparse, and so use of schema therapy for disorders other than BPD should be based on service user/patient preference and clinical expertise and/or that the theoretical underpinnings of schema therapy justify the use of it therapeutically. Further work is needed to develop the evidence base for schema therapy for other disorders.", "title": "" }, { "docid": "2cba0f9b3f4b227dfe0b40e3bebd99e4", "text": "In this paper we propose a discriminant learning framework for problems in which data consist of linear subspaces instead of vectors. By treating subspaces as basic elements, we can make learning algorithms adapt naturally to the problems with linear invariant structures. 
We propose a unifying view on the subspace-based learning method by formulating the problems on the Grassmann manifold, which is the set of fixed-dimensional linear subspaces of a Euclidean space. Previous methods on the problem typically adopt an inconsistent strategy: feature extraction is performed in the Euclidean space while non-Euclidean distances are used. In our approach, we treat each sub-space as a point in the Grassmann space, and perform feature extraction and classification in the same space. We show feasibility of the approach by using the Grassmann kernel functions such as the Projection kernel and the Binet-Cauchy kernel. Experiments with real image databases show that the proposed method performs well compared with state-of-the-art algorithms.", "title": "" }, { "docid": "e5a6a42edcfd66dc16e6caa09cc67a10", "text": "Eosinophilic esophagitis is an adaptive immune response to patient-specific antigens, mostly foods. Eosinophilic esophagitis is not solely IgE-mediated and is likely characterized by Th2 lymphocytes with an impaired esophageal barrier function. The key cytokines and chemokines are thymic stromal lymphopoeitin, interleukin-13, CCL26/eotaxin-3, and transforming growth factor-β, all involved in eosinophil recruitment and remodeling. Chronic food dysphagia and food impactions, the feared late complications, are related in part to dense subepithelial fibrosis, likely induced by interleukin-13 and transforming growth factor-β.", "title": "" }, { "docid": "edcdae3f9da761cedd52273ccd850520", "text": "Extracting information from Web pages requires the ability to work at Web scale in terms of the number of documents, the number of domains and domain complexity. Recent approaches have used existing knowledge bases to learn to extract information with promising results. In this paper we propose the use of distant supervision for relation extraction from the Web. Distant supervision is a method which uses background information from the Linking Open Data cloud to automatically label sentences with relations to create training data for relation classifiers. Although the method is promising, existing approaches are still not suitable for Web extraction as they suffer from three main issues: data sparsity, noise and lexical ambiguity. Our approach reduces the impact of data sparsity by making entity recognition tools more robust across domains, as well as extracting relations across sentence boundaries. We reduce the noise caused by lexical ambiguity by employing statistical methods to strategically select training data. Our experiments show that using a more robust entity recognition approach and expanding the scope of relation extraction results in about 8 times the number of extractions, and that strategically selecting training data can result in an error reduction of about 30%.", "title": "" }, { "docid": "2272325860332d5d41c02f317ab2389e", "text": "For a developing nation, deploying big data (BD) technology and introducing data science in higher education is a challenge. A pessimistic scenario is: Mis-use of data in many possible ways, waste of trained manpower, poor BD certifications from institutes, under-utilization of resources, disgruntled management staff, unhealthy competition in the market, poor integration with existing technical infrastructures. Also, the questions in the minds of students, scientists, engineers, teachers and managers deserve wider attention. 
Besides the stated perceptions and analyses perhaps ignoring socio-political and scientific temperaments in developing nations, the following questions arise: How did the BD phenomenon naturally occur, post technological developments in Computer and Communications Technology and how did different experts react to it? Are academicians elsewhere agreeing on the fact that BD is a new science? Granted that big data science is a new science what are its foundations as compared to conventional topics in Physics, Chemistry or Biology? Or, is it similar in an esoteric sense to astronomy or nuclear science? What are the technological and engineering implications locally and globally and how these can be advantageously used to augment business intelligence, for example? In other words, will the industry adopt the changes due to tactical advantages? How can BD success stories be faithfully carried over elsewhere? How will BD affect the Computer Science and other curricula? How will BD benefit different segments of our society on a large scale? To answer these, an appreciation of the BD as a science and as a technology is necessary. This paper presents a quick BD overview, relying on the contemporary literature; it addresses: characterizations of BD and the BD people, the background required for the students and teachers to join the BD bandwagon, the management challenges in embracing BD so that the bottomline is clear.", "title": "" }, { "docid": "514bf9c9105dd3de95c3965bb86ebe36", "text": "Origami is the centuries-old art of folding paper, and recently, it is investigated as computer science: Given an origami with creases, the problem to determine if it can be flat after folding all creases is NP-hard. Another hundreds-old art of folding paper is a pop-up book. A model for the pop-up book design problem is given, and its computational complexity is investigated. We show that both of the opening book problem and the closing book problem are NP-hard.", "title": "" }, { "docid": "1c60ddeb7e940992094cb8f3913e811a", "text": "In this paper, we address the scene segmentation task by capturing rich contextual dependencies based on the selfattention mechanism. Unlike previous works that capture contexts by multi-scale features fusion, we propose a Dual Attention Networks (DANet) to adaptively integrate local features with their global dependencies. Specifically, we append two types of attention modules on top of traditional dilated FCN, which model the semantic interdependencies in spatial and channel dimensions respectively. The position attention module selectively aggregates the features at each position by a weighted sum of the features at all positions. Similar features would be related to each other regardless of their distances. Meanwhile, the channel attention module selectively emphasizes interdependent channel maps by integrating associated features among all channel maps. We sum the outputs of the two attention modules to further improve feature representation which contributes to more precise segmentation results. We achieve new state-of-the-art segmentation performance on three challenging scene segmentation datasets, i.e., Cityscapes, PASCAL Context and COCO Stuff dataset. In particular, a Mean IoU score of 81.5% on Cityscapes test set is achieved without using coarse data. we make the code and trained models publicly available at https://github.com/junfu1115/DANet", "title": "" }, { "docid": "88c592bdd7bb9c9348545734a9508b7b", "text": "environments: An introduction C.-S. Li B. L. Brech S. 
Crowder D. M. Dias H. Franke M. Hogstrom D. Lindquist G. Pacifici S. Pappe B. Rajaraman J. Rao R. P. Ratnaparkhi R. A. Smith M. D. Williams During the past few years, enterprises have been increasingly aggressive in moving mission-critical and performance-sensitive applications to the cloud, while at the same time many new mobile, social, and analytics applications are directly developed and operated on cloud computing platforms. These two movements are encouraging the shift of the value proposition of cloud computing from cost reduction to simultaneous agility and optimization. These requirements (agility and optimization) are driving the recent disruptive trend of software defined computing, for which the entire computing infrastructureVcompute, storage and networkVis becoming software defined and dynamically programmable. The key elements within software defined environments include capability-based resource abstraction, goal-based and policy-based workload definition, and outcome-based continuous mapping of the workload to the available resources. Furthermore, software defined environments provide the tooling and capabilities to compose workloads from existing components that are then continuously and autonomously mapped onto the underlying programmable infrastructure. These elements enable software defined environments to achieve agility, efficiency, and continuous outcome-optimized provisioning and management, plus continuous assurance for resiliency and security. This paper provides an overview and introduction to the key elements and challenges of software defined environments.", "title": "" }, { "docid": "540099388527a2e8dd5b43162b697fea", "text": "This paper describes NCRF++, a toolkit for neural sequence labeling. NCRF++ is designed for quick implementation of different neural sequence labeling models with a CRF inference layer. It provides users with an inference for building the custom model structure through configuration file with flexible neural feature design and utilization. Built on PyTorch1, the core operations are calculated in batch, making the toolkit efficient with the acceleration of GPU. It also includes the implementations of most state-of-the-art neural sequence labeling models such as LSTMCRF, facilitating reproducing and refinement on those methods.", "title": "" }, { "docid": "2bfd884e92a26d017a7854be3dfb02e8", "text": "The tasks in fine-grained opinion mining can be regarded as either a token-level sequence labeling problem or as a semantic compositional task. We propose a general class of discriminative models based on recurrent neural networks (RNNs) and word embeddings that can be successfully applied to such tasks without any taskspecific feature engineering effort. Our experimental results on the task of opinion target identification show that RNNs, without using any hand-crafted features, outperform feature-rich CRF-based models. Our framework is flexible, allows us to incorporate other linguistic features, and achieves results that rival the top performing systems in SemEval-2014.", "title": "" }, { "docid": "abeb22a9a8066091e5f508e61d17f101", "text": "• I. What is Artificial Intelligence (AI)? • II. What are Expert Systems (ES)? ◦ Functional Components ◦ Structural Components • III. How do People Reason? • IV. How do Computers Reason? ◦ IV-1. Frames ◦ IV-2. Rule Based Reasoning ◾ IV-2a. Knowledge Engineering ◦ IV-3. Case-Based Reasoning ◦ IV-4. Neural Networks • V. Advantages and Disadvantages • VI. Additional Sources of Information ◦ VI-1. 
Additional Sources on World Wide Web ◾ Accounting Expert Systems Applications compiled by Carol E. Brown ◾ Artificial Intelligence in Business by Daniel E. O'Leary ◾ Artificial Intelligence / Expert Systems Section of the American Accounting Association ◾ International Journal of Intelligent Systems in Accounting, Finance and Management ◾ VI-2. Recent Books of Readings ◾ VI-3. References Used for Definitions • Photocopy Permission", "title": "" }, { "docid": "4a31889cf90d39b7c49d02174a425b5b", "text": "Inter-vehicle communication (IVC) protocols have the potential to increase the safety, efficiency, and convenience of transportation systems involving planes, trains, automobiles, and robots. The applications targeted include peer-to-peer networks for web surfing, coordinated braking, runway incursion prevention, adaptive traffic control, vehicle formations, and many others. The diversity of the applications and their potential communication protocols has challenged a systematic literature survey. We apply a classification technique to IVC applications to provide a taxonomy for detailed study of their communication requirements. The applications are divided into type classes which share common communication organization and performance requirements. IVC protocols are surveyed separately and their fundamental characteristics are revealed. The protocol characteristics are then used to determine the relevance of specific protocols to specific types of IVC applications.", "title": "" }, { "docid": "922a4369bf08f23e1c0171dc35d5642b", "text": "Most automated facial expression analysis methods treat the face as a 2D object, flat like a sheet of paper. That works well provided images are frontal or nearly so. In real-world conditions, moderate to large head rotation is common and system performance to recognize expression degrades. Multi-view Convolutional Neural Networks (CNNs) have been proposed to increase robustness to pose, but they require greater model sizes and may generalize poorly across views that are not included in the training set. We propose FACSCaps architecture to handle multi-view and multi-label facial action unit (AU) detection within a single model that can generalize to novel views. Additionally, FACSCaps's ability to synthesize faces enables insights into what is leaned by the model. FACSCaps models video frames using matrix capsules, where hierarchical pose relationships between face parts are built into internal representations. The model is trained by jointly optimizing a multi-label loss and the reconstruction accuracy. FACSCaps was evaluated using the FERA 2017 facial expression dataset that includes spontaneous facial expressions in a wide range of head orientations. FACSCaps outperformed both state-of-the-art CNNs and their temporal extensions.", "title": "" }, { "docid": "e36659351fcd339533b73fd3dd77f261", "text": "Past research provided abundant evidence that exposure to violent video games increases aggressive tendencies and decreases prosocial tendencies. In contrast, research on the effects of exposure to prosocial video games has been relatively sparse. The present research found support for the hypothesis that exposure to prosocial video games is positively related to prosocial affect and negatively related to antisocial affect. More specifically, two studies revealed that playing a prosocial (relative to a neutral) video game increased interpersonal empathy and decreased reported pleasure at another's misfortune (i.e., schadenfreude). 
These results lend further credence to the predictive validity of the General Learning Model (Buckley & Anderson, 2006) for the effects of media exposure on social tendencies.", "title": "" }, { "docid": "e5bea734149b69a05455c5fec2d802e3", "text": "This article introduces a collection of essays on continuity and discontinuity in cognitive development. In his lead essay, J. Kagan (2008) argues that limitations in past research (e.g., on number concepts, physical solidarity, and object permanence) render conclusions about continuity premature. Commentaries respectively (1) argue that longitudinal contexts are essential for interpreting developmental data, (2) illustrate the value of converging measures, (3) identify qualitative change via dynamical systems theory, (4) redirect the focus from states to process, and (5) review epistemological premises of alternative research traditions. Following an overview of the essays, this introductory article discusses how the search for developmental structures, continuity, and process differs between mechanistic-contextualist and organismic-contextualist metatheoretical frameworks, and closes by highlighting continuities in Kagan's scholarship over the past half century.", "title": "" }, { "docid": "11d418decc0d06a3af74be77d4c71e5e", "text": "Automatic generation control (AGC) regulates mechanical power generation in response to load changes through local measurements. Its main objective is to maintain system frequency and keep energy balanced within each control area in order to maintain the scheduled net interchanges between control areas. The scheduled interchanges as well as some other factors of AGC are determined at a slower time scale by considering a centralized economic dispatch (ED) problem among different generators. However, how to make AGC more economically efficient is less studied. In this paper, we study the connections between AGC and ED by reverse engineering AGC from an optimization view, and then we propose a distributed approach to slightly modify the conventional AGC to improve its economic efficiency by incorporating ED into the AGC automatically and dynamically.", "title": "" } ]
scidocsrr
9d2b360c9c72fc379b84c5966beb05c3
Fetal intracranial translucency and cisterna magna at 11 to 14 weeks: reference ranges and correlation with chromosomal abnormalities.
[ { "docid": "557da3544fd738ecfc3edf812b92720b", "text": "OBJECTIVES\nTo describe the sonographic appearance of the structures of the posterior cranial fossa in fetuses at 11 + 3 to 13 + 6 weeks of pregnancy and to determine whether abnormal findings of the brain and spine can be detected by sonography at this time.\n\n\nMETHODS\nThis was a prospective study including 692 fetuses whose mothers attended Innsbruck Medical University Hospital for first-trimester sonography. In 3% (n = 21) of cases, measurement was prevented by fetal position. Of the remaining 671 cases, in 604 there was either a normal anomaly scan at 20 weeks or delivery of a healthy child and in these cases the transcerebellar diameter (TCD) and the anteroposterior diameter of the cisterna magna (CM), measured at 11 + 3 to 13 + 6 weeks, were analyzed. In 502 fetuses, the anteroposterior diameter of the fourth ventricle (4V) was also measured. In 25 fetuses, intra- and interobserver repeatability was calculated.\n\n\nRESULTS\nWe observed a linear correlation between crown-rump length (CRL) and CM (CM = 0.0536 × CRL - 1.4701; R2 = 0.688), TCD (TCD = 0.1482 × CRL - 1.2083; R2 = 0.701) and 4V (4V = 0.0181 × CRL + 0.9186; R2 = 0.118). In three patients with posterior fossa cysts, measurements significantly exceeded the reference values. One fetus with spina bifida had an obliterated CM and the posterior border of the 4V could not be visualized.\n\n\nCONCLUSIONS\nTransabdominal sonographic assessment of the posterior fossa is feasible in the first trimester. Measurements of the 4V, the CM and the TCD performed at this time are reliable. The established reference values assist in detecting fetal anomalies. However, findings must be interpreted carefully, as some supposed malformations might be merely delayed development of brain structures.", "title": "" }, { "docid": "7170110b2520fb37e282d08ed8774d0f", "text": "OBJECTIVE\nTo examine the performance of the 11-13 weeks scan in detecting non-chromosomal abnormalities.\n\n\nMETHODS\nProspective first-trimester screening study for aneuploidies, including basic examination of the fetal anatomy, in 45 191 pregnancies. Findings were compared to those at 20-23 weeks and postnatal examination.\n\n\nRESULTS\nAneuploidies (n = 332) were excluded from the analysis. Fetal abnormalities were observed in 488 (1.1%) of the remaining 44 859 cases; 213 (43.6%) of these were detected at 11-13 weeks. The early scan detected all cases of acrania, alobar holoprosencephaly, exomphalos, gastroschisis, megacystis and body stalk anomaly, 77% of absent hand or foot, 50% of diaphragmatic hernia, 50% of lethal skeletal dysplasias, 60% of polydactyly, 34% of major cardiac defects, 5% of facial clefts and 14% of open spina bifida, but none of agenesis of the corpus callosum, cerebellar or vermian hypoplasia, echogenic lung lesions, bowel obstruction, most renal defects or talipes. Nuchal translucency (NT) was above the 95th percentile in 34% of fetuses with major cardiac defects.\n\n\nCONCLUSION\nAt 11-13 weeks some abnormalities are always detectable, some can never be and others are potentially detectable depending on their association with increased NT, the phenotypic expression of the abnormality with gestation and the objectives set for such a scan.", "title": "" } ]
[ { "docid": "e58036f93195603cb7dc7265b9adeb25", "text": "Pseudomonas aeruginosa thrives in many aqueous environments and is an opportunistic pathogen that can cause both acute and chronic infections. Environmental conditions and host defenses cause differing stresses on the bacteria, and to survive in vastly different environments, P. aeruginosa must be able to adapt to its surroundings. One strategy for bacterial adaptation is to self-encapsulate with matrix material, primarily composed of secreted extracellular polysaccharides. P. aeruginosa has the genetic capacity to produce at least three secreted polysaccharides; alginate, Psl, and Pel. These polysaccharides differ in chemical structure and in their biosynthetic mechanisms. Since alginate is often associated with chronic pulmonary infections, its biosynthetic pathway is the best characterized. However, alginate is only produced by a subset of P. aeruginosa strains. Most environmental and other clinical isolates secrete either Pel or Psl. Little information is available on the biosynthesis of these polysaccharides. Here, we review the literature on the alginate biosynthetic pathway, with emphasis on recent findings describing the structure of alginate biosynthetic proteins. This information combined with the characterization of the domain architecture of proteins encoded on the Psl and Pel operons allowed us to make predictive models for the biosynthesis of these two polysaccharides. The results indicate that alginate and Pel share certain features, including some biosynthetic proteins with structurally or functionally similar properties. In contrast, Psl biosynthesis resembles the EPS/CPS capsular biosynthesis pathway of Escherichia coli, where the Psl pentameric subunits are assembled in association with an isoprenoid lipid carrier. These models and the environmental cues that cause the cells to produce predominantly one polysaccharide over the others are subjects of current investigation.", "title": "" }, { "docid": "fc9061348b46fc1bf7039fa5efcbcea1", "text": "We propose that a leadership identity is coconstructed in organizations when individuals claim and grant leader and follower identities in their social interactions. Through this claiming-granting process, individuals internalize an identity as leader or follower, and those identities become relationally recognized through reciprocal role adoption and collectively endorsed within the organizational context. We specify the dynamic nature of this process, antecedents to claiming and granting, and an agenda for research on leadership identity and development.", "title": "" }, { "docid": "70622607a75305882251c073536aa282", "text": "a r t i c l e i n f o", "title": "" }, { "docid": "3a1019c31ff34f8a45c65703c1528fc4", "text": "The increasing trend of studying the innate softness of robotic structures and amalgamating it with the benefits of the extensive developments in the field of embodied intelligence has led to sprouting of a relatively new yet extremely rewarding sphere of technology. The fusion of current deep reinforcement algorithms with physical advantages of a soft bio-inspired structure certainly directs us to a fruitful prospect of designing completely self-sufficient agents that are capable of learning from observations collected from their environment to achieve a task they have been assigned. 
For soft robotics structure possessing countless degrees of freedom, it is often not easy (something not even possible) to formulate mathematical constraints necessary for training a deep reinforcement learning (DRL) agent for the task in hand, hence, we resolve to imitation learning techniques due to ease of manually performing such tasks like manipulation that could be comfortably mimicked by our agent. Deploying current imitation learning algorithms on soft robotic systems have been observed to provide satisfactory results but there are still challenges in doing so. This review article thus posits an overview of various such algorithms along with instances of them being applied to real world scenarios and yielding state-of-the-art results followed by brief descriptions on various pristine branches of DRL research that may be centers of future research in this field of interest.", "title": "" }, { "docid": "12d625fe60790761ff604ab8aa70c790", "text": "We describe a system designed to monitor the gaze of a user working naturally at a computer workstation. The system consists of three cameras situated between the keyboard and the monitor. Free head movements are allowed within a three-dimensional volume approximately 40 centimeters in diameter. Two fixed, wide-field \"face\" cameras equipped with active-illumination systems enable rapid localization of the subject's pupils. A third steerable \"eye\" camera has a relatively narrow field of view, and acquires the images of the eyes which are used for gaze estimation. Unlike previous approaches which construct an explicit three-dimensional representation of the subject's head and eye, we derive mappings for steering control and gaze estimation using a procedure we call implicit calibration. Implicit calibration is performed by collecting a \"training set\" of parameters and associated measurements, and solving for a set of coefficients relating the measurements back to the parameters of interest. Preliminary data on three subjects indicate an median gaze estimation error of ap-proximately 0.8 degree.", "title": "" }, { "docid": "a02f1ee7b77d00809d89c4a8fad462ed", "text": "In a modern vehicle systems one of the main goals to achieve is driver's safety, and many sophisticated systems are made for that purpose. Vibration isolation for the vehicle seats, and at the same time for the driver, is one of the challenging problems. Parameters of the controller used for the isolation can be tuned for a different road types, making the isolation better (specially for the vehicles like dampers, tractors, field machinery, bulldozers, etc.). In this paper we propose the method where neural networks are used for road type recognition. The main goal is to obtain a good road recognition for the purpose of better vibration damping of a driver's semi active controllable seat. The recognition of a specific road type will be based on the measurable parameters of a vehicle. Discrete Fourier Transform of measurable parameters is obtained and used for the neural network learning. The dimension of the input vector, as the main parameter that decides the speed of road recognition, is varied.", "title": "" }, { "docid": "462248d6ebad4ed197b0322a5ab09406", "text": "The purpose of this study was to quantify the response of the forearm musculature to combinations of wrist and forearm posture and grip force. 
Ten healthy individuals performed five relative handgrip efforts (5%, 50%, 70% and 100% of maximum, and 50 N) for combinations of three wrist postures (flexed, neutral and extended) and three forearm postures (pronated, neutral and supinated). 'Baseline' extensor muscle activity (associated with holding the dynamometer without exerting grip force) was greatest with the forearm pronated and the wrist extended, while flexor activity was largest in supination when the wrist was flexed. Extensor activity was generally larger than that of flexors during low to mid-range target force levels, and was always greater when the forearm was pronated. Flexor activation only exceeded the extensor activation at the 70% and 100% target force levels in some postures. A flexed wrist reduced maximum grip force by 40-50%, but EMG amplitude remained elevated. Women produced 60-65% of the grip strength of men, and required 5-10% more of both relative force and extensor activation to produce a 50 N grip. However, this appeared to be due to strength rather than gender. Forearm rotation affected grip force generation only when the wrist was flexed, with force decreasing from supination to pronation (p < 0.005). The levels of extensor activation observed, especially during baseline and low level grip exertions, suggest a possible contributing mechanism to the development of lateral forearm muscle pain in the workplace.", "title": "" }, { "docid": "194156892cbdb0161e9aae6a01f78703", "text": "Model repositories play a central role in the model driven development of complex software-intensive systems by offering means to persist and manipulate models obtained from heterogeneous languages and tools. Complex models can be assembled by interconnecting model fragments by hard links, i.e., regular references, where the target end points to external resources using storage-specific identifiers. This approach, in certain application scenarios, may prove to be a too rigid and error prone way of interlinking models. As a flexible alternative, we propose to combine derived features with advanced incremental model queries as means for soft interlinking of model elements residing in different model resources. These soft links can be calculated on-demand with graceful handling for temporarily unresolved references. In the background, the links are maintained efficiently and flexibly by using incremental model query evaluation. The approach is applicable to modeling environments or even property graphs for representing query results as first-class relations, which also allows the chaining of soft links that is useful for modular applications. The approach is evaluated using the Eclipse Modeling Framework (EMF) and EMF-IncQuery in two complex industrial case studies. The first case study is motivated by a knowledge management project from the financial domain, involving a complex interlinked structure of concept and business process models. The second case study is set in the avionics domain with strict traceability requirements enforced by certification standards (DO-178b). It consists of multiple domain models describing the allocation scenario of software functions to hardware components.", "title": "" }, { "docid": "2cbb2af6ed4ef193aad77c2f696a45c5", "text": "Consider mutli-goal tasks that involve static environments and dynamic goals. Examples of such tasks, such as goaldirected navigation and pick-and-place in robotics, abound. 
Two types of Reinforcement Learning (RL) algorithms are used for such tasks: model-free or model-based. Each of these approaches has limitations. Model-free RL struggles to transfer learned information when the goal location changes, but achieves high asymptotic accuracy in single goal tasks. Model-based RL can transfer learned information to new goal locations by retaining the explicitly learned state-dynamics, but is limited by the fact that small errors in modelling these dynamics accumulate over long-term planning. In this work, we improve upon the limitations of model-free RL in multigoal domains. We do this by adapting the Floyd-Warshall algorithm for RL and call the adaptation Floyd-Warshall RL (FWRL). The proposed algorithm learns a goal-conditioned action-value function by constraining the value of the optimal path between any two states to be greater than or equal to the value of paths via intermediary states. Experimentally, we show that FWRL is more sample-efficient and learns higher reward strategies in multi-goal tasks as compared to Q-learning, model-based RL and other relevant baselines in a tabular domain.", "title": "" }, { "docid": "49ca8739b6e28f0988b643fc97e7c6b1", "text": "Stroke is a leading cause of severe physical disability, causing a range of impairments.  Frequently stroke survivors are left with partial paralysis on one side of the body and movement can be severely restricted in the affected side’s hand and arm.  We know that effective rehabilitation must be early, intensive and repetitive, which leads to the challenge of how to maintain motivation for people undergoing therapy.  This paper discusses why games may be an effective way of addressing the problem of engagement in therapy and analyses which game design patterns may be important for rehabilitation.  We present a number of serious games that our group has developed for upper limb rehabilitation.  Results of an evaluation of the games are presented which indicate that they may be appropriate for people with stroke.", "title": "" }, { "docid": "8892e3f007967f8274b0513e4c451aed", "text": "Research on narcissism and envy suggests a variable relationship that may reflect differences between how vulnerable and grandiose narcissism relate to precursors of envy. Accordingly, we proposed a model in which dispositional envy and relative deprivation differentially mediate envy's association with narcissistic vulnerability, grandiosity, and entitlement. To test the model, 330 young adults completed dispositional measures of narcissism, entitlement, and envy; one week later, participants reported on deprivation and envy feelings toward a peer who outperformed others on an intelligence test for a cash prize (Study 1) or earned higher monetary payouts in a betting game (Study 2). In both studies, structural equation modeling broadly supported the proposed model. Vulnerable narcissism robustly predicted episodic envy via dispositional envy. Entitlement-a narcissistic facet common to grandiosity and vulnerability-was a significant indirect predictor via relative deprivation. Study 2 also found that (a) the grandiose leadership/authority facet indirectly curbed envy feelings via dispositional envy, and (b) episodic envy contributed to schadenfreude feelings, which promoted efforts to sabotage a successful rival. Whereas vulnerable narcissists appear dispositionally envy-prone, grandiose narcissists may be dispositionally protected. 
Both, however, are susceptible to envy through entitlement when relative deprivation is encountered.", "title": "" }, { "docid": "9533193407869250854157e89d2815eb", "text": "Life events are often described as major forces that are going to shape tomorrow's consumer need, behavior and mood. Thus, the prediction of life events is highly relevant in marketing and sociology. In this paper, we propose a data-driven, real-time method to predict individual life events, using readily available data from smartphones. Our large-scale user study with more than 2000 users shows that our method is able to predict life events with 64.5% higher accuracy, 183.1% better precision and 88.0% higher specificity than a random model on average.", "title": "" }, { "docid": "06bfa716dd067d05229c92dc66757772", "text": "Although many critics are reluctant to accept the trustworthiness of qualitative research, frameworks for ensuring rigour in this form of work have been in existence for many years. Guba’s constructs, in particular, have won considerable favour and form the focus of this paper. Here researchers seek to satisfy four criteria. In addressing credibility, investigators attempt to demonstrate that a true picture of the phenomenon under scrutiny is being presented. To allow transferability, they provide sufficient detail of the context of the fieldwork for a reader to be able to decide whether the prevailing environment is similar to another situation with which he or she is familiar and whether the findings can justifiably be applied to the other setting. The meeting of the dependability criterion is difficult in qualitative work, although researchers should at least strive to enable a future investigator to repeat the study. Finally, to achieve confirmability, researchers must take steps to demonstrate that findings emerge from the data and not their own predispositions. The paper concludes by suggesting that it is the responsibility of research methods teachers to ensure that this or a comparable model for ensuring trustworthiness is followed by students undertaking a qualitative inquiry.", "title": "" }, { "docid": "36cc985d2d86c4047533550293e8c7f4", "text": "The pyISC is a Python API and extension to the C++ based Incremental Stream Clustering (ISC) anomaly detection and classification framework. The framework is based on parametric Bayesian statistical inference using the Bayesian Principal Anomaly (BPA), which enables to combine the output from several probability distributions. pyISC is designed to be easy to use and integrated with other Python libraries, specifically those used for data science. In this paper, we show how to use the framework and we also compare its performance to other well-known methods on 22 real-world datasets. The simulation results show that the performance of pyISC is comparable to the other methods. pyISC is part of the Stream toolbox developed within the STREAM project.", "title": "" }, { "docid": "6e2d4a24764265cf86c097d5b750113c", "text": "BACKGROUND\nMusic has been used for medicinal purposes throughout history due to its variety of physiological, psychological and social effects.\n\n\nOBJECTIVE\nTo identify the effects of prenatal music stimulation on the vital signs of pregnant women at full term, on the modification of fetal cardiac status during a fetal monitoring cardiotocograph, and on anthropometric measurements of newborns taken after birth.\n\n\nMATERIAL AND METHOD\nA randomized controlled trial was implemented. 
The four hundred and nine pregnant women coming for routine prenatal care were randomized in the third trimester to receive either music (n = 204) or no music (n = 205) during a fetal monitoring cardiotocograph. All of the pregnant women were evaluated by measuring fetal cardiac status (basal fetal heart rate and fetal reactivity), vital signs before and after a fetal monitoring cardiotocograph (maternal heart rate and systolic and diastolic blood pressure), and anthropometric measurements of the newborns were taken after birth (weight, height, head circumference and chest circumference).\n\n\nRESULTS\nThe strip charts showed a significantly increased basal fetal heart rate and higher fetal reactivity, with accelerations of fetal heart rate in pregnant women with music stimulation. After the fetal monitoring cardiotocograph, a statistically significant decrease in systolic blood pressure, diastolic blood pressure and heart rate in women receiving music stimulation was observed.\n\n\nCONCLUSION\nMusic can be used as a tool which improves the vital signs of pregnant women during the third trimester, and can influence the fetus by increasing fetal heart rate and fetal reactivity.", "title": "" }, { "docid": "aaba4377acbd22cbc52681d4d15bf9af", "text": "This paper presents a new human body communication (HBC) technique that employs magnetic resonance for data transfer in wireless body-area networks (BANs). Unlike electric field HBC (eHBC) links, which do not necessarily travel well through many biological tissues, the proposed magnetic HBC (mHBC) link easily travels through tissue, offering significantly reduced path loss and, as a result, reduced transceiver power consumption. In this paper the proposed mHBC concept is validated via finite element method simulations and measurements. It is demonstrated that path loss across the body under various postures varies from 10-20 dB, which is significantly lower than alternative BAN techniques.", "title": "" }, { "docid": "f5b72167077481ca04e339ad4dc4da3c", "text": "We have implemented a MATLAB source code for VES forward modeling and its inversion using a genetic algorithm (GA) optimization technique. The codes presented here are applied to the Schlumberger electrode arrangement. In the forward modeling computation, we have developed code to generate theoretical apparent resistivity curves from a specified layered earth model. The input to this program consists of the number of layers, the layer resistivity and thickness. The output of this program is apparent resistivity versus electrode spacing incorporated in the inversion process as apparent resistivity data. For the inversion, we have developed a MATLAB code to invert (for layer resistivity and thickness) the apparent resistivity data by the genetic algorithm optimization technique. The code also has some function files involving the basic stages in the GA inversion. Our inversion procedure addressed calculates forward solutions from sets of random input, to find the apparent resistivity. Then, it evolves the models by better sets of inputs through processes that imitate natural mating, selection, crossover, and mutation in each generation. The aim of GA inversion is to find the best correlation between model and theoretical apparent resistivity curves. In this study, we present three synthetic examples that demonstrate the effectiveness and usefulness of this program. 
Our numerical modeling shows that the GA optimization technique can be applied for resolving layer parameters with reasonably low error values.", "title": "" }, { "docid": "c0954a0e283c27f1dba130ad8f907b64", "text": "Optical techniques for measurement-interferometry, spectrometry and polarimetry\"have long been used in materials measurement and environmental evaluation. The optical fiber lends get more flexibility in the implementation of these basic concepts. Fiber-optic technology has, for over 30 years, made important contributions to the science of measurement. The paper presents a perspective on these contributions which while far from exhaustive highlights the important conceptual advances made in the early days of optical fiber technology and the breadth of application which has emerged. There are also apparent opportunities for yet more imaginative research in applying guided-wave optics to emerging and challenging measurement requirements ranging from microsystems characterization to cellular biochemistry to art restoration.", "title": "" }, { "docid": "0deda73c3cb7e87bcf3e1df0716e13d2", "text": "The continuous development and extensive use of computed tomography (CT) in medical practice has raised a public concern over the associated radiation dose to the patient. Reducing the radiation dose may lead to increased noise and artifacts, which can adversely affect the radiologists’ judgment and confidence. Hence, advanced image reconstruction from low-dose CT data is needed to improve the diagnostic performance, which is a challenging problem due to its ill-posed nature. Over the past years, various low-dose CT methods have produced impressive results. However, most of the algorithms developed for this application, including the recently popularized deep learning techniques, aim for minimizing the mean-squared error (MSE) between a denoised CT image and the ground truth under generic penalties. Although the peak signal-to-noise ratio is improved, MSE- or weighted-MSE-based methods can compromise the visibility of important structural details after aggressive denoising. This paper introduces a new CT image denoising method based on the generative adversarial network (GAN) with Wasserstein distance and perceptual similarity. The Wasserstein distance is a key concept of the optimal transport theory and promises to improve the performance of GAN. The perceptual loss suppresses noise by comparing the perceptual features of a denoised output against those of the ground truth in an established feature space, while the GAN focuses more on migrating the data noise distribution from strong to weak statistically. Therefore, our proposed method transfers our knowledge of visual perception to the image denoising task and is capable of not only reducing the image noise level but also trying to keep the critical information at the same time. Promising results have been obtained in our experiments with clinical CT images.", "title": "" }, { "docid": "786f1bbc10cfb952c7709b635ec01fcf", "text": "Artificial neural networks (NN) have shown a significant promise in difficult tasks like image classification or speech recognition. Even well-optimized hardware implementations of digital NNs show significant power consumption. It is mainly due to non-uniform pipeline structures and inherent redundancy of numerous arithmetic operations that have to be performed to produce each single output vector. 
This paper provides a methodology for the design of well-optimized power-efficient NNs with a uniform structure suitable for hardware implementation. An error resilience analysis was performed in order to determine key constraints for the design of approximate multipliers that are employed in the resulting structure of NN. By means of a search based approximation method, approximate multipliers showing desired tradeoffs between the accuracy and implementation cost were created. Resulting approximate NNs, containing the approximate multipliers, were evaluated using standard benchmarks (MNIST dataset) and a real-world classification problem of Street-View House Numbers. Significant improvement in power efficiency was obtained in both cases with respect to regular NNs. In some cases, 91% power reduction of multiplication led to classification accuracy degradation of less than 2.80%. Moreover, the paper showed the capability of the back propagation learning algorithm to adapt with NNs containing the approximate multipliers.", "title": "" } ]
scidocsrr
002423c52965056329ebe4f7d4f13715
Sudarshan Kriya Yogic breathing in the treatment of stress, anxiety, and depression. Part II--clinical applications and guidelines.
[ { "docid": "ee2c37fd2ebc3fd783bfe53213e7470e", "text": "Mind-body interventions are beneficial in stress-related mental and physical disorders. Current research is finding associations between emotional disorders and vagal tone as indicated by heart rate variability. A neurophysiologic model of yogic breathing proposes to integrate research on yoga with polyvagal theory, vagal stimulation, hyperventilation, and clinical observations. Yogic breathing is a unique method for balancing the autonomic nervous system and influencing psychologic and stress-related disorders. Many studies demonstrate effects of yogic breathing on brain function and physiologic parameters, but the mechanisms have not been clarified. Sudarshan Kriya yoga (SKY), a sequence of specific breathing techniques (ujjayi, bhastrika, and Sudarshan Kriya) can alleviate anxiety, depression, everyday stress, post-traumatic stress, and stress-related medical illnesses. Mechanisms contributing to a state of calm alertness include increased parasympathetic drive, calming of stress response systems, neuroendocrine release of hormones, and thalamic generators. This model has heuristic value, research implications, and clinical applications.", "title": "" }, { "docid": "6f0ffda347abfd11dc78c0b76ceb11f8", "text": "A previous study of 22 medical patients with DSM-III-R-defined anxiety disorders showed clinically and statistically significant improvements in subjective and objective symptoms of anxiety and panic following an 8-week outpatient physician-referred group stress reduction intervention based on mindfulness meditation. Twenty subjects demonstrated significant reductions in Hamilton and Beck Anxiety and Depression scores postintervention and at 3-month follow-up. In this study, 3-year follow-up data were obtained and analyzed on 18 of the original 22 subjects to probe long-term effects. Repeated measures analysis showed maintenance of the gains obtained in the original study on the Hamilton [F(2,32) = 13.22; p < 0.001] and Beck [F(2,32) = 9.83; p < 0.001] anxiety scales as well as on their respective depression scales, on the Hamilton panic score, the number and severity of panic attacks, and on the Mobility Index-Accompanied and the Fear Survey. A 3-year follow-up comparison of this cohort with a larger group of subjects from the intervention who had met criteria for screening for the original study suggests generalizability of the results obtained with the smaller, more intensively studied cohort. Ongoing compliance with the meditation practice was also demonstrated in the majority of subjects at 3 years. We conclude that an intensive but time-limited group stress reduction intervention based on mindfulness meditation can have long-term beneficial effects in the treatment of people diagnosed with anxiety disorders.", "title": "" } ]
[ { "docid": "1d6733d6b017248ef935a833ecfe6f0d", "text": "Users increasingly rely on crowdsourced information, such as reviews on Yelp and Amazon, and liked posts and ads on Facebook. This has led to a market for blackhat promotion techniques via fake (e.g., Sybil) and compromised accounts, and collusion networks. Existing approaches to detect such behavior relies mostly on supervised (or semi-supervised) learning over known (or hypothesized) attacks. They are unable to detect attacks missed by the operator while labeling, or when the attacker changes strategy. We propose using unsupervised anomaly detection techniques over user behavior to distinguish potentially bad behavior from normal behavior. We present a technique based on Principal Component Analysis (PCA) that models the behavior of normal users accurately and identifies significant deviations from it as anomalous. We experimentally validate that normal user behavior (e.g., categories of Facebook pages liked by a user, rate of like activity, etc.) is contained within a low-dimensional subspace amenable to the PCA technique. We demonstrate the practicality and effectiveness of our approach using extensive ground-truth data from Facebook: we successfully detect diverse attacker strategies—fake, compromised, and colluding Facebook identities—with no a priori labeling while maintaining low false-positive rates. Finally, we apply our approach to detect click-spam in Facebook ads and find that a surprisingly large fraction of clicks are from anomalous users.", "title": "" }, { "docid": "9f9c51b8e657fd9625b6cf22b1f003ab", "text": "Most popular deep models for action recognition split video sequences into short sub-sequences consisting of a few frames, frame-based features are then pooled for recognizing the activity. Usually, this pooling step discards the temporal order of the frames, which could otherwise be used for better recognition. Towards this end, we propose a novel pooling method, generalized rank pooling (GRP), that takes as input, features from the intermediate layers of a CNN that is trained on tiny sub-sequences, and produces as output the parameters of a subspace which (i) provides a low-rank approximation to the features and (ii) preserves their temporal order. We propose to use these parameters as a compact representation for the video sequence, which is then used in a classification setup. We formulate an objective for computing this subspace as a Riemannian optimization problem on the Grassmann manifold, and propose an efficient conjugate gradient scheme for solving it. Experiments on several activity recognition datasets show that our scheme leads to state-of-the-art performance.", "title": "" }, { "docid": "e364db9141c85b1f260eb3a9c1d42c5b", "text": "Ten US presidential elections ago in Chapel Hill, North Carolina, the agenda of issues that a small group of undecided voters regarded as the most important ones of the day was compared with the news coverage of public issues in the news media these voters used to follow the campaign (McCombs and Shaw, 1972). Since that election, the principal finding in Chapel Hill*/those aspects of public affairs that are prominent in the news become prominent among the public*/has been replicated in hundreds of studies worldwide. These replications include both election and non-election settings for a broad range of public issues and other aspects of political communication and extend beyond the United States to Europe, Asia, Latin America and Australia. 
Recently, as the news media have expanded to include online newspapers available on the Web, agenda-setting effects have been documented for these new media. All in all, this research has grown far beyond its original domain*/the transfer of salience from the media agenda to the public agenda*/and now encompasses five distinct stages of theoretical attention. Until very recently, the ideas and findings that detail these five stages of agenda-setting theory have been scattered in a wide variety of research journals, book chapters and books published in many different countries. As a result, knowledge of agenda setting has been very unevenly distributed. Scholars designing new studies often had incomplete knowledge of previous research, and graduate students entering the field of mass communication had difficulty learning in detail what we know about the agenda-setting role of the mass media. This situation was my incentive to write Setting the Agenda: the mass media and public opinion, which was published in England in late 2004 and in the United States early in 2005. My primary goal was to gather the principal ideas and empirical findings about agenda setting in one place. John Pavlik has described this integrated presentation as the Gray’s Anatomy of agenda setting (McCombs, 2004, p. xii). Shortly after the US publication of Setting the Agenda , I received an invitation from Journalism Studies to prepare an overview of agenda setting. The timing was wonderfully fortuitous because a book-length presentation of what we have learned in the years since Chapel Hill could be coupled with a detailed discussion in a major journal of current trends and future likely directions in agenda-setting research. Journals are the best venue for advancing the stepby-step accretion of knowledge because they typically reach larger audiences than books, generate more widespread discussion and offer more space for the focused presentation of a particular aspect of a research area. Books can then periodically distill this knowledge. Given the availability of a detailed overview in Setting the Agenda , the presentation here of the five stages of agenda-setting theory emphasizes current and near-future research questions in these areas. Moving beyond these specific Journalism Studies, Volume 6, Number 4, 2005, pp. 543 557", "title": "" }, { "docid": "a4e733379c2720e731d448ec80599c53", "text": "As digitalization sustainably alters industries and societies, small and medium-sized enterprises (SME) must initiate a digital transformation to remain competitive and to address the increasing complexity of customer needs. Although many enterprises encounter challenges in practice, research does not yet provide practicable recommendations to increase the feasibility of digitalization. Furthermore, SME frequently fail to fully realize the implications of digitalization for their organizational structures, strategies, and operations, and have difficulties to identify a suitable starting point for corresponding initiatives. In order to address these challenges, this paper uses the concept of Business Process Management (BPM) to define a set of capabilities for a management framework, which builds upon the paradigm of process orientation to cope with the various requirements of digital transformation. 
Our findings suggest that enterprises can use a functioning BPM as a starting point for digitalization, while establishing necessary digital capabilities subsequently.", "title": "" }, { "docid": "d69e8f1e75d74345a93f4899b2a0f073", "text": "CONTEXT\nThis paper provides an overview of the contribution of medical education research which has employed focus group methodology to evaluate both undergraduate education and continuing professional development.\n\n\nPRACTICALITIES AND PROBLEMS\nIt also examines current debates about the ethics and practicalities involved in conducting focus group research. It gives guidance as to how to go about designing and planning focus group studies, highlighting common misconceptions and pitfalls, emphasising that most problems stem from researchers ignoring the central assumptions which underpin the qualitative research endeavour.\n\n\nPRESENTING AND DEVELOPING FOCUS GROUP RESEARCH\nParticular attention is paid to analysis and presentation of focus group work and the uses to which such information is put. Finally, it speculates about the future of focus group research in general and research in medical education in particular.", "title": "" }, { "docid": "df94e8f3c2cef683db432e3e767fe913", "text": "The design and manufacture of present-day CPUs causes inherent variation in supercomputer architectures such as variation in power and temperature of the chips. The variation also manifests itself as frequency differences among processors under Turbo Boost dynamic overclocking. This variation can lead to unpredictable and suboptimal performance in tightly coupled HPC applications. In this study, we use compute-intensive kernels and applications to analyze the variation among processors in four top supercomputers: Edison, Cab, Stampede, and Blue Waters. We observe that there is an execution time difference of up to 16% among processors on the Turbo Boost-enabled supercomputers: Edison, Cab, Stampede. There is less than 1% variation on Blue Waters, which does not have a dynamic overclocking feature. We analyze measurements from temperature and power instrumentation and find that intrinsic differences in the chips' power efficiency is the culprit behind the frequency variation. Moreover, we analyze potential solutions such as disabling Turbo Boost, leaving idle cores and replacing slow chips to mitigate the variation. We also propose a speed-aware dynamic task redistribution (load balancing) algorithm to reduce the negative effects of performance variation. Our speed-aware load balancing algorithm improves the performance up to 18% compared to no load balancing performance and 6% better than the non-speed aware counterpart.", "title": "" }, { "docid": "b25416a09c04697f0cbc7eb907bca4f0", "text": "This paper investigates the reaction of financial markets to the announcement of a business combination between software firms. Based on the theory of economic networks, this article argues that mergers of software firms should lead to greater wealth creation because of the network effect theoretically linked to the combination of software products. This hypothesis is partially supported, as only the targets in software/software outperform those in the other categories, yielding abnormal returns of great magnitude. 
In addition, we could not conclude that controlling position in the target enabled bidders to make the appropriate technological decisions to ensure the emergence of network effects in the portfolio of the new entity and create additional wealth for the shareholders of both the bidder and the target. Future research is needed to better understand the effect of the different properties of the software pooled inside the product portfolio of the new entity.", "title": "" }, { "docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.", "title": "" }, { "docid": "3dfdc8abe03dd77730fe485f07588f43", "text": "Background\nThe most common neurodegenerative disease is dementia. Family of dementia patients says that their lives have been changed extensively after happening of dementia to their patients. One of the problems of family and caregivers is depression of the caregiver. In this study, we aimed to find the prevalence of depression and factors can affect depression in the dementia caregivers.\n\n\nMaterials and Methods\nThis study was cross-sectional study with convenient sampling method. Our society was 96 main caregivers of dementia patients in the year 2015 in Iran. We had two questionnaires, a demographic and Beck Depression Inventory (BDI). BDI Cronbach's alpha is 0.86 for psychiatric patients and 0.81 for nonpsychiatric persons, and Beck's scores are between 0 and 64. We used SPSS version 22 for statistical analysis.\n\n\nResults\nAccording to Beck depression test, 69.8% (n = 67 out of 96) of all caregivers had scores in the range of depression. In bivariate analysis, we found higher dementia severity and lower support of other family members from the caregiver can predict higher depression in the caregiver. As well, in regression analysis using GLM model, we found higher age and lower educational level of the caregiver can predict higher depression in the caregiver. Moreover, regression analysis approved findings about severity and support of other family members in bivariate analysis.\n\n\nConclusion\nHigh-level depression is found in caregivers of dementia patients. It needs special attention from healthcare managers, clinicians and all of health-care personnel who deals with dementia patients and their caregivers.", "title": "" }, { "docid": "2edd599684751b95ddde1bf3847dfadb", "text": "Partially shaded (PS) photovoltaic (PV) arrays have multiple peaks at their P–V characteristic. Although conventional maximum power point tracking (MPPT) algorithms are successful when PV arrays are under uniform irradiance conditions (UICs), their tracking speeds are low and may fail to track global maximum power point (GMPP) for PS arrays. Several MPPT algorithms have been proposed for PS arrays. Most of them require numerous samplings which decreases MPPT speed and increases energy loss. The proposed method in this paper gets the GMPP deterministically and very fast. It intelligently takes some samples from the array's P–V curve and divides the search voltage range into small subregions. 
Then, it approximates the I–V curve of each subregion with a simple curve, and accordingly estimates an upper limit for the array power in that subregion. Next, by comparing the measured real power values with the estimated upper limits, the search region of GMPP is limited, and based on some defined criteria, the vicinity of GMPP is determined. Simulation and experimental results and comparisons are presented to highlight the performance and superiority of the proposed approach.", "title": "" }, { "docid": "b2962d473a4b2d1a20996ae578ceccd4", "text": "In this paper, we examine the logic and methodology of engineering design from the perspective of the philosophy of science. The fundamental characteristics of design problems and design processes are discussed and analyzed. These characteristics establish the framework within which different design paradigms are examined. Following the discussions on descriptive properties of design, and the prescriptive role of design paradigms, we advocate the plausible hypothesis that there is a direct resemblance between the structure of design processes and the problem solving of scientific communities. The scientific community metaphor has been useful in guiding the development of general purpose highly effective design process meta-tools [73], [125].", "title": "" }, { "docid": "8553229613282672e12a175bfaca554d", "text": "The K Nearest Neighbor (kNN) method has widely been used in the applications of data mining and machine learning due to its simple implementation and distinguished performance. However, setting all test data with the same k value in the previous kNN methods has been proven to make these methods impractical in real applications. This article proposes to learn a correlation matrix to reconstruct test data points by training data to assign different k values to different test data points, referred to as the Correlation Matrix kNN (CM-kNN for short) classification. Specifically, the least-squares loss function is employed to minimize the reconstruction error to reconstruct each test data point by all training data points. Then, a graph Laplacian regularizer is advocated to preserve the local structure of the data in the reconstruction process. Moreover, an ℓ1-norm regularizer and an ℓ2, 1-norm regularizer are applied to learn different k values for different test data and to result in low sparsity to remove the redundant/noisy feature from the reconstruction process, respectively. Besides for classification tasks, the kNN methods (including our proposed CM-kNN method) are further utilized to regression and missing data imputation. We conducted sets of experiments for illustrating the efficiency, and experimental results showed that the proposed method was more accurate and efficient than existing kNN methods in data-mining applications, such as classification, regression, and missing data imputation.", "title": "" }, { "docid": "f5182ad077b1fdaa450d16544d63f01b", "text": "This article paves the knowledge about the next generation Bluetooth Standard-BT 5 that will bring some mesmerizing upgrades including increased range, speed, and broadcast messaging capacity. 
Further, three relevant queries such as what is better about BT 5, why does that matter, and how will it affect IoT have been explained to gather related information so that developers, practitioners, and naive people could formulate BT 5 into IoT based applications while assimilating the need of short range communication in true sense.", "title": "" }, { "docid": "be317160d07d0430787f99cf006172c4", "text": "Chromium (VI) is a widely used industrial chemical, extensively used in paints, metal finishes, steel including stainless steel manufacturing, alloy cast irons, chrome, and wood treatment. On the contrary, chromium (III) salts such as chromium polynicotinate, chromium chloride and chromium picolinate, are used as micronutrients and nutritional supplements, and have been demonstrated to exhibit a significant number of health benefits in rodents and humans. However, the cause for the hexavalent chromium to induce cytotoxicity is not entirely understood. A series of in vitro and in vivo studies have demonstrated that chromium (VI) induces an oxidative stress through enhanced production of reactive oxygen species (ROS) leading to genomic DNA damage and oxidative deterioration of lipids and proteins. A cascade of cellular events occur following chromium (VI)‐induced oxidative stress including enhanced production of superoxide anion and hydroxyl radicals, increased lipid peroxidation and genomic DNA fragmentation, modulation of intracellular oxidized states, activation of protein kinase C, apoptotic cell death and altered gene expression. In this paper, we have demonstrated concentration‐ and time‐dependent effects of sodium dichromate (chromium (VI) or Cr (VI)) on enhanced production of superoxide anion and hydroxyl radicals, changes in intracellular oxidized states as determined by laser scanning confocal microscopy, DNA fragmentation and apoptotic cell death (by flow cytometry) in human peripheral blood mononuclear cells. These results were compared with the concentration-dependent effects of chromium (VI) on chronic myelogenous leukemic K562 cells and J774A.1 murine macrophage cells. Chromium (VI)‐induced enhanced production of ROS, as well as oxidative tissue and DNA damage were observed in these cells. More pronounced effect was observed on chronic myelogenous leukemic K562 cells and J774A.1 murine macrophage cells. Furthermore, we have assessed the effect of a single oral LD50 dose of chromium (VI) on female C57BL/6Ntac and p53‐deficient C57BL/6TSG p53 mice on enhanced production of superoxide anion, lipid peroxidation and DNA fragmentation in the hepatic and brain tissues. Chromium (VI)‐induced more pronounced oxidative damage in p53 deficient mice. This in vivo study highlighted that apoptotic regulatory protein p53 may play a major role in chromium (VI)‐induced oxidative stress and toxicity. Taken together, oxidative stress and oxidative tissue damage, and a cascade of cellular events including modulation of apoptotic regulatory gene p53 are involved in chromium (VI)‐induced toxicity and carcinogenesis.", "title": "" }, { "docid": "beb365aacc5f66eea05d8aaebf97f275", "text": "In this paper, we study the effects of three different kinds of search engine rankings on consumer behavior and search engine revenues: direct ranking effect, interaction effect between ranking and product ratings, and personalized ranking effect. 
We combine a hierarchical Bayesian model estimated on approximately one million online sessions from Travelocity, together with randomized experiments using a real-world hotel search engine application. Our archival data analysis and randomized experiments are consistent in demonstrating the following: (1) a consumer utility-based ranking mechanism can lead to a significant increase in overall search engine revenue. (2) Significant interplay occurs between search engine ranking and product ratings. An inferior position on the search engine affects “higher-class” hotels more adversely. On the other hand, hotels with a lower customer rating are more likely to benefit from being placed on the top of the screen. These findings illustrate that product search engines could benefit from directly incorporating signals from social media into their ranking algorithms. (3) Our randomized experiments also reveal that an “active” (wherein users can interact with and customize the ranking algorithm) personalized ranking system leads to higher clicks but lower purchase propensities and lower search engine revenue compared to a “passive” (wherein users cannot interact with the ranking algorithm) personalized ranking system. This result suggests that providing more information during the decision-making process may lead to fewer consumer purchases because of information overload. Therefore, product search engines should not adopt personalized ranking systems by default. Overall, our study unravels the economic impact of ranking and its interaction with social media on product search engines.", "title": "" }, { "docid": "52755d4ace354c031368167a9da91547", "text": "One of the serious challenges in computer vision and image classification is learning an accurate classifier for a new unlabeled image dataset, considering that there is no available labeled training data. Transfer learning and domain adaptation are two outstanding solutions that tackle this challenge by employing available datasets, even with significant difference in distribution and properties, and transfer the knowledge from a related domain to the target domain. The main difference between these two solutions is their primary assumption about change in marginal and conditional distributions where transfer learning emphasizes on problems with same marginal distribution and different conditional distribution, and domain adaptation deals with opposite conditions. Most prior works have exploited these two learning strategies separately for domain shift problem where training and test sets are drawn from different distributions. In this paper, we exploit joint transfer learning and domain adaptation to cope with domain shift problem in which the distribution difference is significantly large, particularly vision datasets. We therefore put forward a novel transfer learning and domain adaptation approach, referred to as visual domain adaptation (VDA). Specifically, VDA reduces the joint marginal and conditional distributions across domains in an unsupervised manner where no label is available in test set. Moreover, VDA constructs condensed domain invariant clusters in the embedding representation to separate various classes alongside the domain transfer. In this work, we employ pseudo target labels refinement to iteratively converge to final solution. Employing an iterative procedure along with a novel optimization problem creates a robust and effective representation for adaptation across domains. 
Extensive experiments on 16 real vision datasets with different difficulties verify that VDA can significantly outperform state-of-the-art methods in image classification problem.", "title": "" }, { "docid": "12b075837d52d5c73a155466c28f2996", "text": "Banks in Nigeria need to understand the perceptual difference in both male and female employees to better develop adequate policy on sexual harassment. This study investigated the perceptual differences on sexual harassment among male and female bank employees in two commercial cities (Kano and Lagos) of Nigeria.Two hundred and seventy five employees (149 males, 126 females) were conveniently sampled for this study. A survey design with a questionnaire adapted from Sexual Experience Questionnaire (SEQ) comprises of three dimension scalesof sexual harassment was used. The hypotheses were tested with independent samples t-test. The resultsindicated no perceptual differences in labelling sexual harassment clues between male and female bank employees in Nigeria. Thus, the study recommends that bank managers should support and establish the tone for sexual harassment-free workplace. KeywordsGender Harassment, Sexual Coercion, Unwanted Sexual Attention, Workplace.", "title": "" }, { "docid": "0f3d520a6d09c136816a9e0493c45db1", "text": "Specular reflection exists widely in photography and causes the recorded color deviating from its true value, thus, fast and high quality highlight removal from a single nature image is of great importance. In spite of the progress in the past decades in highlight removal, achieving wide applicability to the large diversity of nature scenes is quite challenging. To handle this problem, we propose an analytic solution to highlight removal based on an L2 chromaticity definition and corresponding dichromatic model. Specifically, this paper derives a normalized dichromatic model for the pixels with identical diffuse color: a unit circle equation of projection coefficients in two subspaces that are orthogonal to and parallel with the illumination, respectively. In the former illumination orthogonal subspace, which is specular-free, we can conduct robust clustering with an explicit criterion to determine the cluster number adaptively. In the latter, illumination parallel subspace, a property called pure diffuse pixels distribution rule helps map each specular-influenced pixel to its diffuse component. In terms of efficiency, the proposed approach involves few complex calculation, and thus can remove highlight from high resolution images fast. Experiments show that this method is of superior performance in various challenging cases.", "title": "" }, { "docid": "83305a3f13a943b1226cf92375c30ab4", "text": "The recent availability of Intel Haswell processors marks the transition of hardware transactional memory from research toys to mainstream reality. DBX is an in-memory database that uses Intel's restricted transactional memory (RTM) to achieve high performance and good scalability across multi-core machines. The main limitation (and also key to practicality) of RTM is its constrained working set size: an RTM region that reads or writes too much data will always be aborted. The design of DBX addresses this challenge in several ways. First, DBX builds a database transaction layer on top of an underlying shared-memory store. The two layers use separate RTM regions to synchronize shared memory access. Second, DBX uses optimistic concurrency control to separate transaction execution from its commit. 
Only the commit stage uses RTM for synchronization. As a result, the working set of the RTMs used scales with the meta-data of reads and writes in a database transaction as opposed to the amount of data read/written. Our evaluation using TPC-C workload mix shows that DBX achieves 506,817 transactions per second on a 4-core machine.", "title": "" } ]
scidocsrr
9cadeeec720d0c8287566cc07ffd6fd6
Keyphrase Extraction Based on Prior Knowledge
[ { "docid": "956d052c1599e90d31358735d9ea73aa", "text": "We present a keyphrase extraction algorithm for scientific p ublications. Different from previous work, we introduce features that capture the positions of phrases in document with respect to logical section s f und in scientific discourse. We also introduce features that capture salient morphological phenomena found in scientific keyphrases, such as whether a candida te keyphrase is an acronyms or uses specific terminologically productive suffi xes. We have implemented these features on top of a baseline feature set used by Kea [1]. In our evaluation using a corpus of 120 scientific publications mul tiply annotated for keyphrases, our system significantly outperformed Kea at th e p < .05 level. As we know of no other existing multiply annotated keyphrase do cument collections, we have also made our evaluation corpus publicly avai lable. We hope that this contribution will spur future comparative research.", "title": "" } ]
[ { "docid": "7c1146ddc6e0904e0b30266b164e91f7", "text": "The number of digital images that needs to be acquired, analyzed, classified, stored and retrieved in the medical centers is exponentially growing with the advances in medical imaging technology. Accordingly, medical image classification and retrieval has become a popular topic in the recent years. Despite many projects focusing on this problem, proposed solutions are still far from being sufficiently accurate for real-life implementations. Interpreting medical image classification and retrieval as a multi-class classification task, in this work, we investigate the performance of five different feature types in a SVM-based learning framework for classification of human body X-Ray images into classes corresponding to body parts. Our comprehensive experiments show that four conventional feature types provide performances comparable to the literature with low per-class accuracies, whereas local binary patterns produce not only very good global accuracy but also good class-specific accuracies with respect to the features used in the literature.", "title": "" }, { "docid": "ec9fa7d2b0833d1b2f9fb9c7e0d3f350", "text": "Our goal in this paper is to explore two generic approaches to disrupting dark networks: kinetic and nonkinetic. The kinetic approach involves aggressive and offensive measures to eliminate or capture network members and their supporters, while the non-kinetic approach involves the use of subtle, non-coercive means for combating dark networks. Two strategies derive from the kinetic approach: Targeting and Capacity-building. Four strategies derive from the non-kinetic approach: Institution-Building, Psychological Operations, Information Operations and Rehabilitation. We use network data from Noordin Top’s South East Asian terror network to illustrate how both kinetic and non-kinetic strategies could be pursued depending on a commander’s intent. Using this strategic framework as a backdrop, we strongly advise the use of SNA metrics in developing alterative counter-terrorism strategies that are contextdependent rather than letting SNA metrics define and drive a particular strategy.", "title": "" }, { "docid": "50b5f29431b758e0df5bd6e295ef78d1", "text": "While deep convolutional neural networks (CNNs) have emerged as the driving force of a wide range of domains, their computationally and memory intensive natures hinder the further deployment in mobile and embedded applications. Recently, CNNs with low-precision parameters have attracted much research attention. Among them, multiplier-free binary- and ternary-weight CNNs are reported to be of comparable recognition accuracy with full-precision networks, and have been employed to improve the hardware efficiency. However, even with the weights constrained to binary and ternary values, large-scale CNNs still require billions of operations in a single forward propagation pass.\n In this paper, we introduce a novel approach to maximally eliminate redundancy in binary- and ternary-weight CNN inference, improving both the performance and energy efficiency. The initial kernels are transformed into much fewer and sparser ones, and the output feature maps are rebuilt from the immediate results. Overall, the number of total operations in convolution is reduced. To find an efficient transformation solution for each already trained network, we propose a searching algorithm, which iteratively matches and eliminates the overlap in a set of kernels. 
We design a specific hardware architecture to optimize the implementation of kernel transformation. Specialized dataflow and scheduling method are proposed. Tested on SVHN, AlexNet, and VGG-16, our architecture removes 43.4%--79.9% operations, and speeds up the inference by 1.48--3.01 times.", "title": "" }, { "docid": "b214270aacf9c9672af06e58ff26aa5a", "text": "Traditional techniques for measuring similarities between time series are based on handcrafted similarity measures, whereas more recent learning-based approaches cannot exploit external supervision. We combine ideas from timeseries modeling and metric learning, and study siamese recurrent networks (SRNs) that minimize a classification loss to learn a good similarity measure between time series. Specifically, our approach learns a vectorial representation for each time series in such a way that similar time series are modeled by similar representations, and dissimilar time series by dissimilar representations. Because it is a similarity prediction models, SRNs are particularly well-suited to challenging scenarios such as signature recognition, in which each person is a separate class and very few examples per class are available. We demonstrate the potential merits of SRNs in withindomain and out-of-domain classification experiments and in one-shot learning experiments on tasks such as signature, voice, and sign language recognition.", "title": "" }, { "docid": "1fc2c4294d4c768e5ee80fb0de1eb402", "text": "A promising approach for dealing with the increasing demand of data traffic is the use of device-to-device (D2D) technologies, in particular when the destination can be reached directly, or though few retransmissions by peer devices. Thus, the cellular network can offload local traffic that is transmitted by an ad hoc network, e.g., a mobile ad hoc network (MANET), or a vehicular ad hoc network (VANET). The cellular base station can help coordinate all the devices in the ad hoc network by reusing the software tools developed for software-defined networks (SDNs), which divide the control and the data messages, transmitted in two separate interfaces. In this paper, we present a practical implementation of an SDN MANET, describe in detail the software components that we adopted, and provide a repository for all the new components that we developed. This work can be a starting point for the wireless networking community to design new testbeds with SDN capabilities that can have the advantages of D2D data transmissions and the flexibility of a centralized network management. In order to prove the feasibility of such a network, we also showcase the performance of the proposed network implemented in real devices, as compared to a distributed ad hoc network.", "title": "" }, { "docid": "001b3155f0d67fd153173648cd483ac2", "text": "A new approach to the problem of multimodality medical image registration is proposed, using a basic concept from information theory, mutual information (MI), or relative entropy, as a new matching criterion. The method presented in this paper applies MI to measure the statistical dependence or information redundancy between the image intensities of corresponding voxels in both images, which is assumed to be maximal if the images are geometrically aligned. Maximization of MI is a very general and powerful criterion, because no assumptions are made regarding the nature of this dependence and no limiting constraints are imposed on the image content of the modalities involved. 
The accuracy of the MI criterion is validated for rigid body registration of computed tomography (CT), magnetic resonance (MR), and photon emission tomography (PET) images by comparison with the stereotactic registration solution, while robustness is evaluated with respect to implementation issues, such as interpolation and optimization, and image content, including partial overlap and image degradation. Our results demonstrate that subvoxel accuracy with respect to the stereotactic reference solution can be achieved completely automatically and without any prior segmentation, feature extraction, or other preprocessing steps which makes this method very well suited for clinical applications.", "title": "" }, { "docid": "2cbd47c2e7a1f68bd84d18413db26ea3", "text": "Horizontal gene transfer (HGT) refers to the acquisition of foreign genes by organisms. The occurrence of HGT among bacteria in the environment is assumed to have implications in the risk assessment of genetically modified bacteria which are released into the environment. First, introduced genetic sequences from a genetically modified bacterium could be transferred to indigenous micro-organisms and alter their genome and subsequently their ecological niche. Second, the genetically modified bacterium released into the environment might capture mobile genetic elements (MGE) from indigenous micro-organisms which could extend its ecological potential. Thus, for a risk assessment it is important to understand the extent of HGT and genome plasticity of bacteria in the environment. This review summarizes the present state of knowledge on HGT between bacteria as a crucial mechanism contributing to bacterial adaptability and diversity. In view of the use of GM crops and microbes in agricultural settings, in this mini-review we focus particularly on the presence and role of MGE in soil and plant-associated bacteria and the factors affecting gene transfer.", "title": "" }, { "docid": "3a86f1f91cfaa398a03a56abb34f497c", "text": "We present a practical approach to generate stochastic anisotropic samples with Poisson-disk characteristic over a two-dimensional domain. In contrast to isotropic samples, we understand anisotropic samples as nonoverlapping ellipses whose size and density match a given anisotropic metric. Anisotropic noise samples are useful for many visualization and graphics applications. The spot samples can be used as input for texture generation, for example, line integral convolution (LIC), but can also be used directly for visualization. The definition of the spot samples using a metric tensor makes them especially suitable for the visualization of tensor fields that can be translated into a metric. Our work combines ideas from sampling theory and mesh generation to approximate generalized blue noise properties. To generate these samples with the desired properties, we first construct a set of nonoverlapping ellipses whose distribution closely matches the underlying metric. This set of samples is used as input for a generalized anisotropic Lloyd relaxation to distribute noise samples more evenly. Instead of computing the Voronoi tessellation explicitly, we introduce a discrete approach that combines the Voronoi cell and centroid computation in one step. Our method supports automatic packing of the elliptical samples, resulting in textures similar to those generated by anisotropic reaction-diffusion methods. We use Fourier analysis tools for quality measurement of uniformly distributed samples. 
The resulting samples have nice sampling properties, for example, they satisfy a blue noise property where low frequencies in the power spectrum are reduced to a minimum..", "title": "" }, { "docid": "6f18b8e0a1e7c835dc6f94bfa8d96437", "text": "Recent years have witnessed the rise of the gut microbiota as a major topic of research interest in biology. Studies are revealing how variations and changes in the composition of the gut microbiota influence normal physiology and contribute to diseases ranging from inflammation to obesity. Accumulating data now indicate that the gut microbiota also communicates with the CNS — possibly through neural, endocrine and immune pathways — and thereby influences brain function and behaviour. Studies in germ-free animals and in animals exposed to pathogenic bacterial infections, probiotic bacteria or antibiotic drugs suggest a role for the gut microbiota in the regulation of anxiety, mood, cognition and pain. Thus, the emerging concept of a microbiota–gut–brain axis suggests that modulation of the gut microbiota may be a tractable strategy for developing novel therapeutics for complex CNS disorders.", "title": "" }, { "docid": "60306e39a7b281d35e8a492aed726d82", "text": "The aim of this study was to assess the efficiency of four anesthetic agents, tricaine methanesulfonate (MS-222), clove oil, 7 ketamine, and tobacco extract on juvenile rainbow trout. Also, changes of blood indices were evaluated at optimum doses of four anesthetic agents. Basal effective concentrations determined were 40 mg L−1 (induction, 111 ± 16 s and recovery time, 246 ± 36 s) for clove oil, 150 mg L−1 (induction, 287 ± 59 and recovery time, 358 ± 75 s) for MS-222, 1 mg L−1 (induction, 178 ± 38 and recovery time, 264 ± 57 s) for ketamine, and 30 mg L−1 (induction, 134 ± 22 and recovery time, 285 ± 42 s) for tobacco. According to our results, significant changes in hematological parameters including white blood cells (WBCs), red blood cells (RBCs), hematocrit (Ht), and hemoglobin (Hb) were found between four anesthetics agents. Also, significant differences were observed in some plasma parameters including cortical, glucose, and lactate between experimental treatments. Induction and recovery times for juvenile Oncorhynchus mykiss anesthetized with anesthetic agents were dose-dependent.", "title": "" }, { "docid": "ed8bcc72caefe30126ece6eb7a549243", "text": "This paper describes a new concept for locomotion of mobile robots based on single actuated tensegrity structures. To discuss the working principle, two vibration-driven locomotion systems are considered. Due to the complex dynamics of the applied tensegrity structures with pronounced mechanical compliance, the movement performance of both systems is highly dependent on the driving frequency. By using single-actuation, the system design and also their control can be kept simple. The movement of the robots is depending on their configuration uniaxial bidirectional or planar. The working principle of both systems is discussed with the help of transient dynamic analyses and verified with experimental tests for a selected prototype.", "title": "" }, { "docid": "601b06f0cdf578400b11a54f36e14d56", "text": "Advances in deep learning algorithms overshadow their security risk in software implementations. This paper discloses a set of vulnerabilities in popular deep learning frameworks including Caffe, TensorFlow, and Torch. 
Contrary to the small code size of deep learning models, these deep learning frameworks are complex, and they heavily depend on numerous open source packages. This paper considers the risks caused by these vulnerabilities by studying their impact on common deep learning applications such as voice recognition and image classification. By exploiting these framework implementations, attackers can launch denial-of-service attacks that crash or hang a deep learning application, or control-flow hijacking attacks that lead to either system compromise or recognition evasions. The goal of this paper is to draw attention to software implementations and call for community collaborative effort to improve security of deep learning frameworks.", "title": "" }, { "docid": "d5302f6d0633313a30fa9cb0b90dcd0e", "text": "Differing classes of abused drugs utilize different mechanisms of molecular pharmacological action yet the overuse of these same drugs frequently leads to the same outcome: addiction. Similarly, episodes of stress can lead to drug-seeking behaviors and relapse in recovering addicts. To overcome the labor-intensive headache of having to design a specific addiction-breaking intervention tailored to each drug it would be expedient to attack the cycle of addiction at targets common to such seemingly disparate classes of drugs of abuse. Recently, encouraging observations were made whereby stressful conditions and differing classes of drugs of abuse were found to impinge upon the same excitatory synapses on dopamine neurons in the midbrain. These findings will increase our understanding of the intricacies of addiction and LTP, and may lead to new interventions for breaking addiction.", "title": "" }, { "docid": "57666e9d9b7e69c38d7530633d556589", "text": "In this paper, we investigate the utility of linguistic features for detecting the sentiment of Twitter messages. We evaluate the usefulness of existing lexical resources as well as features that capture information about the informal and creative language used in microblogging. We take a supervised approach to the problem, but leverage existing hashtags in the Twitter data for building training data.", "title": "" }, { "docid": "c6967ff67346894766f810f44a6bb6bc", "text": "Knowledge about the effects of physical exercise on brain is accumulating although the mechanisms through which exercise exerts these actions remain largely unknown. A possible involvement of adult hippocampal neurogenesis (AHN) in the effects of exercise is debated while the physiological and pathological significance of AHN is under intense scrutiny. Recently, both neurogenesis-dependent and independent mechanisms have been shown to mediate the effects of physical exercise on spatial learning and anxiety-like behaviors. Taking advantage that the stimulating effects of exercise on AHN depend among others, on serum insulin-like growth factor I (IGF-I), we now examined whether the behavioral effects of running exercise are related to variations in hippocampal neurogenesis, by either increasing or decreasing it according to serum IGF-I levels. Mutant mice with low levels of serum IGF-I (LID mice) had reduced AHN together with impaired spatial learning. These deficits were not improved by running. However, administration of exogenous IGF-I ameliorated the cognitive deficit and restored AHN in LID mice. We also examined the effect of exercise in LID mice in the novelty-suppressed feeding test, a measure of anxiety-like behavior in laboratory animals. 
Normal mice, but not LID mice, showed reduced anxiety after exercise in this test. However, after exercise, LID mice did show improvement in the forced swim test, a measure of behavioral despair. Thus, many, but not all of the beneficial effects of exercise on brain function depend on circulating levels of IGF-I and are associated to increased hippocampal neurogenesis, including improved cognition and reduced anxiety.", "title": "" }, { "docid": "67392cae4df0da44c8fda4b3f9eceb29", "text": "We propose a modification to weight normalization techniques that provides the same convergence benefits but requires fewer computational operations. The proposed method, FastNorm, exploits the low-rank properties of weight updates and infers the norms without explicitly calculating them, replacing anO(n) computation with an O(n) one for a fully-connected layer. It improves numerical stability and reduces accuracy variance enabling higher learning rate and offering better convergence. We report experimental results that illustrate the advantage of the proposed method.", "title": "" }, { "docid": "60ed46346d2992789e4ecd34e1936cc7", "text": "The aim of this study was to differentiate the effects of body load and joint movements on the leg muscle activation pattern during assisted locomotion in spinal man. Stepping movements were induced by a driven gait orthosis (DGO) on a treadmill in patients with complete para-/tetraplegia and, for comparison, in healthy subjects. All subjects were unloaded by 70% of their body weight. EMG of upper and lower leg muscles and joint movements of the DGO of both legs were recorded. In the patients, normal stepping movements and those mainly restricted to the hips (blocked knees) were associated with a pattern of leg muscle EMG activity that corresponded to that of the healthy subjects, but the amplitude was smaller. Locomotor movements restricted to imposed ankle joint movements were followed by no, or only focal EMG responses in the stretched muscles. Unilateral locomotion in the patients was associated with a normal pattern of leg muscle EMG activity restricted to the moving side, while in the healthy subjects a bilateral activation occurred. This indicates that interlimb coordination depends on a supraspinal input. During locomotion with 100% body unloading in healthy subjects and patients, no EMG activity was present. Thus, it can be concluded that afferent input from hip joints, in combination with that from load receptors, plays a crucial role in the generation of locomotor activity in the isolated human spinal cord. This is in line with observations from infant stepping experiments and experiments in cats. Afferent feedback from knee and ankle joints may be involved largely in the control of focal movements.", "title": "" }, { "docid": "3f207c3c622d1854a7ad6c5365354db1", "text": "The field of Music Information Retrieval has always acknowledged the need for rigorous scientific evaluations, and several efforts have set out to develop and provide the infrastructure, technology and methodologies needed to carry out these evaluations. The community has enormously gained from these evaluation forums, but we have reached a point where we are stuck with evaluation frameworks that do not allow us to improve as much and as well as we want. The community recently acknowledged this problem and showed interest in addressing it, though it is not clear what to do to improve the situation. We argue that a good place to start is again the Text IR field. 
Based on a formalization of the evaluation process, this paper presents a survey of past evaluation work in the context of Text IR, from the point of view of validity, reliability and efficiency of the experiments. We show the problems that our community currently has in terms of evaluation, point to several lines of research to improve it and make various proposals in that line.", "title": "" }, { "docid": "c89a7027de2362aa1bfe64b084073067", "text": "This paper considers pick-and-place tasks using aerial vehicles equipped with manipulators. The main focus is on the development and experimental validation of a nonlinear model-predictive control methodology to exploit the multi-body system dynamics and achieve optimized performance. At the core of the approach lies a sequential Newton method for unconstrained optimal control and a high-frequency low-level controller tracking the generated optimal reference trajectories. A low cost quadrotor prototype with a simple manipulator extending more than twice the radius of the vehicle is designed and integrated with an on-board vision system for object tracking. Experimental results show the effectiveness of model-predictive control to motivate the future use of real-time optimal control in place of standard ad-hoc gain scheduling techniques.", "title": "" }, { "docid": "c11b77f1392c79f4a03f9633c8f97f4d", "text": "The paper introduces and discusses a concept of syntactic n-grams (sn-grams) that can be applied instead of traditional n-grams in many NLP tasks. Sn-grams are constructed by following paths in syntactic trees, so sngrams allow bringing syntactic knowledge into machine learning methods. Still, previous parsing is necessary for their construction. We applied sn-grams in the task of authorship attribution for corpora of three and seven authors with very promising results.", "title": "" } ]
scidocsrr
b6e8dbd872062bdab44281f822532c16
A parallel workload model and its implications for processor allocation
[ { "docid": "da1d1e9ddb5215041b9565044b9feecb", "text": "As multiprocessors with large numbers of processors become more prevalent, we face the task of developing scheduling algorithms for the multiprogrammed use of such machines. The scheduling decisions must take into account the number of processors available, the overall system load, and the ability of each application awaiting activation to make use of a given number of processors.\nThe parallelism within an application can be characterized at a number of different levels of detail. At the highest level, it might be characterized by a single parameter (such as the proportion of the application that is sequential, or the average number of processors the application would use if an unlimited number of processors were available). At the lowest level, representing all the parallelism in the application requires the full data dependency graph (which is more information than is practically manageable).\nIn this paper, we examine the quality of processor allocation decisions under multiprogramming that can be made with several different high-level characterizations of application parallelism. We demonstrate that decisions based on parallelism characterizations with two to four parameters are superior to those based on single-parameter characterizations (such as fraction sequential or average parallelism). The results are based predominantly on simulation, with some guidance from a simple analytic model.", "title": "" } ]
[ { "docid": "323f7fd7269d020ebc60af1917e90cb4", "text": "This paper describes the design concept, operating principle, analytical design, fabrication of a functional prototype, and experimental performance verification of a novel wobble motor with a XY compliant mechanism driven by shape memory alloy (SMA) wires. With the aim of realizing an SMA based motor which could generate bidirectional high-torque motion, the proposed motor is devised with wobble motor driving principle widely utilized for speed reducers. As a key mechanism which functions to guide wobbling motion, a planar XY compliant mechanism is designed and applied to the motor. Since the mechanism has monolithic flat structure with the planar mirror symmetric configuration, cyclic expansion and contraction of the SMA wires could be reliably converted into high-torque rotary motion. For systematic design of the motor, a characterization of electro-thermomechanical behavior of the SMA wire is experimentally carried out, and the design parametric analysis is conducted to determine parametric values of the XY compliant mechanism. The designed motor is fabricated as a functional prototype to experimentally investigate its operational feasibility and working performances. The observed experimental results obviously demonstrate the unique driving characteristics and practical applicability of the proposed motor.", "title": "" }, { "docid": "121f1baeaba51ebfdfc69dde5cd06ce3", "text": "Mobile operators are facing an exponential traffic growth due to the proliferation of portable devices that require a high-capacity connectivity. This, in turn, leads to a tremendous increase of the energy consumption of wireless access networks. A promising solution to this problem is the concept of heterogeneous networks, which is based on the dense deployment of low-cost and low-power base stations, in addition to the traditional macro cells. However, in such a scenario the energy consumed by the backhaul, which aggregates the traffic from each base station towards the metro/core segment, becomes significant and may limit the advantages of heterogeneous network deployments. This paper aims at assessing the impact of backhaul on the energy consumption of wireless access networks, taking into consideration different data traffic requirements (i.e., from todays to 2020 traffic levels). Three backhaul architectures combining different technologies (i.e., copper, fiber, and microwave) are considered. Results show that backhaul can amount to up to 50% of the power consumption of a wireless access network. On the other hand, hybrid backhaul architectures that combines fiber and microwave performs relatively well in scenarios where the wireless network is characterized by a high small-base-stations penetration rate.", "title": "" }, { "docid": "7fe44f62935744b5ae6ee78ae15150dd", "text": "The flexibility and general programmability offered by the Software Defined Networking (SDN) technology has supposed a disruption in the evolution of the network. It offers enormous benefits to network control and opens new ways of communication by defining powerful but simple switching elements (forwarders) that can use any single field of a packet or message to determine the outgoing port to which it will be forwarded. Such benefits can be applied to the Internet of Things (IoT) and thus resolve some of the main challenges it exposes, such as the ability to let devices connected to heterogeneous networks to communicate each other. 
In the present document we describe a general model to integrate SDN and IoT so that heterogeneous communications are achieved. However, it exposes other (simpler) challenges must be resolved, evaluated, and validated against current and future solutions before the design of the integrated approach can be finished.", "title": "" }, { "docid": "8468e279ff6dfcd11a5525ab8a60d816", "text": "We provide a concise introduction to basic approaches to reinforcement learning from the machine learning perspective. The focus is on value function and policy gradient methods. Some selected recent trends are highlighted.", "title": "" }, { "docid": "4507ae69ed021941ff7b0e39d8d50d22", "text": "In the last few years a new research area, called stream reasoning, emerged to bridge the gap between reasoning and stream processing. While current reasoning approaches are designed to work on mainly static data, the Web is, on the other hand, extremely dynamic: information is frequently changed and updated, and new data is continuously generated from a huge number of sources, often at high rate. In other words, fresh information is constantly made available in the form of streams of new data and updates. Despite some promising investigations in the area, stream reasoning is still in its infancy, both from the perspective of models and theories development, and from the perspective of systems and tools design and implementation. The aim of this paper is threefold: (i) we identify the requirements coming from different application scenarios, and we isolate the problems they pose; (ii) we survey existing approaches and proposals in the area of stream reasoning, highlighting their strengths and limitations; (iii) we draw a research agenda to guide the future research and development of stream reasoning. In doing so, we also analyze related research fields to extract algorithms, models, techniques, and solutions that could be useful in the area of stream reasoning.", "title": "" }, { "docid": "cf702356b3a8895f5a636cc05597b52a", "text": "This paper investigates non-fragile exponential <inline-formula> <tex-math notation=\"LaTeX\">$ {H_\\infty }$ </tex-math></inline-formula> control problems for a class of uncertain nonlinear networked control systems (NCSs) with randomly occurring information, such as the controller gain fluctuation and the uncertain nonlinearity, and short time-varying delay via output feedback controller. Using the nominal point technique, the NCS is converted into a novel time-varying discrete time model with norm-bounded uncertain parameters for reducing the conservativeness. Based on linear matrix inequality framework and output feedback control strategy, design methods for general and optimal non-fragile exponential <inline-formula> <tex-math notation=\"LaTeX\">$ {H_\\infty }$ </tex-math></inline-formula> controllers are presented. Meanwhile, these control laws can still be applied to linear NCSs and general fragile control NCSs while introducing random variables. Finally, three examples verify the correctness of the presented scheme.", "title": "" }, { "docid": "faa951d9c72c36c2df205c44c3f60c28", "text": "Face perception is mediated by a distributed neural system in humans that consists of multiple, bilateral regions. 
The functional organization of this system embodies a distinction between the representation of invariant aspects of faces, which is the basis for recognizing individuals, and the representation of changeable aspects, such as eye gaze, expression, and lip movement, which underlies the perception of information that facilitates social communication. The system also has a hierarchical organization. A core system, consisting of occipitotemporal regions in extrastriate visual cortex, mediates the visual analysis of faces. An extended system consists of regions from neural systems for other cognitive functions that can act in concert with the core system to extract meaning from faces. Of regions in the extended system for face perception, the amygdala plays a central role in processing the social relevance of information gleaned from faces, particularly when that information may signal a potential threat.", "title": "" }, { "docid": "39b2903849932dd7c4ef1dc669ec04e1", "text": "Emerging technologies such as the Internet of Things (IoT) require latency-aware computation for real-time application processing. In IoT environments, connected things generate a huge amount of data, which are generally referred to as big data. Data generated from IoT devices are generally processed in a cloud infrastructure because of the on-demand services and scalability features of the cloud computing paradigm. However, processing IoT application requests on the cloud exclusively is not an efficient solution for some IoT applications, especially time-sensitive ones. To address this issue, Fog computing, which resides in between cloud and IoT devices, was proposed. In general, in the Fog computing environment, IoT devices are connected to Fog devices. These Fog devices are located in close proximity to users and are responsible for intermediate computation and storage. One of the key challenges in running IoT applications in a Fog computing environment are resource allocation and task scheduling. Fog computing research is still in its infancy, and taxonomy-based investigation into the requirements of Fog infrastructure, platform, and applications mapped to current research is still required. This survey will help the industry and research community synthesize and identify the requirements for Fog computing. This paper starts with an overview of Fog computing in which the definition of Fog computing, research trends, and the technical differences between Fog and cloud are reviewed. Then, we investigate numerous proposed Fog computing architectures and describe the components of these architectures in detail. From this, the role of each component will be defined, which will help in the deployment of Fog computing. Next, a taxonomy of Fog computing is proposed by considering the requirements of the Fog computing paradigm. We also discuss existing research works and gaps in resource allocation and scheduling, fault tolerance, simulation tools, and Fog-based microservices. Finally, by addressing the limitations of current research works, we present some open issues, which will determine the future research direction for the Fog computing paradigm.", "title": "" }, { "docid": "e0919f53691d17c7cb495c19914683f8", "text": "Carpooling has long held the promise of reducing gas consumption by decreasing mileage to deliver coriders. Although ad hoc carpools already exist in the real world through private arrangements, little research on the topic has been done. 
In this article, we present the first systematic work to design, implement, and evaluate a carpool service, called coRide, in a large-scale taxicab network intended to reduce total mileage for less gas consumption. Our coRide system consists of three components, a dispatching cloud server, passenger clients, and an onboard customized device, called TaxiBox. In the coRide design, in response to the delivery requests of passengers, dispatching cloud servers calculate cost-efficient carpool routes for taxicab drivers and thus lower fares for the individual passengers.\n To improve coRide’s efficiency in mileage reduction, we formulate an NP-hard route calculation problem under different practical constraints. We then provide (1) an optimal algorithm using Linear Programming, (2) a 2-approximation algorithm with a polynomial complexity, and (3) its corresponding online version with a linear complexity. To encourage coRide’s adoption, we present a win-win fare model as the incentive mechanism for passengers and drivers to participate. We test the performance of coRide by a comprehensive evaluation with a real-world trial implementation and a data-driven simulation with 14,000 taxi data from the Chinese city Shenzhen. The results show that compared with the ground truth, our service can reduce 33% of total mileage; with our win-win fare model, we can lower passenger fares by 49% and simultaneously increase driver profit by 76%.", "title": "" }, { "docid": "908f862dea52cd9341d2127928baa7de", "text": "Arsenic's history in science, medicine and technology has been overshadowed by its notoriety as a poison in homicides. Arsenic is viewed as being synonymous with toxicity. Dangerous arsenic concentrations in natural waters is now a worldwide problem and often referred to as a 20th-21st century calamity. High arsenic concentrations have been reported recently from the USA, China, Chile, Bangladesh, Taiwan, Mexico, Argentina, Poland, Canada, Hungary, Japan and India. Among 21 countries in different parts of the world affected by groundwater arsenic contamination, the largest population at risk is in Bangladesh followed by West Bengal in India. Existing overviews of arsenic removal include technologies that have traditionally been used (oxidation, precipitation/coagulation/membrane separation) with far less attention paid to adsorption. No previous review is available where readers can get an overview of the sorption capacities of both available and developed sorbents used for arsenic remediation together with the traditional remediation methods. We have incorporated most of the valuable available literature on arsenic remediation by adsorption ( approximately 600 references). Existing purification methods for drinking water; wastewater; industrial effluents, and technological solutions for arsenic have been listed. Arsenic sorption by commercially available carbons and other low-cost adsorbents are surveyed and critically reviewed and their sorption efficiencies are compared. Arsenic adsorption behavior in presence of other impurities has been discussed. Some commercially available adsorbents are also surveyed. An extensive table summarizes the sorption capacities of various adsorbents. 
Some low-cost adsorbents are superior including treated slags, carbons developed from agricultural waste (char carbons and coconut husk carbons), biosorbents (immobilized biomass, orange juice residue), goethite and some commercial adsorbents, which include resins, gels, silica, treated silica tested for arsenic removal come out to be superior. Immobilized biomass adsorbents offered outstanding performances. Desorption of arsenic followed by regeneration of sorbents has been discussed. Strong acids and bases seem to be the best desorbing agents to produce arsenic concentrates. Arsenic concentrate treatment and disposal obtained is briefly addressed. This issue is very important but much less discussed.", "title": "" }, { "docid": "b43bcd460924f0b5a7366f23bf0d8fe7", "text": "Historically, it has been difficult to define paraphilias in a consistent manner or distinguish paraphilias from non-paraphilic or normophilic sexual interests (see Blanchard, 2009a; Moser & Kleinplatz, 2005). As part of the American Psychiatric Association’s (APA) process of revising the Diagnostic and Statistical Manual of Mental Disorders (DSM), Blanchard (2010a), the chair of the DSM-5 Paraphilias subworkgroup (PSWG), has proposed a new paraphilia definition: ‘‘A paraphilia is any powerful and persistent sexual interest other than sexual interest in copulatory or precopulatory behavior with phenotypicallynormal, consentingadulthumanpartners’’ (p. 367). Blanchard (2009a) acknowledges that his paraphilia ‘‘definition is not watertight’’and it already has attracted serious criticism (see Haeberle, 2010; Hinderliter, 2010; Singy, 2010). The current analysis will critique three components of Blanchard’s proposed definition (sexual interest in copulatory or precopulatory behavior, phenotypically normal, and consenting adult human partners) to determine if the definition is internally consistent andreliably distinguishes individualswith a paraphilia from individuals with normophilia. Blanchard (2009a) believes his definition ‘‘is better than no real definition,’’but that remains to be seen. According to Blanchard (2009a), the current DSM paraphilia definition (APA, 2000) is a definition by concatenation (a list of things that are paraphilias), but he believes a definition by exclusion (everything that is not normophilic) is preferable. The change is not substantive as normophilia (formerly a definitionofexclusion)nowbecomesadefinitionofconcatenation (a list of acceptable activities). Nevertheless, it seems odd to define a paraphilia on the basis of what it is not, rather than by the commonalities among the different paraphilias. Most definitions are statements of what things are, not what things are excluded or lists of things to be included. Blanchard (2009a) purposefully left ‘‘intact the distinction betweennormativeandnon-normativesexualbehavior,’’implying that these categories are meaningful. Blanchard (2010b; see alsoBlanchardetal.,2009)definesaparaphiliabyrelativeascertainment (the interest in paraphilic stimuli is greater than the interest in normophilic stimuli) rather than absolute ascertainment (the interest is intense). Using relative ascertainment confirms that one cannot be both paraphilic and normophilic; the greater interest would classify the individual as paraphilic or normophilic. Blanchard (2010a) then contradicts himself when he asserts that once ascertained with a paraphilia, the individual should retain that label, even if the powerful and persistent paraphilic sexual interest dissipates. 
Logically, the relative dissipation of the paraphilic and augmentation of the normophilic interests should re-categorize the individual as normophilic. The first aspect of Blanchard’s paraphilia definition is the ‘‘sexual interest incopulatoryorprecopulatorybehavior.’’Obviously, most normophilic individuals do not desire or respond sexually to all adults. Ascertaining if someone is more aroused by the coitus or their partner’s physique, attitude, attributes, etc. seems fruitless and hopelessly convoluted. I can see no other way to interpret sexual interest in copulatory or precopulatory behavior, except to conclude that coitus (between phenotypically normal consenting adults) is normophilic. Otherwise, a powerful and persistent preference for blonde (or Asian or petite) coital partners is a paraphilia. If a relative lack of sexual interest in brunettes as potential coital partners indicates a C. Moser (&) Department of Sexual Medicine, Institute for Advanced Study of Human Sexuality, 45 Castro Street, #125, San Francisco, CA 94114, USA e-mail: docx2@ix.netcom.com 1 Another version of this definition exists (Blanchard, 2009a, 2009b), but I do not believe the changes substantially alter any of my comments.", "title": "" }, { "docid": "7457c09c1068ba1397f468879bc3b0d1", "text": "Genome editing has potential for the targeted correction of germline mutations. Here we describe the correction of the heterozygous MYBPC3 mutation in human preimplantation embryos with precise CRISPR–Cas9-based targeting accuracy and high homology-directed repair efficiency by activating an endogenous, germline-specific DNA repair response. Induced double-strand breaks (DSBs) at the mutant paternal allele were predominantly repaired using the homologous wild-type maternal gene instead of a synthetic DNA template. By modulating the cell cycle stage at which the DSB was induced, we were able to avoid mosaicism in cleaving embryos and achieve a high yield of homozygous embryos carrying the wild-type MYBPC3 gene without evidence of off-target mutations. The efficiency, accuracy and safety of the approach presented suggest that it has potential to be used for the correction of heritable mutations in human embryos by complementing preimplantation genetic diagnosis. However, much remains to be considered before clinical applications, including the reproducibility of the technique with other heterozygous mutations.", "title": "" }, { "docid": "36b7b37429a8df82e611df06303a8fcb", "text": "Complex machine learning models for NLP are often brittle, making different predictions for input instances that are extremely similar semantically. To automatically detect this behavior for individual instances, we present semantically equivalent adversaries (SEAs) – semantic-preserving perturbations that induce changes in the model’s predictions. We generalize these adversaries into semantically equivalent adversarial rules (SEARs) – simple, universal replacement rules that induce adversaries on many instances. We demonstrate the usefulness and flexibility of SEAs and SEARs by detecting bugs in black-box state-of-the-art models for three domains: machine comprehension, visual questionanswering, and sentiment analysis. Via user studies, we demonstrate that we generate high-quality local adversaries for more instances than humans, and that SEARs induce four times as many mistakes as the bugs discovered by human experts. 
SEARs are also actionable: retraining models using data augmentation significantly reduces bugs, while maintaining accuracy.", "title": "" }, { "docid": "56dabbcf36d734211acc0b4a53f23255", "text": "Cloud computing is a way to increase the capacity or add capabilities dynamically without investing in new infrastructure, training new personnel, or licensing new software. It extends Information Technology’s (IT) existing capabilities. In the last few years, cloud computing has grown from being a promising business concept to one of the fast growing segments of the IT industry. But as more and more information on individuals and companies are placed in the cloud, concerns are beginning to grow about just how safe an environment it is. Despite of all the hype surrounding the cloud, enterprise customers are still reluctant to deploy their business in the cloud. Security is one of the major issues which reduces the growth of cloud computing and complications with data privacy and data protection continue to plague the market. The advent of an advanced model should not negotiate with the required functionalities and capabilities present in the current model. A new model targeting at improving features of an existing model must not risk or threaten other important features of the current model. The architecture of cloud poses such a threat to the security of the existing technologies when deployed in a cloud environment. Cloud service users need to be vigilant in understanding the risks of data breaches in this new environment. In this paper, a survey of the different security risks that pose a threat to the cloud is presented. This paper is a survey more specific to the different security issues that has emanated due to the nature of the service delivery models of a cloud computing system. & 2010 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "2ac20d934cb911b6751e93d9bc750fcf", "text": "In recent years, visual saliency estimation in images has attracted much attention in the computer vision community. However, predicting saliency in videos has received relatively little attention. Inspired by the recent success of deep convolutional neural networks based static saliency models, in this work, we study two different two-stream convolutional networks for dynamic saliency prediction. To improve the generalization capability of our models, we also introduce a novel, empirically grounded data augmentation technique for this task. We test our models on DIEM dataset and report superior results against the existing models. Moreover, we perform transfer learning experiments on SALICON, a recently proposed static saliency dataset, by finetuning our models on the optical flows estimated from static images. Our experiments show that taking motion into account in this way can be helpful for static saliency estimation.", "title": "" }, { "docid": "b4714cacd13600659e8a94c2b8271697", "text": "AIM AND OBJECTIVE\nExamine the pharmaceutical qualities of cannabis including a historical overview of cannabis use. Discuss the use of cannabis as a clinical intervention for people experiencing palliative care, including those with life-threatening chronic illness such as multiple sclerosis and motor neurone disease [amyotrophic lateral sclerosis] in the UK.\n\n\nBACKGROUND\nThe non-medicinal use of cannabis has been well documented in the media. There is a growing scientific literature on the benefits of cannabis in symptom management in cancer care. 
Service users, nurses and carers need to be aware of the implications for care and treatment if cannabis is being used medicinally.\n\n\nDESIGN\nA comprehensive literature review.\n\n\nMETHOD\nLiterature searches were made of databases from 1996 using the term cannabis and the combination terms of cannabis and palliative care; symptom management; cancer; oncology; chronic illness; motor neurone disease/amyotrophic lateral sclerosis; and multiple sclerosis. Internet material provided for service users searching for information about the medicinal use of cannabis was also examined.\n\n\nRESULTS\nThe literature on the use of cannabis in health care repeatedly refers to changes for users that may be equated with improvement in quality of life as an outcome of its use. This has led to increased use of cannabis by these service users. However, the cannabis used is usually obtained illegally and can have consequences for those who choose to use it for its therapeutic value and for nurses who are providing care.\n\n\nRELEVANCE TO CLINICAL PRACTICE\nQuestions and dilemmas are raised concerning the role of the nurse when caring and supporting a person making therapeutic use of cannabis.", "title": "" }, { "docid": "6162ad3612b885add014bd09baa5f07a", "text": "The Neural Bag-of-Words (NBOW) model performs classification with an average of the input word vectors and achieves an impressive performance. While the NBOW model learns word vectors targeted for the classification task it does not explicitly model which words are important for given task. In this paper we propose an improved NBOW model with this ability to learn task specific word importance weights. The word importance weights are learned by introducing a new weighted sum composition of the word vectors. With experiments on standard topic and sentiment classification tasks, we show that (a) our proposed model learns meaningful word importance for a given task (b) our model gives best accuracies among the BOW approaches. We also show that the learned word importance weights are comparable to tf-idf based word weights when used as features in a BOW SVM classifier.", "title": "" }, { "docid": "0b135f95bfcccf34c75959a41a0a7fe6", "text": "Analogy is a kind of similarity in which the same system of relations holds across different objects. Analogies thus capture parallels across different situations. When such a common structure is found, then what is known about one situation can be used to infer new information about the other. This chapter describes the processes involved in analogical reasoning, reviews foundational research and recent developments in the field, and proposes new avenues of investigation.", "title": "" }, { "docid": "cf29cfcec35d7005641b38cae8cd4b74", "text": "University can be a difficult, stressful time for students. This stress causes problems ranging from academic difficulties and poor performance, to serious mental and physical health issues. Studies have shown that physical activity can help reduce stress, improve academic performance and contribute to a healthier campus atmosphere physically, mentally, and emotionally. Computer science is often considered among the most difficult and stressful programs offered at academic institutions. Yet the current stereotype of computer scientists includes unhealthy lifestyle choices and de-emphasizes physical activity. \n This paper analyzes the effects of introducing short periods of physical activity into an introductory CS course, during the normal lecture break. 
Contrary to the stereotype of CS students, participation was high, and the students enjoyed these Fit-Breaks more than alternative break activities. This small injection of physical activity also had a measurable impact on the students' overall satisfaction with life, and may have had positive impacts on stress, retention, and academic performance as well as improved student perception, especially in areas that are traditionally problematic for female computer science students. \n Fit-Breaks are low-cost, easy to replicate, and enjoyable exercises. Instead of sitting quietly for ten minutes staring at a phone; stretching, moving, and getting a short burst of physical activity has a positive benefit for students. And the good news is: they actually enjoy it.", "title": "" }, { "docid": "80477fdab96ae761dbbb7662b87e82a0", "text": "This article provides minimum requirements for having confidence in the accuracy of EC50/IC50 estimates. Two definitions of EC50/IC50s are considered: relative and absolute. The relative EC50/IC50 is the parameter c in the 4-parameter logistic model and is the concentration corresponding to a response midway between the estimates of the lower and upper plateaus. The absolute EC50/IC50 is the response corresponding to the 50% control (the mean of the 0% and 100% assay controls). The guidelines first describe how to decide whether to use the relative EC50/IC50 or the absolute EC50/IC50. Assays for which there is no stable 100% control must use the relative EC50/IC50. Assays having a stable 100% control but for which there may be more than 5% error in the estimate of the 50% control mean should use the relative EC50/IC50. Assays that can be demonstrated to produce an accurate and stable 100% control and less than 5% error in the estimate of the 50% control mean may gain efficiency as well as accuracy by using the absolute EC50/IC50. Next, the guidelines provide rules for deciding when the EC50/IC50 estimates are reportable. The relative EC50/IC50 should only be used if there are at least two assay concentrations beyond the lower and upper bend points. The absolute EC50/IC50 should only be used if there are at least two assay concentrations whose predicted response is less than 50% and two whose predicted response is greater than 50%. A wide range of typical assay conditions are considered in the development of the guidelines.", "title": "" } ]
scidocsrr
03838238c643a539dbd5bda0dc913947
To Centralize or to Distribute: That Is the Question: A Comparison of Advanced Microgrid Management Systems
[ { "docid": "13ae30bc5bcb0714fe752fbe9c7e5de8", "text": "The increasing interest in integrating intermittent renewable energy sources into microgrids presents major challenges from the viewpoints of reliable operation and control. In this paper, the major issues and challenges in microgrid control are discussed, and a review of state-of-the-art control strategies and trends is presented; a general overview of the main control principles (e.g., droop control, model predictive control, multi-agent systems) is also included. The paper classifies microgrid control strategies into three levels: primary, secondary, and tertiary, where primary and secondary levels are associated with the operation of the microgrid itself, and tertiary level pertains to the coordinated operation of the microgrid and the host grid. Each control level is discussed in detail in view of the relevant existing technical literature.", "title": "" }, { "docid": "e3d1282b2ed8c9724cf64251df7e14df", "text": "This paper describes and evaluates the feasibility of control strategies to be adopted for the operation of a microgrid when it becomes isolated. Normally, the microgrid operates in interconnected mode with the medium voltage network; however, scheduled or forced isolation can take place. In such conditions, the microgrid must have the ability to operate stably and autonomously. An evaluation of the need of storage devices and load shedding strategies is included in this paper.", "title": "" }, { "docid": "3be99b1ef554fde94742021e4782a2aa", "text": "This is the second part of a two-part paper that has arisen from the work of the IEEE Power Engineering Society's Multi-Agent Systems (MAS) Working Group. Part I of this paper examined the potential value of MAS technology to the power industry, described fundamental concepts and approaches within the field of multi-agent systems that are appropriate to power engineering applications, and presented a comprehensive review of the power engineering applications for which MAS are being investigated. It also defined the technical issues which must be addressed in order to accelerate and facilitate the uptake of the technology within the power and energy sector. Part II of this paper explores the decisions inherent in engineering multi-agent systems for applications in the power and energy sector and offers guidance and recommendations on how MAS can be designed and implemented. Given the significant and growing interest in this field, it is imperative that the power engineering community considers the standards, tools, supporting technologies, and design methodologies available to those wishing to implement a MAS solution for a power engineering problem. This paper describes the various options available and makes recommendations on best practice. It also describes the problem of interoperability between different multi-agent systems and proposes how this may be tackled.", "title": "" } ]
[ { "docid": "38c9cee29ef1ba82e45556d87de1ff24", "text": "This paper presents a detailed characterization of the Hokuyo URG-04LX 2D laser range finder. While the sensor specifications only provide a rough estimation of the sensor accuracy, the present work analyzes issues such as time drift effects and dependencies on distance, target properties (color, brightness and material) as well as incidence angle. Since the sensor is intended to be used for measurements of a tubelike environment on an inspection robot, the characterization is extended by investigating the influence of the sensor orientation and dependency on lighting conditions. The sensor characteristics are compared to those of the Sick LMS 200 which is commonly used in robotic applications when size and weight are not critical constraints. The results show that the sensor accuracy is strongly depending on the target properties (color, brightness, material) and that it is consequently difficult to establish a calibration model. The paper also identifies cases for which the sensor returns faulty measurements, mainly when the surface has low reflectivity (dark surfaces, foam) or for high incidence angles on shiny surfaces. On the other hand, the repeatability of the sensor seems to be competitive with the LMS 200.", "title": "" }, { "docid": "525f9a7321a7b45111a19f458c9b976a", "text": "This paper provides a literature review on Adaptive Line Enhancer (ALE) methods based on adaptive noise cancellation systems. Such methods have been used in various applications, including communication systems, biomedical engineering, and industrial applications. Developments in ALE in noise cancellation are reviewed, including the principles, adaptive algorithms, and recent modifications on the filter design proposed to increase the convergence rate and reduce the computational complexity for future implementation. The advantages and drawbacks of various adaptive algorithms, such as the Least Mean Square, Recursive Least Square, Affine Projection Algorithm, and their variants, are discussed in this review. Design modifications of filter structures used in ALE are also evaluated. Such filters include Finite Impulse Response, Infinite Impulse Response, lattice, and nonlinear adaptive filters. These structural modifications aim to achieve better adaptive filter performance in ALE systems. Finally, a perspective of future research on ALE systems is presented for further consideration.", "title": "" }, { "docid": "8c1d51dd52bc14e8952d9e319eaacf16", "text": "This paper presents an approach to text recognition in natural scene images. Unlike most existing works which assume that texts are horizontal and frontal parallel to the image plane, our method is able to recognize perspective texts of arbitrary orientations. For individual character recognition, we adopt a bag-of-key points approach, in which Scale Invariant Feature Transform (SIFT) descriptors are extracted densely and quantized using a pre-trained vocabulary. Following [1, 2], the context information is utilized through lexicons. We formulate word recognition as finding the optimal alignment between the set of characters and the list of lexicon words. Furthermore, we introduce a new dataset called StreetViewText-Perspective, which contains texts in street images with a great variety of viewpoints. 
Experimental results on public datasets and the proposed dataset show that our method significantly outperforms the state-of-the-art on perspective texts of arbitrary orientations.", "title": "" }, { "docid": "7fd7aa4b2c721a06e3d21a2e5fe608e5", "text": "Self-organization can be approached in terms of developmental processes occurring within and between component systems of temperament. Within-system organization involves progressive shaping of cortical representations by subcortical motivational systems. As cortical representations develop, they feed back to provide motivational systems with enhanced detection and guidance capabilities. These reciprocal influences may amplify the underlying motivational functions and promote excessive impulsivity or anxiety. However, these processes also depend upon interactions arising between motivational and attentional systems. We discuss these between-system effects by considering the regulation of approach motivation by reactive attentional processes related to fear and by more voluntary processes related to effortful control. It is suggested than anxious and impulsive psychopathology may reflect limitations in these dual means of control, which can take the form of overregulation as well as underregulation.", "title": "" }, { "docid": "7b63daa48a700194f04293542c83bb20", "text": "BACKGROUND\nPresent treatment strategies for rheumatoid arthritis include use of disease-modifying antirheumatic drugs, but a minority of patients achieve a good response. We aimed to test the hypothesis that an improved outcome can be achieved by employing a strategy of intensive outpatient management of patients with rheumatoid arthritis--for sustained, tight control of disease activity--compared with routine outpatient care.\n\n\nMETHODS\nWe designed a single-blind, randomised controlled trial in two teaching hospitals. We screened 183 patients for inclusion. 111 were randomly allocated either intensive management or routine care. Primary outcome measures were mean fall in disease activity score and proportion of patients with a good response (defined as a disease activity score <2.4 and a fall in this score from baseline by >1.2). Analysis was by intention-to-treat.\n\n\nFINDINGS\nOne patient withdrew after randomisation and seven dropped out during the study. Mean fall in disease activity score was greater in the intensive group than in the routine group (-3.5 vs -1.9, difference 1.6 [95% CI 1.1-2.1], p<0.0001). Compared with routine care, patients treated intensively were more likely to have a good response (definition, 45/55 [82%] vs 24/55 [44%], odds ratio 5.8 [95% CI 2.4-13.9], p<0.0001) or be in remission (disease activity score <1.6; 36/55 [65%] vs 9/55 [16%], 9.7 [3.9-23.9], p<0.0001). Three patients assigned routine care and one allocated intensive management died during the study; none was judged attributable to treatment.\n\n\nINTERPRETATION\nA strategy of intensive outpatient management of rheumatoid arthritis substantially improves disease activity, radiographic disease progression, physical function, and quality of life at no additional cost.", "title": "" }, { "docid": "03c588f89216ee5b0b6392730fe2159f", "text": "In this paper, a three-port converter with three active full bridges, two series-resonant tanks, and a three-winding transformer is proposed. It uses a single power conversion stage with high-frequency link to control power flow between batteries, load, and a renewable source such as fuel cell. 
The converter has capabilities of bidirectional power flow in the battery and the load port. Use of series-resonance aids in high switching frequency operation with realizable component values when compared to existing three-port converter with only inductors. The converter has high efficiency due to soft-switching operation in all three bridges. Steady-state analysis of the converter is presented to determine the power flow equations, tank currents, and soft-switching region. Dynamic analysis is performed to design a closed-loop controller that will regulate the load-side port voltage and source-side port current. Design procedure for the three-port converter is explained and experimental results of a laboratory prototype are presented.", "title": "" }, { "docid": "a1e881c993ad507e16e55c952c6a47dc", "text": "Nowadays, most of the information available on the web is in Natural Language. Extracting such knowledge from Natural Language text is an essential work and a very remarkable research topic in the Semantic Web field. The logic programming language Prolog, based on the definite-clause formalism, is a useful tool for implementing a Natural Language Processing (NLP) systems. However, web-based services for NLP have also been developed recently, and they represent an important alternative to be considered. In this paper we present the comparison between two different approaches in NLP, for the automatic creation of an OWL ontology supporting the semantic annotation of text. The first one is a pure Prolog approach, based on grammar and logic analysis rules. The second one is based on Watson Relationship Extraction service of IBM Cloud platform Bluemix. We evaluate the two approaches in terms of performance, the quality of NLP result, OWL completeness and richness.", "title": "" }, { "docid": "18011cbde7d1a16da234c1e886371a6c", "text": "The increased prevalence of cardiovascular disease among the aging population has prompted greater interest in the field of smart home monitoring and unobtrusive cardiac measurements. This paper introduces the design of a capacitive electrocardiogram (ECG) sensor that measures heart rate with no conscious effort from the user. The sensor consists of two active electrodes and an analog processing circuit that is low cost and customizable to the surfaces of common household objects. Prototype testing was performed in a home laboratory by embedding the sensor into a couch, walker, office and dining chairs. The sensor produced highly accurate heart rate measurements (<; 2.3% error) via either direct skin contact or through one and two layers of clothing. The sensor requires no gel dielectric and no grounding electrode, making it particularly suited to the “zero-effort” nature of an autonomous smart home environment. Motion artifacts caused by deviations in body contact with the electrodes were identified as the largest source of unreliability in continuous ECG measurements and will be a primary focus in the next phase of this project.", "title": "" }, { "docid": "3bea5eeea1e3b74917ea25c98b169289", "text": "Dissociation as a clinical psychiatric condition has been defined primarily in terms of the fragmentation and splitting of the mind, and perception of the self and the body. Its clinical manifestations include altered perceptions and behavior, including derealization, depersonalization, distortions of perception of time, space, and body, and conversion hysteria. 
Using examples of animal models, and the clinical features of the whiplash syndrome, we have developed a model of dissociation linked to the phenomenon of freeze/immobility. Also employing current concepts of the psychobiology of posttraumatic stress disorder (PTSD), we propose a model of PTSD linked to cyclical autonomic dysfunction, triggered and maintained by the laboratory model of kindling, and perpetuated by increasingly profound dorsal vagal tone and endorphinergic reward systems. These physiologic events in turn contribute to the clinical state of dissociation. The resulting autonomic dysregulation is presented as the substrate for a diverse group of chronic diseases of unknown origin.", "title": "" }, { "docid": "d46434bbbf73460bf422ebe4bd65b590", "text": "We present an efficient block-diagonal approximation to the Gauss-Newton matrix for feedforward neural networks. Our resulting algorithm is competitive against state-of-the-art first-order optimisation methods, with sometimes significant improvement in optimisation performance. Unlike first-order methods, for which hyperparameter tuning of the optimisation parameters is often a laborious process, our approach can provide good performance even when used with default settings. A side result of our work is that for piecewise linear transfer functions, the network objective function can have no differentiable local maxima, which may partially explain why such transfer functions facilitate effective optimisation.", "title": "" }, { "docid": "8ab5ae25073b869ea28fc25df3cfdf5f", "text": "We present the TurkuNLP entry to the BioNLP Shared Task 2016 Bacteria Biotopes event extraction (BB3-event) subtask. We propose a deep learningbased approach to event extraction using a combination of several Long Short-Term Memory (LSTM) networks over syntactic dependency graphs. Features for the proposed neural network are generated based on the shortest path connecting the two candidate entities in the dependency graph. We further detail how this network can be efficiently trained to have good generalization performance even when only a very limited number of training examples are available and part-of-speech (POS) and dependency type feature representations must be learned from scratch. Our method ranked second among the entries to the shared task, achieving an F-score of 52.1% with 62.3% precision and 44.8% recall.", "title": "" }, { "docid": "19d35c0f4e3f0b90d0b6e4d925a188e4", "text": "This paper presents a new approach to the computer aided diagnosis (CAD) of diabetic retinopathy (DR)—a common and severe complication of long-term diabetes which damages the retina and cause blindness. Since microaneurysms are regarded as the first signs of DR, there has been extensive research on effective detection and localization of these abnormalities in retinal images. In contrast to existing algorithms, a new approach based on multi-scale correlation filtering (MSCF) and dynamic thresholding is developed. This consists of two levels, microaneurysm candidate detection (coarse level) and true microaneurysm classification (fine level). The approach was evaluated based on two public datasets—ROC (retinopathy on-line challenge, http://roc.healthcare.uiowa.edu) and DIARETDB1 (standard diabetic retinopathy database, http://www.it.lut.fi/project/imageret/diaretdb1). We conclude our method to be effective and efficient. & 2010 Elsevier Ltd. 
All rights reserved.", "title": "" }, { "docid": "f20bbbd06561f9cde0f1d538667635e2", "text": "Artificial neural networks are finding many uses in the medical diagnosis application. The goal of this paper is to evaluate artificial neural network in disease diagnosis. Two cases are studied. The first one is acute nephritis disease; data is the disease symptoms. The second is the heart disease; data is on cardiac Single Proton Emission Computed Tomography (SPECT) images. Each patient classified into two categories: infected and non-infected. Classification is an important tool in medical diagnosis decision support. Feed-forward back propagation neural network is used as a classifier to distinguish between infected or non-infected person in both cases. The results of applying the artificial neural networks methodology to acute nephritis diagnosis based upon selected symptoms show abilities of the network to learn the patterns corresponding to symptoms of the person. In this study, the data were obtained from UCI machine learning repository in order to diagnosed diseases. The data is separated into inputs and targets. The targets for the neural network will be identified with 1's as infected and will be identified with 0's as non-infected. In the diagnosis of acute nephritis disease; the percent correctly classified in the simulation sample by the feed-forward back propagation network is 99 percent while in the diagnosis of heart disease; the percent correctly classified in the simulation sample by the feed-forward back propagation network is 95 percent.", "title": "" }, { "docid": "0757280353e6e1bd73b3d1cd11f6b031", "text": "OBJECTIVE\nTo investigate seasonal patterns in mood and behavior and estimate the prevalence of seasonal affective disorder (SAD) and subsyndromal seasonal affective disorder (S-SAD) in the Icelandic population.\n\n\nPARTICIPANTS AND SETTING\nA random sample generated from the Icelandic National Register, consisting of 1000 men and women aged 17 to 67 years from all parts of Iceland. It represents 6.4 per million of the Icelandic population in this age group.\n\n\nDESIGN\nThe Seasonal Pattern Assessment Questionnaire, an instrument for investigating mood and behavioral changes with the seasons, was mailed to a random sample of the Icelandic population. The data were compared with results obtained with similar methods in populations in the United States.\n\n\nMAIN OUTCOME MEASURES\nSeasonality score and prevalence rates of seasonal affective disorder and subsyndromal seasonal affective disorder.\n\n\nRESULTS\nThe prevalence of SAD and S-SAD were estimated at 3.8% and 7.5%, respectively, which is significantly lower than prevalence rates obtained with the same method on the east coast of the United States (chi 2 = 9.29 and 7.3; P < .01). The standardized rate ratios for Iceland compared with the United States were 0.49 and 0.63 for SAD and S-SAD, respectively. No case of summer SAD was found.\n\n\nCONCLUSIONS\nSeasonal affective disorder and S-SAD are more common in younger individuals and among women. The weight gained by patients during the winter does not seem to result in chronic obesity. The prevalence of SAD and S-SAD was lower in Iceland than on the East Coast of the United States, in spite of Iceland's more northern latitude. These results are unexpected since the prevalence of these disorders has been found to increase in more northern latitudes. The Icelandic population has remained remarkably isolated during the past 1000 years. 
It is conceivable that persons with a predisposition to SAD have been at a disadvantage and that there may have been a population selection toward increased tolerance of winter darkness.", "title": "" }, { "docid": "124c649cc8dc2d04e28043257ed8ddd4", "text": "TECSAR satellite is part of a spaceborne synthetic-aperture-radar (SAR) satellite technology demonstration program. The purpose of this program is to develop and evaluate the technologies required to achieve high-resolution images combined with large-area coverage. These requirements can be fulfilled by designing a satellite with multimode operation. The TECSAR satellite is developed by the MBT Space Division, Israel Aerospace Industries, acting as a prime contractor, which develops the satellite bus, and by ELTA Systems Ltd., which develops the SAR payload. This paper reviews the TECSAR radar system design, which enables to perform a variety of operational modes. It also describes the unique hardware components: deployable parabolic mesh antenna, multitube transmitter, and data-link transmission unit. The unique mosaic mode is presented. It is shown that this mode is the spot version of the scan mode.", "title": "" }, { "docid": "62c515d4b96f123b585a92a5aa919792", "text": "OBJECTIVE\nTo investigate the characteristics of the laryngeal mucosal microvascular network in suspected laryngeal cancer patients, using narrow band imaging, and to evaluate the value of narrow band imaging endoscopy in the early diagnosis of laryngeal precancerous and cancerous lesions.\n\n\nPATIENTS AND METHODS\nEighty-five consecutive patients with suspected precancerous or cancerous laryngeal lesions were enrolled in the study. Endoscopic narrow band imaging findings were classified into five types (I to V) according to the features of the mucosal intraepithelial papillary capillary loops assessed.\n\n\nRESULTS\nA total of 104 lesions (45 malignancies and 59 nonmalignancies) was detected under white light and narrow band imaging modes. The sensitivity and specificity of narrow band imaging in detecting malignant lesions were 88.9 and 93.2 per cent, respectively. The intraepithelial papillary capillary loop classification, as determined by narrow band imaging, was closely associated with the laryngeal lesions' histological findings. Type I to IV lesions were considered nonmalignant and type V lesions malignant. For type Va lesions, the sensitivity and specificity of narrow band imaging in detecting severe dysplasia or carcinoma in situ were 100 and 79.5 per cent, respectively. In patients with type Vb and Vc lesions, the sensitivity and specificity of narrow band imaging in detecting invasive carcinoma were 83.8 and 100 per cent, respectively.\n\n\nCONCLUSION\nNarrow band imaging is a promising approach enabling in vivo differentiation of nonmalignant from malignant laryngeal lesions by evaluating the morphology of mucosal capillaries. These results suggest endoscopic narrow band imaging may be useful in the early detection of laryngeal cancer and precancerous lesions.", "title": "" }, { "docid": "dfcb51bd990cce7fb7abfe8802dc0c4e", "text": "In this paper, we describe the machine learning approach we used in the context of the Automatic Cephalometric X-Ray Landmark Detection Challenge. Our solution is based on the use of ensembles of Extremely Randomized Trees combined with simple pixel-based multi-resolution features. 
By carefully tuning method parameters with cross-validation, our approach could reach detection rates ≥ 90% at an accuracy of 2.5mm for 8 landmarks. Our experiments show however a high variability between the different landmarks, with some landmarks detected at a much lower rate than others.", "title": "" }, { "docid": "6228498fed5b26c0def578251aa1c749", "text": "Observation-Level Interaction (OLI) is a sensemaking technique relying upon the interactive semantic exploration of data. By manipulating data items within a visualization, users provide feedback to an underlying mathematical model that projects multidimensional data into a meaningful two-dimensional representation. In this work, we propose, implement, and evaluate an OLI model which explicitly defines clusters within this data projection. These clusters provide targets against which data values can be manipulated. The result is a cooperative framework in which the layout of the data affects the clusters, while user-driven interactions with the clusters affect the layout of the data points. Additionally, this model addresses the OLI \"with respect to what\" problem by providing a clear set of clusters against which interaction targets are judged and computed.", "title": "" } ]
scidocsrr
ee2412b831c8c519d3e6a0993f259ac0
A new 5-transistor XOR-XNOR circuit based on the pass transistor logic
[ { "docid": "971398019db2fb255769727964f1e38a", "text": "Scaling down to deep submicrometer (DSM) technology has made noise a metric of equal importance as compared to power, speed, and area. Smaller feature size, lower supply voltage, and higher frequency are some of the characteristics for DSM circuits that make them more vulnerable to noise. New designs and circuit techniques are required in order to achieve robustness in the presence of noise. Novel methodologies for designing energy-efficient noise-tolerant exclusive-OR-exclusive-NOR circuits that can operate at low-supply voltages with good signal integrity and driving capability are proposed. The circuits designed, after applying the proposed methodologies, are characterized and compared with previously published circuits for reliability, speed and energy efficiency. To test the driving capability of the proposed circuits, they are embedded in an existing 5-2 compressor design. The average noise threshold energy (ANTE) is used for quantifying the noise immunity of the proposed circuits. Simulation results show that, compared with the best available circuit in the literature, the proposed circuits exhibit better noise-immunity, lower power-delay product (PDP) and good driving capability. All of the proposed circuits prove to be faster and successfully work at all ranges of supply voltage starting from 3.3 V down to 0.6 V. The savings in the PDP range from 94% to 21% for the given supply voltage range respectively and the average improvement in the ANTE is 2.67X.", "title": "" } ]
[ { "docid": "a32ea25ea3adc455dd3dfd1515c97ae3", "text": "Item-to-item collaborative filtering (aka.item-based CF) has been long used for building recommender systems in industrial settings, owing to its interpretability and efficiency in real-time personalization. It builds a user's profile as her historically interacted items, recommending new items that are similar to the user's profile. As such, the key to an item-based CF method is in the estimation of item similarities. Early approaches use statistical measures such as cosine similarity and Pearson coefficient to estimate item similarities, which are less accurate since they lack tailored optimization for the recommendation task. In recent years, several works attempt to learn item similarities from data, by expressing the similarity as an underlying model and estimating model parameters by optimizing a recommendation-aware objective function. While extensive efforts have been made to use shallow linear models for learning item similarities, there has been relatively less work exploring nonlinear neural network models for item-based CF. In this work, we propose a neural network model named Neural Attentive Item Similarity model (NAIS) for item-based CF. The key to our design of NAIS is an attention network, which is capable of distinguishing which historical items in a user profile are more important for a prediction. Compared to the state-of-the-art item-based CF method Factored Item Similarity Model (FISM) [1] , our NAIS has stronger representation power with only a few additional parameters brought by the attention network. Extensive experiments on two public benchmarks demonstrate the effectiveness of NAIS. This work is the first attempt that designs neural network models for item-based CF, opening up new research possibilities for future developments of neural recommender systems.", "title": "" }, { "docid": "a7607444b58f0e86000c7f2d09551fcc", "text": "Background modeling is a critical component for various vision-based applications. Most traditional methods tend to be inefficient when solving large-scale problems. In this paper, we introduce sparse representation into the task of large-scale stable-background modeling, and reduce the video size by exploring its discriminative frames. A cyclic iteration process is then proposed to extract the background from the discriminative frame set. The two parts combine to form our sparse outlier iterative removal (SOIR) algorithm. The algorithm operates in tensor space to obey the natural data structure of videos. Experimental results show that a few discriminative frames determine the performance of the background extraction. Furthermore, SOIR can achieve high accuracy and high speed simultaneously when dealing with real video sequences. Thus, SOIR has an advantage in solving large-scale tasks.", "title": "" }, { "docid": "cf219b9093dc55f09d067954d8049aeb", "text": "In this work we explore a straightforward variational Bayes scheme for Recurrent Neural Networks. Firstly, we show that a simple adaptation of truncated backpropagation through time can yield good quality uncertainty estimates and superior regularisation at only a small extra computational cost during training, also reducing the amount of parameters by 80%. Secondly, we demonstrate how a novel kind of posterior approximation yields further improvements to the performance of Bayesian RNNs. We incorporate local gradient information into the approximate posterior to sharpen it around the current batch statistics. 
We show how this technique is not exclusive to recurrent neural networks and can be applied more widely to train Bayesian neural networks. We also empirically demonstrate how Bayesian RNNs are superior to traditional RNNs on a language modelling benchmark and an image captioning task, as well as showing how each of these methods improve our model over a variety of other schemes for training them. We also introduce a new benchmark for studying uncertainty for language models so future methods can be easily compared.", "title": "" }, { "docid": "a9ed70274d7908193625717a80c3f2ea", "text": "Soft robotics is a growing area of research which utilizes the compliance and adaptability of soft structures to develop highly adaptive robotics for soft interactions. One area in which soft robotics has the ability to make significant impact is in the development of soft grippers and manipulators. With an increased requirement for automation, robotics systems are required to perform task in unstructured and not well defined environments; conditions which conventional rigid robotics are not best suited. This requires a paradigm shift in the methods and materials used to develop robots such that they can adapt to and work safely in human environments. One solution to this is soft robotics, which enables soft interactions with the surroundings while maintaining the ability to apply significant force. This review paper assesses the current materials and methods, actuation methods and sensors which are used in the development of soft manipulators. The achievements and shortcomings of recent technology in these key areas are evaluated, and this paper concludes with a discussion on the potential impacts of soft manipulators on industry and society.", "title": "" }, { "docid": "6c857ae5ce9db878c7ecd4263604874e", "text": "In the investigations of chaos in dynamical systems a major role is played by symbolic dynamics, i.e. the description of the system by a shift on a symbol space via conjugation. We examine whether any kind of noise can strengthen the stochastic behaviour of chaotic systems dramatically and what the consequences for the symbolic description are. This leads to the introduction of random subshifts of nite type which are appropriate for the description of quite general dynamical systems evolving under the innuence of noise and showing internal stochastic features. We investigate some of the ergodic and stochastic properties of these shifts and show situations when they behave dynamically like the common shifts. In particular we want to present examples where such random shift systems appear as symbolic descriptions.", "title": "" }, { "docid": "4ecac491b8029cf9de0ebe0d03bebec8", "text": "In this work, we aim at developing an unsupervised abstractive summarization system in the multi-document setting. We design a paraphrastic sentence fusion model which jointly performs sentence fusion and paraphrasing using skip-gram word embedding model at the sentence level. Our model improves the information coverage and at the same time abstractiveness of the generated sentences. We conduct our experiments on the human-generated multi-sentence compression datasets and evaluate our system on several newly proposed Machine Translation (MT) evaluation metrics. Furthermore, we apply our sentence level model to implement an abstractive multi-document summarization system where documents usually contain a related set of sentences. 
We also propose an optimal solution for the classical summary length limit problem which was not addressed in the past research. For the document level summary, we conduct experiments on the datasets of two different domains (e.g., news article and user reviews) which are well suited for multi-document abstractive summarization. Our experiments demonstrate that the methods bring significant improvements over the state-of-the-art methods.", "title": "" }, { "docid": "e7f771269ee99c04c69d1a7625a4196f", "text": "This report is a summary of Device-associated (DA) Module data collected by hospitals participating in the National Healthcare Safety Network (NHSN) for events occurring from January through December 2010 and re­ ported to the Centers for Disease Control and Prevention (CDC) by July 7, 2011. This report updates previously published DA Module data from the NHSN and provides contemporary comparative rates. This report comple­ ments other NHSN reports, including national and state-specific reports of standardized infection ratios for select health care-associated infections (HAIs). The NHSN was established in 2005 to integrate and supersede 3 legacy surveillance systems at the CDC: the National Nosocomial Infections Surveillance system, the Dialysis Surveillance Network, and the National Sur­ veillance System for Healthcare Workers. NHSN data col­ lection, reporting, and analysis are organized into 3 components—Patient Safety, Healthcare Personnel", "title": "" }, { "docid": "7f8ee14d2d185798c3864178bd450f3d", "text": "In this paper, a new sensing device that can simultaneously monitor traffic congestion and urban flash floods is presented. This sensing device is based on the combination of passive infrared sensors (PIRs) and ultrasonic rangefinder, and is used for real-time vehicle detection, classification, and speed estimation in the context of wireless sensor networks. This framework relies on dynamic Bayesian Networks to fuse heterogeneous data both spatially and temporally for vehicle detection. To estimate the speed of the incoming vehicles, we first use cross correlation and wavelet transform-based methods to estimate the time delay between the signals of different sensors. We then propose a calibration and self-correction model based on Bayesian Networks to make a joint inference by all sensors about the speed and the length of the detected vehicle. Furthermore, we use the measurements of the ultrasonic and the PIR sensors to perform vehicle classification. Validation data (using an experimental dual infrared and ultrasonic traffic sensor) show a 99% accuracy in vehicle detection, a mean error of 5 kph in vehicle speed estimation, a mean error of 0.7m in vehicle length estimation, and a high accuracy in vehicle classification. Finally, we discuss the computational performance of the algorithm, and show that this framework can be implemented on low-power computational devices within a wireless sensor network setting. Such decentralized processing greatly improves the energy consumption of the system and minimizes bandwidth usage.", "title": "" }, { "docid": "cd7c2eee84942324c77b6acd2b3e3e86", "text": "Learning word embeddings has received a significant amount of attention recently. Often, word embeddings are learned in an unsupervised manner from a large collection of text. The genre of the text typically plays an important role in the effectiveness of the resulting embeddings. 
How to effectively train word embedding models using data from different domains remains a problem that is underexplored. In this paper, we present a simple yet effective method for learning word embeddings based on text from different domains. We demonstrate the effectiveness of our approach through extensive experiments on various down-stream NLP tasks.", "title": "" }, { "docid": "aa5c22fa803a65f469236d2dbc5777a3", "text": "This article presents data on CVD and risk factors in Asian women. Data were obtained from available cohort studies and statistics for mortality from the World Health Organization. CVD is becoming an important public health problem among Asian women. There are high rates of CHD mortality in Indian and Central Asian women; rates are low in southeast and east Asia. Chinese and Indian women have very high rates and mortality from stroke; stroke is also high in central Asian and Japanese women. Hypertension and type 2 DM are as prevalent as in western women, but rates of obesity and smoking are less common. Lifestyle interventions aimed at prevention are needed in all areas.", "title": "" }, { "docid": "ca1c193e5e5af821772a5d123e84b72a", "text": "Over the last few years, the phenomenon of adversarial examples — maliciously constructed inputs that fool trained machine learning models — has captured the attention of the research community, especially when the adversary is restricted to small modifications of a correctly handled input. Less surprisingly, image classifiers also lack human-level performance on randomly corrupted images, such as images with additive Gaussian noise. In this paper we provide both empirical and theoretical evidence that these are two manifestations of the same underlying phenomenon, establishing close connections between the adversarial robustness and corruption robustness research programs. This suggests that improving adversarial robustness should go hand in hand with improving performance in the presence of more general and realistic image corruptions. Based on our results we recommend that future adversarial defenses consider evaluating the robustness of their methods to distributional shift with benchmarks such as Imagenet-C.", "title": "" }, { "docid": "17247d2991fac47bcd675f547a5c8185", "text": "In this paper, we describe an approach for efficiently streaming large and highly detailed 3D city models, which is based on open standards and open source developments. This approach meets both the rendering performance requirements in WebGL enabled web browsers and the requirements by 3D Geographic Information Systems regarding data structuring, geo-referencing and accessibility of feature properties. 3D city models are assumed to be available as CityGML data sets due to its widespread adoption by public authorities. The Cesium.js open source virtual globe is used as a platform for embedding custom 3D assets. glTF and related formats are used for efficiently encoding 3D data and for enabling streaming of large 3D models. In order to fully exploit the capabilities of web browsers and standard internet protocols, a series of filtering and data processing steps must be performed, which are described in this paper.", "title": "" }, { "docid": "5eeb17964742e1bf1e517afcb1963b02", "text": "Global navigation satellite system reflectometry is a multistatic radar using navigation signals as signals of opportunity. It provides wide-swath and improved spatiotemporal sampling over current space-borne missions. 
The lack of experimental datasets from space covering signals from multiple constellations (GPS, GLONASS, Galileo, and Beidou) at dual-band (L1 and L2) and dual-polarization (right- and left-hand circular polarization), over the ocean, land, and cryosphere remains a bottleneck to further develop these techniques. 3Cat-2 is a 6-unit (3 × 2 elementary blocks of 10 × 10 × 10 cm3) CubeSat mission designed and implemented at the Universitat Politècnica de Catalunya-BarcelonaTech to explore fundamental issues toward an improvement in the understanding of the bistatic scattering properties of different targets. Since geolocalization of the specific reflection points is determined by the geometry only, a moderate pointing accuracy is only required to correct the antenna pattern in scatterometry measurements. This paper describes the mission analysis and the current status of the assembly, integration, and verification activities of both the engineering model and the flight model performed at Universitat Politècnica de Catalunya NanoSatLab premises. 3Cat-2 launch is foreseen for the second quarter of 2016 into a Sun-Synchronous orbit of 510-km height.", "title": "" }, { "docid": "461a4911e3dedf13db369d2b85861f77", "text": "This paper proposes a novel approach using a coarse-to-fine analysis strategy for sentence-level emotion classification which takes into consideration of similarities to sentences in training set as well as adjacent sentences in the context. First, we use intra-sentence based features to determine the emotion label set of a target sentence coarsely through the statistical information gained from the label sets of the k most similar sentences in the training data. Then, we use the emotion transfer probabilities between neighboring sentences to refine the emotion labels of the target sentences. Such iterative refinements terminate when the emotion classification converges. The proposed algorithm is evaluated on Ren-CECps, a Chinese blog emotion corpus. Experimental results show that the coarse-to-fine emotion classification algorithm improves the sentence-level emotion classification by 19.11% on the average precision metric, which outperforms the baseline methods.", "title": "" }, { "docid": "61953281f4b568ad15e1f62be9d68070", "text": "Most of the effort in today’s digital forensics community lies in the retrieval and analysis of existing information from computing systems. Little is being done to increase the quantity and quality of the forensic information on today’s computing systems. In this paper we pose the question of what kind of information is desired on a system by a forensic investigator. We give an overview of the information that exists on current systems and discuss its shortcomings. We then examine the role that file system metadata plays in digital forensics and analyze what kind of information is desirable for different types of forensic investigations, how feasible it is to obtain it, and discuss issues about storing the information.", "title": "" }, { "docid": "90dc36628f9262157ea8722d82830852", "text": "Inexpensive fixed wing UAV are increasingly useful in remote sensing operations. They are a cheaper alternative to manned vehicles, and are ideally suited for dangerous or monotonous missions that would be inadvisable for a human pilot. Groups of UAV are of special interest for their abilities to coordinate simultaneous coverage of large areas, or cooperate to achieve goals such as mapping. 
Cooperation and coordination in UAV groups also allows increasingly large numbers of aircraft to be operated by a single user. Specific applications under consideration for groups of cooperating UAV are border patrol, search and rescue, surveillance, communications relaying, and mapping of hostile territory. The capabilities of small UAV continue to grow with advances in wireless communications and computing power. Accordingly, research topics in cooperative UAV control include efficient computer vision for real-time navigation and networked computing and communication strategies for distributed control, as well as traditional aircraft-related topics such as collision avoidance and formation flight. Emerging results in cooperative UAV control are presented via discussion of these topics, including particular requirements, challenges, and some promising strategies relating to each area. Case studies from a variety of programs highlight specific solutions and recent results, ranging from pure simulation to control of multiple UAV. This wide range of case studies serves as an overview of current problems of Interest, and does not present every relevant result.", "title": "" }, { "docid": "64d14f0be0499ddb4183fe9c48653205", "text": "Many analysis and machine learning tasks require the availability of marginal statistics on multidimensional datasets while providing strong privacy guarantees for the data subjects. Applications for these statistics range from finding correlations in the data to fitting sophisticated prediction models. In this paper, we provide a set of algorithms for materializing marginal statistics under the strong model of local differential privacy. We prove the first tight theoretical bounds on the accuracy of marginals compiled under each approach, perform empirical evaluation to confirm these bounds, and evaluate them for tasks such as modeling and correlation testing. Our results show that releasing information based on (local) Fourier transformations of the input is preferable to alternatives based directly on (local) marginals.", "title": "" }, { "docid": "0f9a4d22cc7f63ea185f3f17759e185a", "text": "Image super-resolution (SR) reconstruction is essentially an ill-posed problem, so it is important to design an effective prior. For this purpose, we propose a novel image SR method by learning both non-local and local regularization priors from a given low-resolution image. The non-local prior takes advantage of the redundancy of similar patches in natural images, while the local prior assumes that a target pixel can be estimated by a weighted average of its neighbors. Based on the above considerations, we utilize the non-local means filter to learn a non-local prior and the steering kernel regression to learn a local prior. By assembling the two complementary regularization terms, we propose a maximum a posteriori probability framework for SR recovery. Thorough experimental results suggest that the proposed SR method can reconstruct higher quality results both quantitatively and perceptually.", "title": "" }, { "docid": "13f7df2198bfe474e92e0072a3de2f9b", "text": "Humans and other primates shift their gaze to allocate processing resources to a subset of the visual input. Understanding and emulating the way that human observers freeview a natural scene has both scientific and economic impact. It has therefore attracted the attention from researchers in a wide range of science and engineering disciplines. 
With the ever increasing computational power, machine learning has become a popular tool to mine human data in the exploration of how people direct their gaze when inspecting a visual scene. This paper reviews recent advances in learning saliency-based visual attention and discusses several key issues in this topic. © 2012 Elsevier B.V. All rights reserved.", "title": "" } ]
scidocsrr
27a5fb33ff8a2ae0a8e59311b8188740
Interactive software maps for web-based source code analysis
[ { "docid": "124c73eb861c0b2fb64d0084b3961859", "text": "Treemaps are an important and commonly-used approach to hierarchy visualization, but an important limitation of treemaps is the difficulty of discerning the structure of a hierarchy. This paper presents cascaded treemaps, a new approach to treemap presentation that is based in cascaded rectangles instead of the traditional nested rectangles. Cascading uses less space to present the same containment relationship, and the space savings enable a depth effect and natural padding between siblings in complex hierarchies. In addition, we discuss two general limitations of existing treemap layout algorithms: disparities between node weight and relative node size that are introduced by layout algorithms ignoring the space dedicated to presenting internal nodes, and a lack of stability when generating views of different levels of treemaps as a part of supporting interactive zooming. We finally present a two-stage layout process that addresses both concerns, computing a stable structure for the treemap and then using that structure to consider the presentation of internal nodes when arranging the treemap. All of this work is presented in the context of two large real-world hierarchies, the Java package hierarchy and the eBay auction hierarchy.", "title": "" } ]
[ { "docid": "adccd039cc54352eefd855567e8eeb62", "text": "In this paper, we propose a novel classification method for the four types of lung nodules, i.e., well-circumscribed, vascularized, juxta-pleural, and pleural-tail, in low dose computed tomography scans. The proposed method is based on contextual analysis by combining the lung nodule and surrounding anatomical structures, and has three main stages: an adaptive patch-based division is used to construct concentric multilevel partition; then, a new feature set is designed to incorporate intensity, texture, and gradient information for image patch feature description, and then a contextual latent semantic analysis-based classifier is designed to calculate the probabilistic estimations for the relevant images. Our proposed method was evaluated on a publicly available dataset and clearly demonstrated promising classification performance.", "title": "" }, { "docid": "fec18dd0fba50779f8e8cc8d83c947e5", "text": "Trust plays important roles in diverse decentralized environments, including our society at large. Computational trust models help to, for instance, guide users' judgements in online auction sites about other users; or determine quality of contributions in web 2.0 sites. Most of the existing trust models, however, require historical information about past behavior of a specific agent being evaluated - information that is not always available. In contrast, in real life interactions among users, in order to make the first guess about the trustworthiness of a stranger, we commonly use our \"instinct\" - essentially stereotypes developed from our past interactions with \"similar\" people. We propose StereoTrust, a computational trust model inspired by real life stereotypes. A user forms stereotypes using her previous transactions with other agents. A stereotype contains certain features of agents and an expected outcome of the transaction. These features can be taken from agents' profile information, or agents' observed behavior in the system. When facing a stranger, the stereotypes matching stranger's profile are aggregated to derive his expected trust. Additionally, when some information about stranger's previous transactions is available, StereoTrust uses it to refine the stereotype matching. According to our experiments, StereoTrust compares favorably with existing trust models that use different kind of information and more complete historical information. Moreover, because evaluation is done according to user's personal stereotypes, the system is completely distributed and the result obtained is personalized. StereoTrust can be used as a complimentary mechanism to provide the initial trust value for a stranger, especially when there is no trusted, common third parties.", "title": "" }, { "docid": "59b26acc158c728cf485eae27de665f7", "text": "The ability of the parasite Plasmodium falciparum to evade the immune system and be sequestered within human small blood vessels is responsible for severe forms of malaria. The sequestration depends on the interaction between human endothelial receptors and P. falciparum erythrocyte membrane protein 1 (PfEMP1) exposed on the surface of the infected erythrocytes (IEs). In this study, the transcriptomes of parasite populations enriched for parasites that bind to human P-selectin, E-selectin, CD9 and CD151 receptors were analysed. 
IT4_var02 and IT4_var07 were specifically expressed in IT4 parasite populations enriched for P-selectin-binding parasites; eight var genes (IT4_var02/07/09/13/17/41/44/64) were specifically expressed in isolate populations enriched for CD9-binding parasites. Interestingly, IT4 parasite populations enriched for E-selectin- and CD151-binding parasites showed identical expression profiles to those of a parasite population exposed to wild-type CHO-745 cells. The same phenomenon was observed for the 3D7 isolate population enriched for binding to P-selectin, E-selectin, CD9 and CD151. This implies that the corresponding ligands for these receptors have either weak binding capacity or do not exist on the IE surface. Conclusively, this work expanded our understanding of P. falciparum adhesive interactions, through the identification of var transcripts that are enriched within the selected parasite populations.", "title": "" }, { "docid": "caad87e49a39569d3af1fe646bd0bde2", "text": "Over the last years, a variety of pervasive games was developed. Although some of these applications were quite successful in bringing digital games back to the real world, very little is known about their successful integration into smart environments. When developing video games, developers can make use of a broad variety of heuristics. Using these heuristics to guide the development process of applications for intelligent environments could significantly increase their functional quality. This paper addresses the question, whether existing heuristics can be used by pervasive game developers, or if specific design guidelines for smart home environments are required. In order to give an answer, the transferability of video game heuristics was evaluated in a two-step process. In a first step, a set of validated heuristics was analyzed to identify platform-dependent elements. In a second step, the transferability of those elements was assessed in a focus group study.", "title": "" }, { "docid": "8ddb7c62f032fb07116e7847e69b51d1", "text": "Software requirements are the foundations from which quality is measured. Measurement enables to improve the software process; assist in planning, tracking and controlling the software project and assess the quality of the software thus produced. Quality issues such as accuracy, security and performance are often crucial to the success of a software system. Quality should be maintained from starting phase of software development. Requirements management, play an important role in maintaining quality of software. A project can deliver the right solution on time and within budget with proper requirements management. Software quality can be maintained by checking quality attributes in requirements document. Requirements metrics such as volatility, traceability, size and completeness are used to measure requirements engineering phase of software development lifecycle. Manual measurement is expensive, time consuming and prone to error therefore automated tools should be used. Automated requirements tools are helpful in measuring requirements metrics. 
The aim of this paper is to study, analyze requirements metrics and automated requirements tools, which will help in choosing right metrics to measure software development based on the evaluation of Automated Requirements Tools", "title": "" }, { "docid": "e4db0ee5c4e2a5c87c6d93f2f7536f15", "text": "Despite the importance of sparsity in many big data applications, there are few existing methods for efficient distributed optimization of sparsely-regularized objectives. In this paper, we present a communication-efficient framework for L1-regularized optimization in distributed environments. By taking a nontraditional view of classical objectives as part of a more general primal-dual setting, we obtain a new class of methods that can be efficiently distributed and is applicable to common L1-regularized regression and classification objectives, such as Lasso, sparse logistic regression, and elastic net regression. We provide convergence guarantees for this framework and demonstrate strong empirical performance as compared to other stateof-the-art methods on several real-world distributed datasets.", "title": "" }, { "docid": "e066761ecb7d8b7468756fb4be6b8fcb", "text": "The surest way to increase the system capacity of a wireless link is by getting the transmitter and receiver closer to each other, which creates the dual benefits of higher-quality links and more spatial reuse. In a network with nomadic users, this inevitably involves deploying more infrastructure, typically in the form of microcells, hot spots, distributed antennas, or relays. A less expensive alternative is the recent concept of femtocells - also called home base stations - which are data access points installed by home users to get better indoor voice and data coverage. In this article we overview the technical and business arguments for femtocells and describe the state of the art on each front. We also describe the technical challenges facing femtocell networks and give some preliminary ideas for how to overcome them.", "title": "" }, { "docid": "7928ad4d18e3f3eaaf95fa0b49efafa0", "text": "Associative classifiers have been proposed to achieve an accurate model with each individual rule being interpretable. However, existing associative classifiers often consist of a large number of rules and, thus, can be difficult to interpret. We show that associative classifiers consisting of an ordered rule set can be represented as a tree model. From this view, it is clear that these classifiers are restricted in that at least one child node of a non-leaf node is never split. We propose a new tree model, i.e., condition-based tree (CBT), to relax the restriction. Furthermore, we also propose an algorithm to transform a CBT to an ordered rule set with concise rule conditions. This ordered rule set is referred to as a condition-based classifier (CBC). Thus, the interpretability of an associative classifier is maintained, but more expressive models are possible. The rule transformation algorithm can be also applied to regular binary decision trees to extract an ordered set of rules with simple
Experimental studies show that CBC has competitive accuracy performance, and has a significantly smaller number of rules (median of 10 rules per data set) than well-known associative classifiers such as CBA (median of 47) and GARC (median of 21). CBC with feature selection has even a smaller number of rules.", "title": "" }, { "docid": "38f85a10e8f8b815974f5e42386b1fa3", "text": "Because Facebook is available on hundreds of millions of desktop and mobile computing platforms around the world and because it is available on many different kinds of platforms (from desktops and laptops running Windows, Unix, or OS X to hand held devices running iOS, Android, or Windows Phone), it would seem to be the perfect place to conduct steganography. On Facebook, information hidden in image files will be further obscured within the millions of pictures and other images posted and transmitted daily. Facebook is known to alter and compress uploaded images so they use minimum space and bandwidth when displayed on Facebook pages. The compression process generally disrupts attempts to use Facebook for image steganography. This paper explores a method to minimize the disruption so JPEG images can be used as steganography carriers on Facebook.", "title": "" }, { "docid": "0d3e55a7029d084f6ba889b7d354411c", "text": "Electrophysiological and computational studies suggest that nigro-striatal dopamine may play an important role in learning about sequences of environmentally important stimuli, particularly when this learning is based upon step-by-step associations between stimuli, such as in second-order conditioning. If so, one would predict that disruption of the midbrain dopamine system--such as occurs in Parkinson's disease--may lead to deficits on tasks that rely upon such learning processes. This hypothesis was tested using a \"chaining\" task, in which each additional link in a sequence of stimuli leading to reward is trained step-by-step, until a full sequence is learned. We further examined how medication (L-dopa) affects this type of learning. As predicted, we found that Parkinson's patients tested 'off' L-dopa performed as well as controls during the first phase of this task, when required to learn a simple stimulus-response association, but were impaired at learning the full sequence of stimuli. In contrast, we found that Parkinson's patients tested 'on' L-dopa performed better than those tested 'off', and no worse than controls, on all phases of the task. These findings suggest that the loss of dopamine that occurs in Parkinson's disease can lead to specific learning impairments that are predicted by electrophysiological and computational studies, and that enhancing dopamine levels with L-dopa alleviates this deficit. This last result raises questions regarding the mechanisms by which midbrain dopamine modulates learning in Parkinson's disease, and how L-dopa affects these processes.", "title": "" }, { "docid": "18dbbf0338d138f71a57b562883f0677", "text": "We present the analytical capability of TecDEM, a MATLAB toolbox used in conjunction with Global DEMs for the extraction of tectonic geomorphologic information. TecDEM includes a suite of algorithms to analyze topography, extracted drainage networks and sub-basins. The aim of part 2 of this paper series is the generation of morphometric maps for surface dynamics and basin analysis. TecDEM therefore allows the extraction of parameters such as isobase, incision, drainage density and surface roughness maps. 
We also provide tools for basin asymmetry and hypsometric analysis. These are efficient graphical user interfaces (GUIs) for mapping drainage deviation from basin mid-line and basin hypsometry. A morphotectonic interpretation of the Kaghan Valley (Northern Pakistan) is performed with TecDEM and the findings indicate a high correlation between surface dynamics and basin analysis parameters with neotectonic features in the study area. © 2010 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "5e896b2d47853088dc51323507f2f23a", "text": "A number of Learning Management Systems (LMSs) exist on the market today. A subset of a LMS is the component in which student assessment is managed. In some forms of assessment, such as open questions, the LMS is incapable of evaluating the students’ responses and therefore human intervention is necessary. In order to assess at higher levels of Bloom’s (1956) taxonomy, it is necessary to include open-style questions in which the student is given the task as well as the freedom to arrive at a response without the comfort of recall words and/or phrases. Automating the assessment process of open questions is an area of research that has been ongoing since the 1960s. Earlier work focused on statistical or probabilistic approaches based primarily on conceptual understanding. Recent gains in Natural Language Processing have resulted in a shift in the way in which free text can be evaluated. This has allowed for a more linguistic approach which focuses heavily on factual understanding. This study will leverage the research conducted in recent studies in the area of Natural Language Processing, Information Extraction and Information Retrieval in order to provide a fair, timely and accurate assessment of student responses to open questions based on the semantic meaning of those responses.", "title": "" }, { "docid": "71c7c98b55b2b2a9c475d4522310cfaa", "text": "This paper studies an active underground economy which specializes in the commoditization of activities such as credit card fraud, identity theft, spamming, phishing, online credential theft, and the sale of compromised hosts. Using a seven month trace of logs collected from an active underground market operating on public Internet chat networks, we measure how the shift from “hacking for fun” to “hacking for profit” has given birth to a societal substrate mature enough to steal wealth into the millions of dollars in less than one year.", "title": "" }, { "docid": "afc12fcceaf1bc1de724ba6e7935c086", "text": "OLAP tools have been extensively used by enterprises to make better and faster decisions. Nevertheless, they require users to specify group-by attributes and know precisely what they are looking for. This paper takes the first attempt towards automatically extracting top-k insights from multi-dimensional data. This is useful not only for non-expert users, but also reduces the manual effort of data analysts. In particular, we propose the concept of insight which captures interesting observation derived from aggregation results in multiple steps (e.g., rank by a dimension, compute the percentage of measure by a dimension). An example insight is: “Brand B's rank (across brands) falls along the year, in terms of the increase in sales”. Our problem is to compute the top-k insights by a score function. It poses challenges on (i) the effectiveness of the result and (ii) the efficiency of computation. We propose a meaningful scoring function for insights to address (i). 
Then, we contribute a computation framework for top-k insights, together with a suite of optimization techniques (i.e., pruning, ordering, specialized cube, and computation sharing) to address (ii). Our experimental study on both real data and synthetic data verifies the effectiveness and efficiency of our proposed solution.", "title": "" }, { "docid": "caa60a57e847cec04d16f9281b3352f3", "text": "Part-based trackers are effective in exploiting local details of the target object for robust tracking. In contrast to most existing part-based methods that divide all kinds of target objects into a number of fixed rectangular patches, in this paper, we propose a novel framework in which a set of deformable patches dynamically collaborate on tracking of non-rigid objects. In particular, we proposed a shape-preserved kernelized correlation filter (SP-KCF) which can accommodate target shape information for robust tracking. The SP-KCF is introduced into the level set framework for dynamic tracking of individual patches. In this manner, our proposed deformable patches are target-dependent, have the capability to assume complex topology, and are deformable to adapt to target variations. As these deformable patches properly capture individual target subregions, we exploit their photometric discrimination and shape variation to reveal the trackability of individual target subregions, which enables the proposed tracker to dynamically take advantage of those subregions with good trackability for target likelihood estimation. Finally the shape information of these deformable patches enables accurate object contours to be computed as the tracking output. Experimental results on the latest public sets of challenging sequences demonstrate the effectiveness of the proposed method.", "title": "" }, { "docid": "3519172a7bf6d4183484c613dcc65b0a", "text": "There has been minimal attention paid in the literature to the aesthetics of the perioral area, either in youth or in senescence. Aging around the lips traditionally was thought to result from a combination of thinning skin surrounding the area, ptosis, and loss of volume in the lips. The atrophy of senescence was treated by adding volume to the lips and filling the deep nasolabial creases. There is now a growing appreciation for the role of volume enhancement in the perioral region and the sunken midface, as well as for dentition, in the resting and dynamic appearance of the perioral area (particularly in youth). In this article, the authors describe the senior author's (BG) preferred methods for aesthetic enhancement of the perioral region and his rejuvenative techniques developed over the past 28 years. The article describes the etiologies behind the dysmorphologies in this area and presents a problem-oriented algorithm for treating them.", "title": "" }, { "docid": "872ef59b5bec5f6cbb9fcb206b6fe49e", "text": "In this paper, the analysis and design of a three-level LLC series resonant converter (TL LLC SRC) for high- and wide-input-voltage applications is presented. The TL LLC SRC discussed in this paper consists of two half-bridge LLC SRCs in series, sharing a resonant inductor and a transformer. Its main advantages are that the voltage across each switch is clamped at half of the input voltage and that voltage balance is achieved. Thus, it is suitable for high-input-voltage applications. 
Moreover, due to its simple driving signals, the additional circulating current of the conventional TL LLC SRCs does not appear in the converter, and a simpler driving circuitry is allowed to be designed. With this converter, the operation principles, the gain of the LLC resonant tank, and the zero-voltage-switching condition under wide input voltage variation are analyzed. Both the current and voltage stresses over different design factors of the resonant tank are discussed as well. Based on the results of these analyses, a design example is provided and its validity is confirmed by an experiment involving a prototype converter with an input of 400-600 V and an output of 48 V/20 A. In addition, a family of TL LLC SRCs with double-resonant tanks for high-input-voltage applications is introduced. While this paper deals with a TL LLC SRC, the analysis results can be applied to other TL LLC SRCs for wide-input-voltage applications.", "title": "" }, { "docid": "7c99299463d7f2a703f7bd9fbec3df74", "text": "Group emotional contagion, the transfer of moods among people in a group, and its influence on work group dynamics was examined in a laboratory study of managerial decision making using multiple, convergent measures of mood, individual attitudes, behavior, and group-level dynamics. Using a 2 times 2 experimental design, with a trained confederate enacting mood conditions, the predicted effect of emotional contagion was found among group members, using both outside coders' ratings of participants' mood and participants' selfreported mood. No hypothesized differences in contagion effects due to the degree of pleasantness of the mood expressed and the energy level with which it was conveyed were found. There was a significant influence of emotional contagion on individual-level attitudes and group processes. As predicted, the positive emotional contagion group members experienced improved cooperation, decreased conflict, and increased perceived task performance. Theoretical implications and practical ramifications of emotional contagion in groups and organizations are discussed.", "title": "" }, { "docid": "22fc1e303a4c2e7d1e5c913dca73bd9e", "text": "The artificial potential field (APF) approach provides a simple and effective motion planning method for practical purpose. However, artificial potential field approach has a major problem, which is that the robot is easy to be trapped at a local minimum before reaching its goal. The avoidance of local minimum has been an active research topic in path planning by potential field. In this paper, we introduce several methods to solve this problem, emphatically, introduce and evaluate the artificial potential field approach with simulated annealing (SA). 
As one of the powerful techniques for escaping local minimum, simulated annealing has been applied to local and global path planning", "title": "" }, { "docid": "83742a3fcaed826877074343232be864", "text": "In this paper we propose a design of the main modulation and demodulation units of a modem compliant with the new DVB-S2 standard (Int. J. Satellite Commun. 2004; 22:249–268). A typical satellite channel model consistent with the targeted applications of the aforementioned standard is assumed. In particular, non-linear pre-compensation as well as synchronization techniques are described in detail and their performance assessed by means of analysis and computer simulations. The proposed algorithms are shown to provide a good trade-off between complexity and performance and they apply to both the broadcast and the unicast profiles, the latter allowing the exploitation of adaptive coding and modulation (ACM) (Proceedings of the 20th AIAA Satellite Communication Systems Conference, Montreal, AIAA-paper 2002-1863, May 2002). Finally, end-to-end system performances in terms of BER versus the signal-to-noise ratio are shown as a result of extensive computer simulations. The whole communication chain is modelled in these simulations, including the BCH and LDPC coder, the modulator with the pre-distortion techniques, the satellite transponder model with its typical impairments, the downlink chain inclusive of the RF-front-end phase noise, the demodulator with the synchronization sub-system units and finally the LDPC and BCH decoders. Copyright © 2004 John Wiley & Sons, Ltd.", "title": "" } ]
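As an illustrative aside to the artificial potential field passage above: a minimal sketch of how an attractive/repulsive potential and a simulated-annealing style escape from local minima can be combined is shown below. The potential form, gains, step size, and cooling schedule are assumed values for illustration, not taken from the cited work.

```python
import math
import random

def potential(pos, goal, obstacles, k_att=1.0, k_rep=100.0, d0=2.0):
    """Total artificial potential: quadratic attraction to the goal plus a
    repulsive term for every obstacle closer than the influence radius d0."""
    u = 0.5 * k_att * ((pos[0] - goal[0]) ** 2 + (pos[1] - goal[1]) ** 2)
    for ox, oy in obstacles:
        d = math.hypot(pos[0] - ox, pos[1] - oy)
        if 0.0 < d < d0:
            u += 0.5 * k_rep * (1.0 / d - 1.0 / d0) ** 2
    return u

def plan(start, goal, obstacles, step=0.1, temp=1.0, cooling=0.99, max_iter=5000):
    """Greedy descent on the potential field; when no neighbouring move lowers it
    (a local minimum), accept an uphill move with simulated-annealing probability."""
    pos, path = tuple(start), [tuple(start)]
    for _ in range(max_iter):
        if math.hypot(goal[0] - pos[0], goal[1] - pos[1]) < step:
            break  # goal reached (within one step)
        u_cur = potential(pos, goal, obstacles)
        # candidate moves: the eight compass directions plus one larger random jump
        cands = [(pos[0] + step * dx, pos[1] + step * dy)
                 for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]
        cands.append((pos[0] + random.uniform(-5 * step, 5 * step),
                      pos[1] + random.uniform(-5 * step, 5 * step)))
        best = min(cands, key=lambda c: potential(c, goal, obstacles))
        if potential(best, goal, obstacles) < u_cur:
            pos = best  # ordinary downhill step
        else:
            cand = random.choice(cands)  # trapped: try an annealed escape move
            delta = potential(cand, goal, obstacles) - u_cur
            if random.random() < math.exp(-delta / max(temp, 1e-9)):
                pos = cand
        temp *= cooling  # cool the temperature so escapes become rarer over time
        path.append(pos)
    return path
```

The annealed acceptance of occasional uphill moves is what lets the planner leave a local minimum that plain gradient descent on the potential cannot escape.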
scidocsrr
609ca5aa81db62f38bf6ea117f3271b6
RSSI based indoor and outdoor distance estimation for localization in WSN
[ { "docid": "45bd28fbea66930fca36bc20328d6d6f", "text": "Localization is one of the most challenging and important issues in wireless sensor networks (WSNs), especially if cost-effective approaches are demanded. In this paper, we present, intensively discuss, and analyze approaches relying on the received signal strength indicator (RSSI). The advantage of employing the RSSI values is that no extra hardware (e.g. ultrasonic or infra-red) is needed for network-centric localization. We studied different factors that affect the measured RSSI values. Finally, we evaluate two methods to estimate the distance; the first approach is based on statistical methods. For the second one, we use an artificial neural network to estimate the distance.", "title": "" } ]
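As an illustrative aside to the positive passage above: the statistical route from an RSSI reading to a distance estimate is commonly based on the log-distance path-loss model, sketched minimally below. The reference power, reference distance, path-loss exponent, and the toy calibration pairs are assumptions, not measurements from the cited study.

```python
import math

def rssi_to_distance(rssi_dbm, rssi_at_d0=-40.0, d0=1.0, n=2.7):
    """Log-distance path-loss model:
    RSSI(d) = RSSI(d0) - 10 * n * log10(d / d0), hence
    d = d0 * 10 ** ((RSSI(d0) - RSSI(d)) / (10 * n)).
    rssi_at_d0 is the power measured at reference distance d0 (metres) and
    n is the environment-dependent path-loss exponent (both assumed here)."""
    return d0 * 10 ** ((rssi_at_d0 - rssi_dbm) / (10.0 * n))

def fit_path_loss_exponent(samples, rssi_at_d0=-40.0, d0=1.0):
    """Least-squares fit of the exponent n from (distance_m, mean_rssi_dbm) pairs."""
    num = den = 0.0
    for d, rssi in samples:
        x = -10.0 * math.log10(d / d0)  # model: rssi - rssi_at_d0 = n * x
        num += x * (rssi - rssi_at_d0)
        den += x * x
    return num / den

# toy calibration data (assumed), then a distance estimate for a new reading
calib = [(1.0, -40.5), (2.0, -48.0), (4.0, -56.5), (8.0, -64.0)]
n_hat = fit_path_loss_exponent(calib)
print(round(n_hat, 2), round(rssi_to_distance(-60.0, n=n_hat), 2))
```

Fitting the path-loss exponent from a few calibration pairs per environment is one simple way to account for the indoor versus outdoor differences that motivate this kind of study.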
[ { "docid": "6cbd51bbef3b56df6d97ec7b4348cd94", "text": "This study reviews human clinical experience to date with several synthetic cannabinoids, including nabilone, levonantradol, ajulemic acid (CT3), dexanabinol (HU-211), HU-308, and SR141716 (Rimonabant®). Additionally, the concept of “clinical endogenous cannabinoid deficiency” is explored as a possible factor in migraine, idiopathic bowel disease, fibromyalgia and other clinical pain states. The concept of analgesic synergy of cannabinoids and opioids is addressed. A cannabinoid-mediated improvement in night vision at the retinal level is discussed, as well as its potential application to treatment of retinitis pigmentosa and other conditions. Additionally noted is the role of cannabinoid treatment in neuroprotection and its application to closed head injury, cerebrovascular accidents, and CNS degenerative diseases including Alzheimer, Huntington, Parkinson diseases and ALS. Excellent clinical results employing cannabis based medicine extracts (CBME) in spasticity and spasms of MS suggests extension of such treatment to other spasmodic and dystonic conditions. Finally, controversial areas of cannabinoid treatment in obstetrics, gynecology and pediatrics are addressed along with a rationale for such interventions. [Article copies available for a fee from The Haworth Document Delivery Service: 1-800-HAWORTH. E-mail address: <docdelivery@haworthpress. com> Website: <http://www.HaworthPress.com>  2003 by The Haworth Press, Inc. All rights reserved.]", "title": "" }, { "docid": "643d75042a38c24b0e4130cb246fc543", "text": "Silicon carbide (SiC) switching power devices (MOSFETs, JFETs) of 1200 V rating are now commercially available, and in conjunction with SiC diodes, they offer substantially reduced switching losses relative to silicon (Si) insulated gate bipolar transistors (IGBTs) paired with fast-recovery diodes. Low-voltage industrial variable-speed drives are a key application for 1200 V devices, and there is great interest in the replacement of the Si IGBTs and diodes that presently dominate in this application with SiC-based devices. However, much of the performance benefit of SiC-based devices is due to their increased switching speeds ( di/dt, dv/ dt), which raises the issues of increased electromagnetic interference (EMI) generation and detrimental effects on the reliability of inverter-fed electrical machines. In this paper, the tradeoff between switching losses and the high-frequency spectral amplitude of the device switching waveforms is quantified experimentally for all-Si, Si-SiC, and all-SiC device combinations. While exploiting the full switching-speed capability of SiC-based devices results in significantly increased EMI generation, the all-SiC combination provides a 70% reduction in switching losses relative to all-Si when operated at comparable dv/dt. It is also shown that the loss-EMI tradeoff obtained with the Si-SiC device combination can be significantly improved by driving the IGBT with a modified gate voltage profile.", "title": "" }, { "docid": "14fe4e2fb865539ad6f767b9fc9c1ff5", "text": "BACKGROUND\nFetal tachyarrhythmia may result in low cardiac output and death. Consequently, antiarrhythmic treatment is offered in most affected pregnancies. We compared 3 drugs commonly used to control supraventricular tachycardia (SVT) and atrial flutter (AF).\n\n\nMETHODS AND RESULTS\nWe reviewed 159 consecutive referrals with fetal SVT (n=114) and AF (n=45). 
Of these, 75 fetuses with SVT and 36 with AF were treated nonrandomly with transplacental flecainide (n=35), sotalol (n=52), or digoxin (n=24) as a first-line agent. Prenatal treatment failure was associated with an incessant versus intermittent arrhythmia pattern (n=85; hazard ratio [HR]=3.1; P<0.001) and, for SVT, with fetal hydrops (n=28; HR=1.8; P=0.04). Atrial flutter had a lower rate of conversion to sinus rhythm before delivery than SVT (HR=2.0; P=0.005). Cardioversion at 5 and 10 days occurred in 50% and 63% of treated SVT cases, respectively, but in only 25% and 41% of treated AF cases. Sotalol was associated with higher rates of prenatal AF termination than digoxin (HR=5.4; P=0.05) or flecainide (HR=7.4; P=0.03). If incessant AF/SVT persisted to day 5 (n=45), median ventricular rates declined more with flecainide (-22%) and digoxin (-13%) than with sotalol (-5%; P<0.001). Flecainide (HR=2.1; P=0.02) and digoxin (HR=2.9; P=0.01) were also associated with a higher rate of conversion of fetal SVT to a normal rhythm over time. No serious drug-related adverse events were observed, but arrhythmia-related mortality was 5%.\n\n\nCONCLUSION\nFlecainide and digoxin were superior to sotalol in converting SVT to a normal rhythm and in slowing both AF and SVT to better-tolerated ventricular rates and therefore might be considered first to treat significant fetal tachyarrhythmia.", "title": "" }, { "docid": "2f48b326aaa7b41a7ee347cedce344ed", "text": "In this paper a new kind of quasi-quartic trigonometric polynomial base functions with two shape parameters λ and μ over the space Ω = span {1, sin t, cos t, sin2t, cos2t, sin3t, cos3t} is presented and the corresponding quasi-quartic trigonometric Bézier curves and surfaces are defined by the introduced base functions. Each curve segment is generated by five consecutive control points. The shape of the curve can be adjusted by altering the values of shape parameters while the control polygon is kept unchanged. These curves inherit most properties of the usual quartic Bézier curves in the polynomial space and they can be used as an efficient new model for geometric design in the fields of CAGD.", "title": "" }, { "docid": "082894a8498a5c22af8903ad8ea6399a", "text": "Despite the proliferation of mobile health applications, few target low literacy users. This is a matter of concern because 43% of the United States population is functionally illiterate. To empower everyone to be a full participant in the evolving health system and prevent further disparities, we must understand the design needs of low literacy populations. In this paper, we present two complementary studies of four graphical user interface (GUI) widgets and three different cross-page navigation styles in mobile applications with a varying literacy, chronically-ill population. Participant's navigation and interaction styles were documented while they performed search tasks using high fidelity prototypes running on a mobile device. Results indicate that participants could use any non-text based GUI widgets. For navigation structures, users performed best when navigating a linear structure, but preferred the features of cross-linked navigation. Based on these findings, we provide some recommendations for designing accessible mobile applications for varying-literacy populations.", "title": "" }, { "docid": "031562142f7a2ffc64156f9d09865604", "text": "The demand for video content is continuously increasing as video sharing on the Internet is becoming enormously popular recently. 
This demand, with its high bandwidth requirements, has a considerable impact on the load of the network infrastructure. As more users access videos from their mobile devices, the load on the current wireless infrastructure (which has limited capacity) will be even more significant. Based on observations from many local video sharing scenarios, in this paper, we study the tradeoffs of using Wi-Fi ad-hoc mode versus infrastructure mode for video streaming between adjacent devices. We thus show the potential of direct device-to-device communication as a way to reduce the load on the wireless infrastructure and to improve user experiences. Setting up experiments for WiFi devices connected in ad-hoc mode, we collect measurements for various video streaming scenarios and compare them to the case where the devices are connected through access points. The results show the improvements in latency, jitter and loss rate. More importantly, the results show that the performance in direct device-to-device streaming is much more stable in contrast to the access point case, where different factors affect the performance causing widely unpredictable qualities.", "title": "" }, { "docid": "8906b0cf1b58f6d58a15538946aacd5f", "text": "This glossary presents a comprehensive list of indicators of socioeconomic position used in health research. A description of what they intend to measure is given together with how data are elicited and the advantages and limitation of the indicators. The glossary is divided into two parts for journal publication but the intention is that it should be used as one piece. The second part highlights a life course approach and will be published in the next issue of the journal.", "title": "" }, { "docid": "b2d8c0397151ca043ffb5cef8046d2af", "text": "This paper describes the large-scale experimental results from the Face Recognition Vendor Test (FRVT) 2006 and the Iris Challenge Evaluation (ICE) 2006. The FRVT 2006 looked at recognition from high-resolution still frontal face images and 3D face images, and measured performance for still frontal face images taken under controlled and uncontrolled illumination. The ICE 2006 evaluation reported verification performance for both left and right irises. The images in the ICE 2006 intentionally represent a broader range of quality than the ICE 2006 sensor would normally acquire. This includes images that did not pass the quality control software embedded in the sensor. The FRVT 2006 results from controlled still and 3D images document at least an order-of-magnitude improvement in recognition performance over the FRVT 2002. The FRVT 2006 and the ICE 2006 compared recognition performance from high-resolution still frontal face images, 3D face images, and the single-iris images. On the FRVT 2006 and the ICE 2006 data sets, recognition performance was comparable for high-resolution frontal face, 3D face, and the iris images. In an experiment comparing human and algorithms on matching face identity across changes in illumination on frontal face images, the best performing algorithms were more accurate than humans on unfamiliar faces.", "title": "" }, { "docid": "013ca7d513b658f2dac68644a915b43a", "text": "Money laundering a suspicious fund transfer between accounts without names which affects and threatens the stability of countries economy. The growth of internet technology and loosely coupled nature of fund transfer gateways helps the malicious user’s to perform money laundering. 
There are many approaches has been discussed earlier for the detection of money laundering and most of them suffers with identifying the root of money laundering. We propose a time variant approach using behavioral patterns to identify money laundering. In this approach, the transaction logs are split into various time window and for each account specific to the fund transfer the time value is split into different time windows and we generate the behavioral pattern of the user. The behavioral patterns specifies the method of transfer between accounts and the range of amounts and the frequency of destination accounts and etc.. Based on generated behavioral pattern , the malicious transfers and accounts are identified to detect the malicious root account. The proposed approach helps to identify more suspicious accounts and their group accounts to perform money laundering identification. The proposed approach has produced efficient results with less time complexity.", "title": "" }, { "docid": "f8435db6c6ea75944d1c6b521e0f3dd3", "text": "We present the design, fabrication process, and characterization of a multimodal tactile sensor made of polymer materials and metal thin film sensors. The multimodal sensor can detect the hardness, thermal conductivity, temperature, and surface contour of a contact object for comprehensive evaluation of contact objects and events. Polymer materials reduce the cost and the fabrication complexity for the sensor skin, while increasing mechanical flexibility and robustness. Experimental tests show the skin is able to differentiate between objects using measured properties. © 2004 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "0b894c503a11c7638c0fd25ea22088dc", "text": "We are moving towards a general public where web is the need of hour. Today the vast majority of the product applications executed, are composed as online applications which are keep running in a web program. Testing programming applications is critical. Numerous associations make utilization of a specific web application, so the same web applications are tried habitually by diverse clients from distinctive regions physically. Testing a web application physically is tedious, so we go for test automation. In test automation we make utilization of a product device to run repeatable tests against the application to be tried. There are various focal points of test automation. They are exceptionally exact and have more prominent preparing pace when contrasted with manual automation. There are various open source and business devices accessible for test mechanization. Selenium is one of the broadly utilized open source device for test computerization. Test automation enhances the effectiveness of programming testing procedures. Test automation gives quick criticism to engineers. It additionally discovers the imperfections when one may miss in the manual testing. In test automation we can perform boundless emphases for testing the same example of code ceaselessly commonly.", "title": "" }, { "docid": "419116a3660f1c1f7127de31f311bd1e", "text": "Unlike dimensionality reduction (DR) tools for single-view data, e.g., principal component analysis (PCA), canonical correlation analysis (CCA) and generalized CCA (GCCA) are able to integrate information from multiple feature spaces of data. This is critical in multi-modal data fusion and analytics, where samples from a single view may not be enough for meaningful DR. In this work, we focus on a popular formulation of GCCA, namely, MAX-VAR GCCA. 
The classic MAX-VAR problem is optimally solvable via eigen-decomposition, but this solution has serious scalability issues. In addition, how to impose regularizers on the sought canonical components was unclear - while structure-promoting regularizers are often desired in practice. We propose an algorithm that can easily handle datasets whose sample and feature dimensions are both large by exploiting data sparsity. The algorithm is also flexible in incorporating regularizers on the canonical components. Convergence properties of the proposed algorithm are carefully analyzed. Numerical experiments are presented to showcase its effectiveness.", "title": "" }, { "docid": "a9b20ad74b3a448fbc1555b27c4dcac9", "text": "A new learning algorithm for multilayer feedforward networks, RPROP, is proposed. To overcome the inherent disadvantages of pure gradient-descent, RPROP performs a local adaptation of the weight-updates according to the behaviour of the errorfunction. In substantial difference to other adaptive techniques, the effect of the RPROP adaptation process is not blurred by the unforseeable influence of the size of the derivative but only dependent on the temporal behaviour of its sign. This leads to an efficient and transparent adaptation process. The promising capabilities of RPROP are shown in comparison to other wellknown adaptive techniques.", "title": "" }, { "docid": "c5103654cc2b28bc4408c2d0bee17f13", "text": "Unless the practitioner is familiar with the morphology of the roots of all teeth, and the associated intricate root canal anatomy, effective debridement and obturation may be impossible. Recent research has improved knowledge and understanding of this intricate aspect of dental practice. After studying this part you should know in what percentage of each tooth type you may expect unusual numbers of root canals and other anatomical variations.", "title": "" }, { "docid": "b1958bbb9348a05186da6db649490cdd", "text": "Fourier ptychography (FP) utilizes illumination control and computational post-processing to increase the resolution of bright-field microscopes. In effect, FP extends the fixed numerical aperture (NA) of an objective lens to form a larger synthetic system NA. Here, we build an FP microscope (FPM) using a 40X 0.75NA objective lens to synthesize a system NA of 1.45. This system achieved a two-slit resolution of 335 nm at a wavelength of 632 nm. This resolution closely adheres to theoretical prediction and is comparable to the measured resolution (315 nm) associated with a standard, commercially available 1.25 NA oil immersion microscope. Our work indicates that Fourier ptychography is an attractive method to improve the resolution-versus-NA performance, increase the working distance, and enlarge the field-of-view of high-resolution bright-field microscopes by employing lower NA objectives.", "title": "" }, { "docid": "ec6fb21b7ae27cc4df67f3d6745ffe34", "text": "In today's world data is growing very rapidly, which we call as big data. To deal with these large data sets, currently we are using NoSQL databases, as relational database is not capable for handling such data. These schema less NoSQL database allow us to handle unstructured data. Through this paper we are comparing two NoSQL databases MongoDB and CouchBase server, in terms of image storage and retrieval. Aim behind selecting these two databases as both comes under Document store category. Major applications like social media, traffic analysis, criminal database etc. require image database. 
The motivation behind this paper is to compare database performance in terms of time required to store and retrieve images from database. In this paper, firstly we are going describe advantages of NoSQL databases over SQL, then brief idea about MongoDB and CouchBase and finally comparison of time required to insert various size images in databases and to retrieve various size images using front end tool Java.", "title": "" }, { "docid": "2d41891667b3cc0572827c104fb2c1c1", "text": "Stock market prediction is forever important issue for investor. Computer science plays vital role to solve this problem. From the evolution of machine learning, people from this area are busy to solve this problem effectively. Many different techniques are used to build predicting system. This research describes different state of the art techniques used for stock forecasting and compare them w.r.t. their pros and cons. We have classified different techniques categorically; Time Series, Neural Network and its different variation (RNN, ESN, MLP, LRNN etc.) and different hybrid techniques (combination of neural network with different machine learning techniques) (ANFIS, GA/ATNN, GA/TDNN, ICA-BPN). By extensive study of different techniques, it was analyzed that Neural Network is the best technique till time to predict stock prices especially when some denoising schemes are applied with neural network. We, also, have implemented and compared different neural network techniques like Layered Recurrent Neural Network (LRNN), Wsmpca-NN and Feed forward Neural Network (NN). By comparing said techniques, it was observed that LRNN performs better than feed forward NN and Wsmpca-NN performs better than LRNN and NN. We have applied said techniques on PSO (Pakistan State Oil), S&P500 data sets.", "title": "" }, { "docid": "dc64fa6178f46a561ef096fd2990ad3d", "text": "Forest fires cost millions of dollars in damages and claim many human lives every year. Apart from preventive measures, early detection and suppression of fires is the only way to minimize the damages and casualties. We present the design and evaluation of a wireless sensor network for early detection of forest fires. We first present the key aspects in modeling forest fires. We do this by analyzing the Fire Weather Index (FWI) System, and show how its different components can be used in designing efficient fire detection systems. The FWI System is one of the most comprehensive forest fire danger rating systems in North America, and it is backed by several decades of forestry research. The analysis of the FWI System could be of interest in its own right to researchers working in the sensor network area and to sensor manufacturers who can optimize the communication and sensing modules of their products to better fit forest fire detection systems. Then, we model the forest fire detection problem as a coverage problem in wireless sensor networks, and we present a distributed algorithm to solve it. In addition, we show how our algorithm can achieve various coverage degrees at different subareas of the forest, which can be used to provide unequal monitoring quality of forest zones. Unequal monitoring is important to protect residential and industrial neighborhoods close to forests. Finally, we present a simple data aggregation scheme based on the FWI System. This data aggregation scheme significantly prolongs the network lifetime, because it only delivers the data that is of interest to the application. 
We validate several aspects of our design using simulation.", "title": "" }, { "docid": "55aff936a5ff97d9229e90f6d5394b2e", "text": "Children are ubiquitous imitators, but how do they decide which actions to imitate? One possibility is that children rationally combine multiple sources of information about which actions are necessary to cause a particular outcome. For instance, children might learn from contingencies between action sequences and outcomes across repeated demonstrations, and they might also use information about the actor's knowledge state and pedagogical intentions. We define a Bayesian model that predicts children will decide whether to imitate part or all of an action sequence based on both the pattern of statistical evidence and the demonstrator's pedagogical stance. To test this prediction, we conducted an experiment in which preschool children watched an experimenter repeatedly perform sequences of varying actions followed by an outcome. Children's imitation of sequences that produced the outcome increased, in some cases resulting in production of shorter sequences of actions that the children had never seen performed in isolation. A second experiment established that children interpret the same statistical evidence differently when it comes from a knowledgeable teacher versus a naïve demonstrator. In particular, in the pedagogical case children are more likely to \"overimitate\" by reproducing the entire demonstrated sequence. This behavior is consistent with our model's predictions, and suggests that children attend to both statistical and pedagogical evidence in deciding which actions to imitate, rather than obligately imitating successful action sequences.", "title": "" }, { "docid": "2af56829daf6d2c6c633c759d07f2208", "text": "Height of Burst (HOB) sensor is one of the critical parts in guided missiles. While seekers control the guiding scheme of the missile, proximity sensors set the trigger for increased effectiveness of the warhead. For the well-developed guided missiles of Roketsan, a novel proximity sensor is developed. The design of the sensor is for multi-purpose use. In this presentation, the application of the sensor is explained for operation as a HOB sensor in the range of 3m–50m with ± 1m accuracy. Measurement results are also presented. The same sensor is currently being developed for proximity sensor for missile defence.", "title": "" } ]
scidocsrr
9d4d1861a00d94986f1fed4bbbe06218
Analyzing User Activities, Demographics, Social Network Structure and User-Generated Content on Instagram
[ { "docid": "349f85e6ffd66d6a1dd9d9c6925d00bc", "text": "Wearable computers have the potential to act as intelligent agents in everyday life and assist the user in a variety of tasks, using context to determine how to act. Location is the most common form of context used by these agents to determine the user’s task. However, another potential use of location context is the creation of a predictive model of the user’s future movements. We present a system that automatically clusters GPS data taken over an extended period of time into meaningful locations at multiple scales. These locations are then incorporated into a Markov model that can be consulted for use with a variety of applications in both single–user and collaborative scenarios.", "title": "" } ]
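As an illustrative aside to the positive passage above: a naive version of the idea, clustering GPS fixes into significant places and then learning a Markov transition model over visits, can be sketched as follows. The distance threshold, dwell-time threshold, and the flat-earth distance approximation are assumptions for illustration, not the method of the cited system.

```python
import math
from collections import defaultdict

def dist_m(p, q):
    """Rough planar distance in metres between two (lat, lon) points (small-area approximation)."""
    dlat = (p[0] - q[0]) * 111_000.0
    dlon = (p[1] - q[1]) * 111_000.0 * math.cos(math.radians(p[0]))
    return math.hypot(dlat, dlon)

def extract_places(fixes, radius=200.0, min_dwell=300.0):
    """fixes: list of (timestamp_s, lat, lon). A 'place' is the centroid of a run of
    fixes that stays within `radius` metres of the run's first fix for at least
    `min_dwell` seconds."""
    places, start = [], 0
    for i in range(1, len(fixes) + 1):
        moved = i == len(fixes) or dist_m(fixes[start][1:], fixes[i][1:]) > radius
        if moved:
            if fixes[i - 1][0] - fixes[start][0] >= min_dwell:
                seg = fixes[start:i]
                lat = sum(f[1] for f in seg) / len(seg)
                lon = sum(f[2] for f in seg) / len(seg)
                places.append((lat, lon))
            start = i
    return places

def markov_transitions(visit_sequence):
    """First-order Markov model: P(next place | current place) from a sequence of place ids."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(visit_sequence, visit_sequence[1:]):
        counts[a][b] += 1
    return {a: {b: c / sum(nxt.values()) for b, c in nxt.items()} for a, nxt in counts.items()}

print(markov_transitions(["home", "work", "gym", "home", "work", "home"]))
```

A real system would cluster at multiple spatial scales and cope with noisy fixes, but the stay-point-extraction followed by transition-counting structure stays the same.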
[ { "docid": "733e379ecaab79ac328f55ccc2384b69", "text": "Introduction Since Beijing 1995, gender mainstreaming has heralded the beginning of a renewed effort to address what is seen as one of the roots of gender inequality: the genderedness of systems, procedures and organizations. In the definition of the Council of Europe, gender mainstreaming is the (re)organisation, improvement, development and evaluation of policy processes, so that a gender equality perspective is incorporated in all policies at all levels and at all stages, by the actors normally involved in policymaking. All member states and some candidate states of the European Union have started to implement gender mainstreaming. The 1997 Treaty of Amsterdam places equality between women and men among the explicit tasks of the European Union and obliges the EU to promote gender equality in all its tasks and activities. The Gender Mainstreaming approach that has been legitimated by this Treaty is backed by legislation and by positive action in favour of women (or the “under-represented sex”). Gender equality policies have not only been part and parcel of modernising action in the European Union, but can be expected to continue to be so (Rossili 2000). With regard to gender inequality, the EU has both a formal EU problem definition at the present time, and a formalised set of EU strategies. Problems in the implementation of gender equality policies abound, at both national and EU level. To give just one example, it took the Netherlands – usually very supportive of the EU –14 years to implement article 119 on Equal Pay (Van der Vleuten 2001). Moreover, it has been documented that overall EU action has run counter to its goal of gender equality. Overall EU action has weakened women’s social rights more seriously than men’s (Rossili 2000). The introduction of Gender Mainstreaming, the incorporation of gender and women’s concerns in all regular policymaking is meant to address precisely this problem of a contradiction between specific gender policies and regular EU policies. Yet, in the case of the Structural Funds, for instance, Gender Mainstreaming has been used to further reduce existing funds and incentives for gender equality (Rossili 2000). Against this backdrop, this paper will present an approach at studying divergences in policy frames around gender equality as one of the elements connected to implementation problems: the MAGEEQ project.", "title": "" }, { "docid": "2e864dcde57ea1716847f47977af0140", "text": "I focus on the role of case studies in developing causal explanations. I distinguish between the theoretical purposes of case studies and the case selection strategies or research designs used to advance those objectives. I construct a typology of case studies based on their purposes: idiographic (inductive and theory-guided), hypothesis-generating, hypothesis-testing, and plausibility probe case studies. I then examine different case study research designs, including comparable cases, most and least likely cases, deviant cases, and process tracing, with attention to their different purposes and logics of inference. I address the issue of selection bias and the “single logic” debate, and I emphasize the utility of multi-method research.", "title": "" }, { "docid": "ce402c150d74cbc954378ea7927dfa71", "text": "The study investigated the influence of extrinsic and intrinsic motivation on employees performance. Subjects for the study consisted of one hundred workers of Flour Mills of Nigeria PLC, Lagos. 
Data for the study were gathered through the administration of a self-designed questionnaire. The data collected were subjected to appropriate statistical analysis using Pearson Product Moment Correlation Coefficient, and all the findings were tested at 0.05 level of significance. The result obtained from the analysis showed that there existed relationship between extrinsic motivation and the performance of employees, while no relationship existed between intrinsic motivation and employees performance. On the basis of these findings, implications of the findings for future study were stated.", "title": "" }, { "docid": "b594a4fafc37a18773b1144dfdbb965d", "text": "Deep generative modelling for robust human body analysis is an emerging problem with many interesting applications, since it enables analysis-by-synthesis and unsupervised learning. However, the latent space learned by such models is typically not human-interpretable, resulting in less flexible models. In this work, we adopt a structured semi-supervised variational auto-encoder approach and present a deep generative model for human body analysis where the pose and appearance are disentangled in the latent space, allowing for pose estimation. Such a disentanglement allows independent manipulation of pose and appearance and hence enables applications such as pose-transfer without being explicitly trained for such a task. In addition, the ability to train in a semi-supervised setting relaxes the need for labelled data. We demonstrate the merits of our generative model on the Human3.6M and ChictopiaPlus datasets.", "title": "" }, { "docid": "20dd21215f9dc6bd125b2af53500614d", "text": "In this paper we present a novel method for deriving paraphrases during automatic MT evaluation using only the source and reference texts, which are necessary for the evaluation, and word and phrase alignment software. Using target language paraphrases produced through word and phrase alignment a number of alternative reference sentences are constructed automatically for each candidate translation. The method produces lexical and lowlevel syntactic paraphrases that are relevant to the domain in hand, does not use external knowledge resources, and can be combined with a variety of automatic MT evaluation system.", "title": "" }, { "docid": "9f184ba1cfe36fde398f896b1ce93745", "text": "http://dx.doi.org/10.1016/j.compag.2015.08.011 0168-1699/ 2015 Elsevier B.V. All rights reserved. ⇑ Corresponding author at: School of Information Technology, Indian Institute of Technology Kharagpur, India. E-mail addresses: tojha@sit.iitkgp.ernet.in (T. Ojha), smisra@sit.iitkgp.ernet.in (S. Misra), nsr@agfe.iitkgp.ernet.in (N.S. Raghuwanshi). Tamoghna Ojha a,b,⇑, Sudip Misra , Narendra Singh Raghuwanshi b", "title": "" }, { "docid": "d1357b2e247d521000169dce16f182ee", "text": "Camera shake or target movement often leads to undesired blur effects in videos captured by a hand-held camera. Despite significant efforts having been devoted to video-deblur research, two major challenges remain: 1) how to model the spatio-temporal characteristics across both the spatial domain (i.e., image plane) and the temporal domain (i.e., neighboring frames) and 2) how to restore sharp image details with respect to the conventionally adopted metric of pixel-wise errors. In this paper, to address the first challenge, we propose a deblurring network (DBLRNet) for spatial-temporal learning by applying a 3D convolution to both the spatial and temporal domains. 
Our DBLRNet is able to capture jointly spatial and temporal information encoded in neighboring frames, which directly contributes to the improved video deblur performance. To tackle the second challenge, we leverage the developed DBLRNet as a generator in the generative adversarial network (GAN) architecture and employ a content loss in addition to an adversarial loss for efficient adversarial training. The developed network, which we name as deblurring GAN, is tested on two standard benchmarks and achieves the state-of-the-art performance.", "title": "" }, { "docid": "28b70047cb41f765504f8f9b54456cc4", "text": "BACKGROUND\nAccelerometers are widely used to measure sedentary time, physical activity, physical activity energy expenditure (PAEE), and sleep-related behaviors, with the ActiGraph being the most frequently used brand by researchers. However, data collection and processing criteria have evolved in a myriad of ways out of the need to answer unique research questions; as a result there is no consensus.\n\n\nOBJECTIVES\nThe purpose of this review was to: (1) compile and classify existing studies assessing sedentary time, physical activity, energy expenditure, or sleep using the ActiGraph GT3X/+ through data collection and processing criteria to improve data comparability and (2) review data collection and processing criteria when using GT3X/+ and provide age-specific practical considerations based on the validation/calibration studies identified.\n\n\nMETHODS\nTwo independent researchers conducted the search in PubMed and Web of Science. We included all original studies in which the GT3X/+ was used in laboratory, controlled, or free-living conditions published from 1 January 2010 to the 31 December 2015.\n\n\nRESULTS\nThe present systematic review provides key information about the following data collection and processing criteria: placement, sampling frequency, filter, epoch length, non-wear-time, what constitutes a valid day and a valid week, cut-points for sedentary time and physical activity intensity classification, and algorithms to estimate PAEE and sleep-related behaviors. The information is organized by age group, since criteria are usually age-specific.\n\n\nCONCLUSION\nThis review will help researchers and practitioners to make better decisions before (i.e., device placement and sampling frequency) and after (i.e., data processing criteria) data collection using the GT3X/+ accelerometer, in order to obtain more valid and comparable data.\n\n\nPROSPERO REGISTRATION NUMBER\nCRD42016039991.", "title": "" }, { "docid": "a45294bcd622c526be47975abe4e6d66", "text": "Identification of gene locations in a DNA sequence is one of the important problems in the area of genomics. Nucleotides in exons of a DNA sequence show f = 1/3 periodicity. The period-3 property in exons of eukaryotic gene sequences enables signal processing based time-domain and frequency-domain methods to predict these regions. Identification of the period-3 regions helps in predicting the gene locations within the billions long DNA sequence of eukaryotic cells. Existing non-parametric filtering techniques are less effective in detecting small exons. This paper presents a harmonic suppression filter and parametric minimum variance spectrum estimation technique for gene prediction. We show that both the filtering techniques are able to detect smaller exon regions and adaptive MV filter minimizes the power in introns (non-coding regions) giving more suppression to the intron regions. 
Furthermore, 2-simplex mapping is used to reduce the computational complexity.", "title": "" }, { "docid": "7f84e215df3d908249bde3be7f2b3cab", "text": "With the emergence of ever-growing advanced vehicular applications, the challenges to meet the demands from both communication and computation are increasingly prominent. Without powerful communication and computational support, various vehicular applications and services will still stay in the concept phase and cannot be put into practice in the daily life. Thus, solving this problem is of great importance. The existing solutions, such as cellular networks, roadside units (RSUs), and mobile cloud computing, are far from perfect because they highly depend on and bear the cost of additional infrastructure deployment. Given tremendous number of vehicles in urban areas, putting these underutilized vehicular resources into use offers great opportunity and value. Therefore, we conceive the idea of utilizing vehicles as the infrastructures for communication and computation, named vehicular fog computing (VFC), which is an architecture that utilizes a collaborative multitude of end-user clients or near-user edge devices to carry out communication and computation, based on better utilization of individual communication and computational resources of each vehicle. By aggregating abundant resources of individual vehicles, the quality of services and applications can be enhanced greatly. In particular, by discussing four types of scenarios of moving and parked vehicles as the communication and computational infrastructures, we carry on a quantitative analysis of the capacities of VFC. We unveil an interesting relationship among the communication capability, connectivity, and mobility of vehicles, and we also find out the characteristics about the pattern of parking behavior, which benefits from the understanding of utilizing the vehicular resources. Finally, we discuss the challenges and open problems in implementing the proposed VFC system as the infrastructures. Our study provides insights for this novel promising paradigm, as well as research topics about vehicular information infrastructures.", "title": "" }, { "docid": "b6cd222b0bc5c2839c66cdf4538d7264", "text": "Stereoscopic 3D (S3D) movies have become widely popular in the movie theaters, but the adoption of S3D at home is low even though most TV sets support S3D. It is widely believed that S3D with glasses is not the right approach for the home. A much more appealing approach is to use automulti-scopic displays that provide a glasses-free 3D experience to multiple viewers. A technical challenge is the lack of native multiview content that is required to deliver a proper view of the scene for every viewpoint. Our approach takes advantage of the abundance of stereoscopic 3D movies. We propose a real-time system that can convert stereoscopic video to a high-quality multiview video that can be directly fed to automultiscopic displays. Our algorithm uses a wavelet-based decomposition of stereoscopic images with per-wavelet disparity estimation. A key to our solution lies in combining Lagrangian and Eulerian approaches for both the disparity estimation and novel view synthesis, which leverages the complementary advantages of both techniques. 
The solution preserves all the features of Eulerian methods, e.g., subpixel accuracy, high performance, robustness to ambiguous depth cases, and easy integration of inter-view aliasing while maintaining the advantages of Lagrangian approaches, e.g., robustness to large disparities and possibility of performing non-trivial disparity manipulations through both view extrapolation and interpolation. The method achieves real-time performance on current GPUs. Its design also enables an easy hardware implementation that is demonstrated using a field-programmable gate array. We analyze the visual quality and robustness of our technique on a number of synthetic and real-world examples. We also perform a user experiment which demonstrates benefits of the technique when compared to existing solutions.", "title": "" }, { "docid": "e793b233039c9cb105fa311fa08312cd", "text": "A generalized single-phase multilevel current source inverter (MCSI) topology with self-balancing current is proposed, which uses the duality transformation from the generalized multilevel voltage source inverter (MVSI) topology. The existing single-phase 8- and 6-switch 5-level current source inverters (CSIs) can be derived from this generalized MCSI topology. In the proposed topology, each intermediate DC-link current level can be balanced automatically without adding any external circuits; thus, a true multilevel structure is provided. Moreover, owing to the dual relationship, many research results relating to the operation, modulation, and control strategies of MVSIs can be applied directly to the MCSIs. Some simulation results are presented to verify the proposed MCSI topology.", "title": "" }, { "docid": "1efcace33a3a6ad7805f765edfafb6f4", "text": "Recently, new configurations of robot legs using a parallel mechanism have been studied for improving the locomotion ability in four-legged robots. However, it is difficult to obtain full dynamics of the parallel-mechanism robot legs because this mechanism has many links and complex constraint conditions, which make it difficult to design a modelbased controller. Here, we propose the simplified modeling of a parallel-mechanism robot leg with two degrees-of-freedom (2DOF), which can be used instead of complex full dynamics for model-based control. The new modeling approach considers the robot leg as a 2DOF Revolute and Prismatic(RP) manipulator, inspired by the actuation mechanism of robot legs, for easily designing a nominal model of the controller. To verify the effectiveness of the new modeling approach experimentally, we conducted dynamic simulations using a commercial multi-dynamics simulator. The simulation results confirmed that the proposed modeling approach could be an alternative modeling method for parallel-mechanism robot legs.", "title": "" }, { "docid": "e9c4877bca5f1bfe51f97818cc4714fa", "text": "INTRODUCTION Gamification refers to the application of game dynamics, mechanics, and frameworks into non-game settings. Many educators have attempted, with varying degrees of success, to effectively utilize game dynamics to increase student motivation and achievement in the classroom. In an effort to better understand how gamification can effectively be utilized to this end, presented here is a review of existing literature on the subject as well as a case study on three different applications of gamification in the post-secondary setting. 
This analysis reveals that the underlying dynamics that make games engaging are largely already recognized and utilized in modern pedagogical practices, although under different designations. This provides some legitimacy to a practice that is sometimes dismissed as superficial, and also provides a way of formulating useful guidelines for those wishing to utilize the power of games to motivate student achievement. RELATED WORK The first step of this study was to review literature related to the use of gamification in education. This was undertaken in order to inform the subsequent case studies. Several works were reviewed with the intention of finding specific game dynamics that were met with a certain degree of success across a number of circumstances. To begin, Jill Laster [10] provides a brief summary of the early findings of Lee Sheldon, an assistant professor at Indiana University at Bloomington and the author of The Multiplayer Classroom: Designing Coursework as a Game [16]. Here, Sheldon reports that the gamification of his class on multiplayer game design at Indiana University at Bloomington in 2010 was a success, with the average grade jumping a full letter grade from the previous year [10]. Sheldon gamified his class by renaming the performance of presentations as 'completing quests', taking tests as 'fighting monsters', writing papers as 'crafting', and receiving letter grades as 'gaining experience points'. In particular, he notes that changing the language around grades celebrates getting things right rather than punishing getting things wrong [10]. Although this is plausible, this example is included here first because it points to the common conception of what gamifying a classroom means: implementing game components by simply trading out the parlance of pedagogy for that of gaming culture. Although its intentions are good, it is this reduction of game design to its surface characteristics that Elizabeth Lawley warns is detrimental to the successful gamification of a classroom [5]. Lawley, a professor of interactive games and media at the Rochester Institute of Technology (RIT), notes that when implemented properly, \"gamification can help enrich educational experiences in a way that students will recognize and respond to\" [5]. However, she warns that reducing the complexity of well designed games to their surface elements (i.e. badges and experience points) falls short of engaging students. She continues further, suggesting that beyond failing to engage, limiting the implementation of game dynamics to just the surface characteristics can actually damage existing interest and engagement [5]. Lawley is not suggesting that game elements should be avoided, but rather she is stressing the importance of allowing them to surface as part of a deeper implementation that includes the underlying foundations of good game design. Upon reviewing the available literature, certain underlying dynamics and concepts found in game design are shown to be more consistently successful than others when applied to learning environments, these are: o Freedom to Fail o Rapid Feedback o Progression o Storytelling Freedom to Fail Game design often encourages players to experiment without fear of causing irreversible damage by giving them multiple lives, or allowing them to start again at the most recent 'checkpoint'. Incorporating this 'freedom to fail' into classroom design is noted to be an effective dynamic in increasing student engagement [7,9,11,15]. 
If students are encouraged to take risks and experiment, the focus is taken away from final results and re-centered on the process of learning instead. The effectiveness of this change in focus is recognized in modern pedagogy as shown in the increased use of formative assessment. Like the game dynamic of having the 'freedom to fail', formative assessment focuses on the process of learning rather than the end result by using assessment to inform subsequent lessons and separating assessment from grades whenever possible [17]. This can mean that the student is using ongoing self assessment, or that the teacher is using", "title": "" }, { "docid": "929534782eaaa41186a1138b0439cdca", "text": "How do observers respond when the actions of one individual inflict harm on another? The primary reaction to carelessly inflicted harm is to seek restitution; the offender is judged to owe compensation to the harmed individual. The primary reaction to harm inflicted intentionally is moral outrage producing a desire for retribution; the harm-doer must be punished. Reckless conduct, an intermediate case, provokes reactions that involve elements of both careless and intentional harm. The moral outrage felt by those who witness transgressions is a product of both cognitive interpretations of the event and emotional reactions to it. Theory about the exact nature of the emotional reactions is considered, along with suggestions for directions for future research.", "title": "" }, { "docid": "c75ee3e700806bcb098f6e1c05fdecfc", "text": "This study examines patterns of cellular phone adoption and usage in an urban setting. One hundred and seventy-six cellular telephone users were surveyed abou their patterns of usage, demographic and socioeconomic characteristics, perceptions about the technology, and their motivations to use cellular services. The results of this study confirm that users' perceptions are significantly associated with their motivation to use cellular phones. Specifically, perceived ease of use was found to have significant effects on users' extrinsic and intrinsic motivations; apprehensiveness about cellular technology had a negative effect on intrinsic motivations. Implications of these findings for practice and research are examined.", "title": "" }, { "docid": "627e4d3c2dfb8233f0e345410064f6d0", "text": "Data clustering is an important task in many disciplines. A large number of studies have attempted to improve clustering by using the side information that is often encoded as pairwise constraints. However, these studies focus on designing special clustering algorithms that can effectively exploit the pairwise constraints. We present a boosting framework for data clustering,termed as BoostCluster, that is able to iteratively improve the accuracy of any given clustering algorithm by exploiting the pairwise constraints. The key challenge in designing a boosting framework for data clustering is how to influence an arbitrary clustering algorithm with the side information since clustering algorithms by definition are unsupervised. The proposed framework addresses this problem by dynamically generating new data representations at each iteration that are, on the one hand, adapted to the clustering results at previous iterations by the given algorithm, and on the other hand consistent with the given side information. 
Our empirical study shows that the proposed boosting framework is effective in improving the performance of a number of popular clustering algorithms (K-means, partitional SingleLink, spectral clustering), and its performance is comparable to the state-of-the-art algorithms for data clustering with side information.", "title": "" }, { "docid": "9b791932b6f2cdbbf0c1680b9a610614", "text": "To survive in today’s global marketplace, businesses need to be able to deliver products on time, maintain market credibility and introduce new products and services faster than competitors. This is especially crucial to the Smalland Medium-sized Enterprises (SMEs). Since the emergence of the Internet, it has allowed SMEs to compete effectively and efficiently in both domestic and international market. Unfortunately, such leverage is often impeded by the resistance and mismanagement of SMEs to adopt Electronic Commerce (EC) proficiently. Consequently, this research aims to investigate how SMEs can adopt and implement EC successfully to achieve competitive advantage. Building on an examination of current technology diffusion literature, a model of EC diffusion has been developed. It investigates the factors that influence SMEs in the adoption of EC, followed by an examination in the diffusion process, which SMEs adopt to integrate EC into their business systems.", "title": "" }, { "docid": "7d7ea6239106f614f892701e527122e2", "text": "The purpose of this study was to investigate the effects of aromatherapy on the anxiety, sleep, and blood pressure (BP) of percutaneous coronary intervention (PCI) patients in an intensive care unit (ICU). Fifty-six patients with PCI in ICU were evenly allocated to either the aromatherapy or conventional nursing care. Aromatherapy essential oils were blended with lavender, roman chamomile, and neroli with a 6 : 2 : 0.5 ratio. Participants received 10 times treatment before PCI, and the same essential oils were inhaled another 10 times after PCI. Outcome measures patients' state anxiety, sleeping quality, and BP. An aromatherapy group showed significantly low anxiety (t = 5.99, P < .001) and improving sleep quality (t = -3.65, P = .001) compared with conventional nursing intervention. The systolic BP of both groups did not show a significant difference by time or in a group-by-time interaction; however, a significant difference was observed between groups (F = 4.63, P = .036). The diastolic BP did not show any significant difference by time or by a group-by-time interaction; however, a significant difference was observed between groups (F = 6.93, P = .011). In conclusion, the aromatherapy effectively reduced the anxiety levels and increased the sleep quality of PCI patients admitted to the ICU. Aromatherapy may be used as an independent nursing intervention for reducing the anxiety levels and improving the sleep quality of PCI patients.", "title": "" }, { "docid": "e67986714c6bda56c03de25168c51e6b", "text": "With the development of modern technology and Android Smartphone, Smart Living is gradually changing people’s life. Bluetooth technology, which aims to exchange data wirelessly in a short distance using short-wavelength radio transmissions, is providing a necessary technology to create convenience, intelligence and controllability. In this paper, a new Smart Living system called home lighting control system using Bluetooth-based Android Smartphone is proposed and prototyped. First Smartphone, Smart Living and Bluetooth technology are reviewed. 
Second, the system architecture, communication protocol and hardware design are described. Then the design of a Bluetooth-based Smartphone application and the prototype are presented. It is shown that Android Smartphone can provide a platform to implement Bluetooth-based application for Smart Living.", "title": "" } ]
scidocsrr
0f9121a2bbc0c9f9ba5dfa567e29e17d
PLDA: Parallel Latent Dirichlet Allocation for Large-Scale Applications
[ { "docid": "64e93cfb58b7cf331b4b74fadb4bab74", "text": "Support Vector Machines (SVMs) suffer from a widely recognized scalability problem in both memory use and computational time. To improve scalability, we have developed a parallel SVM algorithm (PSVM), which reduces memory use through performing a row-based, approximate matrix factorization, and which loads only essential data to each machine to perform parallel computation. Let n denote the number of training instances, p the reduced matrix dimension after factorization (p is significantly smaller than n), and m the number of machines. PSVM reduces the memory requirement from O(n^2) to O(np/m), and improves computation time to O(np^2/m). Empirical study shows PSVM to be effective. PSVM Open Source is available for download at http://code.google.com/p/psvm/.", "title": "" }, { "docid": "83060ef5605b19c14d8b0f41cbd61de5", "text": "We are at the beginning of the multicore era. Computers will have increasingly many cores (processors), but there is still no good programming framework for these architectures, and thus no simple and unified way for machine learning to take advantage of the potential speed up. In this paper, we develop a broadly applicable parallel programming method, one that is easily applied to many different learning algorithms. Our work is in distinct contrast to the tradition in machine learning of designing (often ingenious) ways to speed up a single algorithm at a time. Specifically, we show that algorithms that fit the Statistical Query model [15] can be written in a certain “summation form,” which allows them to be easily parallelized on multicore computers. We adapt Google’s map-reduce [7] paradigm to demonstrate this parallel speed up technique on a variety of learning algorithms including locally weighted linear regression (LWLR), k-means, logistic regression (LR), naive Bayes (NB), SVM, ICA, PCA, gaussian discriminant analysis (GDA), EM, and backpropagation (NN). Our experimental results show basically linear speedup with an increasing number of processors.", "title": "" } ]
[ { "docid": "fd9992b50e6d58afab53954eac400b84", "text": "Several physico-mechanical designs evolved in fish are currently inspiring robotic devices for propulsion and manoeuvring purposes in underwater vehicles. Considering the potential benefits involved, this paper presents an overview of the swimming mechanisms employed by fish. The motivation is to provide a relevant and useful introduction to the existing literature for engineers with an interest in the emerging area of aquatic biomechanisms. The fish swimming types are presented following the well-established classification scheme and nomenclature originally proposed by Breder. Fish swim either by Body and/or Caudal Fin (BCF) movements or using Median and/or Paired Fin (MPF) propulsion. The latter is generally employed at slow speeds, offering greater manoeuvrability and better propulsive efficiency, while BCF movements can achieve greater thrust and accelerations. For both BCF and MPF locomotion specific swimming modes are identified, based on the propulsor and the type of movements (oscillatory or undulatory) employed for thrust generation. Along with general descriptions and kinematic data, the analytical approaches developed to study each swimming mode are also introduced. Particular reference is made to lunate tail propulsion, undulating fins and labriform (oscillatory pectoral fin) swimming mechanisms, identified as having the greatest potential for exploitation in artificial systems. Index Terms marine animals, hydrodynamics, underwater vehicle propulsion, mobile robots, kinematics * Submitted as a regular paper to the IEEE Journal of Oceanic Engineering, March 1998. † Ocean Systems Laboratory, Dept. of Computing & Electrical Engineering, Heriot-Watt University, Edinburgh EH14 4AS, Scotland, U.K. Tel: +(44) (0) 131 4513350. Fax: +(44) (0) 131 4513327. Email: dml@cee.hw.ac.uk ‡ Dept. of Mechanical & Chemical Engineering, Heriot-Watt University, Edinburgh EH14 4AS, Scotland,U.K. Review of Fish Swimming Modes for Aquatic Locomotion -2", "title": "" }, { "docid": "ed9528fe8e4673c30de35d33130c728e", "text": "This paper introduces a friendly system to control the home appliances remotely by the use of mobile cell phones; this system is well known as “Home Automation System” (HAS).", "title": "" }, { "docid": "fb173d15e079fcdf0cc222f558713f9c", "text": "Structured data summarization involves generation of natural language summaries from structured input data. In this work, we consider summarizing structured data occurring in the form of tables as they are prevalent across a wide variety of domains. We formulate the standard table summarization problem, which deals with tables conforming to a single predefined schema. To this end, we propose a mixed hierarchical attention based encoderdecoder model which is able to leverage the structure in addition to the content of the tables. Our experiments on the publicly available WEATHERGOV dataset show around 18 BLEU (∼ 30%) improvement over the current state-of-the-art.", "title": "" }, { "docid": "7e6eab1db77c8404720563d0eed1b325", "text": "With the success of Open Data a huge amount of tabular data sources became available that could potentially be mapped and linked into the Web of (Linked) Data. Most existing approaches to “semantically label” such tabular data rely on mappings of textual information to classes, properties, or instances in RDF knowledge bases in order to link – and eventually transform – tabular data into RDF. 
However, as we will illustrate, Open Data tables typically contain a large portion of numerical columns and/or non-textual headers; therefore solutions that solely focus on textual “cues” are only partially applicable for mapping such data sources. We propose an approach to find and rank candidates of semantic labels and context descriptions for a given bag of numerical values. To this end, we apply a hierarchical clustering over information taken from DBpedia to build a background knowledge graph of possible “semantic contexts” for bags of numerical values, over which we perform a nearest neighbour search to rank the most likely candidates. Our evaluation shows that our approach can assign fine-grained semantic labels, when there is enough supporting evidence in the background knowledge graph. In other cases, our approach can nevertheless assign high level contexts to the data, which could potentially be used in combination with other approaches to narrow down the search space of possible labels.", "title": "" }, { "docid": "1e7b2271c7efc02f2e9148cefc55e7a1", "text": "Foot-and-mouth disease (FMD) is one of the most important diseases with heavy economic losses. The causative agent of the disease is a virus, named as FMD virus, belonging to the picornavirus family. There is no treatment for the disease and vaccination is the main control strategy. Several vaccination methods have been introduced against FMD including DNA vaccines. In this study, two genetic constructs, which were defined by absence and presence of an intron, were tested for their ability to induce the anti-FMD virus responses in mouse. Both constructs encoded a fusion protein consisting of viral (P12A and 3C) and EGFP proteins under the control of CMV promoter. The protein expression was studied in the COS-7 cells transfected with the plasmids by detecting EGFP protein. Cell death was induced in the cells expressing the P12A3C-EGFP, but not the EGFP, protein. This might be explained by the protease activity of the 3C protein which cleaved critical proteins of the host cells. Mice injected with the intron-containing plasmid induced 16-fold higher antibody level than the intronless plasmid. In addition, serum neutralization antibodies were only induced in the mice injected with intron-containing plasmid. In conclusion, the use of intron might be a useful strategy for enhancing antibody responses by DNA vaccines. Moreover, cell death inducing activity of the 3C protein might suggest applying it along with DNA vaccines to improve immunogenicity.", "title": "" }, { "docid": "0569bcd89de031431e755ad827cc6828", "text": "In his enigmatic death bed letter to Hardy, written in January 1920, Ramanujan introduced the notion of a mock theta function. Despite many works, very little was known about the role that these functions play within the theory of automorphic and modular forms until 2002. In that year Sander Zwegers (in his Ph.D. thesis) established that these functions are “holomorphic parts” of harmonic Maass forms. This realization has resulted in many applications in a wide variety of areas: arithmetic geometry, combinatorics, modular forms, and mathematical physics. Here we outline the general facets of the theory, and we give several applications to number theory: partitions and q-series, modular forms, singular moduli, Borcherds products, extensions of theorems of Kohnen-Zagier and Waldspurger on modular L-functions, and the work of Bruinier and Yang on Gross-Zagier formulae. 
Following our discussion of these works on harmonic Maass forms, we shall then study the emerging new theory of quantum modular forms. Don Zagier introduced the notion of a quantum modular form in his 2010 Clay lecture, and it turns out that a beautiful part of this theory lives at the interface of classical modular forms and harmonic Maass forms.", "title": "" }, { "docid": "1569bcea0c166d9bf2526789514609c5", "text": "In this paper, we present the developmert and initial validation of a new self-report instrument, the Differentiation of Self Inventory (DSI). T. DSI represents the first attempt to create a multi-dimensional measure of differentiation based on Bowen Theory, focusing specifically on adults (ages 25 +), their current significant relationships, and their relations with families of origin. Principal components factor analysis on a sample of 313 normal adults (mean age = 36.8) suggested four dimensions: Emotional Reactivity, Reactive Distancing, Fusion with Parents, and \"I\" Position. Scales constructed from these factors were found to be moderately correlated in the expected direction, internally consistent, and significantly predictive of trait anxiety. The potential contribution of the DSI is discussed -for testing Bowen Theory, as a clinical assessment tool, and as an indicator of psychotherapeutic outcome.", "title": "" }, { "docid": "af28e57d508511ce4f494eb45da0e525", "text": "Posthumanism entails the idea of transcendence of the human being achieved through technology. The article begins by distinguishing perfection and change (or growth). It also attempts to show the anthropological premises of posthumanism itself and suggests that we can identify two roots: the liberal humanistic subject (autonomous and unrelated that simply realizes herself/himself through her/his own project) and the interpretation of thought as a computable process. Starting from these premises, many authors call for the loosening of the clear boundaries of one’s own subject in favour of blending with other beings. According to these theories, we should become post-human: if the human being is thought and thought is a computable process, whatever is able to process information broader and faster is better than the actual human being and has to be considered as the way towards the real completeness of the human being itself. The paper endeavours to discuss the adequacy of these premises highlighting the structural dependency of the human being, the role of the human body, the difference between thought and a computational process, the singularity of some useless and unexpected human acts. It also puts forward the need for axiological criteria to define growth as perfectionism.", "title": "" }, { "docid": "a430a43781d7fd4e36cd393103958265", "text": "BACKGROUND\nThis review evaluates the DSM-IV criteria of social anxiety disorder (SAD), with a focus on the generalized specifier and alternative specifiers, the considerable overlap between the DSM-IV diagnostic criteria for SAD and avoidant personality disorder, and developmental issues.\n\n\nMETHOD\nA literature review was conducted, using the validators provided by the DSM-V Spectrum Study Group. This review presents a number of options and preliminary recommendations to be considered for DSM-V.\n\n\nRESULTS/CONCLUSIONS\nLittle supporting evidence was found for the current specifier, generalized SAD. Rather, the symptoms of individuals with SAD appear to fall along a continuum of severity based on the number of fears. 
Available evidence suggested the utility of a specifier indicating a \"predominantly performance\" variety of SAD. A specifier based on \"fear of showing anxiety symptoms\" (e.g., blushing) was considered. However, a tendency to show anxiety symptoms is a core fear in SAD, similar to acting or appearing in a certain way. More research is needed before considering subtyping SAD based on core fears. SAD was found to be a valid diagnosis in children and adolescents. Selective mutism could be considered in part as a young child's avoidance response to social fears. Pervasive test anxiety may belong not only to SAD, but also to generalized anxiety disorder. The data are equivocal regarding whether to consider avoidant personality disorder simply a severe form of SAD. Secondary data analyses, field trials, and validity tests are needed to investigate the recommendations and options.", "title": "" }, { "docid": "5ff345f050ec14b02c749c41887d592d", "text": "Testing multithreaded code is hard and expensive. Each multithreaded unit test creates two or more threads, each executing one or more methods on shared objects of the class under test. Such unit tests can be generated at random, but basic generation produces tests that are either slow or do not trigger concurrency bugs. Worse, such tests have many false alarms, which require human effort to filter out. We present BALLERINA, a novel technique for automatic generation of efficient multithreaded random tests that effectively trigger concurrency bugs. BALLERINA makes tests efficient by having only two threads, each executing a single, randomly selected method. BALLERINA increases chances that such a simple parallel code finds bugs by appending it to more complex, randomly generated sequential code. We also propose a clustering technique to reduce the manual effort in inspecting failures of automatically generated multithreaded tests. We evaluate BALLERINA on 14 real-world bugs from 6 popular codebases: Groovy, Java JDK, jFreeChart, Log4j, Lucene, and Pool. The experiments show that tests generated by BALLERINA can find bugs on average 2X-10X faster than various configurations of basic random generation, and our clustering technique reduces the number of inspected failures on average 4X-8X. Using BALLERINA, we found three previously unknown bugs in Apache Pool and Log4j, one of which was already confirmed and fixed.", "title": "" }, { "docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.", "title": "" }, { "docid": "8a414e60b4a81da21d21d5bcfcff1ccf", "text": "We propose an efficient liver allocation system for allocating donated organs to patients waiting for transplantation, the only viable treatment for End-Stage Liver Disease. We optimize two metrics which are used to measure the efficiency: total quality adjusted life years and the number of organs wasted due to patients rejecting some organ offers. Our model incorporates the possibility that the patients may turn down the organ offers. 
Given the scarcity of available organs relative to the number of patients waiting for transplantation, we model the system as a multiclass fluid model of overloaded queues. The fluid model we advance captures the disease evolution over time by allowing the patients to switch between classes over time, e.g. patients waiting for transplantation may get sicker/better, or may die. We characterize the optimal solution to the fluid model using the duality framework for optimal control problems developed by Rockafellar (1970a). The optimal solution for assigning livers to patients is an intuitive dynamic index policy, where the indices depend on patients’ acceptance probabilities of the organ offer, immediate rewards, and the shadow prices calculated from the dual dynamical system. Finally, we perform a detailed simulation study to demonstrate the effectiveness of the proposed policy using data from the United Network for Organ Sharing System (UNOS).", "title": "" }, { "docid": "706b2948b19d15953809d2bdff4c04a3", "text": "The aim of image enhancement is to produce a processed image which is more suitable than the original image for specific application. Application can be edge detection, boundary detection, image fusion, segmentation etc. In this paper different types of image enhancement algorithms in spatial domain are presented for gray scale as well as for color images. Quantitative analysis like AMBE (Absolute mean brightness error), MSE (Mean square error) and PSNR (Peak signal to noise ratio) for the different algorithms are evaluated. For gray scale image Weighted histogram equalization, Linear contrast stretching (LCS), Non linear contrast stretching logarithmic (NLLCS), Non linear contrast stretching exponential (NLECS), Bi Histogram Equalization (BHE) algorithms are discussed and compared. For color image (RGB) Linear contrast stretching, Non linear contrast stretching logarithmic and Non linear contrast stretching exponential algorithms are discussed. During result analysis, it has been observed that some algorithms does give considerably highly distinct values (MSE or AMBE) for different images. To stabilize these parameters, had proposed the new enhancement scheme Local mean and local standard deviation (LMLS) which will take care of these issues. By experimental analysis It has been observed that proposed method gives better AMBE (should be less) and PSNR (should be high) values compared with other algorithms, also these values are not highly distinct for different images.", "title": "" }, { "docid": "e42805b57fa2f8f95d03fea8af2e8560", "text": "Models are used in a variety of fields, including land change science, to better understand the dynamics of systems, to develop hypotheses that can be tested empirically, and to make predictions and/or evaluate scenarios for use in assessment activities. Modeling is an important component of each of the three foci outlined in the science plan of the Land-use and -cover change (LUCC) project (Turner et al. 1995) of the International Geosphere-Biosphere Program (IGBP) and the International Human Dimensions Program (IHDP). In Focus 1, on comparative land-use dynamics, models are used to help improve our understanding of the dynamics of land-use that arise from human decision-making at all levels, households to nations. These models are supported by surveys and interviews of decision makers. Focus 2 emphasizes development of empirical diagnostic models based on aerial and satellite observations of spatial and temporal land-cover dynamics. 
Finally, Focus 3 focuses specifically on the development of models of land-use and -cover change (LUCC) that can be used for prediction and scenario generation in the context of integrative assessments of global change.", "title": "" }, { "docid": "e59f53449783b3b7aceef8ae3b43dae1", "text": "W E use the definitions of (11). However, in deference to some recent attempts to unify the terminology of graph theory we replace the term 'circuit' by 'polygon', and 'degree' by 'valency'. A graph G is 3-connected (nodally 3-connected) if it is simple and non-separable and satisfies the following condition; if G is the union of two proper subgraphs H and K such that HnK consists solely of two vertices u and v, then one of H and K is a link-graph (arc-graph) with ends u and v. It should be noted that the union of two proper subgraphs H and K of G can be the whole of G only if each of H and K includes at least one edge or vertex not belonging to the other. In this paper we are concerned mainly with nodally 3-connected graphs, but a specialization to 3-connected graphs is made in § 12. In § 3 we discuss conditions for a nodally 3-connected graph to be planar, and in § 5 we discuss conditions for the existence of Kuratowski subgraphs of a given graph. In §§ 6-9 we show how to obtain a convex representation of a nodally 3-connected graph, without Kuratowski subgraphs, by solving a set of linear equations. Some extensions of these results to general graphs, with a proof of Kuratowski's theorem, are given in §§ 10-11. In § 12 we discuss the representation in the plane of a pair of dual graphs, and in § 13 we draw attention to some unsolved problems.", "title": "" }, { "docid": "25f73f6a65d115443ef56b8d25527adc", "text": "Humans learn to speak before they can read or write, so why can’t computers do the same? In this paper, we present a deep neural network model capable of rudimentary spoken language acquisition using untranscribed audio training data, whose only supervision comes in the form of contextually relevant visual images. We describe the collection of our data comprised of over 120,000 spoken audio captions for the Places image dataset and evaluate our model on an image search and annotation task. We also provide some visualizations which suggest that our model is learning to recognize meaningful words within the caption spectrograms.", "title": "" }, { "docid": "c2dd0a4616bdb5931debaad1edf06a60", "text": "For polar codes with short-to-medium code length, list successive cancellation decoding is used to achieve a good error-correcting performance. However, list pruning in the current list decoding is based on the sorting strategy and its timing complexity is high. This results in a long decoding latency for large list size. In this work, aiming at a low-latency list decoding implementation, a double thresholding algorithm is proposed for a fast list pruning. As a result, with a negligible performance degradation, the list pruning delay is greatly reduced. Based on the double thresholding, a low-latency list decoding architecture is proposed and implemented using a UMC 90nm CMOS technology. Synthesis results show that, even for a large list size of 16, the proposed low-latency architecture achieves a decoding throughput of 220 Mbps at a frequency of 641 MHz.", "title": "" }, { "docid": "ec0733962301d6024da773ad9d0f636d", "text": "This paper focuses on the design, fabrication and characterization of unimorph actuators for a microaerial flapping mechanism. 
PZT-5H and PZN-PT are investigated as piezoelectric layers in the unimorph actuators. Design issues for microaerial flapping actuators are discussed, and criteria for the optimal dimensions of actuators are determined. For low power consumption actuation, a square wave based electronic driving circuit is proposed. Fabricated piezoelectric unimorphs are characterized by an optical measurement system in quasi-static and dynamic mode. Experimental performance of PZT5H and PZN-PT based unimorphs is compared with desired design specifications. A 1 d.o.f. flapping mechanism with a PZT-5H unimorph is constructed, and 180◦ stroke motion at 95 Hz is achieved. Thus, it is shown that unimorphs could be promising flapping mechanism actuators.", "title": "" }, { "docid": "2316e37df8796758c86881aaeed51636", "text": "Physical activity recognition using embedded sensors has enabled many context-aware applications in different areas, such as healthcare. Initially, one or more dedicated wearable sensors were used for such applications. However, recently, many researchers started using mobile phones for this purpose, since these ubiquitous devices are equipped with various sensors, ranging from accelerometers to magnetic field sensors. In most of the current studies, sensor data collected for activity recognition are analyzed offline using machine learning tools. However, there is now a trend towards implementing activity recognition systems on these devices in an online manner, since modern mobile phones have become more powerful in terms of available resources, such as CPU, memory and battery. The research on offline activity recognition has been reviewed in several earlier studies in detail. However, work done on online activity recognition is still in its infancy and is yet to be reviewed. In this paper, we review the studies done so far that implement activity recognition systems on mobile phones and use only their on-board sensors. We discuss various aspects of these studies. Moreover, we discuss their limitations and present various recommendations for future research.", "title": "" }, { "docid": "03b4b786ba40b4c631fe679b591880aa", "text": "The abundance of user-generated data in social media has incentivized the development of methods to infer the latent attributes of users, which are crucially useful for personalization, advertising and recommendation. However, the current user profiling approaches have limited success, due to the lack of a principled way to integrate different types of social relationships of a user, and the reliance on scarcely-available labeled data in building a prediction model. In this paper, we present a novel solution termed Collective Semi-Supervised Learning (CSL), which provides a principled means to integrate different types of social relationship and unlabeled data under a unified computational framework. The joint learning from multiple relationships and unlabeled data yields a computationally sound and accurate approach to model user attributes in social media. Extensive experiments using Twitter data have demonstrated the efficacy of our CSL approach in inferring user attributes such as account type and marital status. We also show how CSL can be used to determine important user features, and to make inference on a larger user population.", "title": "" } ]
scidocsrr
01ee25fe6322230fcf237832e9d3cb93
Using Eye Tracking to Trace a Cognitive Process : Gaze Behaviour During Decision Making in a Natural Environment
[ { "docid": "bd077cbf7785fc84e98724558832aaf6", "text": "Two process tracing techniques, explicit information search and verbal protocols, were used to examine the information processing strategies subjects use in reaching a decision. Subjects indicated preferences among apartments. The number of alternatives available and number of dimensions of information available was varied across sets of apartments. When faced with a two alternative situation, the subjects employed search strategies consistent with a compensatory decision process. In contrast, when faced with a more complex (multialternative) decision task, the subjects employed decision strategies designed to eliminate some of the available alternatives as quickly as possible and on the basis of a limited amount of information search and evaluation. The results demonstrate that the information processing leading to choice will vary as a function of task complexity. An integration of research in decision behavior with the methodology and theory of more established areas of cognitive psychology, such as human problem solving, is advocated.", "title": "" }, { "docid": "0d723c344ab5f99447f7ad2ff72c0455", "text": "The aim of this study was to determine the pattern of fixations during the performance of a well-learned task in a natural setting (making tea), and to classify the types of monitoring action that the eyes perform. We used a head-mounted eye-movement video camera, which provided a continuous view of the scene ahead, with a dot indicating foveal direction with an accuracy of about 1 deg. A second video camera recorded the subject's activities from across the room. The videos were linked and analysed frame by frame. Foveal direction was always close to the object being manipulated, and very few fixations were irrelevant to the task. The first object-related fixation typically led the first indication of manipulation by 0.56 s, and vision moved to the next object about 0.61 s before manipulation of the previous object was complete. Each object-related act that did not involve a waiting period lasted an average of 3.3 s and involved about 7 fixations. Roughly a third of all fixations on objects could be definitely identified with one of four monitoring functions: locating objects used later in the process, directing the hand or object in the hand to a new location, guiding the approach of one object to another (e.g. kettle and lid), and checking the state of some variable (e.g. water level). We conclude that although the actions of tea-making are 'automated' and proceed with little conscious involvement, the eyes closely monitor every step of the process. This type of unconscious attention must be a common phenomenon in everyday life.", "title": "" } ]
[ { "docid": "e6db8cbbb3f7bac211f672ffdef44fb6", "text": "This paper aims to develop a benchmarking framework that evaluates the cold chain performance of a company, reveals its strengths and weaknesses and finally identifies and prioritizes potential alternatives for continuous improvement. A Delphi-AHP-TOPSIS based methodology has divided the whole benchmarking into three stages. The first stage is Delphi method, where identification, synthesis and prioritization of key performance factors and sub-factors are done and a novel consistent measurement scale is developed. The second stage is Analytic Hierarchy Process (AHP) based cold chain performance evaluation of a selected company against its competitors, so as to observe cold chain performance of individual factors and sub-factors, as well as overall performance index. And, the third stage is Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) based assessment of possible alternatives for the continuous improvement of the company’s cold chain performance. Finally a demonstration of proposed methodology in a retail industry is presented for better understanding. The proposed framework can assist managers to comprehend the present strengths and weaknesses of their cold. They can identify good practices from the market leader and can benchmark them for improving weaknesses keeping in view the current operational conditions and strategies of the company. This framework also facilitates the decision makers to better understand the complex relationships of the relevant cold chain performance factors in decision-making. 2011 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "f6b974c04dceaea3176a0092304bab72", "text": "Information-Centric Networking (ICN) has recently emerged as a promising Future Internet architecture that aims to cope with the increasing demand for highly scalable and efficient distribution of content. Moving away from the Internet communication model based in addressable hosts, ICN leverages in-network storage for caching, multi-party communication through replication, and interaction models that decouple senders and receivers. This novel networking approach has the potential to outperform IP in several dimensions, besides just content dissemination. Concretely, the rise of the Internet of Things (IoT), with its rich set of challenges and requirements placed over the current Internet, provide an interesting ground for showcasing the contribution and performance of ICN mechanisms. This work analyses how the in-network caching mechanisms associated to ICN, particularly those implemented in the Content-Centric Networking (CCN) architecture, contribute in IoT environments, particularly in terms of energy consumption and bandwidth usage. A simulation comparing IP and the CCN architecture (an instantiation of ICN) in IoT environments demonstrated that CCN leads to a considerable reduction of the energy consumed by the information producers and to a reduction of bandwidth requirements, as well as highlighted the flexibility for adapting current ICN caching mechanisms to target specific requirements of IoT.", "title": "" }, { "docid": "14857144b52dbfb661d6ef4cd2c59b64", "text": "The candidate confirms that the work submitted is his/her own and that appropriate credit has been given where reference has been made to the work of others. 
i ACKNOWLEDGMENT I am truly indebted and thankful to my scholarship sponsor ―National Information Technology Development Agency (NITDA), Nigeria‖ for giving me the rare privilege to study at the University of Leeds. I am sincerely and heartily grateful to my supervisor Dr. Des McLernon for his valuable support, patience and guidance throughout the course of this dissertation. I am sure it would not have been possible without his help. I would like to express my deep gratitude to Romero-Zurita Nabil for his enthusiastic encouragement, useful critique, recommendation and providing me with great information resources. I also acknowledge my colleague Frempong Kwadwo for his invaluable suggestions and discussion. Finally, I would like to appreciate my parents for their support and encouragement throughout my study at Leeds. Above all, special thanks to God Almighty for the gift of life. ii DEDICATION This thesis is dedicated to family especially; to my parents for inculcating the importance of hardwork and higher education to Omobolanle for being a caring and loving sister. to Abimbola for believing in me.", "title": "" }, { "docid": "5b8a5a8c87acec59a5430cb5b28fb2e6", "text": "This paper investigates the problems of outliers and/or noise in surface segmentation and proposes a statistically robust segmentation algorithm for laser scanning 3-D point cloud data. Principal component analysis (PCA)-based local saliency features, e.g., normal and curvature, have been frequently used in many ways for point cloud segmentation. However, PCA is sensitive to outliers; saliency features from PCA are nonrobust and inaccurate in the presence of outliers; consequently, segmentation results can be erroneous and unreliable. As a remedy, robust techniques, e.g., RANdom SAmple Consensus (RANSAC), and/or robust versions of PCA (RPCA) have been proposed. However, RANSAC is influenced by the well-known swamping effect, and RPCA methods are computationally intensive for point cloud processing. We propose a region growing based robust segmentation algorithm that uses a recently introduced maximum consistency with minimum distance based robust diagnostic PCA (RDPCA) approach to get robust saliency features. Experiments using synthetic and laser scanning data sets show that the RDPCA-based method has an intrinsic ability to deal with outlier- and/or noise-contaminated data. Results for a synthetic data set show that RDPCA is 105 times faster than RPCA and gives more accurate and robust results when compared with other segmentation methods. Compared with RANSAC and RPCA based methods, RDPCA takes almost the same time as RANSAC, but RANSAC results are markedly worse than RPCA and RDPCA results. Coupled with a segment merging algorithm, the proposed method is efficient for huge volumes of point cloud data consisting of complex objects surfaces from mobile, terrestrial, and aerial laser scanning systems.", "title": "" }, { "docid": "88ea3f043b43a11a0a7d79e59a774c1f", "text": "The purpose of this paper is to present an alternative systems thinking–based perspective and approach to the requirements elicitation process in complex situations. Three broad challenges associated with the requirements engineering elicitation in complex situations are explored, including the (1) role of the system observer, (2) nature of system requirements in complex situations, and (3) influence of the system environment. 
Authors have asserted that the expectation of unambiguous, consistent, complete, understandable, verifiable, traceable, and modifiable requirements is not consistent with complex situations. In contrast, complex situations are an emerging design reality for requirements engineering processes, marked by high levels of ambiguity, uncertainty, and emergence. This paper develops the argument that dealing with requirements for complex situations requires a change in paradigm. The elicitation of requirements for simple and technically driven systems is appropriately accomplished by proven methods. In contrast, the elicitation of requirements in complex situations (e.g., integrated multiple critical infrastructures, system-of-systems, etc.) requires more holistic thinking and can be enhanced by grounding in systems theory.", "title": "" }, { "docid": "5495aeaa072a1f8f696298ebc7432045", "text": "Deep neural networks (DNNs) are widely used in data analytics, since they deliver state-of-the-art accuracies. Binarized neural networks (BNNs) are recently proposed optimized variant of DNNs. BNNs constraint network weight and/or neuron value to either +1 or −1, which is representable in 1 bit. This leads to dramatic algorithm efficiency improvement, due to reduction in the memory and computational demands. This paper evaluates the opportunity to further improve the execution efficiency of BNNs through hardware acceleration. We first proposed a BNN hardware accelerator design. Then, we implemented the proposed accelerator on Aria 10 FPGA as well as 14-nm ASIC, and compared them against optimized software on Xeon server CPU, Nvidia Titan X server GPU, and Nvidia TX1 mobile GPU. Our evaluation shows that FPGA provides superior efficiency over CPU and GPU. Even though CPU and GPU offer high peak theoretical performance, they are not as efficiently utilized since BNNs rely on binarized bit-level operations that are better suited for custom hardware. Finally, even though ASIC is still more efficient, FPGA can provide orders of magnitudes in efficiency improvements over software, without having to lock into a fixed ASIC solution.", "title": "" }, { "docid": "74ed962cbf02712f33dac9f901561cad", "text": "Leak detection in transmission pipelines is crucially important for safe operation. Delay in detecting leaks leads to loss of property and human life in fire hazards and loss of valuable material. Leaking of methane and hydrocarbon gas causes negative impacts on the eco system such as global warming and air pollution. Pipeline leak detection systems play a key role in minimization of the probability of occurrence of leaks and hence their impacts. Today there are many available technologies in the domain of leak detection. This paper provides an overview on external and internal leak detection and location systems and a summary of comparison regarding performance of each system.", "title": "" }, { "docid": "dfcc931d9cd7d084bbbcf400f44756a5", "text": "In this paper we address the problem of aligning very long (often more than one hour) audio files to their corresponding textual transcripts in an effective manner. We present an efficient recursive technique to solve this problem that works well even on noisy speech signals. The key idea of this algorithm is to turn the forced alignment problem into a recursive speech recognition problem with a gradually restricting dictionary and language model. The algorithm is tolerant to acoustic noise and errors or gaps in the text transcript or audio tracks. 
We report experimental results on a 3-hour audio file containing TV and radio broadcasts. We will show accurate alignments on speech under a variety of real acoustic conditions such as speech over music and speech over telephone lines. We also report results when the same audio stream has been corrupted with white additive noise or compressed using a popular web encoding format such as RealAudio. This algorithm has been used in our internal multimedia indexing project. It has processed more than 200 hours of audio from varied sources, such as WGBH NOVA documentaries and NPR web audio files. The system aligns speech media content in about one to five times real time, depending on the acoustic conditions of the audio signal.", "title": "" }, { "docid": "c678ea5e9bc8852ec80a8315a004c7f0", "text": "Educators, researchers, and policy makers have advocated student involvement for some time as an essential aspect of meaningful learning. In the past twenty years engineering educators have implemented several means of better engaging their undergraduate students, including active and cooperative learning, learning communities, service learning, cooperative education, inquiry and problem-based learning, and team projects. This paper focuses on classroom-based pedagogies of engagement, particularly cooperative and problem-based learning. It includes a brief history, theoretical roots, research support, summary of practices, and suggestions for redesigning engineering classes and programs to include more student engagement. The paper also lays out the research ahead for advancing pedagogies aimed at more fully enhancing students’ involvement in their learning.", "title": "" }, { "docid": "f8d554c215cc40ddc71171b3f266c43a", "text": "Nowadays, Edge computing allows application intelligence to be pushed to the boundaries of a network in order to get high-performance processing closer to both data sources and end-users. In this scenario, the Horizon 2020 BEACON project - enabling federated Cloud-networking - can be used to set up Fog computing environments where applications can be deployed in order to instantiate Edge computing applications. In this paper, we focus on the deployment orchestration of Edge computing distributed services on such fog computing environments. We assume that a distributed service is composed of many microservices. Users, by means of geolocation deployment constraints, can select regions in which microservices will be deployed. Specifically, we present an Orchestration Broker that, starting from an ad-hoc OpenStack-based Heat Orchestration Template (HOT) service manifest of an Edge computing distributed service, produces several HOT microservice manifests including the deployment instructions for each involved Fog computing node. Experiments demonstrate the effectiveness of our approach.", "title": "" }, { "docid": "a3a373130b5c602022449919dcc81f98", "text": "We describe a method for registering and super-resolving moving vehicles from aerial surveillance video. The challenge of vehicle super-resolution lies in the fact that vehicles may be very small and thus frame-to-frame registration does not offer enough constraints to yield registration with sub-pixel accuracy. To overcome this, we first register the large-scale image backgrounds and then, relative to the background registration, register the small-scale moving vehicle over all frames simultaneously using a vehicle motion model. 
To solve for the vehicle motion parameters we optimize a cost function that incorporates both vehicle appearance and background appearance consistency. Once this process accurately registers a moving vehicle, it is super-resolved. We apply both a frequency domain and a spatial domain approach. The frequency domain approach can be used when the final registered vehicle motion is well approximated by shifts in the image plane. The robust regularized spatial domain approach handles all cases of vehicle motion.", "title": "" }, { "docid": "c02865dab28db59a22b972d570c2929a", "text": "............................................................................................................................. iii Table of", "title": "" }, { "docid": "b2a670d90d53825c53d8ce0082333db6", "text": "Social media platforms facilitate the emergence of citizen communities that discuss real-world events. Their content reflects a variety of intent ranging from social good (e.g., volunteering to help) to commercial interest (e.g., criticizing product features). Hence, mining intent from social data can aid in filtering social media to support organizations, such as an emergency management unit for resource planning. However, effective intent mining is inherently challenging due to ambiguity in interpretation, and sparsity of relevant behaviors in social data. In this paper, we address the problem of multiclass classification of intent with a use-case of social data generated during crisis events. Our novel method exploits a hybrid feature representation created by combining top-down processing using knowledge-guided patterns with bottom-up processing using a bag-of-tokens model. We employ pattern-set creation from a variety of knowledge sources including psycholinguistics to tackle the ambiguity challenge, social behavior about conversations to enrich context, and contrast patterns to tackle the sparsity challenge. Our results show a significant absolute gain up to 7% in the F1 score relative to a baseline using bottom-up processing alone, within the popular multiclass frameworks of One-vs-One and One-vs-All. Intent mining can help design efficient cooperative information systems between citizens and organizations for serving organizational information needs.", "title": "" }, { "docid": "9b02cd39293b2f2fb74de14ea3cdd67b", "text": "Convolutional Neural Networks (CNNs) have been widely used in computer vision tasks, such as face recognition, and have achieved state-of-the-art results due to their ability to learn discriminative deep features. Conventionally, CNNs have been trained with Softmax as supervision signal to penalize the classification loss. In order to further enhance discriminative capability of deep features, we introduced a joint supervision signal, Git loss, which leverages on Softmax and Center loss functions. The aim of our loss function is to minimizes the intra-class variances as well as maximizes the interclass distances. Such minimization and maximization of deep features are considered ideal for face recognition task. 
Results obtained on two popular face recognition benchmarks datasets show that our proposed loss function achieves maximum separability between deep face features of different identities and achieves state-of-the-art accuracy on two major face recognition benchmark datasets: Labeled Faces in the Wild (LFW) and YouTube Faces (YTF).", "title": "" }, { "docid": "81534e94c4d5714fadd7de63d7f3f631", "text": "OBJECTIVES\nSocial capital has been studied due to its contextual influence on health. However, no specific assessment tool has been developed and validated for the measurement of social capital among 12-year-old adolescent students. The aim of the present study was to develop and validate a quick, simple assessment tool to measure social capital among adolescent students.\n\n\nMETHODS\nA questionnaire was developed based on a review of relevant literature. For such, searches were made of the Scientific Electronic Library Online, Latin American and Caribbean Health Sciences, The Cochrane Library, ISI Web of Knowledge, International Database for Medical Literature and PubMed Central bibliographical databases from September 2011 to January 2014 for papers addressing assessment tools for the evaluation of social capital. Focus groups were also formed by adolescent students as well as health, educational and social professionals. The final assessment tool was administered to a convenience sample from two public schools (79 students) and one private school (22 students), comprising a final sample of 101 students. Reliability and internal consistency were evaluated using the Kappa coefficient and Cronbach's alpha coefficient, respectively. Content validity was determined by expert consensus as well as exploratory and confirmatory factor analysis.\n\n\nRESULTS\nThe final version of the questionnaire was made up of 12 items. The total scale demonstrated very good internal consistency (Cronbach's alpha: 0.71). Reproducibility was also very good, as the Kappa coefficient was higher than 0.72 for the majority of items (range: 0.63 to 0.97). Factor analysis grouped the 12 items into four subscales: School Social Cohesion, School Friendships, Neighborhood Social Cohesion and Trust (school and neighborhood).\n\n\nCONCLUSIONS\nThe present findings indicate the validity and reliability of the Social Capital Questionnaire for Adolescent Students.", "title": "" }, { "docid": "768a8cfff3f127a61f12139466911a94", "text": "The metabolism of NAD has emerged as a key regulator of cellular and organismal homeostasis. Being a major component of both bioenergetic and signaling pathways, the molecule is ideally suited to regulate metabolism and major cellular events. In humans, NAD is synthesized from vitamin B3 precursors, most prominently from nicotinamide, which is the degradation product of all NAD-dependent signaling reactions. The scope of NAD-mediated regulatory processes is wide including enzyme regulation, control of gene expression and health span, DNA repair, cell cycle regulation and calcium signaling. In these processes, nicotinamide is cleaved from NAD(+) and the remaining ADP-ribosyl moiety used to modify proteins (deacetylation by sirtuins or ADP-ribosylation) or to generate calcium-mobilizing agents such as cyclic ADP-ribose. 
This review will also emphasize the role of the intermediates in the NAD metabolome, their intra- and extra-cellular conversions and potential contributions to subcellular compartmentalization of NAD pools.", "title": "" }, { "docid": "3e5ae0b370b98185d95b428be727d1a8", "text": "A 40-Gb/s receiver includes a continuous-time linear equalizer, a discrete-time linear equalizer, a two-tap decision-feedback equalizer, a clock and data recovery circuit, and a one-to-four deserializer. Hardware minimization and charge steering techniques are extensively used to reduce the power consumption by a factor of ten. Fabricated in 45-nm CMOS technology, the receiver exhibits a bathtub curve opening of 0.28 UI with a recovered clock jitter of 0.5 psrms.", "title": "" }, { "docid": "31e558e1d306e204bfa64121749b75fc", "text": "Experimental results in psychology have shown the important role of manipulation in guiding infant development. This has inspired work in developmental robotics as well. In this case, however, the benefits of this approach have been limited by the intrinsic difficulties of the task. Controlling the interaction between the robot and the environment in a meaningful and safe way is hard especially when little prior knowledge is available. We push the idea that haptic feedback can enhance the way robots interact with unmodeled environments. We approach grasping and manipulation as tasks driven mainly by tactile and force feedback. We implemented a grasping behavior on a robotic platform with sensitive tactile sensors and compliant actuators; the behavior allows the robot to grasp objects placed on a table. Finally, we demonstrate that the haptic feedback originated by the interaction with the objects carries implicit information about their shape and can be useful for learning.", "title": "" }, { "docid": "aa7026774074ed81dd7836ef6dc44334", "text": "To improve safety on the roads, next-generation vehicles will be equipped with short-range communication technologies. Many applications enabled by such communication will be based on a continuous broadcast of information about the own status from each vehicle to the neighborhood, often referred as cooperative awareness or beaconing. Although the only standardized technology allowing direct vehicle-to-vehicle (V2V) communication has been IEEE 802.11p until now, the latest release of long-term evolution (LTE) included advanced device-to-device features designed for the vehicular environment (LTE-V2V) making it a suitable alternative to IEEE 802.11p. Advantages and drawbacks are being considered for both technologies, and which one will be implemented is still under debate. The aim of this paper is thus to provide an insight into the performance of both technologies for cooperative awareness and to compare them. The investigation is performed analytically through the implementation of novel models for both IEEE 802.11p and LTE-V2V able to address the same scenario, with consistent settings and focusing on the same output metrics. The proposed models take into account several aspects that are often neglected by related works, such as hidden terminals and capture effect in IEEE 802.11p, the impact of imperfect knowledge of vehicles position on the resource allocation in LTE-V2V, and the various modulation and coding scheme combinations that are available in both technologies. Results show that LTE-V2V allows us to maintain the required quality of service at even double or more the distance than IEEE 802.11p in moderate traffic conditions. 
However, due to the half-duplex nature of devices and the structure of LTE frames, it shows lower capacity than IEEE 802.11p if short distances and very high vehicle density are targeted.", "title": "" } ]
scidocsrr
a88aaf49001e63adafce5bd5554b17df
Democratizing Production-Scale Distributed Deep Learning
[ { "docid": "3435041805c5cb2629d70ff909c10637", "text": "Synchronized stochastic gradient descent (SGD) optimizers with data parallelism are widely used in training large-scale deep neural networks. Although using larger mini-batch sizes can improve the system scalability by reducing the communication-to-computation ratio, it may hurt the generalization ability of the models. To this end, we build a highly scalable deep learning training system for dense GPU clusters with three main contributions: (1) We propose a mixed-precision training method that significantly improves the training throughput of a single GPU without losing accuracy. (2) We propose an optimization approach for extremely large minibatch size (up to 64k) that can train CNN models on the ImageNet dataset without losing accuracy. (3) We propose highly optimized all-reduce algorithms that achieve up to 3x and 11x speedup on AlexNet and ResNet-50 respectively than NCCL-based training on a cluster with 1024 Tesla P40 GPUs. On training ResNet-50 with 90 epochs, the state-of-the-art GPU-based system with 1024 Tesla P100 GPUs spent 15 minutes and achieved 74.9% top-1 test accuracy, and another KNL-based system with 2048 Intel KNLs spent 20 minutes and achieved 75.4% accuracy. Our training system can achieve 75.8% top-1 test accuracy in only 6.6 minutes using 2048 Tesla P40 GPUs. When training AlexNet with 95 epochs, our system can achieve 58.7% top-1 test accuracy within 4 minutes, which also outperforms all other existing systems.", "title": "" } ]
[ { "docid": "2281d739c6858d35eb5f3650d2d03474", "text": "We discuss an implementation of the RRT* optimal motion planning algorithm for the half-car dynamical model to enable autonomous high-speed driving. To develop fast solutions of the associated local steering problem, we observe that the motion of a special point (namely, the front center of oscillation) can be modeled as a double integrator augmented with fictitious inputs. We first map the constraints on tire friction forces to constraints on these augmented inputs, which provides instantaneous, state-dependent bounds on the curvature of geometric paths feasibly traversable by the front center of oscillation. Next, we map the vehicle's actual inputs to the augmented inputs. The local steering problem for the half-car dynamical model can then be transformed to a simpler steering problem for the front center of oscillation, which we solve efficiently by first constructing a curvature-bounded geometric path and then imposing a suitable speed profile on this geometric path. Finally, we demonstrate the efficacy of the proposed motion planner via numerical simulation results.", "title": "" }, { "docid": "c9d833d872ab0550edb0aa26565ac76b", "text": "In this paper we investigate the potential of the neural machine translation (NMT) when taking into consideration the linguistic aspect of target language. From this standpoint, the NMT approach with attention mechanism [1] is extended in order to produce several linguistically derived outputs. We train our model to simultaneously output the lemma and its corresponding factors (e.g. part-of-speech, gender, number). The word level translation is built with a mapping function using a priori linguistic information. Compared to the standard NMT system, factored architecture increases significantly the vocabulary coverage while decreasing the number of unknown words. With its richer architecture, the Factored NMT approach allows us to implement several training setup that will be discussed in detail along this paper. On the IWSLT’15 English-to-French task, FNMT model outperforms NMT model in terms of BLEU score. A qualitative analysis of the output on a set of test sentences shows the effectiveness of the FNMT model.", "title": "" }, { "docid": "67421eaa6f719f37fd91407714ba2a2d", "text": "With the widespread use of online shopping in recent years, consumer search requests for products have become more diverse. Previous web search methods have used adjectives as input by consumers. However, given that the number of adjectives that can be used to express textures is limited, it is debatable whether adjectives are capable of richly expressing variations of product textures. In Japanese, tactile experiences are easily and frequently expressed by onomatopoeia, such as “ fuwa-fuwa” which indicates a soft and light sensation. Onomatopoeia are useful for understanding not only material textures but also a user’s intuitive, sensitive, and even ambiguous feelings evoked by materials. In this study, we propose a system to recommend products corresponding to product textures associated with Japanese onomatopoeia based on their symbolic sound associations between the onomatopoeia phonemes and the texture sensations. Our system quantitatively estimates the texture sensations of onomatopoeia input by users, and calculates the similarities between the users’ impressions of the onomatopoeia and those of product pictures. Our system also suggests products which best match the entered onomatopoeia. 
An evaluation of our method revealed that the best performance was achieved when the SIFT features, the colors of product pictures, and text describing product pictures were used; Specifically, precision was 66 for the top 15 search results. Our system is expected to contribute to online shopping activity as an intuitive product recommendation system.", "title": "" }, { "docid": "992d71459b616bfe72845493a6f8f910", "text": "Finding patterns and trends in spatial and temporal datasets has been a long studied problem in statistics and different domains of science. This paper presents a visual analytics approach for the interactive exploration and analysis of spatiotemporal correlations among multivariate datasets. Our approach enables users to discover correlations and explore potentially causal or predictive links at different spatiotemporal aggregation levels among the datasets, and allows them to understand the underlying statistical foundations that precede the analysis. Our technique utilizes the Pearson's product-moment correlation coefficient and factors in the lead or lag between different datasets to detect trends and periodic patterns amongst them.", "title": "" }, { "docid": "ecbd9201a7f8094a02fcec2c4f78240d", "text": "Neural network compression has recently received much attention due to the computational requirements of modern deep models. In this work, our objective is to transfer knowledge from a deep and accurate model to a smaller one. Our contributions are threefold: (i) we propose an adversarial network compression approach to train the small student network to mimic the large teacher, without the need for labels during training; (ii) we introduce a regularization scheme to prevent a trivially-strong discriminator without reducing the network capacity and (iii) our approach generalizes on different teacher-student models. In an extensive evaluation on five standard datasets, we show that our student has small accuracy drop, achieves better performance than other knowledge transfer approaches and it surpasses the performance of the same network trained with labels. In addition, we demonstrate state-ofthe-art results compared to other compression strategies.", "title": "" }, { "docid": "77059bf4b66792b4f34bc78bbb0b373a", "text": "Cloud computing systems host most of today's commercial business applications yielding it high revenue which makes it a target of cyber attacks. This emphasizes the need for a digital forensic mechanism for the cloud environment. Conventional digital forensics cannot be directly presented as a cloud forensic solution due to the multi tenancy and virtualization of resources prevalent in cloud. While we do cloud forensics, the data to be inspected are cloud component logs, virtual machine disk images, volatile memory dumps, console logs and network captures. In this paper, we have come up with a remote evidence collection and pre-processing framework using Struts and Hadoop distributed file system. Collection of VM disk images, logs etc., are initiated through a pull model when triggered by the investigator, whereas cloud node periodically pushes network captures to HDFS. Pre-processing steps such as clustering and correlation of logs and VM disk images are carried out through Mahout and Weka to implement cross drive analysis.", "title": "" }, { "docid": "081347f2376f4e4061ea5009af137ca7", "text": "The Internet of things can be defined as to make the “things” belong to the Internet. 
However, many wonder if the current Internet can support such a challenge. For this and other reasons, hundreds of worldwide initiatives to redesign the Internet are underway. This article discusses the perspectives, challenges and opportunities behind a future Internet that fully supports the “things”, as well as how the “things” can help in the design of a more synergistic future Internet. Keywords–Internet of things, smart things, future Internet, software-defined networking, service-centrism, informationcentrism, ID/Loc splitting, security, privacy, trust.", "title": "" }, { "docid": "d21ec7373565211670a0b43f6e39cd90", "text": "In this paper, resonant tank design procedure and practical design considerations are presented for a high performance LLC multiresonant dc-dc converter in a two-stage smart battery charger for neighborhood electric vehicle applications. The multiresonant converter has been analyzed and its performance characteristics are presented. It eliminates both low- and high-frequency current ripple on the battery, thus maximizing battery life without penalizing the volume of the charger. Simulation and experimental results are presented for a prototype unit converting 390 V from the input dc link to an output voltage range of 48-72 V dc at 650 W. The prototype achieves a peak efficiency of 96%.", "title": "" }, { "docid": "f975a1fa2905f8ae42ced1f13a88a15b", "text": "This paper presents a new method of detecting and tracking the boundaries of drivable regions in road without road-markings. As unmarked roads connect residential places to public roads, the capability of autonomously driving on such a roadway is important to truly realize self-driving cars in daily driving scenarios. To detect the left and right boundaries of drivable regions, our method first examines the image region at the front of ego-vehicle and then uses the appearance information of that region to identify the boundary of the drivable region from input images. Due to variation in the image acquisition condition, the image features necessary for boundary detection may not be present. When this happens, a boundary detection algorithm working frame-by-frame basis would fail to successfully detect the boundaries. To effectively handle these cases, our method tracks, using a Bayes filter, the detected boundaries over frames. Experiments using real-world videos show promising results.", "title": "" }, { "docid": "00100476074a90ecb616308b63a128e8", "text": "We propose a novel approach for unsupervised zero-shot learning (ZSL) of classes based on their names. Most existing unsupervised ZSL methods aim to learn a model for directly comparing image features and class names. However, this proves to be a difficult task due to dominance of non-visual semantics in underlying vector-space embeddings of class names. To address this issue, we discriminatively learn a word representation such that the similarities between class and combination of attribute names fall in line with the visual similarity. Contrary to the traditional zero-shot learning approaches that are built upon attribute presence, our approach bypasses the laborious attributeclass relation annotations for unseen classes. In addition, our proposed approach renders text-only training possible, hence, the training can be augmented without the need to collect additional image data. 
The experimental results show that our method yields state-of-the-art results for unsupervised ZSL in three benchmark datasets.", "title": "" }, { "docid": "ce650daedc7ba277d245a2150062775f", "text": "Amongst the large number of write-and-throw-away-spreadsheets developed for one-time use there is a rather neglected proportion of spreadsheets that are huge, periodically used, and submitted to regular update-cycles like any conventionally evolving valuable legacy application software. However, due to the very nature of spreadsheets, their evolution is particularly tricky and therefore error-prone. In our effort to develop tools and methodologies to improve spreadsheet quality, we analysed consolidation spreadsheets of an internationally operating company for the errors they contain. The paper presents the results of the field audit, involving 78 spreadsheets with 60,446 non-empty cells. As a by-product, the study performed also served to validate our analysis tools in an industrial context. The evaluated auditing tool offers the auditor a new view on the formula structure of the spreadsheet by grouping similar formulas into equivalence classes. Our auditing approach defines three similarity criteria between formulae, namely copy, logical and structural equivalence. To improve the visualization of large spreadsheets, equivalences and data dependencies are displayed in separate windows that are interlinked with the spreadsheet. The auditing approach helps to find irregularities in the geometrical pattern of similar formulas.", "title": "" }, { "docid": "f7252ab3871dfae3860f575515867db6", "text": "This review paper deals with how IoT can be used to improve the cultivation of food crops. A great deal of research is under way on monitoring the full food crop cycle, because from the start until harvesting farmers face great difficulty in achieving better crop yields. Although a few initiatives have been taken by the Indian Government to provide online and mobile messaging services to farmers for agricultural queries and agro-vendor information, such information alone is not enough for farmers, so considerable research still needs to be carried out on current agricultural approaches. Continuous sensing and monitoring of crops, through the convergence of sensors with IoT, can keep farmers periodically aware of crop growth and harvest time, in turn raising crop productivity and ensuring correct delivery of produce to end consumers at the right place and the right time.", "title": "" }, { "docid": "9326b7c1bd16e7db931131f77aaad687", "text": "We argue in this article that many common adverbial phrases generally taken to signal a discourse relation between syntactically connected units within discourse structure instead work anaphorically to contribute relational meaning, with only indirect dependence on discourse structure. This allows a simpler discourse structure to provide scaffolding for compositional semantics and reveals multiple ways in which the relational meaning conveyed by adverbial connectives can interact with that associated with discourse structure. 
We conclude by sketching out a lexicalized grammar for discourse that facilitates discourse interpretation as a product of compositional rules, anaphor resolution, and inference.", "title": "" }, { "docid": "4248fb006221fbb74d565705dcbc5a7a", "text": "Shot boundary detection (SBD) is an important and fundamental step in video content analysis such as content-based video indexing, browsing, and retrieval. In this paper, a hybrid SBD method is presented by integrating a high-level fuzzy Petri net (HLFPN) model with keypoint matching. The HLFPN model with histogram difference is executed as a predetection. Next, the speeded-up robust features (SURF) algorithm that is reliably robust to image affine transformation and illumination variation is used to figure out all possible false shots and the gradual transition based on the assumption from the HLFPN model. The top-down design can effectively lower down the computational complexity of SURF algorithm. The proposed approach has increased the precision of SBD and can be applied in different types of videos.", "title": "" }, { "docid": "bcf55ba5534aca41cefddb6f4b0b4d22", "text": "In a point-to-point wireless fading channel, multiple transmit and receive antennas can be used to improve the reliability of reception (diversity gain) or increase the rate of communication for a fixed reliability level (multiplexing gain). In a multiple-access situation, multiple receive antennas can also be used to spatially separate signals from different users (multiple-access gain). Recent work has characterized the fundamental tradeoff between diversity and multiplexing gains in the point-to-point scenario. In this paper, we extend the results to a multiple-access fading channel. Our results characterize the fundamental tradeoff between the three types of gain and provide insights on the capabilities of multiple antennas in a network context.", "title": "" }, { "docid": "7fbc3820c259d9ea58ecabaa92f8c875", "text": "The use of digital imaging devices, ranging from professional digital cinema cameras to consumer grade smartphone cameras, has become ubiquitous. The acquired image is a degraded observation of the unknown latent image, while the degradation comes from various factors such as noise corruption, camera shake, object motion, resolution limit, hazing, rain streaks, or a combination of them. Image restoration (IR), as a fundamental problem in image processing and low-level vision, aims to reconstruct the latent high-quality image from its degraded observation. Image degradation is, in general, irreversible, and IR is a typical ill-posed inverse problem. Due to the large space of natural image contents, prior information on image structures is crucial to regularize the solution space and produce a good estimation of the latent image. Image prior modeling and learning then are key issues in IR research. This lecture note describes the development of image prior modeling and learning techniques, including sparse representation models, low-rank models, and deep learning models.", "title": "" }, { "docid": "0e68fa08edfc2dcb52585b13d0117bf1", "text": "Knowledge graphs contain knowledge about the world and provide a structured representation of this knowledge. Current knowledge graphs contain only a small subset of what is true in the world. Link prediction approaches aim at predicting new links for a knowledge graph given the existing links among the entities. Tensor factorization approaches have proved promising for such link prediction problems. 
Proposed in 1927, Canonical Polyadic (CP) decomposition is among the first tensor factorization approaches. CP generally performs poorly for link prediction as it learns two independent embedding vectors for each entity, whereas they are really tied. We present a simple enhancement of CP (which we call SimplE) to allow the two embeddings of each entity to be learned dependently. The complexity of SimplE grows linearly with the size of embeddings. The embeddings learned through SimplE are interpretable, and certain types of background knowledge can be incorporated into these embeddings through weight tying. We prove SimplE is fully expressive and derive a bound on the size of its embeddings for full expressivity. We show empirically that, despite its simplicity, SimplE outperforms several state-of-the-art tensor factorization techniques. SimplE’s code is available on GitHub at https://github.com/Mehran-k/SimplE.", "title": "" }, { "docid": "09bfe483e80464d0116bda5ec57c7d66", "text": "The problem of distance-based outlier detection is difficult to solve efficiently in very large datasets because of potential quadratic time complexity. We address this problem and develop sequential and distributed algorithms that are significantly more efficient than state-of-the-art methods while still guaranteeing the same outliers. By combining simple but effective indexing and disk block accessing techniques, we have developed a sequential algorithm iOrca that is up to an order-of-magnitude faster than the state-of-the-art. The indexing scheme is based on sorting the data points in order of increasing distance from a fixed reference point and then accessing those points based on this sorted order. To speed up the basic outlier detection technique, we develop two distributed algorithms (DOoR and iDOoR) for modern distributed multi-core clusters of machines, connected on a ring topology. The first algorithm passes data blocks from each machine around the ring, incrementally updating the nearest neighbors of the points passed. By maintaining a cutoff threshold, it is able to prune a large number of points in a distributed fashion. The second distributed algorithm extends this basic idea with the indexing scheme discussed earlier. In our experiments, both distributed algorithms exhibit significant improvements compared to the state-of-the-art distributed method [13].", "title": "" }, { "docid": "2a41af8ad6000163951b9e7399ce7444", "text": "Accurate location of the endpoints of an isolated word is important for reliable and robust word recognition. The endpoint detection problem is nontrivial for nonstationary backgrounds where artifacts (i.e., nonspeech events) may be introduced by the speaker, the recording environment, and the transmission system. Several techniques for the detection of the endpoints of isolated words recorded over a dialed-up telephone line were studied. The techniques were broadly classified as either explicit, implicit, or hybrid in concept. The explicit techniques for endpoint detection locate the endpoints prior to and independent of the recognition and decision stages of the system. For the implicit methods, the endpoints are determined solely by the recognition and decision stages Of the system, i.e., there is no separate stage for endpoint detection. The hybrid techniques incorporate aspects from both the explicit and implicit methods. 
Investigations showed that the hybrid techniques consistently provided the best estimates for both of the word endpoints and, correspondingly, the highest recognition accuracy of the three classes studied. A hybrid endpoint detector is proposed which gives a rejection rate of less than 0.5 percent, while providing recognition accuracy close to that obtained from hand-edited endpoints.", "title": "" }, { "docid": "57b35e32b92b54fc1ea7724e73b26f39", "text": "The authors examined relations between the Big Five personality traits and academic outcomes, specifically SAT scores and grade-point average (GPA). Openness was the strongest predictor of SAT verbal scores, and Conscientiousness was the strongest predictor of both high school and college GPA. These relations replicated across 4 independent samples and across 4 different personality inventories. Further analyses showed that Conscientiousness predicted college GPA, even after controlling for high school GPA and SAT scores, and that the relation between Conscientiousness and college GPA was mediated, both concurrently and longitudinally, by increased academic effort and higher levels of perceived academic ability. The relation between Openness and SAT verbal scores was independent of academic achievement and was mediated, both concurrently and longitudinally, by perceived verbal intelligence. Together, these findings show that personality traits have independent and incremental effects on academic outcomes, even after controlling for traditional predictors of those outcomes. ((c) 2007 APA, all rights reserved).", "title": "" } ]
scidocsrr
64fb7af3a0293707c72f34f8fedd7fe5
Algorithmic Bias: From Discrimination Discovery to Fairness-aware Data Mining
[ { "docid": "18a524545090542af81e0a66df3a1395", "text": "What does it mean for an algorithm to be biased? In U.S. law, unintentional bias is encoded via disparate impact, which occurs when a selection process has widely different outcomes for different groups, even as it appears to be neutral. This legal determination hinges on a definition of a protected class (ethnicity, gender) and an explicit description of the process.\n When computers are involved, determining disparate impact (and hence bias) is harder. It might not be possible to disclose the process. In addition, even if the process is open, it might be hard to elucidate in a legal setting how the algorithm makes its decisions. Instead of requiring access to the process, we propose making inferences based on the data it uses.\n We present four contributions. First, we link disparate impact to a measure of classification accuracy that while known, has received relatively little attention. Second, we propose a test for disparate impact based on how well the protected class can be predicted from the other attributes. Third, we describe methods by which data might be made unbiased. Finally, we present empirical evidence supporting the effectiveness of our test for disparate impact and our approach for both masking bias and preserving relevant information in the data. Interestingly, our approach resembles some actual selection practices that have recently received legal scrutiny.", "title": "" }, { "docid": "6c9acb831bc8dc82198aef10761506be", "text": "In the context of civil rights law, discrimination refers to unfair or unequal treatment of people based on membership to a category or a minority, without regard to individual merit. Rules extracted from databases by data mining techniques, such as classification or association rules, when used for decision tasks such as benefit or credit approval, can be discriminatory in the above sense. In this paper, the notion of discriminatory classification rules is introduced and studied. Providing a guarantee of non-discrimination is shown to be a non trivial task. A naive approach, like taking away all discriminatory attributes, is shown to be not enough when other background knowledge is available. Our approach leads to a precise formulation of the redlining problem along with a formal result relating discriminatory rules with apparently safe ones by means of background knowledge. An empirical assessment of the results on the German credit dataset is also provided.", "title": "" } ]
[ { "docid": "4835360fec2ca50355d71f0d0ba76cbc", "text": "The surge in global population is compelling a shift toward smart agriculture practices. This coupled with the diminishing natural resources, limited availability of arable land, increase in unpredictable weather conditions makes food security a major concern for most countries. As a result, the use of Internet of Things (IoT) and data analytics (DA) are employed to enhance the operational efficiency and productivity in the agriculture sector. There is a paradigm shift from use of wireless sensor network (WSN) as a major driver of smart agriculture to the use of IoT and DA. The IoT integrates several existing technologies, such as WSN, radio frequency identification, cloud computing, middleware systems, and end-user applications. In this paper, several benefits and challenges of IoT have been identified. We present the IoT ecosystem and how the combination of IoT and DA is enabling smart agriculture. Furthermore, we provide future trends and opportunities which are categorized into technological innovations, application scenarios, business, and marketability.", "title": "" }, { "docid": "2f0eb4a361ff9f09bda4689a1f106ff2", "text": "The growth of Quranic digital publishing increases the need to develop a better framework to authenticate Quranic quotes with the original source automatically. This paper aims to demonstrate the significance of the quote authentication approach. We propose an approach to verify the e-citation of the Quranic quote as compared with original texts from the Quran. In this paper, we will concentrate mainly on discussing the Algorithm to verify the fundamental text for Quranic quotes.", "title": "" }, { "docid": "cfb665d0ca71289a4da834584604250b", "text": "This work is motivated by the engineering task of achieving a near state-of-the-art face recognition on a minimal computing budget running on an embedded system. Our main technical contribution centers around a novel training method, called Multibatch, for similarity learning, i.e., for the task of generating an invariant “face signature” through training pairs of “same” and “not-same” face images. The Multibatch method first generates signatures for a mini-batch of k face images and then constructs an unbiased estimate of the full gradient by relying on all k2 k pairs from the mini-batch. We prove that the variance of the Multibatch estimator is bounded by O(1/k2), under some mild conditions. In contrast, the standard gradient estimator that relies on random k/2 pairs has a variance of order 1/k. The smaller variance of the Multibatch estimator significantly speeds up the convergence rate of stochastic gradient descent. Using the Multibatch method we train a deep convolutional neural network that achieves an accuracy of 98.2% on the LFW benchmark, while its prediction runtime takes only 30msec on a single ARM Cortex A9 core. Furthermore, the entire training process took only 12 hours on a single Titan X GPU.", "title": "" }, { "docid": "0c9a76222f885b95f965211e555e16cd", "text": "In this paper we address the following question: “Can we approximately sample from a Bayesian posterior distribution if we are only allowed to touch a small mini-batch of data-items for every sample we generate?”. An algorithm based on the Langevin equation with stochastic gradients (SGLD) was previously proposed to solve this, but its mixing rate was slow. 
By leveraging the Bayesian Central Limit Theorem, we extend the SGLD algorithm so that at high mixing rates it will sample from a normal approximation of the posterior, while for slow mixing rates it will mimic the behavior of SGLD with a pre-conditioner matrix. As a bonus, the proposed algorithm is reminiscent of Fisher scoring (with stochastic gradients) and as such an efficient optimizer during burn-in.", "title": "" }, { "docid": "4667b31c7ee70f7bc3709fc40ec6140f", "text": "This article presents a method for rectifying and stabilising video from cell-phones with rolling shutter (RS) cameras. Due to size constraints, cell-phone cameras have constant, or near constant focal length, making them an ideal application for calibrated projective geometry. In contrast to previous RS rectification attempts that model distortions in the image plane, we model the 3D rotation of the camera. We parameterise the camera rotation as a continuous curve, with knots distributed across a short frame interval. Curve parameters are found using non-linear least squares over inter-frame correspondences from a KLT tracker. By smoothing a sequence of reference rotations from the estimated curve, we can, at a small extra cost, obtain a high-quality image stabilisation. Using synthetic RS sequences with associated ground-truth, we demonstrate that our rectification improves over two other methods. We also compare our video stabilisation with the methods in iMovie and Deshaker.", "title": "" }, { "docid": "f88235f1056d66c5dc188fcf747bf570", "text": "In this paper, we compare the differences between the traditional Kelly Criterion and Vince's optimal f through backtesting actual financial transaction data. We apply a momentum trading strategy to the Taiwan Weighted Index Futures, and analyze the profit-and-loss vectors of the Kelly Criterion and Vince's optimal f, respectively. Our numerical experiments demonstrate that there is nearly a 90% chance that the gap between the bet ratio recommended by the Kelly Criterion and Vince's optimal f lies within 2%. Therefore, in actual transactions, the values from the Kelly Criterion could be taken directly as the optimal bet ratio for funds control.", "title": "" }, { "docid": "c5928a67d0b8a6a1c40b7cad6ac03d16", "text": "Drug addiction represents a dramatic dysregulation of motivational circuits that is caused by a combination of exaggerated incentive salience and habit formation, reward deficits and stress surfeits, and compromised executive function in three stages. The rewarding effects of drugs of abuse, development of incentive salience, and development of drug-seeking habits in the binge/intoxication stage involve changes in dopamine and opioid peptides in the basal ganglia. The increases in negative emotional states and dysphoric and stress-like responses in the withdrawal/negative affect stage involve decreases in the function of the dopamine component of the reward system and recruitment of brain stress neurotransmitters, such as corticotropin-releasing factor and dynorphin, in the neurocircuitry of the extended amygdala. The craving and deficits in executive function in the so-called preoccupation/anticipation stage involve the dysregulation of key afferent projections from the prefrontal cortex and insula, including glutamate, to the basal ganglia and extended amygdala. 
Molecular genetic studies have identified transduction and transcription factors that act in neurocircuitry associated with the development and maintenance of addiction that might mediate initial vulnerability, maintenance, and relapse associated with addiction.", "title": "" }, { "docid": "10d14531df9190f5ffb217406fe8eb49", "text": "Web technology has enabled e-commerce. However, in our review of the literature, we found little research on how firms can better position themselves when adopting e-commerce for revenue generation. Drawing upon technology diffusion theory, we developed a conceptual model for assessing e-commerce adoption and migration, incorporating six factors unique to e-commerce. A series of propositions were then developed. Survey data of 1036 firms in a broad range of industries were collected and used to test our model. Our analysis based on multi-nominal logistic regression demonstrated that technology integration, web functionalities, web spending, and partner usage were significant adoption predictors. The model showed that these variables could successfully differentiate non-adopters from adopters. Further, the migration model demonstrated that web functionalities, web spending, and integration of externally oriented inter-organizational systems tend to be the most influential drivers in firms’ migration toward e-commerce, while firm size, partner usage, electronic data interchange (EDI) usage, and perceived obstacles were found to negatively affect ecommerce migration. This suggests that large firms, as well as those that have been relying on outsourcing or EDI, tended to be slow to migrate to the internet platform. # 2005 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "a2130c0316eea0fa510f381ea312b65e", "text": "A technique for building consistent 3D reconstructions from many views based on fitting a low rank matrix to a matrix with missing data is presented. Rank-four submatrices of minimal, or slightly larger, size are sampled and spans of their columns are combined to constrain a basis of the fitted matrix. The error minimized is expressed in terms of the original subspaces which leads to a better resistance to noise compared to previous methods. More than 90% of the missing data can be handled while finding an acceptable solution efficiently. Applications to 3D reconstruction using both affine and perspective camera models are shown. For the perspective model, a new linear method based on logarithms of positive depths from chirality is introduced to make the depths consistent with an overdetermined set of epipolar geometries. Results are shown for scenes and sequences of various types. Many images in open and closed sequences in narrow and wide base-line setups are reconstructed with reprojection errors around one pixel. It is shown that reconstructed cameras can be used to obtain dense reconstructions from epipolarly aligned images.", "title": "" }, { "docid": "3b4607a6b0135eba7c4bb0852b78dda9", "text": "Heart rate variability for the treatment of major depression is a novel, alternative approach that can offer symptom reduction with minimal-to-no noxious side effects. The following material will illustrate some of the work being conducted at our laboratory to demonstrate the efficacy of heart rate variability. 
Namely, results will be presented regarding our published work on an initial open-label study and subsequent results of a small, unfinished randomized controlled trial.", "title": "" }, { "docid": "b333be40febd422eae4ae0b84b8b9491", "text": "BACKGROUND\nRarely, basal cell carcinomas (BCCs) have the potential to become extensively invasive and destructive, a phenomenon that has led to the term \"locally advanced BCC\" (laBCC). We identified and described the diverse settings that could be considered \"locally advanced\".\n\n\nMETHODS\nThe panel of experts included oncodermatologists, dermatological and maxillofacial surgeons, pathologists, radiotherapists and geriatricians. During a 1-day workshop session, an interactive flow/sequence of questions and inputs was debated.\n\n\nRESULTS\nDiscussion of nine cases permitted us to approach consensus concerning what constitutes laBCC. The expert panel retained three major components for the complete assessment of laBCC cases: factors of complexity related to the tumour itself, factors related to the operability and the technical procedure, and factors related to the patient. Competing risks of death should be precisely identified. To ensure homogeneous multidisciplinary team (MDT) decisions in different clinical settings, the panel aimed to develop a practical tool based on the three components.\n\n\nCONCLUSION\nThe grid presented is not a definitive tool, but rather, it is a method for analysing the complexity of laBCC.", "title": "" }, { "docid": "3c98c5bd1d9a6916ce5f6257b16c8701", "text": "As financial time series are inherently noisy and non-stationary, it is regarded as one of the most challenging applications of time series forecasting. Due to the advantages of generalization capability in obtaining a unique solution, support vector regression (SVR) has also been successfully applied in financial time series forecasting. In the modeling of financial time series using SVR, one of the key problems is the inherent high noise. Thus, detecting and removing the noise are important but difficult tasks when building an SVR forecasting model. To alleviate the influence of noise, a two-stage modeling approach using independent component analysis (ICA) and support vector regression is proposed in financial time series forecasting. ICA is a novel statistical signal processing technique that was originally proposed to find the latent source signals from observed mixture signals without having any prior knowledge of the mixing mechanism. The proposed approach first uses ICA to the forecasting variables for generating the independent components (ICs). After identifying and removing the ICs containing the noise, the rest of the ICs are then used to reconstruct the forecasting variables which contain less noise and served as the input variables of the SVR forecasting model. In order to evaluate the performance of the proposed approach, the Nikkei 225 opening index and TAIEX closing index are used as illustrative examples. Experimental results show that the proposed model outperforms the SVR model with non-filtered forecasting variables and a random walk model.", "title": "" }, { "docid": "80d457b352362d2b72acb26ca5b8a382", "text": "Language experience shapes infants' abilities to process speech sounds, with universal phonetic discrimination abilities narrowing in the second half of the first year. Brain measures reveal a corresponding change in neural discrimination as the infant brain becomes selectively sensitive to its native language(s). 
Whether and how bilingual experience alters the transition to native language specific phonetic discrimination is important both theoretically and from a practical standpoint. Using whole head magnetoencephalography (MEG), we examined brain responses to Spanish and English syllables in Spanish-English bilingual and English monolingual 11-month-old infants. Monolingual infants showed sensitivity to English, while bilingual infants were sensitive to both languages. Neural responses indicate that the dual sensitivity of the bilingual brain is achieved by a slower transition from acoustic to phonetic sound analysis, an adaptive and advantageous response to increased variability in language input. Bilingual neural responses extend into the prefrontal and orbitofrontal cortex, which may be related to their previously described bilingual advantage in executive function skills. A video abstract of this article can be viewed at: https://youtu.be/TAYhj-gekqw.", "title": "" }, { "docid": "60b876a2065587fc7f152d452605dc14", "text": "Fillers are frequently used in beautifying procedures. Despite major advancements of the chemical and biological features of injected materials, filler-related adverse events may occur, and can substantially impact the clinical outcome. Filler granulomas become manifest as visible grains, nodules, or papules around the site of the primary injection. Early recognition and proper treatment of filler-related complications is important because effective treatment options are available. In this report, we provide a comprehensive overview of the differential diagnosis and diagnostics and develop an algorithm of successful therapy regimens.", "title": "" }, { "docid": "28641a6621a31bf720586e4c5980645b", "text": "This paper explores the use of self-ensembling for visual domain adaptation problems. Our technique is derived from the mean teacher variant [20] of temporal ensembling [8], a technique that achieved state of the art results in the area of semi-supervised learning. We introduce a number of modifications to their approach for challenging domain adaptation scenarios and evaluate its effectiveness. Our approach achieves state of the art results in a variety of benchmarks, including our winning entry in the VISDA-2017 visual domain adaptation challenge [12]. In small image benchmarks, our algorithm not only outperforms prior art, but can also achieve accuracy that is close to that of a classifier trained in a supervised", "title": "" }, { "docid": "663342554879c5464a7e1aff969339b7", "text": "Esthetic surgery of external female genitalia remains an uncommon procedure. This article describes a novel, de-epithelialized, labial rim flap technique for labia majora augmentation using de-epithelialized labia minora tissue otherwise to be excised as an adjunct to labia minora reduction. Ten patients were included in the study. The protruding segments of the labia minora were de-epithelialized with a fine scissors or scalpel instead of being excised, and a bulky section of subcutaneous tissue was obtained. Between the outer and inner surfaces of the labia minora, a flap with a subcutaneous pedicle was created in continuity with the de-epithelialized marginal tissue. A pocket was dissected in the labium majus, and the flap was transposed into the pocket to augment the labia majora. Mean patient age was 39.9 (±13.9) years, mean operation time was 60 min, and mean follow-up period was 14.5 (±3.4) months. 
There were no major complications (hematoma, wound dehiscence, infection) following surgery. No patient complained of postoperative difficulty with coitus or dyspareunia. All patients were satisfied with the final appearance. Several methods for labia minora reduction have been described. Auxiliary procedures are required with labia minora reduction for better results. Nevertheless, few authors have taken into account the final esthetic appearance of the whole female external genitalia. The described technique in this study is indicated primarily for mild atrophy of the labia majora with labia minora hypertrophy; the technique resulted in perfect patient satisfaction with no major complications or postoperative coital problems. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266 .", "title": "" }, { "docid": "29dab83f08d38702e09acec2f65346b3", "text": "This paper proposes a weakly- and self-supervised deep convolutional neural network (WSSDCNN) for content-aware image retargeting. Our network takes a source image and a target aspect ratio, and then directly outputs a retargeted image. Retargeting is performed through a shift map, which is a pixel-wise mapping from the source to the target grid. Our method implicitly learns an attention map, which leads to a content-aware shift map for image retargeting. As a result, discriminative parts in an image are preserved, while background regions are adjusted seamlessly. In the training phase, pairs of an image and its image-level annotation are used to compute content and structure losses. We demonstrate the effectiveness of our proposed method for a retargeting application with insightful analyses.", "title": "" }, { "docid": "189d0b173f8a9e0b3deb21398955dc3c", "text": "Do investments in customer satisfaction lead to excess returns? If so, are these returns associated with higher stock market risk? The empirical evidence presented in this article suggests that the answer to the first question is yes, but equally remarkable, the answer to the second question is no, suggesting that satisfied customers are economic assets with high returns/low risk. Although these results demonstrate stock market imperfections with respect to the time it takes for share prices to adjust, they are consistent with previous studies in marketing in that a firm’s satisfied customers are likely to improve both the level and the stability of net cash flows. The implication, implausible as it may seem in other contexts, is high return/low risk. Specifically, the authors find that customer satisfaction, as measured by the American Customer Satisfaction Index (ACSI), is significantly related to market value of equity. Yet news about ACSI results does not move share prices. This apparent inconsistency is the catalyst for examining whether excess stock returns might be generated as a result. The authors present two stock portfolios: The first is a paper portfolio that is back tested, and the second is an actual case. At low systematic risk, both outperform the market by considerable margins. In other words, it is possible to beat the market consistently by investing in firms that do well on the ACSI.", "title": "" }, { "docid": "d569902303b93274baf89527e666adc0", "text": "We present a novel sparse representation based approach for the restoration of clipped audio signals.
In the proposed approach, the clipped signal is decomposed into overlapping frames and the declipping problem is formulated as an inverse problem, per audio frame. This problem is further solved by a constrained matching pursuit algorithm, that exploits the sign pattern of the clipped samples and their maximal absolute value. Performance evaluation with a collection of music and speech signals demonstrate superior results compared to existing algorithms, over a wide range of clipping levels.", "title": "" }, { "docid": "9420760d6945440048cee3566ce96699", "text": "In this work, we develop a computer vision based fall prevention system for hospital ward application. To prevent potential falls, once the event of patient get up from the bed is automatically detected, nursing staffs are alarmed immediately for assistance. For the detection task, we use a RGBD sensor (Microsoft Kinect). The geometric prior knowledge is exploited by identifying a set of task-specific feature channels, e.g., regions of interest. Extensive motion and shape features from both color and depth image sequences are extracted. Features from multiple modalities and channels are fused via a multiple kernel learning framework for training the event detector. Experimental results demonstrate the high accuracy and efficiency achieved by the proposed system.", "title": "" } ]
scidocsrr
f967052774d8ea4c17830f7c5657c9e9
Addressing the challenges of underspecification in web search
[ { "docid": "419c721c2d0a269c65fae59c1bdb273c", "text": "Previous work on understanding user web search behavior has focused on how people search and what they are searching for, but not why they are searching. In this paper, we describe a framework for understanding the underlying goals of user searches, and our experience in using the framework to manually classify queries from a web search engine. Our analysis suggests that so-called \"navigational\" searches are less prevalent than generally believed while a previously unexplored \"resource-seeking\" goal may account for a large fraction of web searches. We also illustrate how this knowledge of user search goals might be used to improve future web search engines.", "title": "" } ]
[ { "docid": "2c8f4c911c298cdc19a420781c569d9c", "text": "Colorectal cancer is the fourth leading cause of cancer deaths worldwide and the second leading cause in the United States. The risk of colorectal cancer can be mitigated by the identification and removal of premalignant lesions through optical colonoscopy. Unfortunately, conventional colonoscopy misses more than 20% of the polyps that should be removed, due in part to poor contrast of lesion topography. Imaging depth and tissue topography during a colonoscopy is difficult because of the size constraints of the endoscope and the deforming mucosa. Most existing methods make unrealistic assumptions which limits accuracy and sensitivity. In this paper, we present a method that avoids these restrictions, using a joint deep convolutional neural network-conditional random field (CNN-CRF) framework for monocular endoscopy depth estimation. Estimated depth is used to reconstruct the topography of the surface of the colon from a single image. We train the unary and pairwise potential functions of a CRF in a CNN on synthetic data, generated by developing an endoscope camera model and rendering over 200,000 images of an anatomically-realistic colon.We validate our approach with real endoscopy images from a porcine colon, transferred to a synthetic-like domain via adversarial training, with ground truth from registered computed tomography measurements. The CNN-CRF approach estimates depths with a relative error of 0.152 for synthetic endoscopy images and 0.242 for real endoscopy images. We show that the estimated depth maps can be used for reconstructing the topography of the mucosa from conventional colonoscopy images. This approach can easily be integrated into existing endoscopy systems and provides a foundation for improving computer-aided detection algorithms for detection, segmentation and classification of lesions.", "title": "" }, { "docid": "861e2a3c19dafdd3273dc718416309c2", "text": "For the last 40 years high - capacity Unmanned Air Vehicles have been use mostly for military services such as tracking, surveillance, engagement with active weapon or in the simplest term for data acquisition purpose. Unmanned Air Vehicles are also demanded commercially because of their advantages in comparison to manned vehicles such as their low manufacturing and operating cost, configuration flexibility depending on customer request, not risking pilot in the difficult missions. Nevertheless, they have still open issues such as integration to the manned flight air space, reliability and airworthiness. Although Civil Unmanned Air Vehicles comprise 3% of the UAV market, it is estimated that they will reach 10% level within the next 5 years. UAV systems with their useful equipment (camera, hyper spectral imager, air data sensors and with similar equipment) have been in use more and more for civil applications: Tracking and monitoring in the event of agriculture / forest / marine pollution / waste / emergency and disaster situations; Mapping for land registry and cadastre; Wildlife and ecologic monitoring; Traffic Monitoring and; Geology and mine researches. They can bring minimal risk and cost advantage to many civil applications, in which it was risky and costly to use manned air vehicles before. When the cost of Unmanned Air Vehicles designed and produced for military service is taken into account, civil market demands lower cost and original products which are suitable for civil applications. 
Most of civil applications which are mentioned above require UAVs that are able to take off and land on limited runway, and moreover move quickly in the operation region for mobile applications but hover for immobile measurement and tracking when necessary. This points to a hybrid unmanned vehicle concept optimally, namely the Vertical Take Off and Landing (VTOL) UAVs. At the same time, this system requires an efficient cost solution for applicability / convertibility for different civil applications. It means an Air Vehicle having easily portability of payload depending on application concept and programmability of operation (hover and cruise flight time) specific to the application. The main topic of this project is designing, producing and testing the TURAC VTOL UAV that have the following features : Vertical takeoff and landing, and hovering like helicopter ; High cruise speed and fixed-wing ; Multi-functional and designed for civil purpose ; The project involves two different variants ; The TURAC A variant is a fully electrical platform which includes 2 tilt electric motors in the front, and a fixed electric motor and ducted fan in the rear ; The TURAC B variant uses fuel cells.", "title": "" }, { "docid": "d94a4f07939c0f420787b099336f426b", "text": "A next generation of AESA antennas will be challenged with the need for lower size, weight, power and cost (SWAP-C). This leads to enhanced demands especially with regard to the integration density of the RF-part inside a T/R module. The semiconductor material GaN has proven its capacity for high power amplifiers, robust receive components as well as switch components for separation of transmit and receive mode. This paper will describe the design and measurement results of a GaN-based single-chip T/R module frontend (HPA, LNA and SPDT) using UMS GH25 technology and covering the frequency range from 8 GHz to 12 GHz. Key performance parameters of the frontend are 13 W minimum transmit (TX) output power over the whole frequency range with peak power up to 17 W. The frontend in receive (RX) mode has a noise figure below 3.2 dB over the whole frequency range, and can survive more than 5 W input power. The large signal insertion loss of the used SPDT is below 0.9 dB at 43 dBm input power level.", "title": "" }, { "docid": "9f9128951d6c842689f61fc19c79f238", "text": "This paper concerns image reconstruction for helical x-ray transmission tomography (CT) with multi-row detectors. We introduce two approximate cone-beam (CB) filtered-backprojection (FBP) algorithms of the Feldkamp type, obtained by extending to three dimensions (3D) two recently proposed exact FBP algorithms for 2D fan-beam reconstruction. The new algorithms are similar to the standard Feldkamp-type FBP for helical CT. In particular, they can reconstruct each transaxial slice from data acquired along an arbitrary segment of helix, thereby efficiently exploiting the available data. In contrast to the standard Feldkamp-type algorithm, however, the redundancy weight is applied after filtering, allowing a more efficient numerical implementation. To partially alleviate the CB artefacts, which increase with increasing values of the helical pitch, a frequency-mixing method is proposed. This method reconstructs the high frequency components of the image using the longest possible segment of helix, whereas the low frequencies are reconstructed using a minimal, short-scan, segment of helix to minimize CB artefacts. 
The performance of the algorithms is illustrated using simulated data.", "title": "" }, { "docid": "5ba3baabc84d02f0039748a4626ace36", "text": "BACKGROUND\nGreen tea (GT) extract may play a role in body weight regulation. Suggested mechanisms are decreased fat absorption and increased energy expenditure.\n\n\nOBJECTIVE\nWe examined whether GT supplementation for 12 wk has beneficial effects on weight control via a reduction in dietary lipid absorption as well as an increase in resting energy expenditure (REE).\n\n\nMETHODS\nSixty Caucasian men and women [BMI (in kg/m²): 18-25 or >25; age: 18-50 y] were included in a randomized placebo-controlled study in which fecal energy content (FEC), fecal fat content (FFC), resting energy expenditure, respiratory quotient (RQ), body composition, and physical activity were measured twice (baseline vs. week 12). For 12 wk, subjects consumed either GT (>0.56 g/d epigallocatechin gallate + 0.28-0.45 g/d caffeine) or placebo capsules. Before the measurements, subjects recorded energy intake for 4 consecutive days and collected feces for 3 consecutive days.\n\n\nRESULTS\nNo significant differences between groups and no significant changes over time were observed for the measured variables. Overall means ± SDs were 7.2 ± 3.8 g/d, 6.1 ± 1.2 MJ/d, 67.3 ± 14.3 kg, and 29.8 ± 8.6% for FFC, REE, body weight, and body fat percentage, respectively.\n\n\nCONCLUSION\nGT supplementation for 12 wk in 60 men and women did not have a significant effect on FEC, FFC, REE, RQ, and body composition.", "title": "" }, { "docid": "4eebd4a2d5c50a2d7de7c36c5296786d", "text": "Depth information has been used in computer vision for a wide variety of tasks. Since active range sensors are currently available at low cost, high-quality depth maps can be used as relevant input for many applications. Background subtraction and video segmentation algorithms can be improved by fusing depth and color inputs, which are complementary and allow one to solve many classic color segmentation issues. In this paper, we describe one fusion method to combine color and depth based on an advanced color-based algorithm. This technique has been evaluated by means of a complete dataset recorded with Microsoft Kinect, which enables comparison with the original method. The proposed method outperforms the others in almost every test, showing more robustness to illumination changes, shadows, reflections and camouflage.", "title": "" }, { "docid": "b10074ccf133a3c18a2029a5fe52f7ff", "text": "Maneuvering vessel detection and tracking (VDT), incorporated with state estimation and trajectory prediction, are important tasks for vessel navigational systems (VNSs), as well as vessel traffic monitoring and information systems (VTMISs) to improve maritime safety and security in ocean navigation. Although conventional VNSs and VTMISs are equipped with maritime surveillance systems for the same purpose, intelligent capabilities for vessel detection, tracking, state estimation, and navigational trajectory prediction are underdeveloped. Therefore, the integration of intelligent features into VTMISs is proposed in this paper. The first part of this paper is focused on detecting and tracking of a multiple-vessel situation. An artificial neural network (ANN) is proposed as the mechanism for detecting and tracking multiple vessels. In the second part of this paper, vessel state estimation and navigational trajectory prediction of a single-vessel situation are considered. 
An extended Kalman filter (EKF) is proposed for the estimation of vessel states and further used for the prediction of vessel trajectories. Finally, the proposed VTMIS is simulated, and successful simulation results are presented in this paper.", "title": "" }, { "docid": "fd54d540c30968bb8682a4f2eee43c8d", "text": "This paper presents LISSA (“Learning dashboard for Insights and Support during Study Advice”), a learning analytics dashboard designed, developed, and evaluated in collaboration with study advisers. The overall objective is to facilitate communication between study advisers and students by visualizing grade data that is commonly available in any institution. More specifically, the dashboard attempts to support the dialogue between adviser and student through an overview of study progress, peer comparison, and by triggering insights based on facts as a starting point for discussion and argumentation. We report on the iterative design process and evaluation results of a deployment in 97 advising sessions. We have found that the dashboard supports the current adviser-student dialogue, helps them motivate students, triggers conversation, and provides tools to add personalization, depth, and nuance to the advising session. It provides insights at a factual, interpretative, and reflective level and allows both adviser and student to take an active role during the session.", "title": "" }, { "docid": "d2c36f67971c22595bc483ebb7345404", "text": "Resistive-switching random access memory (RRAM) devices utilizing a crossbar architecture represent a promising alternative for Flash replacement in high-density data storage applications. However, RRAM crossbar arrays require the adoption of diodelike select devices with high on-off -current ratio and with sufficient endurance. To avoid the use of select devices, one should develop passive arrays where the nonlinear characteristic of the RRAM device itself provides self-selection during read and write. This paper discusses the complementary switching (CS) in hafnium oxide RRAM, where the logic bit can be encoded in two high-resistance levels, thus being immune from leakage currents and related sneak-through effects in the crossbar array. The CS physical mechanism is described through simulation results by an ion-migration model for bipolar switching. Results from pulsed-regime characterization are shown, demonstrating that CS can be operated at least in the 10-ns time scale. The minimization of the reset current is finally discussed.", "title": "" }, { "docid": "b63077105e140546a7485167339fdf62", "text": "Deep multi-layer perceptron neural networks are used in many state-of-the-art systems for machine perception (e.g., speech-to-text, image classification, and object detection). Once a network is trained to do a specific task, e.g., finegrained bird classification, it cannot easily be trained to do new tasks, e.g., incrementally learning to recognize additional bird species or learning an entirely different task such as finegrained flower recognition. When new tasks are added, deep neural networks are prone to catastrophically forgetting previously learned information. Catastrophic forgetting has hindered the use of neural networks in deployed applications that require lifelong learning. There have been multiple attempts to develop schemes that mitigate catastrophic forgetting, but these methods have yet to be compared and the kinds of tests used to evaluate individual methods vary greatly. 
In this paper, we compare multiple mechanisms designed to mitigate catastrophic forgetting in neural networks. Experiments showed that the mechanism(s) that are critical for optimal performance vary based on the incremental training paradigm and type of data being used.", "title": "" }, { "docid": "b0133ea142da1d4f2612407d4d8bf6c0", "text": "The ability to transfer knowledge gained in previous tasks into new contexts is one of the most important mechanisms of human learning. Despite this, adapting autonomous behavior to be reused in partially similar settings is still an open problem in current robotics research. In this paper, we take a small step in this direction and propose a generic framework for learning transferable motion policies. Our goal is to solve a learning problem in a target domain by utilizing the training data in a different but related source domain. We present this in the context of an autonomous MAV flight using monocular reactive control, and demonstrate the efficacy of our proposed approach through extensive real-world flight experiments in outdoor cluttered environments.", "title": "" }, { "docid": "170e2f1ad2ffc7ab1666205fdafe01de", "text": "One of the important issues concerning the spreading process in social networks is the influence maximization. This is the problem of identifying the set of the most influential nodes in order to begin the spreading process based on an information diffusion model in the social networks. In this study, two new methods considering the community structure of the social networks and influence-based closeness centrality measure of the nodes are presented to maximize the spread of influence on the multiplication threshold, minimum threshold and linear threshold information diffusion models. The main objective of this study is to improve the efficiency with respect to the run time while maintaining the accuracy of the final influence spread. Efficiency improvement is obtained by reducing the number of candidate nodes subject to evaluation in order to find the most influential. Experiments consist of two parts: first, the effectiveness of the proposed influence-based closeness centrality measure is established by comparing it with available centrality measures; second, the evaluations are conducted to compare the two proposed community-based methods with well-known benchmarks in the literature on the real datasets, leading to the results demonstrate the efficiency and effectiveness of these methods in maximizing the influence spread in social networks.", "title": "" }, { "docid": "b8322d65e61be7fb252b2e418df85d3e", "text": "Algorithms of filtering, edge detection, and extraction of details and their implementation using cellular neural networks (CNN) are developed in this paper. The theory of CNN based on universal binary neurons (UBN) is also developed. A new learning algorithm for this type of neurons is carried out. Implementation of low-pass filtering algorithms using CNN is considered. Separate processing of the binary planes of gray-scale images is proposed. Algorithms of edge detection and impulsive noise filtering based on this approach and their implementation using CNN-UBN are presented. Algorithms of frequency correction reduced to filtering in the spatial domain are considered. These algorithms make it possible to extract details of given sizes. Implementation of such algorithms using CNN is presented. Finally, a general strategy of gray-scale image processing using CNN is considered.
© 1997 SPIE and IS&T. [S1017-9909(97)00703-4]", "title": "" }, { "docid": "f76f400bbb71c724657082d42eb7406e", "text": "Semantic segmentation is a critical module in robotics related applications, especially autonomous driving. Most of the research on semantic segmentation is focused on improving the accuracy with less attention paid to computationally efficient solutions. Majority of the efficient semantic segmentation algorithms have customized optimizations without scalability and there is no systematic way to compare them. In this paper, we present a real-time segmentation benchmarking framework and study various segmentation algorithms for autonomous driving. We implemented a generic meta-architecture via a decoupled design where different types of encoders and decoders can be plugged in independently. We provide several example encoders including VGG16, Resnet18, MobileNet, and ShuffleNet and decoders including SkipNet, UNet and Dilation Frontend. The framework is scalable for addition of new encoders and decoders developed in the community for other vision tasks. We performed detailed experimental analysis on cityscapes dataset for various combinations of encoder and decoder. The modular framework enabled rapid prototyping of a custom efficient architecture which provides ~x143 GFLOPs reduction compared to SegNet and runs real-time at ~15 fps on NVIDIA Jetson TX2. The source code of the framework is publicly available.", "title": "" }, { "docid": "4449b826b2a6acb5ce10a0bcacabc022", "text": "Centralized Resource Description Framework (RDF) repositories have limitations both in their failure tolerance and in their scalability. Existing Peer-to-Peer (P2P) RDF repositories either cannot guarantee to find query results, even if these results exist in the network, or require up-front definition of RDF schemas and designation of super peers. We present a scalable distributed RDF repository (RDFPeers) that stores each triple at three places in a multi-attribute addressable network by applying globally known hash functions to its subject predicate and object. Thus all nodes know which node is responsible for storing triple values they are looking for and both exact-match and range queries can be efficiently routed to those nodes. RDFPeers has no single point of failure nor elevated peers and does not require the prior definition of RDF schemas. Queries are guaranteed to find matched triples in the network if the triples exist. In RDFPeers both the number of neighbors per node and the number of routing hops for inserting RDF triples and for resolving most queries are logarithmic to the number of nodes in the network. We further performed experiments that show that the triple-storing load in RDFPeers differs by less than an order of magnitude between the most and the least loaded nodes for real-world RDF data.", "title": "" }, { "docid": "f614df1c1775cd4e2a6927fce95ffa46", "text": "In this paper we have designed and implemented (15, k) a BCH Encoder and decoder using VHDL for reliable data transfer in AWGN channel with multiple error correction control. The digital logic implementation of binary encoding of multiple error correcting BCH code (15, k) of length n=15 over GF (2 4 ) with irreducible primitive polynomial x 4 +x+1 is organized into shift register circuits. Using the cyclic codes, the reminder b(x) can be obtained in a linear (15-k) stage shift register with feedback connections corresponding to the coefficients of the generated polynomial. 
Three encoders are designed using VHDL to encode the single, double and triple error correcting BCH code (15, k) corresponding to the coefficient of generated polynomial. Information bit is transmitted in unchanged form up to K clock cycles and during this period parity bits are calculated in the LFSR then the parity bits are transmitted from k+1 to 15 clock cycles. Total 15-k numbers of parity bits with k information bits are transmitted in 15 code word. In multiple error correction method, we have implemented (15, 5, 3), (15, 7, 2) and (15, 11, 1) BCH encoder and decoder using VHDL and the simulation is done using Xilinx ISE 14.2. Keywords: BCH, BER, SNR, BCH Encoder, Decoder VHDL, Error Correction, AWGN, LFSR", "title": "" }, { "docid": "ca4e2cff91621bca4018ce1eca5450e2", "text": "Decentralized optimization algorithms have received much attention due to the recent advances in network information processing. However, conventional decentralized algorithms based on projected gradient descent are incapable of handling high-dimensional constrained problems, as the projection step becomes computationally prohibitive. To address this problem, this paper adopts a projection-free optimization approach, a.k.a. the Frank–Wolfe (FW) or conditional gradient algorithm. We first develop a decentralized FW (DeFW) algorithm from the classical FW algorithm. The convergence of the proposed algorithm is studied by viewing the decentralized algorithm as an inexact FW algorithm. Using a diminishing step size rule and letting $t$ be the iteration number, we show that the DeFW algorithm's convergence rate is ${\\mathcal O}(1/t)$ for convex objectives; is ${\\mathcal O}(1/t^2)$ for strongly convex objectives with the optimal solution in the interior of the constraint set; and is ${\\mathcal O}(1/\\sqrt{t})$ toward a stationary point for smooth but nonconvex objectives. We then show that a consensus-based DeFW algorithm meets the above guarantees with two communication rounds per iteration. We demonstrate the advantages of the proposed DeFW algorithm on low-complexity robust matrix completion and communication efficient sparse learning. Numerical results on synthetic and real data are presented to support our findings.", "title": "" }, { "docid": "4d3195c6fd592a7b8379bc61529b44c3", "text": "Financial institutions all over the world are providing banking services via information systems, such as: automated teller machines (ATMs), Internet banking, and telephone banking, in an effort to remain competitive as well as enhancing customer service. However, the acceptance of such banking information systems (BIS) in developing countries remains open. The classical Technology Acceptance Model (TAM) has been well validated over hundreds of studies in the past two decades. This study contributed to the extensive body of research of technology acceptance by attempting to validate the integration of trust and computer self-efficacy (CSE) constructs into the classical TAM model. Moreover, the key uniqueness of this work is in the context of BIS in a developing country, namely Jamaica.
Based on structural equations modeling using data of 374 customers from three banks in Jamaica, this study results indicated that the classic TAM provided a better fit than the extended TAM with Trust and CSE. However, the results also indicated that trust is indeed a significant construct impacting both perceived usefulness and perceived ease-of-use. Additionally, test for gender differences indicated that across all study participants, only trust was found to be significantly different between male and female bank customers. Conclusions and recommendations for future research are also provided.", "title": "" }, { "docid": "cc1876cf1d71be6c32c75bd2ded25e65", "text": "Traditional anomaly detection on social media mostly focuses on individual point anomalies while anomalous phenomena usually occur in groups. Therefore, it is valuable to study the collective behavior of individuals and detect group anomalies. Existing group anomaly detection approaches rely on the assumption that the groups are known, which can hardly be true in real world social media applications. In this article, we take a generative approach by proposing a hierarchical Bayes model: Group Latent Anomaly Detection (GLAD) model. GLAD takes both pairwise and point-wise data as input, automatically infers the groups and detects group anomalies simultaneously. To account for the dynamic properties of the social media data, we further generalize GLAD to its dynamic extension d-GLAD. We conduct extensive experiments to evaluate our models on both synthetic and real world datasets. The empirical results demonstrate that our approach is effective and robust in discovering latent groups and detecting group anomalies.", "title": "" }, { "docid": "b8505166c395750ee47127439a4afa1a", "text": "Modern replicated data stores aim to provide high availability, by immediately responding to client requests, often by implementing objects that expose concurrency. Such objects, for example, multi-valued registers (MVRs), do not have sequential specifications. This paper explores a recent model for replicated data stores that can be used to precisely specify causal consistency for such objects, and liveness properties like eventual consistency, without revealing details of the underlying implementation. The model is used to prove the following results: An eventually consistent data store implementing MVRs cannot satisfy a consistency model strictly stronger than observable causal consistency (OCC). OCC is a model somewhat stronger than causal consistency, which captures executions in which client observations can use causality to infer concurrency of operations. This result holds under certain assumptions about the data store. Under the same assumptions, an eventually consistent and causally consistent replicated data store must send messages of unbounded size: If s objects are supported by n replicas, then, for every k > 1, there is an execution in which an Ω({n,s} k)-bit message is sent.", "title": "" } ]
scidocsrr
aa230d13a85bb2fb47cbd0bcd514b38f
DeepFM: A Factorization-Machine based Neural Network for CTR Prediction
[ { "docid": "c943fcc6664681d832133dc8739e6317", "text": "The explosion in online advertisement urges to better estimate the click prediction of ads. For click prediction on single ad impression, we have access to pairwise relevance among elements in an impression, but not to global interaction among key features of elements. Moreover, the existing method on sequential click prediction treats propagation unchangeable for different time intervals. In this work, we propose a novel model, Convolutional Click Prediction Model (CCPM), based on convolution neural network. CCPM can extract local-global key features from an input instance with varied elements, which can be implemented for not only single ad impression but also sequential ad impression. Experiment results on two public large-scale datasets indicate that CCPM is effective on click prediction.", "title": "" }, { "docid": "3734fd47cf4e4e5c00f660cbb32863f0", "text": "We describe a new Bayesian click-through rate (CTR) prediction algorithm used for Sponsored Search in Microsoft’s Bing search engine. The algorithm is based on a probit regression model that maps discrete or real-valued input features to probabilities. It maintains Gaussian beliefs over weights of the model and performs Gaussian online updates derived from approximate message passing. Scalability of the algorithm is ensured through a principled weight pruning procedure and an approximate parallel implementation. We discuss the challenges arising from evaluating and tuning the predictor as part of the complex system of sponsored search where the predictions made by the algorithm decide about future training sample composition. Finally, we show experimental results from the production system and compare to a calibrated Naïve Bayes algorithm.", "title": "" }, { "docid": "fd03cf7e243571e9b3e81213fe91fd29", "text": "Most real-world recommender services measure their performance based on the top-N results shown to the end users. Thus, advances in top-N recommendation have far-ranging consequences in practical applications. In this paper, we present a novel method, called Collaborative Denoising Auto-Encoder (CDAE), for top-N recommendation that utilizes the idea of Denoising Auto-Encoders. We demonstrate that the proposed model is a generalization of several well-known collaborative filtering models but with more flexible components. Thorough experiments are conducted to understand the performance of CDAE under various component settings. Furthermore, experimental results on several public datasets demonstrate that CDAE consistently outperforms state-of-the-art top-N recommendation methods on a variety of common evaluation metrics.", "title": "" } ]
[ { "docid": "98a703bc054e871826173e2517074d06", "text": "Several attempts have been made in the past to construct encoding schemes that allow modularity to emerge in evolving systems, but success is limited. We believe that in order to create successful and scalable encodings for emerging modularity, we first need to explore the benefits of different types of modularity by hard-wiring these into evolvable systems. In this paper we explore different ways of exploiting sensory symmetry inherent in the agent in the simple game Cellz by evolving symmetrically identical modules. It is concluded that significant increases in both speed of evolution and final fitness can be achieved relative to monolithic controllers. Furthermore, we show that a simple function approximation task that exhibits sensory symmetry can be used as a quick approximate measure of the utility of an encoding scheme for the more complex game-playing task.", "title": "" }, { "docid": "ba5cd7dcf8d7e9225df1d9dc69c95c11", "text": "The effectiveness of information retrieval (IR) systems has become more important than ever. Deep IR models have gained increasing attention for their ability to automatically learn features from raw text; thus, many deep IR models have been proposed recently. However, the learning process of these deep IR models resembles a black box. Therefore, it is necessary to identify the difference between automatically learned features by deep IR models and hand-crafted features used in traditional learning to rank approaches. Furthermore, it is valuable to investigate the differences between these deep IR models. This paper aims to conduct a deep investigation on deep IR models. Specifically, we conduct an extensive empirical study on two different datasets, including Robust and LETOR4.0. We first compared the automatically learned features and hand-crafted features with respect to query term coverage, document length, embeddings and robustness. It reveals a number of disadvantages compared with hand-crafted features. Therefore, we establish guidelines for improving existing deep IR models. Furthermore, we compare two different categories of deep IR models, i.e. representation-focused models and interaction-focused models. It is shown that the two types of deep IR models focus on different categories of words, including topic-related words and query-related words.", "title": "" }, { "docid": "729fac8328b57376a954f2e7fc10405e", "text": "Generative Adversarial Networks are proved to be efficient on various kinds of image generation tasks. However, it is still a challenge if we want to generate images precisely. Many researchers focus on how to generate images with one attribute. But image generation under multiple attributes is still a tough work. In this paper, we try to generate a variety of face images under multiple constraints using a pipeline process. The Pip-GAN (Pipeline Generative Adversarial Network) we present employs a pipeline network structure which can generate a complex facial image step by step using a neutral face image. We applied our method on two face image databases and demonstrate its ability to generate convincing novel images of unseen identities under multiple conditions previously.", "title": "" }, { "docid": "5705022b0a08ca99d4419485f3c03eaa", "text": "In this paper, we propose a wireless sensor network paradigm for real-time forest fire detection. The wireless sensor network can detect and forecast forest fire more promptly than the traditional satellite-based detection approach.
This paper mainly describes the data collecting and processing in wireless sensor networks for real-time forest fire detection. A neural network method is applied to in-network data processing. We evaluate the performance of our approach by simulations.", "title": "" }, { "docid": "733e5961428e5aad785926e389b9bd75", "text": "OBJECTIVE\nPeer support can be defined as the process of giving and receiving nonprofessional, nonclinical assistance from individuals with similar conditions or circumstances to achieve long-term recovery from psychiatric, alcohol, and/or other drug-related problems. Recently, there has been a dramatic rise in the adoption of alternative forms of peer support services to assist recovery from substance use disorders; however, often peer support has not been separated out as a formalized intervention component and rigorously empirically tested, making it difficult to determine its effects. This article reports the results of a literature review that was undertaken to assess the effects of peer support groups, one aspect of peer support services, in the treatment of addiction.\n\n\nMETHODS\nThe authors of this article searched electronic databases of relevant peer-reviewed research literature including PubMed and MedLINE.\n\n\nRESULTS\nTen studies met our minimum inclusion criteria, including randomized controlled trials or pre-/post-data studies, adult participants, inclusion of group format, substance use-related, and US-conducted studies published in 1999 or later. Studies demonstrated associated benefits in the following areas: 1) substance use, 2) treatment engagement, 3) human immunodeficiency virus/hepatitis C virus risk behaviors, and 4) secondary substance-related behaviors such as craving and self-efficacy. Limitations were noted on the relative lack of rigorously tested empirical studies within the literature and inability to disentangle the effects of the group treatment that is often included as a component of other services.\n\n\nCONCLUSION\nPeer support groups included in addiction treatment shows much promise; however, the limited data relevant to this topic diminish the ability to draw definitive conclusions. More rigorous research is needed in this area to further expand on this important line of research.", "title": "" }, { "docid": "a5f78c3708a808fd39c4ced6152b30b8", "text": "Building ontology for wireless network intrusion detection is an emerging method for the purpose of achieving high accuracy, comprehensive coverage, self-organization and flexibility for network security. In this paper, we leverage the power of Natural Language Processing (NLP) and Crowdsourcing for this purpose by constructing lightweight semi-automatic ontology learning framework which aims at developing a semantic-based solution-oriented intrusion detection knowledge map using documents from Scopus. Our proposed framework uses NLP as its automatic component and Crowdsourcing is applied for the semi part. The main intention of applying both NLP and Crowdsourcing is to develop a semi-automatic ontology learning method in which NLP is used to extract and connect useful concepts while in uncertain cases human power is leveraged for verification. 
This heuristic method shows a theoretical contribution in terms of lightweight and timesaving ontology learning model as well as practical value by providing solutions for detecting different types of intrusions.", "title": "" }, { "docid": "c05b2317f529d79a2d05223c249549b6", "text": "PURPOSE\nThis study presents a two-degree customized animated stimulus developed to evaluate smooth pursuit in children and investigates the effect of its predetermined characteristics (stimulus type and size) in an adult population. Then, the animated stimulus is used to evaluate the impact of different pursuit motion paradigms in children.\n\n\nMETHODS\nTo study the effect of animating a stimulus, eye movement recordings were obtained from 20 young adults while the customized animated stimulus and a standard dot stimulus were presented moving horizontally at a constant velocity. To study the effect of using a larger stimulus size, eye movement recordings were obtained from 10 young adults while presenting a standard dot stimulus of different size (1° and 2°) moving horizontally at a constant velocity. Finally, eye movement recordings were obtained from 12 children while the 2° customized animated stimulus was presented after three different smooth pursuit motion paradigms. Performance parameters, including gains and number of saccades, were calculated for each stimulus condition.\n\n\nRESULTS\nThe animated stimulus produced in young adults significantly higher velocity gain (mean: 0.93; 95% CI: 0.90-0.96; P = .014), position gain (0.93; 0.85-1; P = .025), proportion of smooth pursuit (0.94; 0.91-0.96, P = .002), and fewer saccades (5.30; 3.64-6.96, P = .008) than a standard dot (velocity gain: 0.87; 0.82-0.92; position gain: 0.82; 0.72-0.92; proportion smooth pursuit: 0.87; 0.83-0.90; number of saccades: 7.75; 5.30-10.46). In contrast, changing the size of a standard dot stimulus from 1° to 2° did not have an effect on smooth pursuit in young adults (P > .05). Finally, smooth pursuit performance did not significantly differ in children for the different motion paradigms when using the animated stimulus (P > .05).\n\n\nCONCLUSIONS\nAttention-grabbing and more dynamic stimuli, such as the developed animated stimulus, might potentially be useful for eye movement research. Finally, with such stimuli, children perform equally well irrespective of the motion paradigm used.", "title": "" }, { "docid": "ca52ed08e302b843ca4bc0a0e8d2fd5c", "text": "We report a case of surgical treatment for Hallermann-Streiff syndrome in a patient with ocular manifestations of esotropia, entropion, and blepharoptosis. A 54-year-old man visited Yeouido St. Mary's Hospital complaining of ocular discomfort due to cilia touching the corneas of both eyes for several years. He had a bird-like face, pinched nose, hypotrichosis of the scalp, mandibular hypoplasia with forward displacement of the temporomandibular joints, a small mouth, and proportional short stature. His ophthalmic features included sparse eyelashes and eyebrows, microphthalmia, nystagmus, lower lid entropion in the right eye, and upper lid entropion with blepharoptosis in both eyes. There was esodeviation of the eyeball of more than 100 prism diopters at near and distance, and there were limitations in ocular movement on lateral gaze. The capsulopalpebral fascia was repaired to treat the right lower lid entropion, but an additional Quickert suture was required to prevent recurrence. 
Blepharoplasty and levator palpebrae repair were performed for blepharoptosis and dermatochalasis. Three months after lid surgery, the right medial rectus muscle was recessed 7.5 mm, the left medial rectus was recessed 7.25 mm, and the left lateral rectus muscle was resected 8.0 mm.", "title": "" }, { "docid": "7078d24d78abf6c46a6bc8c2213561c4", "text": "In the past two decades, a new form of scholarship has appeared in which researchers present an overview of previously conducted research syntheses on the same topic. In these efforts, research syntheses are the principal units of evidence. Overviews of reviews introduce unique problems that require unique solutions. This article describes what methods overviewers have developed or have adopted from other forms of scholarship. These methods concern how to (a) define the broader problem space of an overview, (b) conduct literature searches that specifically look for research syntheses, (c) address the overlap in evidence in related reviews, (d) evaluate the quality of both primary research and research syntheses, (e) integrate the outcomes of research syntheses, especially when they produce discordant results, (f) conduct a second-order meta-analysis, and (g) present findings. The limitations of overviews are also discussed, especially with regard to the age of the included evidence.", "title": "" }, { "docid": "491ad4b4ab179db2efd54f3149d08db5", "text": "In robotics, Air Muscle is used as the analogy of the biological motor for locomotion or manipulation. It has advantages like the passive Damping, good power-weight ratio and usage in rough environments. An experimental test set up is designed to test both contraction and volume trapped in Air Muscle. This paper gives the characteristics of Air Muscle in terms of contraction of Air Muscle with variation of pressure at different loads and also in terms of volume of air trapped in it with variation in pressure at different loads. Braid structure of the Muscle has been described and its theoretical and experimental aspects of the characteristics of an Air Muscle are analysed.", "title": "" }, { "docid": "cf62cb1e0b3cac894a277762808c68e0", "text": "-Most educational institutions’ administrators are concerned about student irregular attendance. Truancies can affect student overall academic performance. The conventional method of taking attendance by calling names or signing on paper is very time consuming and insecure, hence inefficient. Therefore, computer based student attendance management system is required to assist the faculty and the lecturer for this time-provide much convenient method to take attendance, but some prerequisites has to be done before start using the program. Although the use of RFID systems in educational institutions is not new, it is intended to show how the use of it came to solve daily problems in our university. The system has been built using the web-based applications such as ASP.NET and IIS server to cater the recording and reporting of the students’ attendances The system can be easily accessed by the lecturers via the web and most importantly, the reports can be generated in real-time processing, thus, providing valuable information about the students’.", "title": "" }, { "docid": "c446ce16a62f832a167101293fe8b58d", "text": "Unforeseen events such as node failures and resource contention can have a severe impact on the performance of data processing frameworks, such as Hadoop, especially in cloud environments where such incidents are common. 
SLA compliance in the presence of such events requires the ability to quickly and dynamically resize infrastructure resources. Unfortunately, the distributed and stateful nature of data processing frameworks makes it challenging to accurately scale the system at run-time. In this paper, we present the design and implementation of a model-driven autoscaling solution for Hadoop clusters. We first develop novel gray-box performance models for Hadoop workloads that specifically relate job execution times to resource allocation and workload parameters. We then employ these models to dynamically determine the resources required to successfully complete the Hadoop jobs as per the user-specified SLA under various scenarios including node failures and multi-job executions. Our experimental results on three different Hadoop cloud clusters and across different workloads demonstrate the efficacy of our models and highlight their autoscaling capabilities.", "title": "" }, { "docid": "fdcf6e60ad11b10fba077a62f7f1812d", "text": "Delivering web software as a service has grown into a powerful paradigm for deploying a wide range of Internetscale applications. However for end-users, accessing software as a service is fundamentally at odds with free software, because of the associated cost of maintaining server infrastructure. Users end up paying for the service in one way or another, often indirectly through ads or the sale of their private data. In this paper, we aim to enable a new generation of portable and free web apps by proposing an alternative model to the existing client-server web architecture. freedom.js is a platform for developing and deploying rich multi-user web apps, where application logic is pushed out from the cloud and run entirely on client-side browsers. By shifting the responsibility of where code runs, we can explore a novel incentive structure where users power applications with their own resources, gain the ability to control application behavior and manage privacy of data. For developers, we lower the barrier of writing popular web apps by removing much of the deployment cost and making applications simpler to write. We provide a set of novel abstractions that allow developers to automatically scale their application with low complexity and overhead. freedom.js apps are inherently sandboxed, multi-threaded, and composed of reusable modules. We demonstrate the flexibility of freedom.js through a number of applications that we have built on top of the platform, including a messaging application, a social file synchronization tool, and a peer-to-peer (P2P) content delivery network (CDN). Our experience shows that we can implement a P2P-CDN with 50% fewer lines of application-specific code in the freedom.js framework when compared to a standalone version. In turn, we incur an additional startup latency of 50-60ms (about 6% of the page load time) with the freedom.js version, without any noticeable impact on system throughput.", "title": "" }, { "docid": "d0ea7fe7ed0dfdca3b43de20bb1dc1d0", "text": "Text clustering methods can be used to structure large sets of text or hypertext documents. The well-known methods of text clustering, however, do not really address the special problems of text clustering: very high dimensionality of the data, very large size of the databases and understandability of the cluster description. In this paper, we introduce a novel approach which uses frequent item (term) sets for text clustering. 
Such frequent sets can be efficiently discovered using algorithms for association rule mining. To cluster based on frequent term sets, we measure the mutual overlap of frequent sets with respect to the sets of supporting documents. We present two algorithms for frequent term-based text clustering, FTC which creates flat clusterings and HFTC for hierarchical clustering. An experimental evaluation on classical text documents as well as on web documents demonstrates that the proposed algorithms obtain clusterings of comparable quality significantly more efficiently than state-of-the-art text clustering algorithms. Furthermore, our methods provide an understandable description of the discovered clusters by their frequent term sets.", "title": "" }, { "docid": "763b8982d13b0637a17347b2c557f1f8", "text": "This paper describes an application of Case-Based Reasoning to the problem of reducing the number of final-line fraud investigations in the credit approval process. The performance of a suite of algorithms which are applied in combination to determine a diagnosis from a set of retrieved cases is reported. An adaptive diagnosis algorithm combining several neighbourhood-based and probabilistic algorithms was found to have the best performance, and these results indicate that an adaptive solution can provide fraud filtering and case ordering functions for reducing the number of final-line fraud investigations necessary.", "title": "" }, { "docid": "a0e68c731cdb46d1bdf708997a871695", "text": "Iris segmentation is an essential module in iris recognition because it defines the effective image region used for subsequent processing such as feature extraction. Traditional iris segmentation methods often involve an exhaustive search of a large parameter space, which is time consuming and sensitive to noise. To address these problems, this paper presents a novel algorithm for accurate and fast iris segmentation. After efficient reflection removal, an Adaboost-cascade iris detector is first built to extract a rough position of the iris center. Edge points of iris boundaries are then detected, and an elastic model named pulling and pushing is established. Under this model, the center and radius of the circular iris boundaries are iteratively refined in a way driven by the restoring forces of Hooke's law. Furthermore, a smoothing spline-based edge fitting scheme is presented to deal with noncircular iris boundaries. After that, eyelids are localized via edge detection followed by curve fitting. The novelty here is the adoption of a rank filter for noise elimination and a histogram filter for tackling the shape irregularity of eyelids. Finally, eyelashes and shadows are detected via a learned prediction model. This model provides an adaptive threshold for eyelash and shadow detection by analyzing the intensity distributions of different iris regions. Experimental results on three challenging iris image databases demonstrate that the proposed algorithm outperforms state-of-the-art methods in both accuracy and speed.", "title": "" }, { "docid": "3a7bfdaf92ae9b0509220016eecc8042", "text": "Background/Objectives:Policies focused on food quality are intended to facilitate healthy choices by consumers, even those who are not fully informed about the links between food consumption and health. The goal of this paper is to evaluate the potential impact of such a food reformulation scenario on health outcomes.Subjects/Methods:We first created reformulation scenarios adapted to the French characteristics of foods.
After computing the changes in the nutrient intakes of representative consumers, we determined the health effects of these changes. To do so, we used the DIETRON health assessment model, which calculates the number of deaths avoided by changes in food and nutrient intakes.Results:Depending on the reformulation scenario, the total impact of reformulation varies between 2408 and 3597 avoided deaths per year, which amounts to a 3.7–5.5% reduction in mortality linked to diseases considered in the DIETRON model. The impacts are much higher for men than for women and much higher for low-income categories than for high-income categories. These differences result from the differences in consumption patterns and initial disease prevalence among the various income categories.Conclusions:Even without any changes in consumers’ behaviors, realistic food reformulation may have significant health outcomes.", "title": "" }, { "docid": "149de84d7cbc9ea891b4b1297957ade7", "text": "Deep convolutional neural networks (CNNs) have had a major impact in most areas of image understanding, including object category detection. In object detection, methods such as R-CNN have obtained excellent results by integrating CNNs with region proposal generation algorithms such as selective search. In this paper, we investigate the role of proposal generation in CNN-based detectors in order to determine whether it is a necessary modelling component, carrying essential geometric information not contained in the CNN, or whether it is merely a way of accelerating detection. We do so by designing and evaluating a detector that uses a trivial region generation scheme, constant for each image. Combined with SPP, this results in an excellent and fast detector that does not require to process an image with algorithms other than the CNN itself. We also streamline and simplify the training of CNN-based detectors by integrating several learning steps in a single algorithm, as well as by proposing a number of improvements that accelerate detection.", "title": "" }, { "docid": "7fff067167bb50cab7ab84c91518031a", "text": "Unsupervised depth estimation from a single image is a very attractive technique with several implications in robotic, autonomous navigation, augmented reality and so on. This topic represents a very challenging task and the advent of deep learning enabled to tackle this problem with excellent results. However, these architectures are extremely deep and complex. Thus, real-time performance can be achieved only by leveraging power-hungry GPUs that do not allow to infer depth maps in application fields characterized by low-power constraints. To tackle this issue, in this paper we propose a novel architecture capable to quickly infer an accurate depth map on a CPU, even of an embedded system, using a pyramid of features extracted from a single input image. Similarly to state-of-the-art, we train our network in an unsupervised manner casting depth estimation as an image reconstruction problem. Extensive experimental results on the KITTI dataset show that compared to the top performing approach our network has similar accuracy but a much lower complexity (about 6% of parameters) enabling to infer a depth map for a KITTI image in about 1.7 s on the Raspberry Pi 3 and at more than 8 Hz on a standard CPU. Moreover, by trading accuracy for efficiency, our network allows to infer maps at about 2 Hz and 40 Hz respectively, still being more accurate than most state-of-the-art slower methods. 
To the best of our knowledge, it is the first method enabling such performance on CPUs paving the way for effective deployment of unsupervised monocular depth estimation even on embedded systems.", "title": "" }, { "docid": "0b22d7f6326210f02da44b0fa686f25a", "text": "Current methods learn monolithic attribute predictors, with the assumption that a single model is sufficient to reflect human understanding of a visual attribute. However, in reality, humans vary in how they perceive the association between a named property and image content. For example, two people may have slightly different internal models for what makes a shoe look \"formal\", or they may disagree on which of two scenes looks \"more cluttered\". Rather than discount these differences as noise, we propose to learn user-specific attribute models. We adapt a generic model trained with annotations from multiple users, tailoring it to satisfy user-specific labels. Furthermore, we propose novel techniques to infer user-specific labels based on transitivity and contradictions in the user's search history. We demonstrate that adapted attributes improve accuracy over both existing monolithic models as well as models that learn from scratch with user-specific data alone. In addition, we show how adapted attributes are useful to personalize image search, whether with binary or relative attributes.", "title": "" } ]
scidocsrr
d7dcdb0f375f3cd055764fb1951a7241
AND: Autoregressive Novelty Detectors
[ { "docid": "5d80ce0bffd5bc2016aac657669a98de", "text": "Information and Communication Technology (ICT) has a great impact on social wellbeing, economic growth and national security in todays world. Generally, ICT includes computers, mobile communication devices and networks. ICT is also embraced by a group of people with malicious intent, also known as network intruders, cyber criminals, etc. Confronting these detrimental cyber activities is one of the international priorities and important research area. Anomaly detection is an important data analysis task which is useful for identifying the network intrusions. This paper presents an in-depth analysis of four major categories of anomaly detection techniques which include classification, statistical, information theory and clustering. The paper also discusses research challenges with the datasets used for network intrusion detection. & 2015 Published by Elsevier Ltd.", "title": "" }, { "docid": "a7456ecf7af7e447cdde61f371128965", "text": "For most deep learning practitioners, sequence modeling is synonymous with recurrent networks. Yet recent results indicate that convolutional architectures can outperform recurrent networks on tasks such as audio synthesis and machine translation. Given a new sequence modeling task or dataset, which architecture should one use? We conduct a systematic evaluation of generic convolutional and recurrent architectures for sequence modeling. The models are evaluated across a broad range of standard tasks that are commonly used to benchmark recurrent networks. Our results indicate that a simple convolutional architecture outperforms canonical recurrent networks such as LSTMs across a diverse range of tasks and datasets, while demonstrating longer effective memory. We conclude that the common association between sequence modeling and recurrent networks should be reconsidered, and convolutional networks should be regarded as a natural starting point for sequence modeling tasks. To assist related work, we have made code available at http://github.com/locuslab/TCN.", "title": "" } ]
[ { "docid": "bba81ac392b87a123a1e2f025bffd30c", "text": "This paper presents a new multi-objective deep reinforcement learning (MODRL) framework based on deep Q-networks. We propose the use of linear and non-linear methods to develop the MODRL framework that includes both single-policy and multi-policy strategies. The experimental results on two benchmark problems including the two-objective deep sea treasure environment and the three-objective mountain car problem indicate that the proposed framework is able to converge to the optimal Pareto solutions effectively. The proposed framework is generic, which allows implementation of different deep reinforcement learning algorithms in different complex environments. This therefore overcomes many difficulties involved with standard multi-objective reinforcement learning (MORL) methods existing in the current literature. The framework creates a platform as a testbed environment to develop methods for solving various problems associated with the current MORL. Details of the framework implementation can be referred to http://www.deakin.edu.au/~thanhthi/drl.htm.", "title": "" }, { "docid": "b7961e6b82ca38e65fcfefcb5309bd46", "text": "IMPORTANCE\nCryolipolysis is the noninvasive reduction of fat with localized cutaneous cooling. Since initial introduction, over 650,000 cryolipolysis treatment cycles have been performed worldwide. We present a previously unreported, rare adverse effect following cryolipolysis: paradoxical adipose hyperplasia.\n\n\nOBSERVATIONS\nA man in his 40s underwent a single cycle of cryolipolysis to his abdomen. Three months following his treatment, a gradual enlargement of the treatment area was noted. This enlargement was a large, well-demarcated subcutaneous mass, slightly tender to palpation. Imaging studies revealed accumulation of adipose tissue with normal signal intensity within the treatment area.\n\n\nCONCLUSIONS AND RELEVANCE\nParadoxical adipose hyperplasia is a rare, previously unreported adverse effect of cryolipolysis with an incidence of 0.0051%. No single unifying risk factor has been identified. The phenomenon seems to be more common in male patients undergoing cryolipolysis. At this time, there is no evidence of spontaneous resolution. Further studies are needed to characterize the pathogenesis and histologic findings of this rare adverse event.", "title": "" }, { "docid": "88a8ea1de5ad5cb8883890c1e30b3491", "text": "Service robots will have to accomplish more and more complex, open-ended tasks and regularly acquire new skills. In this work, we propose a new approach to the problem of generating plans for such household robots. Instead composing them from atomic actions — the common approach in robot planning — we propose to transform task descriptions on web sites like ehow.com into executable robot plans. We present methods for automatically converting the instructions from natural language into a formal, logic-based representation, for resolving the word senses using the WordNet database and the Cyc ontology, and for exporting the generated plans into the mobile robot's plan language RPL. We discuss the problem of inferring information that is missing in these descriptions and the problem of grounding the abstract task descriptions in the perception and action system, and we propose techniques for solving them. The whole system works autonomously without human interaction. 
It has successfully been tested with a set of about 150 natural language directives, of which up to 80% could be correctly transformed.", "title": "" }, { "docid": "d62c2e7ca3040900d04f83ef4f99de4f", "text": "Manual classification of brain tumor is time devastating and bestows ambiguous results. Automatic image classification is emergent thriving research area in medical field. In the proposed methodology, features are extracted from raw images which are then fed to ANFIS (Artificial neural fuzzy inference system).ANFIS being neuro-fuzzy system harness power of both hence it proves to be a sophisticated framework for multiobject classification. A comprehensive feature set and fuzzy rules are selected to classify an abnormal image to the corresponding tumor type. This proposed technique is fast in execution, efficient in classification and easy in implementation.", "title": "" }, { "docid": "9adf653a332e07b8aa055b62449e1475", "text": "False-belief task have mainly been associated with the explanatory notion of the theory of mind and the theory-theory. However, it has often been pointed out that this kind of highlevel reasoning is computational and time expensive. During the last decades, the idea of embodied intelligence, i.e. complex behavior caused by sensorimotor contingencies, has emerged in both the fields of neuroscience, psychology and artificial intelligence. Viewed from this perspective, the failing in a false-belief test can be the result of the impairment to recognize and track others’ sensorimotor contingencies and affordances. Thus, social cognition is explained in terms of lowlevel signals instead of high-level reasoning. In this work, we present a generative model for optimal action selection which simultaneously can be employed to make predictions of others’ actions. As we base the decision making on a hidden state representation of sensorimotor signals, this model is in line with the ideas of embodied intelligence. We demonstrate how the tracking of others’ hidden states can give rise to correct falsebelief inferences, while a lack thereof leads to failing. With this work, we want to emphasize the importance of sensorimotor contingencies in social cognition, which might be a key to artificial, socially intelligent systems.", "title": "" }, { "docid": "a4a5c6cbec237c2cd6fb3abcf6b4a184", "text": "Developing automatic diagnostic tools for the early detection of skin cancer lesions in dermoscopic images can help to reduce melanoma-induced mortality. Image segmentation is a key step in the automated skin lesion diagnosis pipeline. In this paper, a fast and fully-automatic algorithm for skin lesion segmentation in dermoscopic images is presented. Delaunay Triangulation is used to extract a binary mask of the lesion region, without the need of any training stage. A quantitative experimental evaluation has been conducted on a publicly available database, by taking into account six well-known state-of-the-art segmentation methods for comparison. The results of the experimental analysis demonstrate that the proposed approach is highly accurate when dealing with benign lesions, while the segmentation accuracy significantly decreases when melanoma images are processed. 
This behavior led us to consider geometrical and color features extracted from the binary masks generated by our algorithm for classification, achieving promising results for melanoma detection.", "title": "" }, { "docid": "1debcbf981ae6115efcc4a853cd32bab", "text": "Vision and language understanding has emerged as a subject undergoing intense study in Artificial Intelligence. Among many tasks in this line of research, visual question answering (VQA) has been one of the most successful ones, where the goal is to learn a model that understands visual content at region-level details and finds their associations with pairs of questions and answers in the natural language form. Despite the rapid progress in the past few years, most existing work in VQA have focused primarily on images. In this paper, we focus on extending VQA to the video domain and contribute to the literature in three important ways. First, we propose three new tasks designed specifically for video VQA, which require spatio-temporal reasoning from videos to answer questions correctly. Next, we introduce a new large-scale dataset for video VQA named TGIF-QA that extends existing VQA work with our new tasks. Finally, we propose a dual-LSTM based approach with both spatial and temporal attention, and show its effectiveness over conventional VQA techniques through empirical evaluations.", "title": "" }, { "docid": "2c39eafa87d34806dd1897335fdfe41c", "text": "One of the issues facing credit card fraud detection systems is that a significant percentage of transactions labeled as fraudulent are in fact legitimate. These \"false alarms\" delay the detection of fraudulent transactions and can cause unnecessary concerns for customers. In this study, over 1 million unique credit card transactions from 11 months of data from a large Canadian bank were analyzed. A meta-classifier model was applied to the transactions after being analyzed by the Bank's existing neural network based fraud detection algorithm. This meta-classifier model consists of 3 base classifiers constructed using the decision tree, naïve Bayesian, and k-nearest neighbour algorithms. The naïve Bayesian algorithm was also used as the meta-level algorithm to combine the base classifier predictions to produce the final classifier.
Results from the research show that when a meta-classifier was deployed in series with the Bank's existing fraud detection algorithm improvements of up to 28% to their existing system can be achieved.", "title": "" }, { "docid": "88229017a9d4df8dfc44e996a116cbad", "text": "BACKGROUND\nThe Society of Thoracic Surgeons (STS)/American College of Cardiology Transcatheter Valve Therapy (TVT) Registry captures all procedures with Food and Drug Administration-approved transcatheter valve devices performed in the United States, and is mandated as a condition of reimbursement by the Centers for Medicare & Medicaid Services.\n\n\nOBJECTIVES\nThis annual report focuses on patient characteristics, trends, and outcomes of transcatheter aortic and mitral valve catheter-based valve procedures in the United States.\n\n\nMETHODS\nWe reviewed data for all patients receiving commercially approved devices from 2012 through December 31, 2015, that are entered in the TVT Registry.\n\n\nRESULTS\nThe 54,782 patients with transcatheter aortic valve replacement demonstrated decreases in expected risk of 30-day operative mortality (STS Predicted Risk of Mortality [PROM]) of 7% to 6% and transcatheter aortic valve replacement PROM (TVT PROM) of 4% to 3% (both p < 0.0001) from 2012 to 2015. Observed in-hospital mortality decreased from 5.7% to 2.9%, and 1-year mortality decreased from 25.8% to 21.6%. However, 30-day post-procedure pacemaker insertion increased from 8.8% in 2013 to 12.0% in 2015. The 2,556 patients who underwent transcatheter mitral leaflet clip in 2015 were similar to patients from 2013 to 2014, with hospital mortality of 2% and with mitral regurgitation reduced to grade ≤2 in 87% of patients (p < 0.0001). The 349 patients who underwent mitral valve-in-valve and mitral valve-in-ring procedures were high risk, with an STS PROM for mitral valve replacement of 11%. The observed hospital mortality was 7.2%, and 30-day post-procedure mortality was 8.5%.\n\n\nCONCLUSIONS\nThe TVT Registry is an innovative registry that monitors quality, patient safety and trends for these rapidly evolving new technologies.", "title": "" }, { "docid": "b1dd6c2db60cae5405c07c3757ed6696", "text": "In this paper, we present the Smartbin system that identifies fullness of litter bin. The system is designed to collect data and to deliver the data through wireless mesh network. The system also employs duty cycle technique to reduce power consumption and to maximize operational time. The Smartbin system was tested in an outdoor environment. Through the testbed, we collected data and applied sense-making methods to obtain litter bin utilization and litter bin daily seasonality information. With such information, litter bin providers and cleaning contractors are able to make better decision to increase productivity.", "title": "" }, { "docid": "34623fb38c81af8efaf8e7073e4c43bc", "text": "The k-means problem consists of finding k centers in R that minimize the sum of the squared distances of all points in an input set P from R to their closest respective center. Awasthi et al. recently showed that there exists a constant ε′ > 0 such that it is NP-hard to approximate the k-means objective within a factor of 1 + ε′. We establish that the constant ε′ is at least 0.0013. For a given set of points P ⊂ R, the k-means problem consists of finding a partition of P into k clusters (C1, ..., Ck) with corresponding centers (c1, ...
, ck) that minimize the sum of the squared distances of all points in P to their corresponding center, i.e. the quantity arg min_{(C1,...,Ck),(c1,...,ck)} ∑_{i=1}^{k} ∑_{x∈Ci} ||x − ci||^2.", "title": "" }, { "docid": "45bf73a93f0014820864d1805f257bfc", "text": "SEPIC topology based bidirectional DC-DC Converter is proposed for interfacing energy storage elements such as batteries & super capacitors with various power systems. This proposed bidirectional DC-DC converter acts as a buck boost where it changes its output voltage according to its duty cycle. An important factor is used to increase the voltage conversion ratio as well as it achieves high efficiency. In the proposed SEPIC based BDC converter is used to increase the voltage proposal of this is low voltage at the input side is converted into a very high level at the output side to drive the HVDC smart grid. In this project PIC microcontroller is used to give faster response than the existing system. The proposed scheme ensures that the voltage on the both sides of the converter is always matched thereby the conduction losses can be reduced to improve efficiency. MATLAB/Simulink software is utilized for simulation. The obtained experimental results show the functionality and feasibility of the proposed converter.", "title": "" }, { "docid": "efddb60143c59ee9e459e1048a09787c", "text": "The aim of this paper is to determine the possibilities of using commercial off the shelf FPGA based Software Defined Radio Systems to develop a system capable of detecting and locating small drones.", "title": "" }, { "docid": "7b4567b9f32795b267f2fb2d39bbee51", "text": "BACKGROUND\nWearable and mobile devices that capture multimodal data have the potential to identify risk factors for high stress and poor mental health and to provide information to improve health and well-being.\n\n\nOBJECTIVE\nWe developed new tools that provide objective physiological and behavioral measures using wearable sensors and mobile phones, together with methods that improve their data integrity. The aim of this study was to examine, using machine learning, how accurately these measures could identify conditions of self-reported high stress and poor mental health and which of the underlying modalities and measures were most accurate in identifying those conditions.\n\n\nMETHODS\nWe designed and conducted the 1-month SNAPSHOT study that investigated how daily behaviors and social networks influence self-reported stress, mood, and other health or well-being-related factors. We collected over 145,000 hours of data from 201 college students (age: 18-25 years, male:female=1.8:1) at one university, all recruited within self-identified social groups. Each student filled out standardized pre- and postquestionnaires on stress and mental health; during the month, each student completed twice-daily electronic diaries (e-diaries), wore two wrist-based sensors that recorded continuous physical activity and autonomic physiology, and installed an app on their mobile phone that recorded phone usage and geolocation patterns. We developed tools to make data collection more efficient, including data-check systems for sensor and mobile phone data and an e-diary administrative module for study investigators to locate possible errors in the e-diaries and communicate with participants to correct their entries promptly, which reduced the time taken to clean e-diary data by 69%.
We constructed features and applied machine learning to the multimodal data to identify factors associated with self-reported poststudy stress and mental health, including behaviors that can be possibly modified by the individual to improve these measures.\n\n\nRESULTS\nWe identified the physiological sensor, phone, mobility, and modifiable behavior features that were best predictors for stress and mental health classification. In general, wearable sensor features showed better classification performance than mobile phone or modifiable behavior features. Wearable sensor features, including skin conductance and temperature, reached 78.3% (148/189) accuracy for classifying students into high or low stress groups and 87% (41/47) accuracy for classifying high or low mental health groups. Modifiable behavior features, including number of naps, studying duration, calls, mobility patterns, and phone-screen-on time, reached 73.5% (139/189) accuracy for stress classification and 79% (37/47) accuracy for mental health classification.\n\n\nCONCLUSIONS\nNew semiautomated tools improved the efficiency of long-term ambulatory data collection from wearable and mobile devices. Applying machine learning to the resulting data revealed a set of both objective features and modifiable behavioral features that could classify self-reported high or low stress and mental health groups in a college student population better than previous studies and showed new insights into digital phenotyping.", "title": "" }, { "docid": "ff1ed09b9952f9d0b67d6f6bb1cd507a", "text": "Microblogging websites have emerged to the center of information production and diffusion, on which people can get useful information from other users’ microblog posts. In the era of Big Data, we are overwhelmed by the large amount of microblog posts. To make good use of these informative data, an effective search tool is required specialized for microblog posts. However, it is not trivial to do microblog search due to the following reasons: 1) microblog posts are noisy and time-sensitive rendering general information retrieval models ineffective. 2) Conventional IR models are not designed to consider microblog-specific features. In this paper, we propose to utilize learning to rank model for microblog search. We combine content-based, microblog-specific and temporal features into learning to rank models, which are found to model microblog posts effectively. To study the performance of learning to rank models, we evaluate our models using tweet data set provided by TREC 2011 and TREC 2012 microblogs track with the comparison of three state-of-the-art information retrieval baselines, vector space model, language model, BM25 model. Extensive experimental studies demonstrate the effectiveness of learning to rank models and the usefulness to integrate microblog-specific and temporal information for microblog search task.", "title": "" }, { "docid": "b18ca3607462ba54ec86055dfd4683fe", "text": "Electric power transmission lines face increased threats from malicious attacks and natural disasters. This underscores the need to develop new techniques to ensure safe and reliable transmission of electric power. This paper deals with the development of an online monitoring technique based on mechanical state estimation to determine the sag levels of overhead transmission lines in real time and hence determine if these lines are in normal physical condition or have been damaged or downed.
A computational algorithm based on least squares state estimation is applied to the physical transmission line equations to determine the conductor sag levels from measurements of tension, temperature, and other transmission line conductor parameters. The estimated conductor sag levels are used to generate warning signals of vertical clearance violations in the energy management system. These warning signals are displayed to the operator to make appropriate decisions to maintain the line within the prescribed clearance limits and prevent potential cascading failures.", "title": "" }, { "docid": "c7fd5a26da59fab4e66e0cb3e93530d6", "text": "Switching audio amplifiers are widely used in HBridge topology thanks to their high efficiency; however low audio performances in single ended power stage topology is a strong weakness leading to not be used for headset applications. This paper explains the importance of efficient error correction in Single Ended Class-D audio amplifier. A hysteresis control for Class-D amplifier with a variable window is also presented. The analyses are verified by simulations and measurements. The proposed solution was fabricated in 0.13µm CMOS technology with an active area of 0.2mm2. It could be used in single ended output configuration fully compatible with common headset connectors. The proposed Class-D amplifier achieves a harmonic distortion of 0.01% and a power supply rejection of 70dB with a quite low static current consumption.", "title": "" }, { "docid": "dcf9cba8bf8e2cc3f175e63e235f6b81", "text": "Convolutional Neural Networks (CNNs) exhibit remarkable performance in various machine learning tasks. As sensor-equipped internet of things (IoT) devices permeate into every aspect of modern life, it is increasingly important to run CNN inference, a computationally intensive application, on resource constrained devices. We present a technique for fast and energy-efficient CNN inference on mobile SoC platforms, which are projected to be a major player in the IoT space. We propose techniques for efficient parallelization of CNN inference targeting mobile GPUs, and explore the underlying tradeoffs. Experiments with running Squeezenet on three different mobile devices confirm the effectiveness of our approach. For further study, please refer to the project repository available on our GitHub page: https://github.com/mtmd/Mobile ConvNet.", "title": "" }, { "docid": "8a20feb22ce8797fa77b5d160919789c", "text": "We proposed the concept of hardware software co-simulation for image processing using Xilinx system generator. Recent advances in synthesis tools for SIMULINK suggest a feasible high-level approach to algorithm implementation for embedded DSP systems. An efficient FPGA based hardware design for enhancement of color and grey scale images in image and video processing. The top model – based visual development process of SIMULINK facilitates host side simulation and validation, as well as synthesis of target specific code, furthermore, legacy code written in MATLAB or ANCI C can be reuse in custom blocks. However, the code generated for DSP platforms is often not very efficient. We are implemented the Image processing applications on FPGA it can be easily design.", "title": "" } ]
scidocsrr
20f57da36f9d8ec9fdab0f7eea8a015c
Privacy by design in big data: An overview of privacy enhancing technologies in the era of big data analytics
[ { "docid": "c80222e5a7dfe420d16e10b45f8fab66", "text": "Matching people across non-overlapping camera views, known as person re-identification, is challenging due to the lack of spatial and temporal constraints and large visual appearance changes caused by variations in view angle, lighting, background clutter and occlusion. To address these challenges, most previous approaches aim to extract visual features that are both distinctive and stable under appearance changes. However, most visual features and their combinations under realistic conditions are neither stable nor distinctive thus should not be used indiscriminately. In this paper, we propose to formulate person re-identification as a distance learning problem, which aims to learn the optimal distance that can maximises matching accuracy regardless the choice of representation. To that end, we introduce a novel Probabilistic Relative Distance Comparison (PRDC) model, which differs from most existing distance learning methods in that, rather than minimising intra-class variation whilst maximising intra-class variation, it aims to maximise the probability of a pair of true match having a smaller distance than that of a wrong match pair. This makes our model more tolerant to appearance changes and less susceptible to model over-fitting. Extensive experiments are carried out to demonstrate that 1) by formulating the person re-identification problem as a distance learning problem, notable improvement on matching accuracy can be obtained against conventional person re-identification techniques, which is particularly significant when the training sample size is small; and 2) our PRDC outperforms not only existing distance learning methods but also alternative learning methods based on boosting and learning to rank.", "title": "" } ]
[ { "docid": "fde0f116dfc929bf756d80e2ce69b1c7", "text": "The particle swarm optimization (PSO), new to the electromagnetics community, is a robust stochastic evolutionary computation technique based on the movement and intelligence of swarms. This paper introduces a conceptual overview and detailed explanation of the PSO algorithm, as well as how it can be used for electromagnetic optimizations. This paper also presents several results illustrating the swarm behavior in a PSO algorithm developed by the authors at UCLA specifically for engineering optimizations (UCLA-PSO). Also discussed is recent progress in the development of the PSO and the special considerations needed for engineering implementation including suggestions for the selection of parameter values. Additionally, a study of boundary conditions is presented indicating the invisible wall technique outperforms absorbing and reflecting wall techniques. These concepts are then integrated into a representative example of optimization of a profiled corrugated horn antenna.", "title": "" }, { "docid": "3299c32ee123e8c5fb28582e5f3a8455", "text": "Software defects, commonly known as bugs, present a serious challenge for system reliability and dependability. Once a program failure is observed, the debugging activities to locate the defects are typically nontrivial and time consuming. In this paper, we propose a novel automated approach to pin-point the root-causes of software failures.\n Our proposed approach consists of three steps. The first step is bug prediction, which leverages the existing work on anomaly-based bug detection as exceptional behavior during program execution has been shown to frequently point to the root cause of a software failure. The second step is bug isolation, which eliminates false-positive bug predictions by checking whether the dynamic forward slices of bug predictions lead to the observed program failure. The last step is bug validation, in which the isolated anomalies are validated by dynamically nullifying their effects and observing if the program still fails. The whole bug prediction, isolation and validation process is fully automated and can be implemented with efficient architectural support. Our experiments with 6 programs and 7 bugs, including a real bug in the gcc 2.95.2 compiler, show that our approach is highly effective at isolating only the relevant anomalies. Compared to state-of-art debugging techniques, our proposed approach pinpoints the defect locations more accurately and presents the user with a much smaller code set to analyze.", "title": "" }, { "docid": "b12cc6abd517246009e1d4230d1878c4", "text": "Electronic government is being increasingly recognized as a means for transforming public governance. Despite this increasing interest, information systems (IS) literature is mostly silent on what really contributes to the success of e-government 100 TEO, SRIVASTAVA, AND JIANG Web sites. To fill this gap, this study examines the role of trust in e-government success using the updated DeLone and McLean IS success model as the theoretical framework. The model is tested via a survey of 214 Singapore e-government Web site users. The results show that trust in government, but not trust in technology, is positively related to trust in e-government Web sites. Further, trust in e-government Web sites is positively related to information quality, system quality, and service quality. 
The quality constructs have different effects on “intention to continue” using the Web site and “satisfaction” with the Web site. Post hoc analysis indicates that the nature of usage (active versus passive users) may help us better understand the interrelationships among success variables examined in this study. This result suggests that the DeLone and McLean model can be further extended by examining the nature of IS use. In addition, it is important to consider the role of trust as well as various Web site quality attributes in understanding e-government success.", "title": "" }, { "docid": "141d607eb8caeb7512f777ee3dea5972", "text": "DBSCAN is a base algorithm for density based clustering. It can detect the clusters of different shapes and sizes from the large amount of data which contains noise and outliers. However, it is fail to handle the local density variation that exists within the cluster. In this paper, we propose a density varied DBSCAN algorithm which is capable to handle local density variation within the cluster. It calculates the growing cluster density mean and then the cluster density variance for any core object, which is supposed to be expended further, by considering density of its -neighborhood with respect to cluster density mean. If cluster density variance for a core object is less than or equal to a threshold value and also satisfying the cluster similarity index, then it will allow the core object for expansion. The experimental results show that the proposed clustering algorithm gives optimized results.", "title": "" }, { "docid": "da1ac93453bc9da937df4eb49902fbe5", "text": "A novel hierarchical multimodal attention-based model is developed in this paper to generate more accurate and descriptive captions for images. Our model is an \"end-to-end\" neural network which contains three related sub-networks: a deep convolutional neural network to encode image contents, a recurrent neural network to identify the objects in images sequentially, and a multimodal attention-based recurrent neural network to generate image captions. The main contribution of our work is that the hierarchical structure and multimodal attention mechanism is both applied, thus each caption word can be generated with the multimodal attention on the intermediate semantic objects and the global visual content. Our experiments on two benchmark datasets have obtained very positive results.", "title": "" }, { "docid": "2fe33171bc57e5b78ce4dafb30f7d427", "text": "In this paper, we propose a volume visualization system that accepts direct manipulation through a sketch-based What You See Is What You Get (WYSIWYG) approach. Similar to the operations in painting applications for 2D images, in our system, a full set of tools have been developed to enable direct volume rendering manipulation of color, transparency, contrast, brightness, and other optical properties by brushing a few strokes on top of the rendered volume image. To be able to smartly identify the targeted features of the volume, our system matches the sparse sketching input with the clustered features both in image space and volume space. To achieve interactivity, both special algorithms to accelerate the input identification and feature matching have been developed and implemented in our system. 
Without resorting to tuning transfer function parameters, our proposed system accepts sparse stroke inputs and provides users with intuitive, flexible and effective interaction during volume data exploration and visualization.", "title": "" }, { "docid": "63f078ce0186faa9f541b5b2145431ea", "text": "Although insulated-gate bipolar-transistor (IGBT) turn-on losses can be comparable to turn-off losses, IGBT turn-on has not been as thoroughly studied in the literature. In the present work IGBT turn on under resistive and inductive load conditions is studied in detail through experiments, finite element simulations, and circuit simulations using physics-based semiconductor models. Under resistive load conditions, it is critical to accurately model the conductivity-modulation phenomenon. Under clamped inductive load conditions at turn-on there is strong interaction between the IGBT and the freewheeling diode undergoing reverse recovery. Physics-based IGBT and diode models are used that have been proved accurate in the simulation of IGBT turn-off.", "title": "" }, { "docid": "8e0badc0828019460da0017774c8b631", "text": "To meet the explosive growth in traffic during the next twenty years, 5G systems using local area networks need to be developed. These systems will comprise of small cells and will use extreme cell densification. The use of millimeter wave (Mmwave) frequencies, in particular from 20 GHz to 90 GHz, will revolutionize wireless communications given the extreme amount of available bandwidth. However, the different propagation conditions and hardware constraints of Mmwave (e.g., the use of RF beamforming with very large arrays) require reconsidering the modulation methods for Mmwave compared to those used below 6 GHz. In this paper we present ray-tracing results, which, along with recent propagation measurements at Mmwave, all point to the fact that Mmwave frequencies are very appropriate for next generation, 5G, local area wireless communication systems. Next, we propose null cyclic prefix single carrier as the best candidate for Mmwave communications. Finally, systemlevel simulation results show that with the right access point deployment peak rates of over 15 Gbps are possible at Mmwave along with a cell edge experience in excess of 400 Mbps.", "title": "" }, { "docid": "7e8723331aaec6b4f448030a579fa328", "text": "With the recent trend toward more non extraction treatment, several appliances have been advocated to distalize molars in the upper arch. Certain principles, as outlined by Burstone, must be borne in mind when designing such an appliance:", "title": "" }, { "docid": "bf152c9b8937f84b3a7796133a5f0749", "text": "This paper proposes a robust sensor fusion algorithm to accurately track the spatial location and motion of a human under various dynamic activities, such as walking, running, and jumping. The position accuracy of the indoor wireless positioning systems frequently suffers from non-line-of-sight and multipath effects, resulting in heavy-tailed outliers and signal outages. We address this problem by integrating the estimates from an ultra-wideband (UWB) system and inertial measurement units, but also taking advantage of the estimated velocity and height obtained from an aiding lower body biomechanical model. The proposed method is a cascaded Kalman filter-based algorithm where the orientation filter is cascaded with the robust position/velocity filter. 
The outliers are detected for individual measurements using the normalized innovation squared, where the measurement noise covariance is softly scaled to reduce its weight. The positioning accuracy is further improved with the Rauch–Tung–Striebel smoother. The proposed algorithm was validated against an optical motion tracking system for both slow (walking) and dynamic (running and jumping) activities performed in laboratory experiments. The results show that the proposed algorithm can maintain high accuracy for tracking the location of a subject in the presence of the outliers and UWB signal outages with a combined 3-D positioning error of less than 13 cm.", "title": "" }, { "docid": "a7e2538186ce04325d24842c72ff41c6", "text": "Omics refers to a field of study in biology such as genomics, proteomics, and metabolomics. Investigating fundamental biological problems based on omics data would increase our understanding of bio-systems as a whole. However, omics data is characterized with high-dimensionality and unbalance between features and samples, which poses big challenges for classical statistical analysis and machine learning methods. This paper studies a minimal-redundancy-maximal-relevance (MRMR) feature selection for omics data classification using three different relevance evaluation measures including mutual information (MI), correlation coefficient (CC), and maximal information coefficient (MIC). A linear forward search method is used to search the optimal feature subset. The experimental results on five real-world omics datasets indicate that MRMR feature selection with CC is more robust to obtain better (or competitive) classification accuracy than the other two measures.", "title": "" }, { "docid": "cb13f835a46c44302e4068241cfc7142", "text": "Medical diagnosis is an exciting are of research and many researchers have been working on the application of Artificial Intelligence techniques to develop disease recognition systems. They are analysing currently available information and also biochemical data collecting from clinical laboratories and experts for identifying pathological status of the patient. During the process of diagnosis, the clinical data so obtained from several sources must be inferred and classified into a particular pathology. Computer aided diagnosis tools designed based on biologically inspired methods such as artificial neural/immune networks can be employed to improve the regular diagnostic process and to avoid misdiagnosis. In this paper pre-processing and classification techniques are used to train the system. Artificial immune recognition method is used for pre-processing and KNN classifier is used for classification. The system is tested with some sample data and obtained the results. The system is validated with annotated data.", "title": "" }, { "docid": "7b9df4427a6290cf5efda9c41612ad64", "text": "A systematic design of planar MIMO monopole antennas with significantly reduced mutual coupling is presented, based on the concept of metamaterials. The design is performed by means of individual rectangular loop resonators, placed in the space between the antenna elements. The underlying principle is that resonators act like small metamaterial samples, thus providing an effective means of controlling electromagnetic wave propagation. 
The proposed design achieves considerably high levels of isolation between antenna elements, without essentially affecting the simplicity and planarity of the MIMO antenna.", "title": "" }, { "docid": "c902e2669f233a48d9048b9c7abd1401", "text": "Unmanned Aerial Vehicles (UAV)-based remote sensing offers great possibilities to acquire in a fast and easy way field data for precision agriculture applications. This field of study is rapidly increasing due to the benefits and advantages for farm resources management, particularly for studying crop health. This paper reports some experiences related to the analysis of cultivations (vineyards and tomatoes) with Tetracam multispectral data. The Tetracam camera was mounted on a multi-rotor hexacopter. The multispectral data were processed with a photogrammetric pipeline to create triband orthoimages of the surveyed sites. Those orthoimages were employed to extract some Vegetation Indices (VI) such as the Normalized Difference Vegetation Index (NDVI), the Green Normalized Difference Vegetation Index (GNDVI), and the Soil Adjusted Vegetation Index (SAVI), examining the vegetation vigor for each crop. The paper demonstrates the great potential of high-resolution UAV data and photogrammetric techniques applied in the agriculture framework to collect multispectral images and evaluate different VI, suggesting that these instruments represent a fast, reliable, and cost-effective resource in crop assessment for precision farming applications.", "title": "" }, { "docid": "2cf6b0b92b84da58c612e3767c6a24d9", "text": "OBJECTIVE\nTo determine the effectiveness of early physiotherapy in reducing the risk of secondary lymphoedema after surgery for breast cancer.\n\n\nDESIGN\nRandomised, single blinded, clinical trial.\n\n\nSETTING\nUniversity hospital in Alcalá de Henares, Madrid, Spain.\n\n\nPARTICIPANTS\n120 women who had breast surgery involving dissection of axillary lymph nodes between May 2005 and June 2007.\n\n\nINTERVENTION\nThe early physiotherapy group was treated by a physiotherapist with a physiotherapy programme including manual lymph drainage, massage of scar tissue, and progressive active and action assisted shoulder exercises. This group also received an educational strategy. The control group received the educational strategy only.\n\n\nMAIN OUTCOME MEASURE\nIncidence of clinically significant secondary lymphoedema (>2 cm increase in arm circumference measured at two adjacent points compared with the non-affected arm).\n\n\nRESULTS\n116 women completed the one year follow-up. Of these, 18 developed secondary lymphoedema (16%): 14 in the control group (25%) and four in the intervention group (7%). The difference was significant (P=0.01); risk ratio 0.28 (95% confidence interval 0.10 to 0.79).
A survival analysis showed a significant difference, with secondary lymphoedema being diagnosed four times earlier in the control group than in the intervention group (intervention/control, hazard ratio 0.26, 95% confidence interval 0.09 to 0.79).\n\n\nCONCLUSION\nEarly physiotherapy could be an effective intervention in the prevention of secondary lymphoedema in women for at least one year after surgery for breast cancer involving dissection of axillary lymph nodes.\n\n\nTRIAL REGISTRATION\nCurrent controlled trials ISRCTN95870846.", "title": "" }, { "docid": "c2daec5b85a4e8eea614d855c6549ef0", "text": "An audio-visual corpus has been collected to support the use of common material in speech perception and automatic speech recognition studies. The corpus consists of high-quality audio and video recordings of 1000 sentences spoken by each of 34 talkers. Sentences are simple, syntactically identical phrases such as \"place green at B 4 now\". Intelligibility tests using the audio signals suggest that the material is easily identifiable in quiet and low levels of stationary noise. The annotated corpus is available on the web for research use.", "title": "" }, { "docid": "fcc36e4c32953dd9deedd5fd11ca8a1a", "text": "Effective human-robot cooperation requires robotic devices that understand human goals and intentions. We frame the problem of intent recognition as one of tracking and predicting human actions within the context of plan task sequences. A hybrid mode estimation approach, which estimates both discrete operating modes and continuous state, is used to accomplish this tracking based on possibly noisy sensor input. The operating modes correspond to plan tasks, hence, the ability to estimate and predict these provides a prediction of human actions and associated needs in the plan context. The discrete and continuous estimates interact in that the discrete mode selects continous dynamic models used in the continuous estimation, and the continuous state is used to evaluate guard conditions for mode transitions. Two applications: active prosthetic devices, and cooperative assembly, are described.", "title": "" }, { "docid": "b250df76fdd27728af89b0c02aef5a68", "text": "In this experiment, seven software teams developed versions of the same small-size (2000-4000 source instruction) application software product. Four teams used the Specifying approach. Three teams used the Prototyping approach.\n The main results of the experiment were:\n Prototyping yielded products with roughly equivalent performance, but with about 40% less code and 45% less effort.\n The prototyped products rated somewhat lower on functionality and robustness, but higher on ease of use and ease of learning.\n Specifying produced more coherent designs and software that was easier to integrate.\n The paper presents the experimental data supporting these and a number of additional conclusions.", "title": "" }, { "docid": "a691214a7ac8a1a7b4ad6fe833afd572", "text": "Within the field of computer vision, change detection algorithms aim at automatically detecting significant changes occurring in a scene by analyzing the sequence of frames in a video stream. In this paper we investigate how state-of-the-art change detection algorithms can be combined and used to create a more robust algorithm leveraging their individual peculiarities. 
We exploited genetic programming (GP) to automatically select the best algorithms, combine them in different ways, and perform the most suitable post-processing operations on the outputs of the algorithms. In particular, algorithms’ combination and post-processing operations are achieved with unary, binary and ${n}$ -ary functions embedded into the GP framework. Using different experimental settings for combining existing algorithms we obtained different GP solutions that we termed In Unity There Is Strength. These solutions are then compared against state-of-the-art change detection algorithms on the video sequences and ground truth annotations of the ChangeDetection.net 2014 challenge. Results demonstrate that using GP, our solutions are able to outperform all the considered single state-of-the-art change detection algorithms, as well as other combination strategies. The performance of our algorithm are significantly different from those of the other state-of-the-art algorithms. This fact is supported by the statistical significance analysis conducted with the Friedman test and Wilcoxon rank sum post-hoc tests.", "title": "" }, { "docid": "1cbbc5af1327338283ca75e0bed7d53c", "text": "Microscopic examination revealed polymorphic cells with abundant cytoplasm and large nuclei within the acanthotic epidermis (Figure 3). There were aggregated melanin granules in the epidermis, as well as a subepidermal lymphocytic infiltrate. The atypical cells were positive for CK7 (Figure 4). A few scattered cells were positive with the Melan-A stain (Figure 5). Pigmented lesion of the left nipple in a 49-year-old woman Case for Diagnosis", "title": "" } ]
scidocsrr
6bd5f4367e4b61199da4da47b337a1ae
Dual Band-Reject UWB Antenna With Sharp Rejection of Narrow and Closely-Spaced Bands
[ { "docid": "99d5eab7b0dfcb59f7111614714ddf95", "text": "To prevent interference problems due to existing nearby communication systems within an ultrawideband (UWB) operating frequency, the significance of an efficient band-notched design is increased. Here, the band-notches are realized by adding independent controllable strips in terms of the notch frequency and the width of the band-notches to the fork shape of the UWB antenna. The size of the flat type band-notched UWB antenna is etched on 24 times 36 mm2 substrate. Two novel antennas are presented. One antenna is designed for single band-notch with a separated strip to cover the 5.15-5.825 GHz band. The second antenna is designed for dual band-notches using two separated strips to cover the 5.15-5.35 GHz band and 5.725-5.825 GHz band. The simulation and measurement show that the proposed antenna achieves a wide bandwidth from 3 to 12 GHz with the dual band-notches successfully.", "title": "" } ]
[ { "docid": "ff572b8e20b6f6792f8598b80660238f", "text": "In this study, Cu pillar bump is firstly built on FCCSP with 65 nm low k chip. 7 DOE cells are designed to evaluate the effects of Cu pillar height, Cu pillar diameter, PI opening size and PI material on package reliability performance. No obvious failure is found after package assembly and long-term reliability test. The packages are still in good shape even though the reliability test is expanded to 3x test durations With the experiences of Cu pillar bump on 65 nm low k chip, Cu pillar bump is again built on FCBGA package with 45 nm ELK chip. White bump defect is found after chip bond via CSAM inspection, failure analysis shows that the white bump phenomenon is due to crack occurs inside ELK layer. A local heating bond tool (thermal compression bond) is used to improve ELK crack, test results illustrate ELK crack still exists, however the failure rate reduces from original 30%~50% to 5%~20%. Simulation analysis is conducted to study the effect of PI opening size and UBM size on stress concentration at ELK layer. Small PI opening size can reduce stress distribution at ELK layer. On the contrary, relatively large PI opening size and large UBM size also show positive effect on ELK crack. Assembly process and reliability test are conducted again to validate simulation results, experiment data is consistent with simulation result.", "title": "" }, { "docid": "35da724255bbceb859d01ccaa0dec3b1", "text": "A linear differential equation with rational function coefficients has a Bessel type solution when it is solvable in terms of <i>B</i><sub><i>v</i></sub>(<i>f</i>), <i>B</i><sub><i>v</i>+1</sub>(<i>f</i>). For second order equations, with rational function coefficients, <i>f</i> must be a rational function or the square root of a rational function. An algorithm was given by Debeerst, van Hoeij, and Koepf, that can compute Bessel type solutions if and only if <i>f</i> is a rational function. In this paper we extend this work to the square root case, resulting in a complete algorithm to find all Bessel type solutions.", "title": "" }, { "docid": "d54b25dc88c99a02a66ed056bff78444", "text": "Objectives: This study was undertaken to observe the topographical features of enamel surface deproteinized with 5.25% sodium hypochlorite (NaOCl) after phosphoric acid (H3PO4) etching using Scanning Electron Microscope (SEM) analysis and also the effect of enamel deproteinization after acid etching on the shear bond strength (SBS) of AdperTM Single Bond 2 adhesive and FiltekTM Z350 XT composite resin. Study design: SEM Observation: 10 enamel blocks of 1 mm2 from 10 human sound permanent molar teeth were obtained and treated with 37% H3PO4 gel for 15 seconds followed by treatment with 5.25% NaOCl for 60 seconds. All the 10 samples were subjected to SEM analysis and 5 microphotographs of each sample were obtained at 500X magnification and evaluated for the occurrence of Type I – II etching pattern in percentage (%) using Auto – CAD 2007 software. SBS Evaluation: A 5×4 mm window of the enamel surface was etched with 37% H3PO4 gel for 15 seconds, washed with distilled water and air dried. The etched enamel surface was then treated with 5.25% NaOCl for 60 seconds, washed with distilled water and air dried. 
A single coat of AdperTM Single Bond 2 adhesive was applied and photo polymerized for 20 seconds and FiltekTM Z350 XT composite resin block of length 5mm, width 4 mm and height 5 mm respectively was built and photo polymerized in increments for 20 seconds each. The shear bond strength of all the 20 test samples (permanent molar teeth) were measured (in MPa) on Instron Mechanical Testing Machine. Results: The mean value of Type I – II etching pattern of all the test samples was observed to be 40.68 + 26.38% and the mean SBS value for all the test samples was observed to be 17.35 + 7.25 MPa. Conclusions: No significant enhancive effect of enamel deproteinization after acid etching with respect to the occurrence of Type I-II etching patterns as well as on the SBS of adhesive resin and composite resin complex to the enamel surface was observed in this study. The use of 37% phosphoric acid alone for 15 seconds still remains the best method for pretreatment of the enamel.", "title": "" }, { "docid": "90813d00050fdb1b8ce1a9dffe858d46", "text": "Background: Diabetes mellitus is associated with biochemical and pathological alterations in the liver. The aim of this study was to investigate the effects of apple cider vinegar (ACV) on serum biochemical markers and histopathological changes in the liver of diabetic rats for 30 days. Effects were evaluated using streptozotocin (STZ)-induced diabetic rats as an experimental model. Materials and methods: Diabetes mellitus was induced by a single dose of STZ (65 mg/kg) given intraperitoneally. Thirty wistar rats were divided into three groups: control group, STZ-treated group and STZ plus ACV treated group (2 ml/kg BW). Animals were sacrificed 30 days post treatment. Results: Biochemical results indicated that, ACV caused a significant decrease in glucose, TC, LDL-c and a significant increase in HDL-c. Histopathological examination of the liver sections of diabetic rats showed fatty changes in the cytoplasm of the hepatocytes in the form of accumulation of lipid droplets, lymphocytic infiltration. Electron microscopic studies revealed aggregations of polymorphic mitochondria with apparent loss of their cristae and condensed matrices. Besides, the rough endoplasmic reticulum was proliferating and fragmented into smaller stacks. The cytoplasm of the hepatocytes exhibited vacuolations and displayed a large number of lipid droplets of different sizes. On the other hand, the liver sections of diabetic rats treated with ACV showed minimal toxic effects due to streptozotocin. These ultrastructural results revealed that treatment of diabetic rats with ACV led to apparent recovery of the injured hepatocytes.
In prophetic medicine, Prophet Muhammad peace is upon him strongly recommended eating vinegar in the Prophetic Hadeeth: \"vinegar is the best edible\". Conclusion: This study showed that ACV, in early stages of diabetes inductioncan decrease the destructive progress of diabetes and cause hepatoprotection against the metabolic damages resulting from streptozotocininduced diabetes mellitus.", "title": "" }, { "docid": "8f25b3b36031653311eee40c6c093768", "text": "This paper provides a survey of the applications of computers in music teaching. The systems are classified by musical activity rather than by technical approach. The instructional strategies involved and the type of knowledge represented are highlighted and areas for future research are identified.", "title": "" }, { "docid": "a4b8f00bc8c37f56f85ed61cae226ef3", "text": "Academic motivation is discussed in terms of self-efficacy, an individual's judgments of his or her capabilities to perform given actions. After presenting an overview of self-efficacy theory, I contrast self-efficacy with related constructs (perceived control, outcome expectations, perceived value of outcomes, attributions, and selfconcept) and discuss some efficacy research relevant to academic motivation. Studies of the effects of person variables (goal setting and information processing) and situation variables (models, attributional feedback, and rewards) on self-efficacy and motivation are reviewed. In conjunction with this discussion, I mention substantive issues that need to be addressed in the self-efficacy research and summarize evidence on the utility of self-efficacy for predicting motivational outcomes. Areas for future research are suggested. Article: The concept of personal expectancy has a rich history in psychological theory on human motivation (Atkinson, 1957; Rotter, 1966; Weiner, 1979). Research conducted within various theoretical traditions supports the idea that expectancy can influence behavioral instigation, direction, effort, and persistence (Bandura, 1986; Locke & Latham, 1990; Weiner, 1985). In this article, I discuss academic motivation in terms of one type of personal expectancy: self-efficacy, defined as \"People's judgments of their capabilities to organize and execute courses of action required to attain designated types of performances\" (Bandura, 1986, p. 391). Since Bandura's (1977) seminal article on selfefficacy, much research has clarified and extended the role of self-efficacy as a mechanism underlying behavioral change, maintenance, and generalization. For example, there is evidence that self-efficacy predicts such diverse outcomes as academic achievements, social skills, smoking cessation, pain tolerance, athletic performances, career choices, assertiveness, coping with feared events, recovery from heart attack, and sales performance (Bandura, 1986). After presenting an overview of self-efficacy theory and comparison of self-efficacy with related constructs, I discuss some self-efficacy research relevant to academic motivation, pointing out substantive issues that need to be addressed. I conclude with recommendations for future research. SELF-EFFICACY THEORY Antecedents and Consequences Bandura (1977) hypothesized that self-efficacy affects an individual's choice of activities, effort, and persistence. People who have a low sense of efficacy for accomplishing a task may avoid it; those who believe they are capable should participate readily. 
Individuals who feel efficacious are hypothesized to work harder and persist longer when they encounter difficulties than those who doubt their capabilities. Self-efficacy theory postulates that people acquire information to appraise efficacy from their performance accomplishments, vicarious (observational) experiences, forms of persuasion, and physiological indexes. An individual's own performances offer the most reliable guides for assessing efficacy. Successes raise efficacy and failure lowers it, but once a strong sense of efficacy is developed, a failure may not have much impact (Bandura, 1986). An individual also acquires capability information from knowledge of others. Similar others offer the best basis for comparison (Schunk, 1989b). Observing similar peers perform a task conveys to observers that they too are capable of accomplishing it. Information acquired vicariously typically has a weaker effect on self-efficacy than performance-based information; a vicarious increase in efficacy can be negated by subsequent failures. Students often receive persuasory information that they possess the capabilities to perform a task (e.g., \"You can do this\"). Positive persuasory feedback enhances self-efficacy, but this increase will be temporary if subsequent efforts turn out poorly. Students also derive efficacy information from physiological indexes (e.g., heart rate and sweating). Bodily symptoms signaling anxiety might be interpreted to indicate a lack of skills. Information acquired from these sources does not automatically influence efficacy; rather, it is cognitively appraised (Bandura, 1986). Efficacy appraisal is an inferential process in which persons weigh and combine the contributions of such personal and situational factors as their perceived ability, the difficulty of the task, amount of effort expended, amount of external assistance received, number and pattern of successes and failures, their perceived similarity to models, and persuader credibility (Schunk, 1989b). Self-efficacy is not the only influence on behavior; it is not necessarily the most important. Behavior is a function of many variables. In achievement settings some other important variables are skills, outcome expectations, and the perceived value of outcomes (Schunk, 1989b). High self-efficacy will not produce competent performances when requisite skills are lacking. Outcome expectations, or beliefs concerning the probable outcomes of actions, are important because individuals are not motivated to act in ways they believe will result in negative outcomes. Perceived value of outcomes refers to how much people desire certain outcomes relative to others. Given adequate skills, positive outcome expectations, and personally valued outcomes, self-efficacy is hypothesized to influence the choice and direction of much human behavior (Bandura, 1989b). Schunk (1989b) discussed how self-efficacy might operate during academic learning. At the start of an activity, students differ in their beliefs about their capabilities to acquire knowledge, perform skills, master the material, and so forth. Initial self-efficacy varies as a function of aptitude (e.g., abilities and attitudes) and prior experience. Such personal factors as goal setting and information processing, along with situational factors (e.g., rewards and teacher feedback), affect students while they are working. From these factors students derive cues signaling how well they are learning, which they use to assess efficacy for further learning. 
Motivation is enhanced when students perceive they are making progress in learning. In turn, as students work on tasks and become more skillful, they maintain a sense of self-efficacy for performing well.", "title": "" }, { "docid": "20c6b7417a31aceb39bcf1b1fa3fce4b", "text": "In the process of dealing with the cutting calculation of Multi-axis CNC Simulation, the traditional Voxel Model not only will cost large computation time when judging whether the cutting happens or not, but also the data points may occupy greater storage space. So it cannot satisfy the requirement of real-time emulation, In the construction method of Compressed Voxel Model, it can satisfy the need of Multi-axis CNC Simulation, and storage space is relatively small. Also the model reconstruction speed is faster, but the Boolean computation in the cutting judgment is very complex, so it affects the real-time of CNC Simulation indirectly. Aimed at the shortcomings of these methods, we propose an improved solid modeling technique based on the Voxel model, which can meet the demand of real-time in cutting computation and Graphic display speed.", "title": "" }, { "docid": "058a128a15c7d0e343adb3ada80e18d3", "text": "PURPOSE OF REVIEW\nOdontogenic causes of sinusitis are frequently missed; clinicians often overlook odontogenic disease whenever examining individuals with symptomatic rhinosinusitis. Conventional treatments for chronic rhinosinusitis (CRS) will often fail in odontogenic sinusitis. There have been several recent developments in the understanding of mechanisms, diagnosis, and treatment of odontogenic sinusitis, and clinicians should be aware of these advances to best treat this patient population.\n\n\nRECENT FINDINGS\nThe majority of odontogenic disease is caused by periodontitis and iatrogenesis. Notably, dental pain or dental hypersensitivity is very commonly absent in odontogenic sinusitis, and symptoms are very similar to those seen in CRS overall. Unilaterality of nasal obstruction and foul nasal drainage are most suggestive of odontogenic sinusitis, but computed tomography is the gold standard for diagnosis. Conventional panoramic radiographs are very poorly suited to rule out odontogenic sinusitis, and cannot be relied on to identify disease. There does not appear to be an optimal sequence of treatment for odontogenic sinusitis; the dental source should be addressed and ESS is frequently also necessary to alleviate symptoms.\n\n\nSUMMARY\nOdontogenic sinusitis has distinct pathophysiology, diagnostic considerations, microbiology, and treatment strategies whenever compared with chronic rhinosinusitis. Clinicians who can accurately identify odontogenic sources can increase efficacy of medical and surgical treatments and improve patient outcomes.", "title": "" }, { "docid": "2bd2bd3b2604d29c11017413c109c47c", "text": "Supervised semantic role labeling (SRL) systems are generally claimed to have accuracies in the range of 80% and higher (Erk and Padó, 2006). These numbers, though, are the result of highly-restricted evaluations, i.e., typically evaluating on hand-picked lemmas for which training data is available. In this paper we consider performance of such systems when we evaluate at the document level rather than on the lemma level. While it is wellknown that coverage gaps exist in the resources available for training supervised SRL systems, what we have been lacking until now is an understanding of the precise nature of this coverage problem and its impact on the performance of SRL systems. 
We present a typology of five different types of coverage gaps in FrameNet. We then analyze the impact of the coverage gaps on performance of a supervised semantic role labeling system on full texts, showing an average oracle upper bound of 46.8%.", "title": "" }, { "docid": "dc3417d01a998ee476aeafc0e9d11c74", "text": "We present an overview of techniques for quantizing convolutional neural networks for inference with integer weights and activations. 1. Per-channel quantization of weights and per-layer quantization of activations to 8-bits of precision post-training produces classification accuracies within 2% of floating point networks for a wide variety of CNN architectures (section 3.1). 2. Model sizes can be reduced by a factor of 4 by quantizing weights to 8bits, even when 8-bit arithmetic is not supported. This can be achieved with simple, post training quantization of weights (section 3.1). 3. We benchmark latencies of quantized networks on CPUs and DSPs and observe a speedup of 2x-3x for quantized implementations compared to floating point on CPUs. Speedups of up to 10x are observed on specialized processors with fixed point SIMD capabilities, like the Qualcomm QDSPs with HVX (section 6). 4. Quantization-aware training can provide further improvements, reducing the gap to floating point to 1% at 8-bit precision. Quantization-aware training also allows for reducing the precision of weights to four bits with accuracy losses ranging from 2% to 10%, with higher accuracy drop for smaller networks (section 3.2). 5. We introduce tools in TensorFlow and TensorFlowLite for quantizing convolutional networks (Section 3). 6. We review best practices for quantization-aware training to obtain high accuracy with quantized weights and activations (section 4). 7. We recommend that per-channel quantization of weights and per-layer quantization of activations be the preferred quantization scheme for hardware acceleration and kernel optimization. We also propose that future processors and hardware accelerators for optimized inference support precisions of 4, 8 and 16 bits (section 7).", "title": "" }, { "docid": "245de72c0f333f4814990926e08c13e9", "text": "Large deep neural networks are powerful, but exhibit undesirable behaviors such as memorization and sensitivity to adversarial examples. In this work, we propose mixup, a simple learning principle to alleviate these issues. In essence, mixup trains a neural network on convex combinations of pairs of examples and their labels. By doing so, mixup regularizes the neural network to favor simple linear behavior in-between training examples. Our experiments on the ImageNet-2012, CIFAR-10, CIFAR-100, Google commands and UCI datasets show that mixup improves the generalization of state-of-the-art neural network architectures. We also find that mixup reduces the memorization of corrupt labels, increases the robustness to adversarial examples, and stabilizes the training of generative adversarial networks.", "title": "" }, { "docid": "39fc7b710a6d8b0fdbc568b48221de5d", "text": "The framework of cognitive wireless networks is expected to endow the wireless devices with the cognition-intelligence ability with which they can efficiently learn and respond to the dynamic wireless environment. In many practical scenarios, the complexity of network dynamics makes it difficult to determine the network evolution model in advance. 
Thus, the wireless decision-making entities may face a black-box network control problem and the model-based network management mechanisms will be no longer applicable. In contrast, model-free learning enables the decision-making entities to adapt their behaviors based on the reinforcement from their interaction with the environment and (implicitly) build their understanding of the system from scratch through trial-and-error. Such characteristics are highly in accordance with the requirement of cognition-based intelligence for devices in cognitive wireless networks. Therefore, model-free learning has been considered as one key implementation approach to adaptive, self-organized network control in cognitive wireless networks. In this paper, we provide a comprehensive survey on the applications of the state-of-the-art model-free learning mechanisms in cognitive wireless networks. According to the system models on which those applications are based, a systematic overview of the learning algorithms in the domains of single-agent system, multiagent systems, and multiplayer games is provided. The applications of model-free learning to various problems in cognitive wireless networks are discussed with the focus on how the learning mechanisms help to provide the solutions to these problems and improve the network performance over the model-based, non-adaptive methods. Finally, a broad spectrum of challenges and open issues is discussed to offer a guideline for the future research directions.", "title": "" }, { "docid": "96c1da4e4b52014e4a9c5df098938c98", "text": "Deep learning models have lately shown great performance in various fields such as computer vision, speech recognition, speech translation, and natural language processing. However, alongside their state-of-the-art performance, it is still generally unclear what is the source of their generalization ability. Thus, an important question is what makes deep neural networks able to generalize well from the training set to new data. In this article, we provide an overview of the existing theory and bounds for the characterization of the generalization error of deep neural networks, combining both classical and more recent theoretical and empirical results.", "title": "" }, { "docid": "54fc5bc85ef8022d099fff14ab1b7ce0", "text": "Automatic inspection of Mura defects is a challenging task in thin-film transistor liquid crystal display (TFT-LCD) defect detection, which is critical for LCD manufacturers to guarantee high standard quality control. In this paper, we propose a set of automatic procedures to detect mura defects by using image processing and computer vision techniques. Singular Value Decomposition (SVD) and Discrete Cosine Transformation(DCT) techniques are employed to conduct image reconstruction, based on which we are able to obtain the differential image of LCD Cells. In order to detect different types of mura defects accurately, we then design a method that employs different detection modules adaptively, which can overcome the disadvantage of simply using a single threshold value. Finally, we provide the experimental results to validate the effectiveness of the proposed method in mura detection.", "title": "" }, { "docid": "42d31b6b66192552d0f0aa1ce9a36e21", "text": "OBJECTIVE\nAlthough stress is often presumed to cause sleep disturbances, little research has documented the role of stressful life events in primary insomnia. 
The present study examined the relationship of stress and coping skills, and the potential mediating role of presleep arousal, to sleep patterns in good sleepers and insomnia sufferers.\n\n\nMETHODS\nThe sample was composed of 67 participants (38 women, 29 men; mean age, 39.6 years), 40 individuals with insomnia and 27 good sleepers. Subjects completed prospective, daily measures of stressful events, presleep arousal, and sleep for 21 consecutive days. In addition, they completed several retrospective and global measures of depression, anxiety, stressful life events, and coping skills.\n\n\nRESULTS\nThe results showed that poor and good sleepers reported equivalent numbers of minor stressful life events. However, insomniacs rated both the impact of daily minor stressors and the intensity of major negative life events higher than did good sleepers. In addition, insomniacs perceived their lives as more stressful, relied more on emotion-oriented coping strategies, and reported greater presleep arousal than good sleepers. Prospective daily data showed significant relationships between daytime stress and nighttime sleep, but presleep arousal and coping skills played an important mediating role.\n\n\nCONCLUSIONS\nThe findings suggest that the appraisal of stressors and the perceived lack of control over stressful events, rather than the number of stressful events per se, enhance the vulnerability to insomnia. Arousal and coping skills play an important mediating role between stress and sleep. The main implication of these results is that insomnia treatments should incorporate clinical methods designed to teach effective stress appraisal and coping skills.", "title": "" }, { "docid": "ea041a1df42906b0d5a3644ae8ba933b", "text": "In recent years, program verifiers and interactive theorem provers have become more powerful and more suitable for verifying large programs or proofs. This has demonstrated the need for improving the user experience of these tools to increase productivity and to make them more accessible to nonexperts. This paper presents an integrated development environment for Dafny—a programming language, verifier, and proof assistant—that addresses issues present in most state-of-the-art verifiers: low responsiveness and lack of support for understanding non-obvious verification failures. The paper demonstrates several new features that move the state-of-the-art closer towards a verification environment that can provide verification feedback as the user types and can present more helpful information about the program or failed verifications in a demand-driven and unobtrusive way.", "title": "" }, { "docid": "8704a4033132a1d26cf2da726a60045e", "text": "In practical classification, there is often a mix of learnable and unlearnable classes and only a classifier above a minimum performance threshold can be deployed. This problem is exacerbated if the training set is created by active learning. The bias of actively learned training sets makes it hard to determine whether a class has been learned. We give evidence that there is no general and efficient method for reducing the bias and correctly identifying classes that have been learned. However, we characterize a number of scenarios where active learning can succeed despite these difficulties.", "title": "" }, { "docid": "5e2cfcfb49286b50bcfc6eb1648afc99", "text": "Face analysis is a rapidly developing research area and facial landmark detection is one of the pre-processing steps. 
In recent years, many algorithms and comprehensive survey/challenge papers have been published on facial landmark detection. In this work, we analysed six survey/challenge papers and observed that among open source systems deep learning (TCDCN, DCR) and regression based (CFSS) methods show superior performance.", "title": "" }, { "docid": "906659aa61bbdb5e904a1749552c4741", "text": "The Rete–Match algorithm is a matching algorithm used to develop production systems. Although this algorithm is the fastest known algorithm, for many patterns and many objects matching, it still suffers from considerable amount of time needed due to the recursive nature of the problem. In this paper, a parallel version of the Rete–Match algorithm for distributed memory architecture is presented. Also, a theoretical analysis of its correctness and performance is discussed.", "title": "" }, { "docid": "28cf177349095e7db4cdaf6c9c4a6cb1", "text": "Neural Architecture Search aims at automatically finding neural architectures that are competitive with architectures designed by human experts. While recent approaches have achieved state-of-the-art predictive performance for image recognition, they are problematic under resource constraints for two reasons: (1) the neural architectures found are solely optimized for high predictive performance, without penalizing excessive resource consumption; (2) most architecture search methods require vast computational resources. We address the first shortcoming by proposing LEMONADE, an evolutionary algorithm for multi-objective architecture search that allows approximating the entire Pareto-front of architectures under multiple objectives, such as predictive performance and number of parameters, in a single run of the method. We address the second shortcoming by proposing a Lamarckian inheritance mechanism for LEMONADE which generates children networks that are warmstarted with the predictive performance of their trained parents. This is accomplished by using (approximate) network morphism operators for generating children. The combination of these two contributions allows finding models that are on par or even outperform both hand-crafted as well as automatically-designed networks.", "title": "" } ]
scidocsrr
dd6d377b0614dce9021713a3d9572e68
Altruism and selfishness.
[ { "docid": "7beb0fa9fa3519d291aa3d224bfc1b74", "text": "In comparisons among Chicago neighbourhoods, homicide rates in 1988-93 varied more than 100-fold, while male life expectancy at birth ranged from 54 to 77 years, even with effects of homicide mortality removed. This \"cause deleted\" life expectancy was highly correlated with homicide rates; a measure of economic inequality added significant additional prediction, whereas median household income did not. Deaths from internal causes (diseases) show similar age patterns, despite different absolute levels, in the best and worst neighbourhoods, whereas deaths from external causes (homicide, accident, suicide) do not. As life expectancy declines across neighbourhoods, women reproduce earlier; by age 30, however, neighbourhood no longer affects age specific fertility. These results support the hypothesis that life expectancy itself may be a psychologically salient determinant of risk taking and the timing of life transitions.", "title": "" } ]
[ { "docid": "df1e281417844a0641c3b89659e18102", "text": "In this paper we present a novel method to increase the spatial resolution of depth images. We combine a deep fully convolutional network with a non-local variational method in a deep primal-dual network. The joint network computes a noise-free, highresolution estimate from a noisy, low-resolution input depth map. Additionally, a highresolution intensity image is used to guide the reconstruction in the network. By unrolling the optimization steps of a first-order primal-dual algorithm and formulating it as a network, we can train our joint method end-to-end. This not only enables us to learn the weights of the fully convolutional network, but also to optimize all parameters of the variational method and its optimization procedure. The training of such a deep network requires a large dataset for supervision. Therefore, we generate high-quality depth maps and corresponding color images with a physically based renderer. In an exhaustive evaluation we show that our method outperforms the state-of-the-art on multiple benchmarks.", "title": "" }, { "docid": "4a6d231ce704e4acf9320ac3bd5ade14", "text": "Despite recent advances in discourse parsing and causality detection, the automatic recognition of argumentation structure of authentic texts is still a very challenging task. To approach this problem, we collected a small corpus of German microtexts in a text generation experiment, resulting in texts that are authentic but of controlled linguistic and rhetoric complexity. We show that trained annotators can determine the argumentation structure on these microtexts reliably. We experiment with different machine learning approaches for automatic argumentation structure recognition on various levels of granularity of the scheme. Given the complex nature of such a discourse understanding tasks, the first results presented here are promising, but invite for further investigation.", "title": "" }, { "docid": "3eb022b3ec1517bc54670a68c8a14106", "text": "Waste as a management issue has been evident for over four millennia. Disposal of waste to the biosphere has given way to thinking about, and trying to implement, an integrated waste management approach. In 1996 the United Nations Environmental Programme (UNEP) defined 'integrated waste management' as 'a framework of reference for designing and implementing new waste management systems and for analysing and optimising existing systems'. In this paper the concept of integrated waste management as defined by UNEP is considered, along with the parameters that constitute integrated waste management. The examples used are put into four categories: (1) integration within a single medium (solid, aqueous or atmospheric wastes) by considering alternative waste management options, (2) multi-media integration (solid, aqueous, atmospheric and energy wastes) by considering waste management options that can be applied to more than one medium, (3) tools (regulatory, economic, voluntary and informational) and (4) agents (governmental bodies (local and national), businesses and the community). This evaluation allows guidelines for enhancing success: (1) as experience increases, it is possible to deal with a greater complexity; and (2) integrated waste management requires a holistic approach, which encompasses a life cycle understanding of products and services. This in turn requires different specialisms to be involved in the instigation and analysis of an integrated waste management system. 
Taken together these advance the path to sustainability.", "title": "" }, { "docid": "e808606994c3fd8eea1b78e8a3e55b8c", "text": "We describe a Japanese-English patent parallel corpus created from the Japanese and US patent data provided for the NTCIR-6 patent retrieval task. The corpus contains about 2 million sentence pairs that were aligned automatically. This is the largest Japanese-English parallel corpus, which will be available to the public after the 7th NTCIR workshop meeting. We estimated that about 97% of the sentence pairs were correct alignments and about 90% of the alignments were adequate translations whose English sentences reflected almost perfectly the contents of the corresponding Japanese sentences.", "title": "" }, { "docid": "dc4a2fa822a685997c83e6fd49b30f56", "text": "Complex event processing (CEP) has become increasingly important for tracking and monitoring applications ranging from health care, supply chain management to surveillance. These monitoring applications submit complex event queries to track sequences of events that match a given pattern. As these systems mature the need for increasingly complex nested sequence queries arises, while the state-of-the-art CEP systems mostly focus on the execution of flat sequence queries only. In this paper, we now introduce an iterative execution strategy for nested CEP queries composed of sequence, negation, AND and OR operators. Lastly we have introduced the promising direction of applying selective caching of intermediate results to optimize the execution. Our experimental study using real-world stock trades evaluates the performance of our proposed iterative execution strategy for different query types.", "title": "" }, { "docid": "25786c5516b559fc4a566e72485fdcc6", "text": "We propose an algorithm to improve the quality of depth-maps used for Multi-View Stereo (MVS). Many existing MVS techniques make use of a two stage approach which estimates depth-maps from neighbouring images and then merges them to extract a final surface. Often the depth-maps used for the merging stage will contain outliers due to errors in the matching process. Traditional systems exploit redundancy in the image sequence (the surface is seen in many views), in order to make the final surface estimate robust to these outliers. In the case of sparse data sets there is often insufficient redundancy and thus performance degrades as the number of images decreases. In order to improve performance in these circumstances it is necessary to remove the outliers from the depth-maps. We identify the two main sources of outliers in a top performing algorithm: (1) spurious matches due to repeated texture and (2) matching failure due to occlusion, distortion and lack of texture. We propose two contributions to tackle these failure modes. Firstly, we store multiple depth hypotheses and use a spatial consistently constraint to extract the true depth. Secondly, we allow the algorithm to return an unknown state when the a true depth estimate cannot be found. By combining these in a discrete label MRF optimisation we are able to obtain high accuracy depthmaps with low numbers of outliers. 
We evaluate our algorithm in a multi-view stereo framework and find it to confer state-of-the-art performance with the leading techniques, in particular on the standard evaluation sparse data sets.", "title": "" }, { "docid": "9e1e42d27521eb20b6fef10087dd2d9a", "text": "This paper identifies the need for developing new ways to study curiosity in the context of today’s pervasive technologies and unprecedented information access. Curiosity is defined in this paper in a way which incorporates the concomitant constructs of interest and engagement. A theoretical model for curiosity, interest and engagement in new media technology-pervasive learning environments is advanced, taking into consideration personal, situational and contextual factors as influencing variables. While the path associated with curiosity, interest, and engagement during learning and research has remained essentially the same, how individuals tackle research and information-seeking tasks and factors which sustain such efforts have changed. Learning modalities for promoting this theoretical model are discussed leading to a series of recommendations for future research. This article offers a multi-lens perspective on curiosity and suggests a multi-method research agenda for validating such a perspective.", "title": "" }, { "docid": "5f45659c16ca98f991a31d62fd70cdab", "text": "Iris recognition has legendary resistance to false matches, and the tools of information theory can help to explain why. The concept of entropy is fundamental to understanding biometric collision avoidance. This paper analyses the bit sequences of IrisCodes computed both from real iris images and from synthetic white noise iris images, whose pixel values are random and uncorrelated. The capacity of the IrisCode as a channel is found to be 0.566 bits per bit encoded, of which 0.469 bits of entropy per bit is encoded from natural iris images. The difference between these two rates reflects the existence of anatomical correlations within a natural iris, and the remaining gap from one full bit of entropy per bit encoded reflects the correlations in both phase and amplitude introduced by the Gabor wavelets underlying the IrisCode. A simple two-state hidden Markov model is shown to emulate exactly the statistics of bit sequences generated both from natural and white noise iris images, including their imposter distributions, and may be useful for generating large synthetic IrisCode databases.", "title": "" }, { "docid": "a90c56a22559807463b46d1c7ab36cb3", "text": "We have studied manual motor function in a man deafferented by a severe peripheral sensory neuropathy. Motor power was almost unaffected. Our patient could produce a very wide range of preprogrammed finger movements with remarkable accuracy, involving complex muscle synergies of the hand and forearm muscles. He could perform individual finger movements and outline figures in the air with his eyes closed. He had normal pre- and postmovement EEG potentials, and showed the normal bi/triphasic pattern of muscle activation in agonist and antagonist muscles during fast limb movements. He could also move his thumb accurately through three different distances at three different speeds, and could produce three different levels of force at his thumb pad when required. Although he could not judge the weights of objects placed in his hands without vision, he was able to match forces applied by the experimenter to the pad of each thumb if he was given a minimal indication of thumb movement. 
Despite his success with these laboratory tasks, his hands were relatively useless to him in daily life. He was unable to grasp a pen and write, to fasten his shirt buttons or to hold a cup in one hand. Part of his difficulty lay in the absence of any automatic reflex correction in his voluntary movements, and also in an inability to sustain constant levels of muscle contraction without visual feedback over periods of more than one or two seconds. He was also unable to maintain long sequences of simple motor programmes without vision.", "title": "" }, { "docid": "48a18e689b226936813f8dcfd2664819", "text": "This report explores integrating fuzzy logic with two data mining methods (association rules and frequency episodes) for intrusion detection. Data mining methods are capable of extracting patterns automatically from a large amount of data. The integration with fuzzy logic can produce more abstract and flexible patterns for intrusion detection, since many quantitative features are involved in intrusion detection and security itself is fuzzy. In this report, Chapter I introduces the concept of intrusion detection and the practicality of applying fuzzy logic to intrusion detection. In Chapter II, two types of intrusion detection systems, host-based systems and network-based systems, are briefly reviewed. Some important artificial intelligence techniques that have been applied to intrusion detection are also reviewed here, including data mining methods for anomaly detection. Chapter III summarizes a set of desired characteristics for the Intelligent Intrusion Detection Model (IIDM) being developed at Mississippi State University. A preliminary architecture which we have developed for integrating machine learning methods with other intrusion detection methods is also described. Chapter IV discusses basic fuzzy logic theory, traditional algorithms for mining association rules, and an original algorithm for mining frequency episodes. In Chapter V, the algorithms we have extended for mining fuzzy association rules and fuzzy frequency episodes are described. We add a normalization step to the procedure for mining fuzzy association rules in order to prevent one data instance from contributing more than others. We also modify the procedure for mining frequency episodes to learn fuzzy frequency episodes. Chapter VI describes a set of experiments of applying fuzzy association rules and fuzzy episode rules for off-line anomaly detection and real-time intrusion detection. We use fuzzy association rules and fuzzy frequency episodes to extract patterns for temporal statistical measurements at a higher level than the data level. We define a modified similarity evaluation function which is continuous and monotonic for the application of fuzzy association rules and fuzzy frequency episodes in anomaly detection. We also present a new real-time intrusion detection method using fuzzy episode rules. The experimental results show the utility of fuzzy association rules and fuzzy frequency episodes in intrusion detection. The conclusions are included in Chapter VII.
", "title": "" }, { "docid": "db1abd38db0295fc573bdfca2c2b19a3", "text": "BACKGROUND\nBacterial vaginosis (BV) has been most consistently linked to sexual behaviour, and the epidemiological profile of BV mirrors that of established sexually transmitted infections (STIs). It remains a matter of debate however whether BV pathogenesis does actually involve sexual transmission of pathogenic micro-organisms from men to women. We therefore made a critical appraisal of the literature on BV in relation to sexual behaviour.\n\n\nDISCUSSION\nG. vaginalis carriage and BV occurs rarely with children, but has been observed among adolescent, even sexually non-experienced girls, contradicting that sexual transmission is a necessary prerequisite to disease acquisition. G. vaginalis carriage is enhanced by penetrative sexual contact but also by non-penetrative digito-genital contact and oral sex, again indicating that sex per se, but not necessarily coital transmission is involved. Several observations also point at female-to-male rather than at male-to-female transmission of G. vaginalis, presumably explaining the high concordance rates of G. vaginalis carriage among couples. Male antibiotic treatment has not been found to protect against BV, condom use is slightly protective, whereas male circumcision might protect against BV. BV is also common among women-who-have-sex-with-women and this relates at least in part to non-coital sexual behaviours. Though male-to-female transmission cannot be ruled out, overall there is little evidence that BV acts as an STD. Rather, we suggest BV may be considered a sexually enhanced disease (SED), with frequency of intercourse being a critical factor. This may relate to two distinct pathogenetic mechanisms: (1) in case of unprotected intercourse alkalinisation of the vaginal niche enhances a shift from lactobacilli-dominated microflora to a BV-like type of microflora and (2) in case of unprotected and protected intercourse mechanical transfer of perineal enteric bacteria is enhanced by coitus. A similar mechanism of mechanical transfer may explain the consistent link between non-coital sexual acts and BV. Similar observations supporting the SED pathogenetic model have been made for vaginal candidiasis and for urinary tract infection.\n\n\nSUMMARY\nThough male-to-female transmission cannot be ruled out, overall there is incomplete evidence that BV acts as an STI. We believe however that BV may be considered a sexually enhanced disease, with frequency of intercourse being a critical factor.", "title": "" }, { "docid": "07631274713ad80653552767d2fe461c", "text": "Life cycle assessment (LCA) methodology was used to determine the optimum municipal solid waste (MSW) management strategy for Eskisehir city. Eskisehir is one of the developing cities of Turkey where a total of approximately 750tons/day of waste is generated. An effective MSW management system is needed in this city since the generated MSW is dumped in an unregulated dumping site that has no liner, no biogas capture, etc. Therefore, five different scenarios were developed as alternatives to the current waste management system. Collection and transportation of waste, a material recovery facility (MRF), recycling, composting, incineration and landfilling processes were considered in these scenarios. SimaPro7 libraries were used to obtain background data for the life cycle inventory. 
One ton of municipal solid waste of Eskisehir was selected as the functional unit. The alternative scenarios were compared through the CML 2000 method and these comparisons were carried out from the abiotic depletion, global warming, human toxicity, acidification, eutrophication and photochemical ozone depletion points of view. According to the comparisons and sensitivity analysis, composting scenario, S3, is the more environmentally preferable alternative. In this study waste management alternatives were investigated only on an environmental point of view. For that reason, it might be supported with other decision-making tools that consider the economic and social effects of solid waste management.", "title": "" }, { "docid": "68f797b34880bf08a8825332165a955b", "text": "The immune system responds to pathogens by a variety of pattern recognition molecules such as the Toll-like receptors (TLRs), which promote recognition of dangerous foreign pathogens. However, recent evidence indicates that normal intestinal microbiota might also positively influence immune responses, and protect against the development of inflammatory diseases. One of these elements may be short-chain fatty acids (SCFAs), which are produced by fermentation of dietary fibre by intestinal microbiota. A feature of human ulcerative colitis and other colitic diseases is a change in ‘healthy’ microbiota such as Bifidobacterium and Bacteriodes, and a concurrent reduction in SCFAs. Moreover, increased intake of fermentable dietary fibre, or SCFAs, seems to be clinically beneficial in the treatment of colitis. SCFAs bind the G-protein-coupled receptor 43 (GPR43, also known as FFAR2), and here we show that SCFA–GPR43 interactions profoundly affect inflammatory responses. Stimulation of GPR43 by SCFAs was necessary for the normal resolution of certain inflammatory responses, because GPR43-deficient (Gpr43-/-) mice showed exacerbated or unresolving inflammation in models of colitis, arthritis and asthma. This seemed to relate to increased production of inflammatory mediators by Gpr43-/- immune cells, and increased immune cell recruitment. Germ-free mice, which are devoid of bacteria and express little or no SCFAs, showed a similar dysregulation of certain inflammatory responses. GPR43 binding of SCFAs potentially provides a molecular link between diet, gastrointestinal bacterial metabolism, and immune and inflammatory responses.", "title": "" }, { "docid": "cfa58ab168beb2d52fe6c2c47488e93a", "text": "In this paper we present our approach to automatically identify the subjectivity, polarity and irony of Italian Tweets. Our system which reaches and outperforms the state of the art in Italian is well adapted for different domains since it uses abstract word features instead of bag of words. We also present experiments carried out to study how Italian Sentiment Analysis systems react to domain changes. We show that bag of words approaches commonly used in Sentiment Analysis do not adapt well to domain changes.", "title": "" }, { "docid": "244116ffa1ed424fc8519eedc7062277", "text": "This paper describes a method of automatic placement for standard cells (polycells) that yields areas within 10-20 percent of careful hand placements. The method is based on graph partitioning to identify groups of modules that ought to be close to each other, and a technique for properly accounting for external connections at each level of partitioning. 
The placement procedure is in production use as part of an automated design system; it has been used in the design of more than 40 chips, in CMOS, NMOS, and bipolar technologies.", "title": "" }, { "docid": "8a4956ba4209b4c557f4f85ee7a885e7", "text": "In the Brand literature, few studies especially in Iran investigated the brand functions and business success. Hence, this study aims to provide the desirable model to creation and developing a deeper insight into the role of brand equity in the relationship between brand personality and customers purchase intention. The study statistical population consists of the whole Mellat Bank customers in Qazvin province, which used a questionnaire to collect data from them. In addition to, four hypotheses were announced and tested using structural equation modeling techniques. Research findings show the significant and positive effects of the brand personality on brand equity and purchase intention. Likewise, the results revealed that brand equity has a positive influence on customers' purchase intention and has a positive mediator role for the other two variables. According to the results of study, it is recommended to organizations and those marketing managers to take action to create a positive brand personality until make differentiation in customers minds compared to other brands and enhance brand equity and achieved to the comprehensive understanding of consumer behavior. © 2015 AESS Publications. All Rights Reserved.", "title": "" }, { "docid": "d7527aeeb5f26f23930b8d674beb0a13", "text": "A three-part investigation was conducted to explore the meaning of color preferences. Phase 1 used a Q-sort technique to assess intra-individual stability of preferences over 5 wk. Phase 2 used principal components analysis to discern the manner in which preferences were being made. Phase 3 used canonical correlation to evaluate a hypothesized relationship between color preferences and personality, with five scales of the Personality Research Form serving as the criterion measure. Munsell standard papers, a standard light source, and a color vision test were among control devices applied. There were marked differences in stability of color preferences. Sex differences in intra-individual stability were also apparent among the 90 subjects. An interaction of hue and lightness appeared to underlie such judgments when saturation was kept constant. An unexpected breakdown in control pointed toward the possibly powerful effect of surface finish upon color preference. No relationship to five manifest needs were found. It was concluded that the beginning steps had been undertaken toward psychometric development of a reliable technique for the measurement of color preference.", "title": "" }, { "docid": "4d2e8924181d129e23f8b51eccd7e1ef", "text": "This paper presents the design, fabrication, and characterization of millimeter-scale rotary electromagnetic generators. The axial-flux synchronous machines consist of a three-phase microfabricated surface-wound copper coil and a multipole permanent-magnet (PM) rotor measuring 2 mm in diameter. Several machines with various geometries and numbers of magnetic poles and turns per pole are designed and compared. Moreover, the use of different PM materials is investigated. Multipole magnetic rotors are modeled using finite element analysis to analyze magnetic field distributions. In operation, the rotor is spun above the microfabricated stator coils using an off-the-shelf air-driven turbine. 
As a result of design choices, the generators present different levels of operating frequency and electrical output power. The four-pole six-turn/pole NdFeB generator exhibits up to 6.6 mWrms of ac electrical power across a resistive load at a rotational speed of 392 000 r/min. This milliwatt-scale power generation indicates the feasibility of such ultrasmall machines for low-power applications. [2008-0078].", "title": "" }, { "docid": "34123b021d95c2380cde6390e9fdac6e", "text": "Because the leg is known to exhibit springlike behavior during the stance phase of running, several exoskeletons have attempted to place external springs in parallel with some or all of the leg during stance, but these designs have failed to permit natural kinematics during swing. To this end, a parallel-elastic exoskeleton is presented that introduces a clutch to disengage the parallel leg-spring and thereby not constrain swing-phase movements of the biological leg. A custom interference clutch with integrated planetary gear transmission, made necessary by the requirement for high holding torque but low mass, is presented and shown to withstand up to 190 N m at 1.8 deg resolution with a mass of only 710 g. A suitable control strategy for locking the clutch at peak knee extension is also presented, where only an onboard rate gyroscope and exoskeletal joint encoder are employed as sensory inputs. Exoskeletal electromechanics, sensing, and control are shown to achieve design critieria necessary to emulate biological knee stiffness behaviors in running. [DOI: 10.1115/1.4027841]", "title": "" }, { "docid": "feec0094203fdae5a900831ea81fcfb0", "text": "Costs, market fragmentation, and new media channels that let customers bypass advertisements seem to be in league against the old ways of marketing. Relying on mass media campaigns to build strong brands may be a thing of the past. Several companies in Europe, making a virtue of necessity, have come up with alternative brand-building approaches and are blazing a trail in the post-mass-media age. In England, Nestlé's Buitoni brand grew through programs that taught the English how to cook Italian food. The Body Shop garnered loyalty with its support of environmental and social causes. Cadbury funded a theme park tied to its history in the chocolate business. Häagen-Dazs opened posh ice-cream parlors and got itself featured by name on the menus of fine restaurants. Hugo Boss and Swatch backed athletic or cultural events that became associated with their brands. The various campaigns shared characteristics that could serve as guidelines for any company hoping to build a successful brand: senior managers were closely involved with brand-building efforts; the companies recognized the importance of clarifying their core brand identity; and they made sure that all their efforts to gain visibility were tied to that core identity. Studying the methods of companies outside one's own industry and country can be instructive for managers. Pilot testing and the use of a single and continuous measure of brand equity also help managers get the most out of novel approaches in their ever more competitive world.", "title": "" } ]
scidocsrr
1368e066976a3d74e6f0ebef805748d0
Efficient Implementations of Apriori and Eclat Christian Borgelt
[ { "docid": "e66f2052a2e9a7e870f8c1b4f2bfb56d", "text": "New algorithms with previous native palm pdf reader approaches, with gains of over an order of magnitude using.We present two new algorithms for solving this problem. Regularities, association rules, and gave an algorithm for finding such rules. 4 An.fast discovery of association rules based on our ideas in 33, 35. New algorithms with previous approaches, with gains of over an order of magnitude using.", "title": "" } ]
[ { "docid": "d7acbf20753e2c9c50b2ab0683d7f03a", "text": "In this paper, we propose a very deep fully convolutional encoding-decoding framework for image restoration such as denoising and super-resolution. The network is composed of multiple layers of convolution and de-convolution operators, learning end-to-end mappings from corrupted images to the original ones. The convolutional layers act as the feature extractor, which capture the abstraction of image contents while eliminating noises/corruptions. De-convolutional layers are then used to recover the image details. We propose to symmetrically link convolutional and de-convolutional layers with skip-layer connections, with which the training converges much faster and attains a higher-quality local optimum. First, The skip connections allow the signal to be back-propagated to bottom layers directly, and thus tackles the problem of gradient vanishing, making training deep networks easier and achieving restoration performance gains consequently. Second, these skip connections pass image details from convolutional layers to de-convolutional layers, which is beneficial in recovering the original image. Significantly, with the large capacity, we can handle different levels of noises using a single model. Experimental results show that our network achieves better performance than all previously reported state-of-the-art methods.", "title": "" }, { "docid": "69561d0f42cf4aae73d4c97c1871739e", "text": "Recent methods based on 3D skeleton data have achieved outstanding performance due to its conciseness, robustness, and view-independent representation. With the development of deep learning, Convolutional Neural Networks (CNN) and Long Short Term Memory (LSTM)-based learning methods have achieved promising performance for action recognition. However, for CNN-based methods, it is inevitable to loss temporal information when a sequence is encoded into images. In order to capture as much spatial-temporal information as possible, LSTM and CNN are adopted to conduct effective recognition with later score fusion. In addition, experimental results show that the score fusion between CNN and LSTM performs better than that between LSTM and LSTM for the same feature. Our method achieved state-of-the-art results on NTU RGB+D datasets for 3D human action analysis. The proposed method achieved 87.40% in terms of accuracy and ranked 1st place in Large Scale 3D Human Activity Analysis Challenge in Depth Videos.", "title": "" }, { "docid": "b77d297feeff92a2e7b03bf89b5f20db", "text": "Dependability evaluation main objective is to assess the ability of a system to correctly function over time. There are many possible approaches to the evaluation of dependability: in these notes we are mainly concerned with dependability evaluation based on probabilistic models. Starting from simple probabilistic models with very efficient solution methods we shall then come to the main topic of the paper: how Petri nets can be used to evaluate the dependability of complex systems.", "title": "" }, { "docid": "0f503bded2c4b0676de16345d4596280", "text": "An emerging approach to the problem of reducing the identity theft is represented by the adoption of biometric authentication systems. Such systems however present however several challenges, related to privacy, reliability, security of the biometric data. Inter-operability is also required among the devices used for the authentication. 
Moreover, very often biometric authentication in itself is not sufficient as a conclusive proof of identity and has to be complemented with multiple other proofs of identity like passwords, SSN, or other user identifiers. Multi-factor authentication mechanisms are thus required to enforce strong authentication based on the biometric and identifiers of other nature.In this paper we provide a two-phase authentication mechanism for federated identity management systems. The first phase consists of a two-factor biometric authentication based on zero knowledge proofs. We employ techniques from vector-space model to generate cryptographic biometric keys. These keys are kept secret, thus preserving the confidentiality of the biometric data, and at the same time exploit the advantages of a biometric authentication. The second authentication combines several authentication factors in conjunction with the biometric to provide a strong authentication. A key advantage of our approach is that any unanticipated combination of factors can be used. Such authentication system leverages the information of the user that are available from the federated identity management system.", "title": "" }, { "docid": "b6614633537319c500e70a1866019969", "text": "The life of a teenager today is far different than in past decades. Through semi-structured interviews with 10 teenagers and 10 parents of teenagers, we investigate parent-teen privacy decision making in these uncharted waters. Parents and teens generally agreed that teens had a need for some degree of privacy from their parents and that respecting teens’ privacy demonstrated trust and fostered independence. We explored the boundaries of teen privacy in both the physical and digital worlds. While parents commonly felt none of their children’s possessions should ethically be exempt from parental monitoring, teens felt strongly that cell phones, particularly text messages, were private. Parents discussed struggling to keep up with new technologies and to understand teens’ technology-mediated socializing. While most parents said they thought similarly about privacy in the physical and digital worlds, half of teens said they thought about these concepts differently. We present cases where parents made privacy decisions using false analogies with the physical world or outdated assumptions. We also highlight directions for more usable digital parenting tools.", "title": "" }, { "docid": "ed544d89c317a91cdfe9f5ee8a2f574b", "text": "The rapid growth of web resources lead to a need of enhanced Search scheme for information retrieval. Every single user contributes a part of new information to be added to the web every day. This huge data supplied are of diverse area in origin being added, without a mere relation. Hence, a novel search scheme must be applied for bringing out the relevant results on querying web for data. The current web search scheme could bring out only relevant pages to be as results. But, a Semantic web is a solution to this issue through providing a suitable result on understanding the appropriate need of information. It can be acquired through extending the support for databases in machine readable form. It leads to redefinition of current web into semantic web by adding semantic annotations. This paper gives an overview of Semantic mapping approaches. 
The main goal of this paper is to propose the steps for bringing out a new Semantic web discovery algorithm with an efficient Semantic mapping and a novel Classification Scheme for categorization of concepts.", "title": "" }, { "docid": "563af54f4fd71ac011477ed32c041483", "text": "In Image Processing efficient algorithms are always pursued for applications that use the most advanced hardware architectures. Distance Transform is a classic operation for blurring effects, skeletonizing, segmentation and various other purposes. This article presents two implementations of the Euclidean Distance Transform using CUDA (Compute Unified Device Architecture) in GPU (Graphics Process Unit): of the Meijster's Sequential Algorithm and another is a very efficient algorithm of simple structure. Both using only shared memory. The results presented herein used images of various types and sizes to show a faster run time compared with the best-known implementations in CPU.", "title": "" }, { "docid": "91b6b9e22f191cfec87d7b62d809542c", "text": "In the past few years, the storage and analysis of large-scale and fast evolving networks present a great challenge. Therefore, a number of different techniques have been proposed for sampling large networks. In general, network exploration techniques approximate the original networks more accurately than random node and link selection. Yet, link selection with additional subgraph induction step outperforms most other techniques. In this paper, we apply subgraph induction also to random walk and forest-fire sampling. We analyze different real-world networks and the changes of their properties introduced by sampling. We compare several sampling techniques based on the match between the original networks and their sampled variants. The results reveal that the techniques with subgraph induction underestimate the degree and clustering distribution, while overestimate average degree and density of the original networks. Techniques without subgraph induction step exhibit exactly the opposite behavior. Hence, the performance of the sampling techniques from random selection category compared to network exploration sampling does not differ significantly, while clear differences exist between the techniques with subgraph induction step and the ones without it.", "title": "" }, { "docid": "71dd012b54ae081933bddaa60612240e", "text": "This paper analyzes & compares four adders with different logic styles (Conventional, transmission gate, 14 transistors & GDI based technique) for transistor count, power dissipation, delay and power delay product. It is performed in virtuoso platform, using Cadence tool with available GPDK - 90nm kit. The width of NMOS and PMOS is set at 120nm and 240nm respectively. Transmission gate full adder has sheer advantage of high speed but consumes more power. GDI full adder gives reduced voltage swing not being able to pass logic 1 and logic 0 completely showing degraded output. Transmission gate full adder shows better performance in terms of delay (0.417530 ns), whereas 14T full adder shows better performance in terms of all three aspects.", "title": "" }, { "docid": "dcf7214c15c13f13d33c9a7b2c216588", "text": "Many machine learning tasks such as multiple instance learning, 3D shape recognition and few-shot image classification are defined on sets of instances. Since solutions to such problems do not depend on the permutation of elements of the set, models used to address them should be permutation invariant. 
We present an attention-based neural network module, the Set Transformer, specifically designed to model interactions among elements in the input set. The model consists of an encoder and a decoder, both of which rely on attention mechanisms. In an effort to reduce computational complexity, we introduce an attention scheme inspired by inducing point methods from sparse Gaussian process literature. It reduces computation time of self-attention from quadratic to linear in the number of elements in the set. We show that our model is theoretically attractive and we evaluate it on a range of tasks, demonstrating increased performance compared to recent methods for set-structured data.", "title": "" }, { "docid": "c7d23af5ad79d9863e83617cf8bbd1eb", "text": "Insulin resistance has long been associated with obesity. More than 40 years ago, Randle and colleagues postulated that lipids impaired insulin-stimulated glucose use by muscles through inhibition of glycolysis at key points. However, work over the past two decades has shown that lipid-induced insulin resistance in skeletal muscle stems from defects in insulin-stimulated glucose transport activity. The steatotic liver is also resistant to insulin in terms of inhibition of hepatic glucose production and stimulation of glycogen synthesis. In muscle and liver, the intracellular accumulation of lipids, namely diacylglycerol, triggers activation of novel protein kinases C with subsequent impairments in insulin signalling. This unifying hypothesis accounts for the mechanism of insulin resistance in obesity, type 2 diabetes, lipodystrophy, and ageing; and the insulin-sensitising effects of thiazolidinediones.", "title": "" }, { "docid": "192f8528ca2416f9a49ce152def2fbe6", "text": "We study the extent to which we can infer users’ geographical locations from social media. Location inference from social media can benefit many applications, such as disaster management, targeted advertising, and news content tailoring. In recent years, a number of algorithms have been proposed for identifying user locations on social media platforms such as Twitter and Facebook from message contents, friend networks, and interactions between users. In this paper, we propose a novel probabilistic model based on factor graphs for location inference that offers several unique advantages for this task. First, the model generalizes previous methods by incorporating content, network, and deep features learned from social context. The model is also flexible enough to support both supervised learning and semi-supervised learning. Second, we explore several learning algorithms for the proposed model, and present a Two-chain Metropolis-Hastings (MH+) algorithm, which improves the inference accuracy. Third, we validate the proposed model on three different genres of data – Twitter, Weibo, and Facebook – and demonstrate that the proposed model can substantially improve the inference accuracy (+3.3-18.5% by F1-score) over that of several state-of-the-art methods.", "title": "" }, { "docid": "fb5f52c0b845ff23e82d29f8fb705a0b", "text": "Organizational culture continues to be cited as one of the most important factors for organizations’ success in an increasingly competitive and IT-driven global environment. Given the fact that organizational culture has an influence all over the organization, the complexity of its nature is increased when considering the relationship between business and IT. 
As a result, different factors that have influence on changing organizational culture were highlighted in literature. These factors are found in the research literature distributed in three main group; micro-environment factors, macro-environment factors and leader’s impact. One of the factors that have not been yet well investigated in researches is concerning business-IT alignment (BITA. Therefore the purpose of this paper is to investigate the impact of BITA maturity on organizational culture. The research process that we have followed is a literature survey followed by an in-depth case study. The result of this research shows a clear interrelation in theories of both BITA and organizational culture, and clear indications of BITA impact on organizational culture and its change. The findings may support both practitioners and researchers in order to understand the insights of the relationships between BITA and organizational culture components and provide a roadmap for improvements or desired changes in organizational culture with highlighted target business area.", "title": "" }, { "docid": "e2af17b368fef36187c895ad5fd20a58", "text": "We study in this paper the problem of jointly clustering and learning representations. As several previous studies have shown, learning representations that are both faithful to the data to be clustered and adapted to the clustering algorithm can lead to better clustering performance, all the more so that the two tasks are performed jointly. We propose here such an approach for k-Means clustering based on a continuous reparametrization of the objective function that leads to a truly joint solution. The behavior of our approach is illustrated on various datasets showing its efficacy in learning representations for objects while clustering them.", "title": "" }, { "docid": "565831ad3bd5c7efcd258e48fc7dc64b", "text": "I n his 2003 book Moneyball, financial reporter Michael Lewis made a striking claim: the valuation of skills in the market for baseball players was grossly inefficient. The discrepancy was so large that when the Oakland Athletics hired an unlikely management group consisting of Billy Beane, a former player with mediocre talent, and two quantitative analysts, the team was able to exploit this inefficiency and outproduce most of the competition, while operating on a shoestring budget. The publication of Moneyball triggered a firestorm of criticism from baseball insiders (Lewis, 2004), and it raised the eyebrows of many economists as well. Basic price theory implies a tight correspondence between pay and productivity when markets are competitive and rich in information, as would seem to be the case in baseball. The market for baseball players receives daily attention from the print and broadcast media, along with periodic in-depth analysis from lifelong baseball experts and academic economists. Indeed, a case can be made that more is known about pay and quantified performance in this market than in any other labor market in the American economy. In this paper, we test the central portion of Lewis’s (2003) argument with elementary econometric tools and confirm his claims. In particular, we find that hitters’ salaries during this period did not accurately reflect the contribution of various batting skills to winning games. This inefficiency was sufficiently large that knowledge of its existence, and the ability to exploit it, enabled the Oakland Athletics to gain a substantial advantage over their competition. 
Further, we find", "title": "" }, { "docid": "b8172acdca89e720783a803d98b271ad", "text": "Vertically stacked nanowire field effect transistors currently dominate the race to become mainstream devices for 7-nm CMOS technology node. However, these devices are likely to suffer from the issue of nanowire stack position dependent drain current. In this paper, we show that the nanowire located at the bottom of the stack is farthest away from the source/drain silicide contacts and suffers from higher series resistance as compared to the nanowires that are higher up in the stack. It is found that upscaling the diameter of lower nanowires with respect to the upper nanowires improved uniformity of the current in each nanowire, but with the drawback of threshold voltage reduction. We propose to increase source/drain trench silicide depth as a more promising solution to this problem over the nanowire diameter scaling, without compromising on power or performance of these devices.", "title": "" }, { "docid": "3346848a0b6d41856fe05fe2503065ed", "text": "It has long been recognized that temporal anaphora in French and English depends on the aspectual distinction between events and states. For example, temporal location as well as temporal update depends on the aspectual type. This paper presents a general theory of aspect-based temporal anaphora, which extends from languages with grammatical tenses (like French and English) to tenseless languages (e.g. Kalaallisut). This theory also extends to additional aspect-dependent phenomena and to non-atomic aspectual types, processes and habits, which license anaphora to proper atomic parts (cf. nominal pluralities and kinds).", "title": "" }, { "docid": "fbdda2f44b65944a0a47cee2418ed9dc", "text": "Volume 5 • Issue 2 • 1000226 Adv Tech Biol Med, an open access journal ISSN: 2379-1764 The main focus of the forensic taphonomy is the study of environmental conditions influencing the decomposition process to estimate the postmortem interval and determine the cause and manner of death. The study is part of a specific branch of the forensic science that makes use of a broad aspect of methodologies taken from different areas of expertise such as botany, archeology, soil microbiology and entomology, all used for uncovering and examining clandestine graves allowing to succeed in the investigation process. Therefore, the “Forensic Mycology” emerges as a new science term meaning the study of the coexistence of fungal species nearby human cadavers as well as those fungal groups potentially useful in establishing a time of death [1,2].", "title": "" }, { "docid": "0b507193ca68d05a3432a9e735df5d95", "text": "Capturing image with defocused background by using a large aperture is a widely used technique in digital single-lens reflex (DSLR) camera photography. It is also desired to provide this function to smart phones. In this paper, a new algorithm is proposed to synthesize such an effect for a single portrait image. The foreground portrait is detected using a face prior based salient object detection algorithm. Then with an improved gradient domain guided image filter, the details in the foreground are enhanced while the background pixels are blurred. In this way, the background objects are defocused and thus the foreground objects are emphasized. The resultant image looks similar to image captured using a camera with a large aperture. The proposed algorithm can be adopted in smart phones, especially for the front cameras of smart phones.", "title": "" } ]
scidocsrr
baa20c7aac59f9d95d7f7153fe5817aa
Beyond Object Proposals: Random Crop Pooling for Multi-Label Image Recognition
[ { "docid": "a5f17126a90b45921f70439ff96a0091", "text": "Successful methods for visual object recognition typically rely on training datasets containing lots of richly annotated images. Detailed image annotation, e.g. by object bounding boxes, however, is both expensive and often subjective. We describe a weakly supervised convolutional neural network (CNN) for object classification that relies only on image-level labels, yet can learn from cluttered scenes containing multiple objects. We quantify its object classification and object location prediction performance on the Pascal VOC 2012 (20 object classes) and the much larger Microsoft COCO (80 object classes) datasets. We find that the network (i) outputs accurate image-level labels, (ii) predicts approximate locations (but not extents) of objects, and (iii) performs comparably to its fully-supervised counterparts using object bounding box annotation for training.", "title": "" } ]
[ { "docid": "ea73c0a2ef6196429a29591a758bc4ca", "text": "Broadband and planar microstrip-to-waveguide transitions are developed in the millimeter-wave band. Novel printed pattern is applied to the microstrip substrate in the ordinary back-short-type transition to operate over extremely broad frequency bandwidth. Furthermore, in order to realize flat and planar transition which does not need back-short waveguide, the transition is designed in multi-layer substrate. Both transitions are fabricated and their performances are measured and simulated in the millimeter-wave band.", "title": "" }, { "docid": "078287ad3a2f4794b38e6f6e24c676cd", "text": "Odontomas, benign tumors that develop in the jaw, rarely erupt into the oral cavity. We report an erupted odontoma which delayed eruption of the first molar. The patient was a 10-year-old Japanese girl who came to our hospital due to delayed eruption of the right maxillary first molar. All the deciduous teeth had been shed. The second premolar on the right side had erupted, but not the first molar. Slight inflammation of the alveolar mucosa around the first molar had exposed a tooth-like, hard tissue. Panoramic radiography revealed a radiopaque mass indicating a lesion approximately 1 cm in diameter. The border of the image was clear, and part of the mass was situated close to the occlusal surface of the first molar. The root of the maxillary right first molar was only half-developed. A clinical diagnosis of odontoma was made. The odontoma was subsequently extracted, allowing the crown of the first molar to erupt almost 5 months later. The dental germ of the permanent tooth had been displaced by the odontoma. However, after the odontoma had been extracted, the permanent tooth was still able to erupt spontaneously, as eruptive force still remained. When the eruption of a tooth is significantly delayed, we believe that it is necessary to examine the area radiographically. If there is any radiographic evidence of a physical obstruction that might delay eruption, that obstruction should be removed before any problems can arise. Regular dental checkups at schools might improve our ability to detect evidence of delayed eruption earlier.", "title": "" }, { "docid": "218ac1ea6bde76d9620269f74f5958fd", "text": "Emotion recognition from EEG signals allows the direct assessment of the “inner” state of a user, which is considered an important factor in human-machine-interaction. Many methods for feature extraction have been studied and the selection of both appropriate features and electrode locations is usually based on neuro-scientific findings. Their suitability for emotion recognition, however, has been tested using a small amount of distinct feature sets and on different, usually small data sets. A major limitation is that no systematic comparison of features exists. Therefore, we review feature extraction methods for emotion recognition from EEG based on 33 studies. An experiment is conducted comparing these features using machine learning techniques for feature selection on a self recorded data set. Results are presented with respect to performance of different feature selection methods, usage of selected feature types, and selection of electrode locations. Features selected by multivariate methods slightly outperform univariate methods. Advanced feature extraction techniques are found to have advantages over commonly used spectral power bands. 
Results also suggest preference to locations over parietal and centro-parietal lobes.", "title": "" }, { "docid": "01f9384b33a84c3ece4db5337e708e24", "text": "Broken rails are the leading cause of major derailments in North America. Class I freight railroads average 84 mainline broken-rail derailments per year with an average track and equipment cost of approximately $525,000 per incident. The number of mainline broken-railcaused derailments has increased from 77 in 1997, to 91 in 2006; therefore, efforts to reduce their occurrence remain important. We conducted an analysis of the factors that influence the occurrence of broken rails and developed a quantitative model to predict locations where they are most likely to occur. Among the factors considered were track and rail characteristics, maintenance activities and frequency, and on-track testing results. Analysis of these factors involved the use of logistic regression techniques to develop a statistical model for the prediction of broken rail locations. For such a model to have value for railroads it must be feasible to use and provide information in a useful manner. Consequently, an optimal prediction model containing only the top eight factors related to broken rails was developed. The economic impact of broken rail events was also studied. This included the costs associated with broken rail derailments and service failures, as well as the cost of typical prevention measures. A train delay calculator was also developed based on industry operating averages. Overall, the information presented here can assist railroads to more effectively allocate resources to prevent the occurrence of broken rails. INTRODUCTION Understanding the factors related to broken rails is an important topic for U.S. freight railroads and is becoming more so because of the increase in their occurrence in recent years. This increase is due to several factors, but the combination of increased traffic and heavier axle loads are probably the most important. Broken rails are generally caused by the undetected growth of either internal or surface defects in the rail (1). Previous research has focused on both mechanistic analyses (2-8) and statistical analyses (9-13) in order to understand the factors that cause crack growth in rails and ultimately broken rails. The first objective of this analysis was to develop a predictive tool that will enable railroads to identify locations with a high probability of broken rail. The possible predictive factors that were evaluated included rail characteristics, infrastructure data, maintenance activity, operational information, and rail testing results. The second objective was to study the economic impact of broken rails based on industry operating averages. Our analysis on this topic incorporates previous work that developed a framework for the cost of broken rails (14). The purpose of this paper is to provide information to enable more efficient evaluation of options to reduce the occurence of broken rails. DEVELOPMENT OF SERVICE FAILURE PREDICTION MODEL The first objective of this paper was to develop a model to identify locations in the rail network with a high probability of broken rail occurrence based on broken rail service failure data and possible influence factors. All of the factors that might affect service failure occurrence and for which we had data were considered in this analysis. Several broken rail predictive models were developed and evaluated using logistic regression techniques. 
Data Available for Study In order to develop a predictive tool, it is desirable to initially consider as many factors as possible that might affect the occurrence of broken rails. From the standpoint of rail maintenance planning it is important to determine which factors are and are not correlated with broken rail occurence. Therefore the analysis included a wide-range of possible variables for which data were available. This included track and rail characteristics such as rail age, rail curvature, track speed, grade, and rail weight. Also, changes in track modulus due to the presence of infrastructure features such as bridges and turnouts have a potential effect on rail defect growth and were examined as well. Additionally, maintenance activities were included that can reduce the likelihood of broken rail occurrence, such as rail grinding and tie replacement. Finally, track geometry and ultrasonic testing for rail defects were used by railroads to assess the condition of track and therefore the results of these tests are included as they may provide predictive information about broken rail occurrence. The BNSF Railway provided data on the location of service failures and a variety of other infrastructure, inspection and operational parameters. In this study a “service failure” was defined as an incident where a track was taken out of service due to a broken rail. A database was developed from approximately 23,000 miles of mainline track maintained by the BNSF Railway covering the four-year period, 2003 through 2006. BNSF’s network was divided into 0.01-mile-long segments (approximately 53 feet each) and the location of each reported service failure was recorded. BNSF experienced 12,685 service failures during the four-year study period. For the case of modeling rare events it is common to sample all of the rare events and compare these with a similar sized sample of instances where the event did not occur (15). Therefore an additional 12,685 0.01-mile segments that did not experience a service failure during the four-year period were randomly selected from the same network. Each non-failure location was also assigned a random date within the four-year time period for use in evaluating certain temporal variables that might be factors. Thus, the dataset used in this analysis included a total of 25,370 segment locations and dates when a service failure did or did not occur in the railroad’s network during the study period. All available rail characteristics, infrastructure data, maintenance activity, operational information, and track testing results were linked to each of these locations, for a total of 28 unique input variables. Evaluation of Previous Service Failure Model In a previous study Dick developed a predictive model of service failures based on relevant track and traffic data for a two-year period (10, 11). The outcome of that study was a multivariate statistical model that could quantify the probability of a service failure at any particular location based on a number of track and traffic related variables. Dick‘s model used 11 possible predictor factors for broken rails and could correctly classify failure locations with 87.4% accuracy using the dataset provided to him. Our first step was to test this model using data from a more recent two-year period. From 2005 through 2006, the BNSF experienced 6,613 service failures and data on these, along with 6,613 randomly selected non-failure locations, were analyzed. 
7,247 of the 13,226 cases were classified correctly (54.8%), considerably lower than in the earlier study causing us to ask why the predictive power seemed to have declined. Examination of the service failure dataset used previously revealed that it may not have included all the trackage from the network. This resulted in a dataset that generated the particular model and accuracy levels reported in the earlier study (10, 11). Therefore a new, updated statistical model was developed to predict service failure locations. Development of Updated Statistical Classification Model The updated model that was developed to predict service failure locations used similar logistic regression techniques. Logistic regression was selected because it is a discrete choice model that calculates the probability of failure based on available input variables. These probabilities are used to classify each case as either failure or non-failure. A statistical regression equation was developed based on the significant input parameters to determine the probability of failure. To find the best classification model, the input parameters were evaluated with and without multiple-term interactions allowed. Logistic Regression Methodology and Techniques The model was developed as a discrete choice classification problem of either failure or non-failure using the new dataset described above. The objective was to find the best combination of variables and mathematical relationships among the 28 available input variables to predict the occurrence of broken rails. The service failure probability model was developed using Statistical Analysis Software (SAS) and the LOGISTIC procedure (16). This procedure fits a discrete choice logistic regression model to the input data. The output of this model is an index value between zero and one corresponding to the probability of a service failure occurrence. Four commonly used variable selection techniques were evaluated in this analysis to find the best model. The simplest method is referred to as “full-model”, or variable selection type “none” in SAS. The full-model method uses every available input variable to determine the best regression model. The next technique examined was selection type “forward”, which evaluates each input variable and systematically adds the most significant variables to the model. The forward selection process continues adding the most significant variable until no additional variables meet a defined significance level for inclusion in the model. The entry and removal level used in this analysis for all variable selection techniques was a 0.05 significance threshold. The “backward” variable selection technique was also used. This method starts with all input variables included in the model. In the first step, the model determines the least significant variable that does not meet the defined significance level and removes it from the model. This process continues until no other variables included in the model meet the defined criteria for removal. The final logistic regression selection technique used was “step-wise” selection. The step-wise selection method is s", "title": "" }, { "docid": "1f5c52945d83872a93749adc0e1a0909", "text": "Turmeric, derived from the plant Curcuma longa, is a gold-colored spice commonly used in the Indian subcontinent, not only for health care but also for the preservation of food and as a yellow dye for textiles. 
Curcumin, which gives the yellow color to turmeric, was first isolated almost two centuries ago, and its structure as diferuloylmethane was determined in 1910. Since the time of Ayurveda (1900 B.C) numerous therapeutic activities have been assigned to turmeric for a wide variety of diseases and conditions, including those of the skin, pulmonary, and gastrointestinal systems, aches, pains, wounds, sprains, and liver disorders. Extensive research within the last half century has proven that most of these activities, once associated with turmeric, are due to curcumin. Curcumin has been shown to exhibit antioxidant, antiinflammatory, antiviral, antibacterial, antifungal, and anticancer activities and thus has a potential against various malignant diseases, diabetes, allergies, arthritis, Alzheimer’s disease, and other chronic illnesses. Curcumin can be considered an ideal “Spice for Life”. Curcumin is the most important fraction of turmeric which is responsible for its biological activity. In the present work we have investigated the qualitative and quantitative determination of curcumin in the ethanolic extract of C.longa. Qualitative estimation was carried out by thin layer chromatographic (TLC) method. The total phenolic content of the ethanolic extract of C.longa was found to be 11.24 as mg GAE/g. The simultaneous determination of the pharmacologically important active curcuminoids viz. curcumin, demethoxycurcumin and bisdemethoxycurcumin in Curcuma longa was carried out by spectrophotometric and HPLC techniques. HPLC separation was performed on a Cyber Lab C-18 column (250 x 4.0 mm, 5μ) using acetonitrile and 0.1 % orthophosphoric acid solution in water in the ratio 60 : 40 (v/v) at flow rate of 0.5 mL/min. Detection of curcuminoids were performed at 425 nm.", "title": "" }, { "docid": "cc99e806503b158aa8a41753adecd50c", "text": "Semantic Mutation Testing (SMT) is a technique that aims to capture errors caused by possible misunderstandings of the semantics of a description language. It is intended to target a class of errors which is different from those captured by traditional Mutation Testing (MT). This paper describes our experiences in the development of an SMT tool for the C programming language: SMT-C. In addition to implementing the essential requirements of SMT (generating semantic mutants and running SMT analysis) we also aimed to achieve the following goals: weak MT/SMT for C, good portability between different configurations, seamless integration into test routines of programming with C and an easy to use front-end.", "title": "" }, { "docid": "645fa4f1d49955109bfdf52111b1b460", "text": "We describe our work in the collection and analysis of massive data describing the connections between participants to online social networks. Alternative approaches to social network data collection are defined and evaluated in practice, against the popular Facebook Web site. Thanks to our ad-hoc, privacy-compliant crawlers, two large samples, comprising millions of connections, have been collected; the data is anonymous and organized as an undirected graph. 
We describe a set of tools that we developed to analyze specific properties of such social-network graphs, i.e., among others, degree distribution, centrality measures, scaling laws and distribution of friendship.", "title": "" }, { "docid": "bb416322f9ce64045f2bd98cfeacb715", "text": "This abstract presents our preliminary results on development of a cognitive assistant system for emergency response that aims to improve situational awareness and safety of first responders. This system integrates a suite of smart wearable sensors, devices, and analytics for real-time collection and analysis of in-situ data from incident scene and providing dynamic data-driven insights to responders on the most effective response actions to take.", "title": "" }, { "docid": "2804384964bc8996e6574bdf67ed9cb5", "text": "In the past 2 decades, correlational and experimental studies have found a positive association between violent video game play and aggression. There is less evidence, however, to support a long-term relation between these behaviors. This study examined sustained violent video game play and adolescent aggressive behavior across the high school years and directly assessed the socialization (violent video game play predicts aggression over time) versus selection hypotheses (aggression predicts violent video game play over time). Adolescents (N = 1,492, 50.8% female) were surveyed annually from Grade 9 to Grade 12 about their video game play and aggressive behaviors. Nonviolent video game play, frequency of overall video game play, and a comprehensive set of potential 3rd variables were included as covariates in each analysis. Sustained violent video game play was significantly related to steeper increases in adolescents' trajectory of aggressive behavior over time. Moreover, greater violent video game play predicted higher levels of aggression over time, after controlling for previous levels of aggression, supporting the socialization hypothesis. In contrast, no support was found for the selection hypothesis. Nonviolent video game play also did not predict higher levels of aggressive behavior over time. Our findings, and the fact that many adolescents play video games for several hours every day, underscore the need for a greater understanding of the long-term relation between violent video games and aggression, as well as the specific game characteristics (e.g., violent content, competition, pace of action) that may be responsible for this association.", "title": "" }, { "docid": "9d6a0b31bf2b64f1ec624222a2222e2a", "text": "This is the translation of a paper by Marc Prensky, the originator of the famous metaphor digital natives digital immigrants. Here, ten years after the birth of that successful metaphor, Prensky outlines that, while the distinction between digital natives and immigrants will progressively become less important, new concepts will be needed to represent the continuous evolution of the relationship between man and digital technologies. In this paper Prensky introduces the concept of digital wisdom, a human quality which develops as a result of the empowerment that the natural human skills can receive through a creative and clever use of digital technologies. KEY-WORDS Digital natives, digital immigrants, digital wisdom, digital empowerment. Prensky M. (2010). H. Sapiens Digitale: dagli Immigrati digitali e nativi digitali alla saggezza digitale. TD-Tecnologie Didattiche, 50, pp. 
17-24. The problems of today's world cannot be solved by resorting to the same kind of thinking that created them", "title": "" }, { "docid": "185ae8a2c89584385a810071c6003c15", "text": "In this paper, we propose a free viewpoint image rendering method combined with filter-based alpha matting for improving the image quality of image boundaries. When we synthesize a free viewpoint image, blur around object boundaries in an input image spills foreground/background color in the synthesized image. To generate smooth boundaries, alpha matting is a solution. In our method based on filtering, we make a boundary map from input images and depth maps, and then feather the map by using a guided filter. In addition, we extend the view synthesis method to handle the alpha channel. Experimental results show that the proposed method synthesizes 0.4 dB higher quality images than the conventional method without the matting. Also the proposed method synthesizes 0.2 dB higher quality images than the conventional method of robust matting. In addition, the computational cost of the proposed method is 100x faster than the conventional matting.", "title": "" }, { "docid": "a0dc5016dfd424846177e8bb563395d3", "text": "BACKGROUND\nGiven that the prevalence of antenatal and postnatal depression is high, with estimates around 13%, and the consequences serious, efforts have been made to identify risk factors to assist in prevention, identification and treatment. Most risk factors associated with postnatal depression have been well researched, whereas predictors of antenatal depression have been less researched. Risk factors associated with early parenting stress have not been widely researched, despite the strong link with depression. The aim of this study was to further elucidate which of some previously identified risk factors are most predictive of three outcome measures: antenatal depression, postnatal depression and parenting stress and to examine the relationship between them.\n\n\nMETHODS\nPrimipara and multiparae women were recruited antenatally from two major hospitals as part of the beyondblue National Postnatal Depression Program 1. In this subsidiary study, 367 women completed an additional large battery of validated questionnaires to identify risk factors in the antenatal period at 26-32 weeks gestation. A subsample of these women (N = 161) also completed questionnaires at 10-12 weeks postnatally. Depression level was measured by the Beck Depression Inventory (BDI).\n\n\nRESULTS\nRegression analyses identified significant risk factors for the three outcome measures. (1). Significant predictors for antenatal depression: low self-esteem, antenatal anxiety, low social support, negative cognitive style, major life events, low income and history of abuse. (2). Significant predictors for postnatal depression: antenatal depression and a history of depression while also controlling for concurrent parenting stress, which was a significant variable. Antenatal depression was identified as a mediator between seven of the risk factors and postnatal depression. (3). Postnatal depression was the only significant predictor for parenting stress and also acted as a mediator for other risk factors.\n\n\nCONCLUSION\nRisk factor profiles for antenatal depression, postnatal depression and parenting stress differ but are interrelated. Antenatal depression was the strongest predictor of postnatal depression, and in turn postnatal depression was the strongest predictor for parenting stress. 
These results provide clinical direction suggesting that early identification and treatment of perinatal depression is important.", "title": "" }, { "docid": "7db989219c3c15aa90a86df84b134473", "text": "INTRODUCTION\nResearch indicated that: (i) vaginal orgasm (induced by penile-vaginal intercourse [PVI] without concurrent clitoral masturbation) consistency (vaginal orgasm consistency [VOC]; percentage of PVI occasions resulting in vaginal orgasm) is associated with mental attention to vaginal sensations during PVI, preference for a longer penis, and indices of psychological and physiological functioning, and (ii) clitoral, distal vaginal, and deep vaginal/cervical stimulation project via different peripheral nerves to different brain regions.\n\n\nAIMS\nThe aim of this study is to examine the association of VOC with: (i) sexual arousability perceived from deep vaginal stimulation (compared with middle and shallow vaginal stimulation and clitoral stimulation), and (ii) whether vaginal stimulation was present during the woman's first masturbation.\n\n\nMETHODS\nA sample of 75 Czech women (aged 18-36), provided details of recent VOC, site of genital stimulation during first masturbation, and their recent sexual arousability from the four genital sites.\n\n\nMAIN OUTCOME MEASURES\nThe association of VOC with: (i) sexual arousability perceived from the four genital sites and (ii) involvement of vaginal stimulation in first-ever masturbation.\n\n\nRESULTS\nVOC was associated with greater sexual arousability from deep vaginal stimulation but not with sexual arousability from other genital sites. VOC was also associated with women's first masturbation incorporating (or being exclusively) vaginal stimulation.\n\n\nCONCLUSIONS\nThe findings suggest (i) stimulating the vagina during early life masturbation might indicate individual readiness for developing greater vaginal responsiveness, leading to adult greater VOC, and (ii) current sensitivity of deep vaginal and cervical regions is associated with VOC, which might be due to some combination of different neurophysiological projections of the deep regions and their greater responsiveness to penile stimulation.", "title": "" }, { "docid": "7ff291833a25ca1a073ebc2a2e5274e7", "text": "High precision ground truth data is a very important factor for the development and evaluation of computer vision algorithms and especially for advanced driver assistance systems. Unfortunately, some types of data, like accurate optical flow and depth as well as pixel-wise semantic annotations are very difficult to obtain. In order to address this problem, in this paper we present a new framework for the generation of high quality synthetic camera images, depth and optical flow maps and pixel-wise semantic annotations. The framework is based on a realistic driving simulator called VDrift [1], which allows us to create traffic scenarios very similar to those in real life. We show how we can use the proposed framework to generate an extensive dataset for the task of multi-class image segmentation. We use the dataset to train a pairwise CRF model and to analyze the effects of using various combinations of features in different image modalities.", "title": "" }, { "docid": "f7d64d093df1aa158636482af2dd7bff", "text": "Vision-based Human activity recognition is becoming a trendy area of research due to its wide application such as security and surveillance, human–computer interactions, patients monitoring system, and robotics. 
In the past two decades, several publicly available human action and activity datasets have been reported, based on modalities, view, actors, actions, and applications. The objective of this survey paper is to outline the different types of video datasets and highlight their merits and demerits under practical considerations. Based on the information available inside each dataset, we can categorise these datasets into RGB (Red, Green, and Blue) and RGB-D (depth). The most prominent challenges involved in these datasets are occlusions, illumination variation, view variation, annotation, and fusion of modalities. The key specifications of these datasets are discussed, such as resolution, frame rate, actions/actors, background, and application domain. We have also presented, in tabular form, the state-of-the-art algorithms that give the best performance on such datasets. In comparison with earlier surveys, our work gives a better-organised comparison of existing datasets, their challenges, and the latest evaluation techniques.", "title": "" }, { "docid": "1dc07b02a70821fdbaa9911755d1e4b0", "text": "The AROMA project is exploring the kind of awareness that people are effortlessly able to maintain about other beings who are located physically close. We are designing technology that attempts to mediate a similar kind of awareness among people who are geographically dispersed but want to stay better in touch. AROMA technology can be thought of as a stand-alone communication device or, more likely, an augmentation of existing technologies such as the telephone or full-blown media spaces. Our approach differs from other recent designs for awareness (a) by choosing pure abstract representations on the display site, (b) by possibly remapping the signal across media between capture and display, and, finally, (c) by explicitly extending the application domain to include more than the working life, to embrace social interaction in general. We are building a series of prototypes to learn if abstract representation of activity data does indeed convey a sense of remote presence and does so in a sufficiently subdued manner to allow the user to concentrate on his or her main activity. We have done some initial testing of the technical feasibility of our designs. What still remains is an extensive effort of designing a symbolic language of remote presence, done in parallel with studies of how people will connect and communicate through such a language as they live with the AROMA system.", "title": "" }, { "docid": "d46172afedf3e86d64ee3c7dcfbd5c3c", "text": "This paper compares the radial vibration forces in 10-pole/12-slot fractional-slot SPM and IPM machines which are designed to produce the same output torque, and employ an identical stator but different SPM, V-shape and arc-shape IPM rotor topologies. The airgap field and radial vibration force density distribution as a function of angular position and corresponding space harmonics (vibration modes) are analysed using the finite element method together with the frozen permeability technique. It is shown that not only is the lowest harmonic of radial force in the IPM machine much higher, but the (2p)th harmonic of radial force in the IPM machine is also higher than that in the SPM machine.", "title": "" }, { "docid": "69de2f8098a0618c75baeb259cb94ca1", "text": "Medicine may stand at the cusp of a mobile transformation. 
Mobile health, or “mHealth,” is the use of portable devices such as smartphones and tablets for medical purposes, including diagnosis, treatment, or support of general health and well-being. Users can interface with mobile devices through software applications (“apps”) that typically gather input from interactive questionnaires, separate medical devices connected to the mobile device, or functionalities of the device itself, such as its camera, motion sensor, or microphone. Apps may even process these data with the use of medical algorithms or calculators to generate customized diagnoses and treatment recommendations. Mobile devices make it possible to collect more granular patient data than can be collected from devices that are typically used in hospitals or physicians’ offices. The experiences of a single patient can then be measured against large data sets to provide timely recommendations about managing both acute symptoms and chronic conditions.1,2 To give but a few examples: One app allows users who have diabetes to plug glucometers into their iPhones as it tracks insulin doses and sends alerts for abnormally high or low blood sugar levels.3,4 Another app allows patients to use their smartphones to record electrocardiograms,5 using a single lead that snaps to the back of the phone. Users can hold the phone against their chests, record cardiac events, and transmit results to their cardiologists.6 An imaging app allows users to analyze diagnostic images in multiple modalities, including positronemission tomography, computed tomography, magnetic resonance imaging, and ultrasonography.7 An even greater number of mHealth products perform health-management functions, such as medication reminders and symptom checkers, or administrative functions, such as patient scheduling and billing. The volume and variety of mHealth products are already immense and defy any strict taxonomy. More than 97,000 mHealth apps were available as of March 2013, according to one estimate.8 The number of mHealth apps, downloads, and users almost doubles every year.9 Some observers predict that by 2018 there could be 1.7 billion mHealth users worldwide.8 Thus, mHealth technologies could have a profound effect on patient care. However, mHealth has also become a challenge for the Food and Drug Administration (FDA), the regulator responsible for ensuring that medical devices are safe and effective. The FDA’s oversight of mHealth devices has been controversial to members of Congress and industry,10 who worry that “applying a complex regulatory framework could inhibit future growth and innovation in this promising market.”11 But such oversight has become increasingly important. A bewildering array of mHealth products can make it difficult for individual patients or physicians to evaluate their quality or utility. In recent years, a number of bills have been proposed in Congress to change FDA jurisdiction over mHealth products, and in April 2014, a key federal advisory committee laid out its recommendations for regulating mHealth and other health-information technologies.12 With momentum toward legislation building, this article focuses on the public health benefits and risks of mHealth devices under FDA jurisdiction and considers how to best use the FDA’s authority.", "title": "" }, { "docid": "2595c67531f0da4449f5914cac3488a7", "text": "In this paper we present a novel interaction metaphor for handheld projectors we label MotionBeam. 
We detail a number of interaction techniques that utilize the physical movement of a handheld projector to better express the motion and physicality of projected objects. Finally we present the first iteration of a projected character design that uses the MotionBeam metaphor for user interaction.", "title": "" } ]
scidocsrr
f85919d864264c7f1266b68b1291cd28
Predicting Billboard Success Using Data-Mining in P2P Networks
[ { "docid": "66f684ba92fe735fecfbfb53571bad5f", "text": "Some empirical learning tasks are concerned with predicting values rather than the more familiar categories. This paper describes a new system, m5, that constructs tree-based piecewise linear models. Four case studies are presented in which m5 is compared to other methods.", "title": "" } ]
[ { "docid": "f70ff7f71ff2424fbcfea69d63a19de0", "text": "We propose a method for learning similaritypreserving hash functions that map highdimensional data onto binary codes. The formulation is based on structured prediction with latent variables and a hinge-like loss function. It is efficient to train for large datasets, scales well to large code lengths, and outperforms state-of-the-art methods.", "title": "" }, { "docid": "c69e805751421b516e084498e7fc6f44", "text": "We investigate two extremal problems for polynomials giving upper bounds for spherical codes and for polynomials giving lower bounds for spherical designs, respectively. We consider two basic properties of the solutions of these problems. Namely, we estimate from below the number of double zeros and find zero Gegenbauer coefficients of extremal polynomials. Our results allow us to search effectively for such solutions using a computer. The best polynomials we have obtained give substantial improvements in some cases on the previously known bounds for spherical codes and designs. Some examples are given in Section 6.", "title": "" }, { "docid": "99574bec7125cfa9e2ebc19bb6bb4bf5", "text": "Health care delivery and education has become a challenge for providers. Nurses and other professionals are challenged daily to assure that the patient has the necessary information to make informed decisions. Patients and their families are given a multitude of information about their health and commonly must make important decisions from these facts. Obstacles that prevent easy delivery of health care information include literacy, culture, language, and physiological barriers. It is up to the nurse to assess and evaluate the patient's learning needs and readiness to learn because everyone learns differently. This article will examine how each of these barriers impact care delivery along with teaching and learning strategies will be examined.", "title": "" }, { "docid": "f381cce9e26441779b2741e19875f0d9", "text": "Human affect recognition is the field of study associated with using automatic techniques to identify human emotion or human affective state. A person's affective states is often communicated non-verbally through body language. A large part of human body language communication is the use of head gestures. Almost all cultures use subtle head movements to convey meaning. Two of the most common and distinct head gestures are the head nod and the head shake gestures. In this paper we present a robust system to automatically detect head nod and shakes. We employ the Microsoft Kinect and utilise discrete Hidden Markov Models (HMMs) as the backbone to a machine learning based classifier within the system. The system achieves 86% accuracy on test datasets and results are provided.", "title": "" }, { "docid": "e1885f9c373c355a4df9307c6d90bf83", "text": "Ricinulei possess movable, slender pedipalps with small chelae. When ricinuleids walk, they occasionally touch the soil surface with the tips of their pedipalps. This behavior is similar to the exploration movements they perform with their elongated second legs. We studied the distal areas of the pedipalps of the cavernicolous Mexican species Pseudocellus pearsei with scanning and transmission electron microscopy. 
Five different surface structures are characteristic for the pedipalps: (1) slender sigmoidal setae with smooth shafts resembling gustatory terminal pore single-walled (tp-sw) sensilla; (2) conspicuous long, mechanoreceptive slit sensilla; (3) a single, short, clubbed seta inside a deep pit representing a no pore single walled (np-sw) sensillum; (4) a single pore organ containing one olfactory wall pore single-walled (wp-sw) sensillum; and (5) gustatory terminal pore sensilla in the fingers of the pedipalp chela. Additionally, the pedipalps bear sensilla which also occur on the other appendages. With this sensory equipment, the pedipalps are highly effective multimodal short range sensory organs which complement the long range sensory function of the second legs. In order to present the complete sensory equipment of all appendages of the investigated Pseudocellus a comparative overview is provided.", "title": "" }, { "docid": "799573bf08fb91b1ac644c979741e7d2", "text": "This short paper reports the method and the evaluation results of Casio and Shinshu University joint team for the ISBI Challenge 2017 – Skin Lesion Analysis Towards Melanoma Detection – Part 3: Lesion Classification hosted by ISIC. Our online validation score was 0.958 with melanoma classifier AUC 0.924 and seborrheic keratosis classifier AUC 0.993.", "title": "" }, { "docid": "095dbdc1ac804487235cdd0aeffe8233", "text": "Sentiment analysis is the task of identifying whether the opinion expressed in a document is positive or negative about a given topic. Unfortunately, many of the potential applications of sentiment analysis are currently infeasible due to the huge number of features found in standard corpora. In this paper we systematically evaluate a range of feature selectors and feature weights with both Naı̈ve Bayes and Support Vector Machine classifiers. This includes the introduction of two new feature selection methods and three new feature weighting methods. Our results show that it is possible to maintain a state-of-the art classification accuracy of 87.15% while using less than 36% of the features.", "title": "" }, { "docid": "aaff9bc2844f2631e11944e049190ba4", "text": "There has been little work on examining how deep neural networks may be adapted to speakers for improved speech recognition accuracy. Past work has examined using a discriminatively trained affine transformation of the input features applied at a frame level or the re-training of the entire shallow network for a specific speaker. This work explores how deep neural networks may be adapted to speakers by re-training the input layer, the output layer or the entire network. We look at how L2 regularization using weight decay to the speaker independent model improves generalization. Other training factors are examined including the role momentum plays and stochastic mini-batch versus batch training. While improvements are significant for smaller networks, the largest show little gain from adaptation on a large vocabulary mobile speech recognition task.", "title": "" }, { "docid": "26787002ed12cc73a3920f2851449c5e", "text": "This article brings together three current themes in organizational behavior: (1) a renewed interest in assessing person-situation interactional constructs, (2) the quantitative assessment of organizational culture, and (3) the application of \"Q-sort,\" or template-matching, approaches to assessing person-situation interactions. Using longitudinal data from accountants and M.B.A. 
students and cross-sectional data from employees of government agencies and public accounting firms, we developed and validated an instrument for assessing person-organization fit, the Organizational Culture Profile (OCP). Results suggest that the dimensionality of individual preferences for organizational cultures and the existence of these cultures are interpretable. Further, person-organization fit predicts job satisfaction and organizational commitment a year after fit was measured and actual turnover after two years. This evidence attests to the importance of understanding the fit between individuals' preferences and organizational cultures.", "title": "" }, { "docid": "52f95d1c0e198c64455269fd09108703", "text": "Dynamic control theory has long been used in solving optimal asset allocation problems, and a number of trading decision systems based on reinforcement learning methods have been applied in asset allocation and portfolio rebalancing. In this paper, we extend the existing work in recurrent reinforcement learning (RRL) and build an optimal variable weight portfolio allocation under a coherent downside risk measure, the expected maximum drawdown, E(MDD). In particular, we propose a recurrent reinforcement learning method, with a coherent risk adjusted performance objective function, the Calmar ratio, to obtain both buy and sell signals and asset allocation weights. Using a portfolio consisting of the most frequently traded exchange-traded funds, we show that the expected maximum drawdown risk based objective function yields superior return performance compared to previously proposed RRL objective functions (i.e. the Sharpe ratio and the Sterling ratio), and that variable weight RRL long/short portfolios outperform equal weight RRL long/short portfolios under different transaction cost scenarios. We further propose an adaptive E(MDD) risk based RRL portfolio rebalancing decision system with a transaction cost and market condition stop-loss retraining mechanism.", "title": "" }, { "docid": "e03d8f990cfcb07d8088681c3811b542", "text": "The environments in which we live and the tasks we must perform to survive and reproduce have shaped the design of our perceptual systems through evolution and experience. Therefore, direct measurement of the statistical regularities in natural environments (scenes) has great potential value for advancing our understanding of visual perception. This review begins with a general discussion of the natural scene statistics approach, of the different kinds of statistics that can be measured, and of some existing measurement techniques. This is followed by a summary of the natural scene statistics measured over the past 20 years. Finally, there is a summary of the hypotheses, models, and experiments that have emerged from the analysis of natural scene statistics.", "title": "" }, { "docid": "6c7284ca77809210601c213ee8a685bb", "text": "Patients with non-small cell lung cancer (NSCLC) require careful staging at the time of diagnosis to determine prognosis and guide treatment recommendations. 

The seventh edition of the TNM Classification of Malignant Tumors is scheduled to be published in 2009 and the International Association for the Study of Lung Cancer (IASLC) created the Lung Cancer Staging Project (LCSP) to guide revisions to the current lung cancer staging system. These recommendations will be submitted to the American Joint Committee on Cancer (AJCC) and to the Union Internationale Contre le Cancer (UICC) for consideration in the upcoming edition of the staging manual. Data from over 100,000 patients with lung cancer were submitted for analysis and several modifications were suggested for the T descriptors and the M descriptors although the current N descriptors remain unchanged. These recommendations will further define homogeneous patient subsets with similar survival rates. More importantly, these revisions will help guide clinicians in making optimal, stage-specific, treatment recommendations.", "title": "" }, { "docid": "7a356a485b46c6fc712a0174947e142e", "text": "A systematic review of the literature related to effective occupational therapy interventions in rehabilitation of individuals with work-related forearm, wrist, and hand injuries and illnesses was conducted as part of the Evidence-Based Literature Review Project of the American Occupational Therapy Association. This review provides a comprehensive overview and analysis of 36 studies that addressed many of the interventions commonly used in hand rehabilitation. Findings reveal that the use of occupation-based activities has reasonable yet limited evidence to support its effectiveness. This review supports the premise that many client factors can be positively affected through the use of several commonly used occupational therapy-related modalities and methods. The implications for occupational therapy practice, research, and education and limitations of reviewed studies are also discussed.", "title": "" }, { "docid": "aa1c565018371cf12e703e06f430776b", "text": "We propose a graph-based semantic model for representing document content. Our method relies on the use of a semantic network, namely the DBpedia knowledge base, for acquiring fine-grained information about entities and their semantic relations, thus resulting in a knowledge-rich document model. We demonstrate the benefits of these semantic representations in two tasks: entity ranking and computing document semantic similarity. To this end, we couple DBpedia's structure with an information-theoretic measure of concept association, based on its explicit semantic relations, and compute semantic similarity using a Graph Edit Distance based measure, which finds the optimal matching between the documents' entities using the Hungarian method. Experimental results show that our general model outperforms baselines built on top of traditional methods, and achieves a performance close to that of highly specialized methods that have been tuned to these specific tasks.", "title": "" }, { "docid": "a825bab34866182aa585e079a1596b92", "text": "Decision theory formally solves the problem of rational agents in uncertain worlds if the true environmental prior probability distribution is known. Solomonoff’s theory of universal induction formally solves the problem of sequence prediction for unknown prior distribution. We combine both ideas and get a parameterless theory of universal Artificial Intelligence. We give strong arguments that the resulting AIξ model is the most intelligent unbiased agent possible. 
We outline for a number of problem classes, including sequence prediction, strategic games, function minimization, reinforcement and supervised learning, how the AIξ model can formally solve them. The major drawback of the AIξ model is that it is uncomputable. To overcome this problem, we construct a modified algorithm AIξtl, which is still effectively more intelligent than any other time t and space l bounded agent. The computation time of AIξtl is of the order t · 2^l. Other discussed topics are formal definitions of intelligence order relations, the horizon problem and relations of the AIξ theory to other AI approaches. Any response to marcus@hutter1.de is welcome.", "title": "" }, { "docid": "ee73847c9dd27672c9860219c293b8dd", "text": "Sensing cost and data quality are two primary concerns in mobile crowd sensing. In this article, we propose a new crowd sensing paradigm, sparse mobile crowd sensing, which leverages the spatial and temporal correlation among the data sensed in different sub-areas to significantly reduce the required number of sensing tasks allocated, thus lowering overall sensing cost (e.g., smartphone energy consumption and incentives) while ensuring data quality. Sparse mobile crowdsensing applications intelligently select only a small portion of the target area for sensing while inferring the data of the remaining unsensed area with high accuracy. We discuss the fundamental research challenges in sparse mobile crowdsensing, and design a general framework with potential solutions to the challenges. To verify the effectiveness of the proposed framework, a sparse mobile crowdsensing prototype for temperature and traffic monitoring is implemented and evaluated. With several future research directions identified in sparse mobile crowdsensing, we expect that more research interest will be stimulated in this novel crowdsensing paradigm.", "title": "" }, { "docid": "36e72fe58858b4caf4860a3bba5fced4", "text": "When operating over extended periods of time, an autonomous system will inevitably be faced with severe changes in the appearance of its environment. Coping with such changes is more and more in the focus of current robotics research. In this paper, we foster the development of robust place recognition algorithms in changing environments by describing a new dataset that was recorded during a 728 km long journey in spring, summer, fall, and winter. Approximately 40 hours of full-HD video cover extreme seasonal changes over almost 3000 km in both natural and man-made environments. Furthermore, accurate ground truth information is provided. To our knowledge, this is by far the largest SLAM dataset available at the moment. In addition, we introduce an open source Matlab implementation of the recently published SeqSLAM algorithm and make it available to the community. We benchmark SeqSLAM using the novel dataset and analyse the influence of important parameters and algorithmic steps.", "title": "" }, { "docid": "e5fc30045f458f84435363349d22204d", "text": "Today, root cause analysis of failures in data centers is mostly done through manual inspection. More often than not, customers blame the network as the culprit. However, other components of the system might have caused these failures. To troubleshoot, huge volumes of data are collected over the entire data center. Correlating such large volumes of diverse data collected from different vantage points is a daunting task even for the most skilled technicians. 

In this paper, we revisit the question: how much can you infer about a failure in the data center using TCP statistics collected at one of the endpoints? Using an agent that captures TCP statistics, we devised a classification algorithm that identifies the root cause of failure using this information at a single endpoint. Using insights derived from this classification algorithm, we identify dominant TCP metrics that indicate where/why problems occur in the network. We validate and test these methods using data that we collect over a period of six months in a production data center.", "title": "" }, { "docid": "33ae11cfc67a9afe34483444a03bfd5a", "text": "In today's interconnected digital world, targeted attacks have become a serious threat to conventional computer systems and critical infrastructure alike. Many researchers contribute to the fight against network intrusions or malicious software by proposing novel detection systems or analysis methods. However, few of these solutions have a particular focus on Advanced Persistent Threats or similarly sophisticated multi-stage attacks. This turns finding domain-appropriate methodologies or developing new approaches into a major research challenge. To overcome these obstacles, we present a structured review of semantics-aware works that have a high potential for contributing to the analysis or detection of targeted attacks. We introduce a detailed literature evaluation schema in addition to a highly granular model for article categorization. Out of 123 identified papers, 60 were found to be relevant in the context of this study. The selected articles are comprehensively reviewed and assessed in accordance with Kitchenham's guidelines for systematic literature reviews. In conclusion, we combine new insights and the status quo of current research into the concept of an ideal systemic approach capable of semantically processing and evaluating information from different observation points.", "title": "" }, { "docid": "ea8256df8504cd392f98d92612e4a9a0", "text": "Employment specialists play a pivotal role in assisting youth and adults with disabilities in finding and retaining jobs. This requires a unique combination of skills, competencies and personal attributes. While the fields of career counseling, vocational rehabilitation and special education transition have documented the ideal skill sets needed to achieve desired outcomes, the authors characterize these as essential mechanics. What have not been examined are the personal qualities that effective employment specialists possess. Theorizing that these successful professionals exhibit traits and behaviors beyond the mechanics, the authors conducted a qualitative study incorporating in-depth interviews with 17 top-performing staff of a highly successful national program, The Marriott Foundation's Bridges from school to work. Four personal attributes emerged from the interviews: (a) principled optimism; (b) cultural competence; (c) business-oriented professionalism; and (d) networking savvy. In presenting these findings, the authors discuss the implications for recruitment, hiring, training, and advancing truly effective employment specialists, and offer recommendations for further research.", "title": "" } ]
scidocsrr
3639e5a245922d1dec3cdca188c5b5be
Knowledge, Motivation, and Adaptive Behavior: A Framework for Improving Selling Effectiveness
[ { "docid": "e6c32d3fd1bdbfb2cc8742c9b670ce97", "text": "A framework for skill acquisition is proposed that includes two major stages in the development of a cognitive skill: a declarative stage in which facts about the skill domain are interpreted and a procedural stage in which the domain knowledge is directly embodied in procedures for performing the skill. This general framework has been instantiated in the ACT system in which facts are encoded in a propositional network and procedures are encoded as productions. Knowledge compilation is the process by which the skill transits from the declarative stage to the procedural stage. It consists of the subprocesses of composition, which collapses sequences of productions into single productions, and proceduralization, which embeds factual knowledge into productions. Once proceduralized, further learning processes operate on the skill to make the productions more selective in their range of applications. These processes include generalization, discrimination, and strengthening of productions. Comparisons are made to similar concepts from past learning theories. How these learning mechanisms apply to produce the power law speedup in processing time with practice is discussed.", "title": "" } ]
[ { "docid": "86e646b845384d3cfbb146075be5c02a", "text": "Content-Based Image Retrieval (CBIR) has become one of the most active research areas in the past few years. Many visual feature representations have been explored and many systems built. While these research e orts establish the basis of CBIR, the usefulness of the proposed approaches is limited. Speci cally, these e orts have relatively ignored two distinct characteristics of CBIR systems: (1) the gap between high level concepts and low level features; (2) subjectivity of human perception of visual content. This paper proposes a relevance feedback based interactive retrieval approach, which e ectively takes into account the above two characteristics in CBIR. During the retrieval process, the user's high level query and perception subjectivity are captured by dynamically updated weights based on the user's relevance feedback. The experimental results show that the proposed approach greatly reduces the user's e ort of composing a query and captures the user's information need more precisely.", "title": "" }, { "docid": "4ec7af75127df22c9cb7bd279cb2bcf3", "text": "This paper describes a real-time walking control system developed for the biped robots JOHNNIE and LOLA. Walking trajectories are planned on-line using a simplified robot model and modified by a stabilizing controller. The controller uses hybrid position/force control in task space based on a resolved motion rate scheme. Inertial stabilization is achieved by modifying the contact force trajectories. The paper includes an analysis of the dynamics of controlled bipeds, which is the basis for the proposed control system. The system was tested both in forward dynamics simulations and in experiments with JOHNNIE.", "title": "" }, { "docid": "64d755d95353a66ec967c7f74aaf2232", "text": "Purpose: Platinum-based drugs, in particular cisplatin (cis-diamminedichloridoplatinum(II), CDDP), are used for treatment of squamous cell carcinoma of the head and neck (SCCHN). Despite initial responses, CDDP treatment often results in chemoresistance, leading to therapeutic failure. The role of primary resistance at subclonal level and treatment-induced clonal selection in the development of CDDP resistance remains unknown.Experimental Design: By applying targeted next-generation sequencing, fluorescence in situ hybridization, microarray-based transcriptome, and mass spectrometry-based phosphoproteome analysis to the CDDP-sensitive SCCHN cell line FaDu, a CDDP-resistant subline, and single-cell derived subclones, the molecular basis of CDDP resistance was elucidated. The causal relationship between molecular features and resistant phenotypes was determined by siRNA-based gene silencing. The clinical relevance of molecular findings was validated in patients with SCCHN with recurrence after CDDP-based chemoradiation and the TCGA SCCHN dataset.Results: Evidence of primary resistance at clonal level and clonal selection by long-term CDDP treatment was established in the FaDu model. Resistance was associated with aneuploidy of chromosome 17, increased TP53 copy-numbers and overexpression of the gain-of-function (GOF) mutant variant p53R248L siRNA-mediated knockdown established a causal relationship between mutant p53R248L and CDDP resistance. Resistant clones were also characterized by increased activity of the PI3K-AKT-mTOR pathway. 
The poor prognostic value of GOF TP53 variants and mTOR pathway upregulation was confirmed in the TCGA SCCHN cohort.Conclusions: Our study demonstrates a link of intratumoral heterogeneity and clonal evolution as important mechanisms of drug resistance in SCCHN and establishes mutant GOF TP53 variants and the PI3K/mTOR pathway as molecular targets for treatment optimization. Clin Cancer Res; 24(1); 158-68. ©2017 AACR.", "title": "" }, { "docid": "3d9c02413c80913cb32b5094dcf61843", "text": "There is an explosion of youth subscriptions to original content-media-sharing Web sites such as YouTube. These Web sites combine media production and distribution with social networking features, making them an ideal place to create, connect, collaborate, and circulate. By encouraging youth to become media creators and social networkers, new media platforms such as YouTube offer a participatory culture in which youth can develop, interact, and learn. As youth development researchers, we must be cognizant of this context and critically examine what this platform offers that might be unique to (or redundant of) typical adolescent experiences in other developmental contexts.", "title": "" }, { "docid": "7ec6790b96e9185bf822eea3a27ad7ab", "text": "Multi-level converter architectures have been explored for a variety of applications including high-power DC-AC inverters and DC-DC converters. In this work, we explore flying-capacitor multi-level (FCML) DC-DC topologies as a class of hybrid switched-capacitor/inductive converter. Compared to other candidate architectures in this area (e.g. Series-Parallel, Dickson), FCML converters have notable advantages such as the use of single-rated low-voltage switches, potentially lower switching loss, lower passive component volume, and enable regulation across the full VDD-VOUT range. It is shown that multimode operation, including previously published resonant and dynamic off-time modulation, form a single set of techniques that can be used to extend high efficiency over a wide power density range. Some of the general operating considerations of FCML converters, such as the challenge of maintaining voltage balance on flying capacitors, are shown to be of equal concern in other soft-switched SC converter topologies. Experimental verification from a 24V:12V, 3-level converter is presented to show multimode operation with a nominally 2:1 topology. A second 50V:7V 4-level FCML converter demonstrates operation with variable regulation. A method is presented to balance flying capacitor voltages through low frequency closed-loop feedback.", "title": "" }, { "docid": "7192e2ae32eb79aaefdf8e54cdbba715", "text": "Recently, ridge gap waveguides are considered as guiding structures in high-frequency applications. One of the major problems facing this guiding structure is the limited ability of using all the possible bandwidths due to the limited bandwidth of the transition to the coaxial lines. Here, a review of the different excitation techniques associated with this guiding structure is presented. Next, some modifications are proposed to improve its response in order to cover the possible actual bandwidth. The major aim of this paper is to introduce a wideband coaxial to ridge gap waveguide transition based on five sections of matching networks. 
The introduced transition shows excellent return loss, which is better than 15 dB over the actual possible bandwidth for double transitions.", "title": "" }, { "docid": "74ad888a96e6dd43bc5f909623f72e43", "text": "The goal of this roadmap paper is to summarize the state-of-the-art and identify research challenges when developing, deploying and managing self-adaptive software systems. Instead of dealing with a wide range of topics associated with the field, we focus on four essential topics of self-adaptation: design space for adaptive solutions, processes, from centralized to decentralized control, and practical run-time verification and validation. For each topic, we present an overview, suggest future directions, and focus on selected challenges. This paper complements and extends a previous roadmap on software engineering for self-adaptive systems published in 2009 covering a different set of topics, and reflecting in part on the previous paper. This roadmap is one of the many results of the Dagstuhl Seminar 10431 on Software Engineering for Self-Adaptive Systems, which took place in October 2010.", "title": "" }, { "docid": "92e62d56458c3e7c4cd845e1de94178f", "text": "We introduce a generic framework that reduces the computational cost of object detection while retaining accuracy for scenarios where objects with varied sizes appear in high resolution images. Detection progresses in a coarse-to-fine manner, first on a down-sampled version of the image and then on a sequence of higher resolution regions identified as likely to improve the detection accuracy. Built upon reinforcement learning, our approach consists of a model (R-net) that uses coarse detection results to predict the potential accuracy gain for analyzing a region at a higher resolution and another model (Q-net) that sequentially selects regions to zoom in. Experiments on the Caltech Pedestrians dataset show that our approach reduces the number of processed pixels by over 50% without a drop in detection accuracy. The merits of our approach become more significant on a high resolution test set collected from YFCC100M dataset, where our approach maintains high detection performance while reducing the number of processed pixels by about 70% and the detection time by over 50%.", "title": "" }, { "docid": "c8a9919a2df2cfd730816cd0171f08dd", "text": "In this paper, we propose a new deep network that learns multi-level deep representations for image emotion classification (MldrNet). Image emotion can be recognized through image semantics, image aesthetics and low-level visual features from both global and local views. Existing image emotion classification works using hand-crafted features or deep features mainly focus on either low-level visual features or semantic-level image representations without taking all factors into consideration. Our proposed MldrNet unifies deep representations of three levels, i.e. image semantics, image aesthetics and low-level visual features through multiple instance learning (MIL) in order to effectively cope with noisy labeled data, such as images collected from the Internet. Extensive experiments on both Internet images and abstract paintings demonstrate the proposed method outperforms the state-of-the-art methods using deep features or hand-crafted features. 

The proposed approach also outperforms the state-of-the-art methods with at least 6% performance improvement in terms of overall classification accuracy.", "title": "" }, { "docid": "fcbd256ad05ef96c9f2997fbfbace473", "text": "The Internet of Things (IoT) envisions a world-wide, interconnected network of smart physical entities. These physical entities generate a large amount of data in operation, and as the IoT gains momentum in terms of deployment, the combined scale of those data seems destined to continue to grow. Increasingly, applications for the IoT involve analytics. Data analytics is the process of deriving knowledge from data, generating value like actionable insights from them. This article reviews work in the IoT and big data analytics from the perspective of their utility in creating efficient, effective, and innovative applications and services for a wide spectrum of domains. We review the broad vision for the IoT as it is shaped in various communities, examine the application of data analytics across IoT domains, provide a categorisation of analytic approaches, and propose a layered taxonomy from IoT data to analytics. This taxonomy provides us with insights on the appropriateness of analytical techniques, which in turn shapes a survey of enabling technology and infrastructure for IoT analytics. Finally, we look at some tradeoffs for analytics in the IoT that can shape future research.", "title": "" }, { "docid": "a6cf26910cb0cff08b390a1814cc2a40", "text": "Many rural roads lack sharp, smoothly curving edges and a homogeneous surface appearance, hampering traditional vision-based road-following methods. However, they often have strong texture cues parallel to the road direction in the form of ruts and tracks left by other vehicles. In this paper, we describe an algorithm for following ill-structured roads in which dominant texture orientations computed with multi-scale Gabor wavelet filters vote for a consensus road vanishing point location. In-plane road curvature and out-of-plane undulation are estimated in each image by tracking the vanishing point indicated by a horizontal image strip as it moves up toward the putative vanishing line. Particle filtering is also used to track the vanishing point sequence induced by road curvature from image to image. Results are shown for vanishing point localization on a variety of road scenes ranging from gravel roads to dirt trails to highways.", "title": "" }, { "docid": "77666dea1c0788352d0172a4a3395d59", "text": "A top-down page segmentation technique known as the recursive X-Y cut decomposes a document image recursively into a set of rectangular blocks. This paper proposes that the recursive X-Y cut be implemented using bounding boxes of connected components of black pixels instead of using image pixels. The advantage is that great improvement can be achieved in computation. In fact, once bounding boxes of connected components are obtained, the recursive X-Y cut is completed within an order of a second on Sparc-10 workstations for letter-sized document images scanned at 300 dpi resolution. keywords: page segmentation, recursive X-Y cut, projection profile, connected components", "title": "" }, { "docid": "545509f9e3aa65921a7d6faa41247ae6", "text": "BACKGROUND\nPenicillins inhibit cell wall synthesis; therefore, Helicobacter pylori must be dividing for this class of antibiotics to be effective in eradication therapy. 

Identifying growth responses to varying medium pH may allow design of more effective treatment regimens.\n\n\nAIM\nTo determine the effects of acidity on bacterial growth and the bactericidal efficacy of ampicillin.\n\n\nMETHODS\nH. pylori were incubated in dialysis chambers suspended in 1.5-L of media at various pHs with 5 mM urea, with or without ampicillin, for 4, 8 or 16 h, thus mimicking unbuffered gastric juice. Changes in gene expression, viability and survival were determined.\n\n\nRESULTS\nAt pH 3.0, but not at pH 4.5 or 7.4, there was decreased expression of ~400 genes, including many cell envelope biosynthesis, cell division and penicillin-binding protein genes. Ampicillin was bactericidal at pH 4.5 and 7.4, but not at pH 3.0.\n\n\nCONCLUSIONS\nAmpicillin is bactericidal at pH 4.5 and 7.4, but not at pH 3.0, due to decreased expression of cell envelope and division genes with loss of cell division at pH 3.0. Therefore, at pH 3.0, the likely pH at the gastric surface, the bacteria are nondividing and persist with ampicillin treatment. A more effective inhibitor of acid secretion that maintains gastric pH near neutrality for 24 h/day should enhance the efficacy of amoxicillin, improving triple therapy and likely even allowing dual amoxicillin-based therapy for H. pylori eradication.", "title": "" }, { "docid": "44a6cfa975745624ae4bebec17702d2a", "text": "OBJECTIVE\nTo evaluate the performance of the International Ovarian Tumor Analysis (IOTA) ADNEX model in the preoperative discrimination between benign ovarian (including tubal and para-ovarian) tumors, borderline ovarian tumors (BOT), Stage I ovarian cancer (OC), Stage II-IV OC and ovarian metastasis in a gynecological oncology center in Brazil.\n\n\nMETHODS\nThis was a diagnostic accuracy study including 131 women with an adnexal mass invited to participate between February 2014 and November 2015. Before surgery, pelvic ultrasound examination was performed and serum levels of tumor marker CA 125 were measured in all women. Adnexal masses were classified according to the IOTA ADNEX model. Histopathological diagnosis was the gold standard. Receiver-operating characteristics (ROC) curve analysis was used to determine the diagnostic accuracy of the model to classify tumors into different histological types.\n\n\nRESULTS\nOf 131 women, 63 (48.1%) had a benign ovarian tumor, 16 (12.2%) had a BOT, 17 (13.0%) had Stage I OC, 24 (18.3%) had Stage II-IV OC and 11 (8.4%) had ovarian metastasis. The area under the ROC curve (AUC) was 0.92 (95% CI, 0.88-0.97) for the basic discrimination between benign vs malignant tumors using the IOTA ADNEX model. Performance was high for the discrimination between benign vs Stage II-IV OC, BOT vs Stage II-IV OC and Stage I OC vs Stage II-IV OC, with AUCs of 0.99, 0.97 and 0.94, respectively. Performance was poor for the differentiation between BOT vs Stage I OC and between Stage I OC vs ovarian metastasis with AUCs of 0.64.\n\n\nCONCLUSION\nThe majority of adnexal masses in our study were classified correctly using the IOTA ADNEX model. On the basis of our findings, we would expect the model to aid in the management of women with an adnexal mass presenting to a gynecological oncology center. Copyright © 2016 ISUOG. 
Published by John Wiley & Sons Ltd.", "title": "" }, { "docid": "aae97dd982300accb15c05f9aa9202cd", "text": "Personal robots and robot technology (RT)-based assistive devices are expected to play a major role in our elderly-dominated society, actively participating in joint work and community life with humans as partners and friends. The authors think that the emotion expression of a robot is effective in joint activities of humans and robots. In addition, we also think that bipedal walking is necessary for robots that are active in human living environments. However, there was no robot with those functions, and it is not clear which kinds of functions are actually effective. Therefore, we developed a new bipedal walking robot that is capable of expressing emotions. In this paper, we present the design and the preliminary evaluation of the new head of the robot, which has only a small number of degrees of freedom for facial expression.", "title": "" }, { "docid": "02f62ec1ea8b7dba6d3a5d4ea08abe2d", "text": "MicroRNAs (miRNAs) are short, 22–25 nucleotide long transcripts that may suppress entire signaling pathways by interacting with the 3'-untranslated region (3'-UTR) of coding mRNA targets, interrupting translation and inducing degradation of these targets. The long 3'-UTRs of brain transcripts compared to other tissues predict important roles for brain miRNAs. Supporting this notion, we found that brain miRNAs co-evolved with their target transcripts, that non-coding pseudogenes with miRNA recognition elements compete with brain coding mRNAs on their miRNA interactions, and that Single Nucleotide Polymorphisms (SNPs) on such pseudogenes are enriched in mental diseases including autism and schizophrenia, but not Alzheimer's disease (AD). Focusing on evolutionarily conserved and primate-specific miRNA controllers of cholinergic signaling ('CholinomiRs'), we find modified CholinomiR levels in the brain and/or nucleated blood cells of patients with AD and Parkinson's disease, with treatment-related differences in their levels and prominent impact on the cognitive and anti-inflammatory consequences of cholinergic signals. Examples include the acetylcholinesterase (AChE)-targeted evolutionarily conserved miR-132, whose levels decline drastically in the AD brain. Furthermore, we found that interruption of AChE mRNA's interaction with the primate-specific CholinomiR-608 in carriers of a SNP in the AChE's miR-608 binding site induces domino-like effects that reduce the levels of many other miR-608 targets. Young, healthy carriers of this SNP express 40% higher brain AChE activity than others, potentially affecting the responsiveness to AD's anti-AChE therapeutics, and show elevated trait anxiety, inflammation and hypertension. Non-coding regions affecting miRNA-target interactions in neurodegenerative brains thus merit special attention.", "title": "" }, { "docid": "c29b91a5b580a620bb245519695a6cd9", "text": "It is commonly believed that datacenter networking software must sacrifice generality to attain high performance. The popularity of specialized distributed systems designed specifically for niche technologies such as RDMA, lossless networks, FPGAs, and programmable switches testifies to this belief. In this paper, we show that such specialization is unnecessary. 

eRPC is a new general-purpose remote procedure call (RPC) library that offers performance comparable to specialized systems, while running on commodity CPUs in traditional datacenter networks based on either lossy Ethernet or lossless fabrics. eRPC performs well in three key metrics: message rate for small messages; bandwidth for large messages; and scalability to a large number of nodes and CPU cores. It handles packet loss, congestion, and background request execution. In microbenchmarks, one CPU core can handle up to 5 million small eRPC requests per second, or saturate a 40 Gbps link with large messages. We port a production-grade implementation of Raft state machine replication to eRPC without modifying the core Raft source code. We achieve 5.5 μs of replication latency on lossy Ethernet, which is faster or comparable to specialized replication systems that use programmable switches, FPGAs, or RDMA.", "title": "" }, { "docid": "d88067f2dbcd55dae083134b5eeb7868", "text": "Current state-of-the-art human activity recognition is focused on the classification of temporally trimmed videos in which only one action occurs per frame. We propose a simple, yet effective, method for the temporal detection of activities in temporally untrimmed videos with the help of untrimmed classification. Firstly, our model predicts the top k labels for each untrimmed video by analysing global video-level features. Secondly, frame-level binary classification is combined with dynamic programming to generate the temporally trimmed activity proposals. Finally, each proposal is assigned a label based on the global label, and scored with the score of the temporal activity proposal and the global score. Ultimately, we show that untrimmed video classification models can be used as a stepping stone for temporal detection.", "title": "" }, { "docid": "4fa9f9ac4204de1394cd7133254aa046", "text": "Over the last ten years, face recognition has become a specialized applications area within the field of computer vision. Sophisticated commercial systems have been developed that achieve high recognition rates. Although elaborate, many of these systems include a subspace projection step and a nearest neighbor classifier. The goal of this paper is to rigorously compare two subspace projection techniques within the context of a baseline system on the face recognition task. The first technique is principal component analysis (PCA), a well-known "baseline" for projection techniques. The second technique is independent component analysis (ICA), a newer method that produces spatially localized and statistically independent basis vectors. Testing on the FERET data set (and using standard partitions), we find that, when a proper distance metric is used, PCA significantly outperforms ICA on a human face recognition task. This is contrary to previously", "title": "" }, { "docid": "aa7026774074ed81dd7836ef6dc44334", "text": "To improve safety on the roads, next-generation vehicles will be equipped with short-range communication technologies. Many applications enabled by such communication will be based on a continuous broadcast of information about its own status from each vehicle to the neighborhood, often referred to as cooperative awareness or beaconing. 

Although the only standardized technology allowing direct vehicle-to-vehicle (V2V) communication has been IEEE 802.11p until now, the latest release of long-term evolution (LTE) included advanced device-to-device features designed for the vehicular environment (LTE-V2V) making it a suitable alternative to IEEE 802.11p. Advantages and drawbacks are being considered for both technologies, and which one will be implemented is still under debate. The aim of this paper is thus to provide an insight into the performance of both technologies for cooperative awareness and to compare them. The investigation is performed analytically through the implementation of novel models for both IEEE 802.11p and LTE-V2V able to address the same scenario, with consistent settings and focusing on the same output metrics. The proposed models take into account several aspects that are often neglected by related works, such as hidden terminals and capture effect in IEEE 802.11p, the impact of imperfect knowledge of vehicles position on the resource allocation in LTE-V2V, and the various modulation and coding scheme combinations that are available in both technologies. Results show that LTE-V2V allows us to maintain the required quality of service at even double or more the distance than IEEE 802.11p in moderate traffic conditions. However, due to the half-duplex nature of devices and the structure of LTE frames, it shows lower capacity than IEEE 802.11p if short distances and very high vehicle density are targeted.", "title": "" } ]
scidocsrr
cec3ee6652ec779e0f0dfd20b8ab828d
Effective Exploration for MAVs Based on the Expected Information Gain
[ { "docid": "88a21d973ec80ee676695c95f6b20545", "text": "Three-dimensional models provide a volumetric representation of space which is important for a variety of robotic applications including flying robots and robots that are equipped with manipulators. In this paper, we present an open-source framework to generate volumetric 3D environment models. Our mapping approach is based on octrees and uses probabilistic occupancy estimation. It explicitly represents not only occupied space, but also free and unknown areas. Furthermore, we propose an octree map compression method that keeps the 3D models compact. Our framework is available as an open-source C++ library and has already been successfully applied in several robotics projects. We present a series of experimental results carried out with real robots and on publicly available real-world datasets. The results demonstrate that our approach is able to update the representation efficiently and models the data consistently while keeping the memory requirement at a minimum.", "title": "" } ]
[ { "docid": "fcd3eb613db484d7d2bd00a03e5192bc", "text": "A design methodology by including the finite PSR of the error amplifier to improve the low frequency PSR of the Low dropout regulator with improved voltage subtractor circuit is proposed. The gm/ID method based on exploiting the all regions of operation of the MOS transistor is utilized for the design of LDO regulator. The PSR of the LDO regulator is better than -50dB up to 10MHz frequency for the load currents up to 20mA with 0.15V drop-out voltage. A comparison is made between different schematics of the LDO regulator and proposed methodology for the LDO regulator with improved voltage subtractor circuit. Low frequency PSR of the regulator can be significantly improved with proposed methodology.", "title": "" }, { "docid": "741efb8046bb888b944768784b87d70a", "text": "Entropy Search (ES) and Predictive Entropy Search (PES) are popular and empirically successful Bayesian Optimization techniques. Both rely on a compelling information-theoretic motivation, and maximize the information gained about the arg max of the unknown function; yet, both are plagued by the expensive computation for estimating entropies. We propose a new criterion, Max-value Entropy Search (MES), that instead uses the information about the maximum function value. We show relations of MES to other Bayesian optimization methods, and establish a regret bound. We observe that MES maintains or improves the good empirical performance of ES/PES, while tremendously lightening the computational burden. In particular, MES is much more robust to the number of samples used for computing the entropy, and hence more efficient for higher dimensional problems.", "title": "" }, { "docid": "7ea777ccae8984c26317876d804c323c", "text": "The CRISPR/Cas (clustered regularly interspaced short palindromic repeats/CRISPR-associated proteins) system was first identified in bacteria and archaea and can degrade exogenous substrates. It was developed as a gene editing technology in 2013. Over the subsequent years, it has received extensive attention owing to its easy manipulation, high efficiency, and wide application in gene mutation and transcriptional regulation in mammals and plants. The process of CRISPR/Cas is optimized constantly and its application has also expanded dramatically. Therefore, CRISPR/Cas is considered a revolutionary technology in plant biology. Here, we introduce the mechanism of the type II CRISPR/Cas called CRISPR/Cas9, update its recent advances in various applications in plants, and discuss its future prospects to provide an argument for its use in the study of medicinal plants.", "title": "" }, { "docid": "0f5c1d2503a2845e409d325b085bf600", "text": "We present Accel, a novel semantic video segmentation system that achieves high accuracy at low inference cost by combining the predictions of two network branches: (1) a reference branch that extracts high-detail features on a reference keyframe, and warps these features forward using frame-to-frame optical flow estimates, and (2) an update branch that computes features of adjustable quality on the current frame, performing a temporal update at each video frame. The modularity of the update branch, where feature subnetworks of varying layer depth can be inserted (e.g. ResNet-18 to ResNet-101), enables operation over a new, state-of-the-art accuracy-throughput trade-off spectrum. 
Over this curve, Accel models achieve both higher accuracy and faster inference times than the closest comparable single-frame segmentation networks. In general, Accel significantly outperforms previous work on efficient semantic video segmentation, correcting warping-related error that compounds on datasets with complex dynamics. Accel is end-to-end trainable and highly modular: the reference network, the optical flow network, and the update network can each be selected independently, depending on application requirements, and then jointly fine-tuned. The result is a robust, general system for fast, high-accuracy semantic segmentation on video.", "title": "" }, { "docid": "798f8c412ac3fbe1ab1b867bc8ce68d0", "text": "We introduce a new mobile system framework, SenSec, which uses passive sensory data to ensure the security of applications and data on mobile devices. SenSec constantly collects sensory data from accelerometers, gyroscopes and magnetometers and constructs the gesture model of how a user uses the device. SenSec calculates the sureness that the mobile device is being used by its owner. Based on the sureness score, mobile devices can dynamically request the user to provide active authentication (such as a strong password), or disable certain features of the mobile devices to protect the user's privacy and information security. In this paper, we model such gesture patterns through a continuous n-gram language model using a set of features constructed from these sensors. We built a mobile application prototype based on this model and used it to perform both user classification and user authentication experiments. User studies show that SenSec can achieve 75% accuracy in identifying the users and 71.3% accuracy in detecting the non-owners with only 13.1% false alarms.", "title": "" }, { "docid": "7eb4e5b88843d81390c14aae2a90c30b", "text": "A low-power, high-speed class-AB output buffer circuit with a large input dynamic range and output swing, which is suitable for flat-panel display applications, is proposed. The circuit employs an elegant comparator to sense the transients of the input to turn on charging/discharging transistors; it thus draws little current in the static state but has an improved driving capability during transients. It is demonstrated in a 0.6 μm CMOS technology.", "title": "" }, { "docid": "1090297224c76a5a2c4ade47cb932dba", "text": "Global illumination drastically improves visual realism of interactive applications. Although many interactive techniques are available, they have some limitations or employ coarse approximations. For example, general instant radiosity often has numerical error, because the sampling strategy fails in some cases. This problem can be reduced by a bidirectional sampling strategy that is often used in off-line rendering. However, it has been complicated to implement in real-time applications. This paper presents a simple real-time global illumination system based on bidirectional path tracing. The proposed system approximates bidirectional path tracing by using rasterization on a commodity DirectX® 11 capable GPU. 

Moreover, for glossy surfaces, a simple and efficient artifact suppression technique is also introduced.", "title": "" }, { "docid": "cb47cc2effac1404dd60a91a099699d1", "text": "We survey recent trends in practical algorithms for balanced graph partitioning, point to applications and discuss future research directions.", "title": "" }, { "docid": "c71d229d69d79747eca7e87e342ba6d8", "text": "This paper proposes a road detection approach based solely on dense 3D-LIDAR data. The approach is built up of four stages: (1) 3D-LIDAR points are projected to a 2D reference plane; then, (2) dense height maps are computed using an upsampling method; (3) applying a sliding-window technique in the upsampled maps, probability distributions of neighbouring regions are compared according to a similarity measure; finally, (4) morphological operations are used to enhance performance against disturbances. Our detection approach does not depend on road marks, thus it is suitable for applications on rural areas and inner-city with unmarked roads. Experiments have been carried out in a wide variety of scenarios using the recent KITTI-ROAD benchmark, obtaining promising results when compared to other state-of-art approaches.", "title": "" }, { "docid": "e84699f276c807eb7fddb49d61bd8ae8", "text": "Cyberbotics Ltd. develops Webots, a mobile robotics simulation software that provides you with a rapid prototyping environment for modelling, programming and simulating mobile robots. The provided robot libraries enable you to transfer your control programs to several commercially available real mobile robots. Webots lets you define and modify a complete mobile robotics setup, even several different robots sharing the same environment. For each object, you can define a number of properties, such as shape, color, texture, mass, friction, etc. You can equip each robot with a large number of available sensors and actuators. You can program these robots using your favorite development environment, simulate them and optionally transfer the resulting programs onto your real robots. Webots has been developed in collaboration with the Swiss Federal Institute of Technology in Lausanne, thoroughly tested, well documented and continuously maintained for over 7 years. It is now the main commercial product available from Cyberbotics Ltd.", "title": "" }, { "docid": "c9e9807acbc69afd9f6a67d9bda0d535", "text": "Domain adaptation is one of the most challenging tasks of modern data analytics. If the adaptation is done correctly, models built on a specific data representation become more robust when confronted to data depicting the same classes, but described by another observation system. Among the many strategies proposed, finding domain-invariant representations has shown excellent properties, in particular since it allows to train a unique classifier effective in all domains. In this paper, we propose a regularized unsupervised optimal transportation model to perform the alignment of the representations in the source and target domains. We learn a transportation plan matching both PDFs, which constrains labeled samples of the same class in the source domain to remain close during transport. This way, we exploit at the same time the labeled samples in the source and the distributions observed in both domains. Experiments on toy and challenging real visual adaptation examples show the interest of the method, that consistently outperforms state of the art approaches. 
In addition, numerical experiments show that our approach leads to better performances on domain invariant deep learning features and can be easily adapted to the semi-supervised case where few labeled samples are available in the target domain.", "title": "" }, { "docid": "6bea1d7242fc23ec8f462b1c8478f2c1", "text": "Determining a consensus opinion on a product sold online is no longer easy, because assessments have become more and more numerous on the Internet. To address this problem, researchers have used various approaches, such as looking for feelings expressed in the documents and exploring the appearance and syntax of reviews. Aspect-based evaluation is the most important aspect of opinion mining, and researchers are becoming more interested in product aspect extraction; however, more complex algorithms are needed to address this issue precisely with large data sets. This paper introduces a method to extract and summarize product aspects and corresponding opinions from a large number of product reviews in a specific domain. We maximize the accuracy and usefulness of the review summaries by leveraging knowledge about product aspect extraction and providing both an appropriate level of detail and rich representation capabilities. The results show that the proposed system achieves F1-scores of 0.714 for camera reviews and 0.774 for laptop reviews.", "title": "" }, { "docid": "43fa16b19c373e2d339f45c71a0a2c22", "text": "McKusick-Kaufman syndrome is a human developmental anomaly syndrome comprising mesoaxial or postaxial polydactyly, congenital heart disease and hydrometrocolpos. This syndrome is diagnosed most frequently in the Old Order Amish population and is inherited in an autosomal recessive pattern with reduced penetrance and variable expressivity. Homozygosity mapping and linkage analyses were conducted using two pedigrees derived from a larger pedigree published in 1978. The PedHunter software query system was used on the Amish Genealogy Database to correct the previous pedigree, derive a minimal pedigree connecting those affected sibships that are in the database and determine the most recent common ancestors of the affected persons. Whole genome short tandem repeat polymorphism (STRP) screening showed homozygosity in 20p12, between D20S162 and D20S894 , an area that includes the Alagille syndrome critical region. The peak two-point LOD score was 3.33, and the peak three-point LOD score was 5.21. The physical map of this region has been defined, and additional polymorphic markers have been isolated. The region includes several genes and expressed sequence tags (ESTs), including the jagged1 gene that recently has been shown to be haploinsufficient in the Alagille syndrome. Sequencing of jagged1 in two unrelated individuals affected with McKusick-Kaufman syndrome has not revealed any disease-causing mutations.", "title": "" }, { "docid": "44d4114280e3ab9f6bfa0f0b347114b7", "text": "Dozens of Electronic Control Units (ECUs) can be found on modern vehicles for safety and driving assistance. These ECUs also introduce new security vulnerabilities as recent attacks have been reported by plugging the in-vehicle system or through wireless access. In this paper, we focus on the security of the Controller Area Network (CAN), which is a standard for communication among ECUs. CAN bus by design does not have sufficient security features to protect it from insider or outsider attacks. 
An intrusion detection system (IDS) is one of the most effective ways to enhance vehicle security on the insecure CAN bus protocol. We propose a new IDS based on the entropy of the identifier bits in CAN messages. The key observation is that all the known CAN message injection attacks need to alter the CAN ID bits and analyzing the entropy of such bits can be an effective way to detect those attacks. We collected real CAN messages from a vehicle (2016 Ford Fusion) and performed simulated message injection attacks. The experimental results showed that our entropy-based IDS can successfully detect all the injection attacks without disrupting the communication on CAN.", "title": "" }, { "docid": "a48b7c679008235568d3d431081277b4", "text": "This paper discusses the security aspects of a registration protocol in a mobile satellite communication system. We propose a new mobile user authentication and data encryption scheme for mobile satellite communication systems. The scheme can remedy a replay attack.", "title": "" }, { "docid": "9a1151e45740dfa663172478259b77b6", "text": "Every year, several new ontology matchers are proposed in the literature, each one using a different heuristic, which results in different performances according to the characteristics of the ontologies. An ontology meta-matcher consists of an algorithm that combines several approaches in order to obtain better results in different scenarios. To achieve this goal, it is necessary to define a criterion for the use of matchers. We presented in this work an ontology meta-matcher that combines several ontology matchers making use of the prey-predator evolutionary meta-heuristic as a means of parameterizing them. Abstract. Every year, several new ontology matchers are proposed in the literature, each one using a different heuristic, which results in different performances according to the characteristics of the ontologies. A meta-matcher consists of an algorithm that combines several approaches in order to obtain better results in different scenarios. To achieve this goal, it is necessary to define a criterion for the best use of matchers. In this work, an ontology meta-matcher is presented that combines several matchers through the prey-predator evolutionary meta-heuristic as a means of parameterizing them.", "title": "" }, { "docid": "a32c635c1f4f4118da20cee6ffb5c1ea", "text": "We analyzed the influence of education and of culture on the neuropsychological profile of an indigenous and a nonindigenous population. The sample included 27 individuals divided into four groups: (a) seven illiterate Maya indigenous participants, (b) six illiterate Pame indigenous participants, (c) seven nonindigenous participants with no education, and (d) seven Maya indigenous participants with 1 to 4 years of education. A brief neuropsychological test battery developed and standardized in Mexico was individually administered. Results demonstrated differential effects for both variables. Both groups of indigenous participants (Maya and Pame) obtained higher scores in visuospatial tasks, and the level of education had significant effects on working and verbal memory. Our data suggested that culture dictates what is important for survival and that education could be considered as a type of subculture that facilitates the development of certain skills.", "title": "" }, { "docid": "c460660e6ea1cc38f4864fe4696d3a07", "text": "Background. 

The effective development of healthcare competencies poses great educational challenges. A possible approach to provide learning opportunities is the use of augmented reality (AR) where virtual learning experiences can be embedded in a real physical context. The aim of this study was to provide a comprehensive overview of the current state of the art in terms of user acceptance, the AR applications developed and the effect of AR on the development of competencies in healthcare. Methods. We conducted an integrative review. Integrative reviews are the broadest type of research review methods allowing for the inclusion of various research designs to more fully understand a phenomenon of concern. Our review included multi-disciplinary research publications in English reported until 2012. Results. 2529 research papers were found from ERIC, CINAHL, Medline, PubMed, Web of Science and Springer-link. Three qualitative, 20 quantitative and 2 mixed studies were included. Using a thematic analysis, we've described three aspects related to the research, technology and education. This study showed that AR was applied in a wide range of topics in healthcare education. Furthermore acceptance for AR as a learning technology was reported among the learners and its potential for improving different types of competencies. Discussion. AR is still considered as a novelty in the literature. Most of the studies reported early prototypes. Also the designed AR applications lacked an explicit pedagogical theoretical framework. Finally the learning strategies adopted were of the traditional style 'see one, do one and teach one' and do not integrate clinical competencies to ensure patients' safety.", "title": "" }, { "docid": "a25fa0c0889b62b70bf95c16f9966cc4", "text": "We deal with the problem of document representation for the task of measuring semantic relatedness between documents. A document is represented as a compact concept graph where nodes represent concepts extracted from the document through references to entities in a knowledge base such as DBpedia. Edges represent the semantic and structural relationships among the concepts. Several methods are presented to measure the strength of those relationships. Concepts are weighted through the concept graph using closeness centrality measure which reflects their relevance to the aspects of the document. A novel similarity measure between two concept graphs is presented. The similarity measure first represents concepts as continuous vectors by means of neural networks. Second, the continuous vectors are used to accumulate pairwise similarity between pairs of concepts while considering their assigned weights. We evaluate our method on a standard benchmark for document similarity. Our method outperforms state-of-the-art methods including ESA (Explicit Semantic Annotation) while our concept graphs are much smaller than the concept vectors generated by ESA. Moreover, we show that by combining our concept graph with ESA, we obtain an even further improvement.", "title": "" }, { "docid": "273abcab379d49680db121022fba3e8f", "text": "Current emotion recognition computational techniques have been successful on associating the emotional changes with the EEG signals, and so they can be identified and classified from EEG signals if appropriate stimuli are applied. However, automatic recognition is usually restricted to a small number of emotions classes mainly due to signal’s features and noise, EEG constraints and subject-dependent issues. 
In order to address these issues, in this paper a novel feature-based emotion recognition model is proposed for EEG-based Brain–Computer Interfaces. Unlike other approaches, our method explores a wider set of emotion types and incorporates additional features which are relevant for signal pre-processing and recognition classification tasks, based on a dimensional model of emotions: Valence and Arousal. It aims to improve the accuracy of the emotion classification task by combining mutual information based feature selection methods and kernel classifiers. Experiments using our approach for emotion classification which combines efficient feature selection methods and efficient kernel-based classifiers on standard EEG datasets show the promise of the approach when compared with state-of-the-art computational methods. © 2015 Elsevier Ltd. All rights reserved.", "title": "" } ]
scidocsrr
6e7a76546b6b3b81447034e21dbcca74
THE FLEXIBLE CORRECTION MODEL: THE ROLE OF NAIVE THEORIES OF BIAS IN BIAS CORRECTION
[ { "docid": "eed70d4d8bfbfa76382bfc32dd12c3db", "text": "Three studies tested basic assumptions derived from a theoretical model based on the dissociation of automatic and controlled processes involved in prejudice. Study 1 supported the model's assumption that highand low-prejudice persons are equally knowledgeable of the cultural stereotype. The model suggests that the stereotype is automatically activated in the presence of a member (or some symbolic equivalent) of the stereotyped group and that low-prejudice responses require controlled inhibition of the automatically activated stereotype. Study 2, which examined the effects of automatic stereotype activation on the evaluation of ambiguous stereotype-relevant behaviors performed by a race-unspecified person, suggested that when subjects' ability to consciously monitor stereotype activation is precluded, both highand low-prejudice subjects produce stereotype-congruent evaluations of ambiguous behaviors. Study 3 examined highand low-prejudice subjects' responses in a consciously directed thought-listing task. Consistent with the model, only low-prejudice subjects inhibited the automatically activated stereotype-congruent thoughts and replaced them with thoughts reflecting equality and negations of the stereotype. The relation between stereotypes and prejudice and implications for prejudice reduction are discussed.", "title": "" } ]
[ { "docid": "5ba6ec8c7f9dc4d2b6c55a505ce394a7", "text": "We develop a data structure, the spatialized normal cone hierarchy, and apply it to interactive solutions for model silhouette extraction, local minimum distance computations, and area light source shadow umbra and penumbra boundary determination. The latter applications extend the domain of surface normal encapsulation from problems described by a point and a model to problems involving two models.", "title": "" }, { "docid": "4d9f0cf629cd3695a2ec249b81336d28", "text": "We introduce an over-sketching interface for feature-preserving surface mesh editing. The user sketches a stroke that is the suggested position of part of a silhouette of the displayed surface. The system then segments all image-space silhouettes of the projected surface, identifies among all silhouette segments the best matching part, derives vertices in the surface mesh corresponding to the silhouette part, selects a sub-region of the mesh to be modified, and feeds appropriately modified vertex positions together with the sub-mesh into a mesh deformation tool. The overall algorithm has been designed to enable interactive modification of the surface --- yielding a surface editing system that comes close to the experience of sketching 3D models on paper.", "title": "" }, { "docid": "f0916caf8abc62643a1e55781798c18e", "text": "In this paper, we consider the problem of learning a policy by observing numerous non-expert agents. Our goal is to extract a policy that, with high-confidence, acts better than the agents’ average performance. Such a setting is important for real-world problems where expert data is scarce but non-expert data can easily be obtained, e.g. by crowdsourcing. Our approach is to pose this problem as safe policy improvement in reinforcement learning. First, we evaluate an average behavior policy and approximate its value function. Then, we develop a stochastic policy improvement algorithm that safely improves the average behavior. The primary advantages of our approach, termed Rerouted Behavior Improvement (RBI), over other safe learning methods are its stability in the presence of value estimation errors and the elimination of a policy search process. We demonstrate these advantages in the Taxi grid-world domain and in four games from the Atari learning environment.", "title": "" }, { "docid": "ea5e08627706532504b9beb6f4dc6650", "text": "This paper highlights the role that reinforcement learning can play in the optimization of treatment policies for chronic illnesses. Before applying any off-the-shelf reinforcement learning methods in this setting, we must first tackle a number of challenges. We outline some of these challenges and present methods for overcoming them. First, we describe a multiple imputation approach to overcome the problem of missing data. Second, we discuss the use of function approximation in the context of a highly variable observation set. Finally, we discuss approaches to summarizing the evidence in the data for recommending a particular action and quantifying the uncertainty around the Q-function of the recommended policy. We present the results of applying these methods to real clinical trial data of patients with schizophrenia.", "title": "" }, { "docid": "1f753b8e3c0178cabbc8a9f594c40c8c", "text": "For easy comprehensibility, rules are preferrable to non-linear kernel functions in the analysis of bio-medical data. 
In this paper, we describe two rule induction approaches—C4.5 and our PCL classifier—for discovering rules from both traditional clinical data and recent gene expression or proteomic profiling data. C4.5 is a widely used method, but it has two weaknesses, the single coverage constraint and the fragmentation problem, that affect its accuracy. PCL is a new rule-based classifier that overcomes these two weaknesses of decision trees by using many significant rules. We present a thorough comparison to show that our PCL method is much more accurate than C4.5, and it is also superior to Bagging and Boosting in general.", "title": "" }, { "docid": "753a964fe17040a43ecbd2ae85b0701c", "text": "We are analyzing the visualizations in the scientific literature to enhance search services, detect plagiarism, and study bibliometrics. An immediate problem is the ubiquitous use of multi-part figures: single images with multiple embedded sub-visualizations. Such figures account for approximately 35% of the figures in the scientific literature. Conventional image segmentation techniques and other existing approaches have been shown to be ineffective for parsing visualizations. We propose an algorithm to automatically segment multi-chart visualizations into a set of single-chart visualizations, thereby enabling downstream analysis. Our approach first splits an image into fragments based on background color and layout patterns. An SVM-based binary classifier then distinguishes complete charts from auxiliary fragments such as labels, ticks, and legends, achieving an average 98.1% accuracy. Next, we recursively merge fragments to reconstruct complete visualizations, choosing between alternative merge trees using a novel scoring function. To evaluate our approach, we used 261 scientific multi-chart figures randomly selected from the Pubmed database. Our algorithm achieves 80% recall and 85% precision of perfect extractions for the common case of eight or fewer sub-figures per figure. Further, even imperfect extractions are shown to be sufficient for most chart classification and reasoning tasks associated with bibliometrics and academic search applications.", "title": "" }, { "docid": "24855976195933799d110122cbbbe6d5", "text": "Association of audio events with video events presents a challenge to a typical camera-microphone approach in order to capture AV signals from a large distance. Setting up a long range microphone array and performing geo-calibration of both audio and video sensors is difficult. In this work, in addition to a geo-calibrated electro-optical camera, we propose to use a novel optical sensor a Laser Doppler Vibrometer (LDV) for real-time audio sensing, which allows us to capture acoustic signals from a large distance, and to use the same geo-calibration for both the camera and the audio (via LDV). We have promising preliminary results on association of the audio recording of speech with the video of the human speaker.", "title": "" }, { "docid": "20171d6fa41e3c1a02e800b1792e0942", "text": "Plastics pollution in the ocean is an area of growing concern, with research efforts focusing on both the macroplastic (>5mm) and microplastic (<5mm) fractions. In the 1990 s it was recognized that a minor source of microplastic pollution was derived from liquid hand-cleansers that would have been rarely used by the average consumer. 
In 2009, however, the average consumer is likely to be using microplastic-containing products on a daily basis, as the majority of facial cleansers now contain polyethylene microplastics which are not captured by wastewater plants and will enter the oceans. Four microplastic-containing facial cleansers available in New Zealand supermarkets were used to quantify the size of the polyethylene fragments. Three-quarters of the brands had a modal size of <100 microns and could be immediately ingested by planktonic organisms at the base of the food chain. Over time the microplastics will be subject to UV-degradation and absorb hydrophobic materials such as PCBs, making them smaller and more toxic in the long-term. Marine scientists need to educate the public to the dangers of using products that pose an immediate and long-term threat to the health of the oceans and the food we eat.", "title": "" }, { "docid": "b81c0d819f2afb0a0ff79b7c6aeb8ff7", "text": "This paper proposes a framework to identify and evaluate companies from the technological perspective to support merger and acquisition (M&A) target selection decision-making. This study employed a text mining-based patent map approach to identify companies which can fulfill a specific strategic purpose of M&A for enhancing technological capabilities. The patent map is the visualized technological landscape of a technology industry by using technological proximities among patents, so companies which are closely related to the strategic purpose can be identified. To evaluate the technological aspects of the identified companies, we provide the patent indexes that evaluate both current and future technological capabilities and potential technology synergies between acquiring and acquired companies. Furthermore, because the proposed method evaluates potential targets from the overall corporate perspective and the specific strategic perspectives simultaneously, more robust and meaningful results can be obtained than when only one perspective is considered. Thus, the proposed framework can suggest the appropriate target companies that fulfill the strategic purpose of M&A for enhancing technological capabilities. For the verification of the framework, we provide an empirical study using patent data related to flexible display technology.", "title": "" }, { "docid": "99d57cef03e21531be9f9663ec023987", "text": "Reinforcement learning addresses the problem of learning to select actions in order to maximize one's performance in unknown environments. To scale reinforcement learning to complex real-world tasks, such as typically studied in AI, one must ultimately be able to discover the structure in the world, in order to abstract away the myriad of details and to operate in more tractable problem spaces. This paper presents the SKILLS algorithm. SKILLS discovers skills, which are partially defined action policies that arise in the context of multiple, related tasks. Skills collapse whole action sequences into single operators. They are learned by minimizing the compactness of action policies, using a description length argument on their representation. 
Empirical results in simple grid navigation tasks illustrate the successful discovery of structure in reinforcement learning.", "title": "" }, { "docid": "b3ebbff355dfc23b4dfbab3bc3012980", "text": "Research with young children has shown that, like adults, they focus selectively on the aspects of an actor's behavior that are relevant to his or her underlying intentions. The current studies used the visual habituation paradigm to ask whether infants would similarly attend to those aspects of an action that are related to the actor's goals. Infants saw an actor reach for and grasp one of two toys sitting side by side on a curtained stage. After habituation, the positions of the toys were switched and babies saw test events in which there was a change in either the path of motion taken by the actor's arm or the object that was grasped by the actor. In the first study, 9-month-old infants looked longer when the actor grasped a new toy than when she moved through a new path. Nine-month-olds who saw an inanimate object of approximately the same dimensions as the actor's arm touch the toy did not show this pattern in test. In the second study, 5-month-old infants showed similar, though weaker, patterns. A third study provided evidence that the findings for the events involving a person were not due to perceptual changes in the objects caused by occlusion by the hand. A fourth study replicated the 9 month results for a human grasp at 6 months, and revealed that these effects did not emerge when infants saw an inanimate object with digits that moved to grasp the toy. Taken together, these findings indicate that young infants distinguish in their reasoning about human action and object motion, and that by 6 months infants encode the actions of other people in ways that are consistent with more mature understandings of goal-directed action.", "title": "" }, { "docid": "125655821a44bbce2646157c8465e345", "text": "Due to its wide applicability, the problem of semi-supervised classification is attracting increasing attention in machine learning. Semi-Supervised Support Vector Machines (S3VMs) are based on applying the margin maximization principle to both labeled and unlabeled examples. Unlike SVMs, their formulation leads to a non-convex optimization problem. A suite of algorithms have recently been proposed for solving S3VMs. This paper reviews key ideas in this literature. The performance and behavior of various S3VM algorithms is studied together, under a common experimental setting.", "title": "" }, { "docid": "3ab4b094f3e32a4f467a849347157264", "text": "Overview of geographically explicit momentary assessment research, applied to the study of mental health and well-being, which allows for cross-validation, extension, and enrichment of research on place and health. Building on the historical foundations of both ecological momentary assessment and geographic momentary assessment research, this review explores their emerging synergy into a more generalized and powerful research framework. Geographically explicit momentary assessment methods are rapidly advancing across a number of complimentary literatures that intersect but have not yet converged. Key contributions from these areas reveal tremendous potential for transdisciplinary and translational science. Mobile communication devices are revolutionizing research on mental health and well-being by physically linking momentary experience sampling to objective measures of socio-ecological context in time and place. 
Methodological standards are not well-established and will be required for transdisciplinary collaboration and scientific inference moving forward.", "title": "" }, { "docid": "cae9e77074db114690a6ed1330d9b14c", "text": "BACKGROUND\nOn December 8th, 2015, World Health Organization published a priority list of eight pathogens expected to cause severe outbreaks in the near future. To better understand global research trends and characteristics of publications on these emerging pathogens, we carried out this bibliometric study hoping to contribute to global awareness and preparedness toward this topic.\n\n\nMETHOD\nScopus database was searched for the following pathogens/infectious diseases: Ebola, Marburg, Lassa, Rift valley, Crimean-Congo, Nipah, Middle Eastern Respiratory Syndrome (MERS), and Severe Respiratory Acute Syndrome (SARS). Retrieved articles were analyzed to obtain standard bibliometric indicators.\n\n\nRESULTS\nA total of 8619 journal articles were retrieved. Authors from 154 different countries contributed to publishing these articles. Two peaks of publications, an early one for SARS and a late one for Ebola, were observed. Retrieved articles received a total of 221,606 citations with a mean ± standard deviation of 25.7 ± 65.4 citations per article and an h-index of 173. International collaboration was as high as 86.9%. The Centers for Disease Control and Prevention had the highest share (344; 5.0%) followed by the University of Hong Kong with 305 (4.5%). The top leading journal was Journal of Virology with 572 (6.6%) articles while Feldmann, Heinz R. was the most productive researcher with 197 (2.3%) articles. China ranked first on SARS, Turkey ranked first on Crimean-Congo fever, while the United States of America ranked first on the remaining six diseases. Of retrieved articles, 472 (5.5%) were on vaccine - related research with Ebola vaccine being most studied.\n\n\nCONCLUSION\nNumber of publications on studied pathogens showed sudden dramatic rise in the past two decades representing severe global outbreaks. Contribution of a large number of different countries and the relatively high h-index are indicative of how international collaboration can create common health agenda among distant different countries.", "title": "" }, { "docid": "d146a363006aa6cc5dde35f740a28aab", "text": "Website privacy policies are often ignored by Internet users, because these documents tend to be long and difficult to understand. However, the significance of privacy policies greatly exceeds the attention paid to them: these documents are binding legal agreements between website operators and their users, and their opaqueness is a challenge not only to Internet users but also to policy regulators. One proposed alternative to the status quo is to automate or semi-automate the extraction of salient details from privacy policy text, using a combination of crowdsourcing, natural language processing, and machine learning. However, there has been a relative dearth of datasets appropriate for identifying data practices in privacy policies. To remedy this problem, we introduce a corpus of 115 privacy policies (267K words) with manual annotations for 23K fine-grained data practices. We describe the process of using skilled annotators and a purpose-built annotation tool to produce the data. We provide findings based on a census of the annotations and show results toward automating the annotation procedure. 
Finally, we describe challenges and opportunities for the research community to use this corpus to advance research in both privacy and language technologies.", "title": "" }, { "docid": "b54ca99ae8818517d5c04100bad0f3b4", "text": "Finding the sparsest solutions to a tensor complementarity problem is generally NP-hard due to the nonconvexity and noncontinuity of the involved ℓ0 norm. In this paper, a special type of tensor complementarity problems with Z-tensors has been considered. Under some mild conditions, we show that to pursue the sparsest solutions is equivalent to solving polynomial programming with a linear objective function. The involved conditions guarantee the desired exact relaxation and also allow achieving a global optimal solution to the relaxed nonconvex polynomial programming problem. Particularly, in comparison to existing exact relaxation conditions, such as RIP-type ones, our proposed conditions are easy to verify.", "title": "" }, { "docid": "d164ead192d1ba25472935f517608faa", "text": "Real-world machine learning applications may require functions to be fast-to-evaluate and interpretable, in particular, guaranteed monotonicity of the learned function can be critical to user trust. We propose meeting these goals for low-dimensional machine learning problems by learning flexible, monotonic functions using calibrated interpolated look-up tables. We extend the structural risk minimization framework of lattice regression to train monotonic functions by solving a convex problem with appropriate linear inequality constraints. In addition, we propose jointly learning interpretable calibrations of each feature to normalize continuous features and handle categorical or missing data, at the cost of making the objective non-convex. We address large-scale learning through parallelization, mini-batching, and propose random sampling of additive regularizer terms. Case studies for six real-world problems with five to sixteen features and thousands to millions of training samples demonstrate the proposed monotonic functions can achieve state-of-the-art accuracy on practical problems while providing greater transparency to users.", "title": "" }, { "docid": "45e1a424ad0807ce49cd4e755bdd9351", "text": "Data preprocessing is widely recognized as an important stage in anomaly detection. This paper reviews the data preprocessing techniques used by anomaly-based network intrusion detection systems (NIDS), concentrating on which aspects of the network traffic are analyzed, and what feature construction and selection methods have been used. Motivation for the paper comes from the large impact data preprocessing has on the accuracy and capability of anomaly-based NIDS. 
The review finds that many NIDS limit their view of network traffic to the TCP/IP packet headers. Time-based statistics can be derived from these headers to detect network scans, network worm behavior, and denial of service attacks. A number of other NIDS perform deeper inspection of request packets to detect attacks against network services and network applications. More recent approaches analyze full service responses to detect attacks targeting clients. The review covers a wide range of NIDS, highlighting which classes of attack are detectable by each of these approaches. Data preprocessing is found to predominantly rely on expert domain knowledge for identifying the most relevant parts of network traffic and for constructing the initial candidate set of traffic features. On the other hand, automated methods have been widely used for feature extraction to reduce data dimensionality, and feature selection to find the most relevant subset of features from this candidate set. The review shows a trend towards deeper packet inspection to construct more relevant features through targeted content parsing. These context sensitive features are required to detect current attacks.", "title": "" } ]
scidocsrr
f4051641f29c54cf41b7f648aecc44e6
Investigating the relationship between: Smartphone Addiction, Social Anxiety, Self-Esteem, Age and Gender
[ { "docid": "e08854e0fc17a8f80ede1fc05a07805c", "text": "While many researches have analyzed the psychological antecedents of mobile phone addiction and mobile phone usage behavior, their relationship with psychological characteristics remains mixed. We investigated the relationship between psychological characteristics, mobile phone addiction and use of mobile phones for 269 Taiwanese female university students who were administered Rosenberg’s selfesteem scale, Lai’s personality inventory, and a mobile phone usage questionnaire and mobile phone addiction scale. The result showing that: (1) social extraversion and anxiety have positive effects on mobile phone addiction, and self-esteem has negative effects on mobile phone addiction. (2) Mobile phone addiction has a positive predictive effect on mobile phone usage behavior. The results of this study identify personal psychological characteristics of Taiwanese female university students which can significantly predict mobile phone addiction; female university students with mobile phone addiction will make more phone calls and send more text messages. These results are discussed and suggestions for future research for school and university students are provided. 2012 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "2acbfab9d69f3615930c1960a2e6dda9", "text": "OBJECTIVE\nThe aim of this study was to develop a self-diagnostic scale that could distinguish smartphone addicts based on the Korean self-diagnostic program for Internet addiction (K-scale) and the smartphone's own features. In addition, the reliability and validity of the smartphone addiction scale (SAS) was demonstrated.\n\n\nMETHODS\nA total of 197 participants were selected from Nov. 2011 to Jan. 2012 to accomplish a set of questionnaires, including SAS, K-scale, modified Kimberly Young Internet addiction test (Y-scale), visual analogue scale (VAS), and substance dependence and abuse diagnosis of DSM-IV. There were 64 males and 133 females, with ages ranging from 18 to 53 years (M = 26.06; SD = 5.96). Factor analysis, internal-consistency test, t-test, ANOVA, and correlation analysis were conducted to verify the reliability and validity of SAS.\n\n\nRESULTS\nBased on the factor analysis results, the subscale \"disturbance of reality testing\" was removed, and six factors were left. The internal consistency and concurrent validity of SAS were verified (Cronbach's alpha = 0.967). SAS and its subscales were significantly correlated with K-scale and Y-scale. The VAS of each factor also showed a significant correlation with each subscale. In addition, differences were found in the job (p<0.05), education (p<0.05), and self-reported smartphone addiction scores (p<0.001) in SAS.\n\n\nCONCLUSIONS\nThis study developed the first scale of the smartphone addiction aspect of the diagnostic manual. This scale was proven to be relatively reliable and valid.", "title": "" } ]
[ { "docid": "7a24f978a349c897c1ae91de66b2cdc6", "text": "Synthetic biology is a research field that combines the investigative nature of biology with the constructive nature of engineering. Efforts in synthetic biology have largely focused on the creation and perfection of genetic devices and small modules that are constructed from these devices. But to view cells as true 'programmable' entities, it is now essential to develop effective strategies for assembling devices and modules into intricate, customizable larger scale systems. The ability to create such systems will result in innovative approaches to a wide range of applications, such as bioremediation, sustainable energy production and biomedical therapies.", "title": "" }, { "docid": "b7a04d56d6d06a0d89f6113c3ab639a8", "text": "Poker is an interesting test-bed for artificial intelligence research. It is a game of imperfect knowledge, where multiple competing agents must deal with risk management, agent modeling, unreliable information and deception, much like decision-making applications in the real world. Agent modeling is one of the most difficult problems in decision-making applications and in poker it is essential to achieving high performance. This paper describes and evaluates Loki, a poker program capable of observing its opponents, constructing opponent models and dynamically adapting its play to best exploit patterns in the opponents’ play.", "title": "" }, { "docid": "858acbd02250ff2f8325786475b4f3f3", "text": "One of the most important aspects of Grice’s theory of conversation is the drawing of a borderline between what is said and what is implicated. Grice’s views concerning this borderline have been strongly and influentially criticised by relevance theorists. In particular, it has become increasingly widely accepted that Grice’s notion of what is said is too limited, and that pragmatics has a far larger role to play in determining what is said than Grice would have allowed. (See for example Bezuidenhuit 1996; Blakemore 1987; Carston 1991; Recanati 1991, 1993, 2001; Sperber and Wilson 1986; Wilson and Sperber 1981.) In this paper, I argue that the rejection of Grice has moved too swiftly, as a key line of objection which has led to this rejection is flawed. The flaw, we will see, is that relevance theorists rely on a misunderstanding of Grice’s project in his theory of conversation. I am not arguing that Grice’s versions of saying and implicating are right in all details, but simply that certain widespread reasons for rejecting his theory are based on misconceptions.1 Relevance theorists, I will suggest, systematically misunderstand Grice by taking him to be engaged in the same project that they are: making sense of the psychological processes by which we interpret utterances. Notions involved with this project will need to be ones that are relevant to the psychology of utterance interpretation. Thus, it is only reasonable that relevance theorists will require that what is said and what is implicated should be psychologically real to the audience. (We will see that this requirement plays a crucial role in their arguments against Grice.) Grice, I will argue, was not pursuing this project. Rather, I will suggest that he was trying to make sense of quite a different notion of what is said: one on which both speaker and audience may be wrong about what is said. On this sort of notion, psychological reality is not a requirement. 
So objections to Grice based on a requirement of psychological reality will fail.", "title": "" }, { "docid": "17833f9cf4eec06dbc4d7954b6cc6f3f", "text": "Automated vehicles rely on the accurate and robust detection of the drivable area, often classified into free space, road area and lane information. Most current approaches use monocular or stereo cameras to detect these. However, LiDAR sensors are becoming more common and offer unique properties for road area detection such as precision and robustness to weather conditions. We therefore propose two approaches for a pixel-wise semantic binary segmentation of the road area based on a modified U-Net Fully Convolutional Network (FCN) architecture. The first approach UView-Cam employs a single camera image, whereas the second approach UGrid-Fused incorporates a early fusion of LiDAR and camera data into a multi-dimensional occupation grid representation as FCN input. The fusion of camera and LiDAR allows for efficient and robust leverage of individual sensor properties in a single FCN. For the training of UView-Cam, multiple publicly available datasets of street environments are used, while the UGrid-Fused is trained with the KITTI dataset. In the KITTI Road/Lane Detection benchmark, the proposed networks reach a MaxF score of 94.23% and 93.81% respectively. Both approaches achieve realtime performance with a detection rate of about 10 Hz.", "title": "" }, { "docid": "5931169b6433d77496dfc638988399eb", "text": "Image annotation has been an important task for visual information retrieval. It usually involves a multi-class multi-label classification problem. To solve this problem, many researches have been conducted during last two decades, although most of the proposed methods rely on the training data with the ground truth. To prepare such a ground truth is an expensive and laborious task that cannot be easily scaled, and “semantic gaps” between low-level visual features and high-level semantics still remain. In this paper, we propose a novel approach, ontology based supervised learning for multi-label image annotation, where classifiers' training is conducted using easily gathered Web data. Moreover, it takes advantage of both low-level visual features and high-level semantic information of given images. Experimental results using 0.507 million Web images database show effectiveness of the proposed framework over existing method.", "title": "" }, { "docid": "f03054e65555fce682c9ce2ea3ee5258", "text": "Synthetic biology, despite still being in its infancy, is increasingly providing valuable information for applications in the clinic, the biotechnology industry and in basic molecular research. Both its unique potential and the challenges it presents have brought together the expertise of an eclectic group of scientists, from cell biologists to engineers. In this Viewpoint article, five experts discuss their views on the future of synthetic biology, on its main achievements in basic and applied science, and on the bioethical issues that are associated with the design of new biological systems.", "title": "" }, { "docid": "552baf04d696492b0951be2bb84f5900", "text": "We examined whether reduced perceptual specialization underlies atypical perception in autism spectrum disorder (ASD) testing classifications of stimuli that differ either along integral dimensions (prototypical integral dimensions of value and chroma), or along separable dimensions (prototypical separable dimensions of value and size). 
Current models of the perception of individuals with an ASD would suggest that on these tasks, individuals with ASD would be as, or more, likely to process dimensions as separable, regardless of whether they represented separable or integrated dimensions. In contrast, reduced specialization would propose that individuals with ASD would respond in a more integral manner to stimuli that differ along separable dimensions, and at the same time, respond in a more separable manner to stimuli that differ along integral dimensions. A group of nineteen adults diagnosed with high functioning ASD and seventeen typically developing participants of similar age and IQ, were tested on speeded and restricted classifications tasks. Consistent with the reduced specialization account, results show that individuals with ASD do not always respond more analytically than typically developed (TD) observers: Dimensions identified as integral for TD individuals evoke less integral responding in individuals with ASD, while those identified as separable evoke less analytic responding. These results suggest that perceptual representations are more broadly tuned and more flexibly represented in ASD. Autism Res 2017, 10: 1510-1522. © 2017 International Society for Autism Research, Wiley Periodicals, Inc.", "title": "" }, { "docid": "af4db4d9be3f652445a47e2985070287", "text": "BACKGROUND\nSurgical Site Infections (SSIs) are infections of incision or deep tissue at operation sites. These infections prolong hospitalization, delay wound healing, and increase the overall cost and morbidity.\n\n\nOBJECTIVES\nThis study aimed to investigate anaerobic and aerobic bacteria prevalence in surgical site infections and determinate antibiotic susceptibility pattern in these isolates.\n\n\nMATERIALS AND METHODS\nOne hundred SSIs specimens were obtained by needle aspiration from purulent material in depth of infected site. These specimens were cultured and incubated in both aerobic and anaerobic condition. For detection of antibiotic susceptibility pattern in aerobic and anaerobic bacteria, we used disk diffusion, agar dilution, and E-test methods.\n\n\nRESULTS\nA total of 194 bacterial strains were isolated from 100 samples of surgical sites. Predominant aerobic and facultative anaerobic bacteria isolated from these specimens were the members of Enterobacteriaceae family (66, 34.03%) followed by Pseudomonas aeruginosa (26, 13.4%), Staphylococcus aureus (24, 12.37%), Acinetobacter spp. (18, 9.28%), Enterococcus spp. (16, 8.24%), coagulase negative Staphylococcus spp. (14, 7.22%) and nonhemolytic streptococci (2, 1.03%). Bacteroides fragilis (26, 13.4%), and Clostridium perfringens (2, 1.03%) were isolated as anaerobic bacteria. The most resistant bacteria among anaerobic isolates were B. fragilis. All Gram-positive isolates were susceptible to vancomycin and linezolid while most of Enterobacteriaceae showed sensitivity to imipenem.\n\n\nCONCLUSIONS\nMost SSIs specimens were polymicrobial and predominant anaerobic isolate was B. fragilis. Isolated aerobic and anaerobic strains showed high level of resistance to antibiotics.", "title": "" }, { "docid": "72535e221c8d0a274ed7b025a17c8a7c", "text": "Along with increasing demand on improving power quality, the most popular technique that has been used is Active Power Filter (APF); this is because APF can easily eliminate unwanted harmonics, improve power factor and overcome voltage sags. 
This paper will discuss and analyze the simulation result for a three-phase shunt active power filter using MATLAB/Simulink program. This simulation will implement a non-linear load and compensate line current harmonics under balance and unbalance load. As a result of the simulation, it is found that an active power filter is the better way to reduce the total harmonic distortion (THD) which is required by quality standards IEEE-519.", "title": "" }, { "docid": "c232d9b7283a580b96ff00d196d69aea", "text": "We present an algorithm for performing Lambertian photometric stereo in the presence of shadows. The algorithm has three novel features. First, a fast graph cuts based method is used to estimate per pixel light source visibility. Second, it allows images to be acquired with multiple illuminants, and there can be fewer images than light sources. This leads to better surface coverage and improves the reconstruction accuracy by enhancing the signal to noise ratio and the condition number of the light source matrix. The ability to use fewer images than light sources means that the imaging effort grows sublinearly with the number of light sources. Finally, the recovered shadow maps are combined with shading information to perform constrained surface normal integration. This reduces the low frequency bias inherent to the normal integration process and ensures that the recovered surface is consistent with the shadowing configuration The algorithm works with as few as four light sources and four images. We report results for light source visibility detection and high quality surface reconstructions for synthetic and real datasets.", "title": "" }, { "docid": "644936acfe1f9ffa0b5f3e8751015d86", "text": "The use of electromagnetic induction lamps without electrodes has increased because of their long life and energy efficiency. The control of the ignition and luminosity of the lamp is provided by an electronic ballast. Beyond that, the electronic ballast also provides a power factor correction, allowing the minimizing of the lamps impact on the quality of service of the electrical network. The electronic ballast includes several blocks, namely a bridge rectifier, a power factor correcting circuit (PFC), an asymmetric half-bridge inverter with a resonant filter on the inverter output, and a circuit to control the conduction time ot the ballast transistors. Index Terms – SEPIC, PFC, electrodeless lamp, ressonant filter,", "title": "" }, { "docid": "9332c32039cf782d19367a9515768e42", "text": "Maternal drug use during pregnancy is associated with fetal passive addiction and neonatal withdrawal syndrome. Cigarette smoking—highly prevalent during pregnancy—is associated with addiction and withdrawal syndrome in adults. We conducted a prospective, two-group parallel study on 17 consecutive newborns of heavy-smoking mothers and 16 newborns of nonsmoking, unexposed mothers (controls). Neurologic examinations were repeated at days 1, 2, and 5. Finnegan withdrawal score was assessed every 3 h during their first 4 d. Newborns of smoking mothers had significant levels of cotinine in the cord blood (85.8 ± 3.4 ng/mL), whereas none of the controls had detectable levels. Similar findings were observed with urinary cotinine concentrations in the newborns (483.1 ± 2.5 μg/g creatinine versus 43.6 ± 1.5 μg/g creatinine; p = 0.0001). 
Neurologic scores were significantly lower in newborns of smokers than in control infants at days 1 (22.3 ± 2.3 versus 26.5 ± 1.1; p = 0.0001), 2 (22.4 ± 3.3 versus 26.3 ± 1.6; p = 0.0002), and 5 (24.3 ± 2.1 versus 26.5 ± 1.5; p = 0.002). Neurologic scores improved significantly from day 1 to 5 in newborns of smokers (p = 0.05), reaching values closer to control infants. Withdrawal scores were higher in newborns of smokers than in control infants at days 1 (4.5 ± 1.1 versus 3.2 ± 1.4; p = 0.05), 2 (4.7 ± 1.7 versus 3.1 ± 1.1; p = 0.002), and 4 (4.7 ± 2.1 versus 2.9 ± 1.4; p = 0.007). Significant correlations were observed between markers of nicotine exposure and neurologic-and withdrawal scores. We conclude that withdrawal symptoms occur in newborns exposed to heavy maternal smoking during pregnancy.", "title": "" }, { "docid": "1a750462f0f5dea5e703c2f852e7aa38", "text": "Background: Land resource management measures, such as soil bund, trench, check dams and plantation had been practiced in Melaka watershed, Ethiopia since 2010. The objective of this study is to assess the impact of above measures on soil loss rate, vegetative cover and livelihood of the population. Results: The land cover spatial data sets were created from Landsat satellite images of 2010 and 2015 using ERDAS IMAGINE 2014®. Soil loss rate was calculated using Revised Universal Soil Loss Equation (RUSLE) and its input data were generated from field investigation, satellite imageries and rainfall analysis. Data on land resource of the study area and its impact on livelihood were collected through face-to-face interview and key informants. The results revealed that cropland decreased by 9% whereas vegetative cover and grassland increased by 96 and 136%, respectively. The soil loss rate was 19.2 Mg ha−1 year−1 in 2010 and 12.4 Mg ha−1 year−1 in 2015, accounting to 34% decrease over 5 years. It may be attributed to construction of soil bund and the biological measures practiced by the stakeholders. Consequently, land productivity and availability of forage was improved which substantially contributed to the betterment of people’s livelihood. Conclusions: The land resource management measures practiced in the study area were highly effective for reducing soil loss, improving vegetation cover and livelihood of the population.", "title": "" }, { "docid": "3f394e57febd3ffdc7414cf1af94c53b", "text": "Background recovery is a very important theme in computer vision applications. Recent research shows that robust principal component analysis (RPCA) is a promising approach for solving problems such as noise removal, video background modeling, and removal of shadows and specularity. RPCA utilizes the fact that the background is common in multiple views of a scene, and attempts to decompose the data matrix constructed from input images into a low-rank matrix and a sparse matrix. This is possible if the sparse matrix is sufficiently sparse, which may not be true in computer vision applications. Moreover, algorithmic parameters need to be fine tuned to yield accurate results. This paper proposes a fixed-rank RPCA algorithm for solving background recovering problems whose low-rank matrices have known ranks. 
Comprehensive tests show that, by fixing the rank of the low-rank matrix to a known value, the fixed-rank algorithm produces more reliable and accurate results than existing low-rank RPCA algorithm.", "title": "" }, { "docid": "a74aef75f5b1d5bc44da2f6d2c9284cf", "text": "In this paper, we define irregular bipolar fuzzy graphs and its various classifications. Size of regular bipolar fuzzy graphs is derived. The relation between highly and neighbourly irregular bipolar fuzzy graphs are established. Some basic theorems related to the stated graphs have also been presented.", "title": "" }, { "docid": "9e4b7e87229dfb02c2600350899049be", "text": "This paper presents an efficient and reliable swarm intelligence-based approach, namely elitist-mutated particle swarm optimization EMPSO technique, to derive reservoir operation policies for multipurpose reservoir systems. Particle swarm optimizers are inherently distributed algorithms, in which the solution for a problem emerges from the interactions between many simple individuals called particles. In this study the standard particle swarm optimization PSO algorithm is further improved by incorporating a new strategic mechanism called elitist-mutation to improve its performance. The proposed approach is first tested on a hypothetical multireservoir system, used by earlier researchers. EMPSO showed promising results, when compared with other techniques. To show practical utility, EMPSO is then applied to a realistic case study, the Bhadra reservoir system in India, which serves multiple purposes, namely irrigation and hydropower generation. To handle multiple objectives of the problem, a weighted approach is adopted. The results obtained demonstrate that EMPSO is consistently performing better than the standard PSO and genetic algorithm techniques. It is seen that EMPSO is yielding better quality solutions with less number of function evaluations. DOI: 10.1061/ ASCE 0733-9496 2007 133:3 192 CE Database subject headings: Reservoir operation; Optimization; Irrigation; Hydroelectric power generation.", "title": "" }, { "docid": "f267e8cfbe10decbe16fa83c97e76049", "text": "The growing prevalence of e-learning systems and on-line courses has made educational material widely accessible to students of varying abilities, backgrounds and styles. There is thus a growing need to accomodate for individual differences in such e-learning systems. This paper presents a new algorithm for personliazing educational content to students that combines collaborative filtering algorithms with social choice theory. The algorithm constructs a “difficulty” ranking over questions for a target student by aggregating the ranking of similar students, as measured by different aspects of their performance on common past questions, such as grades, number of retries, and time spent solving questions. It infers a difficulty ranking directly over the questions for a target student, rather than ordering them according to predicted performance, which is prone to error. The algorithm was tested on two large real world data sets containing tens of thousands of students and a million records. Its performance was compared to a variety of personalization methods as well as a non-personalized method that relied on a domain expert. It was able to significantly outperform all of these approaches according to standard information retrieval metrics. 
Our approach can potentially be used to support teachers in tailoring problem sets and exams to individual students and students in informing them about areas they may need to strengthen.", "title": "" }, { "docid": "8ce33eef3eaa1f89045d916869813d5d", "text": "This paper introduces a deep neural network model for subband-based speech synthesizer. The model benefits from the short bandwidth of the subband signals to reduce the complexity of the time-domain speech generator. We employed the multi-level wavelet analysis/synthesis to decompose/reconstruct the signal to subbands in time domain. Inspired from the WaveNet, a convolutional neural network (CNN) model predicts subband speech signals fully in time domain. Due to the short bandwidth of the subbands, a simple network architecture is enough to train the simple patterns of the subbands accurately. In the ground truth experiments with teacher forcing, the subband synthesizer outperforms the fullband model significantly. In addition, by conditioning the model on the phoneme sequence using a pronunciation dictionary, we have achieved the first fully time-domain neural text-to-speech (TTS) system. The generated speech of the subband TTS shows comparable quality as the fullband one with a slighter network architecture for each subband.", "title": "" }, { "docid": "8bd93bf2043a356ff40531acb372992d", "text": "Liver lesion segmentation is an important step for liver cancer diagnosis, treatment planning and treatment evaluation. LiTS (Liver Tumor Segmentation Challenge) provides a common testbed for comparing different automatic liver lesion segmentation methods. We participate in this challenge by developing a deep convolutional neural network (DCNN) method. The particular DCNN model works in 2.5D in that it takes a stack of adjacent slices as input and produces the segmentation map corresponding to the center slice. The model has 32 layers in total and makes use of both long range concatenation connections of U-Net [1] and short-range residual connections from ResNet [2]. The model was trained using the 130 LiTS training datasets and achieved an average Dice score of 0.67 when evaluated on the 70 test CT scans, which ranked first for the LiTS challenge at the time of the ISBI 2017 conference.", "title": "" }, { "docid": "1afd50a91b67bd1eab0db1c2a19a6c73", "text": "In this paper we present syntactic characterization of temporal formulas that express various properties of interest in the verification of concurrent programs. Such a characterization helps us in choosing the right techniques for proving correctness with respect to these properties. The properties that we consider include safety properties, liveness properties and fairness properties. We also present algorithms for checking if a given temporal formula expresses any of these properties.", "title": "" } ]
scidocsrr
15f5b0b5aab4f3bb6141fdac4c6471c4
The Compact 3D Convolutional Neural Network for Medical Images
[ { "docid": "2d95b9919e1825ea46b5c5e6a545180c", "text": "Computed tomography (CT) generates a stack of cross-sectional images covering a region of the body. The visual assessment of these images for the identification of potential abnormalities is a challenging and time consuming task due to the large amount of information that needs to be processed. In this article we propose a deep artificial neural network architecture, ReCTnet, for the fully-automated detection of pulmonary nodules in CT scans. The architecture learns to distinguish nodules and normal structures at the pixel level and generates three-dimensional probability maps highlighting areas that are likely to harbour the objects of interest. Convolutional and recurrent layers are combined to learn expressive image representations exploiting the spatial dependencies across axial slices. We demonstrate that leveraging intra-slice dependencies substantially increases the sensitivity to detect pulmonary nodules without inflating the false positive rate. On the publicly available LIDC/IDRI dataset consisting of 1,018 annotated CT scans, ReCTnet reaches a detection sensitivity of 90.5% with an average of 4.5 false positives per scan. Comparisons with a competing multi-channel convolutional neural network for multislice segmentation and other published methodologies using the same dataset provide evidence that ReCTnet offers significant performance gains. 1 ar X iv :1 60 9. 09 14 3v 1 [ st at .M L ] 2 8 Se p 20 16", "title": "" } ]
[ { "docid": "345a59aac1e89df5402197cca90ca464", "text": "Tony Velkov,* Philip E. Thompson, Roger L. Nation, and Jian Li* School of Medicine, Deakin University, Pigdons Road, Geelong 3217, Victoria, Australia, Medicinal Chemistry and Drug Action and Facility for Anti-infective Drug Development and Innovation, Drug Delivery, Disposition and Dynamics, Monash Institute of Pharmaceutical Sciences, Monash University, 381 Royal Parade, Parkville 3052, Victoria, Australia", "title": "" }, { "docid": "0640f60855954fa2f12a58f403aec058", "text": "Corresponding Author: Vo Ngoc Phu Nguyen Tat Thanh University, 300A Nguyen Tat Thanh Street, Ward 13, District 4, Ho Chi Minh City, 702000, Vietnam Email: vongocphu03hca@gmail.com vongocphu@ntt.edu.vn Abstract: A Data Mining Has Already Had Many Algorithms Which A KNearest Neighbors Algorithm, K-NN, Is A Famous Algorithm For Researchers. K-NN Is Very Effective On Small Data Sets, However It Takes A Lot Of Time To Run On Big Datasets. Today, Data Sets Often Have Millions Of Data Records, Hence, It Is Difficult To Implement K-NN On Big Data. In This Research, We Propose An Improvement To K-NN To Process Big Datasets In A Shortened Execution Time. The Reformed KNearest Neighbors Algorithm (R-K-NN) Can Be Implemented On Large Datasets With Millions Or Even Billions Of Data Records. R-K-NN Is Tested On A Data Set With 500,000 Records. The Execution Time Of R-KNN Is Much Shorter Than That Of K-NN. In Addition, R-K-NN Is Implemented In A Parallel Network System With Hadoop Map (M) And Hadoop Reduce (R).", "title": "" }, { "docid": "d2e434f472b60e17ab92290c78706945", "text": "In recent years, a variety of review-based recommender systems have been developed, with the goal of incorporating the valuable information in user-generated textual reviews into the user modeling and recommending process. Advanced text analysis and opinion mining techniques enable the extraction of various types of review elements, such as the discussed topics, the multi-faceted nature of opinions, contextual information, comparative opinions, and reviewers’ emotions. In this article, we provide a comprehensive overview of how the review elements have been exploited to improve standard content-based recommending, collaborative filtering, and preference-based product ranking techniques. The review-based recommender system’s ability to alleviate the well-known rating sparsity and cold-start problems is emphasized. This survey classifies state-of-the-art studies into two principal branches: review-based user profile building and review-based product profile building. In the user profile sub-branch, the reviews are not only used to create term-based profiles, but also to infer or enhance ratings. Multi-faceted opinions can further be exploited to derive the weight/value preferences that users place on particular features. In another sub-branch, the product profile can be enriched with feature opinions or comparative opinions to better reflect its assessment quality. The merit of each branch of work is discussed in terms of both algorithm development and the way in which the proposed algorithms are evaluated. In addition, we discuss several future trends based on the survey, which may inspire investigators to pursue additional studies in this area.", "title": "" }, { "docid": "4a22a7dbcd1515e2b1b6e7748ffa3e02", "text": "Average public feedback scores given to sellers have increased strongly over time in an online labor market. 
Changes in marketplace composition or improved seller performance cannot fully explain this trend. We propose that two factors inflated reputations: (1) it costs more to give bad feedback than good feedback and (2) this cost to raters is increasing in the cost to sellers from bad feedback. Together, (1) and (2) can lead to an equilibrium where feedback is always positive, regardless of performance. In response, the marketplace encouraged buyers to additionally give private feedback. This private feedback was substantially more candid and more predictive of future worker performance. When aggregates of private feedback about each job applicant were experimentally provided to employers as a private feedback score, employers used these scores when making screening and hiring decisions.", "title": "" }, { "docid": "3615db7b4a62f981ef62062084597ca5", "text": "Adoption is a topic of crucial importance both to those directly involved and to society. Yet, at this writing, the federal government collects no comprehensive national statistics on adoption. The purpose of this article is to address what we do know, what we do not know, and what we need to know about the statistics on adoption. The article provides an overview of adoption and describes data available regarding adoption arrangements and the characteristics of parents who relinquish children, of children who are adopted or in substitute care, and of adults who seek to adopt. Recommendations for future data collection are offered, including the establishment of a national data collection system for adoption statistics. Adoption is an issue of vital importance for all persons involved in the adoption triangle: the child, the adoptive parents, and the birthparents. According to national estimates, one million children in the United States live with adoptive parents, and from 2% to 4% of American families include an adopted child. Adoption is most important for infertile couples seeking children and children in need of parents. Yet adoption issues also have consequences for the larger society in such areas as public welfare and mental health. Additionally, adoption can be framed as a public health issue, particularly in light of increasing numbers of pediatric AIDS cases and concerns regarding drug-exposed infants, and \"boarder\" babies available for adoption. Adoption is also often supported as an alternative to abortion. Limitations of Available Data Despite the importance of adoption to many groups, it remains an underresearched area and a topic on which the data are incomplete. Indeed, at this writing, no comprehensive national data on adoption are collected by the federal government. Through the Children's Bureau and later the National Center for Social Statistics (NCSS), the federal government collected adoption data periodically between 1944 and 1957, then annually from 1957 to 1975. States voluntarily reported summary statistics on all types of finalized adoptions using data primarily drawn from court records. The number of states and territories participating in this reporting system varied from year to year, ranging from a low of 22 in 1944 to a high of 52 during the early 1960s. This data collection effort ended in 1975 with the dissolution of the NCSS.
", "title": "" }, { "docid": "d34cc5c09e882c167b3ff273f5c52159", "text": "Received: 23 May 2011 Revised: 20 February 2012 2nd Revision: 7 September 2012 3rd Revision: 6 November 2012 Accepted: 7 November 2012 Abstract Competitive pressures are forcing organizations to be flexible. Being responsive to changing environmental conditions is an important factor in determining corporate performance. Earlier research, focusing primarily on IT infrastructure, has shown that organizational flexibility is closely related to IT infrastructure flexibility. Using real-world cases, this paper explores flexibility in the broader context of the IS function. An empirically derived framework for better understanding and managing IS flexibility is developed using grounded theory and content analysis. A process model for managing flexibility is presented; it includes steps for understanding contextual factors, recognizing reasons why flexibility is important, evaluating what needs to be flexible, identifying flexibility categories and stakeholders, diagnosing types of flexibility needed, understanding synergies and tradeoffs between them, and prescribing strategies for proactively managing IS flexibility. Three major flexibility categories, flexibility in IS operations, flexibility in IS systems & services development and deployment, and flexibility in IS management, containing 10 IS flexibility types are identified and described. European Journal of Information Systems (2014) 23, 151–184. doi:10.1057/ejis.2012.53; published online 8 January 2013", "title": "" }, { "docid": "e035233d3787ea79c446d1716553d41e", "text": "In this paper, we propose a method of detecting and classifying web application attacks. In contrast to current signature-based security methods, our solution is an ontology based technique. It specifies web application attacks by using semantic rules, the context of consequence and the specifications of application protocols. The system is capable of detecting sophisticated attacks effectively and efficiently by analyzing the specified portion of a user request where attacks are possible. Semantic rules help to capture the context of the application, possible attacks and the protocol that was used. These rules also allow inference to run over the ontological models in order to detect, the often complex polymorphic variations of web application attacks. The ontological model was developed using Description Logic that was based on the Web Ontology Language (OWL). The inference rules are Horn Logic statements and are implemented using the Apache JENA framework. The system is therefore platform and technology independent. Prior to the evaluation of the system the knowledge model was validated by using OntoClean to remove inconsistency, incompleteness and redundancy in the specification of ontological concepts. The experimental results show that the detection capability and performance of our system is significantly better than existing state of the art solutions. The system successfully detects web application attacks whilst generating few false positives. The examples that are presented demonstrate that a semantic approach can be used to effectively detect zero day and more sophisticated attacks in a real-world environment. 2013 Elsevier Inc.
All rights reserved.", "title": "" }, { "docid": "a394dafb3ffd6a66bdf4fe3fb0b03f40", "text": "Part-of-speech tagging, like any supervised statistical NLP task, is more difficult when test sets are very different from training sets, for example when tagging across genres or language varieties. We examined the problem of POS tagging of different varieties of Mandarin Chinese (PRC-Mainland, PRCHong Kong, and Taiwan). An analytic study first showed that unknown words were a major source of difficulty in cross-variety tagging. Unknown words in English tend to be proper nouns. By contrast, we found that Mandarin unknown words were mostly common nouns and verbs. We showed these results are caused by the high frequency of morphological compounding in Mandarin; in this sense Mandarin is more like German than English. Based on this analysis, we propose a variety of new morphological unknown-word features for POS tagging, extending earlier work by others on unknown-word tagging in English and German. Our features were implemented in a maximum entropy Markov model. Our system achieves state-of-the-art performance in Mandarin tagging, including improving unknown-word tagging performance on unseen varieties in Chinese Treebank 5.0 from 61% to 80% correct.", "title": "" }, { "docid": "0f56b99bc1d2c9452786c05242c89150", "text": "Individuals with below-knee amputation have more difficulty balancing during walking, yet few studies have explored balance enhancement through active prosthesis control. We previously used a dynamical model to show that prosthetic ankle push-off work affects both sagittal and frontal plane dynamics, and that appropriate step-by-step control of push-off work can improve stability. We hypothesized that this approach could be applied to a robotic prosthesis to partially fulfill the active balance requirements of human walking, thereby reducing balance-related activity and associated effort for the person using the device. We conducted experiments on human participants (N = 10) with simulated amputation. Prosthetic ankle push-off work was varied on each step in ways expected to either stabilize, destabilize or have no effect on balance. Average ankle push-off work, known to affect effort, was kept constant across conditions. Stabilizing controllers commanded more push-off work on steps when the mediolateral velocity of the center of mass was lower than usual at the moment of contralateral heel strike. Destabilizing controllers enforced the opposite relationship, while a neutral controller maintained constant push-off work regardless of body state. A random disturbance to landing foot angle and a cognitive distraction task were applied, further challenging participants’ balance. We measured metabolic rate, foot placement kinematics, center of pressure kinematics, distraction task performance, and user preference in each condition. We expected the stabilizing controller to reduce active control of balance and balance-related effort for the user, improving user preference. The best stabilizing controller lowered metabolic rate by 5.5% (p = 0.003) and 8.5% (p = 0.02), and step width variability by 10.0% (p = 0.009) and 10.7% (p = 0.03) compared to conditions with no control and destabilizing control, respectively. Participants tended to prefer stabilizing controllers. These effects were not due to differences in average push-off work, which was unchanged across conditions, or to average gait mechanics, which were also unchanged. 
Instead, benefits were derived from step-by-step adjustments to prosthesis behavior in response to variations in mediolateral velocity at heel strike. Once-per-step control of prosthetic ankle push-off work can reduce both active control of foot placement and balance-related metabolic energy use during walking.", "title": "" }, { "docid": "63a583de2dbbbd9aada8a685ec9edc78", "text": "BACKGROUND\nVarious nerve blocks with local anaesthetic agents have been used to reduce pain after hip fracture and subsequent surgery. This review was published originally in 1999 and was updated in 2001, 2002, 2009 and 2017.\n\n\nOBJECTIVES\nThis review focuses on the use of peripheral nerves blocks as preoperative analgesia, as postoperative analgesia or as a supplement to general anaesthesia for hip fracture surgery. We undertook the update to look for new studies and to update the methods to reflect Cochrane standards.\n\n\nSEARCH METHODS\nFor the updated review, we searched the following databases: the Cochrane Central Register of Controlled Trials (CENTRAL; 2016, Issue 8), MEDLINE (Ovid SP, 1966 to August week 1 2016), Embase (Ovid SP, 1988 to 2016 August week 1) and the Cumulative Index to Nursing and Allied Health Literature (CINAHL) (EBSCO, 1982 to August week 1 2016), as well as trial registers and reference lists of relevant articles.\n\n\nSELECTION CRITERIA\nWe included randomized controlled trials (RCTs) involving use of nerve blocks as part of the care provided for adults aged 16 years and older with hip fracture.\n\n\nDATA COLLECTION AND ANALYSIS\nTwo review authors independently assessed new trials for inclusion, determined trial quality using the Cochrane tool and extracted data. When appropriate, we pooled results of outcome measures. We rated the quality of evidence according to the GRADE Working Group approach.\n\n\nMAIN RESULTS\nWe included 31 trials (1760 participants; 897 randomized to peripheral nerve blocks and 863 to no regional blockade). Results of eight trials with 373 participants show that peripheral nerve blocks reduced pain on movement within 30 minutes of block placement (standardized mean difference (SMD) -1.41, 95% confidence interval (CI) -2.14 to -0.67; equivalent to -3.4 on a scale from 0 to 10; I2 = 90%; high quality of evidence). Effect size was proportionate to the concentration of local anaesthetic used (P < 0.00001). Based on seven trials with 676 participants, we did not find a difference in the risk of acute confusional state (risk ratio (RR) 0.69, 95% CI 0.38 to 1.27; I2 = 48%; very low quality of evidence). Three trials with 131 participants reported decreased risk for pneumonia (RR 0.41, 95% CI 0.19 to 0.89; I2 = 3%; number needed to treat for an additional beneficial outcome (NNTB) 7, 95% CI 5 to 72; moderate quality of evidence). We did not find a difference in risk of myocardial ischaemia or death within six months, but the number of participants included was well below the optimal information size for these two outcomes. Two trials with 155 participants reported that peripheral nerve blocks also reduced time to first mobilization after surgery (mean difference -11.25 hours, 95% CI -14.34 to -8.15 hours; I2 = 52%; moderate quality of evidence). 
One trial with 75 participants indicated that the cost of analgesic drugs was lower when they were given as a single shot block (SMD -3.48, 95% CI -4.23 to -2.74; moderate quality of evidence).\n\n\nAUTHORS' CONCLUSIONS\nHigh-quality evidence shows that regional blockade reduces pain on movement within 30 minutes after block placement. Moderate-quality evidence shows reduced risk for pneumonia, decreased time to first mobilization and cost reduction of the analgesic regimen (single shot blocks).", "title": "" }, { "docid": "9af37841feed808345c39ee96ddff914", "text": "Wake-up receivers (WuRXs) are low-power radios that continuously monitor the RF environment to wake up a higher-power radio upon detection of a predetermined RF signature. Prior-art WuRXs have 100s of kHz of bandwidth [1] with low signature-to-wake-up-signal latency to help synchronize communication amongst nominally asynchronous wireless devices. However, applications such as unattended ground sensors and smart home appliances wake-up infrequently in an event-driven manner, and thus WuRX bandwidth and latency are less critical; instead, the most important metrics are power consumption and sensitivity. Unfortunately, current state-of-the-art WuRXs utilizing direct envelope-detecting [2] and IF/uncertain-IF [1,3] architectures (Fig. 24.5.1) achieve only modest sensitivity at low-power (e.g., −39dBm at 104nW [2]), or achieve excellent sensitivity at higher-power (e.g., −97dBm at 99µW [3]) via active IF gain elements. Neither approach meets the needs of next-generation event-driven sensing networks.", "title": "" }, { "docid": "25cbc3f8f9ecbeb89c2c49c044e61c2a", "text": "This study investigated lying behavior and the behavior of people who are deceived by using a deception game (Gneezy, 2005) in both anonymity and face-to-face treatments. Subjects consist of students and non-students (citizens) to investigate whether lying behavior is depended on socioeconomic backgrounds. To explore how liars feel about lying, we give senders a chance to confess their behaviors to their counter partner for the guilty aversion of lying. The following results are obtained: i) a frequency of lying behavior for students is significantly higher than that for non-students at a payoff in the anonymity treatment, but that is not significantly difference between the anonymity and face-to-face treatments; ii) lying behavior is not influenced by gender; iii) a frequency of confession is higher in the face-to-face treatment than in the anonymity treatment; and iv) the receivers who are deceived are more likely to believe a sender’s message to be true in the anonymity treatment. This study implies that the existence of the partner prompts liars to confess their behavior because they may feel remorse or guilt.", "title": "" }, { "docid": "93ee57bae5f3e7a9aabafe033302c7f8", "text": "Dialog state tracking - the process of updating the dialog state after each interaction with the user - is a key component of most dialog systems. Following a similar scheme to the fourth dialog state tracking challenge, this edition again focused on human-human dialogs, but introduced the task of cross-lingual adaptation of trackers. The challenge received a total of 32 entries from 9 research groups. In addition, several pilot track evaluations were also proposed receiving a total of 16 entries from 4 groups. 
In both cases, the results show that most of the groups were able to outperform the provided baselines for each task.", "title": "" }, { "docid": "2201ca2f10699276d68e380fd1069086", "text": "After integrating five higher-order personality traits in an extended model of technology acceptance, Devaraj et al. (2008) called for further research including personality in information systems research to understand the formation of perceptual beliefs and behaviors in more detail. To assist such future research endeavors, this article gives an overview on prior research discussing personality within the six plus two journals of the AIS Senior Basket (MISQ, ISR, JMIS, JAIS, EJIS, ISJ, JSIS, JIT) 1 . Therefore, the Theory of a Person approach (ToP) derived from psychology research serves as the underlying conceptual matrix. Within the literature analysis, we identify 30 articles discussing personality traits on distinct hierarchical levels in three fields of information systems research. Results of the literature analysis reveal a shift of examined traits over the last years. In addition, research gaps are identified so that propositions are derived. Further research results and implications are discussed within the article.", "title": "" }, { "docid": "3f9bb5e1b9b6d4d44cb9741a32f7325f", "text": "Deep learning with convolutional neural networks (deep ConvNets) has revolutionized computer vision through end-to-end learning, that is, learning from the raw data. There is increasing interest in using deep ConvNets for end-to-end EEG analysis, but a better understanding of how to design and train ConvNets for end-to-end EEG decoding and how to visualize the informative EEG features the ConvNets learn is still needed. Here, we studied deep ConvNets with a range of different architectures, designed for decoding imagined or executed tasks from raw EEG. Our results show that recent advances from the machine learning field, including batch normalization and exponential linear units, together with a cropped training strategy, boosted the deep ConvNets decoding performance, reaching at least as good performance as the widely used filter bank common spatial patterns (FBCSP) algorithm (mean decoding accuracies 82.1% FBCSP, 84.0% deep ConvNets). While FBCSP is designed to use spectral power modulations, the features used by ConvNets are not fixed a priori. Our novel methods for visualizing the learned features demonstrated that ConvNets indeed learned to use spectral power modulations in the alpha, beta, and high gamma frequencies, and proved useful for spatially mapping the learned features by revealing the topography of the causal contributions of features in different frequency bands to the decoding decision. Our study thus shows how to design and train ConvNets to decode task-related information from the raw EEG without handcrafted features and highlights the potential of deep ConvNets combined with advanced visualization techniques for EEG-based brain mapping. Hum Brain Mapp 38:5391-5420, 2017. © 2017 Wiley Periodicals, Inc.", "title": "" }, { "docid": "7a619f349e8b62b016db98e7526c04a6", "text": "Although sensor noise is generally known as a very reliable means to uniquely identify digital cameras, care has to be taken with respect to camera model characteristics that may cause false accusations. While earlier reports focused on so-called linear patterns with a regular grid structure, also distortions due to geometric corrections of radial lens distortion have recently gained interest. 
Here, we report observations from a case study with the 'Dresden Image Database' that revealed further artefacts. We found diagonal line artefacts in Nikon CoolPix S710 sensor noise, as well as non-trivial dependencies between sensor noise, exposure time (FujiFilm J50) and focal length (Casio EX-Z150). At slower shutter speeds, original J50 images exhibit a slight horizontal shift, whereas EX-Z150 images exhibit irregular geometric distortions, which depend on the focal length and which become visible in the p-map of state-of-the-art resampling detectors. The observed artefacts may provide valuable clues for camera model identification, but also call for particular attention when creating reference noise patterns for applications that require low false negative rates.", "title": "" }, { "docid": "893408bc41eb46a75fc59e23f74339cf", "text": "We discuss cutting stock problems (CSPs) from the perspective of the paper industry and the financial impact they make. Exact solution approaches and heuristics have been used for decades to support cutting stock decisions in that industry. We have developed polylithic solution techniques integrated in our ERP system to solve a variety of cutting stock problems occurring in real world problems. Among them is the simultaneous minimization of the number of rolls and the number of patterns while not allowing any overproduction. For two cases, CSPs minimizing underproduction and CSPs with master rolls of different widths and availability, we have developed new column generation approaches. The methods are numerically tested using real world data instances. An assembly of current solved and unsolved standard and non-standard CSPs at the forefront of research are put in perspective.", "title": "" }, { "docid": "e91c18f5509e05471d20d4e28e03b014", "text": "This paper describes the design of a broadside circularly polarized uniform circular array based on curved planar inverted F-antenna elements. Circular polarization (CP) is obtained by exploiting the sequential rotation technique and implementing it with a series feed network. The proposed structure is first introduced, and some geometrical considerations are derived. Second, the array radiation body is designed taking into account the mutual coupling among antenna elements. Third, the series feed network usually employed for four-antenna element arrays is analyzed and extended to three and more than four antennas exploiting the special case of equal power distribution. The array is designed with three-, four-, five-, and six-antenna elements, and dimensions, impedance bandwidth (defined for <inline-formula> <tex-math notation=\"LaTeX\">$S_{11}\\leq -10$ </tex-math></inline-formula> dB), axial ratio (AR) bandwidth (<inline-formula> <tex-math notation=\"LaTeX\">$\\text {AR}\\leq 3$ </tex-math></inline-formula> dB), gain, beamwidth, front-to-back ratio, and cross-polarization level are compared. Arrays with three and five elements are also prototyped to benchmark the numerical analysis results, finding good correspondence.", "title": "" }, { "docid": "a6e71e4be58c51b580fcf08e9d1a100a", "text": "This dissertation is concerned with the processing of high velocity text using event processing means. It comprises a scientific approach for combining the area of information filtering and event processing, in order to analyse fast and voluminous streams of text. 
In order to be able to process text streams within event driven means, an event reference model was developed that allows for the conversion of unstructured or semi-structured text streams into discrete event types on which event processing engines can operate. Additionally, a set of essential reference processes in the domain of information filtering and text stream analysis were described using eventdriven concepts. In a second step, a reference architecture was designed that described essential architectural components required for the design of information filtering and text stream analysis systems in an event-driven manner. Further to this, a set of architectural patterns for building event driven text analysis systems was derived that support the design and implementation of such systems. Subsequently, a prototype was built using the theoretic foundations. This system was initially used to study the effect of sliding window sizes on the properties of dynamic sub-corpora. It could be shown that small sliding window based corpora are similar to larger sliding windows and thus can be used as a resource-saving alternative. Next, a study of several linguistic aspects of text streams was undertaken that showed that event stream summary statistics can provide interesting insights into the characteristics of high velocity text streams. Finally, four essential information filtering and text stream analysis components were studied, viz. filter policies, term weighting, thresholds and query expansion. These were studied using three temporal search profile types and were evaluated using standard performance measures. The goal was to study the efficiency of traditional as well as new algorithms within the given context of high velocity text stream data, in order to provide advise which methods work best. The results of this dissertation are intended to provide software architects and developers with valuable information for the design and implementation of event-driven text stream analysis systems.", "title": "" } ]
scidocsrr
c92e25f5d839b9fe1b8e7685305320fc
A novel paradigm for calculating Ramsey number via Artificial Bee Colony Algorithm
[ { "docid": "828c54f29339e86107f1930ae2a5e77f", "text": "Artificial bee colony (ABC) algorithm is an optimization algorithm based on a particular intelligent behaviour of honeybee swarms. This work compares the performance of ABC algorithm with that of differential evolution (DE), particle swarm optimization (PSO) and evolutionary algorithm (EA) for multi-dimensional numeric problems. The simulation results show that the performance of ABC algorithm is comparable to those of the mentioned algorithms and can be efficiently employed to solve engineering problems with high dimensionality. # 2007 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "35ab98f6e5b594261e52a21740c70336", "text": "Artificial Bee Colony (ABC) algorithm which is one of the most recently introduced optimization algorithms, simulates the intelligent foraging behavior of a honey bee swarm. Clustering analysis, used in many disciplines and applications, is an important tool and a descriptive task seeking to identify homogeneous groups of objects based on the values of their attributes. In this work, ABC is used for data clustering on benchmark problems and the performance of ABC algorithm is compared with Particle Swarm Optimization (PSO) algorithm and other nine classification techniques from the literature. Thirteen of typical test data sets from the UCI Machine Learning Repository are used to demonstrate the results of the techniques. The simulation results indicate that ABC algorithm can efficiently be used for multivariate data clustering. © 2009 Elsevier B.V. All rights reserved.", "title": "" } ]
[ { "docid": "4847eb4451c597d4656cf48c242cf252", "text": "Despite the independent evolution of multicellularity in plants and animals, the basic organization of their stem cell niches is remarkably similar. Here, we report the genome-wide regulatory potential of WUSCHEL, the key transcription factor for stem cell maintenance in the shoot apical meristem of the reference plant Arabidopsis thaliana. WUSCHEL acts by directly binding to at least two distinct DNA motifs in more than 100 target promoters and preferentially affects the expression of genes with roles in hormone signaling, metabolism, and development. Striking examples are the direct transcriptional repression of CLAVATA1, which is part of a negative feedback regulation of WUSCHEL, and the immediate regulation of transcriptional repressors of the TOPLESS family, which are involved in auxin signaling. Our results shed light on the complex transcriptional programs required for the maintenance of a dynamic and essential stem cell niche.", "title": "" }, { "docid": "9c0d65ee42ccfaa291b576568bad59e0", "text": "BACKGROUND\nThe WHO International Classification of Diseases, 11th version (ICD-11), has proposed two related diagnoses following exposure to traumatic events; Posttraumatic Stress Disorder (PTSD) and Complex PTSD (CPTSD). We set out to explore whether the newly developed ICD-11 Trauma Questionnaire (ICD-TQ) can distinguish between classes of individuals according to the PTSD and CPTSD symptom profiles as per ICD-11 proposals based on latent class analysis. We also hypothesized that the CPTSD class would report more frequent and a greater number of different types of childhood trauma as well as higher levels of functional impairment. Methods Participants in this study were a sample of individuals who were referred for psychological therapy to a National Health Service (NHS) trauma centre in Scotland (N=193). Participants completed the ICD-TQ as well as measures of life events and functioning.\n\n\nRESULTS\nOverall, results indicate that using the newly developed ICD-TQ, two subgroups of treatment-seeking individuals could be empirically distinguished based on different patterns of symptom endorsement; a small group high in PTSD symptoms only and a larger group high in CPTSD symptoms. In addition, CPTSD was more strongly associated with more frequent and a greater accumulation of different types of childhood traumatic experiences and poorer functional impairment.\n\n\nLIMITATIONS\nSample predominantly consisted of people who had experienced childhood psychological trauma or been multiply traumatised in childhood and adulthood.\n\n\nCONCLUSIONS\nCPTSD is highly prevalent in treatment seeking populations who have been multiply traumatised in childhood and adulthood and appropriate interventions should now be developed to aid recovery from this debilitating condition.", "title": "" }, { "docid": "ac52504a90be9cd685a10f73603d3776", "text": "Unsupervised domain adaption aims to learn a powerful classifier for the target domain given a labeled source data set and an unlabeled target data set. To alleviate the effect of ‘domain shift’, the major challenge in domain adaptation, studies have attempted to align the distributions of the two domains. Recent research has suggested that generative adversarial network (GAN) has the capability of implicitly capturing data distribution. In this paper, we thus propose a simple but effective model for unsupervised domain adaption leveraging adversarial learning. 
The same encoder is shared between the source and target domains which is expected to extract domain-invariant representations with the help of an adversarial discriminator. With the labeled source data, we introduce the center loss to increase the discriminative power of feature learned. We further align the conditional distribution of the two domains to enforce the discrimination of the features in the target domain. Unlike previous studies where the source features are extracted with a fixed pre-trained encoder, our method jointly learns feature representations of two domains. Moreover, by sharing the encoder, the model does not need to know the source of images during testing and hence is more widely applicable. We evaluate the proposed method on several unsupervised domain adaption benchmarks and achieve superior or comparable performance to state-of-the-art results.", "title": "" }, { "docid": "1aac7dedc18b437966b31cf04f1b7efc", "text": "Massive open online courses (MOOCs) continue to appear across the higher education landscape, originating from many institutions in the USA and around the world. MOOCs typically have low completion rates, at least when compared with traditional courses, as this course delivery model is very different from traditional, fee-based models, such as college courses. This research examined MOOC student demographic data, intended behaviours and course interactions to better understand variables that are indicative of MOOC completion. The results lead to ideas regarding how these variables can be used to support MOOC students through the application of learning analytics tools and systems.", "title": "" }, { "docid": "575d8fed62c2afa1429d16444b6b173c", "text": "Research into learning and teaching in higher education over the last 25 years has provided a variety of concepts, methods, and findings that are of both theoretical interest and practical relevance. It has revealed the relationships between students’ approaches to studying, their conceptions of learning, and their perceptions of their academic context. It has revealed the relationships between teachers’ approaches to teaching, their conceptions of teaching, and their perceptions of the teaching environment. And it has provided a range of tools that can be exploited for developing our understanding of learning and teaching in particular contexts and for assessing and enhancing the student experience on specific courses and programs.", "title": "" }, { "docid": "12d4c8ff1072fece3fea7eeac43c3fc5", "text": "Multi-agent path finding (MAPF) is well-studied in artificial intelligence, robotics, theoretical computer science and operations research. We discuss issues that arise when generalizing MAPF methods to real-world scenarios and four research directions that address them. We emphasize the importance of addressing these issues as opposed to developing faster methods for the standard formulation of the MAPF problem.", "title": "" }, { "docid": "e94f453a3301ca86bed19162ad1cb6e1", "text": "Linux scheduling is based on the time-sharing technique already introduced in the section \"CPU's Time Sharing\" in Chapter 5, Timing Measurements: several processes are allowed to run \"concurrently,\" which means that the CPU time is roughly divided into \"slices,\" one for each runnable process.[1] Of course, a single processor can run only one process at any given instant. If a currently running process is not terminated when its time slice or quantum expires, a process switch may take place. 
Time-sharing relies on timer interrupts and is thus transparent to processes. No additional code needs to be inserted in the programs in order to ensure CPU time-sharing.", "title": "" }, { "docid": "4d69284c25e1a9a503dd1c12fde23faa", "text": "Human pose estimation has been actively studied for decades. While traditional approaches rely on 2d data like images or videos, the development of Time-of-Flight cameras and other depth sensors created new opportunities to advance the field. We give an overview of recent approaches that perform human motion analysis which includes depth-based and skeleton-based activity recognition, head pose estimation, facial feature detection, facial performance capture, hand pose estimation and hand gesture recognition. While the focus is on approaches using depth data, we also discuss traditional image based methods to provide a broad overview of recent developments in these areas.", "title": "" }, { "docid": "2c266af949495f7cd32b8abdf1a04803", "text": "Humans rely on eye gaze and hand manipulations extensively in their everyday activities. Most often, users gaze at an object to perceive it and then use their hands to manipulate it. We propose applying a multimodal, gaze plus free-space gesture approach to enable rapid, precise and expressive touch-free interactions. We show the input methods are highly complementary, mitigating issues of imprecision and limited expressivity in gaze-alone systems, and issues of targeting speed in gesture-alone systems. We extend an existing interaction taxonomy that naturally divides the gaze+gesture interaction space, which we then populate with a series of example interaction techniques to illustrate the character and utility of each method. We contextualize these interaction techniques in three example scenarios. In our user study, we pit our approach against five contemporary approaches; results show that gaze+gesture can outperform systems using gaze or gesture alone, and in general, approach the performance of \"gold standard\" input systems, such as the mouse and trackpad.", "title": "" }, { "docid": "ceb42399b7cd30b15d27c30d7c4b57b6", "text": "In this letter, the performance of non-orthogonal multiple access (NOMA) is investigated from an information-theoretic perspective. The relationships among the capacity region of broadcast channels and two rate regions achieved by NOMA and time-division multiple access (TDMA) are illustrated first. Then, the performance of NOMA is evaluated by considering TDMA as the benchmark, where both the sum rate and the individual user rates are used as the criteria. In a wireless downlink scenario with user pairing, the developed analytical results show that NOMA can outperform TDMA not only for the sum rate but also for each user's individual rate, particularly when the difference between the users' channels is large. I. INTRODUCTION Because of its superior spectral efficiency, non-orthogonal multiple access (NOMA) has been recognized as a promising technique to be used in the fifth generation (5G) networks [1] – [4]. NOMA utilizes the power domain for achieving multiple access, i.e., different users are served at different power levels. Unlike conventional orthogonal MA, such as time-division multiple access (TDMA), NOMA faces strong co-channel interference between different users, and successive interference cancellation (SIC) is used by the NOMA users with better channel conditions for interference management.
The concept of NOMA is essentially a special case of superposition coding developed for broadcast channels (BC). Cover first found the capacity region of a degraded discrete memoryless BC by using superposition coding [5]. Then, the capacity region of the Gaussian BC with single-antenna terminals was established in [6]. Moreover, the capacity region of the multiple-input multiple-output (MIMO) Gaussian BC was found in [7], by applying dirty paper coding (DPC) instead of superposition coding. This paper mainly focuses on the single-antenna scenario. Specifically, consider a Gaussian BC with a single-antenna transmitter and two single-antenna receivers, where each receiver is corrupted by additive Gaussian noise with unit variance. Denote the ordered channel gains from the transmitter to the two receivers by hw and hb, i.e., |hw| < |hb|. For a given channel pair (hw, hb), the capacity region is given by [6] C ≜ ⋃_{a1+a2=1, a1,a2≥0} { (R1, R2) : R1, R2 ≥ 0, R1 ≤ log2(1 + a1x/(1 + a2x)), R2 ≤ log2(1 + a2y) }", "title": "" }, { "docid": "8b5b4950177030e7664d57724acd52a3", "text": "With the fast development of industrial Internet of things (IIoT), a large amount of data is being generated continuously by different sources. Storing all the raw data in the IIoT devices locally is unwise considering that the end devices' energy and storage spaces are strictly limited. In addition, the devices are unreliable and vulnerable to many threats because the networks may be deployed in remote and unattended areas. In this paper, we discuss the emerging challenges in the aspects of data processing, secure data storage, efficient data retrieval and dynamic data collection in IIoT. Then, we design a flexible and economical framework to solve the problems above by integrating the fog computing and cloud computing. Based on the time latency requirements, the collected data are processed and stored by the edge server or the cloud server. Specifically, all the raw data are first preprocessed by the edge server and then the time-sensitive data (e.g., control information) are used and stored locally. The non-time-sensitive data (e.g., monitored data) are transmitted to the cloud server to support data retrieval and mining in the future. A series of experiments and simulation are conducted to evaluate the performance of our scheme. The results illustrate that the proposed framework can greatly improve the efficiency and security of data storage and retrieval in IIoT.", "title": "" }, { "docid": "dc9a92313c58b5e688a3502b994e6d3a", "text": "This paper explores the application of Activity-Based Costing and Activity-Based Management in ecommerce. The proposed application may lead to better firm performance of many companies in offering their products and services over the Internet. A case study of a fictitious Business-to-Customer (B2C) company is used to illustrate the proposed structured implementation procedure and effects of an Activity-Based Costing analysis. The analysis is performed by using matrixes in order to trace overhead. The Activity-Based Costing analysis is then used to demonstrate operational and strategic Activity-Based Management in e-commerce.", "title": "" }, { "docid": "e3566963e4307c15086a54afe7661f32", "text": "Next-generation wireless networks must support ultra-reliable, low-latency communication and intelligently manage a massive number of Internet of Things (IoT) devices in real-time, within a highly dynamic environment.
This need for stringent communication quality-of-service (QoS) requirements as well as mobile edge and core intelligence can only be realized by integrating fundamental notions of artificial intelligence (AI) and machine learning across the wireless infrastructure and end-user devices. In this context, this paper provides a comprehensive tutorial that introduces the main concepts of machine learning, in general, and artificial neural networks (ANNs), in particular, and their potential applications in wireless communications. For this purpose, we present a comprehensive overview on a number of key types of neural networks that include feed-forward, recurrent, spiking, and deep neural networks. For each type of neural network, we present the basic architecture and training procedure, as well as the associated challenges and opportunities. Then, we provide an in-depth overview on the variety of wireless communication problems that can be addressed using ANNs, ranging from communication using unmanned aerial vehicles to virtual reality and edge caching.For each individual application, we present the main motivation for using ANNs along with the associated challenges while also providing a detailed example for a use case scenario and outlining future works that can be addressed using ANNs. In a nutshell, this article constitutes one of the first holistic tutorials on the development of machine learning techniques tailored to the needs of future wireless networks. This research was supported by the U.S. National Science Foundation under Grants CNS-1460316 and IIS-1633363. ar X iv :1 71 0. 02 91 3v 1 [ cs .I T ] 9 O ct 2 01 7", "title": "" }, { "docid": "ea8685f27096f3e3e589ea8af90e78f5", "text": "Acoustic data transmission is a technique to embed the data in a sound wave imperceptibly and to detect it at the receiver. This letter proposes a novel acoustic data transmission system designed based on the modulated complex lapped transform (MCLT). In the proposed system, data is embedded in an audio file by modifying the phases of the original MCLT coefficients. The data can be transmitted by playing the embedded audio and extracting it from the received audio. By embedding the data in the MCLT domain, the perceived quality of the resulting audio could be kept almost similar as the original audio. The system can transmit data at several hundreds of bits per second (bps), which is sufficient to deliver some useful short messages.", "title": "" }, { "docid": "a0f8af71421d484cbebb550a0bf59a6d", "text": "researchers and practitioners doing work in these three related areas. Risk management, fraud detection, and intrusion detection all involve monitoring the behavior of populations of users (or their accounts) to estimate, plan for, avoid, or detect risk. In his paper, Til Schuermann (Oliver, Wyman, and Company) categorizes risk into market risk, credit risk, and operating risk (or fraud). Similarly, Barry Glasgow (Metropolitan Life Insurance Co.) discusses inherent risk versus fraud. This workshop focused primarily on what might loosely be termed “improper behavior,” which includes fraud, intrusion, delinquency, and account defaulting. However, Glasgow does discuss the estimation of “inherent risk,” which is the bread and butter of insurance firms. Problems of predicting, preventing, and detecting improper behavior share characteristics that complicate the application of existing AI and machine-learning technologies. 
In particular, these problems often have or require more than one of the following that complicate the technical problem of automatically learning predictive models: large volumes of (historical) data, highly skewed distributions (“improper behavior” occurs far less frequently than “proper behavior”), changing distributions (behaviors change over time), widely varying error costs (in certain contexts, false positive errors are far more costly than false negatives), costs that change over time, adaptation of undesirable behavior to detection techniques, changing patterns of legitimate behavior, the trad■ The 1997 AAAI Workshop on AI Approaches to Fraud Detection and Risk Management brought together over 50 researchers and practitioners to discuss problems of fraud detection, computer intrusion detection, and risk scoring. This article presents highlights, including discussions of problematic issues that are common to these application domains, and proposed solutions that apply a variety of AI techniques.", "title": "" }, { "docid": "c21a1a07918d86dab06d84e0e4e7dc05", "text": "Big data potential value across business sectors has received tremendous attention from the practitioner and academia world. The huge amount of data collected in different forms in organizations promises to radically transform the business landscape globally. The impact of big data, which is spreading across all business sectors, has potential to create new opportunities for growth. With organizations now able to store huge diverse amounts of data from different sources and forms, big data is expected to deliver tremendous value across business sectors. This paper focuses on building a business case for big data adoption in organizations. This paper discusses some of the opportunities and potential benefits associated with big data adoption across various business sectors globally. The discussion is important for making a business case for big data investment in organizations, which is major challenge for its adoption globally. The paper uses the IT strategic grid to understand the current and future potential benefits of big data for different business sectors. The results of the study suggest that there is no one-size-fits-all to big data adoption potential benefits in organizations.", "title": "" }, { "docid": "636851f2fc41fbeb488d27c813d175dc", "text": "We propose DropMax, a stochastic version of softmax classifier which at each iteration drops non-target classes according to dropout probabilities adaptively decided for each instance. Specifically, we overlay binary masking variables over class output probabilities, which are input-adaptively learned via variational inference. This stochastic regularization has an effect of building an ensemble classifier out of exponentially many classifiers with different decision boundaries. Moreover, the learning of dropout rates for non-target classes on each instance allows the classifier to focus more on classification against the most confusing classes. We validate our model on multiple public datasets for classification, on which it obtains significantly improved accuracy over the regular softmax classifier and other baselines. Further analysis of the learned dropout probabilities shows that our model indeed selects confusing classes more often when it performs classification.", "title": "" }, { "docid": "6cfdad2bb361713616dd2971026758a7", "text": "We consider the problem of controlling a system with unknown, stochastic dynamics to achieve a complex, time-sensitive task. 
An example of this problem is controlling a noisy aerial vehicle with partially known dynamics to visit a pre-specified set of regions in any order while avoiding hazardous areas. In particular, we are interested in tasks which can be described by signal temporal logic (STL) specifications. STL is a rich logic that can be used to describe tasks involving bounds on physical parameters, continuous time bounds, and logical relationships over time and states. STL is equipped with a continuous measure called the robustness degree that measures how strongly a given sample path exhibits an STL property [4, 3]. This measure enables the use of continuous optimization problems to solve learning [7, 6] or formal synthesis problems [9] involving STL.", "title": "" }, { "docid": "a58130841813814dacd7330d04efe735", "text": "Under-reporting of food intake is one of the fundamental obstacles preventing the collection of accurate habitual dietary intake data. The prevalence of under-reporting in large nutritional surveys ranges from 18 to 54% of the whole sample, but can be as high as 70% in particular subgroups. This wide variation between studies is partly due to different criteria used to identify under-reporters and also to non-uniformity of under-reporting across populations. The most consistent differences found are between men and women and between groups differing in body mass index. Women are more likely to under-report than men, and under-reporting is more common among overweight and obese individuals. Other associated characteristics, for which there is less consistent evidence, include age, smoking habits, level of education, social class, physical activity and dietary restraint. Determining whether under-reporting is specific to macronutrients or food is problematic, as most methods identify only low energy intakes. Studies that have attempted to measure under-reporting specific to macronutrients express nutrients as percentage of energy and have tended to find carbohydrate under-reported and protein over-reported. However, care must be taken when interpreting these results, especially when data are expressed as percentages. A logical conclusion is that food items with a negative health image (e.g. cakes, sweets, confectionery) are more likely to be under-reported, whereas those with a positive health image are more likely to be over-reported (e.g. fruits and vegetables). This also suggests that dietary fat is likely to be under-reported. However, it is necessary to distinguish between under-reporting and genuine under-eating for the duration of data collection. The key to understanding this problem, but one that has been widely neglected, concerns the processes that cause people to under-report their food intakes. The little work that has been done has simply confirmed the complexity of this issue. The importance of obtaining accurate estimates of habitual dietary intakes so as to assess health correlates of food consumption can be contrasted with the poor quality of data collected. This phenomenon should be considered a priority research area. Moreover, misreporting is not simply a nutritionist's problem, but requires a multidisciplinary approach (including psychology, sociology and physiology) to advance the understanding of under-reporting in dietary intake studies.", "title": "" }, { "docid": "80e0a6c270bb146a1a45994d27340639", "text": "BACKGROUND\nThe promotion of active and healthy ageing is becoming increasingly important as the population ages. 
Physical activity (PA) significantly reduces all-cause mortality and contributes to the prevention of many chronic illnesses. However, the proportion of people globally who are active enough to gain these health benefits is low and decreases with age. Social support (SS) is a social determinant of health that may improve PA in older adults, but the association has not been systematically reviewed. This review had three aims: 1) Systematically review and summarise studies examining the association between SS, or loneliness, and PA in older adults; 2) clarify if specific types of SS are positively associated with PA; and 3) investigate whether the association between SS and PA differs between PA domains.\n\n\nMETHODS\nQuantitative studies examining a relationship between SS, or loneliness, and PA levels in healthy, older adults over 60 were identified using MEDLINE, PSYCInfo, SportDiscus, CINAHL and PubMed, and through reference lists of included studies. Quality of these studies was rated.\n\n\nRESULTS\nThis review included 27 papers, of which 22 were cross sectional studies, three were prospective/longitudinal and two were intervention studies. Overall, the study quality was moderate. Four articles examined the relation of PA with general SS, 17 with SS specific to PA (SSPA), and six with loneliness. The results suggest that there is a positive association between SSPA and PA levels in older adults, especially when it comes from family members. No clear associations were identified between general SS, SSPA from friends, or loneliness and PA levels. When measured separately, leisure time PA (LTPA) was associated with SS in a greater percentage of studies than when a number of PA domains were measured together.\n\n\nCONCLUSIONS\nThe evidence surrounding the relationship between SS, or loneliness, and PA in older adults suggests that people with greater SS for PA are more likely to do LTPA, especially when the SS comes from family members. However, high variability in measurement methods used to assess both SS and PA in included studies made it difficult to compare studies.", "title": "" } ]
scidocsrr
647b9b99a6f33511254b9be5c427a473
Market Index and Stock Price Direction Prediction using Machine Learning Techniques: An empirical study on the KOSPI and HSI
[ { "docid": "a4dbddafcdb2b0b3f26fb5aa2e2de933", "text": "Ability to predict direction of stock/index price accurately is crucial for market dealers or investors to maximize their profits. Data mining techniques have been successfully shown to generate high forecasting accuracy of stock price movement. Nowadays, in stead of a single method, traders need to use various forecasting techniques to gain multiple signals and more information about the future of the markets. In this paper, ten different techniques of data mining are discussed and applied to predict price movement of Hang Seng index of Hong Kong stock market. The approaches include Linear discriminant analysis (LDA), Quadratic discriminant analysis (QDA), K-nearest neighbor classification, Naïve Bayes based on kernel estimation, Logit model, Tree based classification, neural network, Bayesian classification with Gaussian process, Support vector machine (SVM) and Least squares support vector machine (LS-SVM). Experimental results show that the SVM and LS-SVM generate superior predictive performances among the other models. Specifically, SVM is better than LS-SVM for in-sample prediction but LS-SVM is, in turn, better than the SVM for the out-of-sample forecasts in term of hit rate and error rate criteria.", "title": "" }, { "docid": "386cd963cf70c198b245a3251c732180", "text": "Support vector machines (SVMs) are promising methods for the prediction of -nancial timeseries because they use a risk function consisting of the empirical error and a regularized term which is derived from the structural risk minimization principle. This study applies SVM to predicting the stock price index. In addition, this study examines the feasibility of applying SVM in -nancial forecasting by comparing it with back-propagation neural networks and case-based reasoning. The experimental results show that SVM provides a promising alternative to stock market prediction. c © 2003 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "247c8cd5e076809a208849abe4dce3e5", "text": "This paper deals with the application of a novel neural network technique, support vector machine (SVM), in !nancial time series forecasting. The objective of this paper is to examine the feasibility of SVM in !nancial time series forecasting by comparing it with a multi-layer back-propagation (BP) neural network. Five real futures contracts that are collated from the Chicago Mercantile Market are used as the data sets. The experiment shows that SVM outperforms the BP neural network based on the criteria of normalized mean square error (NMSE), mean absolute error (MAE), directional symmetry (DS) and weighted directional symmetry (WDS). Since there is no structured way to choose the free parameters of SVMs, the variability in performance with respect to the free parameters is investigated in this study. Analysis of the experimental results proved that it is advantageous to apply SVMs to forecast !nancial time series. ? 2001 Elsevier Science Ltd. All rights reserved.", "title": "" } ]
[ { "docid": "2c91e6ca6cf72279ad084c4a51b27b1c", "text": "Knowing where the host lane lies is paramount to the effectiveness of many advanced driver assistance systems (ADAS), such as lane keep assist (LKA) and adaptive cruise control (ACC). This paper presents an approach for improving lane detection based on the past trajectories of vehicles. Instead of expensive high-precision map, we use the vehicle trajectory information to provide additional lane-level spatial support of the traffic scene, and combine it with the visual evidence to improve each step of the lane detection procedure, thereby overcoming typical challenges of normal urban streets. Such an approach could serve as an Add-On to enhance the performance of existing lane detection systems in terms of both accuracy and robustness. Experimental results in various typical but challenging scenarios show the effectiveness of the proposed system.", "title": "" }, { "docid": "c8ba40dd66f57f6d192a73be94440d07", "text": "PURPOSE\nWound infection after an ileostomy reversal is a common problem. To reduce wound-related complications, purse-string skin closure was introduced as an alternative to conventional linear skin closure. This study is designed to compare wound infection rates and operative outcomes between linear and purse-string skin closure after a loop ileostomy reversal.\n\n\nMETHODS\nBetween December 2002 and October 2010, a total of 48 consecutive patients undergoing a loop ileostomy reversal were enrolled. Outcomes were compared between linear skin closure (group L, n = 30) and purse string closure (group P, n = 18). The operative technique for linear skin closure consisted of an elliptical incision around the stoma, with mobilization, and anastomosis of the ileum. The rectus fascia was repaired with interrupted sutures. Skin closure was performed with vertical mattress interrupted sutures. Purse-string skin closure consisted of a circumstomal incision around the ileostomy using the same procedures as used for the ileum. Fascial closure was identical to linear closure, but the circumstomal skin incision was approximated using a purse-string subcuticular suture (2-0 Polysorb).\n\n\nRESULTS\nBetween group L and P, there were no differences of age, gender, body mass index, and American Society of Anesthesiologists (ASA) scores. Original indication for ileostomy was 23 cases of malignancy (76.7%) in group L, and 13 cases of malignancy (77.2%) in group P. The median time duration from ileostomy to reversal was 4.0 months (range, 0.6 to 55.7 months) in group L and 4.1 months (range, 2.2 to 43.9 months) in group P. The median operative time was 103 minutes (range, 45 to 260 minutes) in group L and 100 minutes (range, 30 to 185 minutes) in group P. The median hospital stay was 11 days (range, 5 to 4 days) in group L and 7 days (range, 4 to 14 days) in group P (P < 0.001). Wound infection was found in 5 cases (16.7%) in group L and in one case (5.6%) in group L (P = 0.26).\n\n\nCONCLUSION\nBased on this study, purse-string skin closure after a loop ileostomy reversal showed comparable outcomes, in terms of wound infection rates, to those of linear skin closure. Thus, purse-string skin closure could be a good alternative to the conventional linear closure.", "title": "" }, { "docid": "e3ac61e2a8fe211124446c22f7f88b69", "text": "Requirement elicitation is a critical activity in the requirement development process and it explores the requirements of stakeholders. 
The common challenges that analysts face during elicitation process are to ensure effective communication between analyst and the users. Mostly errors in the systems are due to poor communication between user and analyst. This paper proposes an improved approach for requirements elicitation using paper prototype. The paper progresses through an assessment of the new approach using student projects developed for various organizations. A case study project is explained in the paper.", "title": "" }, { "docid": "1b22c3d5bb44340fcb66a1b44b391d71", "text": "The contrast in real world scenes is often beyond what consumer cameras can capture. For these situations, High Dynamic Range (HDR) images can be generated by taking multiple exposures of the same scene. When fusing information from different images, however, the slightest change in the scene can generate artifacts which dramatically limit the potential of this solution. We present a technique capable of dealing with a large amount of movement in the scene: we find, in all the available exposures, patches consistent with a reference image previously selected from the stack. We generate the HDR image by averaging the radiance estimates of all such regions and we compensate for camera calibration errors by removing potential seams. We show that our method works even in cases when many moving objects cover large regions of the scene.", "title": "" }, { "docid": "3028de6940fb7a5af5320c506946edfc", "text": "Metaphor is ubiquitous in text, even in highly technical text. Correct inference about textual entailment requires computers to distinguish the literal and metaphorical senses of a word. Past work has treated this problem as a classical word sense disambiguation task. In this paper, we take a new approach, based on research in cognitive linguistics that views metaphor as a method for transferring knowledge from a familiar, well-understood, or concrete domain to an unfamiliar, less understood, or more abstract domain. This view leads to the hypothesis that metaphorical word usage is correlated with the degree of abstractness of the word’s context. We introduce an algorithm that uses this hypothesis to classify a word sense in a given context as either literal (denotative) or metaphorical (connotative). We evaluate this algorithm with a set of adjectivenoun phrases (e.g., in dark comedy , the adjective dark is used metaphorically; in dark hair, it is used literally) and with the TroFi (Trope Finder) Example Base of literal and nonliteral usage for fifty verbs. We achieve state-of-theart performance on both datasets.", "title": "" }, { "docid": "cf52fd01af4e01f28eeb14e0c6bce7e9", "text": "Most applications manipulate persistent data, yet traditional systems decouple data manipulation from persistence in a two-level storage model. Programming languages and system software manipulate data in one set of formats in volatile main memory (DRAM) using a load/store interface, while storage systems maintain persistence in another set of formats in non-volatile memories, such as Flash and hard disk drives in traditional systems, using a file system interface. Unfortunately, such an approach suffers from the system performance and energy overheads of locating data, moving data, and translating data between the different formats of these two levels of storage that are accessed via two vastly different interfaces. 
Yet today, new non-volatile memory (NVM) technologies show the promise of storage capacity and endurance similar to or better than Flash at latencies comparable to DRAM, making them prime candidates for providing applications a persistent single-level store with a single load/store interface to access all system data. Our key insight is that in future systems equipped with NVM, the energy consumed executing operating system and file system code to access persistent data in traditional systems becomes an increasingly large contributor to total energy. The goal of this work is to explore the design of a Persistent Memory Manager that coordinates the management of memory and storage under a single hardware unit in a single address space. Our initial simulation-based exploration shows that such a system with a persistent memory can improve energy efficiency and performance by eliminating the instructions and data movement traditionally used to perform I/O operations.", "title": "" }, { "docid": "e21c4b071723b68af1674740fbf3e993", "text": "Throughout history, cryptography has played an important role during times of war. The ability to read enemy messages can lead to invaluable knowledge that can be used to lessen casualties and secure victories. The Allied cryptographers during World War II had a major impact on the outcome of the war. The Allies’ ability to intercept and decrypt messages encrypted on the Japanese cipher machine, Purple, and the German cipher machine, Enigma, empowered the Allies with a major advantage during World War II. Without this advantage, the war may have had a different end result. 1 A Brief Introduction on Cryptography Cryptography is the art and science of secret communication [4]. It involves sending a message in such a way so that only the intended audience should be able to read the message with ease. Cryptography has affected many parts of history, including the outcome of World War II. Steganography is the earliest known form of secret communication, which involves hiding the existence of a message, not the meaning of it [4]. An example of concealing a message can be found in ancient China. A sender would use a messenger whose hair would be shaved off, then the message would be tattooed to the messenger’s head. Once the hair grew back thick enough, the existence of the message was concealed. The messenger was then free to travel to the destination to deliver the message. Once there, the messenger would shave his head again so that the message could be read by the intended recipients. This type of secret communication provides little security to a message, since if a message is found, the meaning is known immediately [4]. Consequently, a more secure system was needed to ensure the meaning of a message was not revealed to a potential eavesdropper. Cryptography, hiding the meaning of a message instead of its existence, is a more secure way of sending a message. In order to send a secret message using cryptographic techniques, one would start with the message that is to be sent, called the plaintext [5]. Before encoding, the sender and receiver agree on the algorithm, the rules by which the message is encoded, to use in order to ensure that both parties can read the message. These rules include the type of cipher that is used and", "title": "" }, { "docid": "b7d20190bdb3ef25110b58d87d7e5bf8", "text": "Field of soft robotics has been widely researched. Modularization of soft robots is one of the effort to expand the field. 
In this paper, we introduce a magnet connection for modularized soft units which were introduced in our previous research. The magnet connector was designed with off the shelf magnets. Thanks to the magnet connection, it was simpler and more intuitive than the connection method that we used in previous research. Connecting strength of the magnet connection and bending performance of a soft bending actuator assembled with the units were tested. Connecting strength and air leakage prevention of the connector was affordable in a range of actuating pneumatic pressure. We hope that this magnet connector enables modularized soft units being used as a daily item in the future.", "title": "" }, { "docid": "5e1f035df9a6f943c5632078831f5040", "text": "Animacy is a necessary property for a referent to be an agent, and thus animacy detection is useful for a variety of natural language processing tasks, including word sense disambiguation, co-reference resolution, semantic role labeling, and others. Prior work treated animacy as a word-level property, and has developed statistical classifiers to classify words as either animate or inanimate. We discuss why this approach to the problem is ill-posed, and present a new approach based on classifying the animacy of co-reference chains. We show that simple voting approaches to inferring the animacy of a chain from its constituent words perform relatively poorly, and then present a hybrid system merging supervised machine learning (ML) and a small number of handbuilt rules to compute the animacy of referring expressions and co-reference chains. This method achieves state of the art performance. The supervised ML component leverages features such as word embeddings over referring expressions, parts of speech, and grammatical and semantic roles. The rules take into consideration parts of speech and the hypernymy structure encoded in WordNet. The system achieves an F1 of 0.88 for classifying the animacy of referring expressions, which is comparable to state of the art results for classifying the animacy of words, and achieves an F1 of 0.75 for classifying the animacy of coreference chains themselves. We release our training and test dataset, which includes 142 texts (all narratives) comprising 156,154 words, 34,698 referring expressions, and 10,941 co-reference chains. We test the method on a subset of the OntoNotes dataset, showing using manual sampling that animacy classification is 90%±2% accurate for coreference chains, and 92%±1% for referring expressions. The data also contains 46 folktales, which present an interesting challenge because they often involve characters who are members of traditionally inanimate classes (e.g., stoves that walk, trees that talk). We show that our system is able to detect the animacy of these unusual referents with an F1 of 0.95.", "title": "" }, { "docid": "ba58cbfd68426359a50a5a60251e0322", "text": "Intelligent power allocation and load management systems have been playing an increasingly important role in aircrafts whose electrical network systems are getting more and more complex. Load shedding used to be the main means of aircraft power management. But the increasing number of electrical components and the emphasis of safety and human comfort call for more resilient power management. In this paper we present a novel power allocation and scheduling formulation which aims for minimum load shedding and optimal generator operational profiles. 
The problem is formulated as a mixed integer quadratic programming (MIQP) problem and solved by CPLEX optimization tool.", "title": "" }, { "docid": "bffddca72c7e9d6e5a8c760758a98de0", "text": "In this paper we present Sentimentor, a tool for sentiment analysis of Twitter data. Sentimentor utilises the naive Bayes Classifier to classify Tweets into positive, negative or objective sets. We present experimental evaluation of our dataset and classification results, our findings are not contradictory with existing work.", "title": "" }, { "docid": "0ce7465e40b3b13e5c316fb420a766d9", "text": "We have been developing “Smart Suit” as a soft and light-weight wearable power assist system. A prototype for preventing low-back injury in agricultural works and its semi-active assist mechanism have been developed in the previous study. The previous prototype succeeded to reduce about 14% of average muscle fatigues of body trunk in waist extension/flexion motion. In this paper, we describe a prototype of smart suit for supporting waist and knee joint, and its control method for preventing the displacement of the adjustable assist force mechanism in order to keep the assist efficiency.", "title": "" }, { "docid": "70294e6680ad7d662596262c4068a352", "text": "As cancer development involves pathological vessel formation, 16 angiogenesis markers were evaluated as potential ovarian cancer (OC) biomarkers. Blood samples collected from 172 patients were divided based on histopathological result: OC (n = 38), borderline ovarian tumours (n = 6), non-malignant ovarian tumours (n = 62), healthy controls (n = 50) and 16 patients were excluded. Sixteen angiogenesis markers were measured using BioPlex Pro Human Cancer Biomarker Panel 1 immunoassay. Additionally, concentrations of cancer antigen 125 (CA125) and human epididymis protein 4 (HE4) were measured in patients with adnexal masses using electrochemiluminescence immunoassay. In the comparison between OC vs. non-OC, osteopontin achieved the highest area under the curve (AUC) of 0.79 (sensitivity 69%, specificity 78%). Multimarker models based on four to six markers (basic fibroblast growth factor-FGF-basic, follistatin, hepatocyte growth factor-HGF, osteopontin, platelet-derived growth factor AB/BB-PDGF-AB/BB, leptin) demonstrated higher discriminatory ability (AUC 0.80-0.81) than a single marker (AUC 0.79). When comparing OC with benign ovarian tumours, six markers had statistically different expression (osteopontin, leptin, follistatin, PDGF-AB/BB, HGF, FGF-basic). Osteopontin was the best single angiogenesis marker (AUC 0.825, sensitivity 72%, specificity 82%). A three-marker panel consisting of osteopontin, CA125 and HE4 better discriminated the groups (AUC 0.958) than HE4 or CA125 alone (AUC 0.941 and 0.932, respectively). Osteopontin should be further investigated as a potential biomarker in OC screening and differential diagnosis of ovarian tumours. Adding osteopontin to a panel of already used biomarkers (CA125 and HE4) significantly improves differential diagnosis between malignant and benign ovarian tumours.", "title": "" }, { "docid": "46de8aa53a304c3f66247fdccbe9b39f", "text": "The effect of pH and electrochemical potential on copper uptake, xanthate adsorption and the hydrophobicity of sphalerite were studied from flotation practice point of view using electrochemical and micro-flotation techniques. 
Voltammetric studies conducted using the combination of carbon matrix composite (CMC) electrode and surface conduction (SC) electrode show that the kinetics of activation increases with decreasing activating pH. Controlling potential contact angle measurements conducted on a copper-activated SC electrode in xanthate solution with different pHs show that, xanthate adsorption occurs at acidic and alkaline pHs and renders the mineral surface hydrophobic. At near neutral pH, although xanthate adsorbs on Cu:ZnS, the mineral surface is hydrophilic. Microflotation tests confirm this finding. Cleaning reagent was used to improve the flotation response of sphalerite at near neutral pH.", "title": "" }, { "docid": "459a3bc8f54b8f7ece09d5800af7c37b", "text": "This material is brought to you by the Journals at AIS Electronic Library (AISeL). It has been accepted for inclusion in Communications of the Association for Information Systems by an authorized administrator of AIS Electronic Library (AISeL). For more information, please contact elibrary@aisnet.org. As companies are increasingly exposed to information security threats, decision makers are permanently forced to pay attention to security issues. Information security risk management provides an approach for measuring the security through risk assessment, risk mitigation, and risk evaluation. Although a variety of approaches have been proposed, decision makers lack well-founded techniques that (1) show them what they are getting for their investment, (2) show them if their investment is efficient, and (3) do not demand in-depth knowledge of the IT security domain. This article defines a methodology for management decision makers that effectively addresses these problems. This work involves the conception, design, and implementation of the methodology into a software solution. The results from two qualitative case studies show the advantages of this methodology in comparison to established methodologies.", "title": "" }, { "docid": "de408de1915d43c4db35702b403d0602", "text": "real-time population health assessment and monitoring D. L. Buckeridge M. Izadi A. Shaban-Nejad L. Mondor C. Jauvin L. Dubé Y. Jang R. Tamblyn The fragmented nature of population health information is a barrier to public health practice. Despite repeated demands by policymakers, administrators, and practitioners to develop information systems that provide a coherent view of population health status, there has been limited progress toward developing such an infrastructure. We are creating an informatics platform for describing and monitoring the health status of a defined population by integrating multiple clinical and administrative data sources. This infrastructure, which involves a population health record, is designed to enable development of detailed portraits of population health, facilitate monitoring of population health indicators, enable evaluation of interventions, and provide clinicians and patients with population context to assist diagnostic and therapeutic decision-making. In addition to supporting public health professionals, clinicians, and the public, we are designing the infrastructure to provide a platform for public health informatics research. 
This early report presents the requirements and architecture for the infrastructure and describes the initial implementation of the population health record, focusing on indicators of chronic diseases related to obesity.", "title": "" }, { "docid": "81173801bcecfd51e828337d2613dcba", "text": "There is increasing awareness of the large degree of crosslinguistic diversity involved in the structural realisation of information packaging (or information structure). Whereas English and many Germanic languages primarily exploit intonation for informational purposes, in other languages, like Catalan, syntax plays the primary role in the realisation of information packaging and intonation is reduced to a secondary role. In yet another group of languages the primary structural correlate is morphology. This paper provides a contrastive analysis of the structural properties of information packaging in a number of languages. It also contains a discussion of some basic issues concerning information packaging and identifies a set of information-packaging primitives that are applied to the crosslinguistic facts.", "title": "" }, { "docid": "fdfbcacd5a31038ecc025315c7483b5a", "text": "Most work on natural language question answering today focuses on answer selection: given a candidate list of sentences, determine which contains the answer. Although important, answer selection is only one stage in a standard end-to-end question answering pipeline. This paper explores the effectiveness of convolutional neural networks (CNNs) for answer selection in an end-to-end context using the standard TrecQA dataset. We observe that a simple idf-weighted word overlap algorithm forms a very strong baseline, and that despite substantial efforts by the community in applying deep learning to tackle answer selection, the gains are modest at best on this dataset. Furthermore, it is unclear if a CNN is more effective than the baseline in an end-to-end context based on standard retrieval metrics. To further explore this finding, we conducted a manual user evaluation, which confirms that answers from the CNN are detectably better than those from idf-weighted word overlap. This result suggests that users are sensitive to relatively small differences in answer selection quality.", "title": "" }, { "docid": "ecf56a68fbd1df54b83251b9dfc6bf9f", "text": "All our lives, we interact with the space around us, whether we are finding our way to a remote cabana in an exotic tropical isle or reaching for a ripe mango on the tree beside the cabana or finding a comfortable position in the hammock to snack after the journey. Each of these natural situations is experienced differently, and as a consequence, each is conceptualized differently. Our knowledge of space, unlike geometry or physical measurements of space, is constructed out of the things in space, not space itself. Mental spaces are schematized, eliminating detail and simplifying features around a framework consisting of elements and the relations among them. Our research suggests that which elements and spatial relations are included and how they are schematized varies with the space in ways that reflect our experience in the space. The space of navigation is too large to be seen from a single place (short of flying over it, but that is a different experience). To find our way in a large environment requires putting together information from different views or different sources. For the most part, the space of navigation is conceptualized as a two-dimensional plane, like a map. 
Maps, too, are schematized, yet they differ in significant ways from mental representations of space. The space around the body stands in contrast to the space of navigation. It can be seen from a single place, given rotation in place. It is the space of immediate action, our own or the things around us. It is also conceptualized schematically, but in three dimensions. Finally, there is the space of our own bodies. This space is the space of our own actions and our own sensations, experienced from the inside as well as the outside. It is schematized in terms of our limbs. Knowledge of these three spaces, that is, knowledge of the relative locations of the places in navigation space that are critical to our lives, knowledge of the space we are currently interacting with, and knowledge of the space of our bodies, is essential to finding our way in the world, to fulfilling our needs, and to avoiding danger, in short, necessary to survival.", "title": "" }, { "docid": "33b129cb569c979c81c0cb1c0a5b9594", "text": "During animal development, accurate control of tissue specification and growth are critical to generate organisms of reproducible shape and size. The eye-antennal disc epithelium of Drosophila is a powerful model system to identify the signaling pathway and transcription factors that mediate and coordinate these processes. We show here that the Yorkie (Yki) pathway plays a major role in tissue specification within the developing fly eye disc epithelium at a time when organ primordia and regional identity domains are specified. RNAi-mediated inactivation of Yki, or its partner Scalloped (Sd), or increased activity of the upstream negative regulators of Yki cause a dramatic reorganization of the eye disc fate map leading to specification of the entire disc epithelium into retina. On the contrary, constitutive expression of Yki suppresses eye formation in a Sd-dependent fashion. We also show that knockdown of the transcription factor Homothorax (Hth), known to partner Yki in some developmental contexts, also induces an ectopic retina domain, that Yki and Scalloped regulate Hth expression, and that the gain-of-function activity of Yki is partially dependent on Hth. Our results support a critical role for Yki- and its partners Sd and Hth--in shaping the fate map of the eye epithelium independently of its universal role as a regulator of proliferation and survival.", "title": "" } ]
scidocsrr
3a411a79274b079b2646a0bba6249c86
Deep Abstract Q-Networks
[ { "docid": "28ee32149227e4a26bea1ea0d5c56d8c", "text": "We consider an agent’s uncertainty about its environment and the problem of generalizing this uncertainty across states. Specifically, we focus on the problem of exploration in non-tabular reinforcement learning. Drawing inspiration from the intrinsic motivation literature, we use density models to measure uncertainty, and propose a novel algorithm for deriving a pseudo-count from an arbitrary density model. This technique enables us to generalize count-based exploration algorithms to the non-tabular case. We apply our ideas to Atari 2600 games, providing sensible pseudo-counts from raw pixels. We transform these pseudo-counts into exploration bonuses and obtain significantly improved exploration in a number of hard games, including the infamously difficult MONTEZUMA’S REVENGE.", "title": "" }, { "docid": "bf4594673a4e450b005096401e771cd5", "text": "The PixelCNN model used in this paper is a lightweight variant of the Gated PixelCNN introduced in (van den Oord et al., 2016a). It consists of a 7 × 7 masked convolution, followed by two residual blocks with 1×1 masked convolutions with 16 feature planes, and another 1×1 masked convolution producing 64 features planes, which are mapped by a final masked convolution to the output logits. Inputs are 42 × 42 greyscale images, with pixel values quantized to 8 bins.", "title": "" } ]
[ { "docid": "bfe9b8e84da087cfd3a3d8ece6dc9b9d", "text": "Microblog ranking is a hot research topic in recent years. Most of the related works apply TF-IDF metric for calculating content similarity while neglecting their semantic similarity. And most existing search engines which retrieve the microblog list by string matching the search keywords is not competent to provide a reliable list for users when dealing with polysemy and synonym. Besides, treating all the users with same authority for all topics is intuitively not ideal. In this paper, a comprehensive strategy for microblog ranking is proposed. First, we extend the conventional TF-IDF based content similarity with exploiting knowledge from WordNet. Then, we further incorporate a new feature for microblog ranking that is the topical relation between search keyword and its retrieval. Author topical authority is also incorporated into the ranking framework as an important feature for microblog ranking. Gradient Boosting Decision Tree(GBDT), then is employed to train the ranking model with multiple features involved. We conduct thorough experiments on a large-scale real-world Twitter dataset and demonstrate that our proposed approach outperform a number of existing approaches in discovering higher quality and more related microblogs.", "title": "" }, { "docid": "dcacbed90f45b76e9d40c427e16e89d6", "text": "High torque density and low torque ripple are crucial for traction applications, which allow electrified powertrains to perform properly during start-up, acceleration, and cruising. High-quality anisotropic magnetic materials such as cold-rolled grain-oriented electrical steels can be used for achieving higher efficiency, torque density, and compactness in synchronous reluctance motors equipped with transverse laminated rotors. However, the rotor cylindrical geometry makes utilization of these materials with pole numbers higher than two more difficult. From a reduced torque ripple viewpoint, particular attention to the rotor slot pitch angle design can lead to improvements. This paper presents an innovative rotor lamination design and assembly using cold-rolled grain-oriented electrical steel to achieve higher torque density along with an algorithm for rotor slot pitch angle design for reduced torque ripple. The design methods and prototyping process are discussed, finite-element analyses and experimental examinations are carried out, and the results are compared to verify and validate the proposed methods.", "title": "" }, { "docid": "084ceedc5a45b427503f776a5c9fea68", "text": "Although the worldwide incidence of infant botulism is rare, the majority of cases are diagnosed in the United States. An infant can acquire botulism by ingesting Clostridium botulinum spores, which are found in soil or honey products. The spores germinate into bacteria that colonize the bowel and synthesize toxin. As the toxin is absorbed, it irreversibly binds to acetylcholine receptors on motor nerve terminals at neuromuscular junctions. The infant with botulism becomes progressively weak, hypotonic and hyporeflexic, showing bulbar and spinal nerve abnormalities. Presenting symptoms include constipation, lethargy, a weak cry, poor feeding and dehydration. A high index of suspicion is important for the diagnosis and prompt treatment of infant botulism, because this disease can quickly progress to respiratory failure. Diagnosis is confirmed by isolating the organism or toxin in the stool and finding a classic electromyogram pattern. 
Treatment consists of nutritional and respiratory support until new motor endplates are regenerated, which results in spontaneous recovery. Neurologic sequelae are seldom seen. Some children require outpatient tube feeding and may have persistent hypotonia.", "title": "" }, { "docid": "8e4eb520c80dfa8d39c69b1273ea89c8", "text": "This paper examines the potential impact of automatic meter reading (AMR) on short-term load forecasting for a residential customer. Real-time measurement data from customers' smart meters provided by a utility company is modeled as the sum of a deterministic component and a Gaussian noise signal. The shaping filter for the Gaussian noise is calculated using spectral analysis. Kalman filtering is then used for load prediction. The accuracy of the proposed method is evaluated for different sampling periods and planning horizons. The results show that the availability of more real-time measurement data improves the accuracy of the load forecast significantly. However, the improved prediction accuracy can come at a high computational cost. Our results qualitatively demonstrate that achieving the desired prediction accuracy while avoiding a high computational load requires limiting the volume of data used for prediction. Consequently, the measurement sampling rate must be carefully selected as a compromise between these two conflicting requirements.", "title": "" }, { "docid": "325796828b9d25d50eb69f62d9eabdbb", "text": "We present a new algorithm to reduce the space complexity of heuristic search. It is most effective for problem spaces that grow polynomially with problem size, but contain large numbers of short cycles. For example, the problem of finding a lowest-cost corner-to-corner path in a d-dimensional grid has application to gene sequence alignment in computational biology. The main idea is to perform a bidirectional search, but saving only the Open lists and not the Closed lists. Once the search completes, we have one node on an optimal path, but don't have the solution path itself. The path is then reconstructed by recursively applying the same algorithm between the initial node and the intermediate node, and also between the intermediate node and the goal node. If n is the length of the grid in each dimension, and d is the number of dimensions, this algorithm reduces the memory requirement from to The time complexity only increases by a constant factor of in two dimensions, and 1.8 in three dimensions.", "title": "" }, { "docid": "d399e142488766759abf607defd848f0", "text": "The high penetration of cell phones in today's global environment offers a wide range of promising mobile marketing activities, including mobile viral marketing campaigns. However, the success of these campaigns, which remains unexplored, depends on the consumers' willingness to actively forward the advertisements that they receive to acquaintances, e.g., to make mobile referrals. Therefore, it is important to identify and understand the factors that influence consumer referral behavior via mobile devices. The authors analyze a three-stage model of consumer referral behavior via mobile devices in a field study of a firm-created mobile viral marketing campaign. The findings suggest that consumers who place high importance on the purposive value and entertainment value of a message are likely to enter the interest and referral stages. 
Accounting for consumers' egocentric social networks, we find that tie strength has a negative influence on the reading and decision to refer stages and that degree centrality has no influence on the decision-making process. © 2013 Direct Marketing Educational Foundation, Inc. Published by Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "03e267aeeef5c59aab348775d264afce", "text": "Visual relations, such as person ride bike and bike next to car, offer a comprehensive scene understanding of an image, and have already shown their great utility in connecting computer vision and natural language. However, due to the challenging combinatorial complexity of modeling subject-predicate-object relation triplets, very little work has been done to localize and predict visual relations. Inspired by the recent advances in relational representation learning of knowledge bases and convolutional object detection networks, we propose a Visual Translation Embedding network (VTransE) for visual relation detection. VTransE places objects in a low-dimensional relation space where a relation can be modeled as a simple vector translation, i.e., subject + predicate ≈ object. We propose a novel feature extraction layer that enables object-relation knowledge transfer in a fully-convolutional fashion that supports training and inference in a single forward/backward pass. To the best of our knowledge, VTransE is the first end-to-end relation detection network. We demonstrate the effectiveness of VTransE over other state-of-the-art methods on two large-scale datasets: Visual Relationship and Visual Genome. Note that even though VTransE is a purely visual model, it is still competitive to the Lu's multi-modal model with language priors [27].", "title": "" }, { "docid": "ee4ebafe1b40e3d2020b2fb9a4b881f6", "text": "Probing the lowest energy configuration of a complex system by quantum annealing was recently found to be more effective than its classical, thermal counterpart. By comparing classical and quantum Monte Carlo annealing protocols on the two-dimensional random Ising model (a prototype spin glass), we confirm the superiority of quantum annealing relative to classical annealing. We also propose a theory of quantum annealing based on a cascade of Landau-Zener tunneling events. For both classical and quantum annealing, the residual energy after annealing is inversely proportional to a power of the logarithm of the annealing time, but the quantum case has a larger power that makes it faster.", "title": "" }, { "docid": "647f8e9ece2c7663e2b8767f0694fec5", "text": "Modern retrieval systems are often driven by an underlying machine learning model. The goal of such systems is to identify and possibly rank the few most relevant items for a given query or context. Thus, such systems are typically evaluated using a ranking-based performance metric such as the area under the precision-recall curve, the Fβ score, precision at fixed recall, etc. Obviously, it is desirable to train such systems to optimize the metric of interest. In practice, due to the scalability limitations of existing approaches for optimizing such objectives, large-scale retrieval systems are instead trained to maximize classification accuracy, in the hope that performance as measured via the true objective will also be favorable. In this work we present a unified framework that, using straightforward building block bounds, allows for highly scalable optimization of a wide range of ranking-based objectives. 
We demonstrate the advantage of our approach on several real-life retrieval problems that are significantly larger than those considered in the literature, while achieving substantial improvement in performance over the accuracy-objective baseline. Proceedings of the 20th International Conference on Artificial Intelligence and Statistics (AISTATS) 2017, Fort Lauderdale, Florida, USA. JMLR: W&CP volume 54. Copyright 2017 by the author(s).", "title": "" }, { "docid": "7b1e2439e3be5110f8634394f266da7c", "text": "In the absence of cues for absolute depth measurements as binocular disparity, motion, or defocus, the absolute distance between the observer and a scene cannot be measured. The interpretation of shading, edges, and junctions may provide a 3D model of the scene but it will not provide information about the actual “scale” of the space. One possible source of information for absolute depth estimation is the image size of known objects. However, object recognition, under unconstrained conditions, remains difficult and unreliable for current computational approaches. Here, we propose a source of information for absolute depth estimation based on the whole scene structure that does not rely on specific objects. We demonstrate that, by recognizing the properties of the structures present in the image, we can infer the scale of the scene and, therefore, its absolute mean depth. We illustrate the interest in computing the mean depth of the scene with application to scene recognition and object detection.", "title": "" }, { "docid": "13452d0ceb4dfd059f1b48dba6bf5468", "text": "This paper presents an extension to the technology acceptance model (TAM) and empirically examines it in an enterprise resource planning (ERP) implementation environment. The study evaluated the impact of one belief construct (shared beliefs in the benefits of a technology) and two widely recognized technology implementation success factors (training and communication) on the perceived usefulness and perceived ease of use during technology implementation. Shared beliefs refer to the beliefs that organizational participants share with their peers and superiors on the benefits of the ERP system. Using data gathered from the implementation of an ERP system, we showed that both training and project communication influence the shared beliefs that users form about the benefits of the technology and that the shared beliefs influence the perceived usefulness and ease of use of the technology. Thus, we provided empirical and theoretical support for the use of managerial interventions, such as training and communication, to influence the acceptance of technology, since perceived usefulness and ease of use contribute to behavioral intention to use the technology. © 2003 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "a2cdcd9400c2c6663b3672e9cf8d41f6", "text": "The use of immersive virtual reality (VR) systems in museums is a recent trend, as the development of new interactive technologies has inevitably impacted the more traditional sciences and arts. This is more evident in the case of novel interactive technologies that fascinate the broad public, as has always been the case with virtual reality. The increasing development of VR technologies has matured enough to expand research from the military and scientific visualization realm into more multidisciplinary areas, such as education, art and entertainment. 
This paper analyzes the interactive virtual environments developed at an institution of informal education and discusses the issues involved in developing immersive interactive virtual archaeology projects for the broad public.", "title": "" }, { "docid": "9e8d4b422a7ed05ee338fcd426dab723", "text": "Entity typing is an essential task for constructing a knowledge base. However, many non-English knowledge bases fail to type their entities due to the absence of a reasonable local hierarchical taxonomy. Since constructing a widely accepted taxonomy is a hard problem, we propose to type these non-English entities with some widely accepted taxonomies in English, such as DBpedia, Yago and Freebase. We define this problem as cross-lingual type inference. In this paper, we present CUTE to type Chinese entities with DBpedia types. First we exploit the cross-lingual entity linking between Chinese and English entities to construct the training data. Then we propose a multi-label hierarchical classification algorithm to type these Chinese entities. Experimental results show the effectiveness and efficiency of our method.", "title": "" }, { "docid": "834a5cb9f2948443fbb48f274e02ca9c", "text": "The Carnegie Mellon Communicator is a telephone-based dialog system that supports planning in a travel domain. The implementation of such a system requires two complimentary components, an architecture capable of managing interaction and the task, as well as a knowledge base that captures the speech, language and task characteristics specific to the domain. Given a suitable architecture, the principal effort in development in taken up in the acquisition and processing of a domain knowledge base. This paper describes a variety of techniques we have applied to modeling in acoustic, language, task, generation and synthesis components of the system.", "title": "" }, { "docid": "7d1faee4929d60d952cc8c2c12fa16d3", "text": "We recently showed that improved perceptual performance on a visual motion direction–discrimination task corresponds to changes in how an unmodified sensory representation in the brain is interpreted to form a decision that guides behavior. Here we found that these changes can be accounted for using a reinforcement-learning rule to shape functional connectivity between the sensory and decision neurons. We modeled performance on the basis of the readout of simulated responses of direction-selective sensory neurons in the middle temporal area (MT) of monkey cortex. A reward prediction error guided changes in connections between these sensory neurons and the decision process, first establishing the association between motion direction and response direction, and then gradually improving perceptual sensitivity by selectively strengthening the connections from the most sensitive neurons in the sensory population. The results suggest a common, feedback-driven mechanism for some forms of associative and perceptual learning.", "title": "" }, { "docid": "501c9fa6829242962f182aff2dbbd6f8", "text": "We present an instance segmentation scheme based on pixel affinity information, which is the relationship of two pixels belonging to a same instance. In our scheme, we use two neural networks with similar structure. One is to predict pixel level semantic score and the other is designed to derive pixel affinities. Regarding pixels as the vertexes and affinities as edges, we then propose a simple yet effective graph merge algorithm to cluster pixels into instances. 
Experimental results show that our scheme can generate fine grained instance mask. With Cityscapes training data, the proposed scheme achieves 27.3 AP on test set.", "title": "" }, { "docid": "191058192146249d5cf9493eb41a37c2", "text": "Cryptocurrency networks have given birth to a diversity of start-ups and attracted a huge influx of venture capital to invest in these start-ups for creating and capturing value within and between such networks. Synthesizing strategic management and information systems (IS) literature, this study advances a unified theoretical framework for identifying and investigating how cryptocurrency companies configure value through digital business models. This framework is then employed, via multiple case studies, to examine digital business models of companies within the bitcoin network. Findings suggest that companies within the bitcoin network exhibits six generic digital business models. These six digital business models are in turn driven by three modes of value configurations with their own distinct logic for value creation and mechanisms for value capturing. A key finding of this study is that value-chain and value-network driven business models commercialize their products and services for each value unit transfer, whereas commercialization for value-shop driven business models is realized through the subsidization of direct users by revenue generating entities. This study contributes to extant literature on value configurations and digital businesses models within the emerging and increasingly pervasive domain of cryptocurrency networks.", "title": "" }, { "docid": "645a92cd2f789f8708a522a35100611b", "text": "INTRODUCTION\nMalignant Narcissism has been recognized as a serious condition but it has been largely ignored in psychiatric literature and research. In order to bring this subject to the attention of mental health professionals, this paper presents a contemporary synthesis of the biopsychosocial dynamics and recommendations for treatment of Malignant Narcissism.\n\n\nMETHODS\nWe reviewed the literature on Malignant Narcissism which was sparse. It was first described in psychiatry by Otto Kernberg in 1984. There have been few contributions to the literature since that time. We discovered that the syndrome of Malignant Narcissism was expressed in fairy tales as a part of the collective unconscious long before it was recognized by psychiatry. We searched for prominent malignant narcissists in recent history. We reviewed the literature on treatment and developed categories for family assessment.\n\n\nRESULTS\nMalignant Narcissism is described as a core Narcissistic personality disorder, antisocial behavior, ego-syntonic sadism, and a paranoid orientation. There is no structured interview or self-report measure that identifies Malignant Narcissism and this interferes with research, clinical diagnosis and treatment. This paper presents a synthesis of current knowledge about Malignant Narcissism and proposes a foundation for treatment.\n\n\nCONCLUSIONS\nMalignant Narcissism is a severe personality disorder that has devastating consequences for the family and society. It requires attention within the discipline of psychiatry and the social science community. 
We recommend treatment in a therapeutic community and a program of prevention that is focused on psychoeducation, not only in mental health professionals, but in the wider social community.", "title": "" }, { "docid": "14fc402353ddc5ef3ebb1a28682b44ad", "text": "Service Oriented Architecture (SOA) is an architectural style that supports service orientation. In reality, SOA is much more than architecture. SOA adoption is prerequisite for organization to excel their service deliveries, as the delivery platforms are shifting to mobile, cloud and social media. A maturity model is a tool to accelerate enterprise SOA adoption, however it depends on how it should be applied. This paper presents a literature review of existing maturity models and proposes 5 major aspects that a maturity model has to address to improve SOA practices of an enterprise. A maturity model can be used as: (i) a roadmap for SOA adoption, (ii) a reference guide for SOA adoption, (iii) a tool to gauge maturity of process execution, (iv) a tool to measure the effectiveness of SOA motivations, and (v) a review tool for governance framework. This paper also sheds light on how SOA maturity assessment can be modeled. A model for SOA process execution maturity and perspective maturity assessment has been proposed along with a framework to include SOA scope of adoption.", "title": "" }, { "docid": "f383dd5dd7210105406c2da80cf72f89", "text": "We present a new, \"greedy\", channel-router that is quick, simple, and highly effective. It always succeeds, usually using no more than one track more than required by channel density. (It may be forced in rare cases to make a few connections \"off the end\" of the channel, in order to succeed.) It assumes that all pins and wiring lie on a common grid, and that vertical wires are on one layer, horizontal on another. The greedy router wires up the channel in a left-to-right, column-by-column manner, wiring each column completely before starting the next. Within each column the router tries to maximize the utility of the wiring produced, using simple, \"greedy\" heuristics. It may place a net on more than one track for a few columns, and \"collapse\" the net to a single track later on, using a vertical jog. It may also use a jog to move a net to a track closer to its pin in some future column. The router may occasionally add a new track to the channel, to avoid \"getting stuck\".", "title": "" } ]
scidocsrr
11662c77ce61b9476c57a5094b6ed761
Recurrent Attentional Reinforcement Learning for Multi-Label Image Recognition
[ { "docid": "2f20bca0134eb1bd9d65c4791f94ddcc", "text": "We present an attention-based model for recognizing multiple objects in images. The proposed model is a deep recurrent neural network trained with reinforcement learning to attend to the most relevant regions of the input image. We show that the model learns to both localize and recognize multiple objects despite being given only class labels during training. We evaluate the model on the challenging task of transcribing house number sequences from Google Street View images and show that it is both more accurate than the state-of-the-art convolutional networks and uses fewer parameters and less computation.", "title": "" } ]
[ { "docid": "3a0da20211697fbcce3493aff795556c", "text": "OBJECTIVES\nWe studied whether park size, number of features in the park, and distance to a park from participants' homes were related to a park being used for physical activity.\n\n\nMETHODS\nWe collected observational data on 28 specific features from 33 parks. Adult residents in surrounding areas (n=380) completed 7-day physical activity logs that included the location of their activities. We used logistic regression to examine the relative importance of park size, features, and distance to participants' homes in predicting whether a park was used for physical activity, with control for perceived neighborhood safety and aesthetics.\n\n\nRESULTS\nParks with more features were more likely to be used for physical activity; size and distance were not significant predictors. Park facilities were more important than were park amenities. Of the park facilities, trails had the strongest relationship with park use for physical activity.\n\n\nCONCLUSIONS\nSpecific park features may have significant implications for park-based physical activity. Future research should explore these factors in diverse neighborhoods and diverse parks among both younger and older populations.", "title": "" }, { "docid": "466c537fca72aaa1e9cda2dc22c0f504", "text": "This paper presents a single-phase grid-connected photovoltaic (PV) module-integrated converter (MIC) based on cascaded quasi-Z-source inverters (qZSI). In this system, each qZSI module serves as an MIC and is connected to one PV panel. Due to the cascaded structure and qZSI topology, the proposed MIC features low-voltage gain requirement, single-stage energy conversion, enhanced reliability, and good output power quality. Furthermore, the enhancement mode gallium nitride field-effect transistors (eGaN FETs) are employed in the qZSI module for efficiency improvement at higher switching frequency. It is found that the qZSI is very suitable for the application of eGaN FETs because of the shoot-through capability. Optimized module design is developed based on the derived qZSI ac equivalent model and power loss analytical model to achieve high efficiency and high power density. A design example of qZSI module is presented for a 250-W PV panel with 25-50-V output voltage. The simulation and experimental results prove the validity of the analytical models. The final module prototype design achieves up to 98.06% efficiency with 100-kHz switching frequency.", "title": "" }, { "docid": "d723ffedb1d346742004b0585ee93f0b", "text": "In today's world, apart from the fact that systems and products are becoming increasingly complex, electronic technology is rapidly progressing in both miniaturization and higher complexity. Consequently, these facts are accompanied with new failures modes. Standard reliability tools cope to tackle all of the new emerging challenges. New technology and designs require adapted approaches to ensure that the products cost-effectively and timely meet desired reliability goals. The Physics-of-Failure (P-o-F) represents one approach to reliability assessment based on modeling and simulation that relies on understanding the physical processes contributing to the appearance of the critical failures. This paper outlines the classical approaches to reliability engineering and discusses advantages of the Physics-of-Failure approach. 
It also stresses that the P-o-F approach should be probabilistic in order to include inevitable variations of variables involved in processes contributing to the occurrence of failures in the analysis.", "title": "" }, { "docid": "52ef7357fa379b7eede3c4ceee448a81", "text": "(Note: This is a completely revised version of the article that was originally published in ACM Crossroads, Volume 13, Issue 4. Revisions were needed because of major changes to the Natural Language Toolkit project. The code in this version of the article will always conform to the very latest version of NLTK (v2.0b9 as of November 2010). Although the code is always tested, it is possible that a bug or two may have been introduced in the code during the course of this revision. If you find any, please report them to the author. If you are still using version 0.7 of the toolkit for some reason, please refer to http://www.acm.org/crossroads/xrds13-4/natural_language.html).", "title": "" }, { "docid": "697ac701dca9f2c4343d0de3aadd0fa1", "text": "We propose a two phase time dependent vehicle routing and scheduling optimization model that identifies the safest routes, as a substitute for the classical objectives given in the literature such as shortest distance or travel time, through (1) avoiding recurring congestions, and (2) selecting routes that have a lower probability of crash occurrences and non-recurring congestion caused by those crashes. In the first phase, we solve a mixed-integer programming model which takes the dynamic speed variations into account on a graph of roadway networks according to the time of day, and identify the routing of a fleet and sequence of nodes on the safest feasible paths. Second phase considers each route as an independent transit path (fixed route with fixed node sequences), and tries to avoid congestion by rescheduling the departure times of each vehicle from each node, and by adjusting the sub-optimal speed on each arc. A modified simulated annealing (SA) algorithm is formulated to solve both complex models iteratively, which is found to be capable of providing solutions in a considerably short amount of time. In this paper, speed (and travel time) variation with respect to the hour of the day is calculated via queuing models (i.e., M/G/1) to capture the stochasticity of travel times more accurately unlike the most researches in this area, which assume the speed on arcs to be a fixed value or a time dependent step function. First, we demonstrate the accurate performance of M/G/1 in estimation and predicting speeds and travel times for those arcs without readily available speed data. Crash data, on the other hand, is obtained for each arc. Next, 24 scenarios, which correspond to each hour of a day, are developed, and are fed to the proposed solution algorithms. This is followed by evaluating the routing schema for each scenario where the following objective functions are utilized: (1) the minimization of the traffic delay (maximum congestion avoidance), and (2) the minimization of the traffic crash risk, and (3) the combination of two objectives. Using these objectives, we identify the safest routes, as a substitute for the classical objectives given in the literature such as shortest distance or travel time, through (1) avoiding recurring congestions, and (2) selecting routes that have a lower probability of crash occurrences and non-recurring congestion caused by those crashes. 
This also allows us to discuss the feasibility and applicability of our model. Finally, the proposed methodology is applied on a benchmark network as well as a small real-world case study application for the City of Miami, Florida. Results suggest that in some instances, both the travelled distance and travel time increase in return for a safer route, however, the advantages of safer route can outweigh this slight increase.", "title": "" }, { "docid": "1310fd212958fa5b18ff67efe7cade63", "text": "In this paper, a new design method of a tunable oscillator using a suspended-stripline resonator is presented. The negative resistance of an FET mounted on microstrip line (MSL) is combined with a high Q suspended-stripline (SSL) resonator to produce a tunable oscillator with good phase noise. The new MSL-to-SSL transition facilitates easy connection between the MSL-based circuits and the SSL module. The proposed oscillator is also frequency-tunable using a tuner located on the top of the SSL housing. The measured phase noise of the implemented oscillator at 5.148 GHz is -104.34 dBc@100 kHz and -133.21 dBc@1 MHz with 125.7 MHz of frequency tuning.", "title": "" }, { "docid": "bc4fa6a77bf0ea02456947696dc6dca3", "text": "We propose a constraint programming approach for the optimization of inventory routing in the liquefied natural gas industry. We present two constraint programming models that rely on a disjunctive scheduling representation of the problem. We also propose an iterative search heuristic to generate good feasible solutions for these models. Computational results on a set of largescale test instances demonstrate that our approach can find better solutions than existing approaches based on mixed integer programming, while being 4 to 10 times faster on average.", "title": "" }, { "docid": "76a2c62999a256076cdff0fffefca1eb", "text": "Learning a second language is challenging. Becoming fluent requires learning contextual information about how language should be used as well as word meanings and grammar. The majority of existing language learning applications provide only thin context around content. In this paper, we present Crystallize, a collaborative 3D game that provides rich context along with scaffolded learning and engaging gameplay mechanics. Players collaborate through joint tasks, or quests. We present a user study with 42 participants that examined the impact of low and high levels of task interdependence on language learning experience and outcomes. We found that requiring players to help each other led to improved collaborative partner interactions, learning outcomes, and gameplay. A detailed analysis of the chat-logs further revealed that changes in task interdependence affected learning behaviors.", "title": "" }, { "docid": "8f9309ebfc87de5eb7cf715c0370da54", "text": "Hyperbolic discounting of future outcomes is widely observed to underlie choice behavior in animals. Additionally, recent studies (Kobayashi & Schultz, 2008) have reported that hyperbolic discounting is observed even in neural systems underlying choice. However, the most prevalent models of temporal discounting, such as temporal difference learning, assume that future outcomes are discounted exponentially. Exponential discounting has been preferred largely because it can be expressed recursively, whereas hyperbolic discounting has heretofore been thought not to have a recursive definition. 
In this letter, we define a learning algorithm, hyperbolically discounted temporal difference (HDTD) learning, which constitutes a recursive formulation of the hyperbolic model.", "title": "" }, { "docid": "857b9753f213d704b9d7d3b166ff9848", "text": "The aim of rehabilitation robotic area is to research on the application of robotic devices to therapeutic procedures. The goal is to achieve the best possible motor, cognitive and functional recovery for people with impairments following various diseases. Pneumatic actuators are attractive for robotic rehabilitation applications because they are lightweight, powerful, and compliant, but their control has historically been difficult, limiting their use. This article first reviews the current state-of-art in rehabilitation robotic devices with pneumatic actuation systems reporting main features and control issues of each therapeutic device. Then, a new pneumatic rehabilitation robot for proprioceptive neuromuscular facilitation therapies and for relearning daily living skills: like taking a glass, drinking, and placing object on shelves is described as a case study and compared with the current pneumatic rehabilitation devices.", "title": "" }, { "docid": "7381d61eea849ecdf74c962042d0c5ff", "text": "Unmanned aerial vehicle (UAV) synthetic aperture radar (SAR) is very important for battlefield awareness. For SAR systems mounted on a UAV, the motion errors can be considerably high due to atmospheric turbulence and aircraft properties, such as its small size, which makes motion compensation (MOCO) in UAV SAR more urgent than other SAR systems. In this paper, based on 3-D motion error analysis, a novel 3-D MOCO method is proposed. The main idea is to extract necessary motion parameters, i.e., forward velocity and displacement in line-of-sight direction, from radar raw data, based on an instantaneous Doppler rate estimate. Experimental results show that the proposed method is suitable for low- or medium-altitude UAV SAR systems equipped with a low-accuracy inertial navigation system.", "title": "" }, { "docid": "6624487fd7296588c934ad1d74bfc5ea", "text": "We report an efficient method for fabricating flexible membranes of electrospun carbon nanofiber/tin(IV) sulfide (CNF@SnS2) core/sheath fibers. CNF@SnS2 is a new photocatalytic material that can be used to treat wastewater containing high concentrations of hexavalent chromium (Cr(VI)). The hierarchical CNF@SnS2 core/sheath membranes have a three-dimensional macroporous architecture. This provides continuous channels for the rapid diffusion of photoelectrons generated by SnS2 nanoparticles under visible light irradiation. The visible light (λ > 400 nm) driven photocatalytic properties of CNF@SnS2 are evaluated by the reduction of water-soluble Cr(VI). CNF@SnS2 exhibits high visible light-driven photocatalytic activity because of its low band gap of 2.34 eV. Moreover, CNF@SnS2 exhibits good photocatalytic stability and excellent cycling stability. Under visible light irradiation, the optimized CNF@SnS2 membranes exhibit a high rate of degradation of 250 mg/L of aqueous Cr(VI) and can completely degrade the Cr(VI) within 90 min.", "title": "" }, { "docid": "30bc96451dd979a8c08810415e4a2478", "text": "An adaptive circulator fabricated on a 130 nm CMOS is presented. Circulator has two adaptive blocks for gain and phase mismatch correction and leakage cancelation. The impedance matching circuit corrects mismatches for antenna, divider, and LNTA. The cancelation block cancels the Tx leakage. 
Measured isolation between transmitter and receiver for single tone at 2.4 GHz is 90 dB, and for a 40 MHz wide-band signal is 50dB. The circulator Rx gain is 10 dB, with NF = 4.7 dB and 5 dB insertion loss.", "title": "" }, { "docid": "40c4175be1573d9542f6f9f859fafb01", "text": "BACKGROUND\nFalls are a major threat to the health and independence of seniors. Regular physical activity (PA) can prevent 40% of all fall injuries. The challenge is to motivate and support seniors to be physically active. Persuasive systems can constitute valuable support for persons aiming at establishing and maintaining healthy habits. However, these systems need to support effective behavior change techniques (BCTs) for increasing older adults' PA and meet the senior users' requirements and preferences. Therefore, involving users as codesigners of new systems can be fruitful. Prestudies of the user's experience with similar solutions can facilitate future user-centered design of novel persuasive systems.\n\n\nOBJECTIVE\nThe aim of this study was to investigate how seniors experience using activity monitors (AMs) as support for PA in daily life. The addressed research questions are as follows: (1) What are the overall experiences of senior persons, of different age and balance function, in using wearable AMs in daily life?; (2) Which aspects did the users perceive relevant to make the measurements as meaningful and useful in the long-term perspective?; and (3) What needs and requirements did the users perceive as more relevant for the activity monitors to be useful in a long-term perspective?\n\n\nMETHODS\nThis qualitative interview study included 8 community-dwelling older adults (median age: 83 years). The participants' experiences in using two commercial AMs together with tablet-based apps for 9 days were investigated. Activity diaries during the usage and interviews after the usage were exploited to gather user experience. Comments in diaries were summarized, and interviews were analyzed by inductive content analysis.\n\n\nRESULTS\nThe users (n=8) perceived that, by using the AMs, their awareness of own PA had increased. However, the AMs' impact on the users' motivation for PA and activity behavior varied between participants. The diaries showed that self-estimated physical effort varied between participants and varied for each individual over time. Additionally, participants reported different types of accomplished activities; talking walks was most frequently reported. To be meaningful, measurements need to provide the user with a reliable receipt of whether his or her current activity behavior is sufficient for reaching an activity goal. Moreover, praise when reaching a goal was described as motivating feedback. To be useful, the devices must be easy to handle. In this study, the users perceived wearables as easy to handle, whereas tablets were perceived difficult to maneuver. Users reported in the diaries that the devices had been functional 78% (58/74) of the total test days.\n\n\nCONCLUSIONS\nActivity monitors can be valuable for supporting seniors' PA. However, the potential of the solutions for a broader group of seniors can significantly be increased. Areas of improvement include reliability, usability, and content supporting effective BCTs with respect to increasing older adults' PA.", "title": "" }, { "docid": "43ff7d61119cc7b467c58c9c2e063196", "text": "Financial engineering such as trading decision is an emerging research area and also has great commercial potentials. 
A successful stock buying/selling generally occurs near price trend turning point. Traditional technical analysis relies on some statistics (i.e. technical indicators) to predict turning point of the trend. However, these indicators can not guarantee the accuracy of prediction in chaotic domain. In this paper, we propose an intelligent financial trading system through a new approach: learn trading strategy by probabilistic model from high-level representation of time series – turning points and technical indicators. The main contributions of this paper are two-fold. First, we utilize high-level representation (turning point and technical indicators). High-level representation has several advantages such as insensitive to noise and intuitive to human being. However, it is rarely used in past research. Technical indicator is the knowledge from professional investors, which can generally characterize the market. Second, by combining high-level representation with probabilistic model, the randomness and uncertainty of chaotic system is further reduced. In this way, we achieve great results (comprehensive experiments on S&P500 components) in a chaotic domain in which the prediction is thought impossible in the past. 2006 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "5058d6002c43298442ebdf2902e6adf3", "text": "Non-contact image photoplethysmography has gained a lot of attention during the last 5 years. Starting with the work of Verkruysse et al. [1], various methods for estimation of the human pulse rate from video sequences of the face under ambient illumination have been presented. Applied on a mobile service robot aimed to motivate elderly users for physical exercises, the pulse rate can be a valuable information in order to adapt to the users conditions. For this paper, a typical processing pipeline was implemented on a mobile robot, and a detailed comparison of methods for face segmentation was conducted, which is the key factor for robust pulse rate extraction even, if the subject is moving. A benchmark data set is introduced focusing on the amount of motion of the head during the measurement.", "title": "" }, { "docid": "35258abbafac62dbfbd0be08617e95bf", "text": "Code Reuse Attacks (CRAs) recently emerged as a new class of security exploits. CRAs construct malicious programs out of small fragments (gadgets) of existing code, thus eliminating the need for code injection. Existing defenses against CRAs often incur large performance overheads or require extensive binary rewriting and other changes to the system software. In this paper, we examine a signature-based detection of CRAs, where the attack is detected by observing the behavior of programs and detecting the gadget execution patterns. We first demonstrate that naive signature-based defenses can be defeated by introducing special “delay gadgets” as part of the attack. We then show how a software-configurable signature-based approach can be designed to defend against such stealth CRAs, including the attacks that manage to use longer-length gadgets. The proposed defense (called SCRAP) can be implemented entirely in hardware using simple logic at the commit stage of the pipeline. SCRAP is realized with minimal performance cost, no changes to the software layers and no implications on binary compatibility. 
Finally, we show that SCRAP generates no false alarms on a wide range of applications.", "title": "" }, { "docid": "1472e8a0908467404c01d236d2f39c58", "text": "Millimetre wave antennas are typically used for applications like anti-collision car radar or sensory. A new and upcoming application is the use of 60 GHz antennas for high date rate point-to-point connections to serve wireless local area networks. For high gain antennas, configurations using lenses in combination with planar structures are often applied. However, single layer planar arrays might offer a more cost-efficient solution, especially if the antenna and the RF-circuitry are realised on one and the same substrate. The design of millimetre wave antennas has to cope with the severe impacts of manufacturing tolerances and losses at these frequencies. Reproducibility can become poor in such cases. The successful design and realisation of a cost-efficient 60 GHz planar patch array (8/spl times/8 elements) with high reproducibility for point-to-point connections is presented. Important design aspects are highlighted and manufacturing tolerances and losses are analysed. Measurement results of different prototypes are presented to show the reproducibility of the antenna layout.", "title": "" }, { "docid": "67509b64aaf1ead0bcba557d8cfe84bc", "text": "Base on innovation resistance theory, this research builds the model of factors affecting consumers' resistance in using online travel in Thailand. Through the questionnaires and the SEM methods, empirical analysis results show that functional barriers are even greater sources of resistance to online travel website than psychological barriers. Online experience and independent travel experience have significantly influenced on consumer innovation resistance. Social influence plays an important role in this research.", "title": "" } ]
scidocsrr
0fda572b0a651c2c09b38584515fa36e
Data-driven comparison of spatio-temporal monitoring techniques
[ { "docid": "5508603a802abb9ab0203412b396b7bc", "text": "We present an optimal algorithm for informative path planning (IPP), using a branch and bound method inspired by feature selection algorithms. The algorithm uses the monotonicity of the objective function to give an objective function-dependent speedup versus brute force search. We present results which suggest that when maximizing variance reduction in a Gaussian process model, the speedup is significant.", "title": "" }, { "docid": "2bdaaeb18db927e2140c53fcc8d4fa30", "text": "Many information gathering problems require determining the set of points, for which an unknown function takes value above or below some given threshold level. As a concrete example, in the context of environmental monitoring of Lake Zurich we would like to estimate the regions of the lake where the concentration of chlorophyll or algae is greater than some critical value, which would serve as an indicator of algal bloom phenomena. A critical factor in such applications is the high cost in terms of time, baery power, etc. that is associated with each measurement, therefore it is important to be careful about selecting “informative” locations to sample, in order to reduce the total sampling effort required. We formalize the task of level set estimation as a classification problem with sequential measurements, where the unknown function is modeled as a sample from a Gaussian process (GP). We propose LSE, an active learning algorithm that guides both sampling and classification based on GP-derived confidence bounds, and provide theoretical guarantees about its sample complexity. Furthermore, we extend LSE and its theory to two more natural seings: (1) where the threshold level is implicitly defined as a percentage of the (unknown) maximum of the target function and (2) where samples are selected in batches. Based on the laer extension we also propose a simple path planning algorithm. We evaluate the effectiveness of our proposed methods on two problems of practical interest, namely the aforementioned autonomous monitoring of algal populations in Lake Zurich and geolocating network latency.", "title": "" } ]
[ { "docid": "8d90b9fbf7af1ea36f93f88e6ce11ba2", "text": "Given its serious implications for psychological and socio-emotional health, the prevention of problem gambling among adolescents is increasingly acknowledged as an area requiring attention. The theory of planned behavior (TPB) is a well-established model of behavior change that has been studied in the development and evaluation of primary preventive interventions aimed at modifying cognitions and behavior. However, the utility of the TPB has yet to be explored as a framework for the development of adolescent problem gambling prevention initiatives. This paper first examines the existing empirical literature addressing the effectiveness of school-based primary prevention programs for adolescent gambling. Given the limitations of existing programs, we then present a conceptual framework for the integration of the TPB in the development of effective problem gambling preventive interventions. The paper describes the TPB, demonstrates how the framework has been applied to gambling behavior, and reviews the strengths and limitations of the model for the design of primary prevention initiatives targeting adolescent risk and addictive behaviors, including adolescent gambling.", "title": "" }, { "docid": "1514bae0c1b47f5aaf0bfca6a63d9ce9", "text": "The persistence of racial inequality in the U.S. labor market against a general backdrop of formal equality of opportunity is a troubling phenomenon that has significant ramifications on the design of hiring policies. In this paper, we show that current group disparate outcomes may be immovable even when hiring decisions are bound by an input-output notion of “individual fairness.” Instead, we construct a dynamic reputational model of the labor market that illustrates the reinforcing nature of asymmetric outcomes resulting from groups’ divergent accesses to resources and as a result, investment choices. To address these disparities, we adopt a dual labor market composed of a Temporary Labor Market (TLM), in which firms’ hiring strategies are constrained to ensure statistical parity of workers granted entry into the pipeline, and a Permanent Labor Market (PLM), in which firms hire top performers as desired. Individual worker reputations produce externalities for their group; the corresponding feedback loop raises the collective reputation of the initially disadvantaged group via a TLM fairness intervention that need not be permanent. We show that such a restriction on hiring practices induces an equilibrium that, under particular market conditions, Pareto-dominates those arising from strategies that statistically discriminate or employ a “group-blind” criterion. The enduring nature of equilibria that are both inequitable and Pareto suboptimal suggests that fairness interventions beyond procedural checks of hiring decisions will be of critical importance in a world where machines play a greater role in the employment process. ACM Reference Format: Lily Hu and Yiling Chen. 2018. A Short-term Intervention for Long-term Fairness in the Labor Market. In WWW 2018: The 2018 Web Conference, April 23–27, 2018, Lyon, France. ACM, New York, NY, USA, 10 pages. https: //doi.org/10.1145/3178876.3186044", "title": "" }, { "docid": "3fd685b63f92d277fb5a8e524e065277", "text": "State-of-the-art image sensors suffer from significant limitations imposed by their very principle of operation. These sensors acquire the visual information as a series of “snapshot” images, recorded at discrete points in time. 
Visual information gets time quantized at a predetermined frame rate which has no relation to the dynamics present in the scene. Furthermore, each recorded frame conveys the information from all pixels, regardless of whether this information, or a part of it, has changed since the last frame had been acquired. This acquisition method limits the temporal resolution, potentially missing important information, and leads to redundancy in the recorded image data, unnecessarily inflating data rate and volume. Biology is leading the way to a more efficient style of image acquisition. Biological vision systems are driven by events happening within the scene in view, and not, like image sensors, by artificially created timing and control signals. Translating the frameless paradigm of biological vision to artificial imaging systems implies that control over the acquisition of visual information is no longer being imposed externally to an array of pixels but the decision making is transferred to the single pixel that handles its own information individually. In this paper, recent developments in bioinspired, neuromorphic optical sensing and artificial vision are presented and discussed. It is suggested that bioinspired vision systems have the potential to outperform conventional, frame-based vision systems in many application fields and to establish new benchmarks in terms of redundancy suppression and data compression, dynamic range, temporal resolution, and power efficiency. Demanding vision tasks such as real-time 3-D mapping, complex multiobject tracking, or fast visual feedback loops for sensory-motor action, tasks that often pose severe, sometimes insurmountable, challenges to conventional artificial vision systems, are in reach using bioinspired vision sensing and processing techniques.", "title": "" }, { "docid": "69b0c5a4a3d5fceda5e902ec8e0479bb", "text": "Mobile-edge computing (MEC) is an emerging paradigm that provides a capillary distribution of cloud computing capabilities to the edge of the wireless access network, enabling rich services and applications in close proximity to the end users. In this paper, an MEC enabled multi-cell wireless network is considered where each base station (BS) is equipped with a MEC server that assists mobile users in executing computation-intensive tasks via task offloading. The problem of joint task offloading and resource allocation is studied in order to maximize the users’ task offloading gains, which is measured by a weighted sum of reductions in task completion time and energy consumption. The considered problem is formulated as a mixed integer nonlinear program (MINLP) that involves jointly optimizing the task offloading decision, uplink transmission power of mobile users, and computing resource allocation at the MEC servers. Due to the combinatorial nature of this problem, solving for optimal solution is difficult and impractical for a large-scale network. To overcome this drawback, we propose to decompose the original problem into a resource allocation (RA) problem with fixed task offloading decision and a task offloading (TO) problem that optimizes the optimal-value function corresponding to the RA problem. We address the RA problem using convex and quasi-convex optimization techniques, and propose a novel heuristic algorithm to the TO problem that achieves a suboptimal solution in polynomial time. 
Simulation results show that our algorithm performs closely to the optimal solution and that it significantly improves the users’ offloading utility over traditional approaches.", "title": "" }, { "docid": "bd3ba8635a8cd2112a1de52c90e2a04b", "text": "Neural Machine Translation (NMT) is a new technique for machine translation that has led to remarkable improvements compared to rule-based and statistical machine translation (SMT) techniques, by overcoming many of the weaknesses in the conventional techniques. We study and apply NMT techniques to create a system with multiple models which we then apply for six Indian language pairs. We compare the performances of our NMT models with our system using automatic evaluation metrics such as UNK Count, METEOR, F-Measure, and BLEU. We find that NMT techniques are very effective for machine translations of Indian language pairs. We then demonstrate that we can achieve good accuracy even using a shallow network; on comparing the performance of Google Translate on our test dataset, our best model outperformed Google Translate by a margin of 17 BLEU points on Urdu-Hindi, 29 BLEU points on Punjabi-Hindi, and 30 BLEU points on Gujarati-Hindi translations.", "title": "" }, { "docid": "bc6f9ef52c124675c62ccb8a1269a9b8", "text": "We explore 3D printing physical controls whose tactile response can be manipulated programmatically through pneumatic actuation. In particular, by manipulating the internal air pressure of various pneumatic elements, we can create mechanisms that require different levels of actuation force and can also change their shape. We introduce and discuss a series of example 3D printed pneumatic controls, which demonstrate the feasibility of our approach. This includes conventional controls, such as buttons, knobs and sliders, but also extends to domains such as toys and deformable interfaces. We describe the challenges that we faced and the methods that we used to overcome some of the limitations of current 3D printing technology. We conclude with example applications and thoughts on future avenues of research.", "title": "" }, { "docid": "043306203de8365bd1930a9c0b4138c7", "text": "In this paper, we compare two different methods for automatic Arabic speech recognition for isolated words and sentences. Isolated word/sentence recognition was performed using cepstral feature extraction by linear predictive coding, as well as Hidden Markov Models (HMM) for pattern training and classification. We implemented a new pattern classification method, where we used Neural Networks trained using the Al-Alaoui Algorithm. This new method gave comparable results to the already implemented HMM method for the recognition of words, and it has overcome HMM in the recognition of sentences. The speech recognition system implemented is part of the Teaching and Learning Using Information Technology (TLIT) project which would implement a set of reading lessons to assist adult illiterates in developing better reading capabilities.", "title": "" }, { "docid": "980ad058a2856048765f497683557386", "text": "Hierarchical reinforcement learning (HRL) has recently shown promising advances on speeding up learning, improving the exploration, and discovering intertask transferable skills. Most recent works focus on HRL with two levels, i.e., a master policy manipulates subpolicies, which in turn manipulate primitive actions. 
However, HRL with multiple levels is usually needed in many real-world scenarios, whose ultimate goals are highly abstract, while their actions are very primitive. Therefore, in this paper, we propose a diversity-driven extensible HRL (DEHRL), where an extensible and scalable framework is built and learned levelwise to realize HRL with multiple levels. DEHRL follows a popular assumption: diverse subpolicies are useful, i.e., subpolicies are believed to be more useful if they are more diverse. However, existing implementations of this diversity assumption usually have their own drawbacks, which makes them inapplicable to HRL with multiple levels. Consequently, we further propose a novel diversity-driven solution to achieve this assumption in DEHRL. Experimental studies evaluate DEHRL with five baselines from four perspectives in two domains; the results show that DEHRL outperforms the state-of-the-art baselines in all four aspects.", "title": "" }, { "docid": "af8fbdfbc4c4958f69b3936ff2590767", "text": "Analysis of sedimentary diatom assemblages (10 to 144 ka) form the basis for a detailed reconstruction of the paleohydrography and diatom paleoecology of Lake Malawi. Lake-level fluctuations on the order of hundreds of meters were inferred from dramatic changes in the fossil and sedimentary archives. Many of the fossil diatom assemblages we observed have no analog in modern Lake Malawi. Cyclotelloid diatom species are a major component of fossil assemblages prior to 35 ka, but are not found in significant abundances in the modern diatom communities in Lake Malawi. Salinityand alkalinity-tolerant plankton has not been reported in the modern lake system, but frequently dominant fossil diatom assemblages prior to 85 ka. Large stephanodiscoid species that often dominate the plankton today are rarely present in the fossil record prior to 31 ka. Similarly, prior to 31 ka, common central-basin aulacoseiroid species are replaced by species found in the shallow, well-mixed southern basin. Surprisingly, tychoplankton and periphyton were not common throughout prolonged lowstands, but tended to increase in relative abundance during periods of inferred deeper-lake environments. A high-resolution lake level reconstruction was generated by a principle component analysis of fossil diatom and wetsieved fossil and mineralogical residue records. Prior to 70 ka, fossil assemblages suggest that the central basin was periodically a much shallower, more saline and/or alkaline, well-mixed environment. The most significant reconstructed lowstands are ~ 600 m below the modern lake level and span thousands of years. These conditions contrast starkly with the deep, dilute, dysaerobic environments of the modern central basin. After 70 ka, our reconstruction indicates sustained deeper-water environments were common, marked by a few brief, but significant, lowstands. High amplitude lake-level fluctuations appear related to changes in insolation. Seismic reflection data and additional sediment cores recovered from the northern basin of Lake Malawi provide evidence that supports our reconstruction.", "title": "" }, { "docid": "7a87ffc98d8bab1ff0c80b9e8510a17d", "text": "We present a method for transferring neural representations from label-rich source domains to unlabeled target domains. Recent adversarial methods proposed for this task learn to align features across domains by fooling a special domain critic network. 
However, a drawback of this approach is that the critic simply labels the generated features as in-domain or not, without considering the boundaries between classes. This can lead to ambiguous features being generated near class boundaries, reducing target classification accuracy. We propose a novel approach, Adversarial Dropout Regularization (ADR), to encourage the generator to output more discriminative features for the target domain. Our key idea is to replace the critic with one that detects non-discriminative features, using dropout on the classifier network. The generator then learns to avoid these areas of the feature space and thus creates better features. We apply our ADR approach to the problem of unsupervised domain adaptation for image classification and semantic segmentation tasks, and demonstrate significant improvement over the state of the art. We also show that our approach can be used to train Generative Adversarial Networks for semi-supervised learning.", "title": "" }, { "docid": "33aa9af9a5f3d3f0b8bf21dca3b13d2f", "text": "Microarchitectural resources such as caches and predictors can be used to leak information across security domains. Significant prior work has demonstrated attacks and defenses for specific types of such microarchitectural side and covert channels. In this paper, we introduce a general mathematical study of microarchitectural channels using information theory. Our conceptual contribution is a simple mathematical abstraction that captures the common characteristics of all microarchitectural channels. We call this the Bucket model and it reveals that microarchitectural channels are fundamentally different from side and covert channels in networking. We then quantify the communication capacity of several microarchitectural covert channels (including channels that rely on performance counters, AES hardware and memory buses) and measure bandwidths across both KVM based heavy-weight virtualization and light-weight operating-system level isolation. We demonstrate channel capacities that are orders of magnitude higher compared to what was previously considered possible. Finally, we introduce a novel way of detecting intelligent adversaries that try to hide while running covert channel eavesdropping attacks. Our method generalizes a prior detection scheme (that modeled static adversaries) by introducing noise that hides the detection process from an intelligent eavesdropper.", "title": "" }, { "docid": "aa2401a302c7f0b394abb11961420b50", "text": "A program is then asked the question “what was too small” as a follow-up to (1a), and the question “what was too big” as a follow-up to (1b). Levesque et. al. call a sentence such as that in (1) “Google proof” since a system that processed a large corpus cannot “learn” how to resolve such references by finding some statistical correlations in the data, as the only difference between (1a) and (1b) are antonyms that are known to co-occur in similar contexts with the same frequency. In a recent paper Trinh and Le (2018) henceforth T&L suggested that they have successfully formulated a „simple‟ machine learning method for performing commonsense reasoning, and in particular, the kind of reasoning that would be required in the process of language understanding. In doing so, T&L use the Winograd Schema (WS) challenge as a benchmark. 
In simple terms, T&L suggest the following method for “learning” how to successfully resolve the reference “it” in sentences such as those in (1): generate two", "title": "" }, { "docid": "0122f015e3c054840782d09ede609390", "text": "Decision rules are one of the most expressive languages for machine learning. In this paper we present Adaptive Model Rules (AMRules), the first streaming rule learning algorithm for regression problems. In AMRules the antecedent of a rule is a conjunction of conditions on the attribute values, and the consequent is a linear combination of attribute values. Each rule uses a PageHinkley test to detect changes in the process generating data and react to changes by pruning the rule set. In the experimental section we report the results of AMRules on benchmark regression problems, and compare the performance of our system with other streaming regression algorithms.", "title": "" }, { "docid": "ad9cd1137223583c9324f7670688f098", "text": "Sources of multidimensional data are becoming more prevalent, partly due to the rise of the Internet of Things (IoT), and with that the need to ingest and analyze data streams at rates higher than before. Some industrial IoT applications require ingesting millions of records per second, while processing queries on recently ingested and historical data. Unfortunately, existing database systems suited to multidimensional data exhibit low per-node ingestion performance, and even if they can scale horizontally in distributed settings, they require large number of nodes to meet such ingest demands. For this reason, in this paper we evaluate a singlenode multidimensional data store for high-velocity sensor data. Its design centers around a two-level indexing structure, wherein the global index is an in-memory R*-tree and the local indices are serialized kd-trees. This study is confined to records with numerical indexing fields and range queries, and covers ingest throughput, query response time, and storage footprint. We show that the adopted design streamlines data ingestion and offers ingress rates two orders of magnitude higher than those of a selection of open-source database systems, namely Percona Server, SQLite, and Druid. Our prototype also reports query response times comparable to or better than those of Percona Server and Druid, and compares favorably in terms of storage footprint. In addition, we evaluate a kd-tree partitioning based scheme for grouping incoming streamed data records. Compared to a random scheme, this scheme produces less overlap between groups of streamed records, but contrary to what we expected, such reduced overlap does not translate into better query performance. By contrast, the local indices prove much more beneficial to query performance. We believe the experience reported in this paper is valuable to practitioners and researchers alike interested in building database systems for high-velocity multidimensional data.", "title": "" }, { "docid": "ba302b1ee508edc2376160b3ad0a751f", "text": "During the last years terrestrial laser scanning became a standard method of data acquisition for various applications in close range domain, like industrial production, forest inventories, plant engineering and construction, car navigation and – one of the most important fields – the recording and modelling of buildings. To use laser scanning data in an adequate way, a quality assessment of the laser scanner is inevitable. 
In the literature some publications can be found concerning the data quality of terrestrial laser scanners. Most of these papers concentrate on the geometrical accuracy of the scanner (errors of instrument axis, range accuracy using target etc.). In this paper a special aspect of quality assessment will be discussed: the influence of different materials and object colours on the recorded measurements of a TLS. The effects on the geometric accuracy as well as on the simultaneously acquired intensity values are the topics of our investigations. A TRIMBLE GX scanner was used for several test series. The study of different effects refer to materials commonly used at building façades, i.e. grey scaled and coloured sheets, various species of wood, a metal plate, plasters of different particle size, light-transmissive slides and surfaces of different conditions of wetness. The tests concerning a grey wedge show a dependence on the brightness where the mean square error (MSE) decrease from black to white, and therefore, confirm previous results of other research groups. Similar results had been obtained with coloured sheets. In this context an important result is that the accuracy of measurements at night-time has proved to be much better than at day time. While different species of wood and different conditions of wetness have no significant effect on the range accuracy the study of a metal plate delivers MSE values considerably higher than the accuracy of the scanner, if the angle of incidence is approximately orthogonal. Also light-transmissive slides cause enormous MSE values. It can be concluded that high precision measurements should be carried out at night-time and preferable on bright surfaces without specular characteristics.", "title": "" }, { "docid": "8beca44b655835e7a33abd8f1f343a6f", "text": "Taxonomies have been developed as a mechanism for cyber attack categorisation. However, when one considers the recent and rapid evolution of attacker techniques and targets, the applicability and effectiveness of these taxonomies should be questioned. This paper applies two approaches to the evaluation of seven taxonomies. The first employs a criteria set, derived through analysis of existing works in which critical components to the creation of taxonomies are defined. The second applies historical attack data to each taxonomy under review, more specifically, attacks in which industrial control systems have been targeted. This combined approach allows for a more in-depth understanding of existing taxonomies to be developed, from both a theoretical and practical perspective.", "title": "" }, { "docid": "01d4f1311afdd38c1afae967542768e6", "text": "Cortana, one of the new features introduced by Microsoft in Windows 10 desktop operating systems, is a voice activated personal digital assistant that can be used for searching stuff on device or web, setting up reminders, tracking users’ upcoming flights, getting news tailored to users’ interests, sending text and emails, and more. Being the platform relatively new, the forensic examination of Cortana has been largely unexplored in literature. This paper seeks to determine the data remnants of Cortana usage in a Windows 10 personal computer (PC). The research contributes in-depth understanding of the location of evidentiary artifacts on hard disk and the type of information recorded in these artifacts as a result of user activities on Cortana. 
For decoding and exporting data from one of the databases created by Cortana application, four custom python scripts have been developed. Additionally, as a part of this paper, a GUI tool called CortanaDigger is developed for extracting and listing web search strings, as well as timestamp of search made by a user on Cortana box. Several experiments are conducted to track reminders (based on time, place, and person) and detect anti-forensic attempts like evidence modification and evidence destruction carried out on Cortana artifacts. Finally, forensic usefulness of Cortana artifacts is demonstrated in terms of a Cortana web search timeline constructed over a period of time.", "title": "" }, { "docid": "6c411f36e88a39684eb9779462117e6b", "text": "Number of people who use internet and websites for various purposes is increasing at an astonishing rate. More and more people rely on online sites for purchasing songs, apparels, books, rented movies etc. The competition between the online sites forced the web site owners to provide personalized services to their customers. So the recommender systems came into existence. Recommender systems are active information filtering systems that attempt to present to the user, information items in which the user is interested in. The websites implement recommender system feature using collaborative filtering, content based or hybrid approaches. The recommender systems also suffer from issues like cold start, sparsity and over specialization. Cold start problem is that the recommenders cannot draw inferences for users or items for which it does not have sufficient information. This paper attempts to propose a solution to the cold start problem by combining association rules and clustering technique. Comparison is done between the performance of the recommender system when association rule technique is used and the performance when association rule and clustering is combined. The experiments with the implemented system proved that accuracy can be improved when association rules and clustering is combined. An accuracy improvement of 36% was achieved by using the combination technique over the association rule technique.", "title": "" }, { "docid": "04e7a143443a04be37e61a8ce0f562d6", "text": "During the 2016 United States presidential election, politicians have increasingly used Twitter to express their beliefs, stances on current political issues, and reactions concerning national and international events. Given the limited length of tweets and the scrutiny politicians face for what they choose or neglect to say, they must craft and time their tweets carefully. The content and delivery of these tweets is therefore highly indicative of a politician’s stances. We present a weakly supervised method for extracting how issues are framed and temporal activity patterns on Twitter for popular politicians and issues of the 2016 election. These behavioral components are combined into a global model which collectively infers the most likely stance and agreement patterns among politicians, with respective accuracies of 86.44% and 84.6% on average.", "title": "" } ]
scidocsrr
a60f54a4b2103ce0e5fa92ef52973b0f
A Comparative Study of Classification and Regression Algorithms for Modelling Students' Academic Performance.
[ { "docid": "fb3cb4a5aef2633add88f28a7f3f19ac", "text": "Both the root mean square error (RMSE) and the mean absolute error (MAE) are regularly employed in model evaluation studies. Willmott and Matsuura(2005) have suggested that the RMSE is not a good indicator of average model performance and might be a misleading indicator of average error, and thus the MAE would be a better metric for that purpose. While some concerns over using RMSE raised by Willmott and Matsuura(2005) andWillmott et al. (2009) are valid, the proposed avoidance of RMSE in favor of MAE is not the solution. Citing the aforementioned papers, many researchers chose MAE over RMSE to present their model evaluation statistics when presenting or adding the RMSE measures could be more beneficial. In this technical note, we demonstrate that the RMSE is not ambiguous in its meaning, contrary to what was claimed by Willmott et al. (2009). The RMSE is more appropriate to represent model performance than the MAE when the error distribution is expected to be Gaussian. In addition, we show that the RMSE satisfies the triangle inequality requirement for a distance metric, whereasWillmott et al. (2009) indicated that the sums-ofsquares-based statistics do not satisfy this rule. In the end, we discussed some circumstances where using the RMSE will be more beneficial. However, we do not contend that the RMSE is superior over the MAE. Instead, a combination of metrics, including but certainly not limited to RMSEs and MAEs, are often required to assess model performance.", "title": "" } ]
[ { "docid": "9c47d1896892c663987caa24d4a70037", "text": "Multi-pitch estimation of sources in music is an ongoing research area that has a wealth of applications in music information retrieval systems. This paper presents the systematic evaluations of over a dozen competing methods and algorithms for extracting the fundamental frequencies of pitched sound sources in polyphonic music. The evaluations were carried out as part of the Music Information Retrieval Evaluation eXchange (MIREX) over the course of two years, from 2007 to 2008. The generation of the dataset and its corresponding ground-truth, the methods by which systems can be evaluated, and the evaluation results of the different systems are presented and discussed.", "title": "" }, { "docid": "ed6a69d040a53bec208cf3f0fc5076e9", "text": "The Buddhist construct of mindfulness is a central element of mindfulness-based interventions and derives from a systematic phenomenological programme developed over several millennia to investigate subjective experience. Enthusiasm for ‘mindfulness’ in Western psychological and other science has resulted in proliferation of definitions, operationalizations and self-report inventories that purport tomeasure mindful awareness as a trait. This paper addresses a number of seemingly intractable issues regarding current attempts to characterize mindfulness and also highlights a number of vulnerabilities in this domain that may lead to denaturing, distortion, dilution or reification of Buddhist constructs related to mindfulness. Enriching positivist Western psychological paradigms with a detailed and complex Buddhist phenomenology of the mind may require greater study and long-term direct practice of insight meditation than is currently common among psychologists and other scientists. Pursuit of such an approach would seem a necessary precondition for attempts to characterize and quantify mindfulness.", "title": "" }, { "docid": "7645c6a0089ab537cb3f0f82743ce452", "text": "Behavioral studies of facial emotion recognition (FER) in autism spectrum disorders (ASD) have yielded mixed results. Here we address demographic and experiment-related factors that may account for these inconsistent findings. We also discuss the possibility that compensatory mechanisms might enable some individuals with ASD to perform well on certain types of FER tasks in spite of atypical processing of the stimuli, and difficulties with real-life emotion recognition. Evidence for such mechanisms comes in part from eye-tracking, electrophysiological, and brain imaging studies, which often show abnormal eye gaze patterns, delayed event-related-potential components in response to face stimuli, and anomalous activity in emotion-processing circuitry in ASD, in spite of intact behavioral performance during FER tasks. We suggest that future studies of FER in ASD: 1) incorporate longitudinal (or cross-sectional) designs to examine the developmental trajectory of (or age-related changes in) FER in ASD and 2) employ behavioral and brain imaging paradigms that can identify and characterize compensatory mechanisms or atypical processing styles in these individuals.", "title": "" }, { "docid": "0add09adcb099c977435ddd8390c03c8", "text": "A novel diode-triggered SCR (DTSCR) ESD protection element is introduced for low-voltage application (signal, supply voltage /spl les/1.8 V) and extremely narrow ESD design margins. 
Trigger voltage engineering in conjunction with fast and efficient SCR voltage clamping is applied for the protection of ultra-sensitive circuit nodes, such as SiGe HBT bases (e.g. f/sub Tmax/=45 GHz in BiCMOS-0.35 /spl mu/m LNA input) and thin gate-oxides (e.g. tox=1.7 nm in CMOS-0.09 /spl mu/m input). SCR integration is possible based on CMOS devices or can alternatively be formed by high-speed SiGe HBTs.", "title": "" }, { "docid": "34d0b8d4b1c25b4be30ad0c15435f407", "text": "Cranioplasty using alternate alloplastic bone substitutes instead of autologous bone grafting is inevitable in the clinical field. The authors present their experiences with cranial reshaping using methyl methacrylate (MMA) and describe technical tips that are keys to a successful procedure. A retrospective chart review of patients who underwent cranioplasty with MMA between April 2007 and July 2010 was performed. For 20 patients, MMA was used for cranioplasty after craniofacial trauma (n = 16), tumor resection (n = 2), and a vascular procedure (n = 2). The patients were divided into two groups. In group 1, MMA was used in full-thickness inlay fashion (n = 3), and in group 2, MMA was applied in partial-thickness onlay fashion (n = 17). The locations of reconstruction included the frontotemporal region (n = 5), the frontoparietotemporal region (n = 5), the frontal region (n = 9), and the vertex region (n = 1). The size of cranioplasty varied from 30 to 144 cm2. The amount of MMA used ranged from 20 to 70 g. This biomaterial was applied without difficulty, and no intraoperative complications were linked to the applied material. The patients were followed for 6 months to 4 years (mean, 2 years) after MMA implantation. None of the patients showed any evidence of implant infection, exposure, or extrusion. Moreover, the construct appeared to be structurally stable over time in all the patients. Methyl methacrylate is a useful adjunct for treating deficiencies of the cranial skeleton. It provides rapid and reliable correction of bony defects and contour deformities. Although MMA is alloplastic, appropriate surgical procedures can avoid problems such as infection and extrusion. An acceptable overlying soft tissue envelope should be maintained together with minimal contamination of the operative site. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266 .", "title": "" }, { "docid": "4829d8c0dd21f84c3afbe6e1249d6248", "text": "We present an action recognition and detection system from temporally untrimmed videos by combining motion and appearance features. Motion and appearance are two kinds of complementary cues for human action understanding from video. For motion features, we adopt the Fisher vector representation with improved dense trajectories due to its rich descriptive capacity. For appearance feature, we choose the deep convolutional neural network activations due to its recent success in image based tasks. With this fused feature of iDT and CNN, we train a SVM classifier for each action class in the one-vs-all scheme. We report both the recognition and detection results of our system on Thumos 14 Challenge. 
From the results, we see that our method rank 4 in the action recognition task and 2 in the action detection task.", "title": "" }, { "docid": "41149c3504f43bd76cca054a4dff384c", "text": "This paper presents a 3-dimensional millimeter-wave statistical channel impulse response model from 28 GHz and 73 GHz ultrawideband propagation measurements [1], [2]. An accurate 3GPP-like channel model that supports arbitrary carrier frequency, RF bandwidth, and antenna beamwidth (for both omnidirectional and arbitrary directional antennas), is provided. Time cluster and spatial lobe model parameters are extracted from empirical distributions from field measurements. A step-by-step modeling procedure for generating channel coefficients is shown to agree with statistics from the field measurements, thus confirming that the statistical channel model faithfully recreates spatial and temporal channel impulse responses for use in millimeter-wave 5G air interface designs.", "title": "" }, { "docid": "4a5959a7bcfaa0c7768d9a0d742742be", "text": "In this paper, we are interested in understanding the interrelationships between mainstream and social media in forming public opinion during mass crises, specifically in regards to how events are framed in the mainstream news and on social networks and to how the language used in those frames may allow to infer political slant and partisanship. We study the lingual choices for political agenda setting in mainstream and social media by analyzing a dataset of more than 40M tweets and more than 4M news articles from the mass protests in Ukraine during 2013-2014 — known as "Euromaidan" — and the post-Euromaidan conflict between Russian, pro-Russian and Ukrainian forces in eastern Ukraine and Crimea. We design a natural language processing algorithm to analyze at scale the linguistic markers which point to a particular political leaning in online media and show that political slant in news articles and Twitter posts can be inferred with a high level of accuracy. These findings allow us to better understand the dynamics of partisan opinion formation during mass crises and the interplay between mainstream and social media in such circumstances.", "title": "" }, { "docid": "61cfd09f87ed6bacd3446ea32061bc4c", "text": "Subgroup discovery is a data mining technique which extracts interesting rules with respect to a target variable. An important characteristic of this task is the combination of predictive and descriptive induction. An overview related to the task of subgroup discovery is presented. This review focuses on the foundations, algorithms, and advanced studies together with the applications of subgroup discovery presented throughout the specialised bibliography.", "title": "" }, { "docid": "646a1a07019d0f2965051baebcfe62c5", "text": "We present a computing model based on the DNA strand displacement technique, which performs Bayesian inference. The model will take single-stranded DNA as input data, that represents the presence or absence of a specific molecular signal (evidence). The program logic encodes the prior probability of a disease and the conditional probability of a signal given the disease affecting a set of different DNA complexes and their ratios. When the input and program molecules interact, they release a different pair of single-stranded DNA species whose ratio represents the application of Bayes’ law: the conditional probability of the disease given the signal.
The models presented in this paper can have the potential to enable the application of probabilistic reasoning in genetic diagnosis in vitro.", "title": "" }, { "docid": "fd2abd6749eb7a85f3480ae9b4cbefa6", "text": "We examine the current performance and future demands of interconnects to and on silicon chips. We compare electrical and optical interconnects and project the requirements for optoelectronic and optical devices if optics is to solve the major problems of interconnects for future high-performance silicon chips. Optics has potential benefits in interconnect density, energy, and timing. The necessity of low interconnect energy imposes low limits especially on the energy of the optical output devices, with a ~ 10 fJ/bit device energy target emerging. Some optical modulators and radical laser approaches may meet this requirement. Low (e.g., a few femtofarads or less) photodetector capacitance is important. Very compact wavelength splitters are essential for connecting the information to fibers. Dense waveguides are necessary on-chip or on boards for guided wave optical approaches, especially if very high clock rates or dense wavelength-division multiplexing (WDM) is to be avoided. Free-space optics potentially can handle the necessary bandwidths even without fast clocks or WDM. With such technology, however, optics may enable the continued scaling of interconnect capacity required by future chips.", "title": "" }, { "docid": "c101290e355e76df7581a4500c111c86", "text": "The Internet of Things (IoT) is a network of physical things, objects, or devices, such as radio-frequency identification tags, sensors, actuators, mobile phones, and laptops. The IoT enables objects to be sensed and controlled remotely across existing network infrastructure, including the Internet, thereby creating opportunities for more direct integration of the physical world into the cyber world. The IoT becomes an instance of cyberphysical systems (CPSs) with the incorporation of sensors and actuators in IoT devices. Objects in the IoT have the potential to be grouped into geographical or logical clusters. Various IoT clusters generate huge amounts of data from diverse locations, which creates the need to process these data more efficiently. Efficient processing of these data can involve a combination of different computation models, such as in situ processing and offloading to surrogate devices and cloud-data centers.", "title": "" }, { "docid": "de6581719d2bc451695a77d43b091326", "text": "Keyphrases are useful for a variety of tasks in information retrieval systems and natural language processing, such as text summarization, automatic indexing, clustering/classification, ontology learning and building and conceptualizing particular knowledge domains, etc. However, assigning these keyphrases manually is time consuming and expensive in term of human resources. Therefore, there is a need to automate the task of extracting keyphrases. A wide range of techniques of keyphrase extraction have been proposed, but they are still suffering from the low accuracy rate and poor performance. This paper presents a state of the art of automatic keyphrase extraction approaches to identify their strengths and weaknesses. 
We also discuss why some techniques perform better than others and how can we improve the task of automatic keyphrase extraction.", "title": "" }, { "docid": "03aa771b457ec08c6ee5a4d1bb2d20dc", "text": "CONTEXT\nThe use of unidimensional pain scales such as the Numerical Rating Scale (NRS), Verbal Rating Scale (VRS), or Visual Analogue Scale (VAS) is recommended for assessment of pain intensity (PI). A literature review of studies specifically comparing the NRS, VRS, and/or VAS for unidimensional self-report of PI was performed as part of the work of the European Palliative Care Research Collaborative on pain assessment.\n\n\nOBJECTIVES\nTo investigate the use and performance of unidimensional pain scales, with specific emphasis on the NRSs.\n\n\nMETHODS\nA systematic search was performed, including citations through April 2010. All abstracts were evaluated by two persons according to specified criteria.\n\n\nRESULTS\nFifty-four of 239 papers were included. Postoperative PI was most frequently studied; six studies were in cancer. Eight versions of the NRS (NRS-6 to NRS-101) were used in 37 studies; a total of 41 NRSs were tested. Twenty-four different descriptors (15 for the NRSs) were used to anchor the extremes. When compared with the VAS and VRS, NRSs had better compliance in 15 of 19 studies reporting this, and were the recommended tool in 11 studies on the basis of higher compliance rates, better responsiveness and ease of use, and good applicability relative to VAS/VRS. Twenty-nine studies gave no preference. Many studies showed wide distributions of NRS scores within each category of the VRSs. Overall, NRS and VAS scores corresponded, with a few exceptions of systematically higher VAS scores.\n\n\nCONCLUSION\nNRSs are applicable for unidimensional assessment of PI in most settings. Whether the variability in anchors and response options directly influences the numerical scores needs to be empirically tested. This will aid in the work toward a consensus-based, standardized measure.", "title": "" }, { "docid": "44e310ba974f371605f6b6b6cd0146aa", "text": "This section is a collection of shorter “Issue and Opinions” pieces that address some of the critical challenges around the evolution of digital business strategy. These voices and visions are from thought leaders who, in addition to their scholarship, have a keen sense of practice. They outline through their opinion pieces a series of issues that will need attention from both research and practice. These issues have been identified through their observation of practice with the eye of a scholar. They provide fertile opportunities for scholars in information systems, strategic management, and organizational theory.", "title": "" }, { "docid": "e2f69fd023cfe69432459e8a82d4c79a", "text": "Thresholding is one of the popular and fundamental techniques for conducting image segmentation. Many thresholding techniques have been proposed in the literature. Among them, the minimum cross entropy thresholding (MCET) have been widely adopted. Although the MCET method is effective in the bilevel thresholding case, it could be very time-consuming in the multilevel thresholding scenario for more complex image analysis. This paper first presents a recursive programming technique which reduces an order of magnitude for computing the MCET objective function. Then, a particle swarm optimization (PSO) algorithm is proposed for searching the near-optimal MCET thresholds. 
The experimental results manifest that the proposed PSO-based algorithm can derive multiple MCET thresholds which are very close to the optimal ones examined by the exhaustive search method. The convergence of the proposed method is analyzed mathematically and the results validate that the proposed method is efficient and is suited for real-time applications. 2006 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "37653b46f34b1418ad7dbfc59cbfe16a", "text": "The Nonlinear autoregressive exogenous (NARX) model, which predicts the current value of a time series based upon its previous values as well as the current and past values of multiple driving (exogenous) series, has been studied for decades. Despite the fact that various NARX models have been developed, few of them can capture the long-term temporal dependencies appropriately and select the relevant driving series to make predictions. In this paper, we propose a dual-stage attention-based recurrent neural network (DA-RNN) to address these two issues. In the first stage, we introduce an input attention mechanism to adaptively extract relevant driving series (a.k.a., input features) at each time step by referring to the previous encoder hidden state. In the second stage, we use a temporal attention mechanism to select relevant encoder hidden states across all time steps. With this dual-stage attention scheme, our model can not only make predictions effectively, but can also be easily interpreted. Thorough empirical studies based upon the SML 2010 dataset and the NASDAQ 100 Stock dataset demonstrate that the DA-RNN can outperform state-of-the-art methods for time series prediction.", "title": "" }, { "docid": "9bcf47b56ba4b58533b0d0435411a7b3", "text": "OBJECTIVES\nThe aim of this report was to evaluate the 5-year clinical performance and survival of zirconia (NobelProcera™) single crowns.\n\n\nMETHODS\nAll patients treated with porcelain-veneered zirconia single crowns in a private practice during the period October 2004 to November 2005 were included. The records were scrutinized for clinical data. Information was available for 162 patients and 205 crowns.\n\n\nRESULTS\nMost crowns (78%) were placed on premolars and molars. Out of the 143 crowns that were followed for 5 years, 126 (88%) did not have any complications. Of those with complications, the most common were: extraction of abutment tooth (7; 3%), loss of retention (15; 7%), need of endodontic treatment (9; 4%) and porcelain veneer fracture (6; 3%). No zirconia cores fractured. In total 19 restorations (9%) were recorded as failures: abutment tooth extraction (7), remake of crown due to lost retention (6), veneer fracture (4), persistent pain (1) and caries (1). The 5-year cumulative survival rate (CSR) was 88.8%.\n\n\nCONCLUSIONS\nAccording to the present 5-year results zirconia crowns (NobelProcera™) are a promising prosthodontic alternative also in the premolar and molar regions. Out of the 143 crowns followed for 5 years, 126 (88%) did not have any complications. However, 9% of the restorations were judged as failures. Further studies are necessary to evaluate the long-term success.", "title": "" }, { "docid": "1c56b68a20b2baba45c7939a24d9be70", "text": "Emotion recognition in conversations is crucial for building empathetic machines. Current work in this domain do not explicitly consider the inter-personal influences that thrive in the emotional dynamics of dialogues. 
To this end, we propose Interactive COnversational memory Network (ICON), a multimodal emotion detection framework that extracts multimodal features from conversational videos and hierarchically models the selfand interspeaker emotional influences into global memories. Such memories generate contextual summaries which aid in predicting the emotional orientation of utterance-videos. Our model outperforms state-of-the-art networks on multiple classification and regression tasks in two benchmark datasets.", "title": "" }, { "docid": "c41e65416f0339046587239ae6a6f7b4", "text": "Substantial research has documented the universality of several emotional expressions. However, recent findings have demonstrated cultural differences in level of recognition and ratings of intensity. When testing cultural differences, stimulus sets must meet certain requirements. Matsumoto and Ekman's Japanese and Caucasian Facial Expressions of Emotion (JACFEE) is the only set that meets these requirements. The purpose of this study was to obtain judgment reliability data on the JACFEE, and to test for possible cross-national differences in judgments as well. Subjects from Hungary, Japan, Poland, Sumatra, United States, and Vietnam viewed the complete JACFEE photo set and judged which emotions were portrayed in the photos and rated the intensity of those expressions. Results revealed high agreement across countries in identifying the emotions portrayed in the photos, demonstrating the reliability of the JACFEE. Despite high agreement, cross-national differences were found in the exact level of agreement for photos of anger, contempt, disgust, fear, sadness, and surprise. Cross-national differences were also found in the level of intensity attributed to the photos. No systematic variation due to either preceding emotion or presentation order of the JACFEE was found. Also, we found that grouping the countries into a Western/Non-Western dichotomy was not justified according to the data. Instead, the cross-national differences are discussed in terms of possible sociopsychological variables that influence emotion judgments. Cross-cultural research has documented high agreement in judgments of facial expressions of emotion in over 30 different cultures (Ekman, The research reported in this article was made supported in part by faculty awards for research and scholarship to David Matsumoto. Also, we would like to express our appreciation to William Irwin for his previous work on this project, and to Nathan Yrizarry, Hideko Uchida, Cenita Kupperbusch, Galin Luk, Carinda Wilson-Cohn, Sherry Loewinger, and Sachiko Takeuchi for their general assistance in our research program. Correspondence concerning this article should be addressed to David Matsumoto, Department of Psychology, San Francisco State University, 1600 Holloway Avenue, San Francisco, CA 94132. Electronic mail may be sent to dm@sfsu.edu. loumal of Nonverbal Behavior 21(1), Spring 1997 © 1997 Human Sciences Press, Inc. 3 1994), including preliterate cultures (Ekman, Sorensen, & Friesen, 1969; Ekman & Friesen, 1971). Recent research, however, has reported cultural differences in judgment as well. Matsumoto (1989, 1992a), for example, found that American and Japanese subjects differed in their rates of recognition. Differences have also been found in ratings of intensity (Ekman et al., 1987). Examining cultural differences requires a different methodology than studying similarities. 
Matsumoto (1992a) outlined such requirements: (1) cultures must view the same expressions; (2) the facial expressions must meet criteria for validly and reliably portraying the universal emotions; (3) each poser must appear only once; (4) expressions must include posers of more than one race. Matsumoto and Ekman's (1988) Japanese and Caucasian Facial Expressions of Emotion (JACFEE) was designed to meet these requirements. JACFEE was developed by photographing over one hundred posers who voluntarily moved muscles that correspond to the universal expressions (Ekman & Friesen, 1975, 1986). From the thousands of photographs taken, a small pool of photos was coded using Ekman and Friesen's (1978) Facial Action Coding System (FACS). A final pool of photos was then selected to ensure that each poser only contributed one photo in the final set, which is comprised of 56 photos, including eight photos each of anger, contempt, disgust, fear, happiness, sadness, and surprise. Four photos of each emotion depict posers of either Japanese or Caucasian descent (2 males, 2 females). Two published studies have reported judgment data on the JACFEE, but only with American and Japanese subjects. Matsumoto and Ekman (1989), for example, asked their subjects to make scalar ratings (0-8) on seven emotion dimensions for each photo. The judgments of the Americans and Japanese were similar in relation to strongest emotion depicted in the photos, and the relative intensity among the photographs. Americans, however, gave higher absolute intensity ratings on photos of happiness, anger, sadness, and surprise. In the second study (Matsumoto, 1992a), high agreement was found in the recognition judgments, but the level of recognition differed for anger, disgust, fear, and sadness. While data from these and other studies seem to indicate the dual existence of universal and culture-specific aspects of emotion judgment, the methodology used in many previous studies has recently been questioned on several grounds, including the previewing of slides, judgment context, presentation order, preselection of slides, the use of posed expressions, and type of response format (Russell, 1994; see Ekman, 1994, and Izard, 1994, for reply). Two of these, judgment context and presentation order, are especially germane to the present study and are addressed here. JOURNAL OF NONVERBAL BEHAVIOR 4", "title": "" } ]
scidocsrr
9dfa53d70e1d72fc77c4ea19877698b6
Identifying Argumentative Discourse Structures in Persuasive Essays
[ { "docid": "3fa5de33e7ccd6c440a4a65a5681f8b8", "text": "Argumentation is the process by which arguments are constructed and handled. Argumentation constitutes a major component of human intelligence. The ability to engage in argumentation is essential for humans to understand new problems, to perform scientific reasoning, to express, to clarify and to defend their opinions in their daily lives. Argumentation mining aims to detect the arguments presented in a text document, the relations between them and the internal structure of each individual argument. In this paper we analyse the main research questions when dealing with argumentation mining and the different methods we have studied and developed in order to successfully confront the challenges of argumentation mining in legal texts.", "title": "" }, { "docid": "afd00b4795637599f357a7018732922c", "text": "We present a new part-of-speech tagger that demonstrates the following ideas: (i) explicit use of both preceding and following tag contexts via a dependency network representation, (ii) broad use of lexical features, including jointly conditioning on multiple consecutive words, (iii) effective use of priors in conditional loglinear models, and (iv) fine-grained modeling of unknown word features. Using these ideas together, the resulting tagger gives a 97.24% accuracy on the Penn Treebank WSJ, an error reduction of 4.4% on the best previous single automatically learned tagging result.", "title": "" }, { "docid": "b69686c780d585d6b53fe7ec37e22b80", "text": "In written dialog, discourse participants need to justify claims they make, to convince the reader the claim is true and/or relevant to the discourse. This paper presents a new task (with an associated corpus), namely detecting such justifications. We investigate the nature of such justifications, and observe that the justifications themselves often contain discourse structure. We therefore develop a method to detect the existence of certain types of discourse relations, which helps us classify whether a segment is a justification or not. Our task is novel, and our work is novel in that it uses a large set of connectives (which we call indicators), and in that it uses a large set of discourse relations, without choosing among them.", "title": "" } ]
[ { "docid": "8689b038c62d96adf1536594fcc95c07", "text": "We present an interactive system that allows users to design original pop-up cards. A pop-up card is an interesting form of papercraft consisting of folded paper that forms a three-dimensional structure when opened. However, it is very difficult for the average person to design pop-up cards from scratch because it is necessary to understand the mechanism and determine the positions of objects so that pop-up parts do not collide with each other or protrude from the card. In the proposed system, the user interactively sets and edits primitives that are predefined in the system. The system simulates folding and opening of the pop-up card using a mass–spring model that can simply simulate the physical movement of the card. This simulation detects collisions and protrusions and illustrates the movement of the pop-up card. The results of the present study reveal that the user can design a wide range of pop-up cards using the proposed system.", "title": "" }, { "docid": "2da528d39b8815bcbb9a8aaf20d94926", "text": "Collaborative filtering (CF) is without question the most widely adopted and successful recommendation approach. A typical CF-based recommender system associates a user with a group of like-minded users based on their individual preferences over all the items, either explicit or implicit, and then recommends to the user some unobserved items enjoyed by the group. However, we find that two users with similar tastes on one item subset may have totally different tastes on another set. In other words, there exist many user-item subgroups, each consisting of a subset of items and a group of like-minded users on these items. It is more reasonable to predict preferences through one user's correlated subgroups rather than through the entire user-item matrix. In this paper, to find meaningful subgroups, we formulate a new Multiclass Co-Clustering (MCoC) model, which captures relations of user-to-item, user-to-user, and item-to-item simultaneously. Then, we combine traditional CF algorithms with subgroups for improving their top-N recommendation performance. Our approach can be seen as a new extension of traditional clustering CF models. Systematic experiments on several real data sets have demonstrated the effectiveness of our proposed approach.", "title": "" }, { "docid": "5bd61380b9b05b3e89d776c6cbeb0336", "text": "Cross-domain text classification aims to automatically train a precise text classifier for a target domain by using labelled text data from a related source domain. To this end, one of the most promising ideas is to induce a new feature representation so that the distributional difference between domains can be reduced and a more accurate classifier can be learned in this new feature space. However, most existing methods do not explore the duality of the marginal distribution of examples and the conditional distribution of class labels given labeled training examples in the source domain. Besides, few previous works attempt to explicitly distinguish the domain-independent and domain-specific latent features and align the domain-specific features to further improve the cross-domain learning. 
In this paper, we propose a model called Partially Supervised Cross-Collection LDA topic model (PSCCLDA) for cross-domain learning with the purpose of addressing these two issues in a unified way. Experimental results on nine datasets show that our model outperforms two standard classifiers and four state-of-the-art methods, which demonstrates the effectiveness of our proposed model.", "title": "" }, { "docid": "baa71f083831919a067322ab4b268db5", "text": "The theoretical analysis gives an overview of the functioning of DDS, especially with respect to noise and spurs. Different spur reduction techniques are studied in detail. Four ICs, which were the circuit implementations of the DDS, were designed. One programmable logic device implementation of the CORDIC based quadrature amplitude modulation (QAM) modulator was designed with a separate D/A converter IC. For the realization of these designs, some new building blocks, e.g. a new tunable error feedback structure and a novel and more cost-effective digital power ramp generator, were developed. A DDS was also implemented on an FPGA using Xilinx's ISE software. Index Terms—CORDIC, DDS, NCO, FPGA, SFDR.", "title": "" }, { "docid": "e2d2fe124fbef2138d2c67a02da220c6", "text": "This paper addresses robust fault diagnosis of the chaser's thrusters used for the rendezvous phase of the Mars Sample Return (MSR) mission. The MSR mission is a future exploration mission undertaken jointly by the National Aeronautics and Space Administration (NASA) and the European Space Agency (ESA). The goal is to return tangible samples from Mars' atmosphere and ground to Earth for analysis. A residual-based scheme is proposed that is robust against the presence of unknown time-varying delays induced by the thruster modulator unit. The proposed fault diagnosis design is based on Eigenstructure Assignment (EA) and first-order Padé approximation. The resulting method is able to detect quickly any kind of thruster faults and to isolate them using a cross-correlation based test. Simulation results from the MSR “high-fidelity” industrial simulator, provided by Thales Alenia Space, demonstrate that the proposed method is able to detect and isolate some thruster faults in a reasonable time, despite delays in the thruster modulator unit, an inaccurate navigation unit, and spatial disturbances (i.e. J2 gravitational perturbation, atmospheric drag, and solar radiation pressure).", "title": "" }, { "docid": "edb5b733e77271dd4e1afaf742388a68", "text": "The Intolerance of Uncertainty Model was initially developed as an explanation for worry within the context of generalized anxiety disorder. 
However, recent research has identified intolerance of uncertainty (IU) as a possible transdiagnostic maintaining factor across the anxiety disorders and depression. The aim of this study was to determine whether IU mediated the relationship between neuroticism and symptoms related to various anxiety disorders and depression in a treatment-seeking sample (N=328). Consistent with previous research, IU was significantly associated with neuroticism as well as with symptoms of social phobia, panic disorder and agoraphobia, obsessive-compulsive disorder, generalized anxiety disorder, and depression. Moreover, IU explained unique variance in these symptom measures when controlling for neuroticism. Mediational analyses showed that IU was a significant partial mediator between neuroticism and all symptom measures, even when controlling for symptoms of other disorders. More specifically, anxiety in anticipation of future uncertainty (prospective anxiety) partially mediated the relationship between neuroticism and symptoms of generalized anxiety disorder (i.e. worry) and obsessive-compulsive disorder, whereas inaction in the face of uncertainty (inhibitory anxiety) partially mediated the relationship between neuroticism and symptoms of social anxiety, panic disorder and agoraphobia, and depression. Sobel's test demonstrated that all hypothesized mediational pathways were associated with significant indirect effects, although the mediation effect was stronger for worry than other symptoms. Potential implications of these findings for the treatment of anxiety disorders and depression are discussed.", "title": "" }, { "docid": "947ffeb4fff1ca4ee826d71d4add399e", "text": "Description. Introduction. A maximal complete subgraph (clique) is a complete subgraph that is not contained in any other complete subgraph. A recent paper [1] describes a number of techniques to find maximal complete subgraphs of a given undirected graph. In this paper, we present two backtracking algorithms, using a branch-and-bound technique [4] to cut off branches that cannot lead to a clique. The first version is a straightforward implementation of the basic algorithm. It is mainly presented to illustrate the method used. This version generates cliques in alphabetic (lexicographic) order. The second version is derived from the first and generates cliques in a rather unpredictable order in an attempt to minimize the number of branches to be traversed. This version tends to produce the larger cliques first and to generate sequentially cliques having a large common intersection. The detailed algorithm for version 2 is presented here. Description of the algorithm--Version 1. Three sets play an important role in the algorithm. (1) The set compsub is the set to be extended by a new point or shrunk by one point on traveling along a branch of the backtracking tree. The points that are eligible to extend compsub, i.e. that are connected to all points in compsub, are collected recursively in the remaining two sets. (2) The set candidates is the set of all points that will in due time serve as an extension to the present configuration of compsub. (3) The set not is the set of all points that have at an earlier stage already served as an extension of the present configuration of compsub and are now explicitly excluded. The reason for maintaining this set not will soon be made clear. The core of the algorithm consists of a recursively defined extension operator that will be applied to the three sets just described. 
It has the duty to generate all extensions of the given configuration of compsub that it can make with the given set of candidates and that do not contain any of the points in not. To put it differently: all extensions of compsub containing any point in not have already been generated. The basic mechanism now consists of the following five steps:", "title": "" }, { "docid": "5d154a62b22415cbedd165002853315b", "text": "Unaccompanied immigrant children are a highly vulnerable population, but research into their mental health and psychosocial context remains limited. This study elicited lawyers’ perceptions of the mental health needs of unaccompanied children in U.S. deportation proceedings and their mental health referral practices with this population. A convenience sample of 26 lawyers who work with unaccompanied children completed a semi-structured, online survey. Lawyers surveyed frequently had mental health concerns about their unaccompanied child clients, used clinical and lay terminology to describe symptoms, referred for both expert testimony and treatment purposes, frequently encountered barriers to accessing appropriate services, and expressed interest in mental health training. The results of this study suggest a complex intersection between the legal and mental health needs of unaccompanied children, and the need for further research and improved service provision in support of their wellbeing.", "title": "" }, { "docid": "5bc1c336b8e495e44649365f11af4ab8", "text": "Convolutional neural networks (CNN) are limited by the lack of capability to handle geometric information due to the fixed grid kernel structure. The availability of depth data enables progress in RGB-D semantic segmentation with CNNs. State-of-the-art methods either use depth as additional images or process spatial information in 3D volumes or point clouds. These methods suffer from high computation and memory cost. To address these issues, we present Depth-aware CNN by introducing two intuitive, flexible and effective operations: depth-aware convolution and depth-aware average pooling. By leveraging depth similarity between pixels in the process of information propagation, geometry is seamlessly incorporated into CNN. Without introducing any additional parameters, both operators can be easily integrated into existing CNNs. Extensive experiments and ablation studies on challenging RGB-D semantic segmentation benchmarks validate the effectiveness and flexibility of our approach.", "title": "" }, { "docid": "cdee51ab9562e56aee3fff58cd2143ba", "text": "Stochastic gradient descent (SGD) still is the workhorse for many practical problems. However, it converges slow, and can be difficult to tune. It is possible to precondition SGD to accelerate its convergence remarkably. But many attempts in this direction either aim at solving specialized problems, or result in significantly more complicated methods than SGD. This paper proposes a new method to adaptively estimate a preconditioner, such that the amplitudes of perturbations of preconditioned stochastic gradient match that of the perturbations of parameters to be optimized in a way comparable to Newton method for deterministic optimization. Unlike the preconditioners based on secant equation fitting as done in deterministic quasi-Newton methods, which assume positive definite Hessian and approximate its inverse, the new preconditioner works equally well for both convex and nonconvex optimizations with exact or noisy gradients. 
When stochastic gradient is used, it can naturally damp the gradient noise to stabilize SGD. Efficient preconditioner estimation methods are developed, and with reasonable simplifications, they are applicable to large-scale problems. Experimental results demonstrate that equipped with the new preconditioner, without any tuning effort, preconditioned SGD can efficiently solve many challenging problems like the training of a deep neural network or a recurrent neural network requiring extremely long-term memories.", "title": "" }, { "docid": "bd0375c1a6393117d9b3e97340e90316", "text": "INTRODUCTION\nCancer patients are particularly vulnerable to depression and anxiety, with fatigue as the most prevalent symptom of those undergoing treatment. The purpose of this study was to determine whether improvement in depression, anxiety or fatigue during chemotherapy following anthroposophy art therapy intervention is substantial enough to warrant a controlled trial.\n\n\nMATERIAL AND METHODS\nSixty cancer patients on chemotherapy and willing to participate in once-weekly art therapy sessions (painting with water-based paints) were accrued for the study. Nineteen patients who participated in > or =4 sessions were evaluated as the intervention group, and 41 patients who participated in < or =2 sessions comprised the participant group. Hospital Anxiety and Depression Scale (HADS) and the Brief Fatigue Inventory (BFI) were completed before every session, relating to the previous week.\n\n\nRESULTS\nBFI scores were higher in the participant group (p=0.06). In the intervention group, the median HADS score for depression was 9 at the beginning and 7 after the fourth appointment (p=0.021). The median BFI score changed from 5.7 to 4.1 (p=0.24). The anxiety score was in the normal range from the beginning.\n\n\nCONCLUSION\nAnthroposophical art therapy is worthy of further study in the treatment of cancer patients with depression or fatigue during chemotherapy treatment.", "title": "" }, { "docid": "b1f98cbb045f8c15f53d284c9fa9d881", "text": "If the pace of increase in life expectancy in developed countries over the past two centuries continues through the 21st century, most babies born since 2000 in France, Germany, Italy, the UK, the USA, Canada, Japan, and other countries with long life expectancies will celebrate their 100th birthdays. Although trends differ between countries, populations of nearly all such countries are ageing as a result of low fertility, low immigration, and long lives. A key question is: are increases in life expectancy accompanied by a concurrent postponement of functional limitations and disability? The answer is still open, but research suggests that ageing processes are modifiable and that people are living longer without severe disability. This finding, together with technological and medical development and redistribution of work, will be important for our chances to meet the challenges of ageing populations.", "title": "" }, { "docid": "9f16e90dc9b166682ac9e2a8b54e611a", "text": "Lua is a programming language designed as scripting language, which is fast, lightweight, and suitable for embedded applications. Due to its features, Lua is widely used in the development of games and interactive applications for digital TV. However, during the development phase of such applications, some errors may be introduced, such as deadlock, arithmetic overflow, and division by zero. 
This paper describes a novel verification approach for software written in Lua, using as backend the Efficient SMTBased Context-Bounded Model Checker (ESBMC). Such an approach, called bounded model checking - Lua (BMCLua), consists in translating Lua programs into ANSI-C source code, which is then verified with ESBMC. Experimental results show that the proposed verification methodology is effective and efficient, when verifying safety properties in Lua programs. The performed experiments have shown that BMCLua produces an ANSI-C code that is more efficient for verification, when compared with other existing approaches. To the best of our knowledge, this work is the first that applies bounded model checking to the verification of Lua programs.", "title": "" }, { "docid": "9948786041464ea72bfdddeaba0d2707", "text": "The interpretation of forensic fingerprint evidence relies on the expertise of latent print examiners. We tested latent print examiners on the extent to which they reached consistent decisions. This study assessed intra-examiner repeatability by retesting 72 examiners on comparisons of latent and exemplar fingerprints, after an interval of approximately seven months; each examiner was reassigned 25 image pairs for comparison, out of total pool of 744 image pairs. We compare these repeatability results with reproducibility (inter-examiner) results derived from our previous study. Examiners repeated 89.1% of their individualization decisions, and 90.1% of their exclusion decisions; most of the changed decisions resulted in inconclusive decisions. Repeatability of comparison decisions (individualization, exclusion, inconclusive) was 90.0% for mated pairs, and 85.9% for nonmated pairs. Repeatability and reproducibility were notably lower for comparisons assessed by the examiners as \"difficult\" than for \"easy\" or \"moderate\" comparisons, indicating that examiners' assessments of difficulty may be useful for quality assurance. No false positive errors were repeated (n = 4); 30% of false negative errors were repeated. One percent of latent value decisions were completely reversed (no value even for exclusion vs. of value for individualization). Most of the inter- and intra-examiner variability concerned whether the examiners considered the information available to be sufficient to reach a conclusion; this variability was concentrated on specific image pairs such that repeatability and reproducibility were very high on some comparisons and very low on others. Much of the variability appears to be due to making categorical decisions in borderline cases.", "title": "" }, { "docid": "af910640384bca46ba4268fe4ba0c3b3", "text": "The experience and methodology developed by COPEL for the integrated use of Pls-Cadd (structure spotting) and Tower (structural analysis) softwares are presented. Structural evaluations in transmission line design are possible for any loading condition, allowing considerations of new or updated loading trees, wind speeds or design criteria.", "title": "" }, { "docid": "df9c6dc1d6d1df15b78b7db02f055f70", "text": "The robotic grasp detection is a great challenge in the area of robotics. Previous work mainly employs the visual approaches to solve this problem. In this paper, a hybrid deep architecture combining the visual and tactile sensing for robotic grasp detection is proposed. We have demonstrated that the visual sensing and tactile sensing are complementary to each other and important for the robotic grasping. 
A new THU grasp dataset has also been collected which contains the visual, tactile and grasp configuration information. The experiments conducted on a public grasp dataset and our collected dataset show that the performance of the proposed model is superior to state of the art methods. The results also indicate that the tactile data could help to enable the network to learn better visual features for the robotic grasp detection task.", "title": "" }, { "docid": "27e1d29dc8d252081e80f93186a14660", "text": "Over the last several years there has been an increasing focus on early detection of Autism Spectrum Disorder (ASD), not only from the scientific field but also from professional associations and public health systems all across Europe. Not surprisingly, in order to offer better services and quality of life for both children with ASD and their families, different screening procedures and tools have been developed for early assessment and intervention. However, current evidence is needed for healthcare providers and policy makers to be able to implement specific measures and increase autism awareness in European communities. The general aim of this review is to address the latest and most relevant issues related to early detection and treatments. The specific objectives are (1) analyse the impact, describing advantages and drawbacks, of screening procedures based on standardized tests, surveillance programmes, or other observational measures; and (2) provide a European framework of early intervention programmes and practices and what has been learnt from implementing them in public or private settings. This analysis is then discussed and best practices are suggested to help professionals, health systems and policy makers to improve their local procedures or to develop new proposals for early detection and intervention programmes.", "title": "" }, { "docid": "40e129b6264892f1090fd9a8d6a9c1ae", "text": "We introduce an algorithm for text detection and localization (\"spotting\") that is computationally efficient and produces state-of-the-art results. Our system uses multi-channel MSERs to detect a large number of promising regions, then subsamples these regions using a clustering approach. Representatives of region clusters are binarized and then passed on to a deep network. A final line grouping stage forms word-level segments. On the ICDAR 2011 and 2015 benchmarks, our algorithm obtains an F-score of 82% and 83%, respectively, at a computational cost of 1.2 seconds per frame. We also introduce a version that is three times as fast, with only a slight reduction in performance.", "title": "" }, { "docid": "93d4d58e974e66c11c9b41d12a833da0", "text": "OBJECTIVE\nButyrate enemas may be effective in the treatment of active distal ulcerative colitis. Because colonic fermentation of Plantago ovata seeds (dietary fiber) yields butyrate, the aim of this study was to assess the efficacy and safety of Plantago ovata seeds as compared with mesalamine in maintaining remission in ulcerative colitis.\n\n\nMETHODS\nAn open label, parallel-group, multicenter, randomized clinical trial was conducted. A total of 105 patients with ulcerative colitis who were in remission were randomized into groups to receive oral treatment with Plantago ovata seeds (10 g b.i.d.), mesalamine (500 mg t.i.d.), and Plantago ovata seeds plus mesalamine at the same doses. The primary efficacy outcome was maintenance of remission for 12 months.\n\n\nRESULTS\nOf the 105 patients, 102 were included in the final analysis. 
After 12 months, treatment failure rate was 40% (14 of 35 patients) in the Plantago ovata seed group, 35% (13 of 37) in the mesalamine group, and 30% (nine of 30) in the Plantago ovata plus mesalamine group. Probability of continued remission was similar (Mantel-Cox test, p = 0.67; intent-to-treat analysis). Therapy effects remained unchanged after adjusting for potential confounding variables with a Cox's proportional hazards survival analysis. Three patients were withdrawn because of the development of adverse events consisting of constipation and/or flatulence (Plantago ovata seed group = 1 and Plantago ovata seed plus mesalamine group = 2). A significant increase in fecal butyrate levels (p = 0.018) was observed after Plantago ovata seed administration.\n\n\nCONCLUSIONS\nPlantago ovata seeds (dietary fiber) might be as effective as mesalamine to maintain remission in ulcerative colitis.", "title": "" } ]
scidocsrr
c82dedb6f20d5cc6cc24882af6c00623
The REA-DSL: A Domain Specific Modeling Language for Business Models
[ { "docid": "8951e08b838294b61796717ad691378e", "text": "In order to open up enterprise applications to e-business and make them profitable for a communication with other enterprise applications, a business model is needed showing the business essentials of the e-commerce business case to be developed. Currently there are two major business modeling techniques - e3-value and REA (Resource-Event-Agent). Whereas e3-value was designed for modeling value exchanges within an e-business network of multiple business partners, the REA ontology assumes that, in the presence of money and available prices, all multi-party collaborations may be decomposed into a set of corresponding binary collaborations. This paper is a preliminary attempt to view e3-value and REA used side-by-side to see where they can complement each other in coordinated use in the context of multiple-partner collaboration. A real life scenario from the print media domain has been taken to prove our approach.", "title": "" }, { "docid": "3b778d25b51f444d5cdc327251e72999", "text": "must create the e-business information systems. This article presents a conceptual modeling approach to e-business—called e3-value—that is designed to help define how economic value is created and exchanged within a network of actors. Doing e-business well requires the formulation of an e-business model that will serve as the first step in requirements analysis for e-business information systems. The industry currently lacks adequate methods for formulating these kinds of requirements. Methods from the IT systems analysis domain generally have a strong technology bias and typically do not reflect business considerations very well. Meanwhile, approaches from the business sciences often lack the rigor needed for information systems development. A tighter integration of business and IT modeling would most certainly benefit the industry, because the integration of business and IT systems is already a distinct feature of e-business. This article shows some ways to achieve this kind of modeling integration. Our e3-value method is based on an economic value-oriented ontology that specifies what an e-business model is made of. In particular, it entails defining, deriving, and analyzing multi-enterprise relationships, e-business scenarios, and operations requirements in both qualitative and quantitative ways. Our e3-value approach offers distinct advantages over traditional nonintegrated modeling techniques. These advantages include better communication about the essentials of an e-business model and a more complete understanding of e-business operations and systems requirements through scenario analysis and quantification. The value viewpoint. Requirements engineering entails information systems analysis from several distinct perspectives. Figure 1 shows what requirements perspectives are relevant to e-business design: the articulation of the economic value proposition (the e-business model), the layout of business processes that “operationalize” the e-business model, and the IT systems architecture that enables and supports the e-business processes. These perspectives provide a separation of concerns and help manage the complexity of requirements and design. Our emphasis on “the value viewpoint” is a distinguishing feature of our approach. There are already several good ways to represent business process and IT architectural models, but the industry lacks effective techniques to express and analyze the value viewpoint. 
We illustrate the use of the e3-value methodology with one of the e-business projects where we successfully applied our approach: provisioning a value-added news service. A newspaper, which we call the Amsterdam Times for the sake of the example, wants to offer to all its subscribers the ability to read articles online. But the newspaper does not want to pass on any additional costs to its customers. The idea is to finance the expense by telephone connection revenues, which the reader must pay to set up a telephone connection for Internet connectivity. This can be achieved by two very different e-business models: the terminating model and the originating model. Figures 2 and 3 illustrate these models.", "title": "" } ]
[ { "docid": "80655e659e9cf0456595259f2969fe42", "text": "The induction motor equivalent circuit parameters are required for many performance and planning studies involving induction motors. These parameters are typically calculated from standardized motor performance tests, such as the no load, full load, and locked rotor tests. However, standardized test data is not typically available to the end user. Alternatively, the equivalent circuit parameters may be estimated based on published performance data for the motor. This paper presents an iterative method for estimating the induction motor equivalent circuit parameters using only the motor nameplate data.", "title": "" }, { "docid": "9714636fcadfc7778cb3d01a5fb20e46", "text": "In this paper, a method for controlling multivariable processes is presented. The controller design is divided into two parts: firstly, a decoupling matrix is designed in order to minimize the interaction effects. Then, the controller design is obtained for the process + decoupler block. For this purpose, an iterative numeric algorithm, proposed by same authors, is used. The aim is to meet the design specifications for each loop independently. This sequential design method for multivariable decoupling and multiloop PID controller is applied to several examples from literature. Decentralized PID controller design, specifications analysis and time response simulations has been made using the TITO tool, a set of m functions written in Matlab. It can be obtained in web page http://www.uco.es/~in2vasef. Copyrigth  2002 IFAC.", "title": "" }, { "docid": "a8fcb09ef7d0bb08f9869ca8aca4a5d7", "text": "Visuospatial working memory and its involvement in arithmetic were examined in two groups of 7- to 11-year-olds: one comprising children described by teachers as displaying symptoms of nonverbal learning difficulties (N = 21), the other a control group without learning disabilities (N = 21). The two groups were matched for verbal abilities, age, gender, and sociocultural level. The children were presented with a visuospatial working memory battery of recognition tests involving visual, spatial-sequential and spatial-simultaneous processes, and two arithmetic tasks (number ordering and written calculations). The two groups were found to differ on some spatial tasks but not in the visual working memory tasks. On the arithmetic tasks, the children with nonverbal learning difficulties made more errors than controls in calculation and were slower in number ordering. A discriminant function analysis confirmed the crucial role of spatial-sequential working memory in distinguishing between the two groups. Results are discussed with reference to spatial working memory and arithmetic difficulties in nonverbal learning disabilities. Implications for the relationship between visuospatial working memory and arithmetic are also considered.", "title": "" }, { "docid": "d805dc116db48b644b18e409dda3976e", "text": "Based on previous cross-sectional findings, we hypothesized that weight loss could improve several hemostatic factors associated with cardiovascular disease. In a randomized controlled trial, moderately overweight men and women were assigned to one of four weight loss treatment groups or to a control group. Measurements of plasminogen activator inhibitor-1 (PAI-1) antigen, tissue-type plasminogen activator (t-PA) antigen, D-dimer antigen, factor VII activity, fibrinogen, and protein C antigens were made at baseline and after 6 months in 90 men and 88 women. 
Net treatment weight loss was 9.4 kg in men and 7.4 kg in women. There was no net change (p > 0.05) in D-dimer, fibrinogen, or protein C with weight loss. Significant (p < 0.05) decreases were observed in the combined treatment groups compared with the control group for mean PAI-1 (31% decline), t-PA antigen (24% decline), and factor VII (11% decline). Decreases in these hemostatic variables were correlated with the amount of weight lost and the degree that plasma triglycerides declined; these correlations were stronger in men than women. These findings suggest that weight loss can improve abnormalities in hemostatic factors associated with obesity.", "title": "" }, { "docid": "e90755afe850d597ad7b3f4b7e590b66", "text": "Privacy is considered to be a fundamental human right (Movius and Krup, 2009). Around the world this has led to a large amount of legislation in the area of privacy. Nearly all national governments have imposed local privacy legislation. In the United States several states have imposed their own privacy legislation. In order to maintain a manageable scope this paper only addresses European Union wide and federal United States laws. In addition several US industry (self) regulations are also considered. Privacy regulations in emerging technologies are surrounded by uncertainty. This paper aims to clarify the uncertainty relating to privacy regulations with respect to Cloud Computing and to identify the main open issues that need to be addressed for further research. This paper is based on existing literature and a series of interviews and questionnaires with various Cloud Service Providers (CSPs) that have been performed for the first author’s MSc thesis (Ruiter, 2009). The interviews and questionnaires resulted in data on privacy and security procedures from ten CSPs and while this number is by no means large enough to make any definite conclusions the results are, in our opinion, interesting enough to publish in this paper. The remainder of the paper is organized as follows: the next section gives some basic background on Cloud Computing. Section 3 provides", "title": "" }, { "docid": "fcfafe226a7ab72b5e18d524344400a3", "text": "This paper proposes several adjustments to the ISO 12233 slanted edge algorithm for estimating camera MTF. First, the Ridler-Calvard binary image segmentation method is used to find the line. Secondly, total least squares, rather than ordinary least squares, is used to compute the line parameters. Finally, the pixel values are projected in the reverse direction from the 1D array to the 2D image, rather than from the 2D image to the 1D array. Together, these changes yield an algorithm that exhibits significantly less variation than existing techniques when applied to real images. In particular, the proposed algorithm is largely invariant to the rotation angle of the edge as well as to the size of the image crop.", "title": "" }, { "docid": "737dda9cc50e5cf42523e6cadabf524e", "text": "Maintaining incisor alignment is an important goal of orthodontic retention and can only be guaranteed by placement of an intact, passive and permanent fixed retainer. Here we describe a reliable technique for bonding maxillary retainers and demonstrate all the steps necessary for both technician and clinician. The importance of increasing the surface roughness of the wire and teeth to be bonded, maintaining passivity of the retainer, especially during bonding, the use of a stiff wire and correct placement of the retainer are all discussed. 
Examples of adverse tooth movement from retainers with twisted and multistrand wires are shown.", "title": "" }, { "docid": "cde4d7457b949420ab90bdc894f40eb0", "text": "We study the problem of named entity recognition (NER) from electronic medical records, which is one of the most fundamental and critical problems for medical text mining. Medical records which are written by clinicians from different specialties usually contain quite different terminologies and writing styles. The difference of specialties and the cost of human annotation makes it particularly difficult to train a universal medical NER system. In this paper, we propose a labelaware double transfer learning framework (LaDTL) for cross-specialty NER, so that a medical NER system designed for one specialty could be conveniently applied to another one with minimal annotation efforts. The transferability is guaranteed by two components: (i) we propose label-aware MMD for feature representation transfer, and (ii) we perform parameter transfer with a theoretical upper bound which is also label aware. We conduct extensive experiments on 12 cross-specialty NER tasks. The experimental results demonstrate that La-DTL provides consistent accuracy improvement over strong baselines. Besides, the promising experimental results on non-medical NER scenarios indicate that LaDTL is potential to be seamlessly adapted to a wide range of NER tasks.", "title": "" }, { "docid": "95612aa090b77fc660279c5f2886738d", "text": "Healthy biological systems exhibit complex patterns of variability that can be described by mathematical chaos. Heart rate variability (HRV) consists of changes in the time intervals between consecutive heartbeats called interbeat intervals (IBIs). A healthy heart is not a metronome. The oscillations of a healthy heart are complex and constantly changing, which allow the cardiovascular system to rapidly adjust to sudden physical and psychological challenges to homeostasis. This article briefly reviews current perspectives on the mechanisms that generate 24 h, short-term (~5 min), and ultra-short-term (<5 min) HRV, the importance of HRV, and its implications for health and performance. The authors provide an overview of widely-used HRV time-domain, frequency-domain, and non-linear metrics. Time-domain indices quantify the amount of HRV observed during monitoring periods that may range from ~2 min to 24 h. Frequency-domain values calculate the absolute or relative amount of signal energy within component bands. Non-linear measurements quantify the unpredictability and complexity of a series of IBIs. The authors survey published normative values for clinical, healthy, and optimal performance populations. They stress the importance of measurement context, including recording period length, subject age, and sex, on baseline HRV values. They caution that 24 h, short-term, and ultra-short-term normative values are not interchangeable. They encourage professionals to supplement published norms with findings from their own specialized populations. Finally, the authors provide an overview of HRV assessment strategies for clinical and optimal performance interventions.", "title": "" }, { "docid": "f79e3a38e1120f5c3e9d9113bcb1f847", "text": "Classical numerical methods for solving partial differential equations suffer from the curse dimensionality mainly due to their reliance on meticulously generated spatio-temporal grids. 
Inspired by modern deep learning based techniques for solving forward and inverse problems associated with partial differential equations, we circumvent the tyranny of numerical discretization by devising an algorithm that is scalable to high-dimensions. In particular, we approximate the unknown solution by a deep neural network which essentially enables us to benefit from the merits of automatic differentiation. To train the aforementioned neural network we leverage the well-known connection between high-dimensional partial differential equations and forwardbackward stochastic differential equations. In fact, independent realizations of a standard Brownian motion will act as training data. We test the effectiveness of our approach for a couple of benchmark problems spanning a number of scientific domains including Black-Scholes-Barenblatt and HamiltonJacobi-Bellman equations, both in 100-dimensions.", "title": "" }, { "docid": "81aa60b514bb11efb9e137b8d13b92e8", "text": "Linguistic creativity is a marriage of form and content in which each works together to convey our meanings with concision, resonance and wit. Though form clearly influences and shapes our content, the most deft formal trickery cannot compensate for a lack of real insight. Before computers can be truly creative with language, we must first imbue them with the ability to formulate meanings that are worthy of creative expression. This is especially true of computer-generated poetry. If readers are to recognize a poetic turn-of-phrase as more than a superficial manipulation of words, they must perceive and connect with the meanings and the intent behind the words. So it is not enough for a computer to merely generate poem-shaped texts; poems must be driven by conceits that build an affective worldview. This paper describes a conceit-driven approach to computational poetry, in which metaphors and blends are generated for a given topic and affective slant. Subtle inferences drawn from these metaphors and blends can then drive the process of poetry generation. In the same vein, we consider the problem of generating witty insights from the banal truisms of common-sense knowledge bases. Ode to a Keatsian Turn Poetic licence is much more than a licence to frill. Indeed, it is not so much a licence as a contract, one that allows a speaker to subvert the norms of both language and nature in exchange for communicating real insights about some relevant state of affairs. Of course, poetry has norms and conventions of its own, and these lend poems a range of recognizably “poetic” formal characteristics. When used effectively, formal devices such as alliteration, rhyme and cadence can mold our meanings into resonant and incisive forms. However, even the most poetic devices are just empty frills when used only to disguise the absence of real insight. Computer models of poem generation must model more than the frills of poetry, and must instead make these formal devices serve the larger goal of meaning creation. Nonetheless, is often said that we “eat with our eyes”, so that the stylish presentation of food can subtly influence our sense of taste. So it is with poetry: a pleasing form can do more than enhance our recall and comprehension of a meaning – it can also suggest a lasting and profound truth. 
Experiments by McGlone & Tofighbakhsh (1999, 2000) lend empirical support to this so-called Keats heuristic, the intuitive belief – named for Keats’ memorable line “Beauty is truth, truth beauty” – that a meaning which is rendered in an aesthetically-pleasing form is much more likely to be perceived as truthful than if it is rendered in a less poetic form. McGlone & Tofighbakhsh demonstrated this effect by searching a book of proverbs for uncommon aphorisms with internal rhyme – such as “woes unite foes” – and by using synonym substitution to generate non-rhyming (and thus less poetic) variants such as “troubles unite enemies”. While no significant differences were observed in subjects’ ease of comprehension for rhyming/non-rhyming forms, subjects did show a marked tendency to view the rhyming variants as more truthful expressions of the human condition than the corresponding non-rhyming forms. So a well-polished poetic form can lend even a modestly interesting observation the lustre of a profound insight. An automated approach to poetry generation can exploit this symbiosis of form and content in a number of useful ways. It might harvest interesting perspectives on a given topic from a text corpus, or it might search its stores of commonsense knowledge for modest insights to render in immodest poetic forms. We describe here a system that combines both of these approaches for meaningful poetry generation. As shown in the sections to follow, this system – named Stereotrope – uses corpus analysis to generate affective metaphors for a topic on which it is asked to wax poetic. Stereotrope can be asked to view a topic from a particular affective stance (e.g., view love negatively) or to elaborate on a familiar metaphor (e.g. love is a prison). In doing so, Stereotrope takes account of the feelings that different metaphors are likely to engender in an audience. These metaphors are further integrated to yield tight conceptual blends, which may in turn highlight emergent nuances of a viewpoint that are worthy of poetic expression (see Lakoff and Turner, 1989). Stereotrope uses a knowledge-base of conceptual norms to anchor its understanding of these metaphors and blends. While these norms are the stuff of banal clichés and stereotypes, such as that dogs chase cats and cops eat donuts. we also show how Stereotrope finds and exploits corpus evidence to recast these banalities as witty, incisive and poetic insights. Mutual Knowledge: Norms and Stereotypes Samuel Johnson opined that “Knowledge is of two kinds. We know a subject ourselves, or we know where we can find information upon it.” Traditional approaches to the modelling of metaphor and other figurative devices have typically sought to imbue computers with the former (Fass, 1997). More recently, however, the latter kind has gained traction, with the use of the Web and text corpora to source large amounts of shallow knowledge as it is needed (e.g., Veale & Hao 2007a,b; Shutova 2010; Veale & Li, 2011). But the kind of knowledge demanded by knowledgehungry phenomena such as metaphor and blending is very different to the specialist “book” knowledge so beloved of Johnson. These demand knowledge of the quotidian world that we all tacitly share but rarely articulate in words, not even in the thoughtful definitions of Johnson’s dictionary. Similes open a rare window onto our shared expectations of the world. 
Thus, the as-as-similes “as hot as an oven”, “as dry as sand” and “as tough as leather” illuminate the expected properties of these objects, while the like-similes “crying like a baby”, “singing like an angel” and “swearing like a sailor” reflect intuitons of how these familiar entities are tacitly expected to behave. Veale & Hao (2007a,b) thus harvest large numbers of as-as-similes from the Web to build a rich stereotypical model of familiar ideas and their salient properties, while Özbal & Stock (2012) apply a similar approach on a smaller scale using Google’s query completion service. Fishelov (1992) argues convincingly that poetic and non-poetic similes are crafted from the same words and ideas. Poetic conceits use familiar ideas in non-obvious combinations, often with the aim of creating semantic tension. The simile-based model used here thus harvests almost 10,000 familiar stereotypes (drawing on a range of ~8,000 features) from both as-as and like-similes. Poems construct affective conceits, but as shown in Veale (2012b), the features of a stereotype can be affectively partitioned as needed into distinct pleasant and unpleasant perspectives. We are thus confident that a stereotype-based model of common-sense knowledge is equal to the task of generating and elaborating affective conceits for a poem. A stereotype-based model of common-sense knowledge requires both features and relations, with the latter showing how stereotypes relate to each other. It is not enough then to know that cops are tough and gritty, or that donuts are sweet and soft; our stereotypes of each should include the cliché that cops eat donuts, just as dogs chew bones and cats cough up furballs. Following Veale & Li (2011), we acquire inter-stereotype relationships from the Web, not by mining similes but by mining questions. As in Özbal & Stock (2012), we target query completions from a popular search service (Google), which offers a smaller, public proxy for a larger, zealously-guarded search query log. We harvest questions of the form “Why do Xs <relation> Ys”, and assume that since each relationship is presupposed by the question (so “why do bikers wear leathers” presupposes that everyone knows that bikers wear leathers), the triple of subject/relation/object captures a widely-held norm. In this way we harvest over 40,000 such norms from the Web. Generating Metaphors, N-Gram Style! The Google n-grams (Brants & Franz, 2006) is a rich source of popular metaphors of the form Target is Source, such as “politicians are crooks”, “Apple is a cult”, “racism is a disease” and “Steve Jobs is a god”. Let src(T) denote the set of stereotypes that are commonly used to describe a topic T, where commonality is defined as the presence of the corresponding metaphor in the Google n-grams. To find metaphors for proper-named entities, we also analyse n-grams of the form stereotype First [Middle] Last, such as “tyrant Adolf Hitler” and “boss Bill Gates”. 
Thus, e.g.: src(racism) = {problem, disease, joke, sin, poison, crime, ideology, weapon} src(Hitler) = {monster, criminal, tyrant, idiot, madman, vegetarian, racist, ...} Let typical(T) denote the set of properties and behaviors harvested for T from Web similes (see previous section), and let srcTypical(T) denote the aggregate set of properties and behaviors ascribable to T via the metaphors in src(T): (1) srcTypical (T) = M∈src(T) typical(M) We can generate conceits for a topic T by considering not just obvious metaphors for T, but metaphors of metaphors: (2) conceits(T) = src(T) ∪ M∈src(T) src(M) The features evoked by the conceit T as M are given by: (3) salient (T,M) = [srcTypical(T) ∪ typical(T)]", "title": "" }, { "docid": "fb15647d528df8b8613376066d9f5e68", "text": "This article described the feature extraction methods of crop disease based on computer image processing technology in detail. Based on color, texture and shape feature extraction method in three aspects features and their respective problems were introduced start from the perspective of lesion leaves. Application research of image feature extraction in the filed of crop disease was reviewed in recent years. The results were analyzed that about feature extraction methods, and then the application of image feature extraction techniques in the future detection of crop diseases in the field of intelligent was prospected.", "title": "" }, { "docid": "8e5d286259c3b74b295e5bc1d867a5b2", "text": "We present an approach to multilingual grammar induction that exploits a phylogeny-structured model of parameter drift. Our method does not require any translated texts or token-level alignments. Instead, the phylogenetic prior couples languages at a parameter level. Joint induction in the multilingual model substantially outperforms independent learning, with larger gains both from more articulated phylogenies and as well as from increasing numbers of languages. Across eight languages, the multilingual approach gives error reductions over the standard monolingual DMV averaging 21.1% and reaching as high as 39%.", "title": "" }, { "docid": "eee0bc6ee06dce38efbc89659771f720", "text": "In a data center, an IO from an application to distributed storage traverses not only the network, but also several software stages with diverse functionality. This set of ordered stages is known as the storage or IO stack. Stages include caches, hypervisors, IO schedulers, file systems, and device drivers. Indeed, in a typical data center, the number of these stages is often larger than the number of network hops to the destination. Yet, while packet routing is fundamental to networks, no notion of IO routing exists on the storage stack. The path of an IO to an endpoint is predetermined and hard-coded. This forces IO with different needs (e.g., requiring different caching or replica selection) to flow through a one-size-fits-all IO stack structure, resulting in an ossified IO stack. This paper proposes sRoute, an architecture that provides a routing abstraction for the storage stack. sRoute comprises a centralized control plane and “sSwitches” on the data plane. The control plane sets the forwarding rules in each sSwitch to route IO requests at runtime based on application-specific policies. A key strength of our architecture is that it works with unmodified applications and VMs. 
This paper shows significant benefits of customized IO routing to data center tenants (e.g., a factor of ten for tail IO latency, more than 60% better throughput for a customized replication protocol and a factor of two in throughput for customized caching).", "title": "" }, { "docid": "5116079b69aeb1858177429fabd10f80", "text": "Deep convolutional neural networks (CNN) have shown their promise as a universal representation for recognition. However, global CNN activations at present lack geometric invariance, which limits their robustness for tasks such as classification and matching of highly variable scenes. To improve the invariance of CNN activations without degrading their discriminative power, this paper presents a simple but effective scheme called multi-scale orderless pooling (or MOP-CNN for short). This approach works by extracting CNN activations for local patches at multiple scales, followed by orderless VLAD pooling of these activations at each scale level and concatenating the result. This feature representation decisively outperforms global CNN activations and achieves state-of-the-art performance for scene classification on such challenging benchmarks as SUN397, MIT Indoor Scenes, and ILSVRC2012, as well as for instance-level retrieval on the Holidays dataset.", "title": "" }, { "docid": "72cd858344bb5e0a878dd05fc8d07044", "text": "This paper surveys quantum learning theory: the theoretical aspects of machine learning using quantum computers. We describe the main results known for three models of learning: exact learning frommembership queries, and Probably Approximately Correct (PAC) and agnostic learning from classical or quantum examples.", "title": "" }, { "docid": "579c8fffc3a3de878beb7319b01c2a4e", "text": "This paper introduces AVSWAT, a GIS based hydrological system linking the Soil and Water Assessment Tool (SWAT) water quality model and ArcView Geographic Information System software. The ?main purpose of AVSWAT is the combined assessment of nonpoint and point pollUtion loading at the watershed scale. The GIS component of the system, in addition to the traditional functions of data acquisition, storage, organization and display, implements advanced analytical methods with enhanced flexibility to improve the hydrological characterization of a study watershed. Intuitive user friendly graphic interfaces, also part of the GIS component, have been developed to provide an efficient interaction with the model and the associated parameter databases, and ultimately to simplify water quality assessments, while maintaining and increasing their reliability. This is also supported by SWAT, the core of the system, a complex, conceptual, hydrologic, continuous model with spatially explicit parameterization, building upon the United State Department of Agriculture (USDA) modeling experience. A step-by-step example application for a watershed in Central Texas is also included to verify the capability and illustrate some of the characteristics of the system which has been adopted by many users around the world. Address for correspondence: Mauro Di Luzio, Texas Agricultural Experiment Station, Blackland Research Center, Texas A&M University System, 720 East Blackland Road, Temple, TX 76502, USA. E-mail: diluzio@brc.tamus.edu © Blackwell Publishing Ltd. 2004. 9600 Garsington Road, Oxford 0X4 2DQ, UK and 350 Main Street, Maiden, MA 02148, USA. 
114 M Di Luzio, R Srinivasan and I C Arnold", "title": "" }, { "docid": "cb66a49205c9914be88a7631ecc6c52a", "text": "BACKGROUND\nMidline facial clefts are rare and challenging deformities caused by failure of fusion of the medial nasal prominences. These anomalies vary in severity, and may include microform lines or midline lip notching, incomplete or complete labial clefting, nasal bifidity, or severe craniofacial bony and soft tissue anomalies with orbital hypertelorism and frontoethmoidal encephaloceles. In this study, the authors present 4 cases, classify the spectrum of midline cleft anomalies, and review our technical approaches to the surgical correction of midline cleft lip and bifid nasal deformities. Embryology and associated anomalies are discussed.\n\n\nMETHODS\nThe authors retrospectively reviewed our experience with 4 cases of midline cleft lip with and without nasal deformities of varied complexity. In addition, a comprehensive literature search was performed, identifying studies published relating to midline cleft lip and/or bifid nose deformities. Our assessment of the anomalies in our series, in conjunction with published reports, was used to establish a 5-tiered classification system. Technical approaches and clinical reports are described.\n\n\nRESULTS\nFunctional and aesthetic anatomic correction was successfully achieved in each case without complication. A classification and treatment strategy for the treatment of midline cleft lip and bifid nose deformity is presented.\n\n\nCONCLUSIONS\nThe successful treatment of midline cleft lip and bifid nose deformities first requires the identification and classification of the wide variety of anomalies. With exposure of abnormal nasolabial anatomy, the excision of redundant skin and soft tissue, anatomic approximation of cartilaginous elements, orbicularis oris muscle repair, and craniofacial osteotomy and reduction as indicated, a single-stage correction of midline cleft lip and bifid nasal deformity can be safely and effectively achieved.", "title": "" }, { "docid": "216f97a97d240456d36ec765fd45739e", "text": "This paper explores the growing trend of using mobile technology in university classrooms, exploring the use of tablets in particular, to identify learning benefits faced by students. Students, acting on their efficacy beliefs, make decisions regarding technology’s influence in improving their education. We construct a theoretical model in which internal and external factors affect a student’s self-efficacy which in turn affects the extent of adoption of a device for educational purposes. Through qualitative survey responses of university students who were given an Apple iPad to keep for the duration of a university course we find high levels of self-efficacy leading to positive views of the technology’s learning enhancement capabilities. Student observations on the practicality of the technology, off-topic use and its effects, communication, content, and perceived market advantage of using a tablet are also explored.", "title": "" }, { "docid": "6bbbddca9ba258afb25d6e8af9bfec82", "text": "With the ever increasing popularity of electronic commerce, the evaluation of antecedents and of customer satisfaction have become very important for the cyber shopping store (CSS) and for researchers. The various models of customer satisfaction that researchers have provided so far are mostly based on the traditional business channels and thus may not be appropriate for CSSs. 
This research has employed case and survey methods to study the antecedents of customer satisfaction. Through case methods, a research model with hypotheses is developed, and through survey methods, the relationships between antecedents and satisfaction are further examined and analyzed. We find five antecedents of customer satisfaction to be more appropriate for online shopping on the Internet. Among them, homepage presentation is a new and unique antecedent which has not existed in traditional marketing.", "title": "" } ]
scidocsrr
2a9d99e81c06a751cb76e5d22677eca8
CloudMAC — An OpenFlow based architecture for 802.11 MAC layer processing in the cloud
[ { "docid": "83355e7d2db67e42ec86f81909cfe8c1", "text": "Several protocols for routing and forwarding in Wireless Mesh Networks (WMN) have been proposed, such as AODV, OLSR or B.A.T.M.A.N. However, providing support for e.g. flow-based routing where flows of one source take different paths through the network is hard to implement in a unified way using traditional routing protocols. OpenFlow is an emerging technology which makes network elements such as routers or switches programmable via a standardized interface. By using virtualization and flow-based routing, OpenFlow enables a rapid deployment of novel packet forwarding and routing algorithms, focusing on fixed networks. We propose an architecture that integrates OpenFlow with WMNs and provides such flow-based routing and forwarding capabilities. To demonstrate the feasibility of our OpenFlow based approach, we have implemented a simple solution to solve the problem of client mobility in a WMN which handles the fast migration of client addresses (e.g. IP addresses) between Mesh Access Points and the interaction with re-routing without the need for tunneling. Measurements from a real mesh testbed (KAUMesh) demonstrate the feasibility of our approach based on the evaluation of forwarding performance, control traffic and rule activation time.", "title": "" } ]
[ { "docid": "1ab9bfcb356b394a3e9441a75668bc07", "text": "User Generated Content (UGC) is a rapidly emerging growth engine of many Internet businesses and an important component of the new knowledge society. However, little research has been done on the mechanisms inherent to UGC. This research explores the relationships among the quality, value, and benefits of UGC. The main objective is to identify and evaluate the quality factors that affect UGC value, which ultimately influences the utility of UGC. We identify the three quality dimensions of UGC: content, design, and technology. We classify UGC value into three categories: functional value, emotional value, and social value. We attempt to characterize the mechanism underlying UGC value by evaluating the relationships between the quality and value of UGC and investigating what types of UGC value affect UGC utility. Our results show that all three factors of UGC quality are strongly associated with increases in the functional, emotional, and social values of UGC. Our findings also demonstrate that the functional and emotional values of UGC are critically important factors for UGC utility. Based on these findings, we discuss theoretical implications for future research and practical implications for UGC services.", "title": "" }, { "docid": "858acbd02250ff2f8325786475b4f3f3", "text": "One of the most important aspects of Grice’s theory of conversation is the drawing of a borderline between what is said and what is implicated. Grice’s views concerning this borderline have been strongly and influentially criticised by relevance theorists. In particular, it has become increasingly widely accepted that Grice’s notion of what is said is too limited, and that pragmatics has a far larger role to play in determining what is said than Grice would have allowed. (See for example Bezuidenhuit 1996; Blakemore 1987; Carston 1991; Recanati 1991, 1993, 2001; Sperber and Wilson 1986; Wilson and Sperber 1981.) In this paper, I argue that the rejection of Grice has moved too swiftly, as a key line of objection which has led to this rejection is flawed. The flaw, we will see, is that relevance theorists rely on a misunderstanding of Grice’s project in his theory of conversation. I am not arguing that Grice’s versions of saying and implicating are right in all details, but simply that certain widespread reasons for rejecting his theory are based on misconceptions.1 Relevance theorists, I will suggest, systematically misunderstand Grice by taking him to be engaged in the same project that they are: making sense of the psychological processes by which we interpret utterances. Notions involved with this project will need to be ones that are relevant to the psychology of utterance interpretation. Thus, it is only reasonable that relevance theorists will require that what is said and what is implicated should be psychologically real to the audience. (We will see that this requirement plays a crucial role in their arguments against Grice.) Grice, I will argue, was not pursuing this project. Rather, I will suggest that he was trying to make sense of quite a different notion of what is said: one on which both speaker and audience may be wrong about what is said. On this sort of notion, psychological reality is not a requirement. 
So objections to Grice based on a requirement of psychological reality will fail.", "title": "" }, { "docid": "d0a765968e7cc4cf8099f66e0c3267da", "text": "We explore the lattice sphere packing representation of a multi-antenna system and the algebraic space-time (ST) codes. We apply the sphere decoding (SD) algorithm to the resulted lattice code. For the uncoded system, SD yields, with small increase in complexity, a huge improvement over the well-known V-BLAST detection algorithm. SD of algebraic ST codes exploits the full diversity of the coded multi-antenna system, and makes the proposed scheme very appealing to take advantage of the richness of the multi-antenna environment. The fact that the SD does not depend on the constellation size, gives rise to systems with very high spectral efficiency, maximum-likelihood performance, and low decoding complexity.", "title": "" }, { "docid": "e0aac76af8e600afba35a97d88a60da1", "text": "We present a new algorithm for merging occupancy grid maps produced by multiple robots exploring the same environment. The algorithm produces a set of possible transformations needed to merge two maps, i.e translations and rotations. Each transformation is weighted, thus allowing to distinguish uncertain situations, and enabling to track multiple cases when ambiguities arise. Transformations are produced extracting some spectral information from the maps. The approach is deterministic, non-iterative, and fast. The algorithm has been tested on public available datasets, as well as on maps produced by two robots concurrently exploring both indoor and outdoor environments. Throughout the experimental validation stage the technique we propose consistently merged maps exhibiting very different characteristics.", "title": "" }, { "docid": "3503074668bd55868f86a99a8a171073", "text": "Deep Neural Networks (DNNs) provide state-of-the-art solutions in several difficult machine perceptual tasks. However, their performance relies on the availability of a large set of labeled training data, which limits the breadth of their applicability. Hence, there is a need for new semisupervised learning methods for DNNs that can leverage both (a small amount of) labeled and unlabeled training data. In this paper, we develop a general loss function enabling DNNs of any topology to be trained in a semi-supervised manner without extra hyper-parameters. As opposed to current semi-supervised techniques based on topology-specific or unstable approaches, ours is both robust and general. We demonstrate that our approach reaches state-of-the-art performance on the SVHN (9.82% test error, with 500 labels and wide Resnet) and CIFAR10 (16.38% test error, with 8000 labels and sigmoid convolutional neural network) data sets.", "title": "" }, { "docid": "ce1d25b3d2e32f903ce29470514abcce", "text": "We present a method to generate a robot control strategy that maximizes the probability to accomplish a task. The task is given as a Linear Temporal Logic (LTL) formula over a set of properties that can be satisfied at the regions of a partitioned environment. We assume that the probabilities with which the properties are satisfied at the regions are known, and the robot can determine the truth value of a proposition only at the current region. Motivated by several results on partitioned-based abstractions, we assume that the motion is performed on a graph. To account for noisy sensors and actuators, we assume that a control action enables several transitions with known probabilities. 
We show that this problem can be reduced to the problem of generating a control policy for a Markov Decision Process (MDP) such that the probability of satisfying an LTL formula over its states is maximized. We provide a complete solution for the latter problem that builds on existing results from probabilistic model checking. We include an illustrative case study.", "title": "" }, { "docid": "e63eac157bd750ca39370fd5b9fdf85e", "text": "Allometric scaling relations, including the 3/4 power law for metabolic rates, are characteristic of all organisms and are here derived from a general model that describes how essential materials are transported through space-filling fractal networks of branching tubes. The model assumes that the energy dissipated is minimized and that the terminal tubes do not vary with body size. It provides a complete analysis of scaling relations for mammalian circulatory systems that are in agreement with data. More generally, the model predicts structural and functional properties of vertebrate cardiovascular and respiratory systems, plant vascular systems, insect tracheal tubes, and other distribution networks.", "title": "" }, { "docid": "52b1adf3b7b6bf08651c140d726143c3", "text": "The antifungal potential of aqueous leaf and fruit extracts of Capsicum frutescens against four major fungal strains associated with groundnut storage was evaluated. These seed-borne fungi, namely Aspergillus flavus, A. niger, Penicillium sp. and Rhizopus sp. were isolated by standard agar plate method and identified by macroscopic and microscopic features. The minimum inhibitory concentrations (MIC) and minimum fungicidal concentration (MFC) of C. frutescens extracts were determined. MIC values of the fruit extract were lower compared to the leaf extract. At MIC, leaf extract showed strong activity against A. flavus (88.06%), while fruit extract against A. niger (88.33%) in the well diffusion method. Groundnut seeds treated with C.frutescens fruit extract (10mg/ml) showed a higher rate of fungal inhibition. The present results suggest that groundnuts treated with C. frutescens fruit extracts are capable of preventing fungal infection to a certain extent.", "title": "" }, { "docid": "0d0f9576ba5ccc442f531d4222bb1a12", "text": "This tutorial introduces fingerprint recognition systems and their main components: sensing, feature extraction and matching. The basic technologies are surveyed and some state-of-the-art algorithms are discussed. Due to the extent of this topic it is not possible to provide here all the details and to cover a number of interesting issues such as classification, indexing and multimodal systems. Interested readers can find in [21] a complete and comprehensive guide to fingerprint recognition.", "title": "" }, { "docid": "fa52d586e7e6c92444845881ab1990cf", "text": "This paper proposes a novel rotor contour design for variable reluctance (VR) resolvers by injecting auxiliary air-gap permeance harmonics. Based on the resolver model with nonoverlapping tooth-coil windings, the influence of air-gap length function is first investigated by finite element (FE) method, and the detection accuracy of designs with higher values of fundamental wave factor may deteriorate due to the increasing third order of output voltage harmonics. Further, the origins of the third harmonics are investigated by analytical derivation and FE analyses of output voltages. 
Furthermore, it is proved that the voltage harmonics and the detection accuracy are significantly improved by injecting auxiliary air-gap permeance harmonics in the design of rotor contour. In addition, the proposed design can also be employed to eliminate voltage tooth harmonics in a conventional VR resolver topology. Finally, VR resolver prototypes with the conventional and the proposed rotors are fabricated and tested respectively to verify the analyses.", "title": "" }, { "docid": "ef09bc08cc8e94275e652e818a0af97f", "text": "The biosynthetic pathway of L-tartaric acid, the form most commonly encountered in nature, and its catabolic ties to vitamin C, remain a challenge to plant scientists. Vitamin C and L-tartaric acid are plant-derived metabolites with intrinsic human value. In contrast to most fruits during development, grapes accumulate L-tartaric acid, which remains within the berry throughout ripening. Berry taste and the organoleptic properties and aging potential of wines are intimately linked to levels of L-tartaric acid present in the fruit, and those added during vinification. Elucidation of the reactions relating L-tartaric acid to vitamin C catabolism in the Vitaceae showed that they proceed via the oxidation of L-idonic acid, the proposed rate-limiting step in the pathway. Here we report the use of transcript and metabolite profiling to identify candidate cDNAs from genes expressed at developmental times and in tissues appropriate for L-tartaric acid biosynthesis in grape berries. Enzymological analyses of one candidate confirmed its activity in the proposed rate-limiting step of the direct pathway from vitamin C to tartaric acid in higher plants. Surveying organic acid content in Vitis and related genera, we have identified a non-tartrate-forming species in which this gene is deleted. This species accumulates in excess of three times the levels of vitamin C than comparably ripe berries of tartrate-accumulating species, suggesting that modulation of tartaric acid biosynthesis may provide a rational basis for the production of grapes rich in vitamin C.", "title": "" }, { "docid": "d079bba6c4490bf00eb73541ebba8ace", "text": "The literature on Design Science (or Design Research) has been mixed on the inclusion, form, and role of theory and theorising in Design Science. Some authors have explicitly excluded theory development and testing from Design Science, leaving them to the Natural and Social/Behavioural Sciences. Others propose including theory development and testing as part of Design Science. Others propose some ideas for the content of IS Design Theories, although more detailed and clear concepts would be helpful. This paper discusses the need and role for theory in Design Science. It further proposes some ideas for standards for the form and level of detail needed for theories in Design Science. Finally it develops a framework of activities for the interaction of Design Science with research in other scientific paradigms.", "title": "" }, { "docid": "039055a2fa9292031abc8db50819eb35", "text": "Boosting is a technique of combining a set weak classifiers to form one high-performance prediction rule. Boosting was successfully applied to solve the problems of object detection, text analysis, data mining and etc. The most and widely used boosting algorithm is AdaBoost and its later more effective variations Gentle and Real AdaBoost. 
In this article we propose a new boosting algorithm, which produces less generalization error compared to mentioned algorithms at the cost of somewhat higher training error.", "title": "" }, { "docid": "e4375a896bb4fb9d437ee68e3c2bf2c1", "text": "Executive Overview This paper describes the findings from a new, and intrinsically interdisciplinary, literature on happiness and well-being. The paper focuses on international evidence. We report the patterns in modern data, discuss what has been persuasively established and what has not, and suggest paths for future research. Looking ahead, our instinct is that this social science research avenue will gradually merge with a related literature—from the medical, epidemiological, and biological sciences—on biomarkers and health. Nevertheless, we expect that intellectual convergence to happen slowly.", "title": "" }, { "docid": "cfbd49b3d76942631639d00d7ee736d6", "text": "The online implementation of traditional business mechanisms raises many new issues not considered in classical economic models. This partially explains why online auctions have become the most successful but also the most controversial Internet businesses in the recent years. One emerging issue is that the lack of authentication over the Internet has encouraged shill bidding, the deliberate placing of bids on the seller’s behalf to artificially drive up the price of the seller’s auctioned item. Private-value English auctions with shill bidding can result in a higher expected seller profit than other auction formats [1], violating the classical revenue equivalence theory. This paper analyzes shill bidding in multi-round online English auctions and proves that there is no equilibrium without shill bidding. Taking into account the seller’s shills and relistings, bidders with valuations even higher than the reserve will either wait for the next round or shield their bids in the current round. Hence, it is inevitable to redesign online auctions to deal with the “shiller’s curse.”", "title": "" }, { "docid": "28370dc894584f053a5bb029142ad587", "text": "Pharmaceutical parallel trade in the European Union is a large and growing phenomenon, and hope has been expressed that it has the potential to reduce prices paid by health insurance and consumers and substantially to raise overall welfare. In this paper we examine the phenomenon empirically, using data on prices and volumes of individual imported products. We have found that the gains from parallel trade accrue mostly to the distribution chain rather than to health insurance and consumers. This is because in destination countries parallel traded drugs are priced just below originally sourced drugs. We also test to see whether parallel trade has a competition impact on prices in destination countries and find that it does not. Such competition effects as there are in pharmaceuticals come mainly from the presence of generics. Accordingly, instead of a convergence to the bottom in EU pharmaceutical prices, the evidence points at ‘convergence to the top’. This is explained by the fact that drug prices are subjected to regulation in individual countries, and by the limited incentives of purchasers to respond to price differentials.", "title": "" }, { "docid": "78c6ec58cec2607d5111ee415d683525", "text": "Forty-three normal hearing participants were tested in two experiments, which focused on temporal coincidence in auditory visual (AV) speech perception. 
In these experiments, audio recordings of/pa/and/ba/were dubbed onto video recordings of /ba/or/ga/, respectively (ApVk, AbVg), to produce the illusory \"fusion\" percepts /ta/, or /da/ [McGurk, H., & McDonald, J. (1976). Hearing lips and seeing voices. Nature, 264, 746-747]. In Experiment 1, an identification task using McGurk pairs with asynchronies ranging from -467 ms (auditory lead) to +467 ms was conducted. Fusion responses were prevalent over temporal asynchronies from -30 ms to +170 ms and more robust for audio lags. In Experiment 2, simultaneity judgments for incongruent and congruent audiovisual tokens (AdVd, AtVt) were collected. McGurk pairs were more readily judged as asynchronous than congruent pairs. Characteristics of the temporal window over which simultaneity and fusion responses were maximal were quite similar, suggesting the existence of a 200 ms duration asymmetric bimodal temporal integration window.", "title": "" }, { "docid": "1a7dd0fb317a9640ee6e90036d6036fa", "text": "A genome-wide association study was performed to identify genetic factors involved in susceptibility to psoriasis (PS) and psoriatic arthritis (PSA), inflammatory diseases of the skin and joints in humans. 223 PS cases (including 91 with PSA) were genotyped with 311,398 single nucleotide polymorphisms (SNPs), and results were compared with those from 519 Northern European controls. Replications were performed with an independent cohort of 577 PS cases and 737 controls from the U.S., and 576 PSA patients and 480 controls from the U.K.. Strongest associations were with the class I region of the major histocompatibility complex (MHC). The most highly associated SNP was rs10484554, which lies 34.7 kb upstream from HLA-C (P = 7.8x10(-11), GWA scan; P = 1.8x10(-30), replication; P = 1.8x10(-39), combined; U.K. PSA: P = 6.9x10(-11)). However, rs2395029 encoding the G2V polymorphism within the class I gene HCP5 (combined P = 2.13x10(-26) in U.S. cases) yielded the highest ORs with both PS and PSA (4.1 and 3.2 respectively). This variant is associated with low viral set point following HIV infection and its effect is independent of rs10484554. We replicated the previously reported association with interleukin 23 receptor and interleukin 12B (IL12B) polymorphisms in PS and PSA cohorts (IL23R: rs11209026, U.S. PS, P = 1.4x10(-4); U.K. PSA: P = 8.0x10(-4); IL12B:rs6887695, U.S. PS, P = 5x10(-5) and U.K. PSA, P = 1.3x10(-3)) and detected an independent association in the IL23R region with a SNP 4 kb upstream from IL12RB2 (P = 0.001). Novel associations replicated in the U.S. PS cohort included the region harboring lipoma HMGIC fusion partner (LHFP) and conserved oligomeric golgi complex component 6 (COG6) genes on chromosome 13q13 (combined P = 2x10(-6) for rs7993214; OR = 0.71), the late cornified envelope gene cluster (LCE) from the Epidermal Differentiation Complex (PSORS4) (combined P = 6.2x10(-5) for rs6701216; OR 1.45) and a region of LD at 15q21 (combined P = 2.9x10(-5) for rs3803369; OR = 1.43). This region is of interest because it harbors ubiquitin-specific protease-8 whose processed pseudogene lies upstream from HLA-C. This region of 15q21 also harbors the gene for SPPL2A (signal peptide peptidase like 2a) which activates tumor necrosis factor alpha by cleavage, triggering the expression of IL12 in human dendritic cells. We also identified a novel PSA (and potentially PS) locus on chromosome 4q27. 
This region harbors the interleukin 2 (IL2) and interleukin 21 (IL21) genes and was recently shown to be associated with four autoimmune diseases (Celiac disease, Type 1 diabetes, Graves' disease and Rheumatoid Arthritis).", "title": "" }, { "docid": "9d803b0ce1f1af621466b1d7f97b7edf", "text": "This research paper addresses the methodology and approaches to managing criminal computer forensic investigations in a law enforcement environment with management controls, operational controls, and technical controls. Management controls cover policy and standard operating procedures (SOPs), methodology, and guidance. Operational controls cover SOP requirements, seizing evidence, evidence handling, best practices, and education, training and awareness. Technical controls cover acquisition and analysis procedures, data integrity, rules of evidence, presenting findings, proficiency testing, and data archiving.", "title": "" } ]
scidocsrr
badd6e36d6833cb2ccd3e2bf595608c7
Understanding User Revisions When Using Information Systems Features: Adaptive System Use and Triggers
[ { "docid": "586d89b6d45fd49f489f7fb40c87eb3a", "text": "Little research has examined the impacts of enterprise resource planning (ERP) systems implementation on job satisfaction. Based on a 12-month study of 2,794 employees in a telecommunications firm, we found that ERP system implementation moderated the relationships between three job characteristics (skill variety, autonomy, and feedback) and job satisfaction. Our findings highlight the key role that ERP system implementation can have in altering wellestablished relationships in the context of technology-enabled organizational change situations. This work also extends research on technology diffusion by moving beyond a focus on technology-centric outcomes, such as system use, to understanding broader job outcomes. Carol Saunders was the accepting senior editor for this paper.", "title": "" } ]
[ { "docid": "d310779b1006f90719a0ece3cf2583b2", "text": "While neural networks have been successfully applied to many natural language processing tasks, they come at the cost of interpretability. In this paper, we propose a general methodology to analyze and interpret decisions from a neural model by observing the effects on the model of erasing various parts of the representation, such as input word-vector dimensions, intermediate hidden units, or input words. We present several approaches to analyzing the effects of such erasure, from computing the relative difference in evaluation metrics, to using reinforcement learning to erase the minimum set of input words in order to flip a neural model’s decision. In a comprehensive analysis of multiple NLP tasks, including linguistic feature classification, sentence-level sentiment analysis, and document level sentiment aspect prediction, we show that the proposed methodology not only offers clear explanations about neural model decisions, but also provides a way to conduct error analysis on neural models.", "title": "" }, { "docid": "7dcba854d1f138ab157a1b24176c2245", "text": "Essential oils distilled from members of the genus Lavandula have been used both cosmetically and therapeutically for centuries with the most commonly used species being L. angustifolia, L. latifolia, L. stoechas and L. x intermedia. Although there is considerable anecdotal information about the biological activity of these oils much of this has not been substantiated by scientific or clinical evidence. Among the claims made for lavender oil are that is it antibacterial, antifungal, carminative (smooth muscle relaxing), sedative, antidepressive and effective for burns and insect bites. In this review we detail the current state of knowledge about the effect of lavender oils on psychological and physiological parameters and its use as an antimicrobial agent. Although the data are still inconclusive and often controversial, there does seem to be both scientific and clinical data that support the traditional uses of lavender. However, methodological and oil identification problems have severely hampered the evaluation of the therapeutic significance of much of the research on Lavandula spp. These issues need to be resolved before we have a true picture of the biological activities of lavender essential oil.", "title": "" }, { "docid": "56d295950edf9503d89d891f7c1b361f", "text": "This paper describes the discipline of distance metric learning, a branch of machine learning that aims to learn distances from the data. Distance metric learning can be useful to improve similarity learning algorithms, and also has applications in dimensionality reduction. We describe the distance metric learning problem and analyze its main mathematical foundations. We discuss some of the most popular distance metric learning techniques used in classification, showing their goals and the required information to understand and use them. Furthermore, we present a Python package that collects a set of 17 distance metric learning techniques explained in this paper, with some experiments to evaluate the performance of the different algorithms. Finally, we discuss several possibilities of future work in this topic.", "title": "" }, { "docid": "d6dadf93c1a51be67f67a7fb8fdb9b68", "text": "Recent advances in quantum computing seem to suggest it is only a matter of time before general quantum computers become a reality. 
Because all widely used cryptographic constructions rely on the hardness of problems that can be solved efficiently using known quantum algorithms, quantum computers will have a profound impact on the field of cryptography. One such construction that will be broken by quantum computers is elliptic curve cryptography, which is used in blockchain applications such as bitcoin for digital signatures. Hash-based signature schemes are a promising post-quantum secure alternative, but existing schemes such as XMSS and SPHINCS are impractical for blockchain applications because of their performance characteristics. We construct a quantum secure signature scheme for use in blockchain technology by combining a hash-based one-time signature scheme with Naor-Yung chaining. By exploiting the structure and properties of a blockchain we achieve smaller signatures and better performance than existing hash-based signature schemes. The proposed scheme supports both one-time and many-time key pairs, and is designed to be easily adopted into existing blockchain implementations.", "title": "" }, { "docid": "5656c77061a3f678172ea01e226ede26", "text": "BACKGROUND\nIn 2010, overweight and obesity were estimated to cause 3·4 million deaths, 3·9% of years of life lost, and 3·8% of disability-adjusted life-years (DALYs) worldwide. The rise in obesity has led to widespread calls for regular monitoring of changes in overweight and obesity prevalence in all populations. Comparable, up-to-date information about levels and trends is essential to quantify population health effects and to prompt decision makers to prioritise action. We estimate the global, regional, and national prevalence of overweight and obesity in children and adults during 1980-2013.\n\n\nMETHODS\nWe systematically identified surveys, reports, and published studies (n=1769) that included data for height and weight, both through physical measurements and self-reports. We used mixed effects linear regression to correct for bias in self-reports. We obtained data for prevalence of obesity and overweight by age, sex, country, and year (n=19,244) with a spatiotemporal Gaussian process regression model to estimate prevalence with 95% uncertainty intervals (UIs).\n\n\nFINDINGS\nWorldwide, the proportion of adults with a body-mass index (BMI) of 25 kg/m(2) or greater increased between 1980 and 2013 from 28·8% (95% UI 28·4-29·3) to 36·9% (36·3-37·4) in men, and from 29·8% (29·3-30·2) to 38·0% (37·5-38·5) in women. Prevalence has increased substantially in children and adolescents in developed countries; 23·8% (22·9-24·7) of boys and 22·6% (21·7-23·6) of girls were overweight or obese in 2013. The prevalence of overweight and obesity has also increased in children and adolescents in developing countries, from 8·1% (7·7-8·6) to 12·9% (12·3-13·5) in 2013 for boys and from 8·4% (8·1-8·8) to 13·4% (13·0-13·9) in girls. In adults, estimated prevalence of obesity exceeded 50% in men in Tonga and in women in Kuwait, Kiribati, Federated States of Micronesia, Libya, Qatar, Tonga, and Samoa. Since 2006, the increase in adult obesity in developed countries has slowed down.\n\n\nINTERPRETATION\nBecause of the established health risks and substantial increases in prevalence, obesity has become a major global health challenge. Not only is obesity increasing, but no national success stories have been reported in the past 33 years. 
Urgent global action and leadership is needed to help countries to more effectively intervene.\n\n\nFUNDING\nBill & Melinda Gates Foundation.", "title": "" }, { "docid": "c8ef89eb90824b3d0f966c6f9b097d0b", "text": "Machine Learning and Inference methods have become ubiquitous in our attempt to induce more abstract representations of natural language text, visual scenes, and other messy, naturally occurring data, and support decisions that depend on it. However, learning models for these tasks is difficult partly because generating the necessary supervision signals for it is costly and does not scale. This paper describes several learning paradigms that are designed to alleviate the supervision bottleneck. It will illustrate their benefit in the context of multiple problems, all pertaining to inducing various levels of semantic representations from text. In particular, we discuss (i) Response Driven Learning of models, a learning protocol that supports inducing meaning representations simply by observing the model’s behavior in its environment, (ii) the exploitation of Incidental Supervision signals that exist in the data, independently of the task at hand, to learn models that identify and classify semantic predicates, and (iii) the use of weak supervision to combine simple models to support global decisions where joint supervision is not available. While these ideas are applicable in a range of Machine Learning driven fields, we will demonstrate it in the context of several natural language applications, from (cross-lingual) text classification, to Wikification, to semantic parsing.", "title": "" }, { "docid": "d3797817bcde1b16d35cc7efbc97953c", "text": "Biological time-keeping mechanisms have fascinated researchers since the movement of leaves with a daily rhythm was first described >270 years ago. The circadian clock confers a approximately 24-hour rhythm on a range of processes including leaf movements and the expression of some genes. Molecular mechanisms and components underlying clock function have been described in recent years for several animal and prokaryotic organisms, and those of plants are beginning to be characterized. The emerging model of the Arabidopsis clock has mechanistic parallels with the clocks of other model organisms, which consist of positive and negative feedback loops, but the molecular components appear to be unique to plants.", "title": "" }, { "docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.", "title": "" }, { "docid": "f92a71e6094000ecf47ebd02bf4e5c4a", "text": "Exploding amounts of multimedia data increasingly require automatic indexing and classification, e.g. training classifiers to produce high-level features, or semantic concepts, chosen to represent image content, like car, person, etc. When changing the applied domain (i.e. from news domain to consumer home videos), the classifiers trained in one domain often perform poorly in the other domain due to changes in feature distributions. Additionally, classifiers trained on the new domain alone may suffer from too few positive training samples. 
Appropriately adapting data/models from an old domain to help classify data in a new domain is an important issue. In this work, we develop a new cross-domain SVM (CDSVM) algorithm for adapting previously learned support vectors from one domain to help classification in another domain. Better precision is obtained with almost no additional computational cost. Also, we give a comprehensive summary and comparative study of the state- of-the-art SVM-based cross-domain learning methods. Evaluation over the latest large-scale TRECVID benchmark data set shows that our CDSVM method can improve mean average precision over 36 concepts by 7.5%. For further performance gain, we also propose an intuitive selection criterion to determine which cross-domain learning method to use for each concept.", "title": "" }, { "docid": "ad6d21a36cc5500e4d8449525eae25ca", "text": "Human Activity Recognition is one of the attractive topics to develop smart interactive environment in which computing systems can understand human activities in natural context. Besides traditional approaches with visual data, inertial sensors in wearable devices provide a promising approach for human activity recognition. In this paper, we propose novel methods to recognize human activities from raw data captured from inertial sensors using convolutional neural networks with either 2D or 3D filters. We also take advantage of hand-crafted features to combine with learned features from Convolution-Pooling blocks to further improve accuracy for activity recognition. Experiments on UCI Human Activity Recognition dataset with six different activities demonstrate that our method can achieve 96.95%, higher than existing methods.", "title": "" }, { "docid": "ad49388ef64fd63e0f318a0097019fe2", "text": "We present an experimental study of IEEE 802.11n (high throughput extension to the 802.11 standard) using commodity wireless hardware. 802.11n introduces a variety of new mechanisms including physical layer diversity techniques, channel bonding and frame aggregation mechanisms. Using measurements from our testbed, we analyze the fundamental characteristics of 802.11n links and quantify the gains of each mechanism under diverse scenarios. We show that the throughput of an 802.11n link can be severely degraded (up ≈85%) in presence of an 802.11g link. Our results also indicate that increased amount of interference due to wider channel bandwidths can lead to throughput degradation. To this end, we characterize the nature of interference due to variable channel widths in 802.11n and show that careful modeling of interference is imperative in such scenarios. Further, as a reappraisal of previous work, we evaluate the effectiveness of MAC level diversity in the presence of physical layer diversity mechanisms introduced by 802.11n.", "title": "" }, { "docid": "2fc024a732681aea0945430894351394", "text": "Despite the increasing popularity of cloud services, ensuring the security and availability of data, resources and services remains an ongoing research challenge. Distributed denial of service (DDoS) attacks are not a new threat, but remain a major security challenge and are a topic of ongoing research interest. Mitigating DDoS attack in cloud presents a new dimension to solutions proffered in traditional computing due to its architecture and features. This paper reviews 96 publications on DDoS attack and defense approaches in cloud computing published between January 2009 and December 2015, and discusses existing research trends. 
A taxonomy and a conceptual cloud DDoS mitigation framework based on change point detection are presented. Future research directions are also outlined.", "title": "" }, { "docid": "728ea68ac1a50ae2d1b280b40c480aec", "text": "This paper presents a new metaprogramming library, CL ARRAY, that offers multiplatform and generic multidimensional data containers for C++ specifically adapted for parallel programming. The CL ARRAY containers are built around a new formalism for representing the multidimensional nature of data as well as the semantics of multidimensional pointers and contiguous data structures. We also present OCL ARRAY VIEW, a concept based on metaprogrammed enveloped objects that supports multidimensional transformations and multidimensional iterators designed to simplify and formalize the interfacing process between OpenCL APIs, standard template library (STL) algorithms and CL ARRAY containers. Our results demonstrate improved performance and energy savings over the three most popular container libraries available to the developer community for use in the context of multi-linear algebraic applications.", "title": "" }, { "docid": "48a476d5100f2783455fabb6aa566eba", "text": "Phylogenies are usually dated by calibrating interior nodes against the fossil record. This relies on indirect methods that, in the worst case, misrepresent the fossil information. Here, we contrast such node dating with an approach that includes fossils along with the extant taxa in a Bayesian total-evidence analysis. As a test case, we focus on the early radiation of the Hymenoptera, mostly documented by poorly preserved impression fossils that are difficult to place phylogenetically. Specifically, we compare node dating using nine calibration points derived from the fossil record with total-evidence dating based on 343 morphological characters scored for 45 fossil (4--20 complete) and 68 extant taxa. In both cases we use molecular data from seven markers (∼5 kb) for the extant taxa. Because it is difficult to model speciation, extinction, sampling, and fossil preservation realistically, we develop a simple uniform prior for clock trees with fossils, and we use relaxed clock models to accommodate rate variation across the tree. Despite considerable uncertainty in the placement of most fossils, we find that they contribute significantly to the estimation of divergence times in the total-evidence analysis. In particular, the posterior distributions on divergence times are less sensitive to prior assumptions and tend to be more precise than in node dating. The total-evidence analysis also shows that four of the seven Hymenoptera calibration points used in node dating are likely to be based on erroneous or doubtful assumptions about the fossil placement. With respect to the early radiation of Hymenoptera, our results suggest that the crown group dates back to the Carboniferous, ∼309 Ma (95% interval: 291--347 Ma), and diversified into major extant lineages much earlier than previously thought, well before the Triassic. [Bayesian inference; fossil dating; morphological evolution; relaxed clock; statistical phylogenetics.].", "title": "" }, { "docid": "16b9d7602e45da0bb47017d1516c95bb", "text": "Intranet is a term used to describe the use of Internet technologies internally within an organization rather than externally to connect to the global Internet. While the advancement and the sophistication of the intranet is progressing tremendously, research on intranet utilization is still very scant. 
This paper is an attempt to provide a conceptual understanding of the intranet utilization and the corresponding antecedents and impacts through the proposed conceptual model. Based on several research frameworks built through past research, the authors attempt to propose a framework for studying intranet utilization that is based on three constructs i.e. mode of utilizations, decision support and knowledge sharing. Three groups of antecedent variables namely intranet, organizational and individual characteristics are explored to determine their possible contribution to intranet utilization. In addition, the impacts of intranet utilization are also examined in terms of task productivity, task innovation and individual sense of accomplishments. Based on the proposed model, several propositions are formulated as a basis for the study that will follow.", "title": "" }, { "docid": "cff32690c2421b2ad94dea33f5e4479d", "text": "Heavy ion single-event effect (SEE) measurements on Xilinx Zynq-7000 are reported. Heavy ion susceptibility to Single-Event latchup (SEL), single event upsets (SEUs) of BRAM, configuration bits of FPGA and on chip memory (OCM) of the processor were investigated.", "title": "" }, { "docid": "418ebc0424128ec1a89d5e5292872124", "text": "Apocyni Veneti Folium (AVF) is a kind of staple traditional Chinese medicine with vast clinical consumption because of its positive effects. However, due to the habitats and adulterants, its quality is uneven. To control the quality of this medicinal herb, in this study, the quality of AVF was evaluated based on simultaneous determination of multiple bioactive constituents combined with multivariate statistical analysis. A reliable method based on ultra-fast liquid chromatography tandem triple quadrupole mass spectrometry (UFLC-QTRAP-MS/MS) was developed for the simultaneous determination of a total of 43 constituents, including 15 flavonoids, 6 organic acids, 13 amino acids, and 9 nucleosides in 41 Luobumaye samples from different habitats and commercial herbs. Furthermore, according to the contents of these 43 constituents, principal component analysis (PCA) was employed to classify and distinguish between AVF and its adulterants, leaves of Poacynum hendersonii (PHF), and gray relational analysis (GRA) was performed to evaluate the quality of the samples. The proposed method was successfully applied to the comprehensive quality evaluation of AVF, and all results demonstrated that the quality of AVF was higher than the PHF. This study will provide comprehensive information necessary for the quality control of AVF.", "title": "" }, { "docid": "46980b89e76bc39bf125f63ed9781628", "text": "In this paper, a design of miniaturized 3-way Bagley polygon power divider (BPD) is presented. The design is based on using non-uniform transmission lines (NTLs) in each arm of the divider instead of the conventional uniform ones. For verification purposes, a 3-way BPD is designed, simulated, fabricated, and measured. Besides suppressing the fundamental frequency's odd harmonics, a size reduction of almost 30% is achieved.", "title": "" }, { "docid": "25eea5205d1f8beaa8c4a857da5714bc", "text": "To backpropagate the gradients through discrete stochastic layers, we encode the true gradients into a multiplication between random noises and the difference of the same function of two different sets of discrete latent variables, which are correlated with these random noises. 
The expectations of that multiplication over iterations are zeros combined with spikes from time to time. To modulate the frequencies, amplitudes, and signs of the spikes to capture the temporal evolution of the true gradients, we propose the augment-REINFORCE-merge (ARM) estimator that combines data augmentation, the score-function estimator, permutation of the indices of latent variables, and variance reduction for Monte Carlo integration using common random numbers. The ARM estimator provides low-variance and unbiased gradient estimates for the parameters of discrete distributions, leading to state-of-the-art performance in both auto-encoding variational Bayes and maximum likelihood inference, for discrete latent variable models with one or multiple discrete stochastic layers.", "title": "" }, { "docid": "81f474cbd140935d93faf47af87a205b", "text": "The availability of food ingredient information in digital form is a major factor in modern information systems related to diet management and health issues. Although ingredient information is printed on food product labels, corresponding digital data is rarely available for the public. In this demo, we present the Mobile Food Information Scanner (MoFIS), a mobile user interface designed to enable users to semi-automatically extract ingredient lists from food product packaging.", "title": "" } ]
scidocsrr
11edb90b99ca38e51ae1a25a130afdba
Knowledge Elicitation Methods for Affect Modelling in Education
[ { "docid": "5f5c78b74e1e576dd48690b903bf4de4", "text": "Automatic facial expression recognition has been an active topic in computer science for over two decades, in particular facial action coding system action unit (AU) detection and classification of a number of discrete emotion states from facial expressive imagery. Standardization and comparability have received some attention; for instance, there exist a number of commonly used facial expression databases. However, lack of a commonly accepted evaluation protocol and, typically, lack of sufficient details needed to reproduce the reported individual results make it difficult to compare systems. This, in turn, hinders the progress of the field. A periodical challenge in facial expression recognition would allow such a comparison on a level playing field. It would provide an insight on how far the field has come and would allow researchers to identify new goals, challenges, and targets. This paper presents a meta-analysis of the first such challenge in automatic recognition of facial expressions, held during the IEEE conference on Face and Gesture Recognition 2011. It details the challenge data, evaluation protocol, and the results attained in two subchallenges: AU detection and classification of facial expression imagery in terms of a number of discrete emotion categories. We also summarize the lessons learned and reflect on the future of the field of facial expression recognition in general and on possible future challenges in particular.", "title": "" }, { "docid": "0f3a795be7101977171a9232e4f98bf4", "text": "Emotions are universally recognized from facial expressions--or so it has been claimed. To support that claim, research has been carried out in various modern cultures and in cultures relatively isolated from Western influence. A review of the methods used in that research raises questions of its ecological, convergent, and internal validity. Forced-choice response format, within-subject design, preselected photographs of posed facial expressions, and other features of method are each problematic. When they are altered, less supportive or nonsupportive results occur. When they are combined, these method factors may help to shape the results. Facial expressions and emotion labels are probably associated, but the association may vary with culture and is loose enough to be consistent with various alternative accounts, 8 of which are discussed.", "title": "" } ]
[ { "docid": "1dbb3a49f6c0904be9760f877b7270b7", "text": "We propose a geographical visualization to support operators of coastal surveillance systems and decision making analysts to get insights in vessel movements. For a possibly unknown area, they want to know where significant maritime areas, like highways and anchoring zones, are located. We show these features as an overlay on a map. As source data we use AIS data: Many vessels are currently equipped with advanced GPS devices that frequently sample the state of the vessels and broadcast them. Our visualization is based on density fields that are derived from convolution of the dynamic vessel positions with a kernel. The density fields are shown as illuminated height maps. Combination of two fields, with a large and small kernel provides overview and detail. A large kernel provides an overview of area usage revealing vessel highways. Details of speed variations of individual vessels are shown with a small kernel, highlighting anchoring zones where multiple vessels stop. Besides for maritime applications we expect that this approach is useful for the visualization of moving object data in general.", "title": "" }, { "docid": "1cd77d97f27b45d903ffcecda02795a5", "text": "Molecular machine learning has been maturing rapidly over the last few years. Improved methods and the presence of larger datasets have enabled machine learning algorithms to make increasingly accurate predictions about molecular properties. However, algorithmic progress has been limited due to the lack of a standard benchmark to compare the efficacy of proposed methods; most new algorithms are benchmarked on different datasets making it challenging to gauge the quality of proposed methods. This work introduces MoleculeNet, a large scale benchmark for molecular machine learning. MoleculeNet curates multiple public datasets, establishes metrics for evaluation, and offers high quality open-source implementations of multiple previously proposed molecular featurization and learning algorithms (released as part of the DeepChem open source library). MoleculeNet benchmarks demonstrate that learnable representations are powerful tools for molecular machine learning and broadly offer the best performance. However, this result comes with caveats. Learnable representations still struggle to deal with complex tasks under data scarcity and highly imbalanced classification. For quantum mechanical and biophysical datasets, the use of physics-aware featurizations can be more important than choice of particular learning algorithm.", "title": "" }, { "docid": "2d71baac51cc59876bdcc0501bb68ee8", "text": "Emotions are undeniably a central component of human existence. In recent years, the importance of developing systems which incorporate emotions into human-computer interaction (HCI) has been widely acknowledged. However, research on emotion recognition has been dominated by studies of facial expression of emotion. In comparison, the study of EBL has received relatively little attention. Here we study the phenomena of EBL, specifically of static body postures expressing emotions, from two different perspectives. First, we have built a computational model for the recognition of four basic emotions which achieves a relatively high recognition rate (70 %). Secondly, to study perception of EBL, we examined what body parts attract the observer's attention during the perception of EBL. 
This is done by tracking eye movements of human subjects during the observation of static postures expressing emotions. Although invaluable information can be inferred from motion, this study will show that information about static body posture is rich enough for both automatic recognition and human perception. The present study contributes in an applicative way both to the development of automatic recognition systems of EBL and provides insight into the nature of human recognition of EBL. 12", "title": "" }, { "docid": "f330cfad6e7815b1b0670217cd09b12e", "text": "In this paper we study the effect of false data injection attacks on state estimation carried over a sensor network monitoring a discrete-time linear time-invariant Gaussian system. The steady state Kalman filter is used to perform state estimation while a failure detector is employed to detect anomalies in the system. An attacker wishes to compromise the integrity of the state estimator by hijacking a subset of sensors and sending altered readings. In order to inject fake sensor measurements without being detected the attacker will need to carefully design his actions to fool the estimator as abnormal sensor measurements would result in an alarm. It is important for a designer to determine the set of all the estimation biases that an attacker can inject into the system without being detected, providing a quantitative measure of the resilience of the system to such attacks. To this end, we will provide an ellipsoidal algorithm to compute its inner and outer approximations of such set. A numerical example is presented to further illustrate the effect of false data injection attack on state estimation.", "title": "" }, { "docid": "94c9eec9aa4f36bf6b2d83c3cc8dbb12", "text": "Many real world security problems can be modelled as finite zero-sum games with structured sequential strategies and limited interactions between the players. An abstract class of games unifying these models are the normal-form games with sequential strategies (NFGSS). We show that all games from this class can be modelled as well-formed imperfect-recall extensiveform games and consequently can be solved by counterfactual regret minimization. We propose an adaptation of the CFR algorithm for NFGSS and compare its performance to the standard methods based on linear programming and incremental game generation. We validate our approach on two security-inspired domains. We show that with a negligible loss in precision, CFR can compute a Nash equilibrium with five times less computation than its competitors. Game theory has been recently used to model many real world security problems, such as protecting airports (Pita et al. 2008) or airplanes (Tsai et al. 2009) from terrorist attacks, preventing fare evaders form misusing public transport (Yin et al. 2012), preventing attacks in computer networks (Durkota et al. 2015), or protecting wildlife from poachers (Fang, Stone, and Tambe 2015). Many of these security problems are sequential in nature. Rather than a single monolithic action, the players’ strategies are formed by sequences of smaller individual decisions. For example, the ticket inspectors make a sequence of decisions about where to check tickets and which train to take; a network administrator protects the network against a sequence of actions an attacker uses to penetrate deeper into the network. Sequential decision making in games has been extensively studied from various perspectives. 
Recent years have brought significant progress in solving massive imperfectinformation extensive-form games with a focus on the game of poker. Counterfactual regret minimization (Zinkevich et al. 2008) is the family of algorithms that has facilitated much of this progress, with a recent incarnation (Tammelin et al. 2015) essentially solving for the first time a variant of poker commonly played by people (Bowling et al. 2015). However, there has not been any transfer of these results to research on real world security problems. Copyright c © 2016, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. We focus on an abstract class of sequential games that can model many sequential security games, such as games taking place in physical space that can be discretized as a graph. This class of games is called normal-form games with sequential strategies (NFGSS) (Bosansky et al. 2015) and it includes, for example, existing game theoretic models of ticket inspection (Jiang et al. 2013), border patrolling (Bosansky et al. 2015), and securing road networks (Jain et al. 2011). In this work we formally prove that any NFGSS can be modelled as a slightly generalized chance-relaxed skew well-formed imperfect-recall game (CRSWF) (Lanctot et al. 2012; Kroer and Sandholm 2014), a subclass of extensiveform games with imperfect recall in which counterfactual regret minimization is guaranteed to converge to the optimal strategy. We then show how to adapt the recent variant of the algorithm, CFR, directly to NFGSS and present experimental validation on two distinct domains modelling search games and ticket inspection. We show that CFR is applicable and efficient in domains with imperfect recall that are substantially different from poker. Moreover, if we are willing to sacrifice a negligible degree of approximation, CFR can find a solution substantially faster than methods traditionally used in research on security games, such as formulating the game as a linear program (LP) and incrementally building the game model by double oracle methods.", "title": "" }, { "docid": "980565c38859db2df10db238d8a4dc61", "text": "Performing High Voltage (HV) tasks with a multi craft work force create a special set of safety circumstances. This paper aims to present vital information relating to when it is acceptable to use a single or a two-layer soil structure. Also it discusses the implication of the high voltage infrastructure on the earth grid and the safety of this implication under a single or a two-layer soil structure. A multiple case study is investigated to show the importance of using the right soil resistivity structure during the earthing system design. Keywords—Earth Grid, EPR, High Voltage, Soil Resistivity Structure, Step Voltage, Touch Voltage.", "title": "" }, { "docid": "ba94bfaa5dc669877deedfaee057c93d", "text": "Bayesian networks have become a widely used method in the modelling of uncertain knowledge. Owing to the difficulty domain experts have in specifying them, techniques that learn Bayesian networks from data have become indispensable. Recently, however, there have been many important new developments in this field. This work takes a broad look at the literature on learning Bayesian networks—in particular their structure—from data. Specific topics are not focused on in detail, but it is hoped that all the major fields in the area are covered. This article is not intended to be a tutorial—for this, there are many books on the topic, which will be presented. 
However, an effort has been made to locate all the relevant publications, so that this paper can be used as a ready reference to find the works on particular sub-topics.", "title": "" }, { "docid": "010cd98d16481a7c2470b20066dcef3a", "text": "Most real decisions, unlike those of economics texts, have a status quo alternative-that is, doing nothing or maintaining one’s current or previous decision. A series of decision-making experiments shows that individuals disproportionately stick with the status quo. Data on the selections of health plans and retirement programs by faculty members reveal that the status quo bias is substantial in important real decisions. Economics, psychology, and decision theory provide possible explanations for this bias. Applications are discussed ranging from marketing techniques, to industrial organization, to the advance of science. “To do nothing is within the power of all men.” Samuel Johnson How do individuals make decisions? This question is of crucial interest to researchers in economics, political science, psychology, sociology, history, and law. Current economic thinking embraces the concept of rational choice as a prescriptive and descriptive paradigm. That is, economists believe that economic agents-individuals, managers, government regulators-should (and in large part do) choose among alternatives in accordance with well-defined preferences. In the canonical model of decision making under certainty, individuals select one of a known set of alternative choices with certain outcomes. They are endowed with preferences satisfying the basic choice axioms-that is, they have a transitive ranking of these alternatives. Rational choice simply means that they select their most preferred alternative in this ranking. If we know the decision maker’s ranking, we can predict his or her choice infallibly. For instance, an individual’s choice should not be affected by removing or adding an irrelevant (i.e., not top-ranked) alternative. Conversely, when we observe his or her actual choice, we know it was his or her top-ranked alternative. 8 WILLIAM SAMUELSON AND RICHARD ZECKHAUSER The theory of rational decision making under uncertainty, first formalized by Savage (19.54) requires the individual to assign probabilities to the possible outcomes and to calibrate utilities to value these outcomes. The decision maker selects the alternative that offers the highest expected utility. A critical feature of this approach is that transitivity is preserved for the more general category, decision making under uncertainty. Most of the decisions discussed here involve what Frank Knight referred to as risk (probabilities of the outcomes are well defined) or uncertainty (only subjective probabilities can be assigned to outcomes). In a number of instances, the decision maker’s preferences are uncertain. A fundamental property of the rational choice model, under certainty or uncertainty, is that only preference-relevant features of the alternatives influence the individual’s decision. Thus, neither the order in which the alternatives are presented nor any labels they carry should affect the individual’s choice. Of course, in realworld decision problems the alternatives often come with influential labels. Indeed, one alternative inevitably carries the label status quo-that is, doing nothing or maintaining one’s current or previous decision is almost always a possibility. 
Faced with new options, decision makers often stick with the status quo alternative, for example, to follow customary company policy, to elect an incumbent to still another term in office, to purchase the same product brands, or to stay in the same job. Thus, with respect to the canonical model, a key question is whether the framing of an alternative-whether it is in the status quo position or not-will significantly affect the likelihood of its being chosen. This article reports the results of a series of decision-making experiments designed to test for status quo effects. The main finding is that decision makers exhibit a significant status quo bias. Subjects in our experiments adhered to status quo choices more frequently than would be predicted by the canonical model. The vehicle for the experiments was a questionnaire consisting of a series of decision problems, each requiring a choice from among a fixed number of alternatives. While controlling for preferences and holding constant the set of choice alternatives, the experimental design varied the framing of the alternatives. Under neutral framing, a menu of potential alternatives with no specific labels attached was presented; all options were on an equal footing, as in the usual depiction of the canonical model. Under status quo framing, one of the choice alternatives was placed in the status quo position and the others became alternatives to the status quo. In some of the experiments, the status quo condition was manipulated by the experimenters. In the remainder, which involved sequential decisions, the subject's initial choice self-selected the status quo option for a subsequent choice. In both parts of the experiment, status quo framing was found to have predictable and significant effects on subjects' decision making. Individuals exhibited a significant status quo bias across a range of decisions. The degree of bias varied with the strength of the individual's discernible preference and with the number of alternatives in the choice set. The stronger was an individual's preference for a selected alternative, the weaker was the bias. The more options that were included in the choice set, the stronger was the relative bias for the status quo. To illustrate our findings, consider an election contest between two candidates who would be expected to divide the vote evenly if neither were an incumbent (the neutral setting). (This example should be regarded as a metaphor; we do not claim that our experimental results actually explain election outcomes.) Now suppose that one of these candidates is the incumbent office holder, a status generally acknowledged as a significant advantage in an election. An extrapolation of our experimental results indicates that the incumbent office holder (the status quo alternative) would claim an election victory by a margin of 59% to 41%. Conversely, a candidate who would command as few as 39% of the voters in the neutral setting could still earn a narrow election victory as an incumbent. With multiple candidates in a plurality election, the status quo advantage is more dramatic. Consider a race among four candidates, each of whom would win 25% of the vote in the neutral setting. Here, the incumbent earns 38.5% of the vote, and each challenger 20.5%. In turn, an incumbent candidate who would earn as little as 9% of the vote in a neutral election can still earn a 25.4% plurality. 
The finding that individuals exhibit significant status quo bias in relatively simple hypothetical decision tasks challenges the presumption (held implicitly by many economists) that the rational choice model provides a valid descriptive model for all economic behavior. (In Section 3, we explore possible explanations for status quo bias that are consistent with rational behavior.) In particular, this finding challenges perfect optimizing models that claim (at least) allegorical significance in explaining actual behavior in a complicated imperfect world. Even in simple experimental settings, perfect models are violated. In themselves, the experiments do not address the larger question of the importance of status quo bias in actual private and public decision making. Those who are skeptical of economic experiments purporting to demonstrate deviations from rationality contend that actual economic agents, with real resources at stake, will make it their business to act rationally. For several reasons, however, we believe that the skeptic's argument applies only weakly to the status quo findings. First, the status quo bias is not a mistake-like a calculation error or an error in maximizing-that once pointed out is easily recognized and corrected. This bias is considerably more subtle. In the debriefing discussions following the experiments, subjects expressed surprise at the existence of the bias. Most were readily persuaded of the aggregate pattern of behavior (and the reasons for it), but seemed unaware (and slightly skeptical) that they personally would fall prey to this bias. Furthermore, even if the bias is recognized, there appear to be no obvious ways to avoid it beyond calling on the decision maker to weigh all options evenhandedly. Second, we would argue that the controlled experiments' hypothetical decision tasks provide fewer reasons for the expression of status quo bias than do real-world decisions. Many, if not most, subjects did not consciously perceive the differences in framing across decision problems in the experiment. When they did recognize the framing, they stated that it should not make much of a difference. By contrast, one would expect the status quo characteristic to have a much greater impact on actual decision making. Despite a desire to weigh all options evenhandedly, a decision maker in the real world may have a considerable commitment to, or psychological investment in, the status quo option. The individual may retain the status quo out of convenience, habit or inertia, policy (company or government) or custom, because of fear or innate conservatism, or through simple rationalization. His or her past choice may have become known to others and, unlike the subject in a compressed-time laboratory setting, he or she may have lived with the status quo choice for some time. Moreover, many real-world decisions are made by a person acting as part of an organization or group, which may exert additional pressures for status quo choices. Finally, in our experiments, an alternative to the status quo was always explicitly identified. In day-to-day decision making, by contrast, a decision maker may not even recognize the potential for a choice. 
When, as is often the case in the real world, the first decision is to recognize that th", "title": "" }, { "docid": "ce7e6b3886242584d2e37c82ba85eca0", "text": "BACKGROUND\nWe performed an updated meta-analysis of randomized placebo-controlled trials testing memantine monotherapy for patients with Alzheimer's disease (AD).\n\n\nMETHODS\nThe meta-analysis included randomized controlled trials of memantine monotherapy for AD, omitting those in which patients were also administered a cholinesterase inhibitor. Cognitive function, activities of daily living, behavioral disturbances, global function, stage of dementia, drug discontinuation rate, and individual side effects were compared between memantine monotherapy and placebo groups. The primary outcomes were cognitive function and behavioral disturbances; the others were secondary outcomes.\n\n\nRESULTS\nNine studies including 2433 patients that met the study's inclusion criteria were identified. Memantine monotherapy significantly improved cognitive function [standardized mean difference (SMD)=-0.27, 95% confidence interval (CI)=-0.39 to -0.14, p=0.0001], behavioral disturbances (SMD=-0.12, 95% CI=-0.22 to -0.01, p=0.03), activities of daily living (SMD=-0.09, 95% CI=-0.19 to -0.00, p=0.05), global function assessment (SMD=-0.18, 95% CI=-0.27 to -0.09, p=0.0001), and stage of dementia (SMD=-0.23, 95% CI=-0.33 to -0.12, p=0.0001) scores. Memantine was superior to placebo in terms of discontinuation because of inefficacy [risk ratio (RR)=0.36, 95% CI=0.17¬ to 0.74, p=0.006, number needed to harm (NNH)=non significant]. Moreover, memantine was associated with less agitation compared with placebo (RR=0.68, 95% CI=0.49 to 0.94, p=0.02, NNH=non significant). There were no significant differences in the rate of discontinuation because of all causes, all adverse events, and individual side effects other than agitation between the memantine monotherapy and placebo groups.\n\n\nCONCLUSIONS\nMemantine monotherapy improved cognition, behavior, activities of daily living, global function, and stage of dementia and was well-tolerated by AD patients. However, the effect size in terms of efficacy outcomes was small and thus there is limited evidence of clinical benefit.", "title": "" }, { "docid": "cea7debee0413a79a9c7c5e54d82e337", "text": "Viral Marketing, the idea of exploiting social interactions of users to propagate awareness for products, has gained considerable focus in recent years. One of the key issues in this area is to select the best seeds that maximize the influence propagated in the social network. In this paper, we define the seed selection problem (called t-Influence Maximization, or t-IM) for multiple products. Specifically, given the social network and t products along with their seed requirements, we want to select seeds for each product that maximize the overall influence. As the seeds are typically sent promotional messages, to avoid spamming users, we put a hard constraint on the number of products for which any single user can be selected as a seed. In this paper, we design two efficient techniques for the t-IM problem, called Greedy and FairGreedy. The Greedy algorithm uses simple greedy hill climbing, but still results in a 1/3-approximation to the optimum. Our second technique, FairGreedy, allocates seeds with not only high overall influence (close to Greedy in practice), but also ensures fairness across the influence of different products. 
We also design efficient heuristics for estimating the influence of the selected seeds, that are crucial for running the seed selection on large social network graphs. Finally, using extensive simulations on real-life social graphs, we show the effectiveness and scalability of our techniques compared to existing and naive strategies.", "title": "" }, { "docid": "ebafef08b98f0581210749c570504599", "text": "In this paper we examine the effect of receptive field designs on classification accuracy in the commonly adopted pipeline of image classification. While existing algorithms usually use manually defined spatial regions for pooling, we show that learning more adaptive receptive fields increases performance even with a significantly smaller codebook size at the coding layer. To learn the optimal pooling parameters, we adopt the idea of over-completeness by starting with a large number of receptive field candidates, and train a classifier with structured sparsity to only use a sparse subset of all the features. An efficient algorithm based on incremental feature selection and retraining is proposed for fast learning. With this method, we achieve the best published performance on the CIFAR-10 dataset, using a much lower dimensional feature space than previous methods.", "title": "" }, { "docid": "95ee34da123289b9c538471844e39d8c", "text": "Population-level analyses often use average quantities to describe heterogeneous systems, particularly when variation does not arise from identifiable groups. A prominent example, central to our current understanding of epidemic spread, is the basic reproductive number, R0, which is defined as the mean number of infections caused by an infected individual in a susceptible population. Population estimates of R0 can obscure considerable individual variation in infectiousness, as highlighted during the global emergence of severe acute respiratory syndrome (SARS) by numerous ‘superspreading events’ in which certain individuals infected unusually large numbers of secondary cases. For diseases transmitted by non-sexual direct contacts, such as SARS or smallpox, individual variation is difficult to measure empirically, and thus its importance for outbreak dynamics has been unclear. Here we present an integrated theoretical and statistical analysis of the influence of individual variation in infectiousness on disease emergence. Using contact tracing data from eight directly transmitted diseases, we show that the distribution of individual infectiousness around R0 is often highly skewed. Model predictions accounting for this variation differ sharply from average-based approaches, with disease extinction more likely and outbreaks rarer but more explosive. Using these models, we explore implications for outbreak control, showing that individual-specific control measures outperform population-wide measures. Moreover, the dramatic improvements achieved through targeted control policies emphasize the need to identify predictive correlates of higher infectiousness. Our findings indicate that superspreading is a normal feature of disease spread, and to frame ongoing discussion we propose a rigorous definition for superspreading events and a method to predict their frequency.", "title": "" }, { "docid": "62b8b95579e387913198cd4adc77eb84", "text": "This paper aims to solve a fundamental problem in intensitybased 2D/3D registration, which concerns the limited capture range and need for very good initialization of state-of-the-art image registration methods. 
We propose a regression approach that learns to predict rotation and translations of arbitrary 2D image slices from 3D volumes, with respect to a learned canonical atlas co-ordinate system. To this end, we utilize Convolutional Neural Networks (CNNs) to learn the highly complex regression function that maps 2D image slices into their correct position and orientation in 3D space. Our approach is attractive in challenging imaging scenarios, where significant subject motion complicates reconstruction performance of 3D volumes from 2D slice data. We extensively evaluate the effectiveness of our approach quantitatively on simulated MRI brain data with extreme random motion. We further demonstrate qualitative results on fetal MRI where our method is integrated into a full reconstruction and motion compensation pipeline. With our CNN regression approach we obtain an average prediction error of 7mm on simulated data, and convincing reconstruction quality of images of very young fetuses where previous methods fail. We further discuss applications to Computed Tomography and X-ray projections. Our approach is a general solution to the 2D/3D initialization problem. It is computationally efficient, with prediction times per slice of a few milliseconds, making it suitable for real-time scenarios.", "title": "" }, { "docid": "62d3ed4ab5baeea14ccf93ae1b064dda", "text": "Many challenges are associated with the integration of geographic information systems (GISs) with models in specific applications. One of them is adapting models to the environment of GISs. Unique aspects of water resource management problems require a special approach to development of GIS data structures. Expanded development of GIS applications for handling water resources management analysis can be assisted by use of an object oriented approach. In this paper, we model a river basin water allocation problem as a collection of spatial and thematic objects. A conceptual GIS data model is formulated to integrate the physical and logical components of the modeling problem into an operational framework, based on which, extended GIS functions are developed to implement a tight linkage between the GIS and the water resources management model. Through the object-oriented approach, data, models and users interfaces are integrated in the GIS environment, creating great flexibility for modeling and analysis. The concept and methodology described in this paper is also applicable to connecting GIS with models in other fields that have a spatial dimension and hence to which GIS can provide a powerful additional component of the modeler’s tool kit.  2002 Elsevier Science Ltd. All rights reserved.", "title": "" }, { "docid": "3df69e5ce63d3a3b51ad6f2b254e12b6", "text": "This paper presents three approaches to creating corpora that we are working on for speech-to-speech translation in the travel conversation task. The first approach is to collect sentences that bilingual travel experts consider useful for people going-to/coming-from another country. The resulting English-Japanese aligned corpora are collectively called the basic travel expression corpus (BTEC), which is now being translated into several other languages. The second approach tries to expand this corpus by generating many \"synonymous\" expressions for each sentence. Although we can create large corpora by the above two approaches relatively cheaply, they may be different from utterances in actual conversation. 
Thus, as the third approach, we are collecting dialogue corpora by letting two people talk, each in his/her native language, through a speech-to-speech translation system. To concentrate on translation modules, we have replaced speech recognition modules with human typists. We will report some of the characteristics of these corpora as well.", "title": "" }, { "docid": "921062a73e2b4a5ab1d994ac22b04918", "text": "This study describes a new corpus of over 60,000 hand-annotated metadiscourse acts from 106 OpenCourseWare lectures, from two different disciplines: Physics and Economics. Metadiscourse is a set of linguistic expressions that signal different functions in the discourse. This type of language is hypothesised to be helpful in finding a structure in unstructured text, such as lectures discourse. A brief summary is provided about the annotation scheme and labelling procedures, inter-annotator reliability statistics, overall distributional statistics, a description of auxiliary data that will be distributed with the corpus, and information relating to how to obtain the data. The results provide a deeper understanding of lecture structure and confirm the reliable coding of metadiscursive acts in academic lectures across different disciplines. The next stage of our research will be to build a classification model to automate the tagging process, instead of manual annotation, which take time and efforts. This is in addition to the use of these tags as indicators of the higher level structure of lecture discourse.", "title": "" }, { "docid": "636a39e2cfcbad2f495cb063ee2477b5", "text": "For many real applications, it’s equally important to detect objects accurately and quickly. In this paper, we propose an accurate and efficient single shot object detector with feature aggregation and enhancement (FAENet). Our motivation is to enhance and exploit the shallow and deep feature maps of the whole network simultaneously. To achieve it we introduce a pair of novel feature aggregation modules and two feature enhancement blocks, and integrate them into the original structure of SSD. Extensive experiments on both the PASCAL VOC and MS COCO datasets demonstrate that the proposed method achieves much higher accuracy than SSD. In addition, our method performs better than the state-of-the-art one-stage detector RefineDet on small objects and can run at a faster speed.", "title": "" }, { "docid": "34d16a5eb254846f431e2c716309e20a", "text": "AIM\nWe investigated the uptake and pharmacokinetics of l-ergothioneine (ET), a dietary thione with free radical scavenging and cytoprotective capabilities, after oral administration to humans, and its effect on biomarkers of oxidative damage and inflammation.\n\n\nRESULTS\nAfter oral administration, ET is avidly absorbed and retained by the body with significant elevations in plasma and whole blood concentrations, and relatively low urinary excretion (<4% of administered ET). ET levels in whole blood were highly correlated to levels of hercynine and S-methyl-ergothioneine, suggesting that they may be metabolites. After ET administration, some decreasing trends were seen in biomarkers of oxidative damage and inflammation, including allantoin (urate oxidation), 8-hydroxy-2'-deoxyguanosine (DNA damage), 8-iso-PGF2α (lipid peroxidation), protein carbonylation, and C-reactive protein. 
However, most of the changes were non-significant.\n\n\nINNOVATION\nThis is the first study investigating the administration of pure ET to healthy human volunteers and monitoring its uptake and pharmacokinetics. This compound is rapidly gaining attention due to its unique properties, and this study lays the foundation for future human studies.\n\n\nCONCLUSION\nThe uptake and retention of ET by the body suggests an important physiological function. The decreasing trend of oxidative damage biomarkers is consistent with animal studies suggesting that ET may function as a major antioxidant but perhaps only under conditions of oxidative stress. Antioxid. Redox Signal. 26, 193-206.", "title": "" }, { "docid": "033b05d21f5b8fb5ce05db33f1cedcde", "text": "Seasonal occurrence of the common cutworm Spodoptera litura (Fab.) (Lepidoptera: Noctuidae) moths captured in synthetic sex pheromone traps and associated field population of eggs and larvae in soybean were examined in India from 2009 to 2011. Male moths of S. litura first appeared in late July or early August and continued through October. Peak male trap catches occurred during the second fortnight of September, which was within soybean reproductive stages. Similarly, the first appearance of S. litura egg masses and larval populations were observed after the first appearance of male moths in early to mid-August, and were present in the growing season up to late September to mid-October. The peak appearance of egg masses and larval populations always corresponded with the peak activity of male moths recorded during mid-September in all years. Correlation studies showed that weekly mean trap catches were linearly and positively correlated with egg masses and larval populations during the entire growing season of soybean. Seasonal means of male moth catches in pheromone traps during the 2010 and 2011 seasons were significantly lower than the catches during the 2009 season. However, seasonal means of the egg masses and larval populations were not significantly different between years. Pheromone traps may be useful indicators of the onset of numbers of S. litura eggs and larvae in soybean fields.", "title": "" }, { "docid": "ffed6abc3134f30d267342e83931ee64", "text": "This paper discusses General Random Utility Models (GRUMs). These are a class of parametric models that generate partial ranks over alternatives given attributes of agents and alternatives. We propose two preference elicitation scheme for GRUMs developed from principles in Bayesian experimental design, one for social choice and the other for personalized choice. We couple this with a general Monte-CarloExpectation-Maximization (MC-EM) based algorithm for MAP inference under GRUMs. We also prove uni-modality of the likelihood functions for a class of GRUMs. We examine the performance of various criteria by experimental studies, which show that the proposed elicitation scheme increases the precision of estimation.", "title": "" } ]
scidocsrr
015f01d0b690b329424e2e757777c8ce
Understanding customer satisfaction and loyalty: An empirical study of mobile instant messages in China
[ { "docid": "97957590d7bec130bac3cf0f0e29cf9a", "text": "Understanding user acceptance of the Internet, especially the intentions to use Internet commerce and mobile commerce, is important in explaining the fact that these commerce have been growing at an exponential rate in recent years. This paper studies factors of new technology to better understand and manage the electronic commerce activities. The theoretical model proposed in this paper is intended to clarify the factors as they are related to the technology acceptance model. More specifically, the relationship among trust and other factors are hypothesized. Using the technology acceptance model, this research reveals the importance of the hedonic factor. The result of this research implies that the ways of stimulating and facilitating customers' participation in mobile commerce should be differentiated from those in Internet commerce", "title": "" }, { "docid": "b6d6da15fd000be1a01d4b0f1bb0d087", "text": "Purpose – The purpose of the paper is to distinguish features of m-commerce from those of e-commerce and identify factors to influence customer satisfaction (m-satisfaction) and loyalty (m-loyalty) in m-commerce by empirically-based case study. Design/methodology/approach – First, based on previous literature, the paper builds sets of customer satisfaction factors for both e-commerce and m-commerce. Second, features of m-commerce are identified by comparing it with current e-commerce through decision tree (DT). Third, with the derived factors from DT, significant factors and relationships among the factors, m-satisfaction and m-loyalty are examined by m-satisfaction model employing structural equation model. Findings – The paper finds that m-commerce is partially similar in factors like “transaction process” and “customization” which lead customer satisfaction after connecting an m-commerce site, but it has unique aspects of “content reliability”, “availability”, and “perceived price level of mobile Internet (m-Internet)” which build customer’s intention to the m-commerce site. Through the m-satisfaction model, “content reliability”, and “transaction process” are proven to be significantly influential factors to m-satisfaction and m-loyalty. Research implications/limitations – The paper can be a meaningful step to provide empirical analysis and evaluation based on questionnaire survey targeting actual users. The research is based on a case study on digital music transaction, which is indicative, rather than general. Practical implications – The paper meets the needs to focus on customer under the fiercer competition in Korean m-commerce market. It can guide those who want to initiate, move or broaden their business to m-commerce from e-commerce. Originality/value – The paper develops a revised ACSI model to identify individual critical factors and the degree of effect.", "title": "" } ]
[ { "docid": "716e08a31e775342daee6319d4c6a4cf", "text": "Error-related EEG potentials (ErrP) can be used for brain-machine interfacing (BMI). Decoding of these signals, indicating subject's perception of erroneous system decisions or actions can be used to correct these actions or to improve the overall interfacing system. Multiple studies have shown the feasibility of decoding these potentials in single-trial using different types of experimental protocols and feedback modalities. However, previously reported approaches are limited by the use of long inter-stimulus intervals (ISI > 2 s). In this work we assess if it is possible to overcome this limitation. Our results show that it is possible to decode error-related potentials elicited by stimuli presented with ISIs lower than 1 s without decrease in performance. Furthermore, the increase in the presentation rate did not increase the subject workload. This suggests that the presentation rate for ErrP-based BMI protocols using serial monitoring paradigms can be substantially increased with respect to previous works.", "title": "" }, { "docid": "bc0064e87f077b9acf4d583d3d90489b", "text": "The dominant evolutionary theory of physical attraction posits that attractiveness reflects physiological health, and attraction is a mechanism for identifying a healthy mate. Previous studies have found that perceptions of the healthiest body mass index (weight scaled for height; BMI) for women are close to healthy BMI guidelines, while the most attractive BMI is significantly lower, possibly pointing to an influence of sociocultural factors in determining attractive BMI. However, less is known about ideal body size for men. Further, research has not addressed the role of body fat and muscle, which have distinct relationships with health and are conflated in BMI, in determining perceived health and attractiveness. Here, we hypothesised that, if attractiveness reflects physiological health, the most attractive and healthy appearing body composition should be in line with physiologically healthy body composition. Thirty female and 33 male observers were instructed to manipulate 15 female and 15 male body images in terms of their fat and muscle to optimise perceived health and, separately, attractiveness. Observers were unaware that they were manipulating the muscle and fat content of bodies. The most attractive apparent fat mass for female bodies was significantly lower than the healthiest appearing fat mass (and was lower than the physiologically healthy range), with no significant difference for muscle mass. The optimal fat and muscle mass for men's bodies was in line with the healthy range. Male observers preferred a significantly lower overall male body mass than did female observers. While the body fat and muscle associated with healthy and attractive appearance is broadly in line with physiologically healthy values, deviations from this pattern suggest that future research should examine a possible role for internalization of body ideals in influencing perceptions of attractive body composition, particularly in women.", "title": "" }, { "docid": "69a0426796f46ac387f1f9d831c85e87", "text": "In this paper, a Volterra analysis built on top of a normal harmonic balance simulation is used for a comprehensive analysis of the causes of AM-PM distortion in a LDMOS RF power amplifier (PA). The analysis shows that any nonlinear capacitors cause AM-PM. In addition, varying terminal impedances may pull the matching impedances and cause phase shift. 
The AM-PM is also affected by the distortion that is mixed down from the second harmonic. As a sample circuit, an internally matched 30-W LDMOS RF PA is used and the results are compared to measured AM-AM, AM-PM and large-signal S11.", "title": "" }, { "docid": "eb10f86262180b122d261f5acbe4ce18", "text": "Procrastination is variously described as harmful, innocuous, or even beneficial. Two longitudinal studies examined procrastination among students. Procrastinators reported lower stress and less illness than nonprocrastinators early in the semester, but they reported higher stress and more illness late in the term, and overall they were sicker. Procrastinators also received lower grades on all assignments. Procrastination thus appears to be a self-defeating behavior pattern marked by short-term benefits and long-term costs. Doing one's work and fulfilling other obligations in a timely fashion seem like integral parts of rational, proper adult functioning. Yet a majority of the population admits to procrastinating at least sometimes, and substantial minorities admit to significant personal, occupational, or financial difficulties resulting from their dilatory behavior (Ferrari, Johnson, & McCown, 1995). Procrastination is often condemned, particularly by people who do not think themselves guilty of it (Burka & Yuen, 1983; Ferrari et al., 1995). Critics of procrastination depict it as a lazy self-indulgent habit of putting things off for no reason. They say it is self-defeating in that it lowers the quality of performance, because one ends up with less time to work (Baumeister & Scher, 1988; Ellis & Knaus, 1977). Others depict it as a destructive strategy of self-handicapping (Jones & Berglas, 1978), such as when people postpone or withhold effort so as to give themselves an excuse for anticipated poor performance (Tice, 1991; Tice & Baumeister, 1990). People who finish their tasks and assignments early may point self-righteously to the stress suffered by procrastinators at the last minute and say that putting things off is bad for one's physical or mental health (see Boice, 1989, 1996; Rothblum, Solomon, & Murakami, 1986; Solomon & Rothblum, 1984). On the other hand, some procrastinators defend their practice. They point out correctly that if one puts in the same amount of work on the project, it does not matter whether this is done early or late. Some even say that procrastination improves performance, because the imminent deadline creates excitement and pressure that elicit peak performance: \"I do my best work under pressure,\" in the standard phrase (Ferrari, 1992; Ferrari et al., 1995; Uy, 1995). Even if it were true that stress and illness are higher for people who leave things until the last minute—and research has not yet provided clear evidence that in fact they both are higher—this might be offset by the enjoyment of carefree times earlier (see Ainslie, 1992). The present investigation involved a longitudinal study of the effects of procrastination on quality of performance, stress, and illness. Early in the semester, students were given an assignment with a deadline. Procrastinators were identified using Lay's (1986) scale. Students' well-being was assessed with self-reports of stress and illness. The validity of the scale was checked by ascertaining whether students turned in the assignment early, on time, or late. Finally, task performance was assessed by consulting the grades received. Competing predictions could be made", "title": "" }, 
{ "docid": "591af257561f98f28b1530c0fee13907", "text": "Most mining techniques have been concerned only with interesting patterns. However, in recent years, there has been an increasing demand for mining Unexpected Items or Outliers or Rare Items. Several application domains have realized the direct mapping between outliers in data and real-world anomalies that are of great interest to an analyst. Outliers represent semantically correct but infrequent situations in a database. Detecting outliers allows extracting useful and actionable knowledge for the domain experts. In Educational Data, outliers are those students whose scores deviate markedly from the average scores of other students. The educational data are quantitative in nature. Any mining technique on quantitative data will partition the quantitative attributes with unnatural boundaries, which leads to overestimating or underestimating the boundary values. Fuzzy logic handles this in a more realistic way. Knowing the threshold values a priori is not possible; hence our method uses dynamically calculated Support and Rank measures rather than predefined values. Our method uses a modified Fuzzy Apriori Rare Itemsets Mining (FARIM) algorithm to detect the outliers (weak students). This will help the teachers in giving extra coaching for the weak students.", "title": "" }, { "docid": "b5b4e637065ba7c0c18a821bef375aea", "text": "The new era of mobile health ushered in by the wide adoption of ubiquitous computing and mobile communications has brought opportunities for governments and companies to rethink their concept of healthcare. Simultaneously, the worldwide urbanization process represents a formidable challenge and attracts attention toward cities that are expected to gather higher populations and provide citizens with services in an efficient and human manner. These two trends have led to the appearance of mobile health and smart cities. In this article we introduce the new concept of smart health, which is the context-aware complement of mobile health within smart cities. We provide an overview of the main fields of knowledge that are involved in the process of building this new concept. Additionally, we discuss the main challenges and opportunities that s-Health would imply and provide a common ground for further research.", "title": "" }, { "docid": "690659887c8261e2984802e2cdb71b5f", "text": "The Discrete Hodge Helmholtz Decomposition (DHHD) is able to locate critical points in a vector field. We explore two novel applications of this technique to image processing problems, viz., hurricane tracking and fingerprint analysis. The eye of the hurricane represents a rotational center, which is shown to be robustly detected using DHHD. This is followed by an automatic segmentation and tracking of the hurricane eye, which does not require manual initializations. DHHD is also used for identification of reference points in fingerprints. The new technique for reference point detection is relatively insensitive to noise in the orientation field. The DHHD based method is shown to detect reference points correctly for 96.25% of the images in the database used.", "title": "" }, { "docid": "f4c2a00b8a602203c86eaebc6f111f46", "text": "Tamara Kulesa: Hello. This is Tamara Kulesa, Worldwide Marketing Manager for IBM Global Business Services for the Global Government Industry. 
I am here today with Susanne Dirks, Manager of the IBM Institute for Business Values Global Center for Economic Development in Ireland. Susanne is responsible for the research and writing of the newly published report, \"A Vision of Smarter Cities: How Cities Can Lead the Way into a Prosperous and Sustainable Future.\" Susanne, thank you for joining me today.", "title": "" }, { "docid": "abcc4de8a7ca3b716fa0951429a6c969", "text": "Recently, deep learning has been successfully applied to the problem of hashing, yielding remarkable performance compared to traditional methods with hand-crafted features. However, most of existing deep hashing methods are designed for the supervised scenario and require a large number of labeled data. In this paper, we propose a novel semi-supervised hashing method for image retrieval, named Deep Hashing with a Bipartite Graph (BGDH), to simultaneously learn embeddings, features and hash codes. More specifically, we construct a bipartite graph to discover the underlying structure of data, based on which an embedding is generated for each instance. Then, we feed raw pixels as well as embeddings to a deep neural network, and concatenate the resulting features to determine the hash code. Compared to existing methods, BGDH is a universal framework that is able to utilize various types of graphs and losses. Furthermore, we propose an inductive variant of BGDH to support out-of-sample extensions. Experimental results on real datasets show that our BGDH outperforms state-of-the-art hashing methods.", "title": "" }, { "docid": "4a5abe07b93938e7549df068967731fc", "text": "A novel compact dual-polarized unidirectional wideband antenna based on two crossed magneto-electric dipoles is proposed. The proposed miniaturization method consist in transforming the electrical filled square dipoles into vertical folded square loops. The surface of the radiating element is reduced to 0.23λ0∗0.23λ0, where λ0 is the wavelength at the lowest operation frequency for a standing wave ratio (SWR) <2.5, which corresponds to a reduction factor of 48%. The antenna has been prototyped using 3D printing technology. The measured input impedance bandwidth is 51.2% from 1.7 GHz to 2.9 GHz with a Standing wave ratio (SWR) <2.", "title": "" }, { "docid": "31954ceaa223884fa27a9c446288b8a9", "text": "Computational thinking (CT) has been described as the use of abstraction, automation, and analysis in problem-solving [3]. We examine how these ways of thinking take shape for middle and high school youth in a set of NSF-supported programs. We discuss opportunities and challenges in both in-school and after-school contexts. Based on these observations, we present a \"use-modify-create\" framework, representing three phases of students' cognitive and practical activity in computational thinking. We recommend continued investment in the development of CT-rich learning environments, in educators who can facilitate their use, and in research on the broader value of computational thinking.", "title": "" }, { "docid": "b59e332c086a8ce6d6ddc0526b8848c7", "text": "We propose Generative Adversarial Tree Search (GATS), a sample-efficient Deep Reinforcement Learning (DRL) algorithm. While Monte Carlo Tree Search (MCTS) is known to be effective for search and planning in RL, it is often sampleinefficient and therefore expensive to apply in practice. In this work, we develop a Generative Adversarial Network (GAN) architecture to model an environment’s dynamics and a predictor model for the reward function. 
We exploit collected data from interaction with the environment to learn these models, which we then use for model-based planning. During planning, we deploy a finite depth MCTS, using the learned model for tree search and a learned Q-value for the leaves, to find the best action. We theoretically show that GATS improves the bias-variance tradeoff in value-based DRL. Moreover, we show that the generative model learns the model dynamics using orders of magnitude fewer samples than the Q-learner. In non-stationary settings where the environment model changes, we find the generative model adapts significantly faster than the Q-learner to the new environment.", "title": "" }, { "docid": "e733b08455a5ca2a5afa596268789993", "text": "In this paper a new PWM inverter topology suitable for medium voltage (2300/4160 V) adjustable speed drive (ASD) systems is proposed. The modular inverter topology is derived by combining three standard 3-phase inverter modules and a 0.33 pu output transformer. The output voltage is high quality, multistep PWM with low dv/dt. Further, the approach also guarantees balanced operation and 100% utilization of each 3-phase inverter module over the entire speed range. These features enable the proposed topology to be suitable for powering constant torque as well as variable torque type loads. Clean power utility interface of the proposed inverter system can be achieved via an 18-pulse input transformer. Analysis, simulation, and experimental results are shown to validate the concepts.", "title": "" }, { "docid": "5ed74b235edcbcb5aeb5b6b3680e2122", "text": "Self-paced learning (SPL) mimics the cognitive mechanism of humans and animals that gradually learns from easy to hard samples. One key issue in SPL is to obtain better weighting strategy that is determined by minimizer function. Existing methods usually pursue this by artificially designing the explicit form of SPL regularizer. In this paper, we focus on the minimizer function, and study a group of new regularizer, named self-paced implicit regularizer that is deduced from robust loss function. Based on the convex conjugacy theory, the minimizer function for self-paced implicit regularizer can be directly learned from the latent loss function, while the analytic form of the regularizer can be even known. A general framework (named SPL-IR) for SPL is developed accordingly. We demonstrate that the learning procedure of SPL-IR is associated with latent robust loss functions, thus can provide some theoretical inspirations for its working mechanism. We further analyze the relation between SPL-IR and half-quadratic optimization. Finally, we implement SPL-IR to both supervised and unsupervised tasks, and experimental results corroborate our ideas and demonstrate the correctness and effectiveness of implicit regularizers.", "title": "" }, { "docid": "7d42d3d197a4d62e1b4c0f3c08be14a9", "text": "Links between issue reports and their corresponding commits in version control systems are often missing. However, these links are important for measuring the quality of a software system, predicting defects, and many other tasks. Several approaches have been designed to solve this problem by automatically linking bug reports to source code commits via comparison of textual information in commit messages and bug reports.
Yet, the effectiveness of these techniques is oftentimes suboptimal when commit messages are empty or contain minimum information; this particular problem makes the process of recovering traceability links between commits and bug reports particularly challenging. In this work, we aim at improving the effectiveness of existing bug linking techniques by utilizing rich contextual information. We rely on a recently proposed approach, namely ChangeScribe, which generates commit messages containing rich contextual information by using code summarization techniques. Our approach then extracts features from these automatically generated commit messages and bug reports, and inputs them into a classification technique that creates a discriminative model used to predict if a link exists between a commit message and a bug report. We compared our approach, coined as RCLinker (Rich Context Linker), to MLink, which is an existing state-of-the-art bug linking approach. Our experiment results on bug reports from six software projects show that RCLinker outperforms MLink in terms of F-measure by 138.66%.", "title": "" }, { "docid": "a1c859b44c46ebf4d2d413f4303cb4f7", "text": "We study the parsing complexity of Combinatory Categorial Grammar (CCG) in the formalism of Vijay-Shanker and Weir (1994). As our main result, we prove that any parsing algorithm for this formalism will take in the worst case exponential time when the size of the grammar, and not only the length of the input sentence, is included in the analysis. This sets the formalism of Vijay-Shanker andWeir (1994) apart from weakly equivalent formalisms such as Tree-Adjoining Grammar (TAG), for which parsing can be performed in time polynomial in the combined size of grammar and input sentence. Our results contribute to a refined understanding of the class of mildly context-sensitive grammars, and inform the search for new, mildly context-sensitive versions of CCG.", "title": "" }, { "docid": "4fb6b884b22962c6884bd94f8b76f6f2", "text": "This paper describes a novel motion estimation algorithm for floating base manipulators that utilizes low-cost inertial measurement units (IMUs) containing a three-axis gyroscope and a three-axis accelerometer. Four strap-down microelectromechanical system (MEMS) IMUs are mounted on each link to form a virtual IMU whose body's fixed frame is located at the center of the joint rotation. An extended Kalman filter (EKF) and a complementary filter are used to develop a virtual IMU by fusing together the output of four IMUs. The novelty of the proposed algorithm is that no forward kinematic model that requires data flow from previous joints is needed. The measured results obtained from the planar motion of a hydraulic arm show that the accuracy of the estimation of the joint angle is within ± 1 degree and that the root mean square error is less than 0.5 degree.", "title": "" }, { "docid": "5c772b272bbbd8a19af1f2960a44be18", "text": "The American Association of Clinical Endocrinologists and American Association of Endocrine Surgeons Medical Guidelines for the Management of Adrenal Incidentalomas are systematically developed statements to assist health care providers in medical decision making for specific clinical conditions. Most of the content herein is based on literature reviews. In areas of uncertainty, professional judgment was applied. These guidelines are a working document that reflects the state of the field at the time of publication. 
Because rapid changes in this area are expected, periodic revisions are inevitable. We encourage medical professionals to use this information in conjunction with their best clinical judgment. The presented recommendations may not be appropriate in all situations. Any decision by practitioners to apply these guidelines must be made in light of local resources and individual circumstances.", "title": "" }, { "docid": "cc78d1482412669e05f57e13cbc1c59f", "text": "We present a method to learn and propagate shape placements in 2D polygonal scenes from a few examples provided by a user. The placement of a shape is modeled as an oriented bounding box. Simple geometric relationships between this bounding box and nearby scene polygons define a feature set for the placement. The feature sets of all example placements are then used to learn a probabilistic model over all possible placements and scenes. With this model, we can generate a new set of placements with similar geometric relationships in any given scene. We introduce extensions that enable propagation and generation of shapes in 3D scenes, as well as the application of a learned modeling session to large scenes without additional user interaction. These concepts allow us to generate complex scenes with thousands of objects with relatively little user interaction.", "title": "" }, { "docid": "fb8638c46ca5bb4a46b1556a2504416d", "text": "In this paper we investigate how a VANET-based traffic information system can overcome the two key problems of strictly limited bandwidth and minimal initial deployment. First, we present a domain specific aggregation scheme in order to minimize the required overall bandwidth. Then we propose a genetic algorithm which is able to identify good positions for static roadside units in order to cope with the highly partitioned nature of a VANET in an early deployment stage. A tailored toolchain allows to optimize the placement with respect to an application-centric objective function, based on travel time savings. By means of simulation we assess the performance of the resulting traffic information system and the optimization strategy.", "title": "" } ]
scidocsrr
d5e15ac864231fcbcd8823b9ed7b70b2
Design and Dynamic Model of a Frog-inspired Swimming Robot Powered by Pneumatic Muscles
[ { "docid": "30f48021bca12899d6f2e012e93ba12d", "text": "There are several locomotion mechanisms in Nature. The study of mechanics of any locomotion is very useful for scientists and researchers. Many locomotion principles from Nature have been adapted in robotics. There are several species which are capable of multimode locomotion such as walking and swimming, and flying etc. Frogs are such species, capable of jumping, walking, and swimming. Multimode locomotion is important for robots to work in unknown environment. Frogs are widely known as good multimode locomotors. Webbed feet help them to swim efficiently in water. This paper presents the study of frog's swimming locomotion and adapting the webbed feet for swimming locomotion of the robots. A simple mechanical model of robotic leg with webbed foot, which can be used for multi-mode locomotion and robotic frog, is put forward. All the joints of the legs are designed to be driven by tendon-pulley arrangement with the actuators mounted on the body, which allows the legs to be lighter and compact.", "title": "" } ]
[ { "docid": "50ffd544ab676a0b3c17802734a9fd9a", "text": "PSDVec is a Python/Perl toolbox that learns word embeddings, i.e. the mapping of words in a natural language to continuous vectors which encode the semantic/syntactic regularities between the words. PSDVec implements a word embedding learning method based on a weighted low-rank positive semidefinite approximation. To scale up the learning process, we implement a blockwise online learning algorithm to learn the embeddings incrementally. This strategy greatly reduces the learning time of word embeddings on a large vocabulary, and can learn the embeddings of new words without re-learning the whole vocabulary. On 9 word similarity/analogy benchmark sets and 2 Natural Language Processing (NLP) tasks, PSDVec produces embeddings that has the best average performance among popular word embedding tools. PSDVec provides a new option for NLP practitioners. © 2016 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "1613f8b73465d52a3e850c894578ef2a", "text": "In this paper, we evaluate the performance of Multicarrier-Low Density Spreading Multiple Access (MC-LDSMA) as a multiple access technique for mobile communication systems. The MC-LDSMA technique is compared with current multiple access techniques, OFDMA and SC-FDMA. The performance is evaluated in terms of cubic metric, block error rate, spectral efficiency and fairness. The aim is to investigate the expected gains of using MC-LDSMA in the uplink for next generation cellular systems. The simulation results of the link and system-level performance evaluation show that MC-LDSMA has significant performance improvements over SC-FDMA and OFDMA. It is shown that using MC-LDSMA can considerably reduce the required transmission power and increase the spectral efficiency and fairness among the users.", "title": "" }, { "docid": "51215220471f8f7f4afd68c1a27b5809", "text": "The unauthorized modification and subsequent misuse of software is often referred to as software cracking. Usually, cracking requires disabling one or more software features that enforce policies (of access, usage, dissemination, etc.) related to the software. Because there is value and/or notoriety to be gained by accessing valuable software capabilities, cracking continues to be common and is a growing problem. To combat cracking, anti-tamper (AT) technologies have been developed to protect valuable software. Both hardware and software AT technologies aim to make software more resistant against attack and protect critical program elements. However, before discussing the various AT technologies, we need to know the adversary's goals. What do software crackers hope to achieve? Their purposes vary, and typically include one or more of the following: • Gaining unauthorized access. The attacker's goal is to disable the software access control mechanisms built into the software. After doing so, the attacker can make and distribute illegal copies whose copy protection or usage control mechanisms have been disabled – this is the familiar software piracy problem. If the cracked software provides access to classified data, then the attacker's real goal is not the software itself, but the data that is accessible through the software. The attacker sometimes aims at modifying or unlocking specific functionality in the program, e.g., a demo or export version of software is often a deliberately degraded version of what is otherwise fully functional software.
The attacker then seeks to make it fully functional by re-enabling the missing features. • Reverse engineering. The attacker aims to understand enough about the software to steal key routines, to gain access to proprietary intellectual property , or to carry out code-lifting, which consists of reusing a crucial part of the code (without necessarily understanding the internals of how it works) in some other software. Good programming practices, while they facilitate software engineering, also tend to simultaneously make it easier to carry out reverse engineering attacks. These attacks are potentially very costly to the original software developer as they allow a competitor (or an enemy) to nullify the develop-er's competitive advantage by rapidly closing a technology gap through insights gleaned from examining the software. • Violating code integrity. This familiar attack consists of either injecting malicious code (malware) into a program , injecting code that is not malevolent but illegally enhances a pro-gram's functionality, or otherwise sub-verting a program so it performs new and …", "title": "" }, { "docid": "f69ba8c401cd61057888dfa023bfee30", "text": "Since its introduction, the Nintendo Wii remote has become one of the world's most sophisticated and common input devices. Combining its impressive capability with a low cost and high degree of accessibility make it an ideal platform for exploring a variety of interaction research concepts. The author describes the technology inside the Wii remote, existing interaction techniques, what's involved in creating custom applications, and several projects ranging from multiobject tracking to spatial augmented reality that challenge the way its developers meant it to be used.", "title": "" }, { "docid": "cf2e23cddb72b02d1cca83b4c3bf17a8", "text": "This article seeks to reconceptualize the relationship between flexibility and efficiency. Much organization theory argues that efficiency requires bureaucracy, that bureaucracy impedes flexibility, and that organizations therefore confront a tradeoff between efficiency and flexibility. Some researchers have challenged this line of reasoning, arguing that organizations can shift the efficiency/flexibility tradeoff to attain both superior efficiency and superior flexibility. Others have pointed out numerous obstacles to successfully shifting the tradeoff. Seeking to advance our understanding of these obstacles and how they might be overcome, we analyze an auto assembly plant that appears to be far above average industry performance in both efficiency and flexibility. NUMMI, a Toyota subsidiary located in Fremont, California, relied on a highly bureaucratic organization to achieve its high efficiency. Analyzing two recent major model changes, we find that NUMMI used four mechanisms to support its exceptional flexibility/efficiency combination. First, metaroutines (routines for changing other routines) facilitated the efficient performance of nonroutine tasks. Second, both workers and suppliers contributed to nonroutine tasks while they worked in routine production. Third, routine and nonroutine tasks were separated temporally, and workers switched sequentially between them. Finally, novel forms of organizational partitioning enabled differentiated subunits to work in parallel on routine and nonroutine tasks. NUMMI’s success with these four mechanisms depended on several features of the broader organizational context, most notably training, trust, and leadership. 
(Flexibility; Bureaucracy; Tradeoffs; Routines; Metaroutines; Ambidexterity; Switching; Partitioning; Trust) Introduction The postulate of a tradeoff between efficiency and flexibility is one of the more enduring ideas in organizational theory. Thompson (1967, p. 15) described it as a central “paradox of administration.” Managers must choose between organization designs suited to routine, repetitive tasks and those suited to nonroutine, innovative tasks. However, as competitive rivalry intensifies, a growing number of firms are trying to improve simultaneously in efficiency- and flexibility-related dimensions (de Meyer et al. 1989, Volberda 1996, Organization Science 1996). How can firms shift the terms of the efficiency-flexibility tradeoff? To explore how firms can create simultaneously superior efficiency and superior flexibility, we examine an exceptional auto assembly plant, NUMMI, a joint venture of Toyota and GM whose day-to-day operations were under Toyota control. Like other Japanese auto transplants in the U.S., NUMMI far outpaced its Big Three counterparts simultaneously in efficiency and quality and in model change flexibility (Womack et al. 1990, Business Week 1994). In the next section we set the theoretical stage by reviewing prior research on the efficiency/flexibility tradeoff. Prior research suggests four mechanisms by which organizations can shift the tradeoff as well as some potentially serious impediments to each mechanism. We then describe our research methods and the NUMMI organization. The following sections first outline in summary form the results of this investigation, then provide the supporting evidence in our analysis of two major model changeovers at NUMMI and how they differed from traditional U.S. Big Three practice. A discussion section identifies some conditions underlying NUMMI’s success in shifting the tradeoff and in overcoming the potential impediments to the four trade-off shifting mechanisms. Flexibility Versus Efficiency? There are many kinds of flexibility and indeed a sizable literature devoted to competing typologies of the various kinds of flexibility (see overview by Sethi and Sethi 1990). However, from an organizational point of view, all forms of flexibility present a common challenge: efficiency requires a bureaucratic form of organization with high levels of standardization, formalization, specialization, hierarchy, and staffs; but these features of bureaucracy impede the fluid process of mutual adjustment required for flexibility; and organizations therefore confront a tradeoff between efficiency and flexibility (Knott 1996, Kurke 1988). Contingency theory argues that organizations will be more effective if they are designed to fit the nature of their primary task. Specifically, organizations should adopt a mechanistic form if their task is simple and stable and their goal is efficiency, and they should adopt an organic form if their task is complex and changing and their goal is therefore flexibility (Burns and Stalker 1961). Organizational theory presents a string of contrasts reflecting this mechanistic/organic polarity: machine bureaucracies vs.
adhocracies (Mintzberg 1979); adaptive learning based on formal rules and hierarchical controls versus generative learning relying on shared values, teams, and lateral communication (McGill et al. 1992); generalists who pursue opportunistic r-strategies and rely on excess capacity to do well in open environments versus specialists that are more likely to survive in competitive environments by pursuing k-strategies that trade less flexibility for greater efficiency (Hannan and Freeman 1977, 1989). March (1991) and Levinthal and March (1993) make the parallel argument that organizations must choose between structures that facilitate exploration—the search for new knowledge—and those that facilitate exploitation—the use of existing knowledge. Social-psychological theories provide a rationale for this polarization. Merton (1958) shows how goal displacement in bureaucratic organizations generates rigidity. Argyris and Schon (1978) show how defensiveness makes single-loop learning—focused on pursuing given goals more effectively (read: efficiency)—an impediment to double-loop learning—focused on defining new task goals (read: flexibility). Thus, argues Weick (1969), adaptation precludes adaptability. This tradeoff view has been echoed in other disciplines. Standard economic theory postulates a tradeoff between flexibility and average costs (e.g., Stigler 1939, Hart 1942). Further extending this line of thought, Klein (1984) contrasts static and dynamic efficiency. Operations management researchers have long argued that productivity and flexibility or innovation trade off against each other in manufacturing plant performance (Abernathy 1978; see reviews by Gerwin 1993, Suárez et al. 1996, Corrêa 1994). Hayes and Wheelwright’s (1984) product/process matrix postulates a close correspondence between product variety and process efficiency (see Safizadeh et al. 1996). Strategy researchers such as Ghemawat and Costa (1993) argue that firms must choose between a strategy of dynamic effectiveness through flexibility and static efficiency through more rigid discipline. In support of a key corollary of the tradeoff postulate articulated in the organization theory literature, they argue that in general the optimal choice is at one end or the other of the spectrum, since a firm pursuing both goals simultaneously would have to mix organizational elements appropriate to each strategy and thus lose the benefit of the complementarities that typically obtain between the various elements of each type of organization. They would thus be “stuck in the middle” (Porter 1980). Beyond the Tradeoff? Empirical evidence for the tradeoff postulate is, however, remarkably weak. Take, for example, product mix flexibility. On the one hand, Hayes and Wheelwright (1984) and Skinner (1985) provide anecdotal evidence that more focused factories—ones producing a narrower range of products—are more efficient. In their survey of plants across a range of manufacturing industries, Safizadeh et al. (1996) confirmed that in general more product variety was associated with reliance on job-shop rather than continuous processes.
On the other hand, Kekre and Srinivasan’s (1990) study of companies selling industrial products found that a broader product line was significantly associated with lower manufacturing costs. MacDuffie et al. (1996) found that greater product variety had no discernible effect on auto assembly plant productivity. Suárez et al. (1996) found that product mix flexibility had no discernible relationship to costs or quality in printed circuit board assembly. Brush and Karnani (1996) found only three out of 19 manufacturing industries showed statistically significant productivity returns to narrower product lines, while two industries showed significant returns to broader product lines. Research by Fleischman (1996) on employment flexibility revealed a similar pattern: within 2-digit SIC code industries that face relatively homogeneous levels of expected volatility of employment, the employment adjustment costs of the least flexible 4-digit industries were anywhere between 4 and 10 times greater than the adjustment costs found in the most flexible 4-digit industries. Some authors argue that the era of tradeoffs is behind us (Ferdows and de Meyer 1990). Hypercompetitive environments force firms to compete on several dimensions at once (Organization Science 1996), and flexible technologies enable firms to shift the tradeoff curve just as quickly as they could move to a different point on the existing tr", "title": "" }, { "docid": "7c9aba06418b51a90f1f3d97c3e3f83a", "text": "BACKGROUND\nResearch indicates that music therapy can improve social behaviors and joint attention in children with Autism Spectrum Disorder (ASD); however, more research on the use of music therapy interventions for social skills is needed to determine the impact of group music therapy.\n\n\nOBJECTIVE\nTo examine the effects of a music therapy group intervention on eye gaze, joint attention, and communication in children with ASD.\n\n\nMETHOD\nSeventeen children, ages 6 to 9, with a diagnosis of ASD were randomly assigned to the music therapy group (MTG) or the no-music social skills group (SSG). Children participated in ten 50-minute group sessions over a period of 5 weeks. All group sessions were designed to target social skills. The Social Responsiveness Scale (SRS), the Autism Treatment Evaluation Checklist (ATEC), and video analysis of sessions were used to evaluate changes in social behavior.\n\n\nRESULTS\nThere were significant between-group differences for joint attention with peers and eye gaze towards persons, with participants in the MTG demonstrating greater gains. There were no significant between-group differences for initiation of communication, response to communication, or social withdraw/behaviors. There was a significant interaction between time and group for SRS scores, with improvements for the MTG but not the SSG. Scores on the ATEC did not differ over time between the MTG and SSG.\n\n\nCONCLUSIONS\nThe results of this study support further research on the use of music therapy group interventions for social skills in children with ASD. Statistical results demonstrate initial support for the use of music therapy social groups to develop joint attention.", "title": "" }, { "docid": "53ab46387cb1c04e193d2452c03a95ad", "text": "Real time control of five-axis machine tools requires smooth generation of feed, acceleration and jerk in CNC systems without violating the physical limits of the drives.
This paper presents a feed scheduling algorithm for CNC systems to minimize the machining time for five-axis contour machining of sculptured surfaces. The variation of the feed along the five-axis tool-path is expressed in a cubic B-spline form. The velocity, acceleration and jerk limits of the five axes are considered in finding the most optimal feed along the toolpath in order to ensure smooth and linear operation of the servo drives with minimal tracking error. The time optimal feed motion is obtained by iteratively modulating the feed control points of the B-spline to maximize the feed along the tool-path without violating the programmed feed and the drives’ physical limits. Long tool-paths are handled efficiently by applying a moving window technique. The improvement in the productivity and linear operation of the five drives is demonstrated with five-axis simulations and experiments on a CNC machine tool. r 2008 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "c841938f03a07fffc5150fbe18f8f740", "text": "Ensemble modeling is now a well-established means for improving prediction accuracy; it enables you to average out noise from diverse models and thereby enhance the generalizable signal. Basic stacked ensemble techniques combine predictions from multiple machine learning algorithms and use these predictions as inputs to second-level learning models. This paper shows how you can generate a diverse set of models by various methods such as forest, gradient boosted decision trees, factorization machines, and logistic regression and then combine them with stacked-ensemble techniques such as hill climbing, gradient boosting, and nonnegative least squares in SAS Visual Data Mining and Machine Learning. The application of these techniques to real-world big data problems demonstrates how using stacked ensembles produces greater prediction accuracy and robustness than do individual models. The approach is powerful and compelling enough to alter your initial data mining mindset from finding the single best model to finding a collection of really good complementary models. It does involve additional cost due both to training a large number of models and the proper use of cross validation to avoid overfitting. This paper shows how to efficiently handle this computational expense in a modern SAS environment and how to manage an ensemble workflow by using parallel computation in a distributed framework.", "title": "" }, { "docid": "2a4cb6dac01c4388b4b8d8a80e30fc2b", "text": "Chemotaxis toward amino-acids results from the suppression of directional changes which occur spontaneously in isotropic solutions.", "title": "" }, { "docid": "4482146da978a89920e128470e3b8567", "text": "Glaucoma is the second leading cause of blindness. Glaucoma can be diagnosed through measurement of neuro-retinal optic cup-to-disc ratio (CDR). Automatic calculation of optic cup boundary is challenging due to the interweavement of blood vessels with the surrounding tissues around the cup. A Convex Hull based Neuro-Retinal Optic Cup Ellipse Optimization algorithm improves the accuracy of the boundary estimation. The algorithm’s effectiveness is demonstrated on 70 clinical patient’s data set collected from Singapore Eye Research Institute. The root mean squared error of the new algorithm is 43% better than the ARGALI system which is the state-of-the-art. 
This further leads to a large clinical evaluation of the algorithm involving 15 thousand patients from Australia and Singapore.", "title": "" }, { "docid": "23def38b89358bc1090412e127c7ec2b", "text": "We describe the design of four ornithopters ranging in wing span from 10 cm to 40 cm, and in weight from 5 g to 45 g. The controllability and power supply are two major considerations, so we compare the efficiency and characteristics between different types of subsystems such as gearbox and tail shape. Our current ornithopter is radio-controlled with inbuilt visual sensing and capable of takeoff and landing. We also concentrate on its wing efficiency based on design inspired by a real insect wing and consider that aspects of insect flight such as delayed stall and wake capture are essential at such small size. Most importantly, the advance ratio, controlled either by enlarging the wing beat amplitude or raising the wing beat frequency, is the most significant factor in an ornithopter which mimics an insect.", "title": "" }, { "docid": "4f43a692ff8f6aed3a3fc4521c86d35e", "text": "LEARNING OBJECTIVES\nAfter reading this article, the participant should be able to: 1. Understand the challenges in restoring volume and structural integrity in rhinoplasty. 2. Identify the appropriate uses of various autografts in aesthetic and reconstructive rhinoplasty (septal cartilage, auricular cartilage, costal cartilage, calvarial and nasal bone, and olecranon process of the ulna). 3. Identify the advantages and disadvantages of each of these autografts.\n\n\nSUMMARY\nThis review specifically addresses the use of autologous grafts in rhinoplasty. Autologous materials remain the preferred graft material for use in rhinoplasty because of their high biocompatibility and low risk of infection and extrusion. However, these advantages should be counterbalanced with the concerns of donor-site morbidity, graft availability, and graft resorption.", "title": "" }, { "docid": "5b36ec4a7282397402d582de7254d0c1", "text": "Recurrent neural network language models (RNNLMs) have becoming increasingly popular in many applications such as automatic speech recognition (ASR). Significant performance improvements in both perplexity and word error rate over standard n-gram LMs have been widely reported on ASR tasks. In contrast, published research on using RNNLMs for keyword search systems has been relatively limited. In this paper the application of RNNLMs for the IARPA Babel keyword search task is investigated. In order to supplement the limited acoustic transcription data, large amounts of web texts are also used in large vocabulary design and LM training. Various training criteria were then explored to improved RNNLMs' efficiency in both training and evaluation. Significant and consistent improvements on both keyword search and ASR tasks were obtained across all languages.", "title": "" }, { "docid": "7b27d8b8f05833888b9edacf9ace0a18", "text": "This paper reports results from a study on the adoption of an information visualization system by administrative data analysts. Despite the fact that the system was neither fully integrated with their current software tools nor with their existing data analysis practices, analysts identified a number of key benefits that visualization systems provide to their work. These benefits for the most part occurred when analysts went beyond their habitual and well-mastered data analysis routines and engaged in creative discovery processes. 
We analyze the conditions under which these benefits arose, to inform the design of visualization systems that can better assist the work of administrative data analysts.", "title": "" }, { "docid": "0fe95e1e3f848d8ed1bc4b54c9ccfc5d", "text": "Procedural knowledge is the knowledge required to perform certain tasks, and forms an important part of expertise. A major source of procedural knowledge is natural language instructions. While these readable instructions have been useful learning resources for human, they are not interpretable by machines. Automatically acquiring procedural knowledge in machine interpretable formats from instructions has become an increasingly popular research topic due to their potential applications in process automation. However, it has been insufficiently addressed. This paper presents an approach and an implemented system to assist users to automatically acquire procedural knowledge in structured forms from instructions. We introduce a generic semantic representation of procedures for analysing instructions, using which natural language techniques are applied to automatically extract structured procedures from instructions. The method is evaluated in three domains to justify the generality of the proposed semantic representation as well as the effectiveness of the implemented automatic system.", "title": "" }, { "docid": "445685897a2e7c9c5b44a713690bd0a8", "text": "Maximum power point tracking (MPPT) is an integral part of a system of energy conversion using photovoltaic (PV) arrays. The power-voltage characteristic of PV arrays operating under partial shading conditions exhibits multiple local maximum power points (LMPPs). In this paper, a new method has been presented to track the global maximum power point (GMPP) of PV. Compared with the past proposed global MPPT techniques, the method proposed in this paper has the advantages of determining whether partial shading is present, calculating the number of peaks on P-V curves, and predicting the locations of GMPP and LMPP. The new method can quickly find GMPP, and avoid much energy loss due to blind scan. The experimental results verify that the proposed method guarantees convergence to the global MPP under partial shading conditions.", "title": "" }, { "docid": "4c729baceae052361decd51321e0b5bc", "text": "Learning to hash has attracted broad research interests in recent computer vision and machine learning studies, due to its ability to accomplish efficient approximate nearest neighbor search. However, the closely related task, maximum inner product search (MIPS), has rarely been studied in this literature. To facilitate the MIPS study, in this paper, we introduce a general binary coding framework based on asymmetric hash functions, named asymmetric inner-product binary coding (AIBC). In particular, AIBC learns two different hash functions, which can reveal the inner products between original data vectors by the generated binary vectors. Although conceptually simple, the associated optimization is very challenging due to the highly nonsmooth nature of the objective that involves sign functions. We tackle the nonsmooth optimization in an alternating manner, by which each single coding function is optimized in an efficient discrete manner. We also simplify the objective by discarding the quadratic regularization term which significantly boosts the learning efficiency. Both problems are optimized in an effective discrete way without continuous relaxations, which produces high-quality hash codes. 
In addition, we extend the AIBC approach to the supervised hashing scenario, where the inner products of learned binary codes are forced to fit the supervised similarities. Extensive experiments on several benchmark image retrieval databases validate the superiority of the AIBC approaches over many recently proposed hashing algorithms.", "title": "" }, { "docid": "b4efebd49c8dd2756a4c2fb86b854798", "text": "Mobile technologies (including handheld and wearable devices) have the potential to enhance learning activities from basic medical undergraduate education through residency and beyond. In order to use these technologies successfully, medical educators need to be aware of the underpinning socio-theoretical concepts that influence their usage, the pre-clinical and clinical educational environment in which the educational activities occur, and the practical possibilities and limitations of their usage. This Guide builds upon the previous AMEE Guide to e-Learning in medical education by providing medical teachers with conceptual frameworks and practical examples of using mobile technologies in medical education. The goal is to help medical teachers to use these concepts and technologies at all levels of medical education to improve the education of medical and healthcare personnel, and ultimately contribute to improved patient healthcare. This Guide begins by reviewing some of the technological changes that have occurred in recent years, and then examines the theoretical basis (both social and educational) for understanding mobile technology usage. From there, the Guide progresses through a hierarchy of institutional, teacher and learner needs, identifying issues, problems and solutions for the effective use of mobile technology in medical education. This Guide ends with a brief look to the future.", "title": "" }, { "docid": "bcb756857adef42264eab0f1361f8be7", "text": "The problem of multi-class boosting is considered. A new fra mework, based on multi-dimensional codewords and predictors is introduced . The optimal set of codewords is derived, and a margin enforcing loss proposed. The resulting risk is minimized by gradient descent on a multidimensional functi onal space. Two algorithms are proposed: 1) CD-MCBoost, based on coordinate des cent, updates one predictor component at a time, 2) GD-MCBoost, based on gradi ent descent, updates all components jointly. The algorithms differ in the w ak learners that they support but are both shown to be 1) Bayes consistent, 2) margi n enforcing, and 3) convergent to the global minimum of the risk. They also red uce to AdaBoost when there are only two classes. Experiments show that both m et ods outperform previous multiclass boosting approaches on a number of data sets.", "title": "" }, { "docid": "42e2a8b8c1b855fba201e3421639d80d", "text": "Fraudulent behaviors in Google’s Android app market fuel search rank abuse and malware proliferation. We present FairPlay, a novel system that uncovers both malware and search rank fraud apps, by picking out trails that fraudsters leave behind. To identify suspicious apps, FairPlay’s PCF algorithm correlates review activities and uniquely combines detected review relations with linguistic and behavioral signals gleaned from longitudinal Google Play app data. We contribute a new longitudinal app dataset to the community, which consists of over 87K apps, 2.9M reviews, and 2.4M reviewers, collected over half a year. 
FairPlay achieves over 95% accuracy in classifying gold standard datasets of malware, fraudulent and legitimate apps. We show that 75% of the identified malware apps engage in search rank fraud. FairPlay discovers hundreds of fraudulent apps that currently evade Google Bouncer’s detection technology, and reveals a new type of attack campaign, where users are harassed into writing positive reviews, and install and review other apps.", "title": "" } ]
scidocsrr
dc8653284ba16181ef3ccced89a9f403
Key elements to enable millimeter wave communications for 5G wireless systems
[ { "docid": "c7f38e2284ad6f1258fdfda3417a6e14", "text": "Millimeter wave (mmWave) systems must overcome heavy signal attenuation to support high-throughput wireless communication links. The small wavelength in mmWave systems enables beamforming using large antenna arrays to combat path loss with directional transmission. Beamforming with multiple data streams, known as precoding, can be used to achieve even higher performance. Both beamforming and precoding are done at baseband in traditional microwave systems. In mmWave systems, however, the high cost of mixed-signal and radio frequency chains (RF) makes operating in the passband and analog domains attractive. This hardware limitation places additional constraints on precoder design. In this paper, we consider single user beamforming and precoding in mmWave systems with large arrays. We exploit the structure of mmWave channels to formulate the precoder design problem as a sparsity constrained least squares problem. Using the principle of basis pursuit, we develop a precoding algorithm that approximates the optimal unconstrained precoder using a low dimensional basis representation that can be efficiently implemented in RF hardware. We present numerical results on the performance of the proposed algorithm and show that it allows mmWave systems to approach waterfilling capacity.", "title": "" }, { "docid": "f1830518839bf8cf2f1ade7a1fad86b1", "text": "Multiple-input-multiple-output (MIMO) wireless systems are those that have multiple antenna elements at both the transmitter and receiver. They were first investigated by computer simulations in the 1980s. Since that time, interest in MIMO systems has exploded. They are now being used for third-generation cellular systems (W-CDMA) and are discussed for future high-performance modes of the highly successful IEEE 802.11 standard for wireless local area networks. MIMO-related topics also occupy a considerable part of today's academic communications research. The multiple antennas in MIMO systems can be exploited in two different ways. One is the creation of a highly effective antenna diversity system; the other is the use of the multiple antennas for the transmission of several parallel data streams to increase the capacity of the system. This article presented an overview of MIMO systems with antenna selection. The transmitter, the receiver, or both use only the signals from a subset of the available antennas. This allows considerable reductions in the hardware expense.", "title": "" }, { "docid": "ed676ff14af6baf9bde3bdb314628222", "text": "The ever growing traffic explosion in mobile communications has recently drawn increased attention to the large amount of underutilized spectrum in the millimeter-wave frequency bands as a potentially viable solution for achieving tens to hundreds of times more capacity compared to current 4G cellular networks. Historically, mmWave bands were ruled out for cellular usage mainly due to concerns regarding short-range and non-line-of-sight coverage issues. In this article, we present recent results from channel measurement campaigns and the development of advanced algorithms and a prototype, which clearly demonstrate that the mmWave band may indeed be a worthy candidate for next generation (5G) cellular systems. The results of channel measurements carried out in both the United States and Korea are summarized along with the actual free space propagation measurements in an anechoic chamber. 
Then a novel hybrid beamforming scheme and its link- and system-level simulation results are presented. Finally, recent results from our mmWave prototyping efforts along with indoor and outdoor test results are described to assert the feasibility of mmWave bands for cellular usage.", "title": "" }, { "docid": "68e714e5a3e92924c63167781149e628", "text": "This paper presents a millimeter wave wideband differential line to waveguide transition using a short ended slot line. The slot line connected in parallel to the rectangular waveguide can effectively compensate the frequency dependence of the susceptance in the waveguide. Thus it is suitable to achieve a wideband characteristic together with a simpler structure. It is experimentally demonstrated that the proposed transitions have the relative bandwidth of 20.2 % with respect to -10 dB reflection, which is a significant wideband characteristic compared with the conventional transition's bandwidth of 11%.", "title": "" } ]
[ { "docid": "27e0059fb9be7ada93fd2d1e01149582", "text": "OBJECTIVE\nTo assess the psychosocial impact of psoriatic arthritis (PsA), describe how health-related quality of life (QoL) is affected in patients with PsA, discuss measures used to evaluate the psychosocial impact of PsA, and review studies examining the effect of therapy on QoL.\n\n\nMETHODS\nA targeted review on the impact of PsA on QoL and the role of tailored psychosocial management in reducing the psychosocial burden of the disease was performed. PubMed literature searches were conducted using the terms PsA, psychosocial burden, QoL, and mood/behavioral changes. Articles were deemed relevant if they presented information regarding the psychosocial impact of PsA, methods used to evaluate these impacts, or ways to manage/improve management of PsA and its resulting comorbidities. The findings of this literature search are descriptively reviewed and the authors׳ expert opinion on their interpretation is provided.\n\n\nRESULTS\nThe psychosocial burden of PsA negatively affects QoL. Patients suffer from sleep disorders, fatigue, low-level stress, depression and mood/behavioral changes, poor body image, and reduced work productivity. Additionally, each patient responds to pain differently, depending on a variety of psychological factors including personality structure, cognition, and attention to pain. Strategies for evaluating the burdens associated with PsA and the results of properly managing patients with PsA are described.\n\n\nCONCLUSIONS\nPsA is associated with a considerable psychosocial burden and new assessment tools, specific to PsA, have been developed to help quantify this burden in patients. Future management algorithms of PsA should incorporate appropriate assessment and management of psychological and physical concerns of patients. Furthermore, patients with PsA should be managed by a multidisciplinary team that works in coordination with the patient and their family or caregivers.", "title": "" }, { "docid": "43e39433013ca845703af053e5ef9e11", "text": "This paper presents the proposed design of high power and high efficiency inverter for wireless power transfer systems operating at 13.56 MHz using multiphase resonant inverter and GaN HEMT devices. The high efficiency and the stable of inverter are the main targets of the design. The module design, the power loss analysis and the drive circuit design have been addressed. In experiment, a 3 kW inverter with the efficiency of 96.1% is achieved that significantly improves the efficiency of 13.56 MHz inverter. In near future, a 10 kW inverter with the efficiency of over 95% can be realizable by following this design concept.", "title": "" }, { "docid": "6a01ccb9b2e0066340815752fd05588e", "text": "The microRNA(miRNA)-34a is a key regulator of tumor suppression. It controls the expression of a plethora of target proteins involved in cell cycle, differentiation and apoptosis, and antagonizes processes that are necessary for basic cancer cell viability as well as cancer stemness, metastasis, and chemoresistance. In this review, we focus on the molecular mechanisms of miR-34a-mediated tumor suppression, giving emphasis on the main miR-34a targets, as well as on the principal regulators involved in the modulation of this miRNA. Moreover, we shed light on the miR-34a role in modulating responsiveness to chemotherapy and on the phytonutrients-mediated regulation of miR-34a expression and activity in cancer cells. 
Given the broad anti-oncogenic activity of miR-34a, we also discuss the substantial benefits of a new therapeutic concept based on nanotechnology delivery of miRNA mimics. In fact, the replacement of oncosuppressor miRNAs provides an effective strategy against tumor heterogeneity and the selective RNA-based delivery systems seems to be an excellent platform for a safe and effective targeting of the tumor.", "title": "" }, { "docid": "5728682e998b89cb23b12ba9acc3d993", "text": "Potential field methods are rapidly gaining popularity in obstacle avoidance applications for mobile robots and manipulators. While the potential field principle is particularly attractive because of its elegance and simplicity, substantial shortcomings have been identified as problems that are inherent to this principle. Based upon mathematical analysis, this paper presents a systematic criticism of the inherent problems. The heart of this analysis is a differential equation that combines the robot and the environment into a unified system. The identified problems are discussed in qualitative and theoretical terms and documented with experimental results from actual mobile robot runs.", "title": "" }, { "docid": "4fb0803aa12b7dfb2b3661822ea67c2b", "text": "In this paper we present a broad overview of the last 40 years of research on cognitive architectures. Although the number of existing architectures is nearing several hundred, most of the existing surveys do not reflect this growth and focus on a handful of well-established architectures. Thus, in this survey we wanted to shift the focus towards a more inclusive and high-level overview of the research on cognitive architectures. Our final set of 85 architectures includes 49 that are still actively developed, and borrow from a diverse set of disciplines, spanning areas from psychoanalysis to neuroscience. To keep the length of this paper within reasonable limits we discuss only the core cognitive abilities, such as perception, attention mechanisms, action selection, memory, learning and reasoning. In order to assess the breadth of practical applications of cognitive architectures we gathered information on over 900 practical projects implemented using the cognitive architectures in our list. We use various visualization techniques to highlight overall trends in the development of the field. In addition to summarizing the current state-of-the-art in the cognitive architecture research, this survey describes a variety of methods and ideas that have been tried and their relative success in modeling human cognitive abilities, as well as which aspects of cognitive behavior need more research with respect to their mechanistic counterparts and thus can further inform how cognitive science might progress.", "title": "" }, { "docid": "c610fdf29448f7ebedfa2f47a307039c", "text": "Extrusion-based bioprinting (EBB) is a rapidly growing technology that has made substantial progress during the last decade. It has great versatility in printing various biologics, including cells, tissues, tissue constructs, organ modules and microfluidic devices, in applications from basic research and pharmaceutics to clinics. Despite the great benefits and flexibility in printing a wide range of bioinks, including tissue spheroids, tissue strands, cell pellets, decellularized matrix components, micro-carriers and cell-laden hydrogels, the technology currently faces several limitations and challenges. 
These include impediments to organ fabrication, the limited resolution of printed features, the need for advanced bioprinting solutions to transition the technology bench to bedside, the necessity of new bioink development for rapid, safe and sustainable delivery of cells in a biomimetically organized microenvironment, and regulatory concerns to transform the technology into a product. This paper, presenting a first-time comprehensive review of EBB, discusses the current advancements in EBB technology and highlights future directions to transform the technology to generate viable end products for tissue engineering and regenerative medicine.", "title": "" }, { "docid": "b281f1244dbf31c492d34f0314f8b3e2", "text": "CONTEXT\nThe National Consensus Project for Quality Palliative Care includes spiritual care as one of the eight clinical practice domains. There are very few standardized spirituality history tools.\n\n\nOBJECTIVES\nThe purpose of this pilot study was to test the feasibility for the Faith, Importance and Influence, Community, and Address (FICA) Spiritual History Tool in clinical settings. Correlates between the FICA qualitative data and quality of life (QOL) quantitative data also were examined to provide additional insight into spiritual concerns.\n\n\nMETHODS\nThe framework of the FICA tool includes Faith or belief, Importance of spirituality, individual's spiritual Community, and interventions to Address spiritual needs. Patients with solid tumors were recruited from ambulatory clinics of a comprehensive cancer center. Items assessing aspects of spirituality within the Functional Assessment of Cancer Therapy QOL tools were used, and all patients were assessed using the FICA. The sample (n=76) had a mean age of 57, and almost half were of diverse religions.\n\n\nRESULTS\nMost patients rated faith or belief as very important in their lives (mean 8.4; 0-10 scale). FICA quantitative ratings and qualitative comments were closely correlated with items from the QOL tools assessing aspects of spirituality.\n\n\nCONCLUSION\nFindings suggest that the FICA tool is a feasible tool for clinical assessment of spirituality. Addressing spiritual needs and concerns in clinical settings is critical in enhancing QOL. Additional use and evaluation by clinicians of the FICA Spiritual Assessment Tool in usual practice settings are needed.", "title": "" }, { "docid": "69e2cd21ca9b5d14a09820b83f77c105", "text": "Stochastic Gradient Descent (SGD) is an important algorithm in machine learning. With constant learning rates, it is a stochastic process that, after an initial phase of convergence, generates samples from a stationary distribution. We show that SGD with constant rates can be effectively used as an approximate posterior inference algorithm for probabilistic modeling. Specifically, we show how to adjust the tuning parameters of SGD such as to match the resulting stationary distribution to the posterior. This analysis rests on interpreting SGD as a continuoustime stochastic process and then minimizing the Kullback-Leibler divergence between its stationary distribution and the target posterior. (This is in the spirit of variational inference.) In more detail, we model SGD as a multivariate Ornstein-Uhlenbeck process and then use properties of this process to derive the optimal parameters. This theoretical framework also connects SGD to modern scalable inference algorithms; we analyze the recently proposed stochastic gradient Fisher scoring under this perspective. 
We demonstrate that SGD with properly chosen constant rates gives a new way to optimize hyperparameters in probabilistic models.", "title": "" }, { "docid": "3ff9dbdc3a28a55465121cab38c9ad64", "text": "Recent advances in deep learning have made the use of large, deep neural networks with tens of millions of parameters suitable for a number of applications that require real-time processing. The sheer size of these networks can represent a challenging computational burden, even for modern CPUs. For this reason, GPUs are routinely used instead to train and run such networks. This paper is a tutorial for students and researchers on some of the techniques that can be used to reduce this computational cost considerably on modern x86 CPUs. We emphasize data layout, batching of the computation, the use of SSE2 instructions, and particularly leverage SSSE3 and SSE4 fixed-point instructions which provide a 3× improvement over an optimized floating-point baseline. We use speech recognition as an example task, and show that a real-time hybrid hidden Markov model / neural network (HMM/NN) large vocabulary system can be built with a 10× speedup over an unoptimized baseline and a 4× speedup over an aggressively optimized floating-point baseline at no cost in accuracy. The techniques described extend readily to neural network training and provide an effective alternative to the use of specialized hardware.", "title": "" }, { "docid": "2e6f7dbf2e8c22e10e210bb7d7dff503", "text": "In this paper, we present a detailed review on various types of SQL injection attacks, vulnerabilities, and prevention techniques. Alongside presenting our findings from the survey, we also note down future expectations and possible development of countermeasures against SQL injection attacks.", "title": "" }, { "docid": "49f2bfd7f81b4c7afaab578c137686f6", "text": "Lightning and thunderclouds are natural particle accelerators. Avalanches of relativistic runaway electrons, which develop in electric fields within thunderclouds, emit bremsstrahlung γ-rays. These γ-rays have been detected by ground-based observatories, by airborne detectors and as terrestrial γ-ray flashes from space. The energy of the γ-rays is sufficiently high that they can trigger atmospheric photonuclear reactions that produce neutrons and eventually positrons via β+ decay of the unstable radioactive isotopes, most notably 13N, which is generated via 14N + γ → 13N + n, where γ denotes a photon and n a neutron. However, this reaction has hitherto not been observed conclusively, despite increasing observational evidence of neutrons and positrons that are presumably derived from such reactions. Here we report ground-based observations of neutron and positron signals after lightning. During a thunderstorm on 6 February 2017 in Japan, a γ-ray flash with a duration of less than one millisecond was detected at our monitoring sites 0.5–1.7 kilometres away from the lightning. The subsequent γ-ray afterglow subsided quickly, with an exponential decay constant of 40–60 milliseconds, and was followed by prolonged line emission at about 0.511 megaelectronvolts, which lasted for a minute. The observed decay timescale and spectral cutoff at about 10 megaelectronvolts of the γ-ray afterglow are well explained by de-excitation γ-rays from nuclei excited by neutron capture. 
The centre energy of the prolonged line emission corresponds to electron–positron annihilation, providing conclusive evidence of positrons being produced after the lightning.", "title": "" }, { "docid": "8c9e311397d99dddd9a649a2f412604f", "text": "Currently, information security is a significant challenge in the information era because businesses store critical information in databases. Therefore, databases need to be a secure component of an enterprise. Organizations use Intrusion Detection Systems (IDS) as a security infrastructure component, of which a popular implementation is Snort. In this paper, we provide an overview of Snort and evaluate its ability to detect SQL Injection attacks.", "title": "" }, { "docid": "98202fd10302101e12e68b7eda0f4570", "text": "In linguistics, morphology refers to the mental system involved in word formation or to the branch of linguistics that deals with words, their internal structure, and how they are formed. Morphological Analysis is very essential for various automatic natural language processing applications. Sindhi Morphology is much more complex due to the large number of morphological variants. This paper presents the morphological analysis of Sindhi language in which important areas of Sindhi morphemes including structure, function, & nature, categories of words like compound words, prefix words, suffix Words & prefix-suffix words, and writing system are analyzed and reviewed. Moreover, comparative analysis is also carried out to comprehend the formation of Sindhi Morphology. Presented work will help to understand the internal structure of Sindhi words and beneficial for the software developers of Sindhi natural language and speech processing applications.", "title": "" }, { "docid": "98efa74b25284d0ce22038811f9e09e5", "text": "Automatic analysis of malicious binaries is necessary in order to scale with the rapid development and recovery of malware found in the wild. The results of automatic analysis are useful for creating defense systems and understanding the current capabilities of attackers. We propose an approach for automatic dissection of malicious binaries which can answer fundamental questions such as what behavior they exhibit, what are the relationships between their inputs and outputs, and how an attacker may be using the binary. We implement our approach in a system called BitScope. At the core of BitScope is a system which allows us to execute binaries with symbolic inputs. Executing with symbolic inputs allows us to reason about code paths without constraining the analysis to a particular input value. We implement 5 analysis using BitScope, and demonstrate that the analysis can rapidly analyze important properties such as what behaviors the malicious binaries exhibit. For example, BitScope uncovers all commands in typical DDoS zombies and botnet programs, and uncovers significant behavior in just minutes. This work was supported in part by CyLab at Carnegie Mellon under grant DAAD19-02-1-0389 from the Army Research Office, the U.S. Army Research Office under the Cyber-TA Research Grant No. W911NF-06-1-0316, the ITA (International Technology Alliance), CCF-0424422, National Science Foundation Grant Nos. 0311808, 0433540, 0448452, 0627511, and by the IT R&D program of MIC(Ministry of Information and Communication)/IITA(Institute for Information Technology Advancement) [2005-S-606-02, Next Generation Prediction and Response technology for Computer and Network Security Incidents]. 
Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation, the ARO, CMU, or the U.S. Government.", "title": "" }, { "docid": "48653a8de0dd6e881415855e694fc925", "text": "The aim of this study was to compare the use of transcutaneous vs. motor nerve stimulation in the evaluation of low-frequency fatigue. Nine female and eleven male subjects, all physically active, performed a 30-min downhill run on a motorized treadmill. Knee extensor muscle contractile characteristics were measured before, immediately after (Post), and 30 min after the fatiguing exercise (Post30) by using single twitches and 0.5-s tetani at 20 Hz (P20) and 80 Hz (P80). The P20-to-P80 ratio was calculated. Electrical stimulations were randomly applied either maximally to the femoral nerve or via large surface electrodes (ES) at an intensity sufficient to evoke 50% of maximal voluntary contraction (MVC) during a 80-Hz tetanus. Voluntary activation level was also determined during isometric MVC by the twitch-interpolation technique. Knee extensor MVC and voluntary activation level decreased at all points in time postexercise (P < 0.001). P20 and P80 displayed significant time x gender x stimulation method interactions (P < 0.05 and P < 0.001, respectively). Both stimulation methods detected significant torque reductions at Post and Post30. Overall, ES tended to detect a greater impairment at Post in male and a lesser one in female subjects at both Post and Post30. Interestingly, the P20-P80 ratio relative decrease did not differ between the two methods of stimulation. The low-to-high frequency ratio only demonstrated a significant time effect (P < 0.001). It can be concluded that low-frequency fatigue due to eccentric exercise appears to be accurately assessable by ES.", "title": "" }, { "docid": "0182e6dcf7c8ec981886dfa2586a0d5d", "text": "MOTIVATION\nMetabolomics is a post genomic technology which seeks to provide a comprehensive profile of all the metabolites present in a biological sample. This complements the mRNA profiles provided by microarrays, and the protein profiles provided by proteomics. To test the power of metabolome analysis we selected the problem of discrimating between related genotypes of Arabidopsis. Specifically, the problem tackled was to discrimate between two background genotypes (Col0 and C24) and, more significantly, the offspring produced by the crossbreeding of these two lines, the progeny (whose genotypes would differ only in their maternally inherited mitichondia and chloroplasts).\n\n\nOVERVIEW\nA gas chromotography--mass spectrometry (GCMS) profiling protocol was used to identify 433 metabolites in the samples. The metabolomic profiles were compared using descriptive statistics which indicated that key primary metabolites vary more than other metabolites. We then applied neural networks to discriminate between the genotypes. This showed clearly that the two background lines can be discrimated between each other and their progeny, and indicated that the two progeny lines can also be discriminated. We applied Euclidean hierarchical and Principal Component Analysis (PCA) to help understand the basis of genotype discrimination. PCA indicated that malic acid and citrate are the two most important metabolites for discriminating between the background lines, and glucose and fructose are two most important metabolites for discriminating between the crosses. 
These results are consistant with genotype differences in mitochondia and chloroplasts.", "title": "" }, { "docid": "18c517f26bceeb7930a4418f7a6b2f30", "text": "BACKGROUND\nWe aimed to study whether pulmonary hypertension (PH) and elevated pulmonary vascular resistance (PVR) could be predicted by conventional echo Doppler and novel tissue Doppler imaging (TDI) in a population of chronic obstructive pulmonary disease (COPD) free of LV disease and co-morbidities.\n\n\nMETHODS\nEchocardiography and right heart catheterization was performed in 100 outpatients with COPD. By echocardiography the time-integral of the TDI index, right ventricular systolic velocity (RVSmVTI) and pulmonary acceleration-time (PAAcT) were measured and adjusted for heart rate. The COPD patients were randomly divided in a derivation (n = 50) and a validation cohort (n = 50).\n\n\nRESULTS\nPH (mean pulmonary artery pressure (mPAP) ≥ 25mmHg) and elevated PVR ≥ 2Wood unit (WU) were predicted by satisfactory area under the curve for RVSmVTI of 0.93 and 0.93 and for PAAcT of 0.96 and 0.96, respectively. Both echo indices were 100% feasible, contrasting 84% feasibility for parameters relying on contrast enhanced tricuspid-regurgitation. RVSmVTI and PAAcT showed best correlations to invasive measured mPAP, but less so to PVR. PAAcT was accurate in 90- and 78% and RVSmVTI in 90- and 84% in the calculation of mPAP and PVR, respectively.\n\n\nCONCLUSIONS\nHeart rate adjusted-PAAcT and RVSmVTI are simple and reproducible methods that correlate well with pulmonary artery pressure and PVR and showed high accuracy in detecting PH and increased PVR in patients with COPD. Taken into account the high feasibility of these two echo indices, they should be considered in the echocardiographic assessment of COPD patients.", "title": "" }, { "docid": "94bb7d2329cbea921c6f879090ec872d", "text": "We explore building generative neural network models of popular reinforcement learning environments. Our world model can be trained quickly in an unsupervised manner to learn a compressed spatial and temporal representation of the environment. By using features extracted from the world model as inputs to an agent, we can train a very compact and simple policy that can solve the required task. We can even train our agent entirely inside of its own hallucinated dream generated by its world model, and transfer this policy back into the actual environment. An interactive version of this paper is available at https://worldmodels.github.io", "title": "" }, { "docid": "8a30f829e308cb75164d1a076fa99390", "text": "This paper proposes a planning method based on forward path generation and backward tracking algorithm for Automatic Parking Systems, especially suitable for backward parking situations. The algorithm is based on the steering property that backward moving trajectory coincides with the forward moving trajectory for the identical steering angle. The basic path planning is divided into two segments: a collision-free locating segment and an entering segment that considers the continuous steering angles for connecting the two paths. MATLAB simulations were conducted, along with experiments involving parallel and perpendicular situations.", "title": "" } ]
scidocsrr
afc8a1049b3702f7928d91cfca7ffa82
Bayesian Nonparametric Inverse Reinforcement Learning for Switched Markov Decision Processes
[ { "docid": "e6b9c0064a8dcf2790a891e20a5bb01d", "text": "The difficulty in inverse reinforcement learning (IRL) aris es in choosing the best reward function since there are typically an infinite number of eward functions that yield the given behaviour data as optimal. Using a Bayes i n framework, we address this challenge by using the maximum a posteriori (MA P) estimation for the reward function, and show that most of the previous IRL al gorithms can be modeled into our framework. We also present a gradient metho d for the MAP estimation based on the (sub)differentiability of the poster ior distribution. We show the effectiveness of our approach by comparing the performa nce of the proposed method to those of the previous algorithms.", "title": "" }, { "docid": "52fe696242f399d830d0a675bd766128", "text": "Humans are adept at inferring the mental states underlying other agents' actions, such as goals, beliefs, desires, emotions and other thoughts. We propose a computational framework based on Bayesian inverse planning for modeling human action understanding. The framework represents an intuitive theory of intentional agents' behavior based on the principle of rationality: the expectation that agents will plan approximately rationally to achieve their goals, given their beliefs about the world. The mental states that caused an agent's behavior are inferred by inverting this model of rational planning using Bayesian inference, integrating the likelihood of the observed actions with the prior over mental states. This approach formalizes in precise probabilistic terms the essence of previous qualitative approaches to action understanding based on an \"intentional stance\" [Dennett, D. C. (1987). The intentional stance. Cambridge, MA: MIT Press] or a \"teleological stance\" [Gergely, G., Nádasdy, Z., Csibra, G., & Biró, S. (1995). Taking the intentional stance at 12 months of age. Cognition, 56, 165-193]. In three psychophysical experiments using animated stimuli of agents moving in simple mazes, we assess how well different inverse planning models based on different goal priors can predict human goal inferences. The results provide quantitative evidence for an approximately rational inference mechanism in human goal inference within our simplified stimulus paradigm, and for the flexible nature of goal representations that human observers can adopt. We discuss the implications of our experimental results for human action understanding in real-world contexts, and suggest how our framework might be extended to capture other kinds of mental state inferences, such as inferences about beliefs, or inferring whether an entity is an intentional agent.", "title": "" } ]
[ { "docid": "e134a35340fbf5f825d0d64108a171c3", "text": "The present study investigated relations of anxiety sensitivity and other theoretically relevant personality factors to Copper's [Psychological Assessment 6 (1994) 117.] four categories of substance use motivations as applied to teens' use of alcohol, cigarettes, and marijuana. A sample of 508 adolescents (238 females, 270 males; mean age = 15.1 years) completed the Trait subscale of the State-Trait Anxiety Inventory for Children, the Childhood Anxiety Sensitivity Index (CASI), and the Intensity and Novelty subscales of the Arnett Inventory of Sensation Seeking. Users of each substance also completed the Drinking Motives Questionnaire-Revised (DMQ-R) and/or author-compiled measures for assessing motives for cigarette smoking and marijuana use, respectively. Multiple regression analyses revealed that, in the case of each drug, the block of personality variables predicted \"risky\" substance use motives (i.e., coping, enhancement, and/or conformity motives) over-and-above demographics. High intensity seeking and low anxiety sensitivity predicted enhancement motives for alcohol use, high anxiety sensitivity predicted conformity motives for alcohol and marijuana use, and high trait anxiety predicted coping motives for alcohol and cigarette use. Moreover, anxiety sensitivity moderated the relation between trait anxiety and coping motives for alcohol and cigarette use: the trait anxiety-coping motives relation was stronger for high, than for low, anxiety sensitive individuals. Implications of the findings for improving substance abuse prevention efforts for youth will be discussed.", "title": "" }, { "docid": "21bd6f42c74930c8e9876ff4f5ef1ee2", "text": "Dynamic channel allocation (DCA) is the key technology to efficiently utilize the spectrum resources and decrease the co-channel interference for multibeam satellite systems. Most works allocate the channel on the basis of the beam traffic load or the user terminal distribution of the current moment. These greedy-like algorithms neglect the intrinsic temporal correlation among the sequential channel allocation decisions, resulting in the spectrum resources underutilization. To solve this problem, a novel deep reinforcement learning (DRL)-based DCA (DRL-DCA) algorithm is proposed. Specifically, the DCA optimization problem, which aims at minimizing the service blocking probability, is formulated in the multibeam satellite systems. Due to the temporal correlation property, the DCA optimization problem is modeled as the Markov decision process (MDP) which is the dominant analytical approach in DRL. In modeled MDP, the system state is reformulated into an image-like fashion, and then, convolutional neural network is used to extract useful features. Simulation results show that the DRL-DCA algorithm can decrease the blocking probability and improve the carried traffic and spectrum efficiency compared with other channel allocation algorithms.", "title": "" }, { "docid": "9aa95ffde4eb675c094f4eba5e970357", "text": "Many interesting computational problems can be reformulated in terms of decision trees. A natural classical algorithm is to then run a random walk on the tree, starting at the root, to see if the tree contains a node n levels from the root. We devise a quantum mechanical algorithm that evolves a state, initially localized at the root, through the tree. We prove that if the classical strategy succeeds in reaching level n in time polynomial in n, then so does the quantum algorithm. 
Moreover, we find examples of trees for which the classical algorithm requires time exponential in n, but for which the quantum algorithm succeeds in polynomial time. The examples we have so far, however, could also be solved in polynomial time by different classical algorithms. MIT-CTP-2651, quant-ph/9706062 June 1997", "title": "" }, { "docid": "318daea2ef9b0d7afe2cb08edcfe6025", "text": "Stock market prediction has become an attractive investigation topic due to its important role in economy and beneficial offers. There is an imminent need to uncover the stock market future behavior in order to avoid investment risks. The large amount of data generated by the stock market is considered a treasure of knowledge for investors. This study aims at constructing an effective model to predict stock market future trends with small error ratio and improve the accuracy of prediction. This prediction model is based on sentiment analysis of financial news and historical stock market prices. This model provides better accuracy results than all previous studies by considering multiple types of news related to market and company with historical stock prices. A dataset containing stock prices from three companies is used. The first step is to analyze news sentiment to get the text polarity using naïve Bayes algorithm. This step achieved prediction accuracy results ranging from 72.73% to 86.21%. The second step combines news polarities and historical stock prices together to predict future stock prices. This improved the prediction accuracy up to 89.80%.", "title": "" }, { "docid": "908769c3f39ab3047fac2be9157d9a35", "text": "Low-bit-rate speech coding, at rates below 4 kb/s, is needed for both communication and voice storage applications. At such low rates, full encoding of the speech waveform is not possible; therefore, low-rate coders rely instead on parametric models to represent only the most perceptually-relevant aspects of speech. While there are a number of different approaches for this modeling, all can be related to the basic linear model of speech production, where an excitation signal drives a vocal tract filter. The basic properties of the speech signal and of human speech perception can explain the principles of parametric speech coding as applied in early vocoders. Current speech modeling approaches, such as mixed excitation linear prediction, sinusoidal coding, and waveform interpolation, use more sophisticated versions of these same concepts. Modern techniques for encoding the model parameters, in particular using the theory of vector quantization, allow the encoding of the model information with very few bits per speech frame. Successful standardization of low-rate coders has enabled their widespread use for both military and satellite communications, at rates from 4 kb/s all the way down to 600 b/s. However, the goal of toll-quality low-rate coding continues to provide a research challenge. This work was sponsored by the Defense Advanced Research Projects Agency under Air Force Contract FA8721-05-C-0002. Opinions, interpretations, conclusions, and recommendations are those of the authors and are not necessarily endorsed by the United States Government.", "title": "" }, { "docid": "a461592a276b13a6a25c25ab64c23d61", "text": "To maintain the integrity of an organism constantly challenged by pathogens, the immune system is endowed with a variety of cell types. B lymphocytes were initially thought to only play a role in the adaptive branch of immunity. 
However, a number of converging observations revealed that two B-cell subsets, marginal zone (MZ) and B1 cells, exhibit unique developmental and functional characteristics, and can contribute to innate immune responses. In addition to their capacity to mount a local antibody response against type-2 T-cell-independent (TI-2) antigens, MZ B-cells can participate to T-cell-dependent (TD) immune responses through the capture and import of blood-borne antigens to follicular areas of the spleen. Here, we discuss the multiple roles of MZ B-cells in humans, non-human primates, and rodents. We also summarize studies - performed in transgenic mice expressing fully human antibodies on their B-cells and in macaques whose infection with Simian immunodeficiency virus (SIV) represents a suitable model for HIV-1 infection in humans - showing that infectious agents have developed strategies to subvert MZ B-cell functions. In these two experimental models, we observed that two microbial superantigens for B-cells (protein A from Staphylococcus aureus and protein L from Peptostreptococcus magnus) as well as inactivated AT-2 virions of HIV-1 and infectious SIV preferentially deplete innate-like B-cells - MZ B-cells and/or B1 B-cells - with different consequences on TI and TD antibody responses. These data revealed that viruses and bacteria have developed strategies to deplete innate-like B-cells during the acute phase of infection and to impair the antibody response. Unraveling the intimate mechanisms responsible for targeting MZ B-cells in humans will be important for understanding disease pathogenesis and for designing novel vaccine strategies.", "title": "" }, { "docid": "272affb51cec7bf4fe0cbe8b10331977", "text": "During an earthquake, structures are subjected to both horizontal and vertical shaking. Most structures are rather insensitive to variations in the vertical acceleration history and primary considerations are given to the impact of the horizontal shaking on the behavior of structures. In the laboratory, however, most component tests are carried out under uni-directional horizontal loading to simulate earthquake effects rather than bi-directional loading. For example, biaxial loading tests of reinforced concrete (RC) walls constitute less than 0.5% of all quasi-static cyclic tests that have been conducted. Bi-directional tests require larger and more complex test setups than uni-directional tests and therefore should only be pursued if they provide insights and results that cannot be obtained from uni-directional tests. To investigate the influence of bi-directional loading on RC wall performance, this paper reviews results from quasi-static cyclic tests on RC walls that are reported in the literature. Results from uni-directional tests are compared to results from bi-directional tests for walls of different cross sections including rectangular walls, T-shaped walls, and U-shaped walls. The available test data are analyzed with regard to the influence of the loading history on stiffness, strength, deformation capacity and failure mode. Walls with T-shaped and Ushaped cross sections are designed to carry loads in both horizontal directions and thus consideration of the impact of bidirectional loading on behavior should be considered. However, it is also shown that the displacement capacity of walls with rectangular cross sections is typically reduced by 20 to 30% due to bi-directional loading. 
Further analysis of the test data indicates that the bi-directional loading protocol selected might impact wall strength and stiffness of the test specimen. Based on these findings, future research needs with regard to the response of RC walls subjected to bi-directional loading are provided.", "title": "" }, { "docid": "6afb1d4ee806a8be1bfae8748a731615", "text": "BACKGROUND\nThe COPD Assessment Test (CAT) is responsive to change in patients with chronic obstructive pulmonary disease (COPD). However, the minimum clinically important difference (MCID) has not been established. We aimed to identify the MCID for the CAT using anchor-based and distribution-based methods.\n\n\nMETHODS\nWe did three studies at two centres in London (UK) between April 1, 2010, and Dec 31, 2012. Study 1 assessed CAT score before and after 8 weeks of outpatient pulmonary rehabilitation in patients with COPD who were able to walk 5 m, and had no contraindication to exercise. Study 2 assessed change in CAT score at discharge and after 3 months in patients admitted to hospital for more than 24 h for acute exacerbation of COPD. Study 3 assessed change in CAT score at baseline and at 12 months in stable outpatients with COPD. We focused on identifying the minimum clinically important improvement in CAT score. The St George's Respiratory Questionnaire (SGRQ) and Chronic Respiratory Questionnaire (CRQ) were measured concurrently as anchors. We used receiver operating characteristic curves, linear regression, and distribution-based methods (half SD, SE of measurement) to estimate the MCID for the CAT; we included only patients with paired CAT scores in the analysis.\n\n\nFINDINGS\nIn Study 1, 565 of 675 (84%) patients had paired CAT scores. The mean change in CAT score with pulmonary rehabilitation was -2·5 (95% CI -3·0 to -1·9), which correlated significantly with change in SGRQ score (r=0·32; p<0·0001) and CRQ score (r=-0·46; p<0·0001). In Study 2, of 200 patients recruited, 147 (74%) had paired CAT scores. Mean change in CAT score from hospital discharge to 3 months after discharge was -3·0 (95% CI -4·4 to -1·6), which correlated with change in SGRQ score (r=0·47; p<0·0001). In Study 3, of 200 patients recruited, 164 (82%) had paired CAT scores. Although no significant change in CAT score was identified after 12 months (mean 0·6, 95% CI -0·4 to 1·5), change in CAT score correlated significantly with change in SGRQ score (r=0·36; p<0·0001). Linear regression estimated the minimum clinically important improvement for the CAT to range between -1·2 and -2·8 with receiver operating characteristic curves consistently identifying -2 as the MCID. Distribution-based estimates for the MCID ranged from -3·3 to -3·8.\n\n\nINTERPRETATION\nThe most reliable estimate of the minimum important difference of the CAT is 2 points. This estimate could be useful in the clinical interpretation of CAT data, particularly in response to intervention studies.\n\n\nFUNDING\nMedical Research Council and UK National Institute of Health Research.", "title": "" }, { "docid": "697360b396804ef0540d0f53b7031aed", "text": "We describe a high-resolution, real-time 3D absolute coordinate measurement system based on a phase-shifting method. It acquires 3D shape at 30 frames per second (fps), with 266K points per frame. A tiny marker is encoded in the projected fringe pattern, and detected by software from the texture image and the gamma map. Absolute 3D coordinates are obtained from the detected marker position and the calibrated system parameters. 
To demonstrate the performance of the system, we measure a hand moving over a depth distance of approximately 700 mm, and human faces with expressions. Applications of such a system include manufacturing, inspection, entertainment, security, medical imaging.", "title": "" }, { "docid": "fceb43462f77cf858ef9747c1c5f0728", "text": "MapReduce has become a dominant parallel computing paradigm for big data, i.e., colossal datasets at the scale of tera-bytes or higher. Ideally, a MapReduce system should achieve a high degree of load balancing among the participating machines, and minimize the space usage, CPU and I/O time, and network transfer at each machine. Although these principles have guided the development of MapReduce algorithms, limited emphasis has been placed on enforcing serious constraints on the aforementioned metrics simultaneously. This paper presents the notion of minimal algorithm, that is, an algorithm that guarantees the best parallelization in multiple aspects at the same time, up to a small constant factor. We show the existence of elegant minimal algorithms for a set of fundamental database problems, and demonstrate their excellent performance with extensive experiments.", "title": "" }, { "docid": "34855c90155970485094829edb6bc3cb", "text": "We present an approach for navigating in unknown environments while, simultaneously, gathering information for inspecting underwater structures using an autonomous underwater vehicle (AUV). To accomplish this, we first use our pipeline for mapping and planning collision-free paths online, which endows an AUV with the capability to autonomously acquire optical data in close proximity. With that information, we then propose a reconstruction pipeline to create a photo-realistic textured 3D model of the inspected area. These 3D models are also of particular interest to other fields of study in marine sciences, since they can serve as base maps for environmental monitoring, thus allowing change detection of biological communities and their environment over time. Finally, we evaluate our approach using the Sparus II, a torpedo-shaped AUV, conducting inspection missions in a challenging, real-world and natural scenario.", "title": "" }, { "docid": "cb7e4299f0994d2fe37ea2f1dc382610", "text": "This paper presents a quick and accurate power control method for a zone-control induction heating (ZCIH) system. The ZCIH system consists of multiple working coils connected to multiple H-bridge inverters. The system controls the amplitude and phase angle of each coil current to make the temperature distribution on the workpiece uniform. This paper proposes a new control method for the coil currents based on a circuit model using real and imaginary (Re-Im) current/voltage components. The method detects and controls the Re-Im components of the coil current instead of the current amplitude and phase angle. As a result, the proposed method enables decoupling control for the system, making the control for each working coil independent from the others. Experiments on a 6-zone ZCIH laboratory setup are conducted to verify the validity of the proposed method. It is clarified that the proposed method has a stable operation both in transient and steady states. 
The proposed system and control method enable system complexity reduction and control stability improvements.", "title": "" }, { "docid": "8314487867961ae2572997e2a7315c9c", "text": "Social cognitive neuroscience examines social phenomena and processes using cognitive neuroscience research tools such as neuroimaging and neuropsychology. This review examines four broad areas of research within social cognitive neuroscience: (a) understanding others, (b) understanding oneself, (c) controlling oneself, and (d) the processes that occur at the interface of self and others. In addition, this review highlights two core-processing distinctions that can be neurocognitively identified across all of these domains. The distinction between automatic versus controlled processes has long been important to social psychological theory and can be dissociated in the neural regions contributing to social cognition. Alternatively, the differentiation between internally-focused processes that focus on one's own or another's mental interior and externally-focused processes that focus on one's own or another's visible features and actions is a new distinction. This latter distinction emerges from social cognitive neuroscience investigations rather than from existing psychological theories demonstrating that social cognitive neuroscience can both draw on and contribute to social psychological theory.", "title": "" }, { "docid": "0f5e00fc025d0ee8746f774dfead1781", "text": "Within the fields of urban reconstruction and city modeling, shape grammars have emerged as a powerful tool for both synthesizing novel designs and reconstructing buildings. Traditionally, a human expert was required to write grammars for specific building styles, which limited the scope of method applicability. We present an approach to automatically learn two-dimensional attributed stochastic context-free grammars (2D-ASCFGs) from a set of labeled building facades. To this end, we use Bayesian Model Merging, a technique originally developed in the field of natural language processing, which we extend to the domain of two-dimensional languages. Given a set of labeled positive examples, we induce a grammar which can be sampled to create novel instances of the same building style. In addition, we demonstrate that our learned grammar can be used for parsing existing facade imagery. Experiments conducted on the dataset of Haussmannian buildings in Paris show that our parsing with learned grammars not only outperforms bottom-up classifiers but is also on par with approaches that use a manually designed style grammar.", "title": "" }, { "docid": "22eefe8e8a46f1323fdfdcc5e0e4cac5", "text": " Covers the main data mining techniques through carefully selected case studies  Describes code and approaches that can be easily reproduced or adapted to your own problems  Requires no prior experience with R  Includes introductions to R and MySQL basics  Provides a fundamental understanding of the merits, drawbacks, and analysis objectives of the data mining techniques  Offers data and R code on www.liaad.up.pt/~ltorgo/DataMiningWithR/", "title": "" }, { "docid": "d4cd46d9c8f0c225d4fe7e34b308e8f1", "text": "In this paper, a 10 kW current-fed DC-DC converter using resonant push-pull topology is demonstrated and analyzed. The grounds for component dimensioning are given and the advantages and disadvantages of the resonant push-pull topology are discussed. 
The converter characteristics and efficiencies are demonstrated by calculations and prototype measurements.", "title": "" }, { "docid": "b5af728b9a8fd3d53c8fd55784557e29", "text": "The term \"Goal\" is increasingly being used in Requirement Engineering. Goal-Oriented requirement engineering (GORE) provides an incremental approach for elicitation, analysis, elaboration & refinement, specification and modeling of requirements. Various Goal Oriented Requirement Engineering (GORE) methods exist for these requirement engineering processes like KAOS, GBRAM etc. GORE techniques are based on certain underlying concepts and principles. This paper presents and synthesizes the underlying concepts of GORE with respect to coverage of requirement engineering activities. The advantages of GORE claimed in the literature are presented. This paper evaluates GORE techniques on the basis of concepts, process and claimed advantages.", "title": "" }, { "docid": "1e8f25674dc66a298c277d80dd031c20", "text": "DeepQ Arrhythmia Database, the first generally available large-scale dataset for arrhythmia detector evaluation, contains 897 annotated single-lead ECG recordings from 299 unique patients. DeepQ includes beat-by-beat, rhythm episodes, and heartbeats fiducial points annotations. Each patient was engaged in a sequence of lying down, sitting, and walking activities during the ECG measurement and contributed three five-minute records to the database. Annotations were manually labeled by a group of certified cardiographic technicians and audited by a cardiologist at Taipei Veteran General Hospital, Taiwan. The aim of this database is in three folds. First, from the scale perspective, we build this database to be the largest representative reference set with greater number of unique patients and more variety of arrhythmic heartbeats. Second, from the diversity perspective, our database contains fully annotated ECG measures from three different activity modes and facilitates the arrhythmia classifier training for wearable ECG patches and AAMI assessment. Thirdly, from the quality point of view, it serves as a complement to the MIT-BIH Arrhythmia Database in the development and evaluation of the arrhythmia detector. The addition of this dataset can help facilitate the exhaustive studies using machine learning models and deep neural networks, and address the inter-patient variability. Further, we describe the development and annotation procedure of this database, as well as our on-going enhancement. We plan to make DeepQ database publicly available to advance medical research in developing outpatient, mobile arrhythmia detectors.", "title": "" }, { "docid": "0332be71a529382e82094239db31ea25", "text": "Nguyen and Shparlinski recently presented a polynomial-time algorithm that provably recovers the signer’s secret DSA key when a few bits of the random nonces k (used at each signature generation) are known for a number of DSA signatures at most linear in log q (q denoting as usual the small prime of DSA), under a reasonable assumption on the hash function used in DSA. The number of required bits is about log q, and can be further decreased to 2 if one assumes access to ideal lattice basis reduction, namely an oracle for the lattice closest vector problem for the infinity norm. All previously known results were only heuristic, including those of Howgrave-Graham and Smart who introduced the topic. Here, we obtain similar results for the elliptic curve variant of DSA (ECDSA).", "title": "" } ]
scidocsrr
79763ad1e7ec488b68bbb5d2f3549da5
Mind the Traps! Design Guidelines for Rigorous BCI Experiments
[ { "docid": "d4cb0a729d182222ba0a96715e07783e", "text": "A survey of affective brain computer interfaces: principles, state-of-the-art, and challenges Christian Mühl, Brendan Allison, Anton Nijholt & Guillaume Chanel a Inria Bordeaux Sud-Ouest, Talence, France b ASPEN Lab, Electrical and Computer Engineering Department, Old Dominion University, Norfolk, VA, USA c Department of Cognitive Science, University of California at San Diego, La Jolla, CA, USA d Faculty EEMCS, Human Media Interaction, University of Twente, Enschede, The Netherlands e Swiss Center for Affective Sciences – University of Geneva, Campus Biotech, Genève, Switzerland Published online: 14 May 2014.", "title": "" } ]
[ { "docid": "e9b2f987c4744e509b27cbc2ab1487be", "text": "Analogy and similarity are often assumed to be distinct psychological processes. In contrast to this position, the authors suggest that both similarity and analogy involve a process of structural alignment and mapping, that is, that similarity is like analogy. In this article, the authors first describe the structure-mapping process as it has been worked out for analogy. Then, this view is extended to similarity, where it is used to generate new predictions. Finally, the authors explore broader implications of structural alignment for psychological processing.", "title": "" }, { "docid": "699ba57af7ed09817db19d30110ad9b0", "text": "A RESURF stepped oxide (RSO) transistor is presented and electrically characterised. The processed RSO MOSFET includes a trench field-plate network in the drift region that is isolated with a thick oxide layer. This trench network has a hexagonal layout that induces an improved RESURF effect at breakdown compared with the more common stripe (2D) layout. Consequently, the effective doping can be two times higher for the hexagonal layout. We have obtained a record value for the specific on-resistance (R/sub ds,on/) of 58 m/spl Omega/.mm/sup 2/ at V/sub gs/=10 V for a breakdown voltage (BV/sub ds/,) of 85 V. These values have been obtained for devices having a 4.0 /spl mu/m cell pitch and a 5 /spl mu/m long drift region with a doping level of 2.10/sup 16/ cm/sup -3/. Measurements of the gate-drain charge density (Q/sub gd/) for these devices show that Q/sub gd/ is fully dominated by the oxide capacitance of the field-plate along the drift region.", "title": "" }, { "docid": "b7487dc3fc2b26ed49fd6beaa0fefe77", "text": "Cellulose and cyclodextrins possess unique properties that can be tailored, combined, and used in a considerable number of applications, including textiles, coatings, sensors, and drug delivery systems. Successfully structuring and applying cellulose and cyclodextrins conjugates requires a deep understanding of the relation between structural, and soft matter behavior, materials, energy, and function. This review focuses on the key advances in developing materials based on these conjugates. Relevant aspects regarding structural variations, methods of synthesis, processing and functionalization, and corresponding supramolecular properties are presented. The use of cellulose/cyclodextrin conjugates as intelligent platforms for applications in materials science and pharmaceutical technology is also outlined, focusing on drug delivery, textiles, and sensors.", "title": "" }, { "docid": "454c390fcd7d9a3d43842aee19c77708", "text": "Altmetrics have gained momentum and are meant to overcome the shortcomings of citation-based metrics. In this regard some light is shed on the dangers associated with the new “all-in-one” indicator altmetric score.", "title": "" }, { "docid": "d479707742dcf5bec920370d98c2eadc", "text": "Spectral measures of linear Granger causality have been widely applied to study the causal connectivity between time series data in neuroscience, biology, and economics. Traditional Granger causality measures are based on linear autoregressive with exogenous (ARX) inputs models of time series data, which cannot truly reveal nonlinear effects in the data especially in the frequency domain. In this study, it is shown that the classical Geweke's spectral causality measure can be explicitly linked with the output spectra of corresponding restricted and unrestricted time-domain models. 
The latter representation is then generalized to nonlinear bivariate signals and for the first time nonlinear causality analysis in the frequency domain. This is achieved by using the nonlinear ARX (NARX) modeling of signals, and decomposition of the recently defined output frequency response function which is related to the NARX model.", "title": "" }, { "docid": "22d4ab1e9ecdfb86e6823fdd780f18dd", "text": "Part-of-Speech (POS) tagging is the process of assigning a part-of-speech like noun, verb, adjective, adverb, or other lexical class marker to each word in a sentence. This paper presents a POS Tagger for Marathi language text using Rule based approach, which will assign part of speech to the words in a sentence given as an input. We describe our system as the one which tokenizes the string into tokens and then comparing tokens with the WordNet to assign their particular tags. There are many ambiguous words in Marathi language and we resolve the ambiguity of these words using Marathi grammar rules. KeywordsPOS-Part Of Speech, WordNet, Tagset, Corpus.", "title": "" }, { "docid": "fe89c8a17676b7767cfa40e7822b8d25", "text": "Previous machine comprehension (MC) datasets are either too small to train endto-end deep learning models, or not difficult enough to evaluate the ability of current MC techniques. The newly released SQuAD dataset alleviates these limitations, and gives us a chance to develop more realistic MC models. Based on this dataset, we propose a Multi-Perspective Context Matching (MPCM) model, which is an end-to-end system that directly predicts the answer beginning and ending points in a passage. Our model first adjusts each word-embedding vector in the passage by multiplying a relevancy weight computed against the question. Then, we encode the question and weighted passage by using bi-directional LSTMs. For each point in the passage, our model matches the context of this point against the encoded question from multiple perspectives and produces a matching vector. Given those matched vectors, we employ another bi-directional LSTM to aggregate all the information and predict the beginning and ending points. Experimental result on the test set of SQuAD shows that our model achieves a competitive result on the leaderboard.", "title": "" }, { "docid": "804920bbd9ee11cc35e93a53b58e7e79", "text": "Narrative reports in medical records contain a wealth of information that may augment structured data for managing patient information and predicting trends in diseases. Pertinent negatives are evident in text but are not usually indexed in structured databases. The objective of the study reported here was to test a simple algorithm for determining whether a finding or disease mentioned within narrative medical reports is present or absent. We developed a simple regular expression algorithm called NegEx that implements several phrases indicating negation, filters out sentences containing phrases that falsely appear to be negation phrases, and limits the scope of the negation phrases. We compared NegEx against a baseline algorithm that has a limited set of negation phrases and a simpler notion of scope. In a test of 1235 findings and diseases in 1000 sentences taken from discharge summaries indexed by physicians, NegEx had a specificity of 94.5% (versus 85.3% for the baseline), a positive predictive value of 84.5% (versus 68.4% for the baseline) while maintaining a reasonable sensitivity of 77.8% (versus 88.3% for the baseline). 
We conclude that with little implementation effort a simple regular expression algorithm for determining whether a finding or disease is absent can identify a large portion of the pertinent negatives from discharge summaries.", "title": "" }, { "docid": "cdc276a3c4305d6c7ba763332ae933cc", "text": "Synthetic aperture radar (SAR) image classification is a fundamental process for SAR image understanding and interpretation. With the advancement of imaging techniques, it permits to produce higher resolution SAR data and extend data amount. Therefore, intelligent algorithms for high-resolution SAR image classification are demanded. Inspired by deep learning technology, an end-to-end classification model from the original SAR image to final classification map is developed to automatically extract features and conduct classification, which is named deep recurrent encoding neural networks (DRENNs). In our proposed framework, a spatial feature learning network based on long–short-term memory (LSTM) is developed to extract contextual dependencies of SAR images, where 2-D image patches are transformed into 1-D sequences and imported into LSTM to learn the latent spatial correlations. After LSTM, nonnegative and Fisher constrained autoencoders (NFCAEs) are proposed to improve the discrimination of features and conduct final classification, where nonnegative constraint and Fisher constraint are developed in each autoencoder to restrict the training of the network. The whole DRENN not only combines the spatial feature learning power of LSTM but also utilizes the discriminative representation ability of our NFCAE to improve the classification performance. The experimental results tested on three SAR images demonstrate that the proposed DRENN is able to learn effective feature representations from SAR images and produce competitive classification accuracies to other related approaches.", "title": "" }, { "docid": "2d17838b344c07245ebee619859dd881", "text": "BACKGROUND\nMortality among patients admitted to hospital after out-of-hospital cardiac arrest (OHCA) is high. Based on recent scientific evidence with a main goal of improving survival, we introduced and implemented a standardised post resuscitation protocol focusing on vital organ function including therapeutic hypothermia, percutaneous coronary intervention (PCI), control of haemodynamics, blood glucose, ventilation and seizures.\n\n\nMETHODS\nAll patients with OHCA of cardiac aetiology admitted to the ICU from September 2003 to May 2005 (intervention period) were included in a prospective, observational study and compared to controls from February 1996 to February 1998.\n\n\nRESULTS\nIn the control period 15/58 (26%) survived to hospital discharge with a favourable neurological outcome versus 34 of 61 (56%) in the intervention period (OR 3.61, CI 1.66-7.84, p=0.001). All survivors with a favourable neurological outcome in both groups were still alive 1 year after discharge. Two patients from the control period were revascularised with thrombolytics versus 30 (49%) receiving PCI treatment in the intervention period (47 patients (77%) underwent cardiac angiography). Therapeutic hypothermia was not used in the control period, but 40 of 52 (77%) comatose patients received this treatment in the intervention period.\n\n\nCONCLUSIONS\nDischarge rate from hospital, neurological outcome and 1-year survival improved after standardisation of post resuscitation care. 
Based on a multivariate logistic analysis, hospital treatment in the intervention period was the most important independent predictor of survival.", "title": "" }, { "docid": "1524297aeea3a28a542d8006607266bf", "text": "Fully automating machine learning pipeline is one of the outstanding challenges of general artificial intelligence, as practical machine learning often requires costly human driven process, such as hyper-parameter tuning, algorithmic selection, and model selection. In this work, we consider the problem of executing automated, yet scalable search for finding optimal gradient based meta-learners in practice. As a solution, we apply progressive neural architecture search to proto-architectures by appealing to the model agnostic nature of general gradient based meta learners. In the presence of recent universality result of Finn et al.[9], our search is a priori motivated in that neural network architecture search dynamics—automated or not—may be quite different from that of the classical setting with the same target tasks, due to the presence of the gradient update operator. A posteriori, our search algorithm, given appropriately designed search spaces, finds gradient based meta learners with non-intuitive proto-architectures that are narrowly deep, unlike the inception-like structures previously observed in the resulting architectures of traditional NAS algorithms. Along with these notable findings, the searched gradient based meta-learner achieves state-of-the-art results on the few shot classification problem on Mini-ImageNet with 76.29% accuracy, which is an 13.18% improvement over results reported in the original MAML paper. To our best knowledge, this work is the first successful AutoML implementation in the context of meta learning.", "title": "" }, { "docid": "82479411c3d3b6796f96880ee5012d74", "text": "The recent advances brought by deep learning allowed to improve the performance in image retrieval tasks. Through the many convolutional layers, available in a Convolutional Neural Network (CNN), it is possible to obtain a hierarchy of features from the evaluated image. At every step, the patches extracted are smaller than the previous levels and more representative. Following this idea, this paper introduces a new detector applied on the feature maps extracted from pre-trained CNN. Specifically, this approach lets to increase the number of features in order to increase the performance of the aggregation algorithms like the most famous and used VLAD embedding. The proposed approach is tested on different public datasets: Holidays, Oxford5k, Paris6k and UKB.", "title": "" }, { "docid": "914d17433df678e9ace1c9edd1c968d3", "text": "We propose a Deep Learning approach to the visual question answering task, where machines answer to questions about real-world images. By combining latest advances in image representation and natural language processing, we propose Ask Your Neurons, a scalable, jointly trained, end-to-end formulation to this problem. In contrast to previous efforts, we are facing a multi-modal problem where the language output (answer) is conditioned on visual and natural language inputs (image and question). We evaluate our approaches on the DAQUAR as well as the VQA dataset where we also report various baselines, including an analysis how much information is contained in the language part only. To study human consensus, we propose two novel metrics and collect additional answers which extend the original DAQUAR dataset to DAQUAR-Consensus. 
Finally, we evaluate a rich set of design choices how to encode, combine and decode information in our proposed Deep Learning formulation.", "title": "" }, { "docid": "7ac0875617d11cb811de8e2d4e117e01", "text": "The video-recorded lecture represents a central feature of most online learning platforms. Nonetheless, little is known about how to best structure video-recorded lectures in order to optimize learning. Here, we focused on the tendency for high school and college students to be overconfident in their learning from video-recorded modules, and demonstrated that testing could be used to effectively improve the calibration between predicted and actual performance. Notably, interpolating a lecture with repeated", "title": "" }, { "docid": "d37dd9382e7fd8e4c7e7099728a09d59", "text": "OBJECTIVE\nTo assess immediate and near-term effects of 2 exercise training programs for persons with idiopathic Parkinson's disease (IPD).\n\n\nDESIGN\nRandomized control trial.\n\n\nSETTING\nPublic health facility and medical center.\n\n\nPARTICIPANTS\nFifteen persons with IPD.\n\n\nINTERVENTION\nCombined group (balance and resistance training) and balance group (balance training only) underwent 10 weeks of high-intensity resistance training (knee extensors and flexors, ankle plantarflexion) and/or balance training under altered visual and somatosensory sensory conditions, 3 times a week on nonconsecutive days. Groups were assessed before, immediately after training, and 4 weeks later.\n\n\nMAIN OUTCOME MEASURES\nBalance was assessed by computerized dynamic posturography, which determined the subject's response to reduced or altered visual and somatosensory orientation cues (Sensory Orientation Test [SOT]). Muscle strength was assessed by measuring the amount of weight a participant could lift, by using a standardized weight-and-pulley system, during a 4-repetition-maximum test of knee extension, knee flexion, and ankle plantarflexion.\n\n\nRESULTS\nBoth types of training improved SOT performance. This effect was larger in the combined group. Both groups could balance longer before falling, and this effect persisted for at least 4 weeks. Muscle strength increased marginally in the balance group and substantially in the combined group, and this effect persisted for at least 4 weeks.\n\n\nCONCLUSION\nMuscle strength and balance can be improved in persons with IPD by high-intensity resistance training and balance training.", "title": "" }, { "docid": "df55896d227ae0b4d565af22bffca3ac", "text": "Copper nanoparticles are being given considerable attention as of late due to their interesting properties and potential applications in many areas of industry. One such exploitable use is as the major constituent of conductive inks and pastes used for printing various electronic components. In this study, copper nanoparticles were synthesized through a relatively large-scale (5 l), high-throughput (0.2 M) process. This facile method occurs through the chemical reduction of copper sulfate with sodium hypophosphite in ethylene glycol within the presence of a polymer surfactant (PVP), which was included to prevent aggregation and give dispersion stability to the resulting colloidal nanoparticles. Reaction yields were determined to be quantitative while particle dispersion yields were between 68 and 73%. The size of the copper nanoparticles could be controlled between 30 and 65 nm by varying the reaction time, reaction temperature, and relative ratio of copper sulfate to the surfactant. 
Field emission scanning electron microscopy (FE-SEM) and transmission electron microscopy (TEM) images of the particles revealed a spherical shape within the reported size regime, and x-ray analysis confirmed the formation of face-centered cubic (FCC) metallic copper. Furthermore, inkjet printing nanocopper inks prepared from the polymer-stabilized copper nanoparticles onto polyimide substrates resulted in metallic copper traces with low electrical resistivities (≥3.6 µΩ cm, or ≥2.2 times the resistivity of bulk copper) after a relatively low-temperature sintering process (200 °C for up to 60 min).", "title": "" }, { "docid": "1dd4a95adcd4f9e7518518148c3605ac", "text": "Kernel modules are an integral part of most operating systems (OS) as they provide flexible ways of adding new functionalities (such as file system or hardware support) to the kernel without the need to recompile or reload the entire kernel. Aside from providing an interface between the user and the hardware, these modules maintain system security and reliability. Malicious kernel level exploits (e.g. code injections) provide a gateway to a system's privileged level where the attacker has access to an entire system. Such attacks may be detected by performing code integrity checks. Several commodity operating systems (such as Linux variants and MS Windows) maintain signatures of different pieces of kernel code in a database for code integrity checking purposes. However, it quickly becomes cumbersome and time consuming to maintain a database of legitimate dynamic changes in the code, such as regular module updates. In this paper we present Mod Checker, which checks in-memory kernel modules' code integrity in real time without maintaining a database of hashes. Our solution applies to virtual environments that have multiple virtual machines (VMs) running the same version of the operating system, an environment commonly found in large cloud servers. Mod Checker compares kernel module among a pool of VMs within a cloud. We thoroughly evaluate the effectiveness and runtime performance of Mod Checker and conclude that Mod Checker is able to detect any change in a kernel module's headers and executable content with minimal or no impact on the guest operating systems' performance.", "title": "" }, { "docid": "abe32957798ec21bd7dbe714c21540ba", "text": "OBJECTIVE\nTo evaluate the effects of reflexology treatment on quality of life, sleep disturbances, and fatigue in breast cancer patients during radiation therapy.\n\n\nMETHODS/SUBJECTS\nA total of 72 women with breast cancer (stages 1-3) scheduled for radiation therapy were recruited.\n\n\nDESIGN\nWomen were allocated upon their preference either to the group receiving reflexology treatments once a week concurrently with radiotherapy and continued for 10 weeks or to the control group (usual care).\n\n\nOUTCOME MEASURES\nThe Lee Fatigue Scale, General Sleep Disturbance Scale, and Multidimensional Quality of Life Scale Cancer were completed by each patient in both arms at the beginning of the radiation treatment, after 5 weeks, and after 10 weeks of reflexology treatment.\n\n\nRESULTS\nThe final analysis included 58 women. The reflexology treated group demonstrated statistically significant lower levels of fatigue after 5 weeks of radiation therapy (p < 0.001), compared to the control group. 
It was also detected that although the quality of life in the control group deteriorated after 5 and 10 weeks of radiation therapy (p < 0.01 and p < 0.05, respectively), it was preserved in the reflexology group, which also demonstrated a significant improvement in the quality of sleep after 10 weeks of radiation treatment (p < 0.05). Similar patterns were obtained in the assessment of the pain levels experienced by the patients.\n\n\nCONCLUSIONS\nThe results of the present study indicate that reflexology may have a positive effect on fatigue, quality of sleep, pain, and quality of life in breast cancer patients during radiation therapy. Reflexology prevented the decline in quality of life and significantly ameliorated the fatigue and quality of sleep of these patients. An encouraging trend was also noted in amelioration of pain levels.", "title": "" }, { "docid": "436900539406faa9ff34c1af12b6348d", "text": "The accomplishments to date on the development of automatic vehicle control (AVC) technology in the Program on Advanced Technology for the Highway (PATH) at the University of California, Berkeley, are summarized. The basic prqfiiples and assumptions underlying the PATH work are identified, ‘followed by explanations of the work on automating vehicle lateral (steering) and longitudinal (spacing and speed) control. For both lateral and longitudinal control, the modeling of plant dynamics is described first, followed by development of the additional subsystems needed (communications, reference/sensor systems) and the derivation of the control laws. Plans for testing on vehicles in both near and long term are then discussed.", "title": "" }, { "docid": "2f51d8d289a7c615ddb4dc01803612a7", "text": "Feedback is an important component of the design process, but gaining access to high-quality critique outside a classroom or firm is challenging. We present CrowdCrit, a web-based system that allows designers to receive design critiques from non-expert crowd workers. We evaluated CrowdCrit in three studies focusing on the designer's experience and benefits of the critiques. In the first study, we compared crowd and expert critiques and found evidence that aggregated crowd critique approaches expert critique. In a second study, we found that designers who got crowd feedback perceived that it improved their design process. The third study showed that designers were enthusiastic about crowd critiques and used them to change their designs. We conclude with implications for the design of crowd feedback services.", "title": "" } ]
scidocsrr
dfb0171ddc4b65f5fbae045df35ab9a3
A survey on network attacks and Intrusion detection systems
[ { "docid": "24b62b4d3ecee597cffef75e0864bdd8", "text": "Botnets can cause significant security threat and huge loss to organizations, and are difficult to discover their existence. Therefore they have become one of the most severe threats on the Internet. The core component of botnets is their command and control channel. Botnets often use IRC (Internet Relay Chat) as a communication channel through which the botmaster can control the bots to launch attacks or propagate more infections. In this paper, anomaly score based botnet detection is proposed to identify the botnet activities by using the similarity measurement and the periodic characteristics of botnets. To improve the detection rate, the proposed system employs two-level correlation relating the set of hosts with same anomaly behaviors. The proposed method can differentiate the malicious network traffic generated by infected hosts (bots) from that by normal IRC clients, even in a network with only a very small number of bots. The experiment results show that, regardless the size of the botnet in a network, the proposed approach efficiently detects abnormal IRC traffic and identifies botnet activities. © 2014 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "081e474c622f122832490a54657e5051", "text": "To defend a network from intrusion is a generic problem of all time. It is important to develop a defense mechanism to secure the network from anomalous activities. This paper presents a comprehensive survey of methods and systems introduced by researchers in the past two decades to protect network resources from intrusion. A detailed pros and cons analysis of these methods and systems is also reported in this paper. Further, this paper also provides a list of issues and research challenges in this evolving field of research. We believe that, this knowledge will help to create a defense system.", "title": "" } ]
[ { "docid": "aee8080bb0a1c9de2eec907de095f1f9", "text": "PURPOSE OF REVIEW\nCranioplasty has been long practiced, and the reconstructive techniques continue to evolve. With a variety of options available for filling cranial defects, a review of the current practices in cranioplasty allows for reporting the most advanced techniques and specific indications.\n\n\nRECENT FINDINGS\nOverwhelming support remains for the use of autologous bone grafts in filling the cranial defects. Alloplastic alternatives have relative advantages and disadvantages depending on the patient population and specific indications. Application of imaging technology has allowed for the utilization of custom-made alloplastic implants when autologous bone grafts are not feasible.\n\n\nSUMMARY\nAutologous bone grafts remain the best option for adult and pediatric patients with viable donor sites and small-to-medium defects. Large defects in the adult population can be reconstructed with titanium mesh and polymethylmethacrylate overlay with or without the use of computer-assisted design and manufacturing customization. In pediatric patients, exchange cranioplasty offers a viable technique for using an autologous bone graft, while simultaneously filling the donor site with particulate bone graft. Advances in alloplastic materials and custom manufacturing of implants will have an important influence on cranioplasty techniques in the years to come.", "title": "" }, { "docid": "d4bbd07979940fd2b152144ab626fdb1", "text": "Extracting minutiae from fingerprint images is one of the most important steps in automatic fingerprint identification and classification. Minutiae are local discontinuities in the fingerprint pattern, mainly terminations and bifurcations. In this work we propose two methods for fingerprint image enhancement. The first one is carried out using local histogram equalization, Wiener filtering, and image binarization. The second method use a unique anisotropic filter for direct grayscale enhancement. The results achieved are compared with those obtained through some other methods. Both methods show some improvement in the minutiae detection process in terms of either efficiency or time required.", "title": "" }, { "docid": "40ebbaa3e7946a1ea6d39204b5efa611", "text": "In their article, \"Does the autistic child have a 'theory of mind'?,\" Baron-Cohen et al. [1985] proposed a novel paradigm to explain social impairment in children diagnosed as autistic (AD). Much research has been undertaken since their article went to print. The purpose of this commentary is to gauge whether Theory of Mind (ToM)-or lack thereof-is a valid model for explaining abnormal social behavior in children with AD. ToM is defined as \"the ability to impute mental states to oneself and to others\" and \"the ability to make inferences about what other people believe to be the case.\" The source for their model was provided by an article published earlier by Premack and Woodruff, \"Does the chimpanzee have a theory of mind?\" Later research in chimpanzees did not support a ToM in primates. From the outset, ToM as a neurocognitive model of autism has had many shortcomings-methodological, logical, and empirical. Other ToM assumptions, for example, its universality in all children in all cultures and socioeconomic conditions, are not supported by data. The age at which a ToM emerges, or events that presage a ToM, are too often not corroborated. 
Recent studies of mirror neurons, their location and interconnections in brain, their relationship to social behavior and language, and the effect of lesions there on speech, language and social behavior, strongly suggests that a neurobiological as opposed to neurocognitive model of autism is a more parsimonious explanation for the social and behavioral phenotypes observed in autism.", "title": "" }, { "docid": "5ee78ac120ab734826b08861133655a9", "text": "This paper presents an approach to organizing folktales based on a data structure called a plot graph, which captures the narrative flow of events in a folktale. The similarity between two folktales can be computed as the structural similarity between their corresponding plot graphs. This is performed using the well-known Needleman-Wunsch algorithm. To test the efficacy of this approach, experiments are carried out using a small collection of 24 folktales grouped into 5 categories based on the Aarne-Thompson index. The best result is obtained by combining the proposed structural-based similarity measure with a more conventional bag of words vector space model, where 19 out of the 24 folktales (79.16%) yield higher average similarity with folktales within their respective categories as opposed to across categories.", "title": "" }, { "docid": "b910376732bde1d7499875be8bdaa1ec", "text": "Social tagging, as a novel approach to information organization and discovery, has been widely adopted in many Web 2.0 applications. Tags contributed by users to annotate a variety of Web resources or items provide a new type of information that can be exploited by recommender systems. Nevertheless, the sparsity of the ternary interaction data among users, items, and tags limits the performance of tag-based recommendation algorithms. In this article, we propose to deal with the sparsity problem in social tagging by applying random walks on ternary interaction graphs to explore transitive associations between users and items. The transitive associations in this article refer to the path of the link between any two nodes whose length is greater than one. Taking advantage of these transitive associations can allow more accurate measurement of the relevance between two entities (e.g., user-item, user-user, and item-item). A PageRank-like algorithm has been developed to explore these transitive associations by spreading users’ preferences on an item similarity graph and spreading items’ influences on a user similarity graph. Empirical evaluation on three real-world datasets demonstrates that our approach can effectively alleviate the sparsity problem and improve the quality of item recommendation.", "title": "" }, { "docid": "88e59d7830d63fe49b1a4d49726b01db", "text": "Semantic parsing is the task of transducing natural language (NL) utterances into formal meaning representations (MRs), commonly represented as tree structures. Annotating NL utterances with their corresponding MRs is expensive and timeconsuming, and thus the limited availability of labeled data often becomes the bottleneck of data-driven, supervised models. We introduce STRUCTVAE, a variational auto-encoding model for semisupervised semantic parsing, which learns both from limited amounts of parallel data, and readily-available unlabeled NL utterances. STRUCTVAE models latent MRs not observed in the unlabeled data as treestructured latent variables. 
Experiments on semantic parsing on the ATIS domain and Python code generation show that with extra unlabeled data, STRUCTVAE outperforms strong supervised models.1", "title": "" }, { "docid": "da3650998a4bd6ea31467daa631d0e05", "text": "Consideration of facial muscle dynamics is underappreciated among clinicians who provide injectable filler treatment. Injectable fillers are customarily used to fill static wrinkles, folds, and localized areas of volume loss, whereas neuromodulators are used to address excessive muscle movement. However, a more comprehensive understanding of the role of muscle function in facial appearance, taking into account biomechanical concepts such as the balance of activity among synergistic and antagonistic muscle groups, is critical to restoring facial appearance to that of a typical youthful individual with facial esthetic treatments. Failure to fully understand the effects of loss of support (due to aging or congenital structural deficiency) on muscle stability and interaction can result in inadequate or inappropriate treatment, producing an unnatural appearance. This article outlines these concepts to provide an innovative framework for an understanding of the role of muscle movement on facial appearance and presents cases that illustrate how modulation of muscle movement with injectable fillers can address structural deficiencies, rebalance abnormal muscle activity, and restore facial appearance. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266.", "title": "" }, { "docid": "53df69bf8750a7e97f12b1fcac14b407", "text": "In photovoltaic (PV) power systems where a set of series-connected PV arrays (PVAs) is connected to a conventional two-level inverter, the occurrence of partial shades and/or the mismatching of PVAs leads to a reduction of the power generated from its potential maximum. To overcome these problems, the connection of the PVAs to a multilevel diode-clamped converter is considered in this paper. A control and pulsewidth-modulation scheme is proposed, capable of independently controlling the operating voltage of each PVA. Compared to a conventional two-level inverter system, the proposed system configuration allows one to extract maximum power, to reduce the devices voltage rating (with the subsequent benefits in device-performance characteristics), to reduce the output-voltage distortion, and to increase the system efficiency. Simulation and experimental tests have been conducted with three PVAs connected to a four-level three-phase diode-clamped converter to verify the good performance of the proposed system configuration and control strategy.", "title": "" }, { "docid": "becd66e0637b9b6dd07b45e6966227d6", "text": "In real life, when telling a person’s age from his/her face, we tend to look at his/her whole face first and then focus on certain important regions like eyes. After that we will focus on each particular facial feature individually like the nose or the mouth so that we can decide the age of the person. Similarly, in this paper, we propose a new framework for age estimation, which is based on human face sub-regions. Each sub-network in our framework takes the input of two images each from human facial region. One of them is the global face, and the other is a vital sub-region. 
Then, we combine the predictions from different sub-regions based on a majority voting method. We call our framework Multi-Region Network Prediction Ensemble (MRNPE) and evaluate our approach using two popular public datasets: MORPH Album II and Cross Age Celebrity Dataset (CACD). Experiments show that our method outperforms the existing state-of-the-art age estimation methods by a significant margin. The Mean Absolute Errors (MAE) of age estimation are dropped from 3.03 to 2.73 years on the MORPH Album II and 4.79 to 4.40 years on the CACD.", "title": "" }, { "docid": "ac24229e51822e44cb09baaf44e9623e", "text": "Detecting representative frames in videos based on human actions is quite challenging because of the combined factors of human pose in action and the background. This paper addresses this problem and formulates the key frame detection as one of finding the video frames that optimally maximally contribute to differentiating the underlying action category from all other categories. To this end, we introduce a deep two-stream ConvNet for key frame detection in videos that learns to directly predict the location of key frames. Our key idea is to automatically generate labeled data for the CNN learning using a supervised linear discriminant method. While the training data is generated taking many different human action videos into account, the trained CNN can predict the importance of frames from a single video. We specify a new ConvNet framework, consisting of a summarizer and discriminator. The summarizer is a two-stream ConvNet aimed at, first, capturing the appearance and motion features of video frames, and then encoding the obtained appearance and motion features for video representation. The discriminator is a fitting function aimed at distinguishing between the key frames and others in the video. We conduct experiments on a challenging human action dataset UCF101 and show that our method can detect key frames with high accuracy.", "title": "" }, { "docid": "fba3c3a0fbc08c992d388e6854890b01", "text": "This paper presents a revenue maximisation model for sales channel allocation based on dynamic programming. It helps the media seller to determine how to distribute the sales volume of page views between guaranteed and nonguaranteed channels for display advertising. The model can algorithmically allocate and price the future page views via standardised guaranteed contracts in addition to real-time bidding (RTB). This is one of a few studies that investigates programmatic guarantee (PG) with posted prices. Several assumptions are made for media buyers’ behaviour, such as risk-aversion, stochastic demand arrivals, and time and price effects. We examine our model with an RTB dataset and find it increases the seller’s expected total revenue by adopting different pricing and allocation strategies depending the level of competition in RTB campaigns. The insights from this research can increase the allocative efficiency of the current media sellers’ sales mechanism and thus improve their revenue.", "title": "" }, { "docid": "6b6e055e4d6aea80d4f01eee47256be1", "text": "Ponseti treatment for clubfoot has been successful, but recurrence continues to be an issue. After correction, patients are typically braced full time with a static abduction bar and shoes. Patient compliance with bracing is a modifiable risk factor for recurrence. We hypothesized that the use of Mitchell shoes and a dynamic abduction brace would increase compliance and thereby reduce the rate of recurrence. 
A prospective, randomized trial was carried out with consecutive patients treated for idiopathic clubfeet from 2008 to 2012. After casting and tenotomy, patients were randomized into either the dynamic or static abduction bar group. Both groups used Mitchell shoes. Patient demographics, satisfaction, and compliance were measured with self-reported questionnaires throughout follow-up. Thirty patients were followed up, with 15 in each group. Average follow-up was 18.7 months (range 3-40.7 months). Eight recurrences (26.7%) were found, with four in each group. Recurrences had a statistically significant higher number of casts and a longer follow-up time. Mean income, education level, patient-reported satisfaction and compliance, and age of caregiver tended to be lower in the recurrence group but were not statistically significant. No differences were found between the two brace types. Our study showed excellent patient satisfaction and reported compliance with Mitchell shoes and either the dynamic or static abduction bar. Close attention and careful education should be directed towards patients with known risk factors or difficult casting courses to maximize brace compliance, a modifiable risk factor for recurrence.", "title": "" }, { "docid": "e43814f288e1c5a84fb9d26b46fc7e37", "text": "Achieving good performance in bytecoded language interpreters is difficult without sacrificing both simplicity and portability. This is due to the complexity of dynamic translation (\"just-in-time compilation\") of bytecodes into native code, which is the mechanism employed universally by high-performance interpreters.We demonstrate that a few simple techniques make it possible to create highly-portable dynamic translators that can attain as much as 70% the performance of optimized C for certain numerical computations. Translators based on such techniques can offer respectable performance without sacrificing either the simplicity or portability of much slower \"pure\" bytecode interpreters.", "title": "" }, { "docid": "4fa9db557f53fa3099862af87337cfa9", "text": "With the rapid development of E-commerce, recent years have witnessed the booming of online advertising industry, which raises extensive concerns of both academic and business circles. Among all the issues, the task of Click-through rates (CTR) prediction plays a central role, as it may influence the ranking and pricing of online ads. To deal with this task, the Factorization Machines (FM) model is designed for better revealing proper combinations of basic features. However, the sparsity of ads transaction data, i.e., a large proportion of zero elements, may severely disturb the performance of FM models. To address this problem, in this paper, we propose a novel Sparse Factorization Machines (SFM) model, in which the Laplace distribution is introduced instead of traditional Gaussian distribution to model the parameters, as Laplace distribution could better fit the sparse data with higher ratio of zero elements. Along this line, it will be beneficial to select the most important features or conjunctions with the proposed SFM model. Furthermore, we develop a distributed implementation of our SFM model on Spark platform to support the prediction task on mass dataset in practice. 
Comprehensive experiments on two large-scale real-world datasets clearly validate both the effectiveness and efficiency of our SFM model compared with several state-of-the-art baselines, which also proves our assumption that Laplace distribution could be more suitable to describe the online ads transaction data.", "title": "" }, { "docid": "eaf3d25c7babb067e987b2586129e0e4", "text": "Iterative refinement reduces the roundoff errors in the computed solution to a system of linear equations. Only one step requires higher precision arithmetic. If sufficiently high precision is used, the final result is shown to be very accurate.", "title": "" }, { "docid": "34e1566235f94a265564cbe5d0bf7cc1", "text": "Circuit techniques that overcome practical noise, reliability, and EMI limitations are reported. An auxiliary loop with ramping circuits suppresses pop-and-click noise to 1 mV for an amplifier with 4 V-achievable output voltage. Switching edge rate control enables the system to meet the EN55022 Class-B standard with a 15 dB margin. An enhanced scheme detects short-circuit conditions without relying on overlimit current events.", "title": "" }, { "docid": "99a9dd7ed22351a1b33528f878537da8", "text": "The aim of single image super-resolution is to reconstruct a high-resolution image from a single low-resolution input. Although the task is ill-posed it can be seen as finding a non-linear mapping from a low to high-dimensional space. Recent methods that rely on both neighborhood embedding and sparse-coding have led to tremendous quality improvements. Yet, many of the previous approaches are hard to apply in practice because they are either too slow or demand tedious parameter tweaks. In this paper, we propose to directly map from low to high-resolution patches using random forests. We show the close relation of previous work on single image super-resolution to locally linear regression and demonstrate how random forests nicely fit into this framework. During training the trees, we optimize a novel and effective regularized objective that not only operates on the output space but also on the input space, which especially suits the regression task. During inference, our method comprises the same well-known computational efficiency that has made random forests popular for many computer vision problems. In the experimental part, we demonstrate on standard benchmarks for single image super-resolution that our approach yields highly accurate state-of-the-art results, while being fast in both training and evaluation.", "title": "" }, { "docid": "de638a90e5a6ef3bf030d998b0e921a3", "text": "The quantization techniques have shown competitive performance in approximate nearest neighbor search. The state-of-the-art algorithm, composite quantization, takes advantage of the compositionabity, i.e., the vector approximation accuracy, as opposed to product quantization and Cartesian k-means. However, we have observed that the runtime cost of computing the distance table in composite quantization, which is used as a lookup table for fast distance computation, becomes nonnegligible in real applications, e.g., reordering the candidates retrieved from the inverted index when handling very large scale databases. To address this problem, we develop a novel approach, called sparse composite quantization, which constructs sparse dictionaries. 
The benefit is that the distance evaluation between the query and the dictionary element (a sparse vector) is accelerated using the efficient sparse vector operation, and thus the cost of distance table computation is reduced a lot. Experiment results on large scale ANN retrieval tasks (1M SIFTs and 1B SIFTs) and applications to object retrieval show that the proposed approach yields competitive performance: superior search accuracy to product quantization and Cartesian k-means with almost the same computing cost, and much faster ANN search than composite quantization with the same level of accuracy.", "title": "" }, { "docid": "4d79d71c019c0f573885ffa2bc67f48b", "text": "In this article, we provide a basic introduction to CMOS image-sensor technology, design and performance limits and present recent developments and future directions in this area. We also discuss image-sensor operation and describe the most popular CMOS image-sensor architectures. We note the main non-idealities that limit CMOS image sensor performance, and specify several key performance measures. One of the most important advantages of CMOS image sensors over CCDs is the ability to integrate sensing with analog and digital processing down to the pixel level. Finally, we focus on recent developments and future research directions that are enabled by pixel-level processing, the applications of which promise to further improve CMOS image sensor performance and broaden their applicability beyond current markets.", "title": "" }, { "docid": "c5639c65908882291c29e147605c79ca", "text": "Dirofilariasis is a rare disease in humans. We report here a case of a 48-year-old male who was diagnosed with pulmonary dirofilariasis in Korea. On chest radiographs, a coin lesion of 1 cm in diameter was shown. Although it looked like a benign inflammatory nodule, malignancy could not be excluded. So, the nodule was resected by video-assisted thoracic surgery. Pathologically, chronic granulomatous inflammation composed of coagulation necrosis with rim of fibrous tissues and granulations was seen. In the center of the necrotic nodules, a degenerating parasitic organism was found. The parasite had prominent internal cuticular ridges and thick cuticle, a well-developed muscle layer, an intestinal tube, and uterine tubules. The parasite was diagnosed as an immature female worm of Dirofilaria immitis. This is the second reported case of human pulmonary dirofilariasis in Korea.", "title": "" } ]
scidocsrr
7b388588d67297cec35614d2702025c2
SEMAFOR 1.0: A Probabilistic Frame-Semantic Parser
[ { "docid": "33b2c5abe122a66b73840506aa3b443e", "text": "Semantic role labeling, the computational identification and labeling of arguments in text, has become a leading task in computational linguistics today. Although the issues for this task have been studied for decades, the availability of large resources and the development of statistical machine learning methods have heightened the amount of effort in this field. This special issue presents selected and representative work in the field. This overview describes linguistic background of the problem, the movement from linguistic theories to computational practice, the major resources that are being used, an overview of steps taken in computational systems, and a description of the key issues and results in semantic role labeling (as revealed in several international evaluations). We assess weaknesses in semantic role labeling and identify important challenges facing the field. Overall, the opportunities and the potential for useful further research in semantic role labeling are considerable.", "title": "" } ]
[ { "docid": "55772e55adb83d4fd383ddebcf564a71", "text": "The generation of multi-functional drug delivery systems, namely solid dosage forms loaded with nano-sized carriers, remains little explored and is still a challenge for formulators. For the first time, the coupling of two important technologies, 3D printing and nanotechnology, to produce innovative solid dosage forms containing drug-loaded nanocapsules was evaluated here. Drug delivery devices were prepared by fused deposition modelling (FDM) from poly(ε-caprolactone) (PCL) and Eudragit® RL100 (ERL) filaments with or without a channelling agent (mannitol). They were soaked in deflazacort-loaded nanocapsules (particle size: 138nm) to produce 3D printed tablets (printlets) loaded with them, as observed by SEM. Drug loading was improved by the presence of the channelling agent and a linear correlation was obtained between the soaking time and the drug loading (r2=0.9739). Moreover, drug release profiles were dependent on the polymeric material of tablets and the presence of the channelling agent. In particular, tablets prepared with a partially hollow core (50% infill) had a higher drug loading (0.27% w/w) and faster drug release rate. This study represents an original approach to convert nanocapsules suspensions into solid dosage forms as well as an efficient 3D printing method to produce novel drug delivery systems, as personalised nanomedicines.", "title": "" }, { "docid": "0a0f826f1a8fa52d61892632fd403502", "text": "We show that sequence information can be encoded into highdimensional fixed-width vectors using permutations of coordinates. Computational models of language often represent words with high-dimensional semantic vectors compiled from word-use statistics. A word’s semantic vector usually encodes the contexts in which the word appears in a large body of text but ignores word order. However, word order often signals a word’s grammatical role in a sentence and thus tells of the word’s meaning. Jones and Mewhort (2007) show that word order can be included in the semantic vectors using holographic reduced representation and convolution. We show here that the order information can be captured also by permuting of vector coordinates, thus providing a general and computationally light alternative to convolution.", "title": "" }, { "docid": "6a2d1dfb61a4e37c8554900e0d366f51", "text": "Attention Deficit/Hyperactivity Disorder (ADHD) is a neurobehavioral disorder which leads to the difficulty on focusing, paying attention and controlling normal behavior. Globally, the prevalence of ADHD is estimated to be 6.5%. Medicine has been widely used for the treatment of ADHD symptoms, but the patient may have a chance to suffer from the side effects of drug, such as vomit, rash, urticarial, cardiac arrthymia and insomnia. In this paper, we propose the alternative medicine system based on the brain-computer interface (BCI) technology called neurofeedback. The proposed neurofeedback system simultaneously employs two important signals, i.e. electroencephalogram (EEG) and hemoencephalogram (HEG), which can quickly reveal the brain functional network. The treatment criteria are that, for EEG signals, the patient needs to maintain the beta activities (13-30 Hz) while reducing the alpha activities (7-13 Hz). Simultaneously, HEG signals need to be maintained continuously increasing to some setting thresholds of the brain blood oxygenation levels. 
Time-frequency selective multilayer perceptron (MLP) is employed to capture the mentioned phenomena in real-time. The experimental results show that the proposed system yields the sensitivity of 98.16% and the specificity of 95.57%. Furthermore, from the resulting weights of the proposed MLP, we can also conclude that HEG signals yield the most impact to our neurofeedback treatment followed by the alpha, beta, and theta activities, respectively.", "title": "" }, { "docid": "eba769c6246b44d8ed7e5f08aac17731", "text": "One hundred men, living in three villages in a remote region of the Eastern Highlands of Papua New Guinea were asked to judge the attractiveness of photographs of women who had undergone micrograft surgery to reduce their waist-to-hip ratios (WHRs). Micrograft surgery involves harvesting adipose tissue from the waist and reshaping the buttocks to produce a low WHR and an \"hourglass\" female figure. Men consistently chose postoperative photographs as being more attractive than preoperative photographs of the same women. Some women gained, and some lost weight, postoperatively, with resultant changes in body mass index (BMI). However, changes in BMI were not related to men's judgments of attractiveness. These results show that the hourglass female figure is rated as attractive by men living in a remote, indigenous community, and that when controlling for BMI, WHR plays a crucial role in their attractiveness judgments.", "title": "" }, { "docid": "1924730db532936166d07c6bab058800", "text": "The rising popularity of digital table surfaces has spawned considerable interest in new interaction techniques. Most interactions fall into one of two modalities: 1) direct touch and multi-touch (by hand and by tangibles) directly on the surface, and 2) hand gestures above the surface. The limitation is that these two modalities ignore the rich interaction space between them. To move beyond this limitation, we first contribute a unification of these discrete interaction modalities called the continuous interaction space. The idea is that many interaction techniques can be developed that go beyond these two modalities, where they can leverage the space between them. That is, we believe that the underlying system should treat the space on and above the surface as a continuum, where a person can use touch, gestures, and tangibles anywhere in the space and naturally move between them. Our second contribution illustrates this, where we introduce a variety of interaction categories that exploit the space between these modalities. For example, with our Extended Continuous Gestures category, a person can start an interaction with a direct touch and drag, then naturally lift off the surface and continue their drag with a hand gesture over the surface. For each interaction category, we implement an example (or use prior work) that illustrates how that technique can be applied. In summary, our primary contribution is to broaden the design space of interaction techniques for digital surfaces, where we populate the continuous interaction space both with concepts and examples that emerge from considering this space as a continuum.", "title": "" }, { "docid": "3f1d69e8a2fdfc69e451679255782d70", "text": "This tutorial gives a broad view of modern approaches for scaling up machine learning and data mining methods on parallel/distributed platforms. 
Demand for scaling up machine learning is task-specific: for some tasks it is driven by the enormous dataset sizes, for others by model complexity or by the requirement for real-time prediction. Selecting a task-appropriate parallelization platform and algorithm requires understanding their benefits, trade-offs and constraints. This tutorial focuses on providing an integrated overview of state-of-the-art platforms and algorithm choices. These span a range of hardware options (from FPGAs and GPUs to multi-core systems and commodity clusters), programming frameworks (including CUDA, MPI, MapReduce, and DryadLINQ), and learning settings (e.g., semi-supervised and online learning). The tutorial is example-driven, covering a number of popular algorithms (e.g., boosted trees, spectral clustering, belief propagation) and diverse applications (e.g., recommender systems and object recognition in vision).\n The tutorial is based on (but not limited to) the material from our upcoming Cambridge U. Press edited book which is currently in production.\n Visit the tutorial website at http://hunch.net/~large_scale_survey/", "title": "" }, { "docid": "2732b8453269834e481428f054ff4992", "text": "Otsu reference proposed a criterion for maximizing the between-class variance of pixel intensity to perform picture thresholding. However, Otsu’s method for image segmentation is very time-consuming because of the inefficient formulation of the between-class variance. In this paper, a faster version of Otsu’s method is proposed for improving the efficiency of computation for the optimal thresholds of an image. First, a criterion for maximizing a modified between-class variance that is equivalent to the criterion of maximizing the usual between-class variance is proposed for image segmentation. Next, in accordance with the new criterion, a recursive algorithm is designed to efficiently find the optimal threshold. This procedure yields the same set of thresholds as the original method. In addition, the modified between-class variance can be pre-computed and stored in a look-up table. Our analysis of the new criterion clearly shows that it takes less computation to compute both the cumulative probability (zeroth order moment) and the mean (first order moment) of a class, and that determining the modified between-class variance by accessing a look-up table is quicker than that by performing mathematical arithmetic operations. For example, the experimental results of a five-level threshold selection show that our proposed method can reduce down the processing time from more than one hour by the conventional Otsu’s method to less than 107 seconds.", "title": "" }, { "docid": "44ea81d223e3c60c7b4fd1192ca3c4ba", "text": "Existing classification and rule learning algorithms in machine learning mainly use heuristic/greedy search to find a subset of regularities (e.g., a decision tree or a set of rules) in data for classification. In the past few years, extensive research was done in the database community on learning rules using exhaustive search under the name of association rule mining. The objective there is to find all rules in data that satisfy the user-specified minimum support and minimum confidence. Although the whole set of rules may not be used directly for accurate classification, effective and efficient classifiers have been built using the rules. This paper aims to improve such an exhaustive search based classification system CBA (Classification Based on Associations). 
The main strength of this system is that it is able to use the most accurate rules for classification. However, it also has weaknesses. This paper proposes two new techniques to deal with these weaknesses. This results in remarkably accurate classifiers. Experiments on a set of 34 benchmark datasets show that on average the new techniques reduce the error of CBA by 17% and is superior to CBA on 26 of the 34 datasets. They reduce the error of the decision tree classifier C4.5 by 19%, and improve performance on 29 datasets. Similar good results are also achieved against the existing classification systems, RIPPER, LB and a Naïve-Bayes", "title": "" }, { "docid": "b40ef74fd41676d51d0870578e483b27", "text": "In this paper, we propose a simple but effective image prior-dark channel prior to remove haze from a single input image. The dark channel prior is a kind of statistics of outdoor haze-free images. It is based on a key observation-most local patches in outdoor haze-free images contain some pixels whose intensity is very low in at least one color channel. Using this prior with the haze imaging model, we can directly estimate the thickness of the haze and recover a high-quality haze-free image. Results on a variety of hazy images demonstrate the power of the proposed prior. Moreover, a high-quality depth map can also be obtained as a byproduct of haze removal.", "title": "" }, { "docid": "fbe0c6e8cbaf6c419990c1a7093fe2a9", "text": "Deep learning is quickly becoming the leading methodology for medical image analysis. Given a large medical archive, where each image is associated with a diagnosis, efficient pathology detectors or classifiers can be trained with virtually no expert knowledge about the target pathologies. However, deep learning algorithms, including the popular ConvNets, are black boxes: little is known about the local patterns analyzed by ConvNets to make a decision at the image level. A solution is proposed in this paper to create heatmaps showing which pixels in images play a role in the image-level predictions. In other words, a ConvNet trained for image-level classification can be used to detect lesions as well. A generalization of the backpropagation method is proposed in order to train ConvNets that produce high-quality heatmaps. The proposed solution is applied to diabetic retinopathy (DR) screening in a dataset of almost 90,000 fundus photographs from the 2015 Kaggle Diabetic Retinopathy competition and a private dataset of almost 110,000 photographs (e-ophtha). For the task of detecting referable DR, very good detection performance was achieved: Az=0.954 in Kaggle's dataset and Az=0.949 in e-ophtha. Performance was also evaluated at the image level and at the lesion level in the DiaretDB1 dataset, where four types of lesions are manually segmented: microaneurysms, hemorrhages, exudates and cotton-wool spots. For the task of detecting images containing these four lesion types, the proposed detector, which was trained to detect referable DR, outperforms recent algorithms trained to detect those lesions specifically, with pixel-level supervision. At the lesion level, the proposed detector outperforms heatmap generation algorithms for ConvNets. This detector is part of the Messidor® system for mobile eye pathology screening. 
Because it does not rely on expert knowledge or manual segmentation for detecting relevant patterns, the proposed solution is a promising image mining tool, which has the potential to discover new biomarkers in images.", "title": "" }, { "docid": "0e803e853422328aeef59e426410df48", "text": "We present WatchWriter, a finger operated keyboard that supports both touch and gesture typing with statistical decoding on a smartwatch. Just like on modern smartphones, users type one letter per tap or one word per gesture stroke on WatchWriter but in a much smaller spatial scale. WatchWriter demonstrates that human motor control adaptability, coupled with modern statistical decoding and error correction technologies developed for smartphones, can enable a surprisingly effective typing performance despite the small watch size. In a user performance experiment entirely run on a smartwatch, 36 participants reached a speed of 22-24 WPM with near zero error rate.", "title": "" }, { "docid": "121a388391c12de1329e74fdeebdaf10", "text": "In this paper, we present the first longitudinal measurement study of the underground ecosystem fueling credential theft and assess the risk it poses to millions of users. Over the course of March, 2016--March, 2017, we identify 788,000 potential victims of off-the-shelf keyloggers; 12.4 million potential victims of phishing kits; and 1.9 billion usernames and passwords exposed via data breaches and traded on blackmarket forums. Using this dataset, we explore to what degree the stolen passwords---which originate from thousands of online services---enable an attacker to obtain a victim's valid email credentials---and thus complete control of their online identity due to transitive trust. Drawing upon Google as a case study, we find 7--25% of exposed passwords match a victim's Google account. For these accounts, we show how hardening authentication mechanisms to include additional risk signals such as a user's historical geolocations and device profiles helps to mitigate the risk of hijacking. Beyond these risk metrics, we delve into the global reach of the miscreants involved in credential theft and the blackhat tools they rely on. We observe a remarkable lack of external pressure on bad actors, with phishing kit playbooks and keylogger capabilities remaining largely unchanged since the mid-2000s.", "title": "" }, { "docid": "b3cb053d44a90a2a9a9332ac920f0e90", "text": "This study develops a crowdfunding sponsor typology based on sponsors’ motivations for participating in a project. Using a two by two crowdfunding motivation framework, we analyzed six relevant funding motivations—interest, playfulness, philanthropy, reward, relationship, and recognition—and identified four types of crowdfunding sponsors: angelic backer, reward hunter, avid fan, and tasteful hermit. They are profiled in terms of the antecedents and consequences of funding motivations. Angelic backers are similar in some ways to traditional charitable donors while reward hunters are analogous to market investors; thus they differ in their approach to crowdfunding. Avid fans comprise the most passionate sponsor group, and they are similar to members of a brand community. Tasteful hermits support their projects as actively as avid fans, but they have lower extrinsic and others-oriented motivations. The results show that these sponsor types reflect the nature of crowdfunding as a new form of co-creation in the E-commerce context. 2016 Elsevier B.V. 
All rights reserved.", "title": "" }, { "docid": "25d913188ee5790d5b3a9f5fb8b68dda", "text": "RPL, the routing protocol proposed by IETF for IPv6/6LoWPAN Low Power and Lossy Networks has significant complexity. Another protocol called LOADng, a lightweight variant of AODV, emerges as an alternative solution. In this paper, we compare the performance of the two protocols in a Home Automation scenario with heterogenous traffic patterns including a mix of multipoint-to-point and point-to-multipoint routes in realistic dense non-uniform network topologies. We use Contiki OS and Cooja simulator to evaluate the behavior of the ContikiRPL implementation and a basic non-optimized implementation of LOADng. Unlike previous studies, our results show that RPL provides shorter delays, less control overhead, and requires less memory than LOADng. Nevertheless, enhancing LOADng with more efficient flooding and a better route storage algorithm may improve its performance.", "title": "" }, { "docid": "370767f85718121dc3975f383bf99d8b", "text": "A combinatorial classification and a phylogenetic analysis of the ten 12/8 time, seven-stroke bell rhythm timelines in African and Afro-American music are presented. New methods for rhythm classification are proposed based on measures of rhythmic oddity and off-beatness. These combinatorial classifications reveal several new uniqueness properties of the Bembé bell pattern that may explain its widespread popularity and preference among the other patterns in this class. A new distance measure called the swap-distance is introduced to measure the non-similarity of two rhythms that have the same number of strokes. A swap in a sequence of notes and rests of equal duration is the location interchange of a note and a rest that are adjacent in the sequence. The swap distance between two rhythms is defined as the minimum number of swaps required to transform one rhythm to the other. A phylogenetic analysis using Splits Graphs with the swap distance shows that each of the ten bell patterns can be derived from one of two “canonical” patterns with at most four swap operations, or from one with at most five swap operations. Furthermore, the phylogenetic analysis suggests that for these ten bell patterns there are no “ancestral” rhythms not contained in this set.", "title": "" }, { "docid": "774394b64cf9a98f481b343866f648a6", "text": "The aim of this study was to evaluate the anatomy of the central myelin portion and the central myelin-peripheral myelin transitional zone of the trigeminal, facial, glossopharyngeal and vagus nerves from fresh cadavers. The aim was also to investigate the relationship between the length and volume of the central myelin portion of these nerves with the incidences of the corresponding cranial dysfunctional syndromes caused by their compression to provide some more insights for a better understanding of mechanisms. The trigeminal, facial, glossopharyngeal and vagus nerves from six fresh cadavers were examined. The length of these nerves from the brainstem to the foramen that they exit were measured. Longitudinal sections were stained and photographed to make measurements. The diameters of the nerves where they exit/enter from/to brainstem, the diameters where the transitional zone begins, the distances to the most distal part of transitional zone from brainstem and depths of the transitional zones were measured. Most importantly, the volume of the central myelin portion of the nerves was calculated. 
Correlation between length and volume of the central myelin portion of these nerves and the incidences of the corresponding hyperactive dysfunctional syndromes as reported in the literature were studied. The distance of the most distal part of the transitional zone from the brainstem was 4.19 ± 0.81 mm for the trigeminal nerve, 2.86 ± 1.19 mm for the facial nerve, 1.51 ± 0.39 mm for the glossopharyngeal nerve, and 1.63 ± 1.15 mm for the vagus nerve. The volume of central myelin portion was 24.54 ± 9.82 mm3 in trigeminal nerve; 4.43 ± 2.55 mm3 in facial nerve; 1.55 ± 1.08 mm3 in glossopharyngeal nerve; 2.56 ± 1.32 mm3 in vagus nerve. Correlations (p < 0.001) have been found between the length or volume of central myelin portions of the trigeminal, facial, glossopharyngeal and vagus nerves and incidences of the corresponding diseases. At present it is rather well-established that primary trigeminal neuralgia, hemifacial spasm and vago-glossopharyngeal neuralgia have as one of the main causes a vascular compression. The strong correlations found between the lengths and volumes of the central myelin portions of the nerves and the incidences of the corresponding diseases is a plea for the role played by this anatomical region in the mechanism of these diseases.", "title": "" }, { "docid": "83de0252b28e4dcedefc239aaaee79e5", "text": "Recently, there has been immense interest in using unmanned aerial vehicles (UAVs) for civilian operations such as package delivery, aerial surveillance, and disaster response. As a result, UAV traffic management systems are needed to support potentially thousands of UAVs flying simultaneously in the air space, in order to ensure their liveness and safety requirements are met. Currently, the analysis of large multi-agent systems cannot tractably provide these guarantees if the agents’ set of maneuvers are unrestricted. In this paper, we propose to have platoons of UAVs flying on air highways in order to impose the air space structure that allows for tractable analysis and intuitive monitoring. For the air highway placement problem, we use the flexible and efficient fast marching method to solve the Eikonal equation, which produces a sequence of air highways that minimizes the cost of flying from an origin to any destination. Within the platoons that travel on the air highways, we model each vehicle as a hybrid system with modes corresponding to its role in the platoon. Using Hamilton-Jacobi reachability, we propose several liveness controllers and a safety controller that guarantee the success and safety of all mode transitions. For a single altitude range, our approach guarantees safety for one safety breach per vehicle; in the unlikely event of multiple safety breaches, safety can be guaranteed over multiple altitude ranges. We demonstrate the satisfaction of liveness and safety requirements through simulations of three common scenarios.", "title": "" }, { "docid": "06f27036cd261647c7670bdf854f5fb4", "text": "OBJECTIVE\nTo determine the formation and dissolution of calcium fluoride on the enamel surface after application of two fluoride gel-saliva mixtures.\n\n\nMETHOD AND MATERIALS\nFrom each of 80 bovine incisors, two enamel specimens were prepared and subjected to two different treatment procedures. In group 1, 80 specimens were treated with a mixture of an amine fluoride gel (1.25% F-; pH 5.2; 5 minutes) and human saliva. In group 2, 80 enamel blocks were subjected to a mixture of sodium fluoride gel (1.25% F; pH 5.5; 5 minutes) and human saliva. 
Subsequent to fluoride treatment, 40 specimens from each group were stored in human saliva and sterile water, respectively. Ten specimens were removed after each of 1 hour, 24 hours, 2 days, and 5 days and analyzed according to potassium hydroxide-soluble fluoride.\n\n\nRESULTS\nApplication of amine fluoride gel resulted in a higher amount of potassium hydroxide-soluble fluoride than did sodium fluoride gel 1 hour after application. Saliva exerted an inhibitory effect according to the dissolution rate of calcium fluoride. However, after 5 days, more than 90% of the precipitated calcium fluoride was dissolved in the amine fluoride group, and almost all potassium hydroxide-soluble fluoride was lost in the sodium fluoride group. Calcium fluoride apparently dissolves rapidly, even at almost neutral pH.\n\n\nCONCLUSION\nConsidering the limitations of an in vitro study, it is concluded that highly concentrated fluoride gels should be applied at an adequate frequency to reestablish a calcium fluoride-like layer.", "title": "" }, { "docid": "c2db241a94d9fec15af613d593730dea", "text": "This study investigated the influence of Cloisite-15A nanoclay on the physical, performance, and mechanical properties of bitumen binder. Cloisite-15A was blended in the bitumen in variegated percentages from 1% to 9% with increment of 2%. The blended bitumen was characterized using penetration, softening point, and dynamic viscosity using rotational viscometer, and compared with unmodified bitumen equally penetration grade 60/70. The rheological parameters were investigated using Dynamic Shear Rheometer (DSR), and mechanical properties were investigated by using Marshall Stability test. The results indicated an increase in softening point, dynamic viscosity and decrease in binder penetration. Rheological properties of bitumen increase complex modulus, decrease phase angle and improve rutting resistances as well. There was significant improvement in Marshall Stability, rather marginal improvement in flow value. The best improvement in the modified binder was obtained with 5% Cloisite-15A nanoclay. Keywords—Cloisite-15A, complex shear modulus, phase angle, rutting resistance.", "title": "" } ]
scidocsrr
d26016066331715339a082414469a654
GUI Design for IDE Command Recommendations
[ { "docid": "ef598ba4f9a4df1f42debc0eabd1ead8", "text": "Software developers interact with the development environments they use by issuing commands that execute various programming tools, from source code formatters to build tools. However, developers often only use a small subset of the commands offered by modern development environments, reducing their overall development fluency. In this paper, we use several existing command recommender algorithms to suggest new commands to developers based on their existing command usage history, and also introduce several new algorithms. By running these algorithms on data submitted by several thousand Eclipse users, we describe two studies that explore the feasibility of automatically recommending commands to software developers. The results suggest that, while recommendation is more difficult in development environments than in other domains, it is still feasible to automatically recommend commands to developers based on their usage history, and that using patterns of past discovery is a useful way to do so.", "title": "" } ]
[ { "docid": "c41259069ff779cf727ee4cfcf317cee", "text": "Trends in miniaturization have resulted in an explosion of small, low power devices with network connectivity. Welcome to the era of Internet of Things (IoT), wearable devices, and automated home and industrial systems. These devices are loaded with sensors, collect information from their surroundings, process it, and relay it to remote locations for further analysis. Pervasive and seeminly harmless, this new breed of devices raise security and privacy concerns. In this chapter, we evaluate the security of these devices from an industry point of view, concentrating on the design flow, and catalogue the types of vulnerabilities we have found. We also present an in-depth evaluation of the Google Nest Thermostat, the Nike+ Fuelband SE Fitness Tracker, the Haier SmartCare home automation system, and the Itron Centron CL200 electric meter. We study and present an analysis of the effects of these compromised devices in an every day setting. We then finish by discussing design flow enhancements, with security mechanisms that can be efficiently added into a device in a comparative way.", "title": "" }, { "docid": "bf08d673b40109d6d6101947258684fd", "text": "More and more medicinal mushrooms have been widely used as a miraculous herb for health promotion, especially by cancer patients. Here we report screening thirteen mushrooms for anti-cancer cell activities in eleven different cell lines. Of the herbal products tested, we found that the extract of Amauroderma rude exerted the highest activity in killing most of these cancer cell lines. Amauroderma rude is a fungus belonging to the Ganodermataceae family. The Amauroderma genus contains approximately 30 species widespread throughout the tropical areas. Since the biological function of Amauroderma rude is unknown, we examined its anti-cancer effect on breast carcinoma cell lines. We compared the anti-cancer activity of Amauroderma rude and Ganoderma lucidum, the most well-known medicinal mushrooms with anti-cancer activity and found that Amauroderma rude had significantly higher activity in killing cancer cells than Ganoderma lucidum. We then examined the effect of Amauroderma rude on breast cancer cells and found that at low concentrations, Amauroderma rude could inhibit cancer cell survival and induce apoptosis. Treated cancer cells also formed fewer and smaller colonies than the untreated cells. When nude mice bearing tumors were injected with Amauroderma rude extract, the tumors grew at a slower rate than the control. Examination of these tumors revealed extensive cell death, decreased proliferation rate as stained by Ki67, and increased apoptosis as stained by TUNEL. Suppression of c-myc expression appeared to be associated with these effects. Taken together, Amauroderma rude represented a powerful medicinal mushroom with anti-cancer activities.", "title": "" }, { "docid": "a0c36cccd31a1bf0a1e7c9baa78dd3fa", "text": "Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. 
We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function (“avoiding side effects” and “avoiding reward hacking”), an objective function that is too expensive to evaluate frequently (“scalable supervision”), or undesirable behavior during the learning process (“safe exploration” and “distributional shift”). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking", "title": "" }, { "docid": "0ce46853852a20e5e0ab9aacd3ec20c1", "text": "In immunocompromised subjects, Epstein-Barr virus (EBV) infection of terminally differentiated oral keratinocytes may result in subclinical productive infection of the virus in the stratum spinosum and in the stratum granulosum with shedding of infectious virions into the oral fluid in the desquamating cells. In a minority of cases this productive infection with dysregulation of the cell cycle of terminally differentiated epithelial cells may manifest as oral hairy leukoplakia. This is a white, hyperkeratotic, benign lesion of low morbidity, affecting primarily the lateral border of the tongue. Factors that determine whether productive EBV replication within the oral epithelium will cause oral hairy leukoplakia include the fitness of local immune responses, the profile of EBV gene expression, and local environmental factors.", "title": "" }, { "docid": "be7662e67b3cff4991ae7249e8f8cde2", "text": "The kernelized correlation filter (KCF) is one of the state-of-the-art object trackers. However, it does not reasonably model the distribution of correlation response during tracking process, which might cause the drifting problem, especially when targets undergo significant appearance changes due to occlusion, camera shaking, and/or deformation. In this paper, we propose an output constraint transfer (OCT) method that by modeling the distribution of correlation response in a Bayesian optimization framework is able to mitigate the drifting problem. OCT builds upon the reasonable assumption that the correlation response to the target image follows a Gaussian distribution, which we exploit to select training samples and reduce model uncertainty. OCT is rooted in a new theory which transfers data distribution to a constraint of the optimized variable, leading to an efficient framework to calculate correlation filters. Extensive experiments on a commonly used tracking benchmark show that the proposed method significantly improves KCF, and achieves better performance than other state-of-the-art trackers. To encourage further developments, the source code is made available.", "title": "" }, { "docid": "4560e1b7318013be0688b8e73692fda4", "text": "This paper introduces a new real-time object detection approach named Yes-Net. It realizes the prediction of bounding boxes and class via single neural network like YOLOv2 and SSD, but owns more efficient and outstanding features. It combines local information with global information by adding the RNN architecture as a packed unit in CNN model to form the basic feature extractor. Independent anchor boxes coming from full-dimension kmeans is also applied in Yes-Net, it brings better average IOU than grid anchor box. In addition, instead of NMS, YesNet uses RNN as a filter to get the final boxes, which is more efficient. 
For 416 × 416 input, Yes-Net achieves 74.3% mAP on VOC2007 test at 39 FPS on an Nvidia Titan X Pascal.", "title": "" }, { "docid": "8a7a8de5cae191a4493e5a0e4f34bbf1", "text": "B-spline surfaces, although widely used, are incapable of describing surfaces of arbitrary topology. It is not possible to model a general closed surface or a surface with handles as a single non-degenerate B-spline. In practice such surfaces are often needed. In this paper, we present generalizations of biquadratic and bicubic B-spline surfaces that are capable of capturing surfaces of arbitrary topology (although restrictions are placed on the connectivity of the control mesh). These results are obtained by relaxing the sufficient but not necessary smoothness constraints imposed by B-splines and through the use of an n-sided generalization of B&eacute;zier surfaces called S-patches.", "title": "" }, { "docid": "bb4001c4cb5fde8d34fd48ee50eb053c", "text": "We consider the problem of identifying the causal direction between two discrete random variables using observational data. Unlike previous work, we keep the most general functional model but make an assumption on the unobserved exogenous variable: Inspired by Occam’s razor, we assume that the exogenous variable is simple in the true causal direction. We quantify simplicity using Rényi entropy. Our main result is that, under natural assumptions, if the exogenous variable has lowH0 entropy (cardinality) in the true direction, it must have high H0 entropy in the wrong direction. We establish several algorithmic hardness results about estimating the minimum entropy exogenous variable. We show that the problem of finding the exogenous variable with minimum H1 entropy (Shannon Entropy) is equivalent to the problem of finding minimum joint entropy given n marginal distributions, also known as minimum entropy coupling problem. We propose an efficient greedy algorithm for the minimum entropy coupling problem, that for n = 2 provably finds a local optimum. This gives a greedy algorithm for finding the exogenous variable with minimum Shannon entropy. Our greedy entropy-based causal inference algorithm has similar performance to the state of the art additive noise models in real datasets. One advantage of our approach is that we make no use of the values of random variables but only their distributions. Our method can therefore be used for causal inference for both ordinal and also categorical data, unlike additive noise models.", "title": "" }, { "docid": "3cde70842ee80663cbdc04db6a871d46", "text": "Artificial perception, in the context of autonomous driving, is the process by which an intelligent system translates sensory data into an effective model of the environment surrounding a vehicle. In this paper, and considering data from a 3D-LIDAR mounted onboard an intelligent vehicle, a 3D perception system based on voxels and planes is proposed for ground modeling and obstacle detection in urban environments. The system, which incorporates time-dependent data, is composed of two main modules: (i) an effective ground surface estimation using a piecewise plane fitting algorithm and RANSAC-method, and (ii) a voxel-grid model for static and moving obstacles detection using discriminative analysis and ego-motion information. This perception system has direct application in safety systems for intelligent vehicles, particularly in collision avoidance and vulnerable road users detection, namely pedestrians and cyclists. 
Experiments, using point-cloud data from a Velodyne LIDAR and localization data from an Inertial Navigation System were conducted for both a quantitative and a qualitative assessment of the static/moving obstacle detection module and for the surface estimation approach. Reported results, from experiments using the KITTI database, demonstrate the applicability and efficiency of the proposed approach in urban scenarios.", "title": "" }, { "docid": "4f37b872c44c2bda3ff62e3e8ebf4391", "text": "This paper proposes a method based on conditional random fields to incorporate sentence structure (syntax and semantics) and context information to identify sentiments of sentences within a document. It also proposes and evaluates two different active learning strategies for labeling sentiment data. The experiments with the proposed approach demonstrate a 5-15% improvement in accuracy on Amazon customer reviews compared to existing supervised learning and rule-based methods.", "title": "" }, { "docid": "b4e9cfc0dbac4a5d7f76001e73e8973d", "text": "Style transfer aims to apply the style of an exemplar model to a target one, while retaining the target’s structure. The main challenge in this process is to algorithmically distinguish style from structure, a high-level, potentially ill-posed cognitive task. Inspired by cognitive science research we recast style transfer in terms of shape analogies. In IQ testing, shape analogy queries present the subject with three shapes: source, target and exemplar, and ask them to select an output such that the transformation, or analogy, from the exemplar to the output is similar to that from the source to the target. The logical process involved in identifying the source-to-target analogies implicitly detects the structural differences between the source and target and can be used effectively to facilitate style transfer. Since the exemplar has a similar structure to the source, applying the analogy to the exemplar will provide the output we seek. The main technical challenge we address is to compute the source to target analogies, consistent with human logic. We observe that the typical analogies we look for consist of a small set of simple transformations, which when applied to the exemplar generate a continuous, seamless output model. To assemble a shape analogy, we compute an optimal set of source-to-target transformations, such that the assembled analogy best fits these criteria. The assembled analogy is then applied to the exemplar shape to produce the desired output model. We use the proposed framework to seamlessly transfer a variety of style properties between 2D and 3D objects and demonstrate significant improvements over the state of the art in style transfer. We further show that our framework can be used to successfully complete partial scans with the help of a user provided structural template, coherently propagating scan style across the completed surfaces.", "title": "" }, { "docid": "5e8154a99b4b0cc544cab604b680ebd2", "text": "This work presents performance of robust wearable antennas intended to operate in Wireless Body Area Networks (W-BAN) in UHF, TETRAPOL communication band, 380-400 MHz. We propose a Planar Inverted F Antenna (PIFA) as reliable antenna type for UHF W-BAN applications. In order to satisfy the robustness requirements of the UHF band, both from communication and mechanical aspect, a new technology for building these antennas was proposed. 
The antennas are built out of flexible conductive sheets encapsulated inside a silicone based elastomer, Polydimethylsiloxane (PDMS). The proposed antennas are resistive to washing, bending and perforating. From the communication point of view, opting for a PIFA antenna type we solve the problem of coupling to the wearer and thus improve the overall communication performance of the antenna. Several different tests and comparisons were performed in order to check the stability of the proposed antennas when they are placed on the wearer or left in a common everyday environ- ment, on the ground, table etc. S11 deviations are observed and compared with the commercially available wearable antennas. As a final check, the antennas were tested in the frame of an existing UHF TETRAPOL communication system. All the measurements were performed in a real university campus scenario, showing reliable and good performance of the proposed PIFA antennas.", "title": "" }, { "docid": "5f01e9cd6dc2f9bd051e172b3108f06d", "text": "Head pose estimation is recently a more and more popular area of research. For the last three decades new approaches have constantly been developed, and steadily better accuracy was achieved. Unsurprisingly, a very broad range of methods was explored statistical, geometrical and tracking-based to name a few. This paper presents a brief summary of the evolution of head pose estimation and a glimpse at the current state-of-the-art in this eld.", "title": "" }, { "docid": "4fa9db557f53fa3099862af87337cfa9", "text": "With the rapid development of E-commerce, recent years have witnessed the booming of online advertising industry, which raises extensive concerns of both academic and business circles. Among all the issues, the task of Click-through rates (CTR) prediction plays a central role, as it may influence the ranking and pricing of online ads. To deal with this task, the Factorization Machines (FM) model is designed for better revealing proper combinations of basic features. However, the sparsity of ads transaction data, i.e., a large proportion of zero elements, may severely disturb the performance of FM models. To address this problem, in this paper, we propose a novel Sparse Factorization Machines (SFM) model, in which the Laplace distribution is introduced instead of traditional Gaussian distribution to model the parameters, as Laplace distribution could better fit the sparse data with higher ratio of zero elements. Along this line, it will be beneficial to select the most important features or conjunctions with the proposed SFM model. Furthermore, we develop a distributed implementation of our SFM model on Spark platform to support the prediction task on mass dataset in practice. Comprehensive experiments on two large-scale real-world datasets clearly validate both the effectiveness and efficiency of our SFM model compared with several state-of-the-art baselines, which also proves our assumption that Laplace distribution could be more suitable to describe the online ads transaction data.", "title": "" }, { "docid": "1fc9a4a769c7ff6d6ddeff7e5df7986b", "text": "This paper describes a model of problem solving for use in collaborative agents. It is intended as a practical model for use in implemented systems, rather than a study of the theoretical underpinnings of collaborative action. The model is based on our experience in building a series of interactive systems in different domains, including route planning, emergency management, and medical advising. 
It is currently being used in an implemented, end-to- end spoken dialogue system in which the system assists a person in managing their medications. While we are primarily focussed on human-machine collaboration, we believe that the model will equally well apply to interactions between sophisticated software agents that need to coordinate their activities.", "title": "" }, { "docid": "937de8ba80bd92084f9c2886a28874d1", "text": "Android security has been a hot spot recently in both academic research and public concerns due to numerous instances of security attacks and privacy leakage on Android platform. Android security has been built upon a permission based mechanism which restricts accesses of third-party Android applications to critical resources on an Android device. Such permission based mechanism is widely criticized for its coarse-grained control of application permissions and difficult management of permissions by developers, marketers, and end-users. In this paper, we investigate the arising issues in Android security, including coarse granularity of permissions, incompetent permission administration, insufficient permission documentation, over-claim of permissions, permission escalation attack, and TOCTOU (Time of Check to Time of Use) attack. We illustrate the relationships among these issues, and investigate the existing countermeasures to address these issues. In particular, we provide a systematic review on the development of these countermeasures, and compare them according to their technical features. Finally, we propose several methods to further mitigate the risk in Android security. a 2014 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "b0b2e50ea9020f6dd6419fbb0520cdfd", "text": "Social interactions, such as an aggressive encounter between two conspecific males or a mating encounter between a male and a female, typically progress from an initial appetitive or motivational phase, to a final consummatory phase. This progression involves both changes in the intensity of the animals' internal state of arousal or motivation and sequential changes in their behavior. How are these internal states, and their escalating intensity, encoded in the brain? Does this escalation drive the progression from the appetitive/motivational to the consummatory phase of a social interaction and, if so, how are appropriate behaviors chosen during this progression? Recent work on social behaviors in flies and mice suggests possible ways in which changes in internal state intensity during a social encounter may be encoded and coupled to appropriate behavioral decisions at appropriate phases of the interaction. These studies may have relevance to understanding how emotion states influence cognitive behavioral decisions at higher levels of brain function.", "title": "" }, { "docid": "a0d49d0f2dd9ef4fabf98d36f0180347", "text": "This study draws on the work/family border theory to investigate the role of information communication technology (ICT) use at home in shaping the characteristics of work/family borders (i.e. flexibility and permeability) and consequently influencing individuals’ perceived work-family conflict, technostress, and level of telecommuting. Data were collected from a probability sample of 509 information workers in Hong Kong who were not selfemployed. The results showed that the more that people used ICT to do their work at home, the greater they perceived their work/family borders flexible and permeable. 
Interestingly, low flexibility and high permeability, rather than the use of ICT at home, had much stronger influences on increasing, in particular, family-to-work conflict. As expected, work-to-family conflict was significantly and positively associated with technostress. Results also showed that the telecommuters tended to be older, had lower family incomes, used ICT frequently at home, and had a permeable boundary that allowed work to penetrate their home domain. The theoretical and practical implications are discussed. 2016 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "0a7f93e98e1d256ea6a4400f33753d6a", "text": "In this paper, we investigate safe and efficient map-building strategies for a mobile robot with imperfect control and sensing. In the implementation, a robot equipped with a range sensor builds a polygonal map (layout) of a previously unknown indoor environment. The robot explores the environment and builds the map concurrently by patching together the local models acquired by the sensor into a global map. A well-studied and related problem is the simultaneous localization and mapping (SLAM) problem, where the goal is to integrate the information collected during navigation into the most accurate map possible. However, SLAM does not address the sensor-placement portion of the map-building task. That is, given the map built so far, where should the robot go next? This is the main question addressed in this paper. Concretely, an algorithm is proposed to guide the robot through a series of “good” positions, where “good” refers to the expected amount and quality of the information that will be revealed at each new location. This is similar to the next-best-view (NBV) problem studied in computer vision and graphics. However, in mobile robotics the problem is complicated by several issues, two of which are particularly crucial. One is to achieve safe navigation despite an incomplete knowledge of the environment and sensor limitations (e.g., in range and incidence). The other issue is the need to ensure sufficient overlap between each new local model and the current map, in order to allow registration of successive views under positioning uncertainties inherent to mobile robots. To address both issues in a coherent framework, in this paper we introduce the concept of a safe region, defined as the largest region that is guaranteed to be free of obstacles given the sensor readings made so far. The construction of a safe region takes sensor limitations into account. In this paper we also describe an NBV algorithm that uses the safe-region concept to select the next robot position at each step. The new position is chosen within the safe region in order to maximize the expected gain of information under the constraint that the local model at this new position must have a minimal overlap with the current global map. In the future, NBV and SLAM algorithms should reinforce each other. While a SLAM algorithm builds a map by making the best use of the available sensory data, an NBV algorithm, such as that proposed here, guides the navigation of the robot through positions selected to provide the best sensory inputs.
KEY WORDS—next-best view, safe region, online exploration, incidence constraints, map building", "title": "" }, { "docid": "dfde48aa79ac10382fe4b9a312662cd9", "text": "Due to rapid advances and availabilities of powerful image processing software's, it is easy to manipulate and modify digital images. So it is very difficult for a viewer to judge the authenticity of a given image. Nowadays, it is possible to add or remove important features from an image without leaving any obvious traces of tampering. As digital cameras and video cameras replace their analog counterparts, the need for authenticating digital images, validating their content and detecting forgeries will only increase. For digital photographs to be used as evidence in law issues or to be circulated in mass media, it is necessary to check the authenticity of the image. So In this paper, describes an Image forgery detection method based on SIFT. In particular, we focus on detection of a special type of digital forgery – the copy-move attack, in a copy-move image forgery method; a part of an image is copied and then pasted on a different location within the same image. In this approach an improved algorithm based on scale invariant features transform (SIFT) is used to detect such cloning forgery, In this technique Transform is applied to the input image to yield a reduced dimensional representation, After that Apply key point detection and feature descriptor along with a matching over all the key points. Such a method allows us to both understand if a copy–move attack has occurred and, also furthermore gives output by applying clustering over matched points.", "title": "" } ]
scidocsrr
539b8778fa5e2573c9d6a1c3627ba881
The development of reading in children who speak English as a second language.
[ { "docid": "4272b4a73ecd9d2b60e0c60de0469f17", "text": "Suggesting that empirical work in the field of reading has advanced sufficiently to allow substantial agreed-upon results and conclusions, this literature review cuts through the detail of partially convergent, sometimes discrepant research findings to provide an integrated picture of how reading develops and how reading instruction should proceed. The focus of the review is prevention. Sketched is a picture of the conditions under which reading is most likely to develop easily--conditions that include stimulating preschool environments, excellent reading instruction, and the absence of any of a wide array of risk factors. It also provides recommendations for practice as well as recommendations for further research. After a preface and executive summary, chapters are (1) Introduction; (2) The Process of Learning to Read; (3) Who Has Reading Difficulties; (4) Predictors of Success and Failure in Reading; (5) Preventing Reading Difficulties before Kindergarten; (6) Instructional Strategies for Kindergarten and the Primary Grades; (7) Organizational Strategies for Kindergarten and the Primary Grades; (8) Helping Children with Reading Difficulties in Grades 1 to 3; (9) The Agents of Change; and (10) Recommendations for Practice and Research. Contains biographical sketches of the committee members and an index. Contains approximately 800 references.", "title": "" } ]
[ { "docid": "ed06666ec688b6a57b2f3eaa57853dcd", "text": "Sensor fusion is indispensable to improve accuracy and robustness in an autonomous navigation setting. However, in the space of end-to-end sensorimotor control, this multimodal outlook has received limited attention. In this work, we propose a novel stochastic regularization technique, called Sensor Dropout, to robustify multimodal sensor policy learning outcomes. We also introduce an auxiliary loss on policy network along with the standard DRL loss in order to reduce variance in actions of the multimodal sensor policy. Through extensive empirical testing, we demonstrate that our proposed policy can 1) operate with minimal performance drop in noisy environments and 2) remain functional even in the face of a sensor subset failure. Finally, through the visualization of gradients, we show that the learned policies are conditioned on the same latent input distribution despite having multiple and diverse observations spaces a hallmark of true sensorfusion. This efficacy of a multimodal sensor policy is shown through simulations on TORCS, a popular open-source racing car game. A demo video can be seen here: https://youtu.be/HC3TcJjXf3Q.", "title": "" }, { "docid": "5325138fcbb52c61903e7bb9bd1c890b", "text": "To simulate an efficient Intrusion Detection System (IDS) model, enormous amount of data are required to train and testing the model. To improve the accuracy and efficiency of the model, it is essential to infer the statistical properties from the observable elements of th e dataset. In this work, we have proposed some data preprocessing techniques such as filling the missing values, removing redundant samples, reduce the dimension, selecting most relevant features and finally, normalize the samples. After data preprocessing, we have simulated and tested the dataset by applying various data mining algorithms such as Support Vector Machine (SVM), Decision Tree, K nearest neighbor, K-Mean and Fuzzy C-Mean Clustering which provides better result in less computational time.", "title": "" }, { "docid": "51a9180623be4ddaf514377074edc379", "text": "Breast region measurements are important for research, but they may also become significant in the legal field as a quantitative tool for preoperative and postoperative evaluation. Direct anthropometric measurements can be taken in clinical practice. The aim of this study was to compare direct breast anthropometric measurements taken with a tape measure and a compass. Forty women, aged 18–60 years, were evaluated. They had 14 anatomical landmarks marked on the breast region and arms. The union of these points formed eight linear segments and one angle for each side of the body. The volunteers were evaluated by direct anthropometry in a standardized way, using a tape measure and a compass. Differences were found between the tape measure and the compass measurements for all segments analyzed (p > 0.05). Measurements obtained by tape measure and compass are not identical. Therefore, once the measurement tool is chosen, it should be used for the pre- and postoperative measurements in a standardized way. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266 .", "title": "" }, { "docid": "39f5413a937587b3afc9bbd9ee4b735f", "text": "examples in learning math. Science, 320(5875), 454–455. 
doi: 10.1126/science.1154659 Kaminski, J. A., Sloutsky, V. M., & Heckler, A. (2009). Transfer of mathematical knowledge: The portability of generic instantiations. Child Development Perspectives, 3(3), 151–155. doi:10.1111/j.1750-8606", "title": "" }, { "docid": "5f42f43bf4f46b821dac3b0d0be2f63a", "text": "The autonomous overtaking maneuver is a valuable technology in unmanned vehicle field. However, overtaking is always perplexed by its security and time cost. Now, an autonomous overtaking decision making method based on deep Q-learning network is proposed in this paper, which employs a deep neural network(DNN) to learn Q function from action chosen to state transition. Based on the trained DNN, appropriate action is adopted in different environments for higher reward state. A series of experiments are performed to verify the effectiveness and robustness of our proposed approach for overtaking decision making based on deep Q-learning method. The results support that our approach achieves better security and lower time cost compared with traditional reinforcement learning methods.", "title": "" }, { "docid": "9ed5fdb991edd5de57ffa7f13121f047", "text": "We analyze the increasing threats against IoT devices. We show that Telnet-based attacks that target IoT devices have rocketed since 2014. Based on this observation, we propose an IoT honeypot and sandbox, which attracts and analyzes Telnet-based attacks against various IoT devices running on different CPU architectures such as ARM, MIPS, and PPC. By analyzing the observation results of our honeypot and captured malware samples, we show that there are currently at least 5 distinct DDoS malware families targeting Telnet-enabled IoT devices and one of the families has quickly evolved to target more devices with as many as 9 different CPU architectures.", "title": "" }, { "docid": "a30c2a8d3db81ae121e62af5994d3128", "text": "Recent advances in the fields of robotics, cyborg development, moral psychology, trust, multi agent-based systems and socionics have raised the need for a better understanding of ethics, moral reasoning, judgment and decision-making within the system of man and machines. Here we seek to understand key research questions concerning the interplay of ethical trust at the individual level and the social moral norms at the collective end. We review salient works in the fields of trust and machine ethics research, underscore the importance and the need for a deeper understanding of ethical trust at the individual level and the development of collective social moral norms. Drawing upon the recent findings from neural sciences on mirror-neuron system (MNS) and social cognition, we present a bio-inspired Computational Model of Ethical Trust (CMET) to allow investigations of the interplay of ethical trust and social moral norms.", "title": "" }, { "docid": "7aa07ba3e04a79cf51dfc9c42b415628", "text": "A model is presented that permits the calculation of densities of 60-Hz magnetic fields throughout a residence from only a few measurements. We assume that residential magnetic fields are produced by sources external to the house and by the residential grounding circuit. The field from external sources is measured with a single probe. The field produced by the grounding circuit is calculated from the current flowing in the circuit and its geometry. The two fields are combined to give a prediction of the total field at any point in the house. 
A data-acquisition system was built to record the magnitude and phase of the grounding current and the field from external sources. The model's predictions were compared with measurements of the total magnetic field at a single location in 23 houses; a correlation coefficient of .87 was obtained, indicating that the model has good predictive capability. A more detailed study that was carried out in one house permitted comparisons of measurements with the model's predictions at locations throughout the house. Again, quite reasonable agreement was found. We also investigated the temporal variability of field readings in this house. Daily magnetic field averages were found to be considerably more stable than hourly averages. Finally, we demonstrate the use of the model in creating a profile of the magnetic fields in a home.", "title": "" }, { "docid": "4a9913930e2e07b867cc701b07e88eaa", "text": "There is little doubt that the incidence of depression in Britain is increasing. According to research at the Universities of London and Warwick, the incidence of depression among young people has doubled in the past 12 years. However, whether young or old, the question is why and what can be done? There are those who argue that the increasingly common phenomenon of depression is primarily psychological, and best dealt with by counselling. There are others who consider depression as a biochemical phenomenon, best dealt with by antidepressant medication. However, there is a third aspect to the onset and treatment of depression that is given little heed: nutrition. Why would nutrition have anything to do with depression? Firstly, we have seen a significant decline in fruit and vegetable intake (rich in folic acid), in fish intake (rich in essential fats) and an increase in sugar consumption, from 2 lb a year in the 1940s to 150 lb a year in many of today’s teenagers. Each of these nutrients is strongly linked to depression and could, theoretically, contribute to increasing rates of depression. Secondly, if depression is a biochemical imbalance it makes sense to explore how the brain normalises its own biochemistry, using nutrients as the precursors for key neurotransmitters such as serotonin. Thirdly, if 21st century living is extra-stressful, it would be logical to assume that increasing psychological demands would also increase nutritional requirements since the brain is structurally and functionally completely dependent on nutrients. So, what evidence is there to support suboptimal nutrition as a potential contributor to depression? These are the common imbalances connected to nutrition that are known to worsen your mood and motivation:", "title": "" }, { "docid": "d42bbb6fe8d99239993ed01aa44c32ef", "text": "Chemical communication plays a very important role in the lives of many social insects. Several different types of pheromones (species-specific chemical messengers) of ants have been described, particularly those involved in recruitment, recognition, territorial and alarm behaviours. Properties of pheromones include activity in minute quantities (thus requiring sensitive methods for chemical analysis) and specificity (which can have chemotaxonomic uses). Ants produce pheromones in various exocrine glands, such as the Dufour, poison, pygidial and mandibular glands. 
A wide range of substances have been identified from these glands.", "title": "" }, { "docid": "82ef80d6257c5787dcf9201183735497", "text": "Big data is becoming a research focus in intelligent transportation systems (ITS), which can be seen in many projects around the world. Intelligent transportation systems will produce a large amount of data. The produced big data will have profound impacts on the design and application of intelligent transportation systems, which makes ITS safer, more efficient, and profitable. Studying big data analytics in ITS is a flourishing field. This paper first reviews the history and characteristics of big data and intelligent transportation systems. The framework of conducting big data analytics in ITS is discussed next, where the data source and collection methods, data analytics methods and platforms, and big data analytics application categories are summarized. Several case studies of big data analytics applications in intelligent transportation systems, including road traffic accidents analysis, road traffic flow prediction, public transportation service plan, personal travel route plan, rail transportation management and control, and assets maintenance are introduced. Finally, this paper discusses some open challenges of using big data analytics in ITS.", "title": "" }, { "docid": "2528b23554f934a67b3ed66f7df9d79e", "text": "In this paper, we implemented an approach to predict final exam scores from early course assessments of the students during the semester. We used a linear regression model to check which part of the evaluation of the course assessment affects final exam score the most. In addition, we explained the origins of data mining and data mining in education. After preprocessing and preparing data for the task in hand, we implemented the linear regression model. The results of our work show that quizzes are most accurate predictors of final exam scores compared to other kinds of assessments.", "title": "" }, { "docid": "6d4cd80341c429ecaaccc164b1bde5f9", "text": "One hundred and two olive RAPD profiles were sampled from all around the Mediterranean Basin. Twenty four clusters of RAPD profiles were shown in the dendrogram based on the Ward’s minimum variance algorithm using chi-square distances. Factorial discriminant analyses showed that RAPD profiles were correlated with the use of the fruits and the country or region of origin of the cultivars. This suggests that cultivar selection has occurred in different genetic pools and in different areas. Mitochondrial DNA RFLP analyses were also performed. These mitotypes supported the conclusion also that multilocal olive selection has occurred. This prediction for the use of cultivars will help olive growers to choose new foreign cultivars for testing them before an eventual introduction if they are well adapted to local conditions.", "title": "" }, { "docid": "e910310c5cc8357c570c6c4110c4e94f", "text": "Epistemic planning can be used for decision making in multi-agent situations with distributed knowledge and capabilities. Dynamic Epistemic Logic (DEL) has been shown to provide a very natural and expressive framework for epistemic planning. In this paper, we aim to give an accessible introduction to DEL-based epistemic planning. 
The paper starts with the most classical framework for planning, STRIPS, and then moves towards epistemic planning in a number of smaller steps, where each step is motivated by the need to be able to model more complex planning scenarios.", "title": "" }, { "docid": "eaae33cb97b799eff093a7a527143346", "text": "RGB Video now is one of the major data sources of traffic surveillance applications. In order to detect the possible traffic events in the video, traffic-related objects, such as vehicles and pedestrians, should be first detected and recognized. However, due to the 2D nature of the RGB videos, there are technical difficulties in efficiently detecting and recognizing traffic-related objects from them. For instance, the traffic-related objects cannot be efficiently detected in separation while parts of them overlap, and complex background will influence the accuracy of the object detection. In this paper, we propose a robust RGB-D data based traffic scene understanding algorithm. By integrating depth information, we can calculate more discriminative object features and spatial information can be used to separate the objects in the scene efficiently. Experimental results show that integrating depth data can improve the accuracy of object detection and recognition. We also show that the analyzed object information plus depth data facilitate two important traffic event detection applications: overtaking warning and collision", "title": "" }, { "docid": "c57d4b7ea0e5f7126329626408f1da2d", "text": "Educational Data Mining (EDM) is an interdisciplinary ingenuous research area that handles the development of methods to explore data arising in a scholastic fields. Computational approaches used by EDM is to examine scholastic data in order to study educational questions. As a result, it provides intrinsic knowledge of teaching and learning process for effective education planning. This paper conducts a comprehensive study on the recent and relevant studies put through in this field to date. The study focuses on methods of analysing educational data to develop models for improving academic performances and improving institutional effectiveness. This paper accumulates and relegates literature, identifies consequential work and mediates it to computing educators and professional bodies. We identify research that gives well-fortified advice to amend edifying and invigorate the more impuissant segment students in the institution. The results of these studies give insight into techniques for ameliorating pedagogical process, presaging student performance, compare the precision of data mining algorithms, and demonstrate the maturity of open source implements.", "title": "" }, { "docid": "5f5c78b74e1e576dd48690b903bf4de4", "text": "Automatic facial expression recognition has been an active topic in computer science for over two decades, in particular facial action coding system action unit (AU) detection and classification of a number of discrete emotion states from facial expressive imagery. Standardization and comparability have received some attention; for instance, there exist a number of commonly used facial expression databases. However, lack of a commonly accepted evaluation protocol and, typically, lack of sufficient details needed to reproduce the reported individual results make it difficult to compare systems. This, in turn, hinders the progress of the field. A periodical challenge in facial expression recognition would allow such a comparison on a level playing field. 
It would provide an insight on how far the field has come and would allow researchers to identify new goals, challenges, and targets. This paper presents a meta-analysis of the first such challenge in automatic recognition of facial expressions, held during the IEEE conference on Face and Gesture Recognition 2011. It details the challenge data, evaluation protocol, and the results attained in two subchallenges: AU detection and classification of facial expression imagery in terms of a number of discrete emotion categories. We also summarize the lessons learned and reflect on the future of the field of facial expression recognition in general and on possible future challenges in particular.", "title": "" }, { "docid": "7fc10687c97d2219ce8555dd92baf57c", "text": "The wind-induced response of tall buildings is inherently sensitive to structural dynamic properties like frequency and damping ratio. The latter parameter in particular is fraught with uncertainty in the design stage and may result in a built structure whose acceleration levels exceed design predictions. This reality has motivated the need to monitor tall buildings in full-scale. This paper chronicles the authors’ experiences in the analysis of full-scale dynamic response data from tall buildings around the world, including full-scale datasets from high rises in Boston, Chicago, and Seoul. In particular, this study focuses on the effects of coupling, beat phenomenon, amplitude dependence, and structural system type on dynamic properties, as well as correlating observed periods of vibration against finite element predictions. The findings suggest the need for time–frequency analyses to identify coalescing modes and the mechanisms spurring them. The study also highlighted the effect of this phenomenon on damping values, the overestimates that can result due to amplitude dependence, as well as the comparatively larger degree of energy dissipation experienced by buildings dominated by frame action. Copyright © 2007 John Wiley & Sons, Ltd.", "title": "" }, { "docid": "f6b4ab40746d0c8c7e2b0113402667a9", "text": "This paper presents a method for measuring the semantic similarity between concepts in Knowledge Graphs (KGs) such as WordNet and DBpedia. Previous work on semantic similarity methods have focused on either the structure of the semantic network between concepts (e.g., path length and depth), or only on the Information Content (IC) of concepts. We propose a semantic similarity method, namely wpath, to combine these two approaches, using IC to weight the shortest path length between concepts. Conventional corpus-based IC is computed from the distributions of concepts over textual corpus, which is required to prepare a domain corpus containing annotated concepts and has high computational cost. As instances are already extracted from textual corpus and annotated by concepts in KGs, graph-based IC is proposed to compute IC based on the distributions of concepts over instances. Through experiments performed on well known word similarity datasets, we show that the wpath semantic similarity method has produced a statistically significant improvement over other semantic similarity methods.
Moreover, in a real category classification evaluation, the wpath method has shown the best performance in terms of accuracy and F score.", "title": "" }, { "docid": "72c79181572c836cb92aac8fe7a14c5d", "text": "When automatic plagiarism detection is carried out considering a reference corpus, a suspicious text is compared to a set of original documents in order to relate the plagiarised text fragments to their potential source. One of the biggest difficulties in this task is to locate plagiarised fragments that have been modified (by rewording, insertion or deletion, for example) from the source text. The definition of proper text chunks as comparison units of the suspicious and original texts is crucial for the success of this kind of applications. Our experiments with the METER corpus show that the best results are obtained when considering low level word n-grams comparisons (n = {2, 3}).", "title": "" } ]
scidocsrr
7c91f8804822ca77c4c1a48f78bfdd61
A Simple Model for Classifying Web Queries by User Intent
[ { "docid": "28ea3d754c1a28ccfeb8a6e884898f96", "text": "Understanding users' search intent expressed through their search queries is crucial to Web search and online advertisement. Web query classification (QC) has been widely studied for this purpose. Most previous QC algorithms classify individual queries without considering their context information. However, as exemplified by the well-known example on query \"jaguar\", many Web queries are short and ambiguous, whose real meanings are uncertain without the context information. In this paper, we incorporate context information into the problem of query classification by using conditional random field (CRF) models. In our approach, we use neighboring queries and their corresponding clicked URLs (Web pages) in search sessions as the context information. We perform extensive experiments on real world search logs and validate the effectiveness and efficiency of our approach. We show that we can improve the F1 score by 52% as compared to other state-of-the-art baselines.", "title": "" } ]
[ { "docid": "5ebdf5b9986df77e6b10bcf820b41a6c", "text": "Many neural networks can be regarded as attempting to approximate a multivariate function in terms of one-input one-output units. This note considers the problem of an exact representation of nonlinear mappings in terms of simpler functions of fewer variables. We review Kolmogorov's theorem on the representation of functions of several variables in terms of functions of one variable and show that it is irrelevant in the context of networks for learning.", "title": "" }, { "docid": "42303331bf6713c1809468532c153693", "text": "................................................................................................................................................ V Table of", "title": "" }, { "docid": "36c26d1be5d9ef1ffaf457246bbc3c90", "text": "In knowledge grounded conversation, domain knowledge plays an important role in a special domain such as Music. The response of knowledge grounded conversation might contain multiple answer entities or no entity at all. Although existing generative question answering (QA) systems can be applied to knowledge grounded conversation, they either have at most one entity in a response or cannot deal with out-ofvocabulary entities. We propose a fully data-driven generative dialogue system GenDS that is capable of generating responses based on input message and related knowledge base (KB). To generate arbitrary number of answer entities even when these entities never appear in the training set, we design a dynamic knowledge enquirer which selects different answer entities at different positions in a single response, according to different local context. It does not rely on the representations of entities, enabling our model deal with out-ofvocabulary entities. We collect a human-human conversation data (ConversMusic) with knowledge annotations. The proposed method is evaluated on CoversMusic and a public question answering dataset. Our proposed GenDS system outperforms baseline methods significantly in terms of the BLEU, entity accuracy, entity recall and human evaluation. Moreover,the experiments also demonstrate that GenDS works better even on small datasets.", "title": "" }, { "docid": "aeb12453020541d2465438e0868f6402", "text": "Location-based Services are emerging as popular applications in pervasive computing. Spatial k-anonymity is used in Locationbased Services to protect privacy, by hiding the association of a specific query with a specific user. Unfortunately, this approach fails in many practical cases such as: (i) personalized services, where the user identity is required, or (ii) applications involving groups of users (e.g., employees of the same company); in this case, associating a query to any member of the group, violates privacy. In this paper, we introduce the concept of Location Diversity, which solves the above-mentioned problems. Location Diversity improves Spatial k-anonymity by ensuring that each query can be associated with at least ` different semantic locations (e.g., school, shop, hospital, etc). We present an attack model that maps each observed query to a linear equation involving semantic locations, and we show that a necessary condition to preserve privacy is the existence of infinite solutions in the resulting system of linear equations. Based on this observation, we develop algorithms that generate groups of semantic locations, which preserve privacy and minimize the expected query processing and communication cost. 
The experimental evaluation demonstrates that our approach significantly reduces the privacy threats, while incurring minimal overhead.", "title": "" }, { "docid": "6e893839d1d4698698d38eb18073251a", "text": "The sequence-to-sequence (seq2seq) approach for low-resource ASR is a relatively new direction in speech research. The approach benefits by performing model training without using a lexicon or alignments. However, this poses a new problem of requiring more data compared to conventional DNN-HMM systems. In this work, we attempt to use data from 10 BABEL languages to build a multilingual seq2seq model as a prior model, and then port it towards 4 other BABEL languages using a transfer learning approach. We also explore different architectures for improving the prior multilingual seq2seq model. The paper also discusses the effect of integrating a recurrent neural network language model (RNNLM) with a seq2seq model during decoding. Experimental results show that the transfer learning approach from the multilingual model shows substantial gains over monolingual models across all 4 BABEL languages. Incorporating an RNNLM also brings significant improvements in terms of %WER, and achieves recognition performance comparable to that of models trained with twice as much training data.", "title": "" }, { "docid": "1dc7b9dc4f135625e2680dcde8c9e506", "text": "This paper empirically analyzes different effects of advertising in a nondurable, experience good market. A dynamic learning model of consumer behavior is presented in which we allow both \"informative\" effects of advertising and \"prestige\" or \"image\" effects of advertising. This learning model is estimated using consumer level panel data tracking grocery purchases and advertising exposures over time. Empirical results suggest that in this data, advertising's primary effect was that of informing consumers. The estimates are used to quantify the value of this information to consumers and evaluate welfare implications of an alternative advertising regulatory regime. JEL Classifications: D12, M37, D83. Economics Dept., Boston University, Boston, MA 02115 (ackerber@bu.edu). This paper is a revised version of the second and third chapters of my doctoral dissertation at Yale University. Many thanks to my advisors: Steve Berry and Ariel Pakes, as well as Lanier Benkard, Russell Cooper, Gautam Gowrisankaran, Sam Kortum, Mike Riordan, John Rust, Roni Shachar, and many seminar participants, including most recently those at the NBER 1997 Winter IO meetings, for advice and comments. I thank the Yale School of Management for gratefully providing the data used in this study. Financial support from the Cowles Foundation in the form of the Arvid Anderson Dissertation Fellowship is acknowledged and appreciated. All remaining errors in this paper are my own.", "title": "" }, { "docid": "d4954bab5fc4988141c509a6d6ab79db", "text": "Recent advances in neural autoregressive models have improved the performance of speech synthesis (SS). However, as they lack the ability to model global characteristics of speech (such as speaker individualities or speaking styles), particularly when these characteristics have not been labeled, making neural autoregressive SS systems more expressive is still an open issue. In this paper, we propose to combine VoiceLoop, an autoregressive SS model, with Variational Autoencoder (VAE).
This approach, unlike traditional autoregressive SS systems, uses VAE to model the global characteristics explicitly, enabling the expressiveness of the synthesized speech to be controlled in an unsupervised manner. Experiments using the VCTK and Blizzard2012 datasets show the VAE helps VoiceLoop to generate higher quality speech and to control the expressions in its synthesized speech by incorporating global characteristics into the speech generating process.", "title": "" }, { "docid": "2e3cee13657129d26ec236f9d2641e6c", "text": "Due to the prevalence of social media websites, one challenge facing computer vision researchers is to devise methods to process and search for persons of interest among the billions of shared photos on these websites. Facebook revealed in a 2013 white paper that its users have uploaded more than 250 billion photos, and are uploading 350 million new photos each day. Due to this humongous amount of data, large-scale face search for mining web images is both important and challenging. Despite significant progress in face recognition, searching a large collection of unconstrained face images has not been adequately addressed. To address this challenge, we propose a face search system which combines a fast search procedure, coupled with a state-of-the-art commercial off the shelf (COTS) matcher, in a cascaded framework. Given a probe face, we first filter the large gallery of photos to find the top-k most similar faces using deep features generated from a convolutional neural network. The k retrieved candidates are re-ranked by combining similarities from deep features and the COTS matcher. We evaluate the proposed face search system on a gallery containing 80 million web-downloaded face images. Experimental results demonstrate that the deep features are competitive with state-of-the-art methods on unconstrained face recognition benchmarks (LFW and IJB-A). More specifically, on the LFW database, we achieve 98.23% accuracy under the standard protocol and a verification rate of 87.65% at FAR of 0.1% under the BLUFR protocol. For the IJB-A benchmark, our accuracies are as follows: TAR of 51.4% at FAR of 0.1% (verification); Rank 1 retrieval of 82.0% (closed-set search); FNIR of 61.7% at FPIR of 1% (open-set search). Further, the proposed face search system offers an excellent trade-off between accuracy and scalability on datasets consisting of millions of images. 
Additionally, in an experiment involving searching for face images of the Tsarnaev brothers, convicted of the Boston Marathon bombing, the proposed cascade face search system could find the younger brother’s (Dzhokhar Tsarnaev) photo at rank 1 in 1 second on a 5M gallery and at rank 8 in 7 seconds", "title": "" }, { "docid": "26a599c22c173f061b5d9579f90fd888", "text": "markov logic an interface layer for artificial markov logic an interface layer for artificial shinichi tsukada in size 22 syyjdjbook.buncivy yumina ooba in size 24 ajfy7sbook.ztoroy okimi in size 15 edemembookkey.16mb markov logic an interface layer for artificial intelligent systems (ai-2) ubc computer science interface layer for artificial intelligence daniel lowd essential principles for autonomous robotics markovlogic: aninterfacelayerfor arti?cialintelligence official encyclopaedia of sheffield united football club hot car hot car firext answers || 2007 acura tsx hitch manual course syllabus university of texas at dallas jump frog jump cafebr 1994 chevy silverado 1500 engine ekpbs readings in earth science alongs johnson owners manual pdf firext thomas rescues the diesels cafebr dead sea scrolls and the jewish origins of christianity install gimp help manual by iitsuka asao vox diccionario abreviado english spanis mdmtv nobutaka in size 26 bc13xqbookog.xxuz mechanisms in b cell neoplasia 1992 workshop at the spocks world diane duane nabbit treasury of saints fiores reasoning with probabilistic university of texas at austin gp1300r yamaha waverunner service manua by takisawa tomohide repair manual haier hpr10xc6 air conditioner birdz mexico icons mexico icons oobags asus z53 manual by hatsutori yoshino industrial level measurement by haruyuki morimoto", "title": "" }, { "docid": "91e93ebb9503a83f20d349d87d8f74dd", "text": "Data stream mining is an active research area that has recently emerged to discover knowledge from large amounts of continuously generated data. In this context, several data stream clustering algorithms have been proposed to perform unsupervised learning. Nevertheless, data stream clustering imposes several challenges to be addressed, such as dealing with nonstationary, unbounded data that arrive in an online fashion. The intrinsic nature of stream data requires the development of algorithms capable of performing fast and incremental processing of data objects, suitably addressing time and memory limitations. In this article, we present a survey of data stream clustering algorithms, providing a thorough discussion of the main design components of state-of-the-art algorithms. In addition, this work addresses the temporal aspects involved in data stream clustering, and presents an overview of the usually employed experimental methodologies. A number of references are provided that describe applications of data stream clustering in different domains, such as network intrusion detection, sensor networks, and stock market analysis. Information regarding software packages and data repositories are also available for helping researchers and practitioners. Finally, some important issues and open questions that can be subject of future research are discussed.", "title": "" }, { "docid": "21d9828d0851b4ded34e13f8552f3e24", "text": "Light field cameras have been recently shown to be very effective in applications such as digital refocusing and 3D reconstruction. In a single snapshot these cameras provide a sample of the light field of a scene by trading off spatial resolution with angular resolution. 
Current methods produce images at a resolution that is much lower than that of traditional imaging devices. However, by explicitly modeling the image formation process and incorporating priors such as Lambertianity and texture statistics, these types of images can be reconstructed at a higher resolution. We formulate this method in a variational Bayesian framework and perform the reconstruction of both the surface of the scene and the (superresolved) light field. The method is demonstrated on both synthetic and real images captured with our light-field camera prototype.", "title": "" }, { "docid": "89d4143e7845d191433882f3fa5aaa26", "text": "There is a large variety of objects and appliances in human environments, such as stoves, coffee dispensers, juice extractors, and so on. It is challenging for a roboticist to program a robot for each of these object types and for each of their instantiations. In this work, we present a novel approach to manipulation planning based on the idea that many household objects share similarly-operated object parts. We formulate the manipulation planning as a structured prediction problem and design a deep learning model that can handle large noise in the manipulation demonstrations and learns features from three different modalities: point-clouds, language and trajectory. In order to collect a large number of manipulation demonstrations for different objects, we developed a new crowd-sourcing platform called Robobarista. We test our model on our dataset consisting of 116 objects with 249 parts along with 250 language instructions, for which there are 1225 crowd-sourced manipulation demonstrations. We further show that our robot can even manipulate objects it has never seen before. Keywords— Robotics and Learning, Crowd-sourcing, Manipulation", "title": "" }, { "docid": "b26882cddec1690e3099757e835275d2", "text": "Accumulating evidence suggests that, independent of physical activity levels, sedentary behaviours are associated with increased risk of cardio-metabolic disease, all-cause mortality, and a variety of physiological and psychological problems. Therefore, the purpose of this systematic review is to determine the relationship between sedentary behaviour and health indicators in school-aged children and youth aged 5-17 years. Online databases (MEDLINE, EMBASE and PsycINFO), personal libraries and government documents were searched for relevant studies examining time spent engaging in sedentary behaviours and six specific health indicators (body composition, fitness, metabolic syndrome and cardiovascular disease, self-esteem, pro-social behaviour and academic achievement). 232 studies including 983,840 participants met inclusion criteria and were included in the review. Television (TV) watching was the most common measure of sedentary behaviour and body composition was the most common outcome measure. Qualitative analysis of all studies revealed a dose-response relation between increased sedentary behaviour and unfavourable health outcomes. Watching TV for more than 2 hours per day was associated with unfavourable body composition, decreased fitness, lowered scores for self-esteem and pro-social behaviour and decreased academic achievement. Meta-analysis was completed for randomized controlled studies that aimed to reduce sedentary time and reported change in body mass index (BMI) as their primary outcome. 
In this regard, a meta-analysis revealed an overall significant effect of -0.81 (95% CI of -1.44 to -0.17, p = 0.01) indicating an overall decrease in mean BMI associated with the interventions. There is a large body of evidence from all study designs which suggests that decreasing any type of sedentary time is associated with lower health risk in youth aged 5-17 years. In particular, the evidence suggests that daily TV viewing in excess of 2 hours is associated with reduced physical and psychosocial health, and that lowering sedentary time leads to reductions in BMI.", "title": "" }, { "docid": "9af37841feed808345c39ee96ddff914", "text": "Wake-up receivers (WuRXs) are low-power radios that continuously monitor the RF environment to wake up a higher-power radio upon detection of a predetermined RF signature. Prior-art WuRXs have 100s of kHz of bandwidth [1] with low signature-to-wake-up-signal latency to help synchronize communication amongst nominally asynchronous wireless devices. However, applications such as unattended ground sensors and smart home appliances wake-up infrequently in an event-driven manner, and thus WuRX bandwidth and latency are less critical; instead, the most important metrics are power consumption and sensitivity. Unfortunately, current state-of-the-art WuRXs utilizing direct envelope-detecting [2] and IF/uncertain-IF [1,3] architectures (Fig. 24.5.1) achieve only modest sensitivity at low-power (e.g., −39dBm at 104nW [2]), or achieve excellent sensitivity at higher-power (e.g., −97dBm at 99µW [3]) via active IF gain elements. Neither approach meets the needs of next-generation event-driven sensing networks.", "title": "" }, { "docid": "af004fad4aa8b4ce414c0d36250f20b5", "text": "Software developers often face steep learning curves in using a new framework, library, or new versions of frameworks for developing their piece of software. In large organizations, developers learn and explore use of frameworks, rarely realizing, several peers may have already explored the same. A tool that helps locate samples of code, demonstrating use of frameworks or libraries would provide benefits of reuse, improved code quality and faster development. This paper describes an approach for locating common samples of source code from a repository by providing extensions to an information retrieval system. The approach improves the existing approaches in two ways. First, it provides the scalability of an information retrieval system, supporting search over thousands of source code files of an organization. Second, it provides more specific search on source code by preprocessing source code files and understanding elements of the code as opposed to considering code as plain text.", "title": "" }, { "docid": "ea4da468a0e7f84266340ba5566f4bdb", "text": "We present a novel realtime algorithm to compute the trajectory of each pedestrian in a crowded scene. Our formulation is based on an adaptive scheme that uses a combination of deterministic and probabilistic trackers to achieve high accuracy and efficiency simultaneously. Furthermore, we integrate it with a multi-agent motion model and local interaction scheme to accurately compute the trajectory of each pedestrian. 
We highlight the performance and benefits of our algorithm on well-known datasets with tens of pedestrians.", "title": "" }, { "docid": "285587e0e608d8bafa0962b5cf561205", "text": "BACKGROUND\nGeneralized Additive Model (GAM) provides a flexible and effective technique for modelling nonlinear time-series in studies of the health effects of environmental factors. However, GAM assumes that errors are mutually independent, while time series can be correlated in adjacent time points. Here, a GAM with Autoregressive terms (GAMAR) is introduced to fill this gap.\n\n\nMETHODS\nParameters in GAMAR are estimated by maximum partial likelihood using modified Newton's method, and the difference between GAM and GAMAR is demonstrated using two simulation studies and a real data example. GAMM is also compared to GAMAR in simulation study 1.\n\n\nRESULTS\nIn the simulation studies, the bias of the mean estimates from GAM and GAMAR are similar but GAMAR has better coverage and smaller relative error. While the results from GAMM are similar to GAMAR, the estimation procedure of GAMM is much slower than GAMAR. In the case study, the Pearson residuals from the GAM are correlated, while those from GAMAR are quite close to white noise. In addition, the estimates of the temperature effects are different between GAM and GAMAR.\n\n\nCONCLUSIONS\nGAMAR incorporates both explanatory variables and AR terms so it can quantify the nonlinear impact of environmental factors on health outcome as well as the serial correlation between the observations. It can be a useful tool in environmental epidemiological studies.", "title": "" }, { "docid": "e7bbef4600048504c8019ff7fdb4758c", "text": "Convenient assays for superoxide dismutase have necessarily been of the indirect type. It was observed that among the different methods used for the assay of superoxide dismutase in rat liver homogenate, namely the xanthine-xanthine oxidase ferricytochromec, xanthine-xanthine oxidase nitroblue tetrazolium, and pyrogallol autoxidation methods, a modified pyrogallol autoxidation method appeared to be simple, rapid and reproducible. The xanthine-xanthine oxidase ferricytochromec method was applicable only to dialysed crude tissue homogenates. The xanthine-xanthine oxidase nitroblue tetrazolium method, either with sodium carbonate solution, pH 10.2, or potassium phosphate buffer, pH 7·8, was not applicable to rat liver homogenate even after extensive dialysis. Using the modified pyrogallol autoxidation method, data have been obtained for superoxide dismutase activity in different tissues of rat. The effect of age, including neonatal and postnatal development on the activity, as well as activity in normal and cancerous human tissues were also studied. The pyrogallol method has also been used for the assay of iron-containing superoxide dismutase inEscherichia coli and for the identification of superoxide dismutase on polyacrylamide gels after electrophoresis.", "title": "" }, { "docid": "a28917b48a9107b1d06885d7151f393b", "text": "Logistic regression is an increasingly popular statistical technique used to model the probability of discrete (i.e., binary or multinomial) outcomes. When properly applied, logistic regression analyses yield very powerful insights in to what attributes (i.e., variables) are more or less likely to predict event outcome in a population of interest. 
These models also show the extent to which changes in the values of the attributes may increase or decrease the predicted probability of event outcome.", "title": "" }, { "docid": "36867b8478a8bd6be79902efd5e9d929", "text": "Most state-of-the-art commercial storage virtualization systems focus only on one particular storage attribute, capacity. This paper describes the design, implementation and evaluation of a multi-dimensional storage virtualization system called Stonehenge, which is able to virtualize a cluster-based physical storage system along multiple dimensions, including bandwidth, capacity, and latency. As a result, Stonehenge is able to multiplex multiple virtual disks, each with a distinct bandwidth, capacity, and latency attribute, on a single physical storage system as if they are separate physical disks. A key enabling technology for Stonehenge is an efficiency-aware real-time disk scheduling algorithm called dual-queue disk scheduling, which maximizes disk utilization efficiency while providing Quality of Service (QoS) guarantees. To optimize disk utilization efficiency, Stonehenge exploits run-time measurements extensively, for admission control, computing latency-derived bandwidth requirement, and predicting disk service time.", "title": "" } ]
scidocsrr
8913aeaeb31812ab614555aa4dc52714
Sleep timing is more important than sleep length or quality for medical school performance.
[ { "docid": "5a1b5f961bf6ed78cff2df6e2ed2d212", "text": "The transition from wakefulness to sleep is marked by pronounced changes in brain activity. The brain rhythms that characterize the two main types of mammalian sleep, slow-wave sleep (SWS) and rapid eye movement (REM) sleep, are thought to be involved in the functions of sleep. In particular, recent theories suggest that the synchronous slow-oscillation of neocortical neuronal membrane potentials, the defining feature of SWS, is involved in processing information acquired during wakefulness. According to the Standard Model of memory consolidation, during wakefulness the hippocampus receives input from neocortical regions involved in the initial encoding of an experience and binds this information into a coherent memory trace that is then transferred to the neocortex during SWS where it is stored and integrated within preexisting memory traces. Evidence suggests that this process selectively involves direct connections from the hippocampus to the prefrontal cortex (PFC), a multimodal, high-order association region implicated in coordinating the storage and recall of remote memories in the neocortex. The slow-oscillation is thought to orchestrate the transfer of information from the hippocampus by temporally coupling hippocampal sharp-wave/ripples (SWRs) and thalamocortical spindles. SWRs are synchronous bursts of hippocampal activity, during which waking neuronal firing patterns are reactivated in the hippocampus and neocortex in a coordinated manner. Thalamocortical spindles are brief 7-14 Hz oscillations that may facilitate the encoding of information reactivated during SWRs. By temporally coupling the readout of information from the hippocampus with conditions conducive to encoding in the neocortex, the slow-oscillation is thought to mediate the transfer of information from the hippocampus to the neocortex. Although several lines of evidence are consistent with this function for mammalian SWS, it is unclear whether SWS serves a similar function in birds, the only taxonomic group other than mammals to exhibit SWS and REM sleep. Based on our review of research on avian sleep, neuroanatomy, and memory, although involved in some forms of memory consolidation, avian sleep does not appear to be involved in transferring hippocampal memories to other brain regions. Despite exhibiting the slow-oscillation, SWRs and spindles have not been found in birds. Moreover, although birds independently evolved a brain region--the caudolateral nidopallium (NCL)--involved in performing high-order cognitive functions similar to those performed by the PFC, direct connections between the NCL and hippocampus have not been found in birds, and evidence for the transfer of information from the hippocampus to the NCL or other extra-hippocampal regions is lacking. Although based on the absence of evidence for various traits, collectively, these findings suggest that unlike mammalian SWS, avian SWS may not be involved in transferring memories from the hippocampus. Furthermore, it suggests that the slow-oscillation, the defining feature of mammalian and avian SWS, may serve a more general function independent of that related to coordinating the transfer of information from the hippocampus to the PFC in mammals. 
Given that SWS is homeostatically regulated (a process intimately related to the slow-oscillation) in mammals and birds, functional hypotheses linked to this process may apply to both taxonomic groups.", "title": "" }, { "docid": "06e74a431b45aec75fb21066065e1353", "text": "Despite the prevalence of sleep complaints among psychiatric patients, few questionnaires have been specifically designed to measure sleep quality in clinical populations. The Pittsburgh Sleep Quality Index (PSQI) is a self-rated questionnaire which assesses sleep quality and disturbances over a 1-month time interval. Nineteen individual items generate seven \"component\" scores: subjective sleep quality, sleep latency, sleep duration, habitual sleep efficiency, sleep disturbances, use of sleeping medication, and daytime dysfunction. The sum of scores for these seven components yields one global score. Clinical and clinimetric properties of the PSQI were assessed over an 18-month period with \"good\" sleepers (healthy subjects, n = 52) and \"poor\" sleepers (depressed patients, n = 54; sleep-disorder patients, n = 62). Acceptable measures of internal homogeneity, consistency (test-retest reliability), and validity were obtained. A global PSQI score greater than 5 yielded a diagnostic sensitivity of 89.6% and specificity of 86.5% (kappa = 0.75, p less than 0.001) in distinguishing good and poor sleepers. The clinimetric and clinical properties of the PSQI suggest its utility both in psychiatric clinical practice and research activities.", "title": "" }, { "docid": "ec36f7ad0a916ab4040b0fddbf7b1172", "text": "To review the state of research on the association between sleep among school-aged children and academic outcomes, the authors reviewed published studies investigating sleep, school performance, and cognitive and achievement tests. Tables with brief descriptions of each study's research methods and outcomes are included. Research reveals a high prevalence among school-aged children of suboptimal amounts of sleep and poor sleep quality. Research demonstrates that suboptimal sleep affects how well students are able to learn and how it may adversely affect school performance. Recommendations for further research are discussed.", "title": "" } ]
[ { "docid": "a0c36cccd31a1bf0a1e7c9baa78dd3fa", "text": "Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function (“avoiding side effects” and “avoiding reward hacking”), an objective function that is too expensive to evaluate frequently (“scalable supervision”), or undesirable behavior during the learning process (“safe exploration” and “distributional shift”). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking", "title": "" }, { "docid": "9680944f9e6b4724bdba752981845b68", "text": "A software product line is a set of program variants, typically generated from a common code base. Feature models describe variability in product lines by documenting features and their valid combinations. In product-line engineering, we need to reason about variability and program variants for many different tasks. For example, given a feature model, we might want to determine the number of all valid feature combinations or compute specific feature combinations for testing. However, we found that contemporary reasoning approaches can only reason about feature combinations, not about program variants, because they do not take abstract features into account. Abstract features are features used to structure a feature model that, however, do not have any impact at implementation level. Using existing feature-model reasoning mechanisms for program variants leads to incorrect results. Hence, although abstract features represent domain decisions that do not affect the generation of a program variant. We raise awareness of the problem of abstract features for different kinds of analyses on feature models. We argue that, in order to reason about program variants, abstract features should be made explicit in feature models. We present a technique based on propositional formulas that enables to reason about program variants rather than feature combinations. In practice, our technique can save effort that is caused by considering the same program variant multiple times, for example, in product-line testing.", "title": "" }, { "docid": "62c4ad2cdd38d8ab8e08bd6636cb3e09", "text": "When modeling resonant inverters considering the harmonic balance method, the order of the obtained transfer functions is twice the state variables number. This is explained because two components are considered for each state variable. In order to obtain a simpler transfer function model of a halfbridge series resonant inverter, different techniques of model order reduction have been considered in this work. Thus, a reduced-order model has been obtained by residualization providing much simpler analytical expressions than the original model. The proposed model has been validated by simulation and experimentally. The validity range of the proposed model is extended up to a tenth of the switching frequency. 
Taking into account the great load variability of induction heating applications, the proposed reduced-order model will allow the design of advanced controllers such as gain scheduling.", "title": "" }, { "docid": "3f9a46f472ab276c39fb96b78df132ee", "text": "In this paper, we present a novel technique that enables capturing detailed 3D models from flash photographs by integrating shading and silhouette cues. Our main contribution is an optimization framework which not only captures subtle surface details but also handles changes in topology. To incorporate normals estimated from shading, we employ a mesh-based deformable model using deformation gradient. This method is capable of manipulating precise geometry and, in fact, it outperforms previous methods in terms of both accuracy and efficiency. To adapt the topology of the mesh, we convert the mesh into an implicit surface representation and then back to a mesh representation. This simple procedure removes self-intersecting regions of the mesh and solves the topology problem effectively. In addition to the algorithm, we introduce a hand-held setup to achieve multi-view photometric stereo. The key idea is to acquire flash photographs from a wide range of positions in order to obtain a sufficient lighting variation even with a standard flash unit attached to the camera. Experimental results showed that our method can capture detailed shapes of various objects and cope with topology changes well.", "title": "" }, { "docid": "998f2515ea7ceb02f867b709d4a987f9", "text": "Crop pest and disease diagnosis is among the important issues arising in the agriculture sector, since it has significant impacts on a nation's agricultural production. Applying expert system technology to crop pest and disease diagnosis has the potential to speed up and improve advisory services. However, the development of expert systems for diagnosing the pest and disease problems of a particular crop, as well as related research work, remains limited. Therefore, this study investigated the use of expert systems for managing crop pests and diseases in selected published works. This article aims to identify and explain the trends in the methodologies used by those works. As a result, a conceptual framework for managing crop pests and diseases was proposed on the basis of the selected previous works. This article is hoped to benefit the growth of research work pertaining to the development of expert systems, especially for managing crop pests and diseases in the agriculture domain.", "title": "" }, { "docid": "42f3032626b2a002a855476a718a2b1b", "text": "Learning controllers for bipedal robots is a challenging problem, often requiring expert knowledge and extensive tuning of parameters that vary in different situations. Recently, deep reinforcement learning has shown promise at automatically learning controllers for complex systems in simulation. This has been followed by a push towards learning controllers that can be transferred between simulation and hardware, primarily with the use of domain randomization. However, domain randomization can make the problem of finding stable controllers even more challenging, especially for underactuated bipedal robots. In this work, we explore whether policies learned in simulation can be transferred to hardware with the use of high-fidelity simulators and structured controllers. We learn a neural network policy which is a part of a more structured controller.
While the neural network is learned in simulation, the rest of the controller stays fixed, and can be tuned by the expert as needed. We show that using this approach can greatly speed up the rate of learning in simulation, as well as enable transfer of policies between simulation and hardware. We present our results on an ATRIAS robot and explore the effect of action spaces and cost functions on the rate of transfer between simulation and hardware. Our results show that structured policies can indeed be learned in simulation and implemented on hardware successfully. This has several advantages, as the structure preserves the intuitive nature of the policy, and the neural network improves the performance of the hand-designed policy. In this way, we propose a way of using neural networks to improve expert designed controllers, while maintaining ease of understanding.", "title": "" }, { "docid": "7faed0b112a15a3b53c94df44a1bcb26", "text": "Since the stability of the method of fundamental solutions (MFS) is a severe issue, the estimation on the bounds of condition number Cond is important to real application. In this paper, we propose the new approaches for deriving the asymptotes of Cond, and apply them for the Dirichlet problem of Laplace’s equation, to provide the sharp bound of Cond for disk domains. Then the new bound of Cond is derived for bounded simply connected domains with mixed types of boundary conditions. Numerical results are reported for Motz’s problem by adding singular functions. The values of Cond grow exponentially with respect to the number of fundamental solutions used. Note that there seems to exist no stability analysis for the MFS on non-disk (or non-elliptic) domains. Moreover, the expansion coefficients obtained by the MFS are oscillatingly large, to cause the other kind of instability: subtraction cancelation errors in the final harmonic solutions.", "title": "" }, { "docid": "4e8d7e1fdb48da4198e21ae1ef2cd406", "text": "This paper describes a procedure for the creation of large-scale video datasets for action classification and localization from unconstrained, realistic web data. The scalability of the proposed procedure is demonstrated by building a novel video benchmark, named SLAC (Sparsely Labeled ACtions), consisting of over 520K untrimmed videos and 1.75M clip annotations spanning 200 action categories. Using our proposed framework, annotating a clip takes merely 8.8 seconds on average. This represents a saving in labeling time of over 95% compared to the traditional procedure of manual trimming and localization of actions. Our approach dramatically reduces the amount of human labeling by automatically identifying hard clips, i.e., clips that contain coherent actions but lead to prediction disagreement between action classifiers. A human annotator can disambiguate whether such a clip truly contains the hypothesized action in a handful of seconds, thus generating labels for highly informative samples at little cost. We show that our large-scale dataset can be used to effectively pretrain action recognition models, significantly improving final metrics on smaller-scale benchmarks after fine-tuning. On Kinetics [14], UCF-101 [30] and HMDB-51 [15], models pre-trained on SLAC outperform baselines trained from scratch, by 2.0%, 20.1% and 35.4% in top-1 accuracy, respectively when RGB input is used. Furthermore, we introduce a simple procedure that leverages the sparse labels in SLAC to pre-train action localization models. 
On THUMOS14 [12] and ActivityNet-v1.3[2], our localization model improves the mAP of baseline model by 8.6% and 2.5%, respectively.", "title": "" }, { "docid": "c5113ff741d9e656689786db10484a07", "text": "Pulmonary administration of drugs presents several advantages in the treatment of many diseases. Considering local and systemic delivery, drug inhalation enables a rapid and predictable onset of action and induces fewer side effects than other routes of administration. Three main inhalation systems have been developed for the aerosolization of drugs; namely, nebulizers, pressurized metered-dose inhalers (MDIs) and dry powder inhalers (DPIs). The latter are currently the most convenient alternative as they are breath-actuated and do not require the use of any propellants. The deposition site in the respiratory tract and the efficiency of inhaled aerosols are critically influenced by the aerodynamic diameter, size distribution, shape and density of particles. In the case of DPIs, since micronized particles are generally very cohesive and exhibit poor flow properties, drug particles are usually blended with coarse and fine carrier particles. This increases particle aerodynamic behavior and flow properties of the drugs and ensures accurate dosage of active ingredients. At present, particles with controlled properties are obtained by milling, spray drying or supercritical fluid techniques. Several excipients such as sugars, lipids, amino acids, surfactants, polymers and absorption enhancers have been tested for their efficacy in improving drug pulmonary administration. The purpose of this article is to describe various observations that have been made in the field of inhalation product development, especially for the dry powder inhalation formulation, and to review the use of various additives, their effectiveness and their potential toxicity for pulmonary administration.", "title": "" }, { "docid": "0ee09adae30459337f8e7261165df121", "text": "Mobile malware threats (e.g., on Android) have recently become a real concern. In this paper, we evaluate the state-of-the-art commercial mobile anti-malware products for Android and test how resistant they are against various common obfuscation techniques (even with known malware). Such an evaluation is important for not only measuring the available defense against mobile malware threats, but also proposing effective, next-generation solutions. We developed DroidChameleon, a systematic framework with various transformation techniques, and used it for our study. Our results on 10 popular commercial anti-malware applications for Android are worrisome: none of these tools is resistant against common malware transformation techniques. In addition, a majority of them can be trivially defeated by applying slight transformation over known malware with little effort for malware authors. Finally, in light of our results, we propose possible remedies for improving the current state of malware detection on mobile devices.", "title": "" }, { "docid": "9b94a383b2a6e778513a925cc88802ad", "text": "Pedestrian behavior modeling and analysis is important for crowd scene understanding and has various applications in video surveillance. Stationary crowd groups are a key factor influencing pedestrian walking patterns but was largely ignored in literature. In this paper, a novel model is proposed for pedestrian behavior modeling by including stationary crowd groups as a key component. 
Through inference on the interactions between stationary crowd groups and pedestrians, our model can be used to investigate pedestrian behaviors. The effectiveness of the proposed model is demonstrated through multiple applications, including walking path prediction, destination prediction, personality classification, and abnormal event detection. To evaluate our model, a large pedestrian walking route dataset1 is built. The walking routes of 12, 684 pedestrians from a one-hour crowd surveillance video are manually annotated. It will be released to the public and benefit future research on pedestrian behavior analysis and crowd scene understanding.", "title": "" }, { "docid": "4f8a233a8de165f2aeafbad9c93a767a", "text": "Can images be decomposed into the sum of a geometric part and a textural part? In a theoretical breakthrough, [Y. Meyer, Oscillating Patterns in Image Processing and Nonlinear Evolution Equations. Providence, RI: American Mathematical Society, 2001] proposed variational models that force the geometric part into the space of functions with bounded variation, and the textural part into a space of oscillatory distributions. Meyer's models are simple minimization problems extending the famous total variation model. However, their numerical solution has proved challenging. It is the object of a literature rich in variants and numerical attempts. This paper starts with the linear model, which reduces to a low-pass/high-pass filter pair. A simple conversion of the linear filter pair into a nonlinear filter pair involving the total variation is introduced. This new-proposed nonlinear filter pair retains both the essential features of Meyer's models and the simplicity and rapidity of the linear model. It depends upon only one transparent parameter: the texture scale, measured in pixel mesh. Comparative experiments show a better and faster separation of cartoon from texture. One application is illustrated: edge detection.", "title": "" }, { "docid": "35dacb4b15e5c8fbd91cee6da807799a", "text": "Stochastic gradient algorithms have been the main focus of large-scale learning problems and led to important successes in machine learning. The convergence of SGD depends on the careful choice of learning rate and the amount of the noise in stochastic estimates of the gradients. In this paper, we propose a new adaptive learning rate algorithm, which utilizes curvature information for automatically tuning the learning rates. The information about the element-wise curvature of the loss function is estimated from the local statistics of the stochastic first order gradients. We further propose a new variance reduction technique to speed up the convergence. In our experiments with deep neural networks, we obtained better performance compared to the popular stochastic gradient algorithms.", "title": "" }, { "docid": "5b1c38fccbd591e6ab00a66ef636eb5d", "text": "There is a great thrust in industry toward the development of more feasible and viable tools for storing fast-growing volume, velocity, and diversity of data, termed ‘big data’. The structural shift of the storage mechanism from traditional data management systems to NoSQL technology is due to the intention of fulfilling big data storage requirements. However, the available big data storage technologies are inefficient to provide consistent, scalable, and available solutions for continuously growing heterogeneous data. 
Storage is the preliminary process of big data analytics for real-world applications such as scientific experiments, healthcare, social networks, and e-business. So far, Amazon, Google, and Apache are some of the industry standards in providing big data storage solutions, yet the literature does not report an in-depth survey of storage technologies available for big data, investigating the performance and magnitude gains of these technologies. The primary objective of this paper is to conduct a comprehensive investigation of state-of-the-art storage technologies available for big data. A well-defined taxonomy of big data storage technologies is presented to assist data analysts and researchers in understanding and selecting a storage mechanism that better fits their needs. To evaluate the performance of different storage architectures, we compare and analyze the existing approaches using Brewer’s CAP theorem. The significance and applications of storage technologies and support to other categories are discussed. Several future research challenges are highlighted with the intention to expedite the deployment of a reliable and scalable storage system.", "title": "" }, { "docid": "68f0bdda44beba9203a785b8be1035bb", "text": "Nasal mucociliary clearance is one of the most important factors affecting nasal delivery of drugs and vaccines. This is also the most important physiological defense mechanism inside the nasal cavity. It removes inhaled (and delivered) particles, microbes and substances trapped in the mucus. Almost all inhaled particles are trapped in the mucus carpet and transported with a rate of 8-10 mm/h toward the pharynx. This transport is conducted by the ciliated cells, which contain about 100-250 motile cellular appendages called cilia, 0.3 µm wide and 5 µm in length that beat about 1000 times every minute or 12-15 Hz. For efficient mucociliary clearance, the interaction between the cilia and the nasal mucus needs to be well structured, where the mucus layer is a tri-layer: an upper gel layer that floats on the lower, more aqueous solution, called the periciliary liquid layer and a third layer of surfactants between these two main layers. Pharmacokinetic calculations of the mucociliary clearance show that this mechanism may account for a substantial difference in bioavailability following nasal delivery. If the formulation irritates the nasal mucosa, this mechanism will cause the irritant to be rapidly diluted, followed by increased clearance, and swallowed. The result is a much shorter duration inside the nasal cavity and therefore less nasal bioavailability.", "title": "" }, { "docid": "b2ad81e0c7e352dac4caea559ac675bb", "text": "A linearly polarized miniaturized printed dipole antenna with novel half bowtie radiating arm is presented for wireless applications including the 2.4 GHz ISM band. This design is approximately 0.363 λ in length at central frequency of 2.97 GHz. An integrated balun with inductive transitions is employed for wideband impedance matching without changing the geometry of radiating arms. This half bowtie dipole antenna displays 47% bandwidth, and a simulated efficiency of over 90% with miniature size. The radiation patterns are largely omnidirectional and display a useful level of measured gain across the impedance bandwidth. 
The size and performance of the miniaturized half bowtie dipole antenna is compared with similar reduced size antennas with respect to their overall footprint, substrate dielectric constant, frequency of operation and impedance bandwidth. This half bowtie design in this communication outperforms the reference antennas in virtually all categories.", "title": "" }, { "docid": "86a3a5f09181567c5b66d926b0f9d240", "text": "Indigenous \"First Nations\" communities have consistently associated their disproportionate rates of psychiatric distress with historical experiences of European colonization. This emphasis on the socio-psychological legacy of colonization within tribal communities has occasioned increasingly widespread consideration of what has been termed historical trauma within First Nations contexts. In contrast to personal experiences of a traumatic nature, the concept of historical trauma calls attention to the complex, collective, cumulative, and intergenerational psychosocial impacts that resulted from the depredations of past colonial subjugation. One oft-cited exemplar of this subjugation--particularly in Canada--is the Indian residential school. Such schools were overtly designed to \"kill the Indian and save the man.\" This was institutionally achieved by sequestering First Nations children from family and community while forbidding participation in Native cultural practices in order to assimilate them into the lower strata of mainstream society. The case of a residential school \"survivor\" from an indigenous community treatment program on a Manitoba First Nations reserve is presented to illustrate the significance of participation in traditional cultural practices for therapeutic recovery from historical trauma. An indigenous rationale for the postulated efficacy of \"culture as treatment\" is explored with attention to plausible therapeutic mechanisms that might account for such recovery. To the degree that a return to indigenous tradition might benefit distressed First Nations clients, redressing the socio-psychological ravages of colonization in this manner seems a promising approach worthy of further research investigation.", "title": "" }, { "docid": "ef925e9d448cf4ca9a889b5634b685cf", "text": "This paper proposes an ameliorated wheel-based cable inspection robot, which is able to climb up a vertical cylindrical cable on the cable-stayed bridge. The newly-designed robot in this paper is composed of two equally spaced modules, which are joined by connecting bars to form a closed hexagonal body to clasp on the cable. Another amelioration is the newly-designed electric circuit, which is employed to limit the descending speed of the robot during its sliding down along the cable. For the safe landing in case of electricity broken-down, a gas damper with a slider-crank mechanism is introduced to exhaust the energy generated by the gravity when the robot is slipping down. For the present design, with payloads below 3.5 kg, the robot can climb up a cable with diameters varying from 65 mm to 205 mm. The landing system is tested experimentally and a simplified mathematical model is analyzed. Several climbing experiments performed on real cables show the capability of the proposed robot.", "title": "" }, { "docid": "c3e4ef9e9fd5b6301cb0a07ced5c02fc", "text": "The classification problem of assigning several observations into different disjoint groups plays an important role in business decision making and many other areas. 
Developing more accurate and widely applicable classification models has significant implications in these areas. This is the reason that, despite the numerous classification models available, the research for improving the effectiveness of these models has never stopped. Combining several models or using hybrid models has become a common practice in order to overcome the deficiencies of single models and can be an effective way of improving upon their predictive performance, especially when the models in combination are quite different. In this paper, a novel hybridization of artificial neural networks (ANNs) is proposed using multiple linear regression models in order to yield a more general and more accurate model than traditional artificial neural networks for solving classification problems. Empirical results indicate that the proposed hybrid model exhibits effectively improved classification accuracy in comparison with traditional artificial neural networks and also some other classification models such as linear discriminant analysis (LDA), quadratic discriminant analysis (QDA), K-nearest neighbor (KNN), and support vector machines (SVMs), using benchmark and real-world application data sets. These data sets vary in the number of classes (two versus multiple) and the source of the data (synthetic versus real-world). Therefore, it can be applied as an appropriate alternative approach for solving classification problems, specifically when higher forecasting", "title": "" } ]
scidocsrr
80d7567d1d8943c76e6a979ffd1cfa0c
Real fuzzy PID control of the UAV AR.Drone 2.0 for hovering under disturbances in known environments
[ { "docid": "7e884438ee8459a441cbe1500f1bac88", "text": "We consider the problem of autonomously flying Miniature Aerial Vehicles (MAVs) in indoor environments such as home and office buildings. The primary long range sensor in these MAVs is a miniature camera. While previous approaches first try to build a 3D model in order to do planning and control, our method neither attempts to build nor requires a 3D model. Instead, our method first classifies the type of indoor environment the MAV is in, and then uses vision algorithms based on perspective cues to estimate the desired direction to fly. We test our method on two MAV platforms: a co-axial miniature helicopter and a toy quadrotor. Our experiments show that our vision algorithms are quite reliable, and they enable our MAVs to fly in a variety of corridors and staircases.", "title": "" }, { "docid": "c12d534d219e3d249ba3da1c0956c540", "text": "Within the research on Micro Aerial Vehicles (MAVs), the field on flight control and autonomous mission execution is one of the most active. A crucial point is the localization of the vehicle, which is especially difficult in unknown, GPS-denied environments. This paper presents a novel vision based approach, where the vehicle is localized using a downward looking monocular camera. A state-of-the-art visual SLAM algorithm tracks the pose of the camera, while, simultaneously, building an incremental map of the surrounding region. Based on this pose estimation a LQG/LTR based controller stabilizes the vehicle at a desired setpoint, making simple maneuvers possible like take-off, hovering, setpoint following or landing. Experimental data show that this approach efficiently controls a helicopter while navigating through an unknown and unstructured environment. To the best of our knowledge, this is the first work describing a micro aerial vehicle able to navigate through an unexplored environment (independently of any external aid like GPS or artificial beacons), which uses a single camera as only exteroceptive sensor.", "title": "" } ]
[ { "docid": "c78ef06693d0b8ae37989b5574938c90", "text": "Relational databases have been around for many decades and are the database technology of choice for most traditional data-intensive storage and retrieval applications. Retrievals are usually accomplished using SQL, a declarative query language. Relational database systems are generally efficient unless the data contains many relationships requiring joins of large tables. Recently there has been much interest in data stores that do not use SQL exclusively, the so-called NoSQL movement. Examples are Google's BigTable and Facebook's Cassandra. This paper reports on a comparison of one such NoSQL graph database called Neo4j with a common relational database system, MySQL, for use as the underlying technology in the development of a software system to record and query data provenance information.", "title": "" }, { "docid": "b2b4e5162b3d7d99a482f9b82820d59e", "text": "Modern Internet-enabled smart lights promise energy efficiency and many additional capabilities over traditional lamps. However, these connected lights create a new attack surface, which can be maliciously used to violate users’ privacy and security. In this paper, we design and evaluate novel attacks that take advantage of light emitted by modern smart bulbs in order to infer users’ private data and preferences. The first two attacks are designed to infer users’ audio and video playback by a systematic observation and analysis of the multimediavisualization functionality of smart light bulbs. The third attack utilizes the infrared capabilities of such smart light bulbs to create a covert-channel, which can be used as a gateway to exfiltrate user’s private data out of their secured home or office network. A comprehensive evaluation of these attacks in various real-life settings confirms their feasibility and affirms the need for new privacy protection mechanisms.", "title": "" }, { "docid": "bb98b9a825a4c7d0f3d4b06fafb8ff37", "text": "The tremendous evolution of programmable graphics hardware has made high-quality real-time volume graphics a reality. In addition to the traditional application of rendering volume data in scientific visualization, the interest in applying these techniques for real-time rendering of atmospheric phenomena and participating media such as fire, smoke, and clouds is growing rapidly. This course covers both applications in scientific visualization, e.g., medical volume data, and real-time rendering, such as advanced effects and illumination in computer games, in detail. Course participants will learn techniques for harnessing the power of consumer graphics hardware and high-level shading languages for real-time rendering of volumetric data and effects. Beginning with basic texture-based approaches including hardware ray casting, the algorithms are improved and expanded incrementally, covering local and global illumination, scattering, pre-integration, implicit surfaces and non-polygonal isosurfaces, transfer function design, volume animation and deformation, dealing with large volumes, high-quality volume clipping, rendering segmented volumes, higher-order filtering, and non-photorealistic volume rendering. 
Course participants are provided with documented source code covering details usually omitted in publications.", "title": "" }, { "docid": "c71ada1231703f2ecb2c2872ef7d5632", "text": "We present a spatial multiplex optical transmission system named the “Smart Light” (See Figure 1), which provides multiple data streams to multiple points simultaneously. This system consists of a projector and some devices along with a photo-detector. The projector projects images with invisible information to the devices, and devices receive some data. In this system, the data stream is expandable to a positionbased audio or video stream by using DMDs (Digital Micro-mirror Device) or LEDs (Light Emitting Diode) with unperceivable space-time modulation. First, in a preliminary experiment, we confirmed with a commercially produced XGA grade projector transmitting a million points that the data rate of its path is a few bits per second. Detached devices can receive relative position data and other properties from the projector. Second, we made an LED type high-speed projector to transmit audio streams using modulated light on an object and confirmed the transmission of positionbased audio stream data.", "title": "" }, { "docid": "b9c0ccebb8f7339830daccb235338d4a", "text": "ÐA problem gaining interest in pattern recognition applied to data mining is that of selecting a small representative subset from a very large data set. In this article, a nonparametric data reduction scheme is suggested. It attempts to represent the density underlying the data. The algorithm selects representative points in a multiscale fashion which is novel from existing density-based approaches. The accuracy of representation by the condensed set is measured in terms of the error in density estimates of the original and reduced sets. Experimental studies on several real life data sets show that the multiscale approach is superior to several related condensation methods both in terms of condensation ratio and estimation error. The condensed set obtained was also experimentally shown to be effective for some important data mining tasks like classification, clustering, and rule generation on large data sets. Moreover, it is empirically found that the algorithm is efficient in terms of sample complexity. Index TermsÐData mining, multiscale condensation, scalability, density estimation, convergence in probability, instance learning.", "title": "" }, { "docid": "888e8f68486c08ffe538c46ba76de85c", "text": "Neural ranking models for information retrieval (IR) use shallow or deep neural networks to rank search results in response to a query. Traditional learning to rank models employ machine learning techniques over hand-crafted IR features. By contrast, neural models learn representations of language from raw text that can bridge the gap between query and document vocabulary. Unlike classical IR models, these new machine learning based approaches are data-hungry, requiring large scale training data before they can be deployed. This tutorial introduces basic concepts and intuitions behind neural IR models, and places them in the context of traditional retrieval models. We begin by introducing fundamental concepts of IR and different neural and non-neural approaches to learning vector representations of text. We then review shallow neural IR methods that employ pre-trained neural term embeddings without learning the IR task end-to-end. We introduce deep neural networks next, discussing popular deep architectures. 
Finally, we review the current DNN models for information retrieval. We conclude with a discussion on potential future directions for neural IR.", "title": "" }, { "docid": "b2d334cc7d79d2e3ebd573bbeaa2dfbe", "text": "Objectives\nTo measure the occurrence and levels of depression, anxiety and stress in undergraduate dental students using the Depression, Anxiety and Stress Scale (DASS-21).\n\n\nMethods\nThis cross-sectional study was conducted in November and December of 2014. A total of 289 dental students were invited to participate, and 277 responded, resulting in a response rate of 96%. The final sample included 247 participants. Eligible participants were surveyed via a self-reported questionnaire that included the validated DASS-21 scale as the assessment tool and questions about demographic characteristics and methods for managing stress.\n\n\nResults\nAbnormal levels of depression, anxiety and stress were identified in 55.9%, 66.8% and 54.7% of the study participants, respectively. A multiple linear regression analysis revealed multiple predictors: gender (for anxiety b=-3.589, p=.016 and stress b=-4.099, p=.008), satisfaction with faculty relationships (for depression b=-2.318, p=.007; anxiety b=-2.213, p=.004; and stress b=-2.854, p<.001), satisfaction with peer relationships (for depression b=-3.527, p<.001; anxiety b=-2.213, p=.004; and stress b=-2.854, p<.001), and dentistry as the first choice for field of study (for stress b=-2.648, p=.045). The standardized coefficients demonstrated the relationship and strength of the predictors for each subscale. To cope with stress, students engaged in various activities such as reading, watching television and seeking emotional support from others.\n\n\nConclusions\nThe high occurrence of depression, anxiety and stress among dental students highlights the importance of providing support programs and implementing preventive measures to help students, particularly those who are most susceptible to higher levels of these psychological conditions.", "title": "" }, { "docid": "8cd52cdc44c18214c471716745e3c00f", "text": "The design of electric vehicles require a complete paradigm shift in terms of embedded systems architectures and software design techniques that are followed within the conventional automotive systems domain. It is increasingly being realized that the evolutionary approach of replacing the engine of a car by an electric engine will not be able to address issues like acceptable vehicle range, battery lifetime performance, battery management techniques, costs and weight, which are the core issues for the success of electric vehicles. While battery technology has crucial importance in the domain of electric vehicles, how these batteries are used and managed pose new problems in the area of embedded systems architecture and software for electric vehicles. At the same time, the communication and computation design challenges in electric vehicles also have to be addressed appropriately. This paper discusses some of these research challenges.", "title": "" }, { "docid": "9df5329fcf5e5dd6394f76040d8d8402", "text": "Federated learning poses new statistical and systems challenges in training machine learning models over distributed networks of devices. In this work, we show that multi-task learning is naturally suited to handle the statistical challenges of this setting, and propose a novel systems-aware optimization method, MOCHA, that is robust to practical systems issues. 
Our method and theory for the first time consider issues of high communication cost, stragglers, and fault tolerance for distributed multi-task learning. The resulting method achieves significant speedups compared to alternatives in the federated setting, as we demonstrate through simulations on real-world federated datasets.", "title": "" }, { "docid": "962ab9e871dc06c3cd290787dc7e71aa", "text": "The conventional digital hardware computational blocks with different structures are designed to compute the precise results of the assigned calculations. The main contribution of our proposed Bio-inspired Imprecise Computational blocks (BICs) is that they are designed to provide an applicable estimation of the result instead of its precise value at a lower cost. These novel structures are more efficient in terms of area, speed, and power consumption with respect to their precise rivals. Complete descriptions of sample BIC adder and multiplier structures as well as their error behaviors and synthesis results are introduced in this paper. It is then shown that these BIC structures can be exploited to efficiently implement a three-layer face recognition neural network and the hardware defuzzification block of a fuzzy processor.", "title": "" }, { "docid": "7208a2b257c7ba7122fd2e278dd1bf4a", "text": "Abstract—This paper shows in detail the mathematical model of direct and inverse kinematics for a robot manipulator (welding type) with four degrees of freedom. Using the D-H parameters, screw theory, numerical, geometric and interpolation methods, the theoretical and practical values of the position of robot were determined using an optimized algorithm for inverse kinematics obtaining the values of the particular joints in order to determine the virtual paths in a relatively short time.", "title": "" }, { "docid": "02fd763f6e15b07187e3cbe0fd3d0e18", "text": "The Batcher`s bitonic sorting algorithm is a parallel sorting algorithm, which is used for sorting the numbers in modern parallel machines. There are various parallel sorting algorithms such as radix sort, bitonic sort, etc. It is one of the efficient parallel sorting algorithm because of load balancing property. It is widely used in various scientific and engineering applications. However, Various researches have worked on a bitonic sorting algorithm in order to improve up the performance of original batcher`s bitonic sorting algorithm. In this paper, tried to review the contribution made by these researchers.", "title": "" }, { "docid": "1203f22bfdfc9ecd211dbd79a2043a6a", "text": "After a short introduction to classic cryptography we explain thoroughly how quantum cryptography works. We present then an elegant experimental realization based on a self-balanced interferometer with Faraday mirrors. This phase-coding setup needs no alignment of the interferometer nor polarization control, and therefore considerably facilitates the experiment. Moreover it features excellent fringe visibility. Next, we estimate the practical limits of quantum cryptography. The importance of the detector noise is illustrated and means of reducing it are presented. With present-day technologies maximum distances of about 70 kmwith bit rates of 100 Hzare achievable. PACS: 03.67.Dd; 85.60; 42.25; 33.55.A Cryptography is the art of hiding information in a string of bits meaningless to any unauthorized party. To achieve this goal, one uses encryption: a message is combined according to an algorithm with some additional secret information – the key – to produce a cryptogram. 
In the traditional terminology, Alice is the party encrypting and transmitting the message, Bob the one receiving it, and Eve the malevolent eavesdropper. For a crypto-system to be considered secure, it should be impossible to unlock the cryptogram without Bob’s key. In practice, this demand is often softened, and one requires only that the system is sufficiently difficult to crack. The idea is that the message should remain protected as long as the information it contains is valuable. There are two main classes of crypto-systems, the publickey and the secret-key crypto-systems: Public key systems are based on so-called one-way functions: given a certainx, it is easy to computef(x), but difficult to do the inverse, i.e. compute x from f(x). “Difficult” means that the task shall take a time that grows exponentially with the number of bits of the input. The RSA (Rivest, Shamir, Adleman) crypto-system for example is based on the factorizing of large integers. Anyone can compute 137 ×53 in a few seconds, but it may take a while to find the prime factors of 28 907. To transmit a message Bob chooses a private key (based on two large prime numbers) and computes from it a public key (based on the product of these numbers) which he discloses publicly. Now Alice can encrypt her message using this public key and transmit it to Bob, who decrypts it with the private key. Public key systems are very convenient and became very popular over the last 20 years, however, they suffer from two potential major flaws. To date, nobody knows for sure whether or not factorizing is indeed difficult. For known algorithms, the time for calculation increases exponentially with the number of input bits, and one can easily improve the safety of RSA by choosing a longer key. However, a fast algorithm for factorization would immediately annihilate the security of the RSA system. Although it has not been published yet, there is no guarantee that such an algorithm does not exist. Second, problems that are difficult for a classical computer could become easy for a quantum computer. With the recent developments in the theory of quantum computation, there are reasons to fear that building these machines will eventually become possible. If one of these two possibilities came true, RSA would become obsolete. One would then have no choice, but to turn to secret-key cryptosystems. Very convenient and broadly used are crypto-systems based on a public algorithm and a relatively short secret key. The DES (Data Encryption Standard, 1977) for example uses a 56-bit key and the same algorithm for coding and decoding. The secrecy of the cryptogram, however, depends again on the calculating power and the time of the eavesdropper. The only crypto-system providing proven, perfect secrecy is the “one-time pad” proposed by Vernam in 1935. With this scheme, a message is encrypted using a random key of equal length, by simply “adding” each bit of the message to the orresponding bit of the key. The scrambled text can then be sent to Bob, who decrypts the message by “subtracting” the same key. The bits of the ciphertext are as random as those of the key and consequently do not contain any information. Although perfectly secure, the problem with this system is that it is essential for Alice and Bob to share a common secret key, at least as long as the message they want to exchange, and use it only for a single encryption. 
This key must be transmitted by some trusted means or personal meeting, which turns out to be complex and expensive.", "title": "" }, { "docid": "4a6c7b68ea23f910f0edc35f4542e5cb", "text": "Microgrids have been proposed in order to handle the impacts of Distributed Generators (DGs) and make conventional grids suitable for large scale deployments of distributed generation. However, the introduction of microgrids brings some challenges. Protection of a microgrid and its entities is one of them. Due to the existence of generators at all levels of the distribution system and two distinct operating modes, i.e. Grid Connected and Islanded modes, the fault currents in a system vary substantially. Consequently, the traditional fixed current relay protection schemes need to be improved. This paper presents a conceptual design of a microgrid protection system which utilizes extensive communication to monitor the microgrid and update relay fault currents according to the variations in the system. The proposed system is designed so that it can respond to dynamic changes in the system such as connection/disconnection of DGs.", "title": "" }, { "docid": "9afdd51ba034e9580c52f0aba50dfa4b", "text": "Advances in field programmable gate arrays (FPGAs), which are the platform of choice for reconfigurable computing, have made it possible to use FPGAs in increasingly ma ny areas of computing, including complex scientific applicati ons. These applications demand high performance and high-preci s on, floating-point arithmetic. Until now, most of the research has not focussed on compliance with IEEE standard 754, focusing ins tead upon custom formats and bitwidths. In this paper, we present double-precision floating-point cores that are parameteri zed by their degree of pipelining and the features of IEEE standard754 that they implement. We then analyze the effects of supporti ng the standard when these cores are used in an FPGA-based accelerator for Lennard-Jones force and potential calculations that are part of molecular dynamics (MD) simulations.", "title": "" }, { "docid": "2431ee8fb0dcfd84c61e60ee41a95edb", "text": "Web applications have become a very popular means of developing software. This is because of many advantages of web applications like no need of installation on each client machine, centralized data, reduction in business cost etc. With the increase in this trend web applications are becoming vulnerable for attacks. Cross site scripting (XSS) is the major threat for web application as it is the most basic attack on web application. It provides the surface for other types of attacks like Cross Site Request Forgery, Session Hijacking etc. There are three types of XSS attacks i.e. non-persistent (or reflected) XSS, persistent (or stored) XSS and DOM-based vulnerabilities. There is one more type that is not as common as those three types, induced XSS. In this work we aim to study and consolidate the understanding of XSS and their origin, manifestation, kinds of dangers and mitigation efforts for XSS. Different approaches proposed by researchers are presented here and an analysis of these approaches is performed. Finally the conclusion is drawn at the end of the work.", "title": "" }, { "docid": "cc6895789b42f7ae779c2236cde4636a", "text": "Modern day social media search and recommender systems require complex query formulation that incorporates both user context and their explicit search queries. Users expect these systems to be fast and provide relevant results to their query and context. 
With millions of documents to choose from, these systems utilize a multi-pass scoring function to narrow the results and provide the most relevant ones to users. Candidate selection is required to sift through all the documents in the index and select a relevant few to be ranked by subsequent scoring functions. It becomes crucial to narrow down the document set while maintaining relevant ones in resulting set. In this tutorial we survey various candidate selection techniques and deep dive into case studies on a large scale social media platform. In the later half we provide hands-on tutorial where we explore building these candidate selection models on a real world dataset and see how to balance the tradeoff between relevance and latency.", "title": "" }, { "docid": "18b0f6712396476dc4171128ff08a355", "text": "Heterogeneous multicore architectures have the potential for high performance and energy efficiency. These architectures may be composed of small power-efficient cores, large high-performance cores, and/or specialized cores that accelerate the performance of a particular class of computation. Architects have explored multiple dimensions of heterogeneity, both in terms of micro-architecture and specialization. While early work constrained the cores to share a single ISA, this work shows that allowing heterogeneous ISAs further extends the effectiveness of such architectures\n This work exploits the diversity offered by three modern ISAs: Thumb, x86-64, and Alpha. This architecture has the potential to outperform the best single-ISA heterogeneous architecture by as much as 21%, with 23% energy savings and a reduction of 32% in Energy Delay Product.", "title": "" }, { "docid": "033b05d21f5b8fb5ce05db33f1cedcde", "text": "Seasonal occurrence of the common cutworm Spodoptera litura (Fab.) (Lepidoptera: Noctuidae) moths captured in synthetic sex pheromone traps and associated field population of eggs and larvae in soybean were examined in India from 2009 to 2011. Male moths of S. litura first appeared in late July or early August and continued through October. Peak male trap catches occurred during the second fortnight of September, which was within soybean reproductive stages. Similarly, the first appearance of S. litura egg masses and larval populations were observed after the first appearance of male moths in early to mid-August, and were present in the growing season up to late September to mid-October. The peak appearance of egg masses and larval populations always corresponded with the peak activity of male moths recorded during mid-September in all years. Correlation studies showed that weekly mean trap catches were linearly and positively correlated with egg masses and larval populations during the entire growing season of soybean. Seasonal means of male moth catches in pheromone traps during the 2010 and 2011 seasons were significantly lower than the catches during the 2009 season. However, seasonal means of the egg masses and larval populations were not significantly different between years. Pheromone traps may be useful indicators of the onset of numbers of S. litura eggs and larvae in soybean fields.", "title": "" }, { "docid": "20c6da8e705ba063d139d4adba7bcde2", "text": "Copyright © 2010 American Heart Association. All rights reserved. Print ISSN: 0009-7322. Online 72514 Circulation is published by the American Heart Association. 
7272 Greenville Avenue, Dallas, TX. Acute Heart Failure Syndromes: Emergency Department Presentation, Treatment, and Disposition: Current Approaches and Future Aims. A Scientific Statement From the American Heart Association. Neal L. Weintraub, Sean P. Collins, Peter S. Pang, Phillip D. Levy, Allen S. Anderson, Cynthia Arslanian-Engoren, W. Brian Gibler, James K. McCord, Mark B. Parshall, Gary S. Francis, Mihai Gheorghiade and on behalf of the American Heart Association Council on Clinical Cardiology and Council on Cardiopulmonary, Critical Care, Perioperative and Resuscitation. Circulation published online Oct 11, 2010; DOI: 10.1161/CIR.0b013e3181f9a223. The online version of this article, along with updated information and services, is located on the World Wide Web at: http://circ.ahajournals.org", "title": "" } ]
scidocsrr
89ec42167ac8e1243fca82dc5a7df1ae
RGBD-camera based get-up event detection for hospital fall prevention
[ { "docid": "b9a893fb526955b5131860a1402e2f7c", "text": "A common trend in object recognition is to detect and leverage the use of sparse, informative feature points. The use of such features makes the problem more manageable while providing increased robustness to noise and pose variation. In this work we develop an extension of these ideas to the spatio-temporal case. For this purpose, we show that the direct 3D counterparts to commonly used 2D interest point detectors are inadequate, and we propose an alternative. Anchoring off of these interest points, we devise a recognition algorithm based on spatio-temporally windowed data. We present recognition results on a variety of datasets including both human and rodent behavior.", "title": "" } ]
[ { "docid": "d90954eaae0c9d84e261c6d0794bbf76", "text": "The index case of the Ebola virus disease epidemic in West Africa is believed to have originated in Guinea. By June 2014, Guinea, Liberia, and Sierra Leone were in the midst of a full-blown and complex global health emergency. The devastating effects of this Ebola epidemic in West Africa put the global health response in acute focus for urgent international interventions. Accordingly, in October 2014, a World Health Organization high-level meeting endorsed the concept of a phase 2/3 clinical trial in Liberia to study Ebola vaccines. As a follow-up to the global response, in November 2014, the Government of Liberia and the US Government signed an agreement to form a research partnership to investigate Ebola and to assess intervention strategies for treating, controlling, and preventing the disease in Liberia. This agreement led to the establishment of the Joint Liberia-US Partnership for Research on Ebola Virus in Liberia as the beginning of a long-term collaborative partnership in clinical research between the two countries. In this article, we discuss the methodology and related challenges associated with the implementation of the Ebola vaccines clinical trial, based on a double-blinded randomized controlled trial, in Liberia.", "title": "" }, { "docid": "3f8ed9f5b015f50989ebde22329e6e7c", "text": "In this paper we present a survey of results concerning algorithms, complexity, and applications of the maximum clique problem. We discuss enumerative and exact algorithms, heuristics, and a variety of other proposed methods. An up to date bibliography on the maximum clique and related problems is also provided.", "title": "" }, { "docid": "af598c452d9a6589e45abe702c7cab58", "text": "This paper proposes the concept of “liveaction virtual reality games” as a new genre of digital games based on an innovative combination of live-action, mixed-reality, context-awareness, and interaction paradigms that comprise tangible objects, context-aware input devices, and embedded/embodied interactions. Live-action virtual reality games are “live-action games” because a player physically acts out (using his/her real body and senses) his/her “avatar” (his/her virtual representation) in the game stage – the mixed-reality environment where the game happens. The game stage is a kind of “augmented virtuality” – a mixedreality where the virtual world is augmented with real-world information. In live-action virtual reality games, players wear HMD devices and see a virtual world that is constructed using the physical world architecture as the basic geometry and context information. Physical objects that reside in the physical world are also mapped to virtual elements. Liveaction virtual reality games keeps the virtual and real-worlds superimposed, requiring players to physically move in the environment and to use different interaction paradigms (such as tangible and embodied interaction) to complete game activities. This setup enables the players to touch physical architectural elements (such as walls) and other objects, “feeling” the game stage. Players have free movement and may interact with physical objects placed in the game stage, implicitly and explicitly. 
Live-action virtual reality games differ from similar game concepts because they sense and use contextual information to create unpredictable game experiences, giving rise to emergent gameplay.", "title": "" }, { "docid": "c1bfef951e9775f6ffc949c5110e1bd1", "text": "In the interest of more systematically documenting the early signs of autism, and of testing specific hypotheses regarding their underlying neurodevelopmental substrates, we have initiated a longitudinal study of high-risk infants, all of whom have an older sibling diagnosed with an autistic spectrum disorder. Our sample currently includes 150 infant siblings, including 65 who have been followed to age 24 months, who are the focus of this paper. We have also followed a comparison group of low-risk infants. Our measures include a novel observational scale (the first, to our knowledge, that is designed to assess autism-specific behavior in infants), a computerized visual orienting task, and standardized measures of temperament, cognitive and language development. Our preliminary results indicate that by 12 months of age, siblings who are later diagnosed with autism may be distinguished from other siblings and low-risk controls on the basis of: (1) several specific behavioral markers, including atypicalities in eye contact, visual tracking, disengagement of visual attention, orienting to name, imitation, social smiling, reactivity, social interest and affect, and sensory-oriented behaviors; (2) prolonged latency to disengage visual attention; (3) a characteristic pattern of early temperament, with marked passivity and decreased activity level at 6 months, followed by extreme distress reactions, a tendency to fixate on particular objects in the environment, and decreased expression of positive affect by 12 months; and (4) delayed expressive and receptive language. We discuss these findings in the context of various neural networks thought to underlie neurodevelopmental abnormalities in autism, including poor visual orienting. Over time, as we are able to prospectively study larger numbers and to examine interrelationships among both early-developing behaviors and biological indices of interest, we hope this work will advance current understanding of the neurodevelopmental origins of autism.", "title": "" }, { "docid": "80c7a60035f08fcefc6f5e0ba1c82405", "text": "This paper deals with word length in twenty of Jane Austen's letters and is part of a research project performed in Göttingen. Word length in English has so far only been studied in the context of contemporary texts (Hasse & Weinbrenner, 1995; Riedemann, 1994) and in the English dictionary (Rothschild, 1986). It has been ascertained that word length in texts abides by a law having the form of the mixed Poisson distribution -an assumption which in a language like English can easily be justified. However, in special texts other regularities can arise. Individual or genre-like factors can induce a systematic deviation in one or more frequency classes. We say that the phenomenon is on the way to another attractor. The first remedy in such cases is a local modification of the given frequency classes; the last remedy is the search for another model. THE DATA Letters were examined because it can be assumed that they are written down without interruption, and hence revised versions or the conscious use of stylistic means are the exception. The assumed natural rhythm governing word length in writing is thus believed to have remained mostly uninfluenced and constant. 
The length of the selected letters is between 126 and 494 words. They date from 1796 to 1817 and are partly businesslike and partly private. The letters to Jane Austen's sister Cassandra above all are written in an 'informal' style. In general, however, the letters are on a high stylistic level, which is not only characteristic of the use of language at that time, but also a main feature of Jane Austen's personal style. Thus contractions such as don't, can't, wouldn't etc. do not occur. ANALYSING THE DATA General Criteria Length is determined by the number of syllables in each word. \"Word\" is defined as an orthographic unit. The number of syllables in a word depends on the number of vowels or diphthongs. Diphthongs and triphthongs can also be differentiated, both of these would count as one syllable. This paper only deals with diphthongs. The number of syllables of abbreviations is counted according to its fully spoken form. Thus addresses and titles such as 'Mrs', 'Mr', 'Md' and 'Capt' consist of two syllables; 'Lieut' consists of three syllables. The same holds for figures and for the abbreviations of months. MS is the common short form for 'Manuscript'; 'comps' (complements), 'G.Mama' (Grandmama), 'morn' (morning), 'c ' (could), 'w ' (would) or 'rec' (received) seem to be the writer's idiosyncratic abbreviations. In all cases length is determined by the spoken form. The analysis is based on the 'received pronunciation' of British English. Only the running text without address, date, or place has been considered. Findings As ascertained by the software tool 'AltmannFitter' (1994) the best model was found to be the positive Singh-Poisson distribution (= inflated zero truncated Poisson distribution), which has the following formula: P_1 = 1 - α + α a e^{-a}/(1 - e^{-a}); P_x = α a^x e^{-a}/(x!(1 - e^{-a})), x = 2, 3, ... Distributions modified in this way indicate that the author tends to leave the basic model (in the case of English, the Poisson distribution) by local modification of the shortest class (here x = 1). Table 3 (Letter 16, Austen, 1798, to Cassandra Austen) gives the observed and expected word-length frequencies fx = 188, 57, 15, 4, 1 and NPx = 187.79, 56.53, 16.38, 3.56, 0.74.", "title": "" }, { "docid": "34f6603912c9775fc48329e596467107", "text": "Turbo generator with evaporative cooling stator and air cooling rotor possesses many excellent qualities for mid unit. The stator bars and core are immerged in evaporative coolant, which could be cooled fully. The rotor bars are cooled by air inner cooling mode, and the cooling effect compared with hydrogen and water cooling mode is limited. So an effective ventilation system has to been employed to insure the reliability of rotor. This paper presents the comparisons of stator temperature distribution between evaporative cooling mode and air cooling mode, and the designing of rotor ventilation system combined with evaporative cooling stator.", "title": "" }, { "docid": "91d0f12e9303b93521146d4d650a63df", "text": "We utilize the state-of-the-art in deep learning to show that we can learn by example what constitutes humor in the context of a Yelp review. To the best of the authors knowledge, no systematic study of deep learning for humor exists – thus, we construct a scaffolded study. First, we use “shallow” methods such as Random Forests and Linear Discriminants built on top of bag-of-words and word vector features. 
Then, we build deep feedforward networks on top of these features – in some sense, measuring how much of an effect basic feedforward nets help. Then, we use recurrent neural networks and convolutional neural networks to more accurately model the sequential nature of a review.", "title": "" }, { "docid": "402bf66ab180944e8f3068bef64fbc77", "text": "EvolView is a web application for visualizing, annotating and managing phylogenetic trees. First, EvolView is a phylogenetic tree viewer and customization tool; it visualizes trees in various formats, customizes them through built-in functions that can link information from external datasets, and exports the customized results to publication-ready figures. Second, EvolView is a tree and dataset management tool: users can easily organize related trees into distinct projects, add new datasets to trees and edit and manage existing trees and datasets. To make EvolView easy to use, it is equipped with an intuitive user interface. With a free account, users can save data and manipulations on the EvolView server. EvolView is freely available at: http://www.evolgenius.info/evolview.html.", "title": "" }, { "docid": "0c67bd1867014053a5bec3869f3b4f8c", "text": "BACKGROUND AND PURPOSE\nConstraint-induced movement therapy (CI therapy) has previously been shown to produce large improvements in actual amount of use of a more affected upper extremity in the \"real-world\" environment in patients with chronic stroke (ie, >1 year after the event). This work was carried out in an American laboratory. Our aim was to determine whether these results could be replicated in another laboratory located in Germany, operating within the context of a healthcare system in which administration of conventional types of physical therapy is generally more extensive than in the United States.\n\n\nMETHODS\nFifteen chronic stroke patients were given CI therapy, involving restriction of movement of the intact upper extremity by placing it in a sling for 90% of waking hours for 12 days and training (by shaping) of the more affected extremity for 7 hours on the 8 weekdays during that period.\n\n\nRESULTS\nPatients showed a significant and very large degree of improvement from before to after treatment on a laboratory motor test and on a test assessing amount of use of the affected extremity in activities of daily living in the life setting (effect sizes, 0.9 and 2.2, respectively), with no decrement in performance at 6-month follow-up. During a pretreatment control test-retest interval, there were no significant changes on these tests.\n\n\nCONCLUSIONS\nResults replicate in Germany the findings with CI therapy in an American laboratory, suggesting that the intervention has general applicability.", "title": "" }, { "docid": "077162116799dffe986cb488dda2ee56", "text": "We present hybrid concolic testing, an algorithm that interleaves random testing with concolic execution to obtain both a deep and a wide exploration of program state space. Our algorithm generates test inputs automatically by interleaving random testing until saturation with bounded exhaustive symbolic exploration of program points. It thus combines the ability of random search to reach deep program states quickly together with the ability of concolic testing to explore states in a neighborhood exhaustively. We have implemented our algorithm on top of CUTE and applied it to obtain better branch coverage for an editor implementation (VIM 5.7, 150K lines of code) as well as a data structure implementation in C. 
Our experiments suggest that hybrid concolic testing can handle large programs and provide, for the same testing budget, almost 4× the branch coverage than random testing and almost 2× that of concolic testing.", "title": "" }, { "docid": "01e53610e746555afadfc9387a66ce05", "text": "This paper presents a survey of the autopilot systems for small or micro unmanned aerial vehicles (UAVs). The objective is to provide a summary of the current commercial, open source and research autopilot systems for convenience of potential small UAV users. The UAV flight control basics are introduced first. The radio control system and autopilot control system are then explained from both the hardware and software viewpoints. Several typical off-the-shelf autopilot packages are compared in terms of sensor packages, observation approaches and controller strengths. Afterwards some open source autopilot systems are introduced. Conclusion is made with a summary of the current autopilot market and a remark on the future development.", "title": "" }, { "docid": "c7f0856c282d1039e44ba6ef50948d32", "text": "This paper presents the analysis and operation of a three-phase pulsewidth modulation rectifier system formed by the star-connection of three single-phase boost rectifier modules (Y-rectifier) without a mains neutral point connection. The current forming operation of the Y-rectifier is analyzed and it is shown that the phase current has the same high quality and low ripple as the Vienna rectifier. The isolated star point of Y-rectifier results in a mutual coupling of the individual phase module outputs and has to be considered for control of the module dc link voltages. An analytical expression for the coupling coefficients of the Y-rectifier phase modules is derived. Based on this expression, a control concept with reduced calculation effort is designed and it provides symmetric loading of the phase modules and solves the balancing problem of the dc link voltages. The analysis also provides insight that enables the derivation of a control concept for two phase operation, such as in the case of a mains phase failure. The theoretical and simulated results are proved by experimental analysis on a fully digitally controlled, 5.4-kW prototype.", "title": "" }, { "docid": "d7e7cdc9ac55d5af199395becfe02d73", "text": "Text recognition in images is a research area which attempts to develop a computer system with the ability to automatically read the text from images. These days there is a huge demand in storing the information available in paper documents format in to a computer storage disk and then later reusing this information by searching process. One simple way to store information from these paper documents in to computer system is to first scan the documents and then store them as images. But to reuse this information it is very difficult to read the individual contents and searching the contents form these documents line-by-line and word-by-word. The challenges involved in this the font characteristics of the characters in paper documents and quality of images. Due to these challenges, computer is unable to recognize the characters while reading them. Thus there is a need of character recognition mechanisms to perform Document Image Analysis (DIA) which transforms documents in paper format to electronic format. In this paper we have discuss method for text recognition from images. 
The objective of this paper is to recognition of text from image for better understanding of the reader by using particular sequence of different processing module.", "title": "" }, { "docid": "8f570416ceecf87310b7780ec935d814", "text": "BACKGROUND\nInguinal lymph node involvement is an important prognostic factor in penile cancer. Inguinal lymph node dissection allows staging and treatment of inguinal nodal disease. However, it causes morbidity and is associated with complications, such as lymphocele, skin loss and infection. Video Endoscopic Inguinal Lymphadenectomy (VEIL) is an endoscopic procedure, and it seems to be a new and attractive approach duplicating the standard open procedure with less morbidity. We present here a critical perioperative assessment with points of technique.\n\n\nMETHODS\nTen patients with moderate to high grade penile carcinoma with clinically negative inguinal lymph nodes were subjected to elective VEIL. VEIL was done in standard surgical steps. Perioperative parameters were assessed that is - duration of the surgery, lymph-related complications, time until drain removal, lymph node yield, surgical emphysema and histopathological positivity of lymph nodes.\n\n\nRESULTS\nOperative time for VEIL was 120 to 180 minutes. Lymph node yield was 7 to 12 lymph nodes. No skin related complications were seen with VEIL. Lymph related complications, that is, lymphocele, were seen in only two patients. The suction drain was removed after four to eight days (mean 5.1). Overall morbidity was 20% with VEIL.\n\n\nCONCLUSION\nIn our early experience, VEIL was a safe and feasible technique in patients with penile carcinoma with non palpable inguinal lymph nodes. It allows the removal of inguinal lymph nodes within the same limits as in conventional surgical dissection and potentially reduces surgical morbidity.", "title": "" }, { "docid": "72ddcb7a55918a328576a811a89d245b", "text": "Among all new emerging RNA species, microRNAs (miRNAs) have attracted the interest of the scientific community due to their implications as biomarkers of prognostic value, disease progression, or diagnosis, because of defining features as robust association with the disease, or stable presence in easily accessible human biofluids. This field of research has been established twenty years ago, and the development has been considerable. The regulatory nature of miRNAs makes them great candidates for the treatment of infectious diseases, and a successful example in the field is currently being translated to clinical practice. This review will present a general outline of miRNAmolecules, as well as successful stories of translational significance which are getting us closer from the basic bench studies into clinical practice.", "title": "" }, { "docid": "9b0ed9c60666c36f8cf33631f791687d", "text": "The central notion of Role-Based Access Control (RBAC) is that users do not have discretionary access to enterprise objects. Instead, access permissions are administratively associated with roles, and users are administratively made members of appropriate roles. This idea greatly simplifies management of authorization while providing an opportunity for great flexibility in specifying and enforcing enterprisespecific protection policies. Users can be made members of roles as determined by their responsibilities and qualifications and can be easily reassigned from one role to another without modifying the underlying access structure. 
Roles can be granted new permissions as new applications and actions are incorporated, and permissions can be revoked from roles as needed. Some users and vendors have recognized the potential benefits of RBAC without a precise definition of what RBAC constitutes. Some RBAC features have been implemented in commercial products without a frame of reference as to the functional makeup and virtues of RBAC [1]. This lack of definition makes it difficult for consumers to compare products and for vendors to get credit for the effectiveness of their products in addressing known security problems. To correct these deficiencies, a number of government sponsored research efforts are underway to define RBAC precisely in terms of its features and the benefits it affords. This research includes: surveys to better understand the security needs of commercial and government users [2], the development of a formal RBAC model, architecture, prototype, and demonstrations to validate its use and feasibility. As a result of these efforts, RBAC systems are now beginning to emerge. The purpose of this paper is to provide additional insight as to the motivations and functionality that might go behind the official RBAC name.", "title": "" }, { "docid": "644f61bc267d3dcb915f8c36c1584605", "text": "This paper discusses the design and development of an experimental tabletop robot called \"Haru\" based on design thinking methodology. Right from the very beginning of the design process, we have brought an interdisciplinary team that includes animators, performers and sketch artists to help create the first iteration of a distinctive anthropomorphic robot design based on a concept that leverages form factor with functionality. Its unassuming physical affordance is intended to keep human expectation grounded while its actual interactive potential stokes human interest. The meticulous combination of both subtle and pronounced mechanical movements together with its stunning visual displays, highlight its affective affordance. As a result, we have developed the first iteration of our tabletop robot rich in affective potential for use in different research fields involving long-term human-robot interaction.", "title": "" }, { "docid": "86820c43e63066930120fa5725b5b56d", "text": "We introduce Wiktionary as an emerging lexical semantic resource that can be used as a substitute for expert-made resources in AI applications. We evaluate Wiktionary on the pervasive task of computing semantic relatedness for English and German by means of correlation with human rankings and solving word choice problems. For the first time, we apply a concept vector based measure to a set of different concept representations like Wiktionary pseudo glosses, the first paragraph of Wikipedia articles, English WordNet glosses, and GermaNet pseudo glosses. We show that: (i) Wiktionary is the best lexical semantic resource in the ranking task and performs comparably to other resources in the word choice task, and (ii) the concept vector based approach yields the best results on all datasets in both evaluations.", "title": "" } ]
scidocsrr
266bd9346ae3016067c36dcb68031cca
Image encryption using chaotic logistic map
[ { "docid": "fc9eae18a5a44ee7df22d6c7bdb5a164", "text": "In this paper, methods are shown how to adapt invertible two-dimensional chaotic maps on a torus or on a square to create new symmetric block encryption schemes. A chaotic map is first generalized by introducing parameters and then discretized to a finite square lattice of points which represent pixels or some other data items. Although the discretized map is a permutation and thus cannot be chaotic, it shares certain properties with its continuous counterpart as long as the number of iterations remains small. The discretized map is further extended to three dimensions and composed with a simple diffusion mechanism. As a result, a symmetric block product encryption scheme is obtained. To encrypt an N × N image, the ciphering map is iteratively applied to the image. The construction of the cipher and its security is explained with the two-dimensional Baker map. It is shown that the permutations induced by the Baker map behave as typical random permutations. Computer simulations indicate that the cipher has good diffusion properties with respect to the plain-text and the key. A nontraditional pseudo-random number generator based on the encryption scheme is described and studied. Examples of some other two-dimensional chaotic maps are given and their suitability for secure encryption is discussed. The paper closes with a brief discussion of a possible relationship between discretized chaos and cryptosystems.", "title": "" } ]
[ { "docid": "d8a68a9e769f137e06ab05e4d4075dce", "text": "The inelastic response of existing reinforced concrete (RC) buildings without seismic details is investigated, presenting the results from more than 1000 nonlinear analyses. The seismic performance is investigated for two buildings, a typical building form of the 60s and a typical form of the 80s. Both structures are designed according to the old Greek codes. These building forms are typical for that period for many Southern European countries. Buildings of the 60s do not have seismic details, while buildings of the 80s have elementary seismic details. The influence of masonry infill walls is also investigated for the building of the 60s. Static pushover and incremental dynamic analyses (IDA) for a set of 15 strong motion records are carried out for the three buildings, two bare and one infilled. The IDA predictions are compared with the results of pushover analysis and the seismic demand according to Capacity Spectrum Method (CSM) and N2 Method. The results from IDA show large dispersion on the response, available ductility capacity, behaviour factor and failure displacement, depending on the strong motion record. CSM and N2 predictions are enveloped by the nonlinear dynamic predictions, but have significant differences from the mean values. The better behaviour of the building of the 80s compared to buildings of the 60s is validated with both pushover and nonlinear dynamic analyses. Finally, both types of analysis show that fully infilled frames exhibit an improved behaviour compared to bare frames.", "title": "" }, { "docid": "9150005965c893e6c2efa15c469fdffb", "text": "Low power has emerged as a principal theme in today's electronics industry. The need for low power has caused a major paradigm shift in which power dissipation is as important as performance and area. This article presents an in-depth survey of CAD methodologies and techniques for designing low power digital CMOS circuits and systems and describes the many issues facing designers at architectural, logical, and physical levels of design abstraction. It reviews some of the techniques and tools that have been proposed to overcome these difficulties and outlines the future challenges that must be met to design low power, high performance systems.", "title": "" }, { "docid": "6558b2a3c43e11d58f3bb829425d6a8d", "text": "While end-to-end neural conversation models have led to promising advances in reducing hand-crafted features and errors induced by the traditional complex system architecture, they typically require an enormous amount of data due to the lack of modularity. Previous studies adopted a hybrid approach with knowledge-based components either to abstract out domainspecific information or to augment data to cover more diverse patterns. On the contrary, we propose to directly address the problem using recent developments in the space of continual learning for neural models. Specifically, we adopt a domainindependent neural conversational model and introduce a novel neural continual learning algorithm that allows a conversational agent to accumulate skills across different tasks in a data-efficient way. To the best of our knowledge, this is the first work that applies continual learning to conversation systems. 
We verified the efficacy of our method through a conversational skill transfer from either synthetic dialogs or human-human dialogs to human-computer conversations in a customer support domain.", "title": "" }, { "docid": "435200b067ebd77f69a04cc490d73fa6", "text": "Self-mutilation of genitalia is an extremely rare entity, usually found in psychotic patients. Klingsor syndrome is a condition in which such an act is based upon religious delusions. The extent of genital mutilation can vary from superficial cuts to partial or total amputation of penis to total emasculation. The management of these patients is challenging. The aim of the treatment is restoration of the genital functionality. Microvascular reanastomosis of the phallus is ideal but it is often not possible due to the delay in seeking medical attention, non viability of the excised phallus or lack of surgical expertise. Hence, it is not unusual for these patients to end up with complete loss of the phallus and a perineal urethrostomy. We describe a patient with Klingsor syndrome who presented to us with near total penile amputation. The excised phallus was not viable and could not be used. The patient was managed with surgical reconstruction of the penile stump which was covered with loco-regional flaps. The case highlights that a functional penile reconstruction is possible in such patients even when microvascular reanastomosis is not feasible. This technique should be attempted before embarking upon perineal urethrostomy.", "title": "" }, { "docid": "c2891abf8297b5dcf0e21dfa9779a017", "text": "The success of knowledge-sharing communities like Wikipedia and the advances in automatic information extraction from textual and Web sources have made it possible to build large \"knowledge repositories\" such as DBpedia, Freebase, and YAGO. These collections can be viewed as graphs of entities and relationships (ER graphs) and can be represented as a set of subject-property-object (SPO) triples in the Semantic-Web data model RDF. Queries can be expressed in the W3C-endorsed SPARQL language or by similarly designed graph-pattern search. However, exact-match query semantics often fall short of satisfying the users' needs by returning too many or too few results. Therefore, IR-style ranking models are crucially needed.\n In this paper, we propose a language-model-based approach to ranking the results of exact, relaxed and keyword-augmented graph pattern queries over RDF graphs such as ER graphs. Our method estimates a query model and a set of result-graph models and ranks results based on their Kullback-Leibler divergence with respect to the query model. We demonstrate the effectiveness of our ranking model by a comprehensive user study.", "title": "" }, { "docid": "4d4de3ff3c99779c7fd5bd60fc006189", "text": "With the fast growing information technologies, high efficiency AC-DC front-end power supplies are becoming more and more desired in all kinds of distributed power system applications due to the energy conservation consideration. For the power factor correction (PFC) stage, the conventional constant frequency average current mode control has very low efficiency at light load due to high switching frequency related loss. The constant on-time control for PFC features the automatic reduction of switching frequency at light load, resulting improved light load efficiency. However, lower heavy load efficiency of the constant on-time control is observed because of very high frequency at Continuous Conduction Mode (CCM). 
By carefully comparing the on-time and frequency profiles between constant on-time and constant frequency control, a novel adaptive on-time control is proposed to improve the light load efficiency without sacrificing the heavy load efficiency. The performance of the adaptive on-time control is verified by experiment.", "title": "" }, { "docid": "aba4e6baa69a2ca7d029ebc33931fd4d", "text": "Along with the improvement of radar technologies Automatic Target Recognition (ATR) using Synthetic Aperture Radar (SAR) and Inverse SAR (ISAR) has come to be an active research area. SAR/ISAR are radar techniques to generate a two-dimensional high-resolution image of a target. Unlike other similar experiments using Convolutional Neural Networks (CNN) to solve this problem, we utilize an unusual approach that leads to better performance and faster training times. Our CNN uses complex values generated by a simulation to train the network; additionally, we utilize a multi-radar approach to increase the accuracy of the training and testing processes, thus resulting in higher accuracies than the other papers working on SAR/ISAR ATR. We generated our dataset with 7 different aircraft models with a radar simulator we developed called RadarPixel; it is a Windows GUI program implemented using Matlab and Java programing, the simulator is capable of accurately replicating a real SAR/ISAR configurations. Our objective is utilize our multiradar technique and determine the optimal number of radars needed to detect and classify targets.", "title": "" }, { "docid": "74f8127bc620fa1c9797d43dedea4d45", "text": "A novel system for long-term tracking of a human face in unconstrained videos is built on Tracking-Learning-Detection (TLD) approach. The system extends TLD with the concept of a generic detector and a validator which is designed for real-time face tracking resistent to occlusions and appearance changes. The off-line trained detector localizes frontal faces and the online trained validator decides which faces correspond to the tracked subject. Several strategies for building the validator during tracking are quantitatively evaluated. The system is validated on a sitcom episode (23 min.) and a surveillance (8 min.) video. In both cases the system detects-tracks the face and automatically learns a multi-view model from a single frontal example and an unlabeled video.", "title": "" }, { "docid": "63ed24b818f83ab04160b5c690075aac", "text": "In this paper, we discuss the impact of digital control in high-frequency switched-mode power supplies (SMPS), including point-of-load and isolated DC-DC converters, microprocessor power supplies, power-factor-correction rectifiers, electronic ballasts, etc., where switching frequencies are typically in the hundreds of kHz to MHz range, and where high efficiency, static and dynamic regulation, low size and weight, as well as low controller complexity and cost are very important. To meet these application requirements, a digital SMPS controller may include fast, small analog-to-digital converters, hardware-accelerated programmable compensators, programmable digital modulators with very fine time resolution, and a standard microcontroller core to perform programming, monitoring and other system interface tasks. 
Based on recent advances in circuit and control techniques, together with rapid advances in digital VLSI technology, we conclude that high-performance digital controller solutions are both feasible and practical, leading to much enhanced system integration and performance gains. Examples of experimentally demonstrated results are presented, together with pointers to areas of current and future research and development.", "title": "" }, { "docid": "08f49b003a3a5323e38e4423ba6503a4", "text": "Neurofeedback (NF), a type of neurobehavioral training, has gained increasing attention in recent years, especially concerning the treatment of children with ADHD. Promising results have emerged from recent randomized controlled studies, and thus, NF is on its way to becoming a valuable addition to the multimodal treatment of ADHD. In this review, we summarize the randomized controlled trials in children with ADHD that have been published within the last 5 years and discuss issues such as the efficacy and specificity of effects, treatment fidelity and problems inherent in placebo-controlled trials of NF. Directions for future NF research are outlined, which should further address specificity and help to determine moderators and mediators to optimize and individualize NF training. Furthermore, we describe methodological (tomographic NF) and technical ('tele-NF') developments that may also contribute to further improvements in treatment outcome.", "title": "" }, { "docid": "6ea4ecb12ca077c07f4706b6d11130db", "text": "We investigate the complexity of deep neural networks (DNN) that represent piecewise linear (PWL) functions. In particular, we study the number of linear regions, i.e. pieces, that a PWL function represented by a DNN can attain, both theoretically and empirically. We present (i) tighter upper and lower bounds for the maximum number of linear regions on rectifier networks, which are exact for inputs of dimension one; (ii) a first upper bound for multi-layer maxout networks; and (iii) a first method to perform exact enumeration or counting of the number of regions by modeling the DNN with a mixed-integer linear formulation. These bounds come from leveraging the dimension of the space defining each linear region. The results also indicate that a deep rectifier network can only have more linear regions than every shallow counterpart with same number of neurons if that number exceeds the dimension of the input.", "title": "" }, { "docid": "cc2e24cd04212647f1c29482aa12910d", "text": "A number of surveillance scenarios require the detection and tracking of people. Although person detection and counting systems are commercially available today, there is need for further research to address the challenges of real world scenarios. The focus of this work is the segmentation of groups of people into individuals. One relevant application of this algorithm is people counting. Experiments document that the presented approach leads to robust people counts.", "title": "" }, { "docid": "7b1a6768cc6bb975925a754343dc093c", "text": "In response to the increasing volume of trajectory data obtained, e.g., from tracking athletes, animals, or meteorological phenomena, we present a new space-efficient algorithm for the analysis of trajectory data. 
The algorithm combines techniques from computational geometry, data mining, and string processing and offers a modular design that allows for a user-guided exploration of trajectory data incorporating domain-specific constraints and objectives.", "title": "" }, { "docid": "53ebcdf1dfb5b850228ac422fdd50490", "text": "A frequent goal of flow cytometric analysis is to classify cells as positive or negative for a given marker, or to determine the precise ratio of positive to negative cells. This requires good and reproducible instrument setup, and careful use of controls for analyzing and interpreting the data. The type of controls to include in various kinds of flow cytometry experiments is a matter of some debate and discussion. In this tutorial, we classify controls in various categories, describe the options within each category, and discuss the merits of each option.", "title": "" }, { "docid": "e28f2a2d5f3a0729943dca52da5d45b6", "text": "Event cameras are bio-inspired vision sensors that output pixel-level brightness changes instead of standard intensity frames. They offer significant advantages over standard cameras, namely a very high dynamic range, no motion blur, and a latency in the order of microseconds. We propose a novel, accurate tightly-coupled visual-inertial odometry pipeline for such cameras that leverages their outstanding properties to estimate the camera ego-motion in challenging conditions, such as high-speed motion or high dynamic range scenes. The method tracks a set of features (extracted on the image plane) through time. To achieve that, we consider events in overlapping spatio-temporal windows and align them using the current camera motion and scene structure, yielding motion-compensated event frames. We then combine these feature tracks in a keyframe-based, visual-inertial odometry algorithm based on nonlinear optimization to estimate the camera’s 6-DOF pose, velocity, and IMU biases. The proposed method is evaluated quantitatively on the public Event Camera Dataset [19] and significantly outperforms the state-of-the-art [28], while being computationally much more efficient: our pipeline can run much faster than real-time on a laptop and even on a smartphone processor. Furthermore, we demonstrate qualitatively the accuracy and robustness of our pipeline on a large-scale dataset, and an extremely high-speed dataset recorded by spinning an event camera on a leash at 850 deg/s.", "title": "" }, { "docid": "dbcdcd2cdf8894f853339b5fef876dde", "text": "Genicular nerve radiofrequency ablation (RFA) has recently gained popularity as an intervention for chronic knee pain in patients who have failed other conservative or surgical treatments. Long-term efficacy and adverse events are still largely unknown. Under fluoroscopic guidance, thermal RFA targets the lateral superior, medial superior, and medial inferior genicular nerves, which run in close proximity to the genicular arteries that play a crucial role in supplying the distal femur, knee joint, meniscus, and patella. RFA targets nerves by relying on bony landmarks, but fails to provide visualization of vascular structures. Although vascular injuries after genicular nerve RFA have not been reported, genicular vascular complications are well documented in the surgical literature. This article describes the anatomy, including detailed cadaveric dissections and schematic drawings, of the genicular neurovascular bundle.
The present investigation also included a comprehensive literature review of genicular vascular injuries involving those arteries which lie near the targets of genicular nerve RFA. These adverse vascular events are documented in the literature as case reports. Of the 27 cases analyzed, 25.9% (7/27) involved the lateral superior genicular artery, 40.7% (11/27) involved the medial superior genicular artery, and 33.3% (9/27) involved the medial inferior genicular artery. Most often, these vascular injuries result in the formation of pseudoaneurysm, arteriovenous fistula (AVF), hemarthrosis, and/or osteonecrosis of the patella. Although rare, these complications carry significant morbidities. Based on the detailed dissections and review of the literature, our investigation suggests that vascular injury is a possible risk of genicular RFA. Lastly, recommendations are offered to minimize potential iatrogenic complications.", "title": "" }, { "docid": "c9d95b3656c703f4ce49c591a3f0a00f", "text": "Due to cellular heterogeneity, cell nuclei classification, segmentation, and detection from pathological images are challenging tasks. In the last few years, Deep Convolutional Neural Network (DCNN) approaches have shown state-of-the-art (SOTA) performance on histopathological imaging in different studies. In this work, we have proposed different advanced DCNN models and evaluated them for nuclei classification, segmentation, and detection. First, the Densely Connected Recurrent Convolutional Network (DCRN) model is used for nuclei classification. Second, Recurrent Residual U-Net (R2U-Net) is applied for nuclei segmentation. Third, the R2U-Net regression model, named UD-Net, is used for nuclei detection from pathological images. The experiments are conducted with different datasets, including the Routine Colon Cancer (RCC) classification and detection dataset and the Nuclei Segmentation Challenge 2018 dataset. The experimental results show that the proposed DCNN models provide superior performance compared to the existing approaches for nuclei classification, segmentation, and detection tasks. The results are evaluated with different performance metrics including precision, recall, Dice Coefficient (DC), Mean Squared Error (MSE), F1-score, and overall accuracy. We have achieved around 3.4% and 4.5% better F1-scores for the nuclei classification and detection tasks compared to a recently published DCNN-based method. In addition, R2U-Net shows around 92.15% testing accuracy in terms of DC. These improved methods will help pathological practice through better quantitative analysis of nuclei in Whole Slide Images (WSI), which will ultimately help toward a better understanding of different types of cancer in the clinical workflow.", "title": "" }, { "docid": "165fbade7d495ce47a379520697f0d75", "text": "Neutral-point-clamped (NPC) inverters are the most widely used topology of multilevel inverters in high-power applications (several megawatts). This paper presents in a very simple way the basic operation and the most used modulation and control techniques developed to date. Special attention is paid to the loss distribution in semiconductors, and an active NPC inverter is presented to overcome this problem. This paper discusses the main fields of application and presents some technological problems such as capacitor balance and losses.", "title": "" } ]
scidocsrr
d5d8cb033291263ffeb48f31e72cde1b
Rekindling network protocol innovation with user-level stacks
[ { "docid": "f9c938a98621f901c404d69a402647c7", "text": "The growing popularity of virtual machines is pushing the demand for high performance communication between them. Past solutions have seen the use of hardware assistance, in the form of \"PCI passthrough\" (dedicating parts of physical NICs to each virtual machine) and even bouncing traffic through physical switches to handle data forwarding and replication.\n In this paper we show that, with a proper design, very high speed communication between virtual machines can be achieved completely in software. Our architecture, called VALE, implements a Virtual Local Ethernet that can be used by virtual machines, such as QEMU, KVM and others, as well as by regular processes. VALE achieves a throughput of over 17 million packets per second (Mpps) between host processes, and over 2 Mpps between QEMU instances, without any hardware assistance.\n VALE is available for both FreeBSD and Linux hosts, and is implemented as a kernel module that extends our recently proposed netmap framework, and uses similar techniques to achieve high packet rates.", "title": "" } ]
[ { "docid": "bf5f08174c55ed69e454a87ff7fbe6e2", "text": "In much of the current literature on supply chain management, supply networks are recognized as a system. In this paper, we take this observation to the next level by arguing the need to recognize supply networks as a complex adaptive system (CAS). We propose that many supply networks emerge rather than result from purposeful design by a singular entity. Most supply chain management literature emphasizes negative feedback for purposes of control; however, the emergent patterns in a supply network can much better be managed through positive feedback, which allows for autonomous action. Imposing too much control detracts from innovation and flexibility; conversely, allowing too much emergence can undermine managerial predictability and work routines. Therefore, when managing supply networks, managers must appropriately balance how much to control and how much to let emerge. © 2001 Elsevier Science B.V. All rights reserved.", "title": "" }, { "docid": "bb201a87b4f81c9c4d2c8889d4bd3a6a", "text": "Computers have difficulty learning how to play Texas Hold’em Poker. The game contains a high degree of stochasticity, hidden information, and opponents that are deliberately trying to mis-represent their current state. Poker has a much larger game space than classic parlour games such as Chess and Backgammon. Evolutionary methods have been shown to find relatively good results in large state spaces, and neural networks have been shown to be able to find solutions to non-linear search problems. In this paper, we present several algorithms for teaching agents how to play No-Limit Texas Hold’em Poker using a hybrid method known as evolving neural networks. Furthermore, we adapt heuristics such as halls of fame and co-evolution to be able to handle populations of Poker agents, which can sometimes contain several hundred opponents, instead of a single opponent. Our agents were evaluated against several benchmark agents. Experimental results show the overall best performance was obtained by an agent evolved from a single population (i.e., with no co-evolution) using a large hall of fame. These results demonstrate the effectiveness of our algorithms in creating competitive No-Limit Texas Hold’em Poker agents.", "title": "" }, { "docid": "cf1d8589fb42bd2af21e488e3ea79765", "text": "This paper presents ProRace, a dynamic data race detector practical for production runs. It is lightweight, but still offers high race detection capability. To track memory accesses, ProRace leverages instruction sampling using the performance monitoring unit (PMU) in commodity processors. Our PMU driver enables ProRace to sample more memory accesses at a lower cost compared to the state-of-the-art Linux driver. Moreover, ProRace uses PMU-provided execution contexts including register states and program path, and reconstructs unsampled memory accesses offline. This technique allows \\ProRace to overcome inherent limitations of sampling and improve the detection coverage by performing data race detection on the trace with not only sampled but also reconstructed memory accesses. 
Experiments using racy production software including Apache and MySQL show that, with a reasonable offline cost, ProRace incurs only 2.6% overhead at runtime with a 27.5% detection probability at a sampling period of 10,000.", "title": "" }, { "docid": "86e4fa3a9cc7dd6298785f40dae556b6", "text": "The stochastic block model (SBM) and its variants are popular models used in community detection for network data. In this paper, we propose a feature adjusted stochastic block model (FASBM) to capture the impact of node features on the network links as well as to detect the residual community structure beyond that explained by the node features. The proposed model can accommodate multiple node features and estimate the form of feature impacts from the data. Moreover, unlike many existing algorithms that are limited to binary-valued interactions, the proposed FASBM model and inference approaches are easily applied to relational data generated from any exponential family distribution. We illustrate the methods on simulated networks and on two real-world networks: a brain network and a US air-transportation network.", "title": "" }, { "docid": "49a6de5759f4e760f68939e9292928d8", "text": "An ongoing controversy exists in the prototyping community about how closely in form and function a user-interface prototype should represent the final product. This dispute is referred to as the \"Low- versus High-Fidelity Prototyping Debate.\" In this article, we discuss arguments for and against low- and high-fidelity prototypes, guidelines for the use of rapid user-interface prototyping, and the implications for user-interface designers.", "title": "" }, { "docid": "d44bc13e5dd794a70211aac7ba44103b", "text": "Endowing artificial agents with the ability to empathize is believed to enhance their social behavior and to make them more likable, trustworthy, and caring. Neuropsychological findings substantiate that empathy occurs to different degrees depending on several factors including, among others, a person’s mood, personality, and social relationships with others. Although there is increasing interest in endowing artificial agents with affect, personality, and the ability to build social relationships, little attention has been devoted to the role of such factors in influencing their empathic behavior. In this paper, we present a computational model of empathy which allows a virtual human to exhibit different degrees of empathy. The presented model is based on psychological models of empathy and is applied and evaluated in the context of a conversational agent scenario.", "title": "" }, { "docid": "dc330168eb4ca331c8fbfa40b6abdd66", "text": "For multimedia communications, low computational complexity of the coder is required to integrate services of several media sources due to the limited computing capability of the personal information machine. The Multi-pulse Maximum Likelihood Quantization (MP-MLQ) algorithm with high computational complexity and high quality has been used in the G.723.1 standard codec. To reduce the computational complexity of the MP-MLQ method, this paper presents an efficient pre-selection scheme to simplify the excitation codebook search procedure, which is computationally the most demanding. We propose a fast search algorithm which uses an energy function to predict the candidate pulses, and the codebook is redesigned to become a multi-track position structure.
Simulation results show that the average perceptual evaluation of speech quality (PESQ) score is degraded only slightly, by 0.056, and our proposed method can reduce computational complexity by about 52.8% relative to the original G.723.1 MP-MLQ computation load with perceptually negligible degradation. Our objective evaluations verify that the proposed method can provide speech quality comparable to that of the original MP-MLQ approach.", "title": "" }, { "docid": "ccddd7df2b5246c44d349bfb0aae499a", "text": "We consider stochastic multi-armed bandit problems with complex actions over a set of basic arms, where the decision maker plays a complex action rather than a basic arm in each round. The reward of the complex action is some function of the basic arms’ rewards, and the feedback observed may not necessarily be the reward per-arm. For instance, when the complex actions are subsets of the arms, we may only observe the maximum reward over the chosen subset. Thus, feedback across complex actions may be coupled due to the nature of the reward function. We prove a frequentist regret bound for Thompson sampling in a very general setting involving parameter, action and observation spaces and a likelihood function over them. The bound holds for discretely-supported priors over the parameter space without additional structural properties such as closed-form posteriors, conjugate prior structure or independence across arms. The regret bound scales logarithmically with time but, more importantly, with an improved constant that non-trivially captures the coupling across complex actions due to the structure of the rewards. As applications, we derive improved regret bounds for classes of complex bandit problems involving selecting subsets of arms, including the first nontrivial regret bounds for nonlinear MAX reward feedback from subsets.", "title": "" }, { "docid": "2a3273a7308273887b49f2d6cc99fe68", "text": "The healthcare industry collects huge amounts of healthcare data which, unfortunately, are not \"mined\" to discover hidden information for effective decision making. Discovery of hidden patterns and relationships often goes unexploited. Advanced data mining techniques can help remedy this situation. This research has developed a prototype Intelligent Heart Disease Prediction System (IHDPS) using data mining techniques, namely, Decision Trees, Naive Bayes and Neural Network. Results show that each technique has its unique strength in realizing the objectives of the defined mining goals. IHDPS can answer complex \"what if\" queries which traditional decision support systems cannot. Using medical profiles such as age, sex, blood pressure and blood sugar, it can predict the likelihood of patients getting heart disease. It enables significant knowledge, e.g. patterns and relationships between medical factors related to heart disease, to be established. IHDPS is Web-based, user-friendly, scalable, reliable and expandable. It is implemented on the .NET platform.", "title": "" }, { "docid": "3810acca479f6fa5d4f314d36a27b42c", "text": "The paper describes a stabilization control of a two-wheel-driven wheelchair based on a pitch angle disturbance observer (PADO). PADO makes it possible to stabilize the wheelchair motion and remove the casters. This brings sophisticated mobility to the wheelchair because the casters are an obstacle to realizing step passage motion and similar maneuvers.
The proposed approach based on PADO is robust against disturbances in the pitch angle direction, and more functional wheelchairs are expected from the developed system. The validity of the proposed method is confirmed by simulation and experiment.", "title": "" }, { "docid": "64330f538b3d8914cbfe37565ab0d648", "text": "The compositionality of meaning extends beyond the single sentence. Just as words combine to form the meaning of sentences, so do sentences combine to form the meaning of paragraphs, dialogues and general discourse. We introduce both a sentence model and a discourse model corresponding to the two levels of compositionality. The sentence model adopts convolution as the central operation for composing semantic vectors and is based on a novel hierarchical convolutional neural network. The discourse model extends the sentence model and is based on a recurrent neural network that is conditioned in a novel way both on the current sentence and on the current speaker. The discourse model is able to capture both the sequentiality of sentences and the interaction between different speakers. Without feature engineering or pretraining and with simple greedy decoding, the discourse model coupled to the sentence model obtains state of the art performance on a dialogue act classification experiment.", "title": "" }, { "docid": "ec9c0ba115e68545e263a82d6282d43e", "text": "A 1.8 GHz LC VCO with a 1.8-V supply is presented. The VCO achieves low power consumption by optimum selection of the inductance in the L-C tank. To increase the tuning range, a three-bit switching capacitor array is used for digital switched tuning. Designed in 0.18-μm RF CMOS technology, the proposed VCO achieves a phase noise of -126.2 dBc/Hz at 1 MHz offset and consumes 1.38 mA of core current from the 1.8-V supply.", "title": "" }, { "docid": "2172e78731ee63be5c15549e38c4babb", "text": "The design of a low-cost low-power ring oscillator-based truly random number generator (TRNG) macrocell, which is suitable to be integrated in smart cards, is presented. The oscillator sampling technique is exploited, and a tetrahedral oscillator with large jitter has been employed to realize the TRNG. Techniques to improve the statistical quality of the ring oscillator-based TRNGs’ bit sequences have been presented and verified by simulation and measurement. A postdigital processor is added to further enhance the randomness of the output bits. Fabricated in the HHNEC 0.13-μm standard CMOS process, the proposed TRNG has an area as low as 0.005 mm². Powered by a single 1.8-V supply voltage, the TRNG has a power consumption of 40 μW. The bit rate of the TRNG after postprocessing is 100 kb/s. The proposed TRNG has been made into an IP and successfully applied in an SD card for encryption applications. The proposed TRNG has passed the National Institute of Standards and Technology tests and Diehard tests.", "title": "" }, { "docid": "8877d6753d6b7cd39ba36c074ca56b00", "text": "Perhaps the most fundamental application of affective computing will be Human-Computer Interaction (HCI), in which the computer should have the ability to detect and track the user’s affective states, and provide corresponding feedback. The human multi-sensor affect system defines the expectation of a multimodal affect analyzer. In this paper, we present our efforts toward audio-visual HCI-related affect recognition. With HCI applications in mind, we take into account some special affective states which indicate users’ cognitive/motivational states.
Facing the fact that a facial expression is influenced by both an affective state and speech content, we apply a smoothing method to extract the information of the affective state from facial features. In our fusion stage, a voting method is applied to combine audio and visual modalities so that the final affect recognition accuracy is greatly improved. We test our bimodal affect recognition approach on 38 subjects with 11 HCI-related affect states. The extensive experimental results show that the average person-dependent affect recognition accuracy is almost 90% for our bimodal fusion.", "title": "" }, { "docid": "d8f54e45818fd88fc8e5689de55428a3", "text": "When brief blank fields are placed between alternating displays of an original and a modified scene, a striking failure of perception is induced: The changes become extremely difficult to notice, even when they are large, presented repeatedly, and the observer expects them to occur (Rensink, O’Regan, & Clark, 1997). To determine the mechanisms behind this induced “change blindness”, four experiments examine its dependence on initial preview and on the nature of the interruptions used. Results support the proposal that representations at the early stages of visual processing are inherently volatile, and that focused attention is needed to stabilize them sufficiently to support the perception of change.", "title": "" }, { "docid": "c30ea570f744f576014aeacf545b027c", "text": "We aimed to examine the effect of different doses of lutein supplementation on visual function in subjects with long-term computer display light exposure. Thirty-seven healthy subjects with long-term computer display light exposure ranging in age from 22 to 30 years were randomly assigned to one of three groups: Group L6 (6 mg lutein/d, n 12); Group L12 (12 mg lutein/d, n 13); and Group Placebo (maltodextrin placebo, n 12). Levels of serum lutein and visual performance indices such as visual acuity, contrast sensitivity and glare sensitivity were measured at weeks 0 and 12. After 12-week lutein supplementation, serum lutein concentrations of Groups L6 and L12 increased from 0.356 (SD 0.117) to 0.607 (SD 0.176) micromol/l, and from 0.328 (SD 0.120) to 0.733 (SD 0.354) micromol/l, respectively. No statistical changes from baseline were observed in uncorrected visual acuity and best-spectacle corrected visual acuity, whereas there was a trend toward increase in visual acuity in Group L12. Contrast sensitivity in Groups L6 and L12 increased with supplementation, and statistical significance was reached at most visual angles of Group L12. No significant change was observed in glare sensitivity over time. Visual function in healthy subjects who received the lutein supplement improved, especially in contrast sensitivity, suggesting that a higher intake of lutein may have beneficial effects on the visual performance.", "title": "" }, { "docid": "eadc50aebc6b9c2fbd16f9ddb3094c00", "text": "Instance segmentation is the problem of detecting and delineating each distinct object of interest appearing in an image. Current instance segmentation approaches consist of ensembles of modules that are trained independently of each other, thus missing opportunities for joint learning. Here we propose a new instance segmentation paradigm consisting in an end-to-end method that learns how to segment instances sequentially. The model is based on a recurrent neural network that sequentially finds objects and their segmentations one at a time. 
This net is provided with a spatial memory that keeps track of what pixels have been explained and allows occlusion handling. In order to train the model we designed a principled loss function that accurately represents the properties of the instance segmentation problem. In the experiments carried out, we found that our method outperforms recent approaches on multiple person segmentation, and all state of the art approaches on the Plant Phenotyping dataset for leaf counting.", "title": "" }, { "docid": "e9dc75f34b398b4e0d028f4dbbb707d1", "text": "INTRODUCTION\nUniversity students are potentially important targets for the promotion of healthy lifestyles as this may reduce the risks of lifestyle-related disorders later in life. This cross-sectional study examined differences in eating behaviours, dietary intake, weight status, and body composition between male and female university students.\n\n\nMETHODOLOGY\nA total of 584 students (59.4% females and 40.6% males) aged 20.6 +/- 1.4 years from four Malaysian universities in the Klang Valley participated in this study. Participants completed the Eating Behaviours Questionnaire and two-day 24-hour dietary recall. Body weight, height, waist circumference and percentage of body fat were measured.\n\n\nRESULTS\nAbout 14.3% of males and 22.4% of females were underweight, while 14.0% of males and 12.3% of females were overweight and obese. A majority of the participants (73.8% males and 74.6% females) skipped at least one meal daily in the past seven days. Breakfast was the most frequently skipped meal. Both males and females frequently snacked during morning tea time. Fruits and biscuits were the most frequently consumed snack items. More than half of the participants did not meet the Malaysian Recommended Nutrient Intake (RNI) for energy, vitamin C, thiamine, riboflavin, niacin, iron (females only), and calcium. Significantly more males than females achieved the RNI levels for energy, protein and iron intakes.\n\n\nCONCLUSION\nThis study highlights the presence of unhealthy eating behaviours, inadequate nutrient intake, and a high prevalence of underweight among university students. Energy and nutrient intakes differed between the sexes. Therefore, promoting healthy eating among young adults is crucial to achieve a healthy nutritional status.", "title": "" }, { "docid": "1dc615b299a8a63caa36cd8e36459323", "text": "Domain adaptation manages to build an effective target classifier or regression model for unlabeled target data by utilizing the well-labeled source data but lying different distributions. Intuitively, to address domain shift problem, it is crucial to learn domain invariant features across domains, and most existing approaches have concentrated on it. However, they often do not directly constrain the learned features to be class discriminative for both source and target data, which is of vital importance for the final classification. Therefore, in this paper, we put forward a novel feature learning method for domain adaptation to construct both domain invariant and class discriminative representations, referred to as DICD. Specifically, DICD is to learn a latent feature space with important data properties preserved, which reduces the domain difference by jointly matching the marginal and class-conditional distributions of both domains, and simultaneously maximizes the inter-class dispersion and minimizes the intra-class scatter as much as possible. 
Experiments in this paper have demonstrated that the class discriminative properties will dramatically alleviate the cross-domain distribution inconsistency, which further boosts the classification performance. Moreover, we show that exploring both domain invariance and class discriminativeness of the learned representations can be integrated into one optimization framework, and the optimal solution can be derived effectively by solving a generalized eigen-decomposition problem. Comprehensive experiments on several visual cross-domain classification tasks verify that DICD can outperform the competitors significantly.", "title": "" }, { "docid": "ac46286c7d635ccdcd41358666026c12", "text": "This paper represents our first endeavor to explore how to better understand the complex nature, scope, and practices of eSports. Our goal is to explore diverse perspectives on what defines eSports as a starting point for further research. Specifically, we critically reviewed existing definitions/understandings of eSports in different disciplines. We then interviewed 26 eSports players and qualitatively analyzed their own perceptions of eSports. We contribute to further exploring definitions and theories of eSports for CHI researchers who have considered online gaming a serious and important area of research, and highlight opportunities for new avenues of inquiry for researchers who are interested in designing technologies for this unique genre.", "title": "" } ]
scidocsrr
3d0d9de8d64948a55b956e46c69dca01
Role of video games in improving health-related outcomes: a systematic review.
[ { "docid": "f9c37f460fc0a4e7af577ab2cbe7045b", "text": "Declines in various cognitive abilities, particularly executive control functions, are observed in older adults. An important goal of cognitive training is to slow or reverse these age-related declines. However, opinion is divided in the literature regarding whether cognitive training can engender transfer to a variety of cognitive skills in older adults. In the current study, the authors trained older adults in a real-time strategy video game for 23.5 hr in an effort to improve their executive functions. A battery of cognitive tasks, including tasks of executive control and visuospatial skills, were assessed before, during, and after video-game training. The trainees improved significantly in the measures of game performance. They also improved significantly more than the control participants in executive control functions, such as task switching, working memory, visual short-term memory, and reasoning. Individual differences in changes in game performance were correlated with improvements in task switching. The study has implications for the enhancement of executive control processes of older adults.", "title": "" } ]
[ { "docid": "4a240b05fbb665596115841d238a483b", "text": "BACKGROUND\nAttachment theory is one of the most important achievements of contemporary psychology. Role of medical students in the community health is important, so we need to know about the situation of happiness and attachment style in these students.\n\n\nOBJECTIVES\nThis study was aimed to assess the relationship between medical students' attachment styles and demographic characteristics.\n\n\nMATERIALS AND METHODS\nThis cross-sectional study was conducted on randomly selected students of Medical Sciences in Kurdistan University, in 2012. To collect data, Hazan and Shaver's attachment style measure and the Oxford Happiness Questionnaire were used. The results were analyzed using the SPSS software version 16 (IBM, Chicago IL, USA) and statistical analysis was performed via t-test, Chi-square test, and multiple regression tests.\n\n\nRESULTS\nSecure attachment style was the most common attachment style and the least common was ambivalent attachment style. Avoidant attachment style was more common among single persons than married people (P = 0.03). No significant relationship was observed between attachment style and gender and grade point average of the studied people. The mean happiness score of students was 62.71. In multivariate analysis, the variables of secure attachment style (P = 0.001), male gender (P = 0.005), and scholar achievement (P = 0.047) were associated with higher happiness score.\n\n\nCONCLUSION\nThe most common attachment style was secure attachment style, which can be a positive prognostic factor in medical students, helping them to manage stress. Higher frequency of avoidant attachment style among single persons, compared with married people, is mainly due to their negative attitude toward others and failure to establish and maintain relationships with others.", "title": "" }, { "docid": "f845508acabb985dd80c31774776e86b", "text": "In this paper, we introduce two input devices for wearable computers, called GestureWrist and GesturePad. Both devices allow users to interact with wearable or nearby computers by using gesture-based commands. Both are designed to be as unobtrusive as possible, so they can be used under various social contexts. The first device, called GestureWrist, is a wristband-type input device that recognizes hand gestures and forearm movements. Unlike DataGloves or other hand gesture-input devices, all sensing elements are embedded in a normal wristband. The second device, called GesturePad, is a sensing module that can be attached on the inside of clothes, and users can interact with this module from the outside. It transforms conventional clothes into an interactive device without changing their appearance.", "title": "" }, { "docid": "e051c1dafe2a2f45c48a79c320894795", "text": "In this paper we present a graph-based model that, utilizing relations between groups of System-calls, detects whether an unknown software sample is malicious or benign, and classifies a malicious software to one of a set of known malware families. More precisely, we utilize the System-call Dependency Graphs (or, for short, ScD-graphs), obtained by traces captured through dynamic taint analysis. We design our model to be resistant against strong mutations applying our detection and classification techniques on a weighted directed graph, namely Group Relation Graph, or Gr-graph for short, resulting from ScD-graph after grouping disjoint subsets of its vertices. 
For the detection process, we propose the Δ-similarity metric, and for the process of classification, we propose the SaMe-similarity and NP-similarity metrics, which together constitute the SaMe-NP similarity. Finally, we evaluate our model for malware detection and classification, showing its potential against malicious software by measuring its detection rates and classification accuracy.", "title": "" }, { "docid": "315af705427ee4363fe4614dc72eb7a7", "text": "The 2007 Nobel Prize in Physics can be understood as a global recognition of the rapid development of Giant Magnetoresistance (GMR), from both the physics and engineering points of view. Behind the utilization of GMR structures as read heads for massive storage magnetic hard disks, important applications as solid state magnetic sensors have emerged. Low cost, compatibility with standard CMOS technologies and high sensitivity are common advantages of these sensors. This way, they have been successfully applied in a lot of different environments. In this work, we try to collect the Spanish contributions to the progress of the research related to GMR-based sensors covering, among other subjects, the applications, the sensor design, the modelling and the electronic interfaces, focusing on electrical current sensing applications.", "title": "" }, { "docid": "05d3d0d62d2cff27eace1fdfeecf9814", "text": "This article solves the equilibrium problem in a pure-exchange, continuous-time economy in which some agents face information costs or other types of frictions effectively preventing them from investing in the stock market. Under the assumption that the restricted agents have logarithmic utilities, a complete characterization of equilibrium prices and consumption/investment policies is provided. A simple calibration shows that the model can help resolve some of the empirical asset pricing puzzles.", "title": "" }, { "docid": "3fdd81a3e2c86f43152f72e159735a42", "text": "Class imbalance learning tackles supervised learning problems where some classes have significantly more examples than others. Most of the existing research has focused only on binary-class cases. In this paper, we study multiclass imbalance problems and propose a dynamic sampling method (DyS) for multilayer perceptrons (MLP). In DyS, for each epoch of the training process, every example is fed to the current MLP and then the probability of it being selected for training the MLP is estimated. DyS dynamically selects informative data to train the MLP. In order to evaluate DyS and understand its strengths and weaknesses, comprehensive experimental studies have been carried out. Results on 20 multiclass imbalanced data sets show that DyS can outperform the compared methods, including pre-sample methods, active learning methods, cost-sensitive methods, and boosting-type methods.", "title": "" }, { "docid": "1530571213fb98e163cb3cf45cfe9cc6", "text": "We explore the properties of byte-level recurrent language models. When given sufficient amounts of capacity, training data, and compute time, the representations learned by these models include disentangled features corresponding to high-level concepts. Specifically, we find a single unit which performs sentiment analysis. These representations, learned in an unsupervised manner, achieve state of the art on the binary subset of the Stanford Sentiment Treebank. They are also very data efficient. When using only a handful of labeled examples, our approach matches the performance of strong baselines trained on full datasets.
We also demonstrate the sentiment unit has a direct influence on the generative process of the model. Simply fixing its value to be positive or negative generates samples with the corresponding positive or negative sentiment.", "title": "" }, { "docid": "a42b9567dfc9e9fe92bc9aeb38ef5e5a", "text": "This paper presents a physical model for planar spiral inductors on silicon, which accounts for eddy current effect in the conductor, crossover capacitance between the spiral and center-tap, capacitance between the spiral and substrate, substrate ohmic loss, and substrate capacitance. The model has been confirmed with measured results of inductors having a wide range of layout and process parameters. This scalable inductor model enables the prediction and optimization of inductor performance.", "title": "" }, { "docid": "1301030c091eeb23d43dd3bfa6763e77", "text": "A new system for web attack detection is presented. It follows the anomaly-based approach, therefore known and unknown attacks can be detected. The system relies on a XML file to classify the incoming requests as normal or anomalous. The XML file, which is built from only normal traffic, contains a description of the normal behavior of the target web application statistically characterized. Any request which deviates from the normal behavior is considered an attack. The system has been applied to protect a real web application. An increasing number of training requests have been used to train the system. Experiments show that when the XML file has enough information to closely characterize the normal behavior of the target web application, a very high detection rate is reached while the false alarm rate remains very low.", "title": "" }, { "docid": "88cf3138707e74f9efec06f039d7ea76", "text": "In the electricity sector, energy conservation through technological and behavioral change is estimated to have a savings potential of 123 million metric tons of carbon per year, which represents 20% of US household direct emissions in the United States. In this article, we investigate the effectiveness of nonprice information strategies to motivate conservation behavior. We introduce environment and health-based messaging as a behavioral strategy to reduce energy use in the home and promote energy conservation. In a randomized controlled trial with real-time appliance-level energy metering, we find that environment and health-based information strategies, which communicate the environmental and public health externalities of electricity production, such as pounds of pollutants, childhood asthma, and cancer, outperform monetary savings information to drive behavioral change in the home. Environment and health-based information treatments motivated 8% energy savings versus control and were particularly effective on families with children, who achieved up to 19% energy savings. Our results are based on a panel of 3.4 million hourly appliance-level kilowatt-hour observations for 118 residences over 8 mo. We discuss the relative impacts of both cost-savings information and environmental health messaging strategies with residential consumers.", "title": "" }, { "docid": "4ac12c76112ff2085c4701130448f5d5", "text": "A key point in the deployment of new wireless services is the cost-effective extension and enhancement of the network's radio coverage in indoor environments. Distributed Antenna Systems using Fiber-optics distribution (F-DAS) represent a suitable method of extending multiple-operator radio coverage into indoor premises, tunnels, etc. 
Another key point is the adoption of MIMO (Multiple Input — Multiple Output) transmission techniques which can exploit the multipath nature of the radio link to ensure reliable, high-speed wireless communication in hostile environments. In this paper novel indoor deployment solutions based on Radio over Fiber (RoF) and distributed-antenna MIMO techniques are presented and discussed, highlighting their potential in different cases.", "title": "" }, { "docid": "997eb22a6f924bc560ede89e37dc4620", "text": "We illustrate an architecture for a conversational agent based on a modular knowledge representation. This solution provides intelligent conversational agents with a dynamic and flexible behavior. The modularity of the architecture allows a concurrent and synergic use of different techniques, making it possible to use the most adequate methodology for the management of a specific characteristic of the domain, of the dialogue, or of the user behavior. We show the implementation of a proof-of-concept prototype: a set of modules exploiting different knowledge representation techniques and capable to differently manage conversation features has been developed. Each module is automatically triggered through a component, named corpus callosum, whose task is to choose, time by time, the most adequate chatbot knowledge section to activate.", "title": "" }, { "docid": "0f20cfce49eaa9f447fc45b1d4c04be0", "text": "Face recognition is a widely used technology with numerous large-scale applications, such as surveillance, social media and law enforcement. There has been tremendous progress in face recognition accuracy over the past few decades, much of which can be attributed to deep learning based approaches during the last five years. Indeed, automated face recognition systems are now believed to surpass human performance in some scenarios. Despite this progress, a crucial question still remains unanswered: given a face representation, how many identities can it resolve? In other words, what is the capacity of the face representation? A scientific basis for estimating the capacity of a given face representation will not only benefit the evaluation and comparison of different face representation methods, but will also establish an upper bound on the scalability of an automatic face recognition system. We cast the face capacity estimation problem under the information theoretic framework of capacity of a Gaussian noise channel. By explicitly accounting for two sources of representational noise: epistemic (model) uncertainty and aleatoric (data) variability, our approach is able to estimate the capacity of any given face representation. To demonstrate the efficacy of our approach, we estimate the capacity of a 128-dimensional deep neural network based face representation, FaceNet [1], and that of the classical Eigenfaces [2] representation of the same dimensionality. 
Our numerical experiments on unconstrained faces indicate that (a) our capacity estimation model yields a capacity upper bound of 5.8×10⁸ for FaceNet and 1×10⁰ for the Eigenfaces representation at a false acceptance rate (FAR) of 1%, (b) the capacity of the face representation reduces drastically as the desired FAR is lowered (for the FaceNet representation, the capacity at a FAR of 0.1% and 0.001% is 2.4×10⁶ and 7.0×10², respectively), and (c) the empirical performance of the FaceNet representation is significantly below the theoretical limit.", "title": "" }, { "docid": "9f7aa5978855e173a45d443e46cbf5dd", "text": "Online gaming franchises such as World of Tanks, Defense of the Ancients, and StarCraft have attracted hundreds of millions of users who, apart from playing the game, also socialize with each other through gaming and viewing gamecasts. As a form of User Generated Content (UGC), gamecasts play an important role in user entertainment and gamer education. They deserve the attention of both industrial partners and the academic communities, corresponding to the large amount of revenue involved and the interesting research problems associated with UGC sites and social networks. Although previous work has put much effort into analyzing general UGC sites such as YouTube, relatively little is known about gamecast sharing sites. In this work, we provide the first comprehensive study of gamecast sharing sites, including commercial streaming-based sites such as Amazon’s Twitch.tv and community-maintained replay-based sites such as WoTreplays. We collect and share a novel dataset on WoTreplays that includes more than 380,000 game replays, shared by more than 60,000 creators with more than 1.9 million gamers. Together with an earlier published dataset on Twitch.tv, we investigate basic characteristics of gamecast sharing sites, and we analyze the activities of their creators and spectators. Among our results, we find that (i) WoTreplays and Twitch.tv are both fast-consumed repositories, with millions of gamecasts being uploaded, viewed, and soon forgotten; (ii) both the gamecasts and the creators exhibit highly skewed popularity, with a significant heavy tail phenomenon; and (iii) the upload and download preferences of creators and spectators are different: while the creators emphasize their individual skills, the spectators appreciate team-wise tactics. Our findings provide important knowledge for infrastructure and service improvement, for example, in the design of proper resource allocation mechanisms that consider future gamecasting and in the tuning of incentive policies that further help player retention.", "title": "" }, { "docid": "f9d333d7d8aa3f7fb834b202a3b10a3b", "text": "Human skin is the largest organ in our body, providing protection against heat, light, infections and injury. It also stores water, fat, and vitamins. Cancer is the leading cause of death in economically developed countries and the second leading cause of death in developing countries. Skin cancer is the most commonly diagnosed type of cancer among men and women. Exposure to UV rays, modernized diets, smoking, alcohol and nicotine are the main causes. Cancer is increasingly recognized as a critical public health problem in Ethiopia. There are three types of skin cancer, and they are recognized based on their own properties. In view of this, a digital image processing technique is proposed to recognize and predict the different types of skin cancer.
Sample skin cancer images were taken from the American Cancer Society research center and DERMOFIT, which are popular sources that focus widely on skin cancer research. The classification system was supervised, corresponding to the predefined classes of skin cancer type. Combining a self-organizing map (SOM) and radial basis function (RBF) for recognition and diagnosis of skin cancer is by far better than the KNN, Naïve Bayes and ANN classifiers. It was also shown that the discrimination power of morphology and color features was better than that of texture features, but when morphology, texture and color features were used together, the classification accuracy increased. The best classification accuracies (88%, 96.15% and 95.45% for basal cell carcinoma, melanoma and squamous cell carcinoma, respectively) were obtained by combining SOM and RBF. The overall classification accuracy was 93.15%.", "title": "" }, { "docid": "6e8a9c37672ec575821da5c9c3145500", "text": "As video games become increasingly popular pastimes, it becomes more important to understand how different individuals behave when they play these games. Previous research has focused mainly on behavior in massively multiplayer online role-playing games; therefore, in the current study we sought to extend this research by examining the connections between personality traits and behaviors in video games more generally. Two hundred and nineteen university students completed measures of personality traits, psychopathic traits, and a questionnaire regarding frequency of different behaviors during video game play. A principal components analysis of the video game behavior questionnaire revealed four factors: Aggressing, Winning, Creating, and Helping. Each behavior subscale was significantly correlated with at least one personality trait. Men reported significantly more Aggressing, Winning, and Helping behavior than women. Controlling for participant sex, Aggressing was negatively correlated with Honesty–Humility, Helping was positively correlated with Agreeableness, and Creating was negatively correlated with Conscientiousness. Aggressing was also positively correlated with all psychopathic traits, while Winning and Creating were correlated with one psychopathic trait each. Frequency of playing video games online was positively correlated with the Aggressing, Winning, and Helping scales, but not with the Creating scale. The results of the current study provide support for previous research on personality and behavior in massively multiplayer online role-playing games. 2015 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "70e89d5d0b886b1c32b1f1b8c01db99b", "text": "In clinical dictation, speakers try to be as concise as possible to save time, often resulting in utterances without explicit punctuation commands. Since the end product of a dictated report, e.g. an out-patient letter, does require correct orthography, including exact punctuation, the latter needs to be restored, preferably by automated means. This paper describes a method for punctuation restoration based on a state-of-the-art stack of NLP and machine learning techniques including B-RNNs with an attention mechanism and late fusion, as well as a feature extraction technique tailored to the processing of medical terminology using a novel vocabulary reduction model.
To the best of our knowledge, the resulting performance is superior to that reported in prior art on similar tasks.", "title": "" }, { "docid": "d76d09ca1e87eb2e08ccc03428c62be0", "text": "Face recognition has the perception of a solved problem, however when tested at the million-scale exhibits dramatic variation in accuracies across the different algorithms [11]. Are the algorithms very different? Is access to good/big training data their secret weapon? Where should face recognition improve? To address those questions, we created a benchmark, MF2, that requires all algorithms to be trained on same data, and tested at the million scale. MF2 is a public large-scale set with 672K identities and 4.7M photos created with the goal to level playing field for large scale face recognition. We contrast our results with findings from the other two large-scale benchmarks MegaFace Challenge and MS-Celebs-1M where groups were allowed to train on any private/public/big/small set. Some key discoveries: 1) algorithms, trained on MF2, were able to achieve state of the art and comparable results to algorithms trained on massive private sets, 2) some outperformed themselves once trained on MF2, 3) invariance to aging suffers from low accuracies as in MegaFace, identifying the need for larger age variations possibly within identities or adjustment of algorithms in future testing.", "title": "" }, { "docid": "8a3e49797223800cb644fe2b819f9950", "text": "In this paper, we present machine learning approaches for characterizing and forecasting the short-term demand for on-demand ride-hailing services. We propose the spatio-temporal estimation of the demand that is a function of variable effects related to traffic, pricing and weather conditions. With respect to the methodology, a single decision tree, bootstrap-aggregated (bagged) decision trees, random forest, boosted decision trees, and artificial neural network for regression have been adapted and systematically compared using various statistics, e.g. R-square, Root Mean Square Error (RMSE), and slope. To better assess the quality of the models, they have been tested on a real case study using the data of DiDi Chuxing, the main on-demand ride-hailing service provider in China. In the current study, 199,584 time-slots describing the spatio-temporal ride-hailing demand has been extracted with an aggregated-time interval of 10 mins. All the methods are trained and validated on the basis of two independent samples from this dataset. The results revealed that boosted decision trees provide the best prediction accuracy (RMSE=16.41), while avoiding the risk of over-fitting, followed by artificial neural network (20.09), random forest (23.50), bagged decision trees (24.29) and single decision tree (33.55). 
", "title": "" }, { "docid": "050443f5d84369f942c3f611775d37ed", "text": "A variety of methods for computing factor scores can be found in the psychological literature. These methods grew out of a historic debate regarding the indeterminate nature of the common factor model. Unfortunately, most researchers are unaware of the indeterminacy issue and the problems associated with a number of the factor scoring procedures. This article reviews the history and nature of factor score indeterminacy. Novel computer programs for assessing the degree of indeterminacy in a given analysis, as well as for computing and evaluating different types of factor scores, are then presented and demonstrated using data from the Wechsler Intelligence Scale for Children-Third Edition. It is argued that factor score indeterminacy should be routinely assessed and reported as part of any exploratory factor analysis and that factor scores should be thoroughly evaluated before they are reported or used in subsequent statistical analyses.", "title": "" } ]
scidocsrr
ab115421d84a4bcab680d9dfeb9d9ef6
BAG OF REGION EMBEDDINGS VIA LOCAL CONTEXT UNITS FOR TEXT CLASSIFICATION
[ { "docid": "ac46e6176377612544bb74c064feed67", "text": "The existence and use of standard test collections in information retrieval experimentation allows results to be compared between research groups and over time. Such comparisons, however, are rarely made. Most researchers only report results from their own experiments, a practice that allows lack of overall improvement to go unnoticed. In this paper, we analyze results achieved on the TREC Ad-Hoc, Web, Terabyte, and Robust collections as reported in SIGIR (1998–2008) and CIKM (2004–2008). Dozens of individual published experiments report effectiveness improvements, and often claim statistical significance. However, there is little evidence of improvement in ad-hoc retrieval technology over the past decade. Baselines are generally weak, often being below the median original TREC system. And in only a handful of experiments is the score of the best TREC automatic run exceeded. Given this finding, we question the value of achieving even a statistically significant result over a weak baseline. We propose that the community adopt a practice of regular longitudinal comparison to ensure measurable progress, or at least prevent the lack of it from going unnoticed. We describe an online database of retrieval runs that facilitates such a practice.", "title": "" }, { "docid": "fe1bc993047a95102f4331f57b1f9197", "text": "Document classification tasks were primarily tackled at word level. Recent research that works with character-level inputs shows several benefits over word-level approaches such as natural incorporation of morphemes and better handling of rare words. We propose a neural network architecture that utilizes both convolution and recurrent layers to efficiently encode character inputs. We validate the proposed model on eight large scale document classification tasks and compare with character-level convolution-only models. It achieves comparable performances with much less parameters.", "title": "" }, { "docid": "c612ee4ad1b4daa030e86a59543ca53b", "text": "The dominant approach for many NLP tasks are recurrent neura l networks, in particular LSTMs, and convolutional neural networks. However , these architectures are rather shallow in comparison to the deep convolutional n etworks which are very successful in computer vision. We present a new archite ctur for text processing which operates directly on the character level and uses o nly small convolutions and pooling operations. We are able to show that the performa nce of this model increases with the depth: using up to 29 convolutional layer s, we report significant improvements over the state-of-the-art on several public t ext classification tasks. To the best of our knowledge, this is the first time that very de ep convolutional nets have been applied to NLP.", "title": "" } ]
[ { "docid": "244c79d374bdbe44406fc514610e4ee7", "text": "This article surveys some theoretical aspects of cellular automata CA research. In particular, we discuss classical and new results on reversibility, conservation laws, limit sets, decidability questions, universality and topological dynamics of CA. The selection of topics is by no means comprehensive and reflects the research interests of the author. The main goal is to provide a tutorial of CA theory to researchers in other branches of natural computing, to give a compact collection of known results with references to their proofs, and to suggest some open problems. © 2005 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "7918167cbceddcc24b4d22f094b167dd", "text": "This paper is presented the study of the social influence by using social features in fitness mobile applications and habit that persuades the working-aged people, in the context of continuous fitness mobile application usage to promote the physical activity. Our conceptual model consisted of Habit and Social Influence. The social features based on the Persuasive Technology (1) Normative Influence, (2) Social Comparison, (3) Competition, (4) Co-operation, and (5) Social Recognition were embedded in the Social Influence construct of UTAUT2 model. The questionnaires were an instrument for this study. The target group was 443 working-aged people who live in Thailand's central region. The results reveal that the factors significantly affecting Behavioral Intention toward Use Behavior are Normative Influence, Social Comparison, Competition, and Co-operation. Only the Social Recognition is insignificantly affecting Behavioral Intention to use fitness mobile applications. The Behavioral Intention and Habit also significantly support the Use Behavior. The social features in fitness mobile application should be developed to promote the physical activity.", "title": "" }, { "docid": "c6afc173351fe404f7c5b68d2a0bc0a8", "text": "BACKGROUND\nCombined traumatic brain injury (TBI) and hemorrhagic shock (HS) is highly lethal. In a nonsurvival model of TBI + HS, addition of high-dose valproic acid (VPA) (300 mg/kg) to hetastarch reduced brain lesion size and associated swelling 6 hours after injury; whether this would have translated into better neurologic outcomes remains unknown. It is also unclear whether lower doses of VPA would be neuroprotective. We hypothesized that addition of low-dose VPA to normal saline (NS) resuscitation would result in improved long-term neurologic recovery and decreased brain lesion size.\n\n\nMETHODS\nTBI was created in anesthetized swine (40-43 kg) by controlled cortical impact, and volume-controlled hemorrhage (40% volume) was induced concurrently. After 2 hours of shock, animals were randomized (n = 5 per group) to NS (3× shed blood) or NS + VPA (150 mg/kg). Six hours after resuscitation, packed red blood cells were transfused, and animals were recovered. Peripheral blood mononuclear cells were analyzed for acetylated histone-H3 at lysine-9. A Neurological Severity Score (NSS) was assessed daily for 30 days. Brain magnetic resonance imaging was performed on Days 3 and 10. Cognitive performance was assessed by training animals to retrieve food from color-coded boxes.\n\n\nRESULTS\nThere was a significant increase in histone acetylation in the NS + VPA-treated animals compared with NS treatment. 
The NS + VPA group demonstrated significantly decreased neurologic impairment and faster speed of recovery as well as smaller brain lesion size compared with the NS group. Although the final cognitive function scores were similar between the groups, the VPA-treated animals reached the goal significantly faster than the NS controls.\n\n\nCONCLUSION\nIn this long-term survival model of TBI + HS, addition of low-dose VPA to saline resuscitation resulted in attenuated neurologic impairment, faster neurologic recovery, smaller brain lesion size, and a quicker normalization of cognitive functions.", "title": "" }, { "docid": "28075920fae3e973911b299db86c792e", "text": "DNA methylation is a well-studied genetic modification crucial to regulate the functioning of the genome. Its alterations play an important role in tumorigenesis and tumor-suppression. Thus, studying DNA methylation data may help biomarker discovery in cancer. Since public data on DNA methylation become abundant – and considering the high number of methylated sites (features) present in the genome – it is important to have a method for efficiently processing such large datasets. Relying on big data technologies, we propose BIGBIOCL, an algorithm that can apply supervised classification methods to datasets with hundreds of thousands of features. It is designed for the extraction of alternative and equivalent classification models through iterative deletion of selected features. We run experiments on DNA methylation datasets extracted from The Cancer Genome Atlas, focusing on three tumor types: breast, kidney, and thyroid carcinomas. We perform classifications extracting several methylated sites and their associated genes with accurate performance (accuracy>97%). Results suggest that BIGBIOCL can perform hundreds of classification iterations on hundreds of thousands of features in a few hours. Moreover, we compare the performance of our method with other state-of-the-art classifiers and with a wide-spread DNA methylation analysis method based on network analysis. Finally, we are able to efficiently compute multiple alternative classification models and extract from DNA-methylation large datasets a set of candidate genes to be further investigated to determine their active role in cancer. BIGBIOCL, results of experiments, and a guide to carry on new experiments are freely available on GitHub at https://github.com/fcproj/BIGBIOCL.", "title": "" }, { "docid": "2568f7528049b4ffc3d9a8b4f340262b", "text": "We introduce a new form of linear genetic programming (GP). Two methods of acceleration of our GP approach are discussed: 1) an efficient algorithm that eliminates intron code and 2) a demetic approach to virtually parallelize the system on a single processor. Acceleration of runtime is especially important when operating with complex data sets, because they are occurring in real-world applications. We compare GP performance on medical classification problems from a benchmark database with results obtained by neural networks. Our results show that GP performs comparably in classification and generalization.", "title": "" }, { "docid": "75f916790044fab6e267c5c5ec5846b7", "text": "Detecting circles from a digital image is very important in shape recognition. In this paper, an efficient randomized algorithm (RCD) for detecting circles is presented, which is not based on the Hough transform (HT). 
Instead of using an accumulator for saving the information of the related parameters in the HT-based methods, the proposed RCD does not need an accumulator. The main concept used in the proposed RCD is that we first randomly select four edge pixels in the image and define a distance criterion to determine whether there is a possible circle in the image; after finding a possible circle, we apply an evidence-collecting process to further determine whether the possible circle is a true circle or not. Some synthetic images with different levels of noise and some realistic images containing circular objects with some occluded circles and missing edges have been taken to test the performance. Experimental results demonstrate that the proposed RCD is faster than other HT-based methods for the noise level between the light level and the modest level. For a heavy noise level, the randomized HT could be faster than the proposed RCD, but at the expense of massive memory requirements.", "title": "" }, { "docid": "a50ea2739751249e2832cae2df466d0b", "text": "The Arabic Online Commentary (AOC) (Zaidan and Callison-Burch, 2011) is a large-scale repository of Arabic dialects with manual labels for 4 varieties of the language. Existing dialect identification models exploiting the dataset pre-date the recent boost deep learning brought to NLP and hence the data are not benchmarked for use with deep learning, nor is it clear how much neural networks can help tease the categories in the data apart. We treat these two limitations: We (1) benchmark the data, and (2) empirically test 6 different deep learning methods on the task, comparing performance to several classical machine learning models under different conditions (i.e., both binary and multi-way classification). Our experimental results show that variants of (attention-based) bidirectional recurrent neural networks achieve best accuracy (acc) on the task, significantly outperforming all competitive baselines. On blind test data, our models reach 87.65% acc on the binary task (MSA vs. dialects), 87.4% acc on the 3-way dialect task (Egyptian vs. Gulf vs. Levantine), and 82.45% acc on the 4-way variants task (MSA vs. Egyptian vs. Gulf vs. Levantine). We release our benchmark for future work on the dataset.", "title": "" }, { "docid": "53df69bf8750a7e97f12b1fcac14b407", "text": "In photovoltaic (PV) power systems where a set of series-connected PV arrays (PVAs) is connected to a conventional two-level inverter, the occurrence of partial shades and/or the mismatching of PVAs leads to a reduction of the power generated from its potential maximum. To overcome these problems, the connection of the PVAs to a multilevel diode-clamped converter is considered in this paper. A control and pulsewidth-modulation scheme is proposed, capable of independently controlling the operating voltage of each PVA. Compared to a conventional two-level inverter system, the proposed system configuration allows one to extract maximum power, to reduce the devices voltage rating (with the subsequent benefits in device-performance characteristics), to reduce the output-voltage distortion, and to increase the system efficiency. 
Simulation and experimental tests have been conducted with three PVAs connected to a four-level three-phase diode-clamped converter to verify the good performance of the proposed system configuration and control strategy.", "title": "" }, { "docid": "44e4797655292e97651924115fd8d711", "text": "Information and communication technology has the capability to improve the process by which governments involve citizens in formulating public policy and public projects. Even though much of government regulations may now be in digital form (and often available online), due to their complexity and diversity, identifying the ones relevant to a particular context is a non-trivial task. Similarly, with the advent of a number of electronic online forums, social networking sites and blogs, the opportunity of gathering citizens’ petitions and stakeholders’ views on government policy and proposals has increased greatly, but the volume and the complexity of analyzing unstructured data makes this difficult. On the other hand, text mining has come a long way from simple keyword search, and matured into a discipline capable of dealing with much more complex tasks. In this paper we discuss how text-mining techniques can help in retrieval of information and relationships from textual data sources, thereby assisting policy makers in discovering associations between policies and citizens’ opinions expressed in electronic public forums and blogs etc. We also present here, an integrated text mining based architecture for e-governance decision support along with a discussion on the Indian scenario.", "title": "" }, { "docid": "119ea9c1d6b2cf2063efaf4d5ed7e756", "text": "In this paper, we use shape grammars (SGs) for facade parsing, which amounts to segmenting 2D building facades into balconies, walls, windows, and doors in an architecturally meaningful manner. The main thrust of our work is the introduction of reinforcement learning (RL) techniques to deal with the computational complexity of the problem. RL provides us with techniques such as Q-learning and state aggregation which we exploit to efficiently solve facade parsing. We initially phrase the 1D parsing problem in terms of a Markov Decision Process, paving the way for the application of RL-based tools. We then develop novel techniques for the 2D shape parsing problem that take into account the specificities of the facade parsing problem. Specifically, we use state aggregation to enforce the symmetry of facade floors and demonstrate how to use RL to exploit bottom-up, image-based guidance during optimization. We provide systematic results on the Paris building dataset and obtain state-of-the-art results in a fraction of the time required by previous methods. We validate our method under diverse imaging conditions and make our software and results available online.", "title": "" }, { "docid": "e0e7bece9dd69ac775824b2ed40965d8", "text": "In this paper, we consider an adaptive base-stock policy for a single-item inventory system, where the demand process is non-stationary. In particular, the demand process is an integrated moving average process of order (0, 1, 1), for which an exponential-weighted moving average provides the optimal forecast. For the assumed control policy we characterize the inventory random variable and use this to find the safety stock requirements for the system. 
From this characterization, we see that the required inventory, both in absolute terms and as it depends on the replenishment lead-time, behaves much differently for this case of non-stationary demand compared with stationary demand. We then show how the single-item model extends to a multistage, or supply-chain context; in particular we see that the demand process for the upstream stage is not only non-stationary but also more variable than that for the downstream stage. We also show that for this model there is no value from letting the upstream stages see the exogenous demand. The paper concludes with some observations about the practical implications of this work.", "title": "" }, { "docid": "6a6063c05941c026b083bfcc573520f8", "text": "This paper describes how semantic indexing can help to generate a contextual overview of topics and visually compare clusters of articles. The method was originally developed for an innovative information exploration tool, called Ariadne, which operates on bibliographic databases with tens of millions of records (Koopman et al. in Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems. doi: 10.1145/2702613.2732781 , 2015b). In this paper, the method behind Ariadne is further developed and applied to the research question of the special issue “Same data, different results”—the better understanding of topic (re-)construction by different bibliometric approaches. For the case of the Astro dataset of 111,616 articles in astronomy and astrophysics, a new instantiation of the interactive exploring tool, LittleAriadne, has been created. This paper contributes to the overall challenge to delineate and define topics in two different ways. First, we produce two clustering solutions based on vector representations of articles in a lexical space. These vectors are built on semantic indexing of entities associated with those articles. Second, we discuss how LittleAriadne can be used to browse through the network of topical terms, authors, journals, citations and various cluster solutions of the Astro dataset. More specifically, we treat the assignment of an article to the different clustering solutions as an additional element of its bibliographic record. Keeping the principle of semantic indexing on the level of such an extended list of entities of the bibliographic record, LittleAriadne in turn provides a visualization of the context of a specific clustering solution. It also conveys the similarity of article clusters produced by different algorithms, hence representing a complementary approach to other possible means of comparison.", "title": "" }, { "docid": "8c3aaa5011c7974a18b17d2a604127b7", "text": "The threat of Distributed Denial of Service (DDoS) has become a major issue in network security and is difficult to detect because all DDoS traffics have normal packet characteristics. Various detection and defense algorithms have been studied. One of them is an entropy-based intrusion detection approach that is a powerful and simple way to identify abnormal conditions from network channels. However, the burden of computing information entropy values from heavy flow still exists. To reduce the computing time, we have developed a DDoS detection scheme using a compression entropy method. It allows us to significantly reduce the computation time for calculating information entropy. 
However, our experiment suggests that the compression entropy approach tends to be too sensitive to verify real network attacks and produces many false negatives. In this paper, we propose a fast entropy scheme that can overcome the issue of false negatives and will not increase the computational time. Our simulation shows that the fast entropy computing method not only reduced computational time by more than 90% compared to conventional entropy, but also increased the detection accuracy compared to conventional and compression entropy approaches.", "title": "" }, { "docid": "0116f3e12fbaf2705f36d658fdbe66bb", "text": "This paper presents a metric to quantify visual scene movement perceived inside a virtual environment (VE) and illustrates how this method could be used in future studies to determine a cybersickness dose value to predict levels of cybersickness in VEs. Sensory conflict theories predict that cybersickness produced by a VE is a kind of visually induced motion sickness. A comprehensive review indicates that there is only one subjective measure to quantify visual stimuli presented inside a VE. A metric, referred to as spatial velocity (SV), is proposed. It combines objective measures of scene complexity and scene movement velocity. The theoretical basis for the proposed SV metric and the algorithms for its implementation are presented. Data from two previous experiments on cybersickness were reanalyzed using the metric. Results showed that increasing SV by either increasing the scene complexity or scene velocity significantly increased the rated level of cybersickness. A strong correlation between SV and the level of cybersickness was found. The use of the spatial velocity metric to predict levels of cybersickness is also discussed.", "title": "" }, { "docid": "26eb8fc38928446194d0110aca3a8b9c", "text": "The requirement for high quality pulps which are widely used in paper industries has increased the demand for pulp refining (beating) process. Pulp refining is a promising approach to improve the pulp quality by changing the fiber characteristics. The diversity of research on the effect of refining on fiber properties which is due to the different pulp sources, pulp consistency and refining equipment has interested us to provide a review on the studies over the last decade. In this article, the influence of pulp refining on structural properties i.e., fibrillations, fine formation, fiber length, fiber curl, crystallinity and distribution of surface chemical compositions is reviewed. The effect of pulp refining on electrokinetic properties of fiber e.g., surface and total charges of pulps is discussed. In addition, an overview of different refining theories, refiners as well as some tests for assessing the pulp refining is presented.", "title": "" }, { "docid": "240c47d27533069f339d8eb090a637a9", "text": "This paper discusses the active and reactive power control method for a modular multilevel converter (MMC) based grid-connected PV system. The voltage vector space analysis is performed by using average value models for the feasibility analysis of reactive power compensation (RPC). The proposed double-loop control strategy enables the PV system to handle unidirectional active power flow and bidirectional reactive power flow. Experiments have been performed on a laboratory-scaled modular multilevel PV inverter. 
The experimental results verify the correctness and feasibility of the proposed strategy.", "title": "" }, { "docid": "a9399439831a970fcce8e0101696325f", "text": "We describe the design, implementation, and evaluation of EMBERS, an automated, 24x7 continuous system for forecasting civil unrest across 10 countries of Latin America using open source indicators such as tweets, news sources, blogs, economic indicators, and other data sources. Unlike retrospective studies, EMBERS has been making forecasts into the future since Nov 2012 which have been (and continue to be) evaluated by an independent T&E team (MITRE). Of note, EMBERS has successfully forecast the June 2013 protests in Brazil and Feb 2014 violent protests in Venezuela. We outline the system architecture of EMBERS, individual models that leverage specific data sources, and a fusion and suppression engine that supports trading off specific evaluation criteria. EMBERS also provides an audit trail interface that enables the investigation of why specific predictions were made along with the data utilized for forecasting. Through numerous evaluations, we demonstrate the superiority of EMBERS over baserate methods and its capability to forecast significant societal happenings.", "title": "" }, { "docid": "d3a0931c03c80f5aa639cdc0d8cc331b", "text": "We introduce a simple modification of local image descriptors, such as SIFT, based on pooling gradient orientations across different domain sizes, in addition to spatial locations. The resulting descriptor, which we call DSP-SIFT, outperforms other methods in wide-baseline matching benchmarks, including those based on convolutional neural networks, despite having the same dimension of SIFT and requiring no training.", "title": "" }, { "docid": "574f1eb961c4469a16b4fde10d455ff4", "text": "To study the fundamental effects of the spinning capsule on the overall performance of a dry powder inhaler (Aerolizer®). The capsule motion was visualized using high-speed photography. Computational fluid dynamics (CFD) analysis was performed to determine the flowfield generated in the device with and without the presence of different sized capsules at 60 l min−1. The inhaler dispersion performance was measured with mannitol powder using a multistage liquid impinger at the same flowrate. The capsule size (3, 4, and 5) was found to make no significant difference to the device flowfield, the particle-device impaction frequency, or the dispersion performance of the inhaler. Reducing the capsule size reduced only the capsule retention by 4%. In contrast, without the presence of the spinning capsule, turbulence levels were increased by 65%, FPFEm (wt% particles ≤6.8 μm in the aerosol referenced against the amount of powder emitted from the device) increased from 59% to 65%, while particle-mouthpiece impaction decreased by 2.5 times. When the powder was dispersed from within compared to from outside the spinning capsule containing four 0.6 mm holes at each end, the FPFEm was increased significantly from 59% to 76%, and the throat retention was dropped from 14% to 6%. The presence, but not the size, of a capsule has significant effects on the inhaler performance. The results suggested that impaction between the particles and the spinning capsule does not play a major role in powder dispersion. However, the capsule can provide additional strong mechanisms of deagglomeration dependent on the size of the capsule hole.", "title": "" } ]
scidocsrr
563c09f24750dd82b154ad316ac4d7a4
Product Aspect Ranking and Its Applications
[ { "docid": "e677ba3fa8d54fad324add0bda767197", "text": "In this paper, we present a novel approach for mining opinions from product reviews, where it converts opinion mining task to identify product features, expressions of opinions and relations between them. By taking advantage of the observation that a lot of product features are phrases, a concept of phrase dependency parsing is introduced, which extends traditional dependency parsing to phrase level. This concept is then implemented for extracting relations between product features and expressions of opinions. Experimental evaluations show that the mining task can benefit from phrase dependency parsing.", "title": "" } ]
[ { "docid": "ab2e5ec6e48c87b3e4814840ad29afe7", "text": "This article describes a number of log-linear parsing models for an automatically extracted lexicalized grammar. The models are full parsing models in the sense that probabilities are defined for complete parses, rather than for independent events derived by decomposing the parse tree. Discriminative training is used to estimate the models, which requires incorrect parses for each sentence in the training data as well as the correct parse. The lexicalized grammar formalism used is Combinatory Categorial Grammar (CCG), and the grammar is automatically extracted from CCGbank, a CCG version of the Penn Treebank. The combination of discriminative training and an automatically extracted grammar leads to a significant memory requirement (up to 25 GB), which is satisfied using a parallel implementation of the BFGS optimization algorithm running on a Beowulf cluster. Dynamic programming over a packed chart, in combination with the parallel implementation, allows us to solve one of the largest-scale estimation problems in the statistical parsing literature in under three hours. A key component of the parsing system, for both training and testing, is a Maximum Entropy supertagger which assigns CCG lexical categories to words in a sentence. The supertagger makes the discriminative training feasible, and also leads to a highly efficient parser. Surprisingly, given CCG's spurious ambiguity, the parsing speeds are significantly higher than those reported for comparable parsers in the literature. We also extend the existing parsing techniques for CCG by developing a new model and efficient parsing algorithm which exploits all derivations, including CCG's nonstandard derivations. This model and parsing algorithm, when combined with normal-form constraints, give state-of-the-art accuracy for the recovery of predicate-argument dependencies from CCGbank. The parser is also evaluated on DepBank and compared against the RASP parser, outperforming RASP overall and on the majority of relation types. The evaluation on DepBank raises a number of issues regarding parser evaluation. This article provides a comprehensive blueprint for building a wide-coverage CCG parser. We demonstrate that both accurate and highly efficient parsing is possible with CCG.", "title": "" }, { "docid": "f0fa5907c11c3adb43942cc6d2cfdd47", "text": "Executive Summary/Abstract:. Timely implementation of a Master Data Management (MDM) Organisation is essential and requires a structured design. This presentation covers the way MDM processes and roles can link the business users to metadata and enable improved master data management , through disciplined organisation alignment and training. As organisations implement integrated ERP systems, the attention on master data is often limited to Data Migration. Therefore, a plan for recovery is frequently required.", "title": "" }, { "docid": "bddea9fd4d14f591e6fb6acc3cc057f1", "text": "We present an analysis of musical influence using intact lyrics of over 550,000 songs, extending existing research on lyrics through a novel approach using directed networks. We form networks of lyrical influence over time at the level of three-word phrases, weighted by tf-idf. An edge reduction analysis of strongly connected components suggests highly central artist, songwriter, and genre network topologies. 
Visualizations of the genre network based on multidimensional scaling confirm network centrality and provide insight into the most influential genres at the heart of the network. Next, we present metrics for influence and self-referential behavior, examining their interactions with network centrality and with the genre diversity of songwriters. Here, we uncover a negative correlation between songwriters’ genre diversity and the robustness of their connections. By examining trends among the data for top genres, songwriters, and artists, we address questions related to clustering, influence, and isolation of nodes in the networks. We conclude by discussing promising future applications of lyrical influence networks in music information retrieval research. The networks constructed in this study are made publicly available for research purposes.", "title": "" }, { "docid": "8b4e09bb13d3d01d3954f32cbb4c9e27", "text": "Higher-level semantics such as visual attributes are crucial for fundamental multimedia applications. We present a novel attribute discovery approach that can automatically identify, model and name attributes from an arbitrary set of image and text pairs that can be easily gathered on the Web. Different from conventional attribute discovery methods, our approach does not rely on any pre-defined vocabularies and human labeling. Therefore, we are able to build a large visual knowledge base without any human efforts. The discovery is based on a novel deep architecture, named Independent Component Multimodal Autoencoder (ICMAE), that can continually learn shared higher-level representations across the visual and textual modalities. With the help of the resultant representations encoding strong visual and semantic evidences, we propose to (a) identify attributes and their corresponding high-quality training images, (b) iteratively model them with maximum compactness and comprehensiveness, and (c) name the attribute models with human understandable words. To date, the proposed system has discovered 1,898 attributes over 1.3 million pairs of image and text. Extensive experiments on various real-world multimedia datasets demonstrate the quality and effectiveness of the discovered attributes, facilitating multimedia applications such as image annotation and retrieval as compared to the state-of-the-art approaches.", "title": "" }, { "docid": "96d8e375616a7ee137276d385c14a18a", "text": "Constructivism is a theory of learning which claims that students construct knowledge rather than merely receive and store knowledge transmitted by the teacher. Constructivism has been extremely influential in science and mathematics education, but not in computer science education (CSE). This paper surveys constructivism in the context of CSE, and shows how the theory can supply a theoretical basis for debating issues and evaluating proposals.", "title": "" }, { "docid": "bf257fae514c28dc3b4c31ff656a00e9", "text": "The objective of the present study is to evaluate the acute effects of low-level laser therapy (LLLT) on functional capacity, perceived exertion, and blood lactate in hospitalized patients with heart failure (HF). Patients diagnosed with systolic HF (left ventricular ejection fraction <45 %) were randomized and allocated prospectively into two groups: placebo LLLT group (n = 10)—subjects who were submitted to placebo laser and active LLLT group (n = 10)—subjects who were submitted to active laser. 
The 6-min walk test (6MWT) was performed, and blood lactate was determined at rest (before LLLT application and 6MWT), immediately after the exercise test (time 0) and recovery (3, 6, and 30 min). A multi-diode LLLT cluster probe (DMC, São Carlos, Brazil) was used. Both groups increased 6MWT distance after active or placebo LLLT application compared to baseline values (p = 0.03 and p = 0.01, respectively); however, no difference was observed during intergroup comparison. The active LLLT group showed a significant reduction in the perceived exertion Borg (PEB) scale compared to the placebo LLLT group (p = 0.006). In addition, the group that received active LLLT showed no statistically significant difference for the blood lactate level through the times analyzed. The placebo LLLT group demonstrated a significant increase in blood lactate between the rest and recovery phase (p < 0.05). Acute effects of LLLT irradiation on skeletal musculature were not able to improve the functional capacity of hospitalized patients with HF, although it may favorably modulate blood lactate metabolism and reduce perceived muscle fatigue.", "title": "" }, { "docid": "b555bb25c809e47f0f9fc8cec483d794", "text": "The assessment of oxygen saturation in arterial blood by pulse oximetry (SpO₂) is based on the different light absorption spectra for oxygenated and deoxygenated hemoglobin and the analysis of photoplethysmographic (PPG) signals acquired at two wavelengths. Commercial pulse oximeters use two wavelengths in the red and infrared regions which have different pathlengths and the relationship between the PPG-derived parameters and oxygen saturation in arterial blood is determined by means of an empirical calibration. This calibration results in an inherent error, and pulse oximetry thus has an error of about 4%, which is too high for some clinical problems. We present calibration-free pulse oximetry for measurement of SpO₂, based on PPG pulses of two nearby wavelengths in the infrared. By neglecting the difference between the path-lengths of the two nearby wavelengths, SpO₂ can be derived from the PPG parameters with no need for calibration. In the current study we used three laser diodes of wavelengths 780, 785 and 808 nm, with narrow spectral line-width. SaO₂ was calculated by using each pair of PPG signals selected from the three wavelengths. In measurements on healthy subjects, SpO₂ values, obtained by the 780-808 nm wavelength pair were found to be in the normal range. The measurement of SpO₂ by two nearby wavelengths in the infrared with narrow line-width enables the assessment of SpO₂ without calibration.", "title": "" }, { "docid": "9bfe782c94805544051a3dcb522d7a2c", "text": "In this paper, we propose an algorithm to predict the social popularity (i.e., the numbers of views, comments, and favorites) of content on social networking services using only text annotations. Instead of analyzing image/video content, we try to estimate social popularity by a combination of weight vectors obtained from a support vector regression (SVR) and tag frequency. Since our proposed algorithm uses text annotations instead of image/video features, its computational cost is small. As a result, we can estimate social popularity more efficiently than previously proposed methods. Furthermore, tags that significantly affect social popularity can be extracted using our algorithm. 
Our experiments involved using one million photos on the social networking website Flickr, and the results showed a high correlation between actual social popularity and the determination thereof using our algorithm. Moreover, the proposed algorithm can achieve high classification accuracy with regard to a classification between popular and unpopular content.", "title": "" }, { "docid": "83c35d9d7df9fcf9d5f93b82466a6bbe", "text": "In a cable-driven parallel robot, elastic cables are used to manipulate the end effector in the workspace. In this paper we present a dynamic analysis and system identification for the complete actuator unit of a cable robot including servo controller, winch, cable, cable force sensor and field bus communication. We establish a second-order system with dead time as an analagous model. Based on this investigation, we propose the design and stability analysis of a cable force controller. We present the implementation of feed-forward and integral controllers based on a stiffness model of the cables. As the platform position is not observable the challenge is to control the cable force while maintaining the positional accuracy. Experimental evaluation of the force controller shows, that the absolute positional accuracy is even improved.", "title": "" }, { "docid": "cd9552d9891337f7e58b3e7e36dfab54", "text": "Multi-variant program execution is an application of n-version programming, in which several slightly different instances of the same program are executed in lockstep on a multiprocessor. These variants are created in such a way that they behave identically under \"normal\" operation and diverge when \"out of specification\" events occur, which may be indicative of attacks. This paper assess the effectiveness of different code variation techniques to address different classes of vulnerabilities. In choosing a variant or combination of variants, security demands need to be balanced against runtime overhead. Our study indicates that a good combination of variations when running two variants is to choose one of instruction set randomization, system call number randomization, and register randomization, and use that together with library entry point randomization. Running more variants simultaneously makes it exponentially more difficult to take over the system.", "title": "" }, { "docid": "129dd084e485da5885e2720a4bddd314", "text": "In the present day developing houses, the procedures adopted during the development of software using agile methodologies are acknowledged as a better option than the procedures followed during conventional software development due to its innate characteristics such as iterative development, rapid delivery and reduced risk. Hence, it is desirable that the software development industries should have proper planning for estimating the effort required in agile software development. The existing techniques such as expert opinion, analogy and disaggregation are mostly observed to be ad hoc and in this manner inclined to be mistaken in a number of cases. One of the various approaches for calculating effort of agile projects in an empirical way is the story point approach (SPA). This paper presents a study on analysis of prediction accuracy of estimation process executed in order to improve it using SPA. Different machine learning techniques such as decision tree, stochastic gradient boosting and random forest are considered in order to assess prediction more qualitatively. 
A comparative analysis of these techniques with existing techniques is also presented and analyzed in order to critically examine their performance.", "title": "" }, { "docid": "0687cc3d9df74b2ff1dd94d55b773493", "text": "What should I wear? We present Magic Mirror, a virtual fashion consultant, which can parse, appreciate and recommend the wearing. Magic Mirror is designed with a large display and Kinect to simulate the real mirror and interact with users in augmented reality. Internally, Magic Mirror is a practical appreciation system for automatic aesthetics-oriented clothing analysis. Specifically, we focus on the clothing collocation rather than the single one, the style (aesthetic words) rather than the visual features. We bridge the gap between the visual features and aesthetic words of clothing collocation to enable the computer to learn appreciating the clothing collocation. Finally, both object and subject evaluations verify the effectiveness of the proposed algorithm and Magic Mirror system.", "title": "" }, { "docid": "13173c37670511963b23a42a3cc7e36b", "text": "In patients having a short nose with a short septal length and/or severe columellar retraction, a septal extension graft is a good solution, as it allows the dome to move caudally and pushes down the columellar base. Fixing the medial crura of the alar cartilages to a septal extension graft leads to an uncomfortably rigid nasal tip and columella, and results in unnatural facial animation. Further, because of the relatively small and weak septal cartilage in the East Asian population, undercorrection of a short nose is not uncommon. To overcome these shortcomings, we have used the septal extension graft combined with a derotation graft. Among 113 patients who underwent the combined procedure, 82 patients had a short nose deformity alone; the remaining 31 patients had a short nose with columellar retraction. Thirty-two patients complained of nasal tip stiffness caused by a septal extension graft from previous operations. In addition to the septal extension graft, a derotation graft was used for bridging the gap between the alar cartilages and the septal extension graft for tip lengthening. Satisfactory results were obtained in 102 (90%) patients. Eleven (10%) patients required revision surgery. This combination method is a good surgical option for patients who have a short nose with small septal cartilages and do not have sufficient cartilage for tip lengthening by using a septal extension graft alone. It can also overcome the postoperative nasal tip rigidity of a septal extension graft.", "title": "" }, { "docid": "1b314c55b86355e1fd0ef5d5ce9a89ba", "text": "3D printing technology is rapidly maturing and becoming ubiquitous. One of the remaining obstacles to wide-scale adoption is that the object to be printed must fit into the working volume of the 3D printer. We propose a framework, called Chopper, to decompose a large 3D object into smaller parts so that each part fits into the printing volume. These parts can then be assembled to form the original object. We formulate a number of desirable criteria for the partition, including assemblability, having few components, unobtrusiveness of the seams, and structural soundness. Chopper optimizes these criteria and generates a partition either automatically or with user guidance. Our prototype outputs the final decomposed parts with customized connectors on the interfaces. 
We demonstrate the effectiveness of Chopper on a variety of non-trivial real-world objects.", "title": "" }, { "docid": "5cc07ca331deb81681b3f18355c0e586", "text": "BACKGROUND\nHyaluronic acid (HA) formulations are used for aesthetic applications. Different cross-linking technologies result in HA dermal fillers with specific characteristic visco-elastic properties.\n\n\nOBJECTIVE\nBio-integration of three CE-marked HA dermal fillers, a cohesive (monophasic) polydensified, a cohesive (monophasic) monodensified and a non-cohesive (biphasic) filler, was analysed with a follow-up of 114 days after injection. Our aim was to study the tolerability and inflammatory response of these fillers, their patterns of distribution in the dermis, and influence on tissue integrity.\n\n\nMETHODS\nThree HA formulations were injected intradermally into the iliac crest region in 15 subjects. Tissue samples were analysed after 8 and 114 days by histology and immunohistochemistry, and visualized using optical and transmission electron microscopy.\n\n\nRESULTS\nHistological results demonstrated that the tested HA fillers showed specific characteristic bio-integration patterns in the reticular dermis. Observations under the optical and electron microscopes revealed morphological conservation of cutaneous structures. Immunohistochemical results confirmed absence of inflammation, immune response and granuloma.\n\n\nCONCLUSION\nThe three tested dermal fillers show an excellent tolerability and preservation of the dermal cells and matrix components. Their tissue integration was dependent on their visco-elastic properties. The cohesive polydensified filler showed the most homogeneous integration with an optimal spreading within the reticular dermis, which is achieved by filling even the smallest spaces between collagen bundles and elastin fibrils, while preserving the structural integrity of the latter. Absence of adverse reactions confirms safety of the tested HA dermal fillers.", "title": "" }, { "docid": "8c0cbfc060b3a6aa03fd8305baf06880", "text": "Learning-to-Rank models based on additive ensembles of regression trees have been proven to be very effective for scoring query results returned by large-scale Web search engines. Unfortunately, the computational cost of scoring thousands of candidate documents by traversing large ensembles of trees is high. Thus, several works have investigated solutions aimed at improving the efficiency of document scoring by exploiting advanced features of modern CPUs and memory hierarchies. In this article, we present QuickScorer, a new algorithm that adopts a novel cache-efficient representation of a given tree ensemble, performs an interleaved traversal by means of fast bitwise operations, and supports ensembles of oblivious trees. An extensive and detailed test assessment is conducted on two standard Learning-to-Rank datasets and on a novel very large dataset we made publicly available for conducting significant efficiency tests. The experiments show unprecedented speedups over the best state-of-the-art baselines ranging from 1.9 × to 6.6 × . The analysis of low-level profiling traces shows that QuickScorer efficiency is due to its cache-aware approach in terms of both data layout and access patterns and to a control flow that entails very low branch mis-prediction rates.", "title": "" }, { "docid": "1eb2715d2dfec82262c7b3870db9b649", "text": "Leadership is a crucial component to the success of academic health science centers (AHCs) within the shifting U.S. healthcare environment. 
Leadership talent acquisition and development within AHCs is immature and approaches to leadership and its evolution will be inevitable to refine operations to accomplish the critical missions of clinical service delivery, the medical education continuum, and innovations toward discovery. To reach higher organizational outcomes in AHCs requires a reflection on what leadership approaches are in place and how they can better support these missions. Transactional leadership approaches are traditionally used in AHCs and this commentary suggests that movement toward a transformational approach is a performance improvement opportunity for AHC leaders. This commentary describes the transactional and transformational approaches, how they complement each other, and how to access the transformational approach. Drawing on behavioral sciences, suggestions are made on how a transactional leader can change her cognitions to align with the four dimensions of the transformational leadership approach.", "title": "" }, { "docid": "9818399b4c119b58723c59e76bbfc1bd", "text": "Many vertex-centric graph algorithms can be expressed using asynchronous parallelism by relaxing certain read-after-write data dependences and allowing threads to compute vertex values using stale (i.e., not the most recent) values of their neighboring vertices. We observe that on distributed shared memory systems, by converting synchronous algorithms into their asynchronous counterparts, algorithms can be made tolerant to high inter-node communication latency. However, high inter-node communication latency can lead to excessive use of stale values causing an increase in the number of iterations required by the algorithms to converge. Although by using bounded staleness we can restrict the slowdown in the rate of convergence, this also restricts the ability to tolerate communication latency. In this paper we design a relaxed memory consistency model and consistency protocol that simultaneously tolerate communication latency and minimize the use of stale values. This is achieved via a coordinated use of best effort refresh policy and bounded staleness. We demonstrate that for a range of asynchronous graph algorithms and PDE solvers, on an average, our approach outperforms algorithms based upon: prior relaxed memory models that allow stale values by at least 2.27x; and Bulk Synchronous Parallel (BSP) model by 4.2x. We also show that our approach frequently outperforms GraphLab, a popular distributed graph processing framework.", "title": "" }, { "docid": "388a8494d6aa7b51d9567bd2e401f3ce", "text": "An appropriate image representation induces some good image treatment algorithms. Hypergraph theory is a theory of finite combinatorial sets, modeling a lot of problems of operational research and combinatorial optimization. Hypergraphs are now used in many domains such as chemistry, engineering and image processing. We present an overview of a hypergraph-based picture representation giving much application in picture manipulation, analysis and restoration: the Image Adaptive Neighborhood Hypergraph (IANH). With the IANH it is possible to build powerful noise detection an elimination algorithm, but also to make some edges detection or some image segmentation. IANH has various applications and this paper presents a survey of them.", "title": "" }, { "docid": "934b1a0959389d32382978cdd411ba87", "text": "Human language is colored by a broad range of topics, but existing text analysis tools only focus on a small number of them. 
We present Empath, a tool that can generate and validate new lexical categories on demand from a small set of seed terms (like \"bleed\" and \"punch\" to generate the category violence). Empath draws connotations between words and phrases by deep learning a neural embedding across more than 1.8 billion words of modern fiction. Given a small set of seed words that characterize a category, Empath uses its neural embedding to discover new related terms, then validates the category with a crowd-powered filter. Empath also analyzes text across 200 built-in, pre-validated categories we have generated from common topics in our web dataset, like neglect, government, and social media. We show that Empath's data-driven, human validated categories are highly correlated (r=0.906) with similar categories in LIWC.", "title": "" } ]
scidocsrr
7a4fcb24bbaec04b6699f8dd33a65836
Mental Health Problems in University Students : A Prevalence Study
[ { "docid": "1497e47ada570797e879bbc4aba432a1", "text": "The mental health of university students is an area of increasing concern worldwide. The objective of this study is to examine the prevalence of depression, anxiety and stress among a group of Turkish university students. Depression Anxiety and Stress Scale (DASS-42) completed anonymously in the students’ respective classrooms by 1,617 students. Depression, anxiety and stress levels of moderate severity or above were found in 27.1, 47.1 and 27% of our respondents, respectively. Anxiety and stress scores were higher among female students. First- and second-year students had higher depression, anxiety and stress scores than the others. Students who were satisfied with their education had lower depression, anxiety and stress scores than those who were not satisfied. The high prevalence of depression, anxiety and stress symptoms among university students is alarming. This shows the need for primary and secondary prevention measures, with the development of adequate and appropriate support services for this group.", "title": "" } ]
[ { "docid": "0ef6e54d7190dde80ee7a30c5ecae0c3", "text": "Games have been an important tool for motivating undergraduate students majoring in computer science and engineering. However, it is difficult to build an entire game for education from scratch, because the task requires high-level programming skills and expertise to understand the graphics and physics. Recently, there have been many different game artificial intelligence (AI) competitions, ranging from board games to the state-of-the-art video games (car racing, mobile games, first-person shooting games, real-time strategy games, and so on). The competitions have been designed such that participants develop their own AI module on top of public/commercial games. Because the materials are open to the public, it is quite useful to adopt them for an undergraduate course project. In this paper, we report our experiences using the Angry Birds AI Competition for such a project-based course. In the course, teams of students consider computer vision, strategic decision-making, resource management, and bug-free coding for their outcome. To promote understanding of game contents generation and extensive testing on the generalization abilities of the student's AI program, we developed software to help them create user-created levels. Students actively participated in the project and the final outcome was comparable with that of successful entries in the 2013 International Angry Birds AI Competition. Furthermore, it leads to the development of a new parallelized Angry Birds AI Competition platform with undergraduate students aiming to use advanced optimization algorithms for their controllers.", "title": "" }, { "docid": "0fba05a38cb601a1b08e6105e6b949c1", "text": "This paper discusses how to implement Paillier homomorphic encryption (HE) scheme in Java as an API. We first analyze existing Pailler HE libraries and discuss their limitations. We then design a comparatively accomplished and efficient Pailler HE Java library. As a proof of concept, we applied our Pailler HE library in an electronic voting system that allows the voting server to sum up the candidates' votes in the encrypted form with voters remain anonymous. Our library records an average of only 2766ms for each vote placement through HTTP POST request.", "title": "" }, { "docid": "f1df8b69dfec944b474b9b26de135f55", "text": "Background:There are currently two million cancer survivors in the United Kingdom, and in recent years this number has grown by 3% per annum. The aim of this paper is to provide long-term projections of cancer prevalence in the United Kingdom.Methods:National cancer registry data for England were used to estimate cancer prevalence in the United Kingdom in 2009. Using a model of prevalence as a function of incidence, survival and population demographics, projections were made to 2040. Different scenarios of future incidence and survival, and their effects on cancer prevalence, were also considered. Colorectal, lung, prostate, female breast and all cancers combined (excluding non-melanoma skin cancer) were analysed separately.Results:Assuming that existing trends in incidence and survival continue, the number of cancer survivors in the United Kingdom is projected to increase by approximately one million per decade from 2010 to 2040. Particularly large increases are anticipated in the oldest age groups, and in the number of long-term survivors. 
By 2040, almost a quarter of people aged at least 65 will be cancer survivors.Conclusion:Increasing cancer survival and the growing/ageing population of the United Kingdom mean that the population of survivors is likely to grow substantially in the coming decades, as are the related demands upon the health service. Plans must, therefore, be laid to ensure that the varied needs of cancer survivors can be met in the future.", "title": "" }, { "docid": "28d19824a598ae20039f2ed5d8885234", "text": "Soft-tissue augmentation of the face is an increasingly popular cosmetic procedure. In recent years, the number of available filling agents has also increased dramatically, improving the range of options available to physicians and patients. Understanding the different characteristics, capabilities, risks, and limitations of the available dermal and subdermal fillers can help physicians improve patient outcomes and reduce the risk of complications. The most popular fillers are those made from cross-linked hyaluronic acid (HA). A major and unique advantage of HA fillers is that they can be quickly and easily reversed by the injection of hyaluronidase into areas in which elimination of the filler is desired, either because there is excess HA in the area or to accelerate the resolution of an adverse reaction to treatment or to the product. In general, a lower incidence of complications (especially late-occurring or long-lasting effects) has been reported with HA fillers compared with the semi-permanent and permanent fillers. The implantation of nonreversible fillers requires more and different expertise on the part of the physician than does injection of HA fillers, and may produce effects and complications that are more difficult or impossible to manage even by the use of corrective surgery. Most practitioners use HA fillers as the foundation of their filler practices because they have found that HA fillers produce excellent aesthetic outcomes with high patient satisfaction, and a low incidence and severity of complications. Only limited subsets of physicians and patients have been able to justify the higher complexity and risks associated with the use of nonreversible fillers.", "title": "" }, { "docid": "a574355d46c6e26efe67aefe2869a0cb", "text": "The continuously increasing cost of the US healthcare system has received significant attention. Central to the ideas aimed at curbing this trend is the use of technology in the form of the mandate to implement electronic health records (EHRs). EHRs consist of patient information such as demographics, medications, laboratory test results, diagnosis codes, and procedures. Mining EHRs could lead to improvement in patient health management as EHRs contain detailed information related to disease prognosis for large patient populations. In this article, we provide a structured and comprehensive overview of data mining techniques for modeling EHRs. We first provide a detailed understanding of the major application areas to which EHR mining has been applied and then discuss the nature of EHR data and its accompanying challenges. Next, we describe major approaches used for EHR mining, the metrics associated with EHRs, and the various study designs. 
With this foundation, we then provide a systematic and methodological organization of existing data mining techniques used to model EHRs and discuss ideas for future research.", "title": "" }, { "docid": "02e63f2279dbd980c6689bec5ea18411", "text": "Reflection photoplethysmography (PPG) using 530 nm (green) wavelength light has the potential to be a superior method for monitoring heart rate (HR) during normal daily life due to its relative freedom from artifacts. However, little is known about the accuracy of pulse rate (PR) measured by 530 nm light PPG during motion. Therefore, we compared the HR measured by electrocardiography (ECG) as a reference with PR measured by 530, 645 (red), and 470 nm (blue) wavelength light PPG during baseline and while performing hand waving in 12 participants. In addition, we examined the change of signal-to-noise ratio (SNR) by motion for each of the three wavelengths used for the PPG. The results showed that the limit of agreement in Bland-Altman plots between the HR measured by ECG and PR measured by 530 nm light PPG (±0.61 bpm) was smaller than that achieved when using 645 and 470 nm light PPG (±3.20 bpm and ±2.23 bpm, respectively). The ΔSNR (the difference between baseline and task values) of 530 and 470 nm light PPG was significantly smaller than ΔSNR for red light PPG. In conclusion, 530 nm light PPG could be a more suitable method than 645 and 470 nm light PPG for monitoring HR in normal daily life.", "title": "" }, { "docid": "5ccf0b3f871f8362fccd4dbd35a05555", "text": "Recent evidence suggests a positive impact of bilingualism on cognition, including later onset of dementia. However, monolinguals and bilinguals might have different baseline cognitive ability. We present the first study examining the effect of bilingualism on later-life cognition controlling for childhood intelligence. We studied 853 participants, first tested in 1947 (age = 11 years), and retested in 2008-2010. Bilinguals performed significantly better than predicted from their baseline cognitive abilities, with strongest effects on general intelligence and reading. Our results suggest a positive effect of bilingualism on later-life cognition, including in those who acquired their second language in adulthood.", "title": "" }, { "docid": "736ee2bed70510d77b1f9bb13b3bee68", "text": "Yes, they do. This work investigates a perspective for deep learning: whether different normalization layers in a ConvNet require different normalizers. This is the first step towards understanding this phenomenon. We allow each convolutional layer to be stacked before a switchable normalization (SN) that learns to choose a normalizer from a pool of normalization methods. Through systematic experiments in ImageNet, COCO, Cityscapes, and ADE20K, we answer three questions: (a) Is it useful to allow each normalization layer to select its own normalizer? (b) What impacts the choices of normalizers? (c) Do different tasks and datasets prefer different normalizers?
Our results suggest that (1) using distinct normalizers improves both learning and generalization of a ConvNet; (2) the choices of normalizers are more related to depth and batch size, but less relevant to parameter initialization, learning rate decay, and solver; (3) different tasks and datasets have different behaviors when learning to select normalizers.", "title": "" }, { "docid": "c60c83c93577377bad43ed1972079603", "text": "In this contribution, a set of robust GaN MMIC T/R switches and low-noise amplifiers, all based on the same GaN process, is presented. The target operating bandwidths are the X-band and the 2-18 GHz bandwidth. Several robustness tests on the fabricated MMICs demonstrate state-of-the-art survivability to CW input power levels. The development of high-power amplifiers, robust low-noise amplifiers and T/R switches on the same GaN monolithic process will bring to the next generation of fully-integrated T/R module", "title": "" }, { "docid": "57e9467bfbc4e891acd00dcdac498e0e", "text": "Cross-cultural perspectives have brought renewed interest in the social aspects of the self and the extent to which individuals define themselves in terms of their relationships to others and to social groups. This article provides a conceptual review of research and theory of the social self, arguing that the personal, relational, and collective levels of self-definition represent distinct forms of self-representation with different origins, sources of self-worth, and social motivations. A set of 3 experiments illustrates how priming of the interpersonal or collective \"we\" can alter spontaneous judgments of similarity and self-descriptions.", "title": "" }, { "docid": "e50c921d664f970daa8050bad282e066", "text": "In the complex decision-environments that characterize e-business settings, it is important to permit decision-makers to proactively manage data quality. In this paper we propose a decision-support framework that permits decision-makers to gauge quality both in an objective (context-independent) and in a context-dependent manner. The framework is based on the information product approach and uses the Information Product Map (IPMAP). We illustrate its application in evaluating data quality using completeness—a data quality dimension that is acknowledged as important. A decision-support tool (IPView) for managing data quality that incorporates the proposed framework is also described. © 2005 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "01c267fbce494fcfabeabd38f18c19a3", "text": "New insights in the programming physics of silicided polysilicon fuses integrated in 90 nm CMOS have led to a programming time of 100 ns, while achieving a resistance increase of 10^7. This is an order of magnitude better than any previously published result for the programming time and resistance increase individually. Simple calculations and TEM-analyses substantiate the proposed programming mechanism. The advantage of a rectangular fuse head over a tapered fuse head is shown and explained", "title": "" }, { "docid": "e875d4a88e73984e37f5ce9ffe543791", "text": "A set of face stimuli called the NimStim Set of Facial Expressions is described. The goal in creating this set was to provide facial expressions that untrained individuals, characteristic of research participants, would recognize. This set is large in number, multiracial, and available to the scientific community online. The results of psychometric evaluations of these stimuli are presented.
The results lend empirical support for the validity and reliability of this set of facial expressions as determined by accurate identification of expressions and high intra-participant agreement across two testing sessions, respectively.", "title": "" }, { "docid": "829eafadf393a66308db452eeef617d5", "text": "The goal of creating non-biological intelligence has been with us for a long time, predating the nominal 1956 establishment of the field of artificial intelligence by centuries or, under some definitions, even by millennia. For much of this history it was reasonable to recast the goal of “creating” intelligence as that of “designing” intelligence. For example, it would have been reasonable in the 17th century, as Leibnitz was writing about reasoning as a form of calculation, to think that the process of creating artificial intelligence would have to be something like the process of creating a waterwheel or a pocket watch: first understand the principles, then use human intelligence to devise a design based on the principles, and finally build a system in accordance with the design. At the dawn of the 19th century William Paley made such assumptions explicit, arguing that intelligent designers are necessary for the production of complex adaptive systems. And then, of course, Paley was soundly refuted by Charles Darwin in 1859. Darwin showed how complex and adaptive systems can arise naturally from a process of selection acting on random variation. That is, he showed that complex and adaptive design could be created without an intelligent designer. On the basis of evidence from paleontology, molecular biology, and evolutionary theory we now understand that nearly all of the interesting features of biological agents, including intelligence, have arisen through roughly Darwinian evolutionary processes (with a few important refinements, some of which are mentioned below). But there are still some holdouts for the pre-Darwinian view. A recent survey in the United States found that 42% of respondents expressed a belief that “Life on Earth has existed in its present form since the beginning of time” [7], and these views are supported by powerful political forces including a stridently anti-science President. These shocking political realities are, however, beyond the scope of the present essay. This essay addresses a more subtle form of pre-Darwinian thinking that occurs even among the scientifically literate, and indeed even among highly trained scientists conducting advanced AI research. Those who engage in this form of pre-Darwinian thinking accept the evidence for the evolution of terrestrial life but ignore or even explicitly deny the power of evolutionary processes to produce adaptive complexity in other contexts. Within the artificial intelligence research community those who engage in this form of thinking ignore or deny the power of evolutionary processes to create machine intelligence. Before exploring this complaint further it is worth asking whether an evolved artificial intelligence would even serve the broader goals of AI as a field. Every AI text opens by defining the field, and some of the proffered definitions are explicitly oriented toward design—presumably design by intelligent humans. For example Dean et al. define AI as “the design and study of computer programs that behave intelligently” [2, p. 1]. Would the field, so defined, be served by the demonstration of an evolved artificial intelligence? 
It would insofar as we could study the evolved system and particularly if we could use our resulting understanding as the basis for future designs. So even the most design-oriented AI researchers should be interested in evolved artificial intelligence if it can in fact be created.", "title": "" }, { "docid": "8d176debd26505d424dcbf8f5cfdb4d1", "text": "We present a system for training deep neural networks for object detection using synthetic images. To handle the variability in real-world data, the system relies upon the technique of domain randomization, in which the parameters of the simulator-such as lighting, pose, object textures, etc.-are randomized in non-realistic ways to force the neural network to learn the essential features of the object of interest. We explore the importance of these parameters, showing that it is possible to produce a network with compelling performance using only non-artistically-generated synthetic data. With additional fine-tuning on real data, the network yields better performance than using real data alone. This result opens up the possibility of using inexpensive synthetic data for training neural networks while avoiding the need to collect large amounts of hand-annotated real-world data or to generate high-fidelity synthetic worlds-both of which remain bottlenecks for many applications. The approach is evaluated on bounding box detection of cars on the KITTI dataset.", "title": "" }, { "docid": "97b578720957155514ca9fbe68c03eed", "text": "Autonomous navigation in unstructured environments like forest or country roads with dynamic objects remains a challenging task, particularly with respect to the perception of the environment using multiple different sensors.", "title": "" }, { "docid": "52c1300a818340065ca16d02343f13fe", "text": "Article history: Received 9 September 2014 Received in revised form 25 January 2015 Accepted 9 February 2015 Available online xxxx", "title": "" }, { "docid": "419499ced8902a00909c32db352ea7f5", "text": "Software defined networks provide new opportunities for automating the process of network debugging. Many tools have been developed to verify the correctness of network configurations on the control plane. However, due to software bugs and hardware faults of switches, the correctness of control plane may not readily translate into that of data plane. To bridge this gap, we present VeriDP, which can monitor \"whether actual forwarding behaviors are complying with network configurations\". Given that policies are well-configured, operators can leverage VeriDP to monitor the correctness of the network data plane. In a nutshell, VeriDP lets switches tag packets that they forward, and report tags together with headers to the verification server before the packets leave the network. The verification server pre-computes all header-to-tag mappings based on the configuration, and checks whether the reported tags agree with the mappings. We prototype VeriDP with both software and hardware OpenFlow switches, and use emulation to show that VeriDP can detect common data plane fault including black holes and access violations, with a minimal impact on the data plane.", "title": "" }, { "docid": "186d9fc899fdd92c7e74615a2a054a03", "text": "In this paper, we propose an illumination-robust face recognition system via local directional pattern images. 
Usually, local pattern descriptors including local binary pattern and local directional pattern have been used in the field of the face recognition and facial expression recognition, since local pattern descriptors have important properties to be robust against the illumination changes and computational simplicity. Thus, this paper represents the face recognition approach that employs the local directional pattern descriptor and twodimensional principal analysis algorithms to achieve enhanced recognition accuracy. In particular, we propose a novel methodology that utilizes the transformed image obtained from local directional pattern descriptor as the direct input image of two-dimensional principal analysis algorithms, unlike that most of previous works employed the local pattern descriptors to acquire the histogram features. The performance evaluation of proposed system was performed using well-known approaches such as principal component analysis and Gabor-wavelets based on local binary pattern, and publicly available databases including the Yale B database and the CMU-PIE database were employed. Through experimental results, the proposed system showed the best recognition accuracy compared to different approaches, and we confirmed the effectiveness of the proposed method under varying lighting conditions.", "title": "" }, { "docid": "6fc870c703611e07519ce5fe956c15d1", "text": "Severe weather conditions such as rain and snow adversely affect the visual quality of images captured under such conditions thus rendering them useless for further usage and sharing. In addition, such degraded images drastically affect performance of vision systems. Hence, it is important to solve the problem of single image de-raining/de-snowing. However, this is a difficult problem to solve due to its inherent ill-posed nature. Existing approaches attempt to introduce prior information to convert it into a well-posed problem. In this paper, we investigate a new point of view in addressing the single image de-raining problem. Instead of focusing only on deciding what is a good prior or a good framework to achieve good quantitative and qualitative performance, we also ensure that the de-rained image itself does not degrade the performance of a given computer vision algorithm such as detection and classification. In other words, the de-rained result should be indistinguishable from its corresponding clear image to a given discriminator. This criterion can be directly incorporated into the optimization framework by using the recently introduced conditional generative adversarial networks (GANs). To minimize artifacts introduced by GANs and ensure better visual quality, a new refined loss function is introduced. Based on this, we propose a novel single image de-raining method called Image De-raining Conditional General Adversarial Network (ID-CGAN), which considers quantitative, visual and also discriminative performance into the objective function. Experiments evaluated on synthetic images and real images show that the proposed method outperforms many recent state-of-the-art single image de-raining methods in terms of quantitative and visual performance.", "title": "" } ]
scidocsrr
626257ecf74e8d0fa7476b5f12b7c2ff
ERNN: A Biologically Inspired Feedforward Neural Network to Discriminate Emotion From EEG Signal
[ { "docid": "4b284736c51435f9ab6f52f174dc7def", "text": "Recognition of emotion draws on a distributed set of structures that include the occipitotemporal neocortex, amygdala, orbitofrontal cortex and right frontoparietal cortices. Recognition of fear may draw especially on the amygdala and the detection of disgust may rely on the insula and basal ganglia. Two important mechanisms for recognition of emotions are the construction of a simulation of the observed emotion in the perceiver, and the modulation of sensory cortices via top-down influences.", "title": "" }, { "docid": "908716e7683bdc78283600f63bd3a1b0", "text": "The need for a simply applied quantitative assessment of handedness is discussed and some previous forms reviewed. An inventory of 20 items with a set of instructions and responseand computational-conventions is proposed and the results obtained from a young adult population numbering some 1100 individuals are reported. The separate items are examined from the point of view of sex, cultural and socio-economic factors which might appertain to them and also of their inter-relationship to each other and to the measure computed from them all. Criteria derived from these considerations are then applied to eliminate 10 of the original 20 items and the results recomputed to provide frequency-distribution and cumulative frequency functions and a revised item-analysis. The difference of incidence of handedness between the sexes is discussed.", "title": "" }, { "docid": "34257e8924d8f9deec3171589b0b86f2", "text": "The topics treated in The brain and emotion include the definition, nature, and functions of emotion (Ch. 3); the neural bases of emotion (Ch. 4); reward, punishment, and emotion in brain design (Ch. 10); a theory of consciousness and its application to understanding emotion and pleasure (Ch. 9); and neural networks and emotion-related learning (Appendix). The approach is that emotions can be considered as states elicited by reinforcers (rewards and punishers). This approach helps with understanding the functions of emotion, with classifying different emotions, and in understanding what information-processing systems in the brain are involved in emotion, and how they are involved. The hypothesis is developed that brains are designed around reward- and punishment-evaluation systems, because this is the way that genes can build a complex system that will produce appropriate but flexible behavior to increase fitness (Ch. 10). By specifying goals rather than particular behavioral patterns of responses, genes leave much more open the possible behavioral strategies that might be required to increase fitness. The importance of reward and punishment systems in brain design also provides a basis for understanding the brain mechanisms of motivation, as described in Chapters 2 for appetite and feeding, 5 for brain-stimulation reward, 6 for addiction, 7 for thirst, and 8 for sexual behavior.", "title": "" } ]
[ { "docid": "c23a86bc6d8011dab71ac5e1e2051c3b", "text": "The most widely used machine learning frameworks require users to carefully tune their memory usage so that the deep neural network (DNN) fits into the DRAM capacity of a GPU. This restriction hampers a researcher’s flexibility to study different machine learning algorithms, forcing them to either use a less desirable network architecture or parallelize the processing across multiple GPUs. We propose a runtime memory manager that virtualizes the memory usage of DNNs such that both GPU and CPU memory can simultaneously be utilized for training larger DNNs. Our virtualized DNN (vDNN) reduces the average memory usage of AlexNet by 61% and OverFeat by 83%, a significant reduction in memory requirements of DNNs. Similar experiments on VGG-16, one of the deepest and memory hungry DNNs to date, demonstrate the memory-efficiency of our proposal. vDNN enables VGG-16 with batch size 256 (requiring 28 GB of memory) to be trained on a single NVIDIA K40 GPU card containing 12 GB of memory, with 22% performance loss compared to a hypothetical GPU with enough memory to hold the entire DNN.", "title": "" }, { "docid": "3f9139e961f00e6f2cec14dbb0e94683", "text": "WebQuestions and SimpleQuestions are two benchmark data-sets commonly used in recent knowledge-based question answering (KBQA) work. Most questions in them are ‘simple’ questions which can be answered based on a single relation in the knowledge base. Such data-sets lack the capability of evaluating KBQA systems on complicated questions. Motivated by this issue, we release a new data-set, namely ComplexQuestions1, aiming to measure the quality of KBQA systems on ‘multi-constraint’ questions which require multiple knowledge base relations to get the answer. Beside, we propose a novel systematic KBQA approach to solve multi-constraint questions. Compared to state-of-the-art methods, our approach not only obtains comparable results on the two existing benchmark data-sets, but also achieves significant improvements on the ComplexQuestions.", "title": "" }, { "docid": "99ddcb898895b04f4e86337fe35c1713", "text": "Emerging self-driving vehicles are vulnerable to different attacks due to the principle and the type of communication systems that are used in these vehicles. These vehicles are increasingly relying on external communication via vehicular ad hoc networks (VANETs). VANETs add new threats to self-driving vehicles that contribute to substantial challenges in autonomous systems. These communication systems render self-driving vehicles vulnerable to many types of malicious attacks, such as Sybil attacks, Denial of Service (DoS), black hole, grey hole and wormhole attacks. In this paper, we propose an intelligent security system designed to secure external communications for self-driving and semi self-driving cars. The proposed scheme is based on Proportional Overlapping Score (POS) to decrease the number of features found in the Kyoto benchmark dataset. The hybrid detection system relies on the Back Propagation neural networks (BP), to detect a common type of attack in VANETs: Denial-of-Service (DoS). The experimental results show that the proposed BP-IDS is capable of identifying malicious vehicles in self-driving and semi self-driving vehicles.", "title": "" }, { "docid": "2319dccdb7635a23ab702f10788ea09f", "text": "The molecular basis of obligate anaerobiosis is not well established. Bacteroides thetaiotaomicron is an opportunistic pathogen that cannot grow in fully aerobic habitats. 
Because microbial niches reflect features of energy-producing strategies, we suspected that aeration would interfere with its central metabolism. In anaerobic medium, this bacterium fermented carbohydrates to a mixture of succinate, propionate and acetate. When cultures were exposed to air, the formation of succinate and propionate ceased abruptly. In vitro analysis demonstrated that the fumarase of the succinate-propionate pathway contains an iron-sulphur cluster that is sensitive to superoxide. In vivo, fumarase activity fell to < 5% when cells were aerated; virtually all activity was recovered after extracts were chemically treated to rebuild iron-sulphur clusters. Aeration minimally affected the remainder of this pathway. However, aeration reduced pyruvate:ferredoxin oxidoreductase (PFOR), the first enzyme in the acetate fermentation branch, to 3% of its anaerobic activity. This cluster-containing enzyme was damaged in vitro by molecular oxygen but not by superoxide. Thus, aerobic growth is precluded by the vulnerability of these iron-sulphur cluster enzymes to oxidation. Importantly, both enzymes were maintained in a stable, inactive form for long periods in aerobic cells; they were then rapidly repaired when the bacterium was returned to anaerobic medium. This result explains how this pathogen can easily recover from occasional exposure to oxygen.", "title": "" }, { "docid": "c138108f567d7f2dd130b6209b11caef", "text": "Autotuning using relay feedback is widely used to identify low order integrating plus dead time (IPDT) systems as the method is simple and is operated in closed-loop without interrupting the production process. Oscillatory responses from the process due to ideal relay input are collected to calculate ultimate properties of the system that in turn are used to model the responses as functions of system model parameters. These theoretical models of relay response are validated. After adjusting the phase shift, input and output responses are used to find land mark points that are used to formulate algorithms for parameter estimation of the process model. The method is even applicable to distorted relay responses due to load disturbance or measurement noise. Closed-loop simulations are carried out using model based control strategy and performances are calculated.", "title": "" }, { "docid": "5c3137529a63c0c1ba45c22b292f3008", "text": "Information extraction by text segmentation (IETS) applies to cases in which data values of interest are organized in implicit semi-structured records available in textual sources (e.g. postal addresses, bibliographic information, ads). It is an important practical problem that has been frequently addressed in the recent literature. In this paper we introduce ONDUX (On Demand Unsupervised Information Extraction), a new unsupervised probabilistic approach for IETS. As other unsupervised IETS approaches, ONDUX relies on information available on pre-existing data to associate segments in the input string with attributes of a given domain. Unlike other approaches, we rely on very effective matching strategies instead of explicit learning strategies. The effectiveness of this matching strategy is also exploited to disambiguate the extraction of certain attributes through a reinforcement step that explores sequencing and positioning of attribute values directly learned on-demand from test data, with no previous human-driven training, a feature unique to ONDUX. 
This assigns to ONDUX a high degree of flexibility and results in superior effectiveness, as demonstrated by the experimental evaluation we report with textual sources from different domains, in which ONDUX is compared with a state-of-the-art IETS approach.", "title": "" }, { "docid": "2d1290b3cee0bcbc3a1448046bea10aa", "text": "Photometric stereo using unorganized Internet images is very challenging, because the input images are captured under unknown general illuminations, with uncontrolled cameras. We propose to solve this difficult problem by a simple yet effective approach that makes use of a coarse shape prior. The shape prior is obtained from multi-view stereo and will be useful in twofold: resolving the shape-light ambiguity in uncalibrated photometric stereo and guiding the estimated normals to produce the high quality 3D surface. By assuming the surface albedo is not highly contrasted, we also propose a novel linear approximation of the nonlinear camera responses with our normal estimation algorithm. We evaluate our method using synthetic data and demonstrate the surface improvement on real data over multi-view stereo results.", "title": "" }, { "docid": "f1c80c3e266029012390c6ac47765cc6", "text": "Whenever clients shop in the Internet, they provide identifying data of themselves to parties like the webshop, shipper and payment system. These identifying data merged with their shopping history might be misused for targeted advertisement up to possible manipulations of the clients. The data also contains credit card or bank account numbers, which may be used for unauthorized money transactions by the involved parties or by criminals hacking the parties’ computing infrastructure. In order to minimize these risks, we propose an approach for anonymous shopping by separation of data. We argue for the feasibility of our approach by discussing important operations like simple reclamation cases and criminal investigations.", "title": "" }, { "docid": "5aab6cd36899f3d5e3c93cf166563a3e", "text": "Vein images generally appear darker with low contrast, which require contrast enhancement during preprocessing to design satisfactory hand vein recognition system. However, the modification introduced by contrast enhancement (CE) is reported to bring side effects through pixel intensity distribution adjustments. Furthermore, the inevitable results of fake vein generation or information loss occur and make nearly all vein recognition systems unconvinced. In this paper, a “CE-free” quality-specific vein recognition system is proposed, and three improvements are involved. First, a high-quality lab-vein capturing device is designed to solve the problem of low contrast from the view of hardware improvement. Then, a high quality lab-made database is established. Second, CFISH score, a fast and effective measurement for vein image quality evaluation, is proposed to obtain quality index of lab-made vein images. Then, unsupervised K-means with optimized initialization and convergence condition is designed with the quality index to obtain the grouping results of the database, namely, low quality (LQ) and high quality (HQ). Finally, discriminative local binary pattern (DLBP) is adopted as the basis for feature extraction. For the HQ image, DLBP is adopted directly for feature extraction, and for the LQ one, CE_DLBP could be utilized for discriminative feature extraction for LQ images.
Based on the lab-made database, rigorous experiments are conducted to demonstrate the effectiveness and feasibility of the proposed system. What is more, an additional experiment with PolyU database illustrates its generalization ability and robustness.", "title": "" }, { "docid": "c71a5f23d9d8b9093ca1b2ccdb3d396a", "text": "In the recent years Sentiment analysis (SA) has gained momentum by the increase of social networking sites. Sentiment analysis has been an important topic for data mining, social media for classifying reviews and thereby rating the entities such as products, movies etc. This paper represents a comparative study of sentiment classification of lexicon based approach and naive bayes classifier of machine learning in sentiment analysis.", "title": "" }, { "docid": "17cdb26d3fd4e915341b21fcf85606c8", "text": "Persistent occiput posterior (OP) is associated with increased rates of maternal and newborn morbidity. Its diagnosis by physical examination is challenging but is improved with bedside ultrasonography. Occiput posterior discovered in the active phase or early second stage of labor usually resolves spontaneously. When it does not, prophylactic manual rotation may decrease persistent OP and its associated complications. When delivery is indicated for arrest of descent in the setting of persistent OP, a pragmatic approach is suggested. Suspected fetal macrosomia, a biparietal diameter above the pelvic inlet or a maternal pelvis with android features should prompt cesarean delivery. Nonrotational operative vaginal delivery is appropriate when the maternal pelvis has a narrow anterior segment but ample room posteriorly, like with anthropoid features. When all other conditions are met and the fetal head arrests in an OP position in a patient with gynecoid pelvic features and ample room anteriorly, options include cesarean delivery, nonrotational operative vaginal delivery, and rotational procedures, either manual or with the use of rotational forceps. Recent literature suggests that maternal and fetal outcomes with rotational forceps are better than those reported in older series. Although not without significant challenges, a role remains for teaching and practicing selected rotational forceps operations in contemporary obstetrics.", "title": "" }, { "docid": "663342554879c5464a7e1aff969339b7", "text": "Esthetic surgery of external female genitalia remains an uncommon procedure. This article describes a novel, de-epithelialized, labial rim flap technique for labia majora augmentation using de-epithelialized labia minora tissue otherwise to be excised as an adjunct to labia minora reduction. Ten patients were included in the study. The protruding segments of the labia minora were de-epithelialized with a fine scissors or scalpel instead of being excised, and a bulky section of subcutaneous tissue was obtained. Between the outer and inner surfaces of the labia minora, a flap with a subcutaneous pedicle was created in continuity with the de-epithelialized marginal tissue. A pocket was dissected in the labium majus, and the flap was transposed into the pocket to augment the labia majora. Mean patient age was 39.9 (±13.9) years, mean operation time was 60 min, and mean follow-up period was 14.5 (±3.4) months. There were no major complications (hematoma, wound dehiscence, infection) following surgery.
No patient complained of postoperative difficulty with coitus or dyspareunia. All patients were satisfied with the final appearance. Several methods for labia minora reduction have been described. Auxiliary procedures are required with labia minora reduction for better results. Nevertheless, few authors have taken into account the final esthetic appearance of the whole female external genitalia. The described technique in this study is indicated primarily for mild atrophy of the labia majora with labia minora hypertrophy; the technique resulted in perfect patient satisfaction with no major complications or postoperative coital problems. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266 .", "title": "" }, { "docid": "0a335ec3a17c202e92341b51a90d9f61", "text": "Recently, very deep convolutional neural networks (CNNs) have shown outstanding performance in object recognition and have also been the first choice for dense classification problems such as semantic segmentation. However, repeated subsampling operations like pooling or convolution striding in deep CNNs lead to a significant decrease in the initial image resolution. Here, we present RefineNet, a generic multi-path refinement network that explicitly exploits all the information available along the down-sampling process to enable high-resolution prediction using long-range residual connections. In this way, the deeper layers that capture high-level semantic features can be directly refined using fine-grained features from earlier convolutions. The individual components of RefineNet employ residual connections following the identity mapping mindset, which allows for effective end-to-end training. Further, we introduce chained residual pooling, which captures rich background context in an efficient manner. We carry out comprehensive experiments and set new stateof-the-art results on seven public datasets. In particular, we achieve an intersection-over-union score of 83.4 on the challenging PASCAL VOC 2012 dataset, which is the best reported result to date.", "title": "" }, { "docid": "28c03f6fb14ed3b7d023d0983cb1e12b", "text": "The focus of this paper is speeding up the application of convolutional neural networks. While delivering impressive results across a range of computer vision and machine learning tasks, these networks are computationally demanding, limiting their deployability. Convolutional layers generally consume the bulk of the processing time, and so in this work we present two simple schemes for drastically speeding up these layers. This is achieved by exploiting cross-channel or filter redundancy to construct a low rank basis of filters that are rank-1 in the spatial domain. Our methods are architecture agnostic, and can be easily applied to existing CPU and GPU convolutional frameworks for tuneable speedup performance. We demonstrate this with a real world network designed for scene text character recognition [15], showing a possible 2.5⇥ speedup with no loss in accuracy, and 4.5⇥ speedup with less than 1% drop in accuracy, still achieving state-of-the-art on standard benchmarks.", "title": "" }, { "docid": "3487dcd4c0e609b3683175ce5b056563", "text": "Various surgical techniques are available in the management of pilonidal sinus, but controversy concerning the optimal surgical approach persists. 
The present study analyzes the outcome of unroofing and curettage as the primary intervention for acute and chronic pilonidal disease. A total of 297 consecutive patients presenting with chronic disease, acute abscess, or recurrent disease were treated with unroofing and curettage. The wound was left open to heal by secondary intention. Hospitalization, time required to resume daily activities and return to work, healing time, and recurrence rates were recorded. All patients were discharged within the first 24 h after operation. The median period before returning to work was 3.2 ± 1.2 days, and the mean time for wound healing was 5.4 ± 1.1 weeks. Six patients were readmitted with recurrence of the disease within the first six postoperative months. All recurrences were in patients who did not follow the wound care advice and who did not come to regular weekly appointments. Patients with recurrence underwent repeat surgery by the same technique with good results. Unroofing and curettage for pilonidal sinus disease is an easy and effective technique. The vast majority of the patients, including those with abscess as well as those with chronic disease, will heal with this simple procedure, after which even recurrences can be managed successfully with the same procedure. Relying on these results, we advocate unroofing and curettage as the procedure of choice in the management of pilonidal disease.", "title": "" }, { "docid": "ce463006a11477c653c15eb53f673837", "text": "This paper presents a meaning-based statistical math word problem (MWP) solver with understanding, reasoning and explanation. It comprises a web user interface and pipelined modules for analysing the text, transforming both body and question parts into their logic forms, and then performing inference on them. The associated context of each quantity is represented with proposed role-tags (e.g., nsubj, verb, etc.), which provides the flexibility for annotating the extracted math quantity with its associated syntactic and semantic information (which specifies the physical meaning of that quantity). Those role-tags are then used to identify the desired operands and filter out irrelevant quantities (so that the answer can be obtained precisely). Since the physical meaning of each quantity is explicitly represented with those role-tags and used in the inference process, the proposed approach could explain how the answer is obtained in a human comprehensible way.", "title": "" }, { "docid": "0aa7a61ae2d73b017b5acdd885d7c0ef", "text": "3GPP Long Term Evolution-Advanced (LTE-A) aims at enhancement of LTE performance in many respects including the system capacity and network coverage. This enhancement can be accomplished by heterogeneous networks (HetNets) where additional micro-nodes that require lower transmission power are efficiently deployed. More careful management of mobility and handover (HO) might be required in HetNets compared to homogeneous networks where all nodes require the same transmission power. In this article, we provide a technical overview of mobility and HO management for HetNets in LTEA. Moreover, we investigate the A3-event which requires a certain criterion to be met for HO. The criterion involves the reference symbol received power/quality of user equipment (UE), hysteresis margin, and a number of offset parameters based on proper HO timing, i.e., time-to-trigger (TTT). 
Optimum setting of these parameters are not trivial task, and has to be determined depending on UE speed, propagation environment, system load, deployed HetNets configuration, etc. Therefore, adaptive TTT values with given hysteresis margin for the lowest ping pong rate within 2 % of radio link failure rate depending on UE speed and deployed HetNets configuration are investigated in this article.", "title": "" }, { "docid": "fd11fbed7a129e3853e73040cbabb56c", "text": "A digitally modulated power amplifier (DPA) in 1.2 V 0.13 mum SOI CMOS is presented, to be used as a building block in multi-standard, multi-band polar transmitters. It performs direct amplitude modulation of an input RF carrier by digitally controlling an array of 127 unary-weighted and three binary-weighted elementary gain cells. The DPA is based on a novel two-stage topology, which allows seamless operation from 800 MHz through 2 GHz, with a full-power efficiency larger than 40% and a 25.2 dBm maximum envelope power. Adaptive digital predistortion is exploited for DPA linearization. The circuit is thus able to reconstruct 21.7 dBm WCDMA/EDGE signals at 1.9 GHz with 38% efficiency and a higher than 10 dB margin on all spectral specifications. As a result of the digital modulation technique, a higher than 20.1 % efficiency is guaranteed for WCDMA signals with a peak-to-average power ratio as high as 10.8 dB. Furthermore, a 15.3 dBm, 5 MHz WiMAX OFDM signal is successfully reconstructed with a 22% efficiency and 1.53% rms EVM. A high 10-bit nominal resolution enables a wide-range TX power control strategy to be implemented, which greatly minimizes the quiescent consumption down to 10 mW. A 16.4% CDMA average efficiency is thus obtained across a > 70 dB power control range, while complying with all the spectral specifications.", "title": "" }, { "docid": "d5bc3147e23f95a070bce0f37a96c2a8", "text": "This paper presents a fully integrated wideband current-mode digital polar power amplifier (DPA) in CMOS with built-in AM–PM distortion self-compensation. Feedforward capacitors are implemented in each differential cascode digital power cell. These feedforward capacitors operate together with a proposed DPA biasing scheme to minimize the DPA output device capacitance <inline-formula> <tex-math notation=\"LaTeX\">$C_{d}$ </tex-math></inline-formula> variations over a wide output power range and a wide carrier frequency bandwidth, resulting in DPA AM–PM distortion reduction. A three-coil transformer-based DPA output passive network is implemented within a single transformer footprint (330 <inline-formula> <tex-math notation=\"LaTeX\">$\\mu \\text{m} \\,\\, \\times $ </tex-math></inline-formula> 330 <inline-formula> <tex-math notation=\"LaTeX\">$\\mu \\text{m}$ </tex-math></inline-formula>) and provides parallel power combining and load impedance transformation with a low loss, an octave bandwidth, and a large impedance transformation ratio. Moreover, this proposed power amplifier (PA) output passive network shows a desensitized phase response to <inline-formula> <tex-math notation=\"LaTeX\">$C_{d}$ </tex-math></inline-formula> variations and further suppresses the DPA AM–PM distortion. Both proposed AM–PM distortion self-compensation techniques are effective for a large carrier frequency range and a wide modulation bandwidth, and are independent of the DPA AM control codes. 
This results in a superior inherent DPA phase linearity and reduces or even eliminates the need for phase pre-distortion, which dramatically simplifies the DPA pre-distortion computations. As a proof-of-concept, a 2–4.3 GHz wideband DPA is implemented in a standard 28-nm bulk CMOS process. Operating with a low supply voltage of 1.4 V for enhanced reliability, the DPA demonstrates ±0.5 dB PA output power bandwidth from 2 to 4.3 GHz with +24.9 dBm peak output power at 3.1 GHz. The measured peak PA drain efficiency is 42.7% at 2.5 GHz and is more than 27% from 2 to 4.3 GHz. The measured PA AM–PM distortion is within 6.8° at 2.8 GHz over the PA output power dynamic range of 25 dB, achieving the lowest AM–PM distortion among recently reported current-mode DPAs in the same frequency range. Without any phase pre-distortion, modulation measurements with a 20-MHz 802.11n standard compliant signal demonstrate 2.95% rms error vector magnitude, −33.5 dBc adjacent channel leakage ratio, 15.6% PA drain efficiency, and +14.6 dBm PA average output power at 2.8 GHz.", "title": "" } ]
scidocsrr
1a954f582f8660d7acb410cebfe1a9d1
Big Data: Understanding Big Data
[ { "docid": "f35d164bd1b19f984b10468c41f149e3", "text": "Recent technological advancements have led to a deluge of data from distinctive domains (e.g., health care and scientific sensors, user-generated data, Internet and financial companies, and supply chain systems) over the past two decades. The term big data was coined to capture the meaning of this emerging trend. In addition to its sheer volume, big data also exhibits other unique characteristics as compared with traditional data. For instance, big data is commonly unstructured and require more real-time analysis. This development calls for new system architectures for data acquisition, transmission, storage, and large-scale data processing mechanisms. In this paper, we present a literature survey and system tutorial for big data analytics platforms, aiming to provide an overall picture for nonexpert readers and instill a do-it-yourself spirit for advanced audiences to customize their own big-data solutions. First, we present the definition of big data and discuss big data challenges. Next, we present a systematic framework to decompose big data systems into four sequential modules, namely data generation, data acquisition, data storage, and data analytics. These four modules form a big data value chain. Following that, we present a detailed survey of numerous approaches and mechanisms from research and industry communities. In addition, we present the prevalent Hadoop framework for addressing big data challenges. Finally, we outline several evaluation benchmarks and potential research directions for big data systems.", "title": "" }, { "docid": "cd35602ecb9546eb0f9a0da5f6ae2fdf", "text": "The size of data sets being collected and analyzed in the industry for business intelligence is growing rapidly, making traditional warehousing solutions prohibitively expensive. Hadoop [3] is a popular open-source map-reduce implementation which is being used as an alternative to store and process extremely large data sets on commodity hardware. However, the map-reduce programming model is very low level and requires developers to write custom programs which are hard to maintain and reuse. In this paper, we present Hive, an open-source data warehousing solution built on top of Hadoop. Hive supports queries expressed in a SQL-like declarative language HiveQL, which are compiled into map-reduce jobs executed on Hadoop. In addition, HiveQL supports custom map-reduce scripts to be plugged into queries. The language includes a type system with support for tables containing primitive types, collections like arrays and maps, and nested compositions of the same. The underlying IO libraries can be extended to query data in custom formats. Hive also includes a system catalog, Hive-Metastore, containing schemas and statistics, which is useful in data exploration and query optimization. In Facebook, the Hive warehouse contains several thousand tables with over 700 terabytes of data and is being used extensively for both reporting and ad-hoc analyses by more than 100 users. The rest of the paper is organized as follows. Section 2 describes the Hive data model and the HiveQL language with an example. Section 3 describes the Hive system architecture and an overview of the query life cycle. Section 4 provides a walk-through of the demonstration. 
We conclude with future work in Section 5.", "title": "" }, { "docid": "0281c96d3990df1159d58c6b5707b1ad", "text": "In the Big Data community, MapReduce has been seen as one of the key enabling approaches for meeting continuously increasing demands on computing resources imposed by massive data sets. The reason for this is the high scalability of the MapReduce paradigm which allows for massively parallel and distributed execution over a large number of computing nodes. This paper identifies MapReduce issues and challenges in handling Big Data with the objective of providing an overview of the field, facilitating better planning and management of Big Data projects, and identifying opportunities for future research in this field. The identified challenges are grouped into four main categories corresponding to Big Data tasks types: data storage (relational databases and NoSQL stores), Big Data analytics (machine learning and interactive analytics), online processing, and security and privacy. Moreover, current efforts aimed at improving and extending MapReduce to address identified challenges are presented. Consequently, by identifying issues and challenges MapReduce faces when handling Big Data, this study encourages future Big Data research.", "title": "" } ]
[ { "docid": "ce1db3eefae52f447eaac1b0e923054f", "text": "Agriculture and urban activities are major sources of phosphorus and nitrogen to aquatic ecosystems. Atmospheric deposition further contributes as a source of N. These nonpoint inputs of nutrients are difficult to measure and regulate because they derive from activities dispersed over wide areas of land and are variable in time due to effects of weather. In aquatic ecosystems, these nutrients cause diverse problems such as toxic algal blooms, loss of oxygen, fish kills, loss of biodiversity (including species important for commerce and recreation), loss of aquatic plant beds and coral reefs, and other problems. Nutrient enrichment seriously degrades aquatic ecosystems and impairs the use of water for drinking, industry, agriculture, recreation, and other purposes. Based on our review of the scientific literature, we are certain that (1) eutrophication is a widespread problem in rivers, lakes, estuaries, and coastal oceans, caused by overenrichment with P and N; (2) nonpoint pollution, a major source of P and N to surface waters of the United States, results primarily from agriculture and urban activity, including industry; (3) inputs of P and N to agriculture in the form of fertilizers exceed outputs in produce in the United States and many other nations; (4) nutrient flows to aquatic ecosystems are directly related to animal stocking densities, and under high livestock densities, manure production exceeds the needs of crops to which the manure is applied; (5) excess fertilization and manure production cause a P surplus to accumulate in soil, some of which is transported to aquatic ecosystems; and (6) excess fertilization and manure production on agricultural lands create surplus N, which is mobile in many soils and often leaches to downstream aquatic ecosystems, and which can also volatilize to the atmosphere, redepositing elsewhere and eventually reaching aquatic ecosystems. If current practices continue, nonpoint pollution of surface waters is virtually certain to increase in the future. Such an outcome is not inevitable, however, because a number of technologies, land use practices, and conservation measures are capable of decreasing the flow of nonpoint P and N into surface waters. From our review of the available scientific information, we are confident that: (1) nonpoint pollution of surface waters with P and N could be reduced by reducing surplus nutrient flows in agricultural systems and processes, reducing agricultural and urban runoff by diverse methods, and reducing N emissions from fossil fuel burning; and (2) eutrophication can be reversed by decreasing input rates of P and N to aquatic ecosystems, but rates of recovery are highly variable among water bodies. Often, the eutrophic state is persistent, and recovery is slow.", "title": "" }, { "docid": "dbe2d8bcbebfe3747b977ab5216d277e", "text": "Zero-shot methods in language, vision and other domains rely on a cross-space mapping function that projects vectors from the relevant feature space (e.g., visualfeature-based image representations) to a large semantic word space (induced in an unsupervised way from corpus data), where the entities of interest (e.g., objects images depict) are labeled with the words associated to the nearest neighbours of the mapped vectors. Zero-shot cross-space mapping methods hold great promise as a way to scale up annotation tasks well beyond the labels in the training data (e.g., recognizing objects that were never seen in training). 
However, the current performance of cross-space mapping functions is still quite low, so that the strategy is not yet usable in practical applications. In this paper, we explore some general properties, both theoretical and empirical, of the cross-space mapping function, and we build on them to propose better methods to estimate it. In this way, we attain large improvements over the state of the art, both in cross-linguistic (word translation) and cross-modal (image labeling) zero-shot experiments.", "title": "" }, { "docid": "4825ada359be4788a52f1fd616142a19", "text": "Attachment theory is extended to pertain to developmental changes in the nature of children's attachments to parents and surrogate figures during the years beyond infancy, and to the nature of other affectional bonds throughout the life cycle. Various types of affectional bonds are examined in terms of the behavioral systems characteristic of each and the ways in which these systems interact. Specifically, the following are discussed: (a) the caregiving system that underlies parents' bonds to their children, and a comparison of these bonds with children's attachments to their parents; (b) sexual pair-bonds and their basic components entailing the reproductive, attachment, and caregiving systems; (c) friendships both in childhood and adulthood, the behavioral systems underlying them, and under what circumstances they may become enduring bonds; and (d) kinship bonds (other than those linking parents and their children) and why they may be especially enduring.", "title": "" }, { "docid": "1927e46cd9a198b59b83dedd13881388", "text": "Vehicle automation has been one of the fundamental applications within the field of intelligent transportation systems (ITS) since the start of ITS research in the mid-1980s. For most of this time, it has been generally viewed as a futuristic concept that is not close to being ready for deployment. However, recent development of “self-driving” cars and the announcement by car manufacturers of their deployment by 2020 show that this is becoming a reality. The ITS industry has already been focusing much of its attention on the concepts of “connected vehicles” (United States) or “cooperative ITS” (Europe). These concepts are based on communication of data among vehicles (V2V) and/or between vehicles and the infrastructure (V2I/I2V) to provide the information needed to implement ITS applications. The separate threads of automated vehicles and cooperative ITS have not yet been thoroughly woven together, but this will be a necessary step in the near future because the cooperative exchange of data will provide vital inputs to improve the performance and safety of the automation systems. Thus, it is important to start thinking about the cybersecurity implications of cooperative automated vehicle systems. In this paper, we investigate the potential cyberattacks specific to automated vehicles, with their special needs and vulnerabilities. We analyze the threats on autonomous automated vehicles and cooperative automated vehicles. This analysis shows the need for considerably more redundancy than many have been expecting. 
We also raise awareness to generate discussion about these threats at this early stage in the development of vehicle automation systems.", "title": "" }, { "docid": "81d82cd481ee3719c74d381205a4a8bb", "text": "Consider a set of <italic>S</italic> of <italic>n</italic> data points in real <italic>d</italic>-dimensional space, R<supscrpt>d</supscrpt>, where distances are measured using any Minkowski metric. In nearest neighbor searching, we preprocess <italic>S</italic> into a data structure, so that given any query point <italic>q</italic><inline-equation> <f>∈</f></inline-equation> R<supscrpt>d</supscrpt>, is the closest point of S to <italic>q</italic> can be reported quickly. Given any positive real ε, data point <italic>p</italic> is a (1 +ε)-<italic>approximate nearest neighbor</italic> of <italic>q</italic> if its distance from <italic>q</italic> is within a factor of (1 + ε) of the distance to the true nearest neighbor. We show that it is possible to preprocess a set of <italic>n</italic> points in R<supscrpt>d</supscrpt> in <italic>O(dn</italic> log <italic>n</italic>) time and <italic>O(dn)</italic> space, so that given a query point <italic> q</italic> <inline-equation> <f>∈</f></inline-equation> R<supscrpt>d</supscrpt>, and ε > 0, a (1 + ε)-approximate nearest neighbor of <italic>q</italic> can be computed in <italic>O</italic>(<italic>c</italic><subscrpt><italic>d</italic>, ε</subscrpt> log <italic>n</italic>) time, where <italic>c<subscrpt>d,ε</subscrpt></italic>≤<italic>d</italic> <inline-equation> <f><fen lp=\"ceil\">1 + 6d/<g>e</g><rp post=\"ceil\"></fen></f></inline-equation>;<supscrpt>d</supscrpt> is a factor depending only on dimension and ε. In general, we show that given an integer <italic>k</italic> ≥ 1, (1 + ε)-approximations to the <italic>k</italic> nearest neighbors of <italic>q</italic> can be computed in additional <italic>O(kd</italic> log <italic>n</italic>) time.", "title": "" }, { "docid": "8b7f55a5e86e9eac08b3a9cf21f378e9", "text": "In 1977 Dalenius articulated a desideratum for statistical databases: nothing about an individual should be learnable from the database that cannot be learned without access to the database. We give a general impossibility result showing that a formalization of Dalenius’ goal along the lines of semantic security cannot be achieved. Contrary to intuition, a variant of the result threatens the privacy even of someone not in the database. This state of affairs suggests a new measure, differential privacy, which, intuitively, captures the increased risk to one’s privacy incurred by participating in a database. The techniques developed in a sequence of papers [8, 13, 3], culminating in those described in [12], can achieve any desired level of privacy under this measure. In many cases, extremely accurate information about the database can be provided while simultaneously ensuring very high levels of privacy.", "title": "" }, { "docid": "638f7bf2f47895274995df166564ecc1", "text": "In recent years, the video game market has embraced augmented reality video games, a class of video games that is set to grow as gaming technologies develop. Given the widespread use of video games among children and adolescents, the health implications of augmented reality technology must be closely examined. Augmented reality technology shows a potential for the promotion of healthy behaviors and social interaction among children. 
However, the full immersion and physical movement required in augmented reality video games may also put users at risk for physical and mental harm. Our review article and commentary emphasizes both the benefits and dangers of augmented reality video games for children and adolescents.", "title": "" }, { "docid": "4bf6c59cdd91d60cf6802ae99d84c700", "text": "This paper describes a network storage system, called Venti, intended for archival data. In this system, a unique hash of a block’s contents acts as the block identifier for read and write operations. This approach enforces a write-once policy, preventing accidental or malicious destruction of data. In addition, duplicate copies of a block can be coalesced, reducing the consumption of storage and simplifying the implementation of clients. Venti is a building block for constructing a variety of storage applications such as logical backup, physical backup, and snapshot file systems. We have built a prototype of the system and present some preliminary performance results. The system uses magnetic disks as the storage technology, resulting in an access time for archival data that is comparable to non-archival data. The feasibility of the write-once model for storage is demonstrated using data from over a decade’s use of two Plan 9 file systems.", "title": "" }, { "docid": "09380650b0af3851e19f18de4a2eacb2", "text": "This paper presents a novel self-assembly modular robot (Sambot) that also shares characteristics with self-reconfigurable and self-assembly and swarm robots. Each Sambot can move autonomously and connect with the others. Multiple Sambot can be self-assembled to form a robotic structure, which can be reconfigured into different configurable robots and can locomote. A novel mechanical design is described to realize function of autonomous motion and docking. Introducing embedded mechatronics integrated technology, whole actuators, sensors, microprocessors, power and communication unit are embedded in the module. The Sambot is compact and flexble, the overall size is 80×80×102mm. The preliminary self-assembly and self-reconfiguration of Sambot is discussed, and several possible configurations consisting of multiple Sambot are designed in simulation environment. At last, the experiment of self-assembly and self-reconfiguration and locomotion of multiple Sambot has been implemented.", "title": "" }, { "docid": "df7fc38a7c832273e884d2bad078ca93", "text": "OBJECTIVES\nTo provide UK normative data for the Depression Anxiety and Stress Scale (DASS) and test its convergent, discriminant and construct validity.\n\n\nDESIGN\nCross-sectional, correlational and confirmatory factor analysis (CFA).\n\n\nMETHODS\nThe DASS was administered to a non-clinical sample, broadly representative of the general adult UK population (N = 1,771) in terms of demographic variables. Competing models of the latent structure of the DASS were derived from theoretical and empirical sources and evaluated using confirmatory factor analysis. Correlational analysis was used to determine the influence of demographic variables on DASS scores. 
The convergent and discriminant validity of the measure was examined through correlating the measure with two other measures of depression and anxiety (the HADS and the sAD), and a measure of positive and negative affectivity (the PANAS).\n\n\nRESULTS\nThe best fitting model (CFI = .93) of the latent structure of the DASS consisted of three correlated factors corresponding to the depression, anxiety and stress scales with correlated error permitted between items comprising the DASS subscales. Demographic variables had only very modest influences on DASS scores. The reliability of the DASS was excellent, and the measure possessed adequate convergent and discriminant validity.\n\n\nCONCLUSIONS\nThe DASS is a reliable and valid measure of the constructs it was intended to assess. The utility of this measure for UK clinicians is enhanced by the provision of large sample normative data.", "title": "" }, { "docid": "7100fea85ba7c88f0281f11e7ddc04a9", "text": "This paper reports a spoof surface plasmon polariton (SSPP) based multi-band bandpass filter. An efficient back-to-back transition from the quasi-TEM mode of a microstrip line to the SSPP mode has been designed by etching a gradient corrugated structure on the metal strip, while keeping the ground plane unaltered. The SSPP wave is found to be highly confined within the teeth part of the corrugation. A complementary split ring resonator has been etched in the ground plane to obtain a multiband bandpass filter response. Excellent conversion from the quasi-TEM mode to the SSPP mode has been observed.", "title": "" }, { "docid": "7760a3074983f36e385299706ed9a927", "text": "A reflectarray antenna monolithically integrated with 90 RF MEMS switches has been designed and fabricated to achieve switching of the main beam. Aperture coupled microstrip patch antenna (ACMPA) elements are used to form a 10 × 10 element reconfigurable reflectarray antenna operating at 26.5 GHz. The change in the progressive phase shift between the elements is obtained by adjusting the length of the open ended transmission lines in the elements with the RF MEMS switches. The reconfigurable reflectarray is monolithically fabricated with the RF MEMS switches in an area of 42.46 cm² using an in-house surface micromachining and wafer bonding process. The measurement results show that the main beam can be switched between broadside and 40° in the H-plane at 26.5 GHz.", "title": "" }, { "docid": "cf2018b0fc4e61202696386e2be48d93", "text": "We carry out an analysis of typability of terms in ML. Our main result is that this problem is DEXPTIME-hard, where by DEXPTIME we mean DTIME(2^{n^{O(1)}}). This, together with the known exponential-time algorithm that solves the problem, yields the DEXPTIME-completeness result. This settles an open problem of P. Kanellakis and J. C. Mitchell.\nPart of our analysis is an algebraic characterization of ML typability in terms of a restricted form of semi-unification, which we identify as acyclic semi-unification. We prove that ML typability and acyclic semi-unification can be reduced to each other in polynomial time. We believe this result is of independent interest.", "title": "" }, { "docid": "26eff65c0a642fd36d4c37560b8d5cda", "text": "Dual-striplines are gaining popularity in high-density computer system designs to save printed circuit board (PCB) cost and achieve smaller form factor. However, broad-side near-end/far-end crosstalk (NEXT/FEXT) between dual-striplines is a major concern that potentially has a significant impact on the signal integrity. 
In this paper, the broadside coupling between two differential pairs, and a differential pair and a single-ended trace in a dual-stripline design, is investigated and characterized. An innovative design methodology and routing strategy are proposed to effectively mitigate the broad-side coupling without additional routing space.", "title": "" }, { "docid": "0014cb14c7acf1dfad67b3f8f50f69dc", "text": "Latency to end-users and regulatory requirements push large companies to build data centers all around the world. The resulting data is “born” geographically distributed. On the other hand, many Machine Learning applications require a global view of such data in order to achieve the best results. These types of applications form a new class of learning problems, which we call Geo-Distributed Machine Learning (GDML). Such applications need to cope with: 1) scarce and expensive cross-data center bandwidth, and 2) growing privacy concerns that are pushing for stricter data sovereignty regulations. Current solutions to learning from geo-distributed data sources revolve around the idea of first centralizing the data in one data center, and then training locally. As Machine Learning algorithms are communication-intensive, the cost of centralizing the data is thought to be offset by the lower cost of intra-data center communication during training. In this work, we show that the current centralized practice can be far from optimal, and propose a system architecture for doing geo-distributed training. Furthermore, we argue that the geo-distributed approach is structurally more amenable to dealing with regulatory constraints, as raw data never leaves the source data center. Our empirical evaluation on three real datasets confirms the general validity of our approach, and shows that GDML is not only possible but also advisable in many scenarios.", "title": "" }, { "docid": "5db42e1ef0e0cf3d4c1c3b76c9eec6d2", "text": "Named entity recognition is a challenging task that has traditionally required large amounts of knowledge in the form of feature engineering and lexicons to achieve high performance. In this paper, we present a novel neural network architecture that automatically detects word- and character-level features using a hybrid bidirectional LSTM and CNN architecture, eliminating the need for most feature engineering. We also propose a novel method of encoding partial lexicon matches in neural networks and compare it to existing approaches. Extensive evaluation shows that, given only tokenized text and publicly available word embeddings, our system is competitive on the CoNLL-2003 dataset and surpasses the previously reported state of the art performance on the OntoNotes 5.0 dataset by 2.13 F1 points. By using two lexicons constructed from publicly-available sources, we establish new state of the art performance with an F1 score of 91.62 on CoNLL-2003 and 86.28 on OntoNotes, surpassing systems that employ heavy feature engineering, proprietary lexicons, and rich entity linking information.", "title": "" }, { "docid": "bd6115cbcf62434f38ca4b43480b7c5a", "text": "Most existing person re-identification methods focus on finding similarities between persons between pairs of cameras (camera pairwise re-identification) without explicitly maintaining consistency of the results across the network. This may lead to infeasible associations when results from different camera pairs are combined. 
In this paper, we propose a network consistent re-identification (NCR) framework, which is formulated as an optimization problem that not only maintains consistency in re-identification results across the network, but also improves the camera pairwise re-identification performance between all the individual camera pairs. This can be solved as a binary integer programing problem, leading to a globally optimal solution. We also extend the proposed approach to the more general case where all persons may not be present in every camera. Using two benchmark datasets, we validate our approach and compare against state-of-the-art methods.", "title": "" }, { "docid": "9e8a1a70af4e52de46d773cec02f99a7", "text": "In this paper, we build a corpus of tweets from Twitter annotated with keywords using crowdsourcing methods. We identify key differences between this domain and the work performed on other domains, such as news, which makes existing approaches for automatic keyword extraction not generalize well on Twitter datasets. These datasets include the small amount of content in each tweet, the frequent usage of lexical variants and the high variance of the cardinality of keywords present in each tweet. We propose methods for addressing these issues, which leads to solid improvements on this dataset for this task.", "title": "" }, { "docid": "e40eb32613ed3077177d61ac14e82413", "text": "Preamble. Billions of people are using cell phone devices on the planet, essentially in poor posture. The purpose of this study is to assess the forces incrementally seen by the cervical spine as the head is tilted forward, into worsening posture. This data is also necessary for cervical spine surgeons to understand in the reconstruction of the neck.", "title": "" }, { "docid": "6c1a3792b9f92a4a1abd2135996c5419", "text": "Artificial neural networks (ANNs) have been applied in many areas successfully because of their ability to learn, ease of implementation and fast real-time operation. In this research, there are proposed two algorithms. The first is cellular neural network (CNN) with noise level estimation. While the second is modify cellular neural network with noise level estimation. The proposed CNN modification is by adding the Rossler chaos to the CNN fed. Noise level algorithm were used to image noise removal approach in order to get a good image denoising processing with high quality image visual and statistical measures. The results of the proposed system show that the combination of chaos CNN with noise level estimation gives acceptable PSNR and RMSE with a best quality visual vision and small computational time.", "title": "" } ]
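The last passage in the list above reports denoising quality as PSNR and RMSE without spelling out how those numbers are obtained. Purely as an aside, the sketch below shows the standard way these two metrics are usually computed; the image data, the peak value of 255, and the function names are illustrative assumptions, not details taken from that paper.

```python
import numpy as np

def rmse(reference: np.ndarray, denoised: np.ndarray) -> float:
    """Root-mean-square error between two images of the same shape."""
    diff = reference.astype(np.float64) - denoised.astype(np.float64)
    return float(np.sqrt(np.mean(diff ** 2)))

def psnr(reference: np.ndarray, denoised: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB; `peak` is the maximum possible pixel value."""
    err = rmse(reference, denoised)
    if err == 0:
        return float("inf")  # identical images
    return 20.0 * np.log10(peak / err)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
    noisy = np.clip(clean + rng.normal(0, 10, size=clean.shape), 0, 255)
    print(f"RMSE={rmse(clean, noisy):.2f}  PSNR={psnr(clean, noisy):.2f} dB")
```

PSNR is simply the RMSE re-expressed on a logarithmic scale relative to the peak pixel value, which is why the two numbers are commonly reported together as a pair.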
scidocsrr
18148f5dc3b0b61ca640477c84dcd70e
Algorithms for Quantum Computers
[ { "docid": "8eac34d73a2bcb4fa98793499d193067", "text": "We review here the recent success in quantum annealing, i.e., optimization of the cost or energy functions of complex systems utilizing quantum fluctuations. The concept is introduced in successive steps through the studies of mapping of such computationally hard problems to the classical spin glass problems. The quantum spin glass problems arise with the introduction of quantum fluctuations, and the annealing behavior of the systems as these fluctuations are reduced slowly to zero. This provides a general framework for realizing analog quantum computation.", "title": "" } ]
[ { "docid": "6d825778d5d2cb935aab35c60482a267", "text": "As the workforce ages rapidly in industrialized countries, a phenomenon known as the graying of the workforce, new challenges arise for firms as they have to juggle this dramatic demographical change (Trend 1) in conjunction with the proliferation of increasingly modern information and communication technologies (ICTs) (Trend 2). Although these two important workplace trends are pervasive, their interdependencies have remained largely unexplored. While Information Systems (IS) research has established the pertinence of age to IS phenomena from an empirical perspective, it has tended to model the concept merely as a control variable with limited understanding of its conceptual nature. In fact, even the few IS studies that used the concept of age as a substantive variable have mostly relied on stereotypical accounts alone to justify their age-related hypotheses. Further, most of these studies have examined the role of age in the same phenomenon (i.e., initial adoption of ICTs), implying a marked lack of diversity with respect to the phenomena under investigation. Overall, IS research has yielded only limited insight into the role of age in phenomena involving ICTs. In this essay, we argue for the importance of studying agerelated impacts more carefully and across various IS phenomena, and we enable such research by providing a research agenda that IS scholars can use. In doing so, we hope that future research will further both our empirical and conceptual understanding of the managerial challenges arising from the interplay of a graying workforce and rapidly evolving ICTs. 2014 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "514afc7846a1d9c3ce60c2ae392b3e43", "text": "Scientific workflows facilitate automation, reuse, and reproducibility of scientific data management and analysis tasks. Scientific workflows are often modeled as dataflow networks, chaining together processing components (called actors) that query, transform, analyse, and visualize scientific datasets. Semantic annotations relate data and actor schemas with conceptual information from a shared ontology, to support scientific workflow design, discovery, reuse, and validation in the presence of thousands of potentially useful actors and datasets. However, the creation of semantic annotations is complex and time-consuming. We present a calculus and two inference algorithms to automatically propagate semantic annotations through workflow actors described by relational queries. Given an input annotation α and a query q, forward propagation computes an output annotation α′; conversely, backward propagation infers α from q and α′.", "title": "" }, { "docid": "c7f0a749e38b3b7eba871fca80df9464", "text": "This paper presents QurAna: a large corpus created from the original Quranic text, where personal pronouns are tagged with their antecedence. These antecedents are maintained as an ontological list of concepts, which has proved helpful for information retrieval tasks. QurAna is characterized by: (a) comparatively large number of pronouns tagged with antecedent information (over 24,500 pronouns), and (b) maintenance of an ontological concept list out of these antecedents. We have shown useful applications of this corpus. This corpus is the first of its kind covering Classical Arabic text, and could be used for interesting applications for Modern Standard Arabic as well. 
This corpus will enable researchers to obtain empirical patterns and rules to build new anaphora resolution approaches. Also, this corpus can be used to train, optimize and evaluate existing approaches.", "title": "" }, { "docid": "8be33fad66b25a9d3a4b05dbfc1aac5d", "text": "A question-answering system needs to be able to reason about unobserved causes in order to answer questions of the sort that people face in everyday conversations. Recent neural network models that incorporate explicit memory and attention mechanisms have taken steps towards this capability. However, these models have not been tested in scenarios for which reasoning about the unobservable mental states of other agents is necessary to answer a question. We propose a new set of tasks inspired by the well-known false-belief test to examine how a recent question-answering model performs in situations that require reasoning about latent mental states. We find that the model is only successful when the training and test data bear substantial similarity, as it memorizes how to answer specific questions and cannot reason about the causal relationship between actions and latent mental states. We introduce an extension to the model that explicitly simulates the mental representations of different participants in a reasoning task, and show that this capacity increases the model’s performance on our theory of mind test.", "title": "" }, { "docid": "8b57c1f4c865c0a414b2e919d19959ce", "text": "A microstrip HPF with sharp attenuation by using cross-coupling is proposed in this paper. The HPF consists of parallel plate- and gap type- capacitors and inductor lines. The one block of the HPF has two sections of a constant K filter in the bridge T configuration. Thus the one block HPF is first coarsely designed and the performance is optimized by circuit simulator. With the gap capacitor adjusted the proposed HPF illustrates the sharp attenuation characteristics near the cut-off frequency made by cross-coupling between the inductor lines. In order to improve the stopband performance, the cascaded two block HPF is examined. Its measured results show the good agreement with the simulated ones giving the sharper attenuation slope.", "title": "" }, { "docid": "288f8a2dab0c32f85c313f5a145e47a5", "text": "Neural networks have a smooth initial inductive bias, such that small changes in input do not lead to large changes in output. However, in reinforcement learning domains with sparse rewards, value functions have non-smooth structure with a characteristic asymmetric discontinuity whenever rewards arrive. We propose a mechanism that learns an interpolation between a direct value estimate and a projected value estimate computed from the encountered reward and the previous estimate. This reduces the need to learn about discontinuities, and thus improves the value function approximation. Furthermore, as the interpolation is learned and state-dependent, our method can deal with heterogeneous observability. We demonstrate that this one change leads to significant improvements on multiple Atari games, when applied to the state-of-the-art A3C algorithm. 1 Motivation The central problem of reinforcement learning is value function approximation: how to accurately estimate the total future reward from a given state. Recent successes have used deep neural networks to approximate the value function, resulting in state-of-the-art performance in a variety of challenging domains [9]. 
Neural networks are most effective when the desired target function is smooth. However, value functions are, by their very nature, discontinuous functions with sharp variations over time. In this paper we introduce a representation of value that matches the natural temporal structure of value functions. A value function represents the expected sum of future discounted rewards. If non-zero rewards occur infrequently but reliably, then an accurate prediction of the cumulative discounted reward rises as such rewarding moments approach and drops immediately after. This is depicted schematically with the dashed black line in Figure 1. The true value function is quite smooth, except immediately after receiving a reward when there is a sharp drop. This is a pervasive scenario because many domains associate positive or negative reinforcements to salient events (like picking up an object, hitting a wall, or reaching a goal position). The problem is that the agent’s observations tend to be smooth in time, so learning an accurate value estimate near those sharp drops puts strain on the function approximator – especially when employing differentiable function approximators such as neural networks that naturally make smooth maps from observations to outputs. To address this problem, we incorporate the temporal structure of cumulative discounted rewards into the value function itself. The main idea is that, by default, the value function can respect the reward sequence. If no reward is observed, then the next value smoothly matches the previous value, but becomes a little larger due to the discount. If a reward is observed, it should be subtracted out from the previous value: in other words a reward that was expected has now been consumed. The natural value approximator (NVA) combines the previous value with the observed rewards and discounts, which makes this sequence of values easy to represent by a smooth function approximator such as a neural network. Natural value approximators may also be helpful in partially observed environments. Consider a situation in which an agent stands on a hill top. The goal is to predict, at each step, how many steps it will take until the agent has crossed a valley to another hill top in the distance. There is fog in the valley, which means that if the agent’s state is a single observation from the valley it will not be able to accurately predict how many steps remain. In contrast, the value estimate from the initial hill top may be much better, because the observation is richer. This case is depicted schematically in Figure 2. Natural value approximators may be effective in these situations, since they represent the current value in terms of previous value estimates. (Figure 1 caption: After the same amount of training, our proposed method (red) produces much more accurate estimates of the true value function (dashed black), compared to the baseline (blue). The main plot shows discounted future returns as a function of the step in a sequence of states; the inset plot shows the RMSE when training on this data, as a function of network updates. See section 4 for details.) 2 Problem definition. We consider the typical scenario studied in reinforcement learning, in which an agent interacts with an environment at discrete time intervals: at each time step t the agent selects an action as a function of the current state, which results in a transition to the next state and a reward. The goal of the agent is to maximize the discounted sum of rewards collected in the long run from a set of initial states [12]. The interaction between the agent and the environment is modelled as a Markov Decision Process (MDP). An MDP is a tuple (S, A, R, γ, P) where S is a state space, A is an action space, R : S × A × S → D(R) is a reward function that defines a distribution over the reals for each combination of state, action, and subsequent state, P : S × A → D(S) defines a distribution over subsequent states for each state and action, and γ_t ∈ [0, 1] is a scalar, possibly time-dependent, discount factor. One common goal is to make accurate predictions under a behaviour policy π : S → D(A) of the value v_π(s) ≡ E[R_1 + γ_1 R_2 + γ_1 γ_2 R_3 + ... | S_0 = s]. (1) The expectation is over the random variables A_t ∼ π(S_t), S_{t+1} ∼ P(S_t, A_t), and R_{t+1} ∼ R(S_t, A_t, S_{t+1}), ∀t ∈ N. For instance, the agent can repeatedly use these predictions to improve its policy. The values satisfy the recursive Bellman equation [2] v_π(s) = E[R_{t+1} + γ_{t+1} v_π(S_{t+1}) | S_t = s]. We consider the common setting where the MDP is not known, and so the predictions must be learned from samples. The predictions are made by an approximate value function v(s; θ), where θ are parameters that are learned. The approximation of the true value function can be formed by temporal difference (TD) learning [10], where the estimate at time t is updated towards Z^1_t ≡ R_{t+1} + γ_{t+1} v(S_{t+1}; θ) or Z^n_t ≡ Σ_{i=1}^{n} (Π_{k=1}^{i−1} γ_{t+k}) R_{t+i} + (Π_{k=1}^{n} γ_{t+k}) v(S_{t+n}; θ), (2) where Z^n_t is the n-step bootstrap target, and the TD-error is δ^n_t ≡ Z^n_t − v(S_t; θ). 3 Proposed solution: Natural value approximators. The conventional approach to value function approximation produces a value estimate from features associated with the current state. In states where the value approximation is poor, it can be better to rely more on a combination of the observed sequence of rewards and older but more reliable value estimates that are projected forward in time. Combining these estimates can potentially be more accurate than using one alone. These ideas lead to an algorithm that produces three estimates of the value at time t. The first estimate, V_t ≡ v(S_t; θ), is a conventional value function estimate at time t. The second estimate, G^p_t ≡ (G^β_{t−1} − R_t) / γ_t if γ_t > 0 and t > 0, (3) is a projected value estimate computed from the previous value estimate, the observed reward, and the observed discount for time t. The third estimate, G^β_t ≡ β_t G^p_t + (1 − β_t) V_t = (1 − β_t) V_t + β_t (G^β_{t−1} − R_t) / γ_t, (4) is a convex combination of the first two estimates, formed by a time-dependent blending coefficient β_t. This coefficient is a learned function of state β(·; θ) : S → [0, 1], over the same parameters θ, and we denote β_t ≡ β(S_t; θ). We call G^β_t the natural value estimate at time t and we call the overall approach natural value approximators (NVA). Ideally, the natural value estimate will become more accurate than either of its constituents from training. The value is learned by minimizing the sum of two losses. The first loss captures the difference between the conventional value estimate V_t and the target Z_t, weighted by how much it is used in the natural value estimate, J_V ≡ E[ [[1 − β_t]] ([[Z_t]] − V_t)^2 ], (5) where we introduce the stop-gradient identity function [[x]] = x that is defined to have a zero gradient everywhere, that is, gradients are not back-propagated through this function. The second loss captures the difference between the natural value estimate and the target, but it provides gradients only through the coefficient β_t, J_β ≡ E[ ([[Z_t]] − (β_t [[G^p_t]] + (1 − β_t) [[V_t]]))^2 ]. (6) These two losses are summed into a joint loss, J = J_V + c_β J_β, (7) where c_β is a scalar trade-off parameter. When conventional stochastic gradient descent is applied to minimize this loss, the parameters of V_t are adapted with the first loss and the parameters of β_t are adapted with the second loss. When bootstrapping on future values, the most accurate value estimate is best, so using G^β_t instead of V_t leads to refined prediction targets Z^β_t ≡ R_{t+1} + γ_{t+1} G^β_{t+1} or Z^{β,n}_t ≡ Σ_{i=1}^{n} (Π_{k=1}^{i−1} γ_{t+k}) R_{t+i} + (Π_{k=1}^{n} γ_{t+k}) G^β_{t+n}. (8) 4 Illustrative Examples. We now provide some examples of situations where natural value approximations are useful. In both examples, the value function is difficult to estimate well uniformly in all states we might care about, and the accuracy can be improved by using the natural value estimate G^β_t instead of the direct value estimate V_t. (Note the mixed recursion in the definitions: G^p depends on G^β, and vice versa.) Sparse rewards. Figure 1 shows an example of value function approximation. To separate concerns, this is a supervised learning setup (regression) with the true value targets provided (dashed black line). Each point 0 ≤ t ≤ 100 on the horizontal axis corresponds to one state S_t in a single sequence. The shape of the target values stems from a handful of reward events, and discounting with γ = 0.9. We mimic observations that smoothly vary across time by 4 equally spaced radial basis functions, so S_t ∈ R^4. The approximators v(s) and β(s) are two small neural networks with one hidden layer of 32 ReLU units each, and a single linear or sigmoid output unit, respectively. The input", "title": "" }, { "docid": "3cb0bddb1ed916cffdff3624e61d49cd", "text": "This paper presents a new method for computing the configuration-space map of obstacles that is used in motion-planning algorithms. The method derives from the observation that, when the robot is a rigid object that can only translate, the configuration space is a convolution of the workspace and the robot. This convolution is computed with the use of the Fast Fourier Transform (FFT) algorithm. The method is particularly promising for workspaces with many and/or complicated obstacles, or when the shape of the robot is not simple. It is an inherently parallel method that can significantly benefit from existing experience and hardware on the FFT.", "title": "" }, { "docid": "6989ae9a7e6be738d0d2e8261251a842", "text": "A single-feed reconfigurable square-ring patch antenna with pattern diversity is presented. The antenna structure has four shorting walls placed respectively at each edge of the square-ring patch, in which two shorting walls are directly connected to the patch and the others are connected to the patch via pin diodes. By controlling the states of the pin diodes, the antenna can be operated at two different modes: monopolar plat-patch and normal patch modes; moreover, the 10 dB impedance bandwidths of the two modes are overlapped. Consequently, the proposed antenna allows its radiation pattern to be switched electrically between conical and broadside radiations at a fixed frequency. Detailed design considerations of the proposed antenna are described. 
Experimental and simulated results are also shown and discussed", "title": "" }, { "docid": "a408e25435dded29744cf2af0f7da1e5", "text": "Using cloud storage to automatically back up content changes when editing documents is an everyday scenario. We demonstrate that current cloud storage services can cause unnecessary bandwidth consumption, especially for office suite documents, in this common scenario. Specifically, even with incremental synchronization approach in place, existing cloud storage services still incur whole-file transmission every time when the document file is synchronized. We analyze the problem causes in depth, and propose EdgeCourier, a system to address the problem. We also propose the concept of edge-hosed personal service (EPS), which has many benefits, such as helping deploy EdgeCourier easily in practice. We have prototyped the EdgeCourier system, deployed it in the form of EPS in a lab environment, and performed extensive experiments for evaluation. Evaluation results suggest that our prototype system can effectively reduce document synchronization bandwidth with negligible overheads.", "title": "" }, { "docid": "a67f7593ea049be1e2785108b6181f7d", "text": "This paper describes torque characteristics of the interior permanent magnet synchronous motor (IPMSM) using the inexpensive ferrite magnets. IPMSM model used in this study has the spoke and the axial type magnets in the rotor, and torque characteristics are analyzed by the three-dimensional finite element method (3D-FEM). As a result, torque characteristics can be improved by using both the spoke type magnets and the axial type magnets in the rotor.", "title": "" }, { "docid": "241542e915e51ce1505c7d24641e4e0b", "text": "Over the past decade, research has increased our understanding of the effects of physical activity at opposite ends of the spectrum. Sedentary behaviour—too much sitting—has been shown to increase risk of chronic disease, particularly diabetes and cardiovascular disease. There is now a clear need to reduce prolonged sitting. Secondly, evidence on the potential of high intensity interval training inmanaging the same chronic diseases, as well as reducing indices of cardiometabolic risk in healthy adults, has emerged. This vigorous training typically comprises multiple 3-4 minute bouts of high intensity exercise interspersed with several minutes of low intensity recovery, three times a week. Between these two extremes of the activity spectrum is the mainstream public health recommendation for aerobic exercise, which is similar in many developed countries. The suggested target for older adults (≥65) is the same as for other adults (18-64): 150 minutes a week of moderate intensity activity in bouts of 10 minutes or more. It is often expressed as 30 minutes of brisk walking or equivalent activity five days a week, although 75 minutes of vigorous intensity activity spread across the week, or a combination of moderate and vigorous activity are sometimes suggested. Physical activity to improve strength should also be done at least two days a week. The 150 minute target is widely disseminated to health professionals and the public. However, many people, especially in older age groups, find it hard to achieve this level of activity. We argue that when advising patients on exercise doctors should encourage people to increase their level of activity by small amounts rather than focus on the recommended levels. 
The 150 minute target, although warranted, may overshadow other less concrete elements of guidelines. These include finding ways to do more lower intensity lifestyle activity. As people get older, activity may become more relevant for sustaining the strength, flexibility, and balance required for independent living in addition to the strong associations with hypertension, coronary heart disease, stroke, diabetes, breast cancer, and colon cancer. Observational data have confirmed associations between increased physical activity and reduction in musculoskeletal conditions such as arthritis, osteoporosis, and sarcopenia, and better cognitive acuity and mental health. Although these links may be modest and some lack evidence of causality, they may provide sufficient incentives for many people to be more active. Research into physical activity", "title": "" }, { "docid": "ca19a74fde1b9e3a0ab76995de8b0f36", "text": "Sensors on (or attached to) mobile phones can enable attractive sensing applications in different domains, such as environmental monitoring, social networking, healthcare, transportation, etc. We introduce a new concept, sensing as a service (S2aaS), i.e., providing sensing services using mobile phones via a cloud computing system. An S2aaS cloud needs to meet the following requirements: 1) it must be able to support various mobile phone sensing applications on different smartphone platforms; 2) it must be energy-efficient; and 3) it must have effective incentive mechanisms that can be used to attract mobile users to participate in sensing activities. In this vision paper, we identify unique challenges of designing and implementing an S2aaS cloud, review existing systems and methods, present viable solutions, and point out future research directions.", "title": "" }, { "docid": "361e874cccb263b202155ef92e502af3", "text": "String similarity join is an important operation in data integration and cleansing that finds similar string pairs from two collections of strings. More than ten algorithms have been proposed to address this problem in the recent two decades. However, existing algorithms have not been thoroughly compared under the same experimental framework. For example, some algorithms are tested only on specific datasets. This makes it rather difficult for practitioners to decide which algorithms should be used for various scenarios. To address this problem, in this paper we provide a comprehensive survey on a wide spectrum of existing string similarity join algorithms, classify them into different categories based on their main techniques, and compare them through extensive experiments on a variety of real-world datasets with different characteristics. We also report comprehensive findings obtained from the experiments and provide new insights about the strengths and weaknesses of existing similarity join algorithms which can guide practitioners to select appropriate algorithms for various scenarios.", "title": "" }, { "docid": "88660d823f1c20cf0b75b665c66af696", "text": "A pectus index can be derived from dividing the transverse diameter of the chest by the anterior-posterior diameter on a simple CT scan. In a preliminary report, all patients who required operative correction for pectus excavatum had a pectus index greater than 3.25 while matched normal controls were all less than 3.25. 
A simple CT scan may be a useful adjunct in objective evaluation of children and teenagers for surgery of pectus excavatum.", "title": "" }, { "docid": "65bea826c88408b87ce2e2c17944835c", "text": "The broad spectrum of clinical signs in canine cutaneous epitheliotropic T-cell lymphoma mimics many inflammatory skin diseases and is a diagnostic challenge. A 13-year-old-male castrated golden retriever crossbred dog presented with multifocal flaccid bullae evolving into deep erosions. A shearing force applied to the skin at the periphery of the erosions caused the epidermis to further slide off the dermis suggesting intraepidermal or subepidermal separation. Systemic signs consisted of profound weight loss and marked respiratory distress. Histologically, the superficial and deep dermis were infiltrated by large, CD3-positive neoplastic lymphocytes and mild epitheliotropism involved the deep epidermis, hair follicle walls and epitrichial sweat glands. There was partial loss of the stratum basale. Bullous lesions consisted of large dermoepidermal and intraepidermal clefts that contained loose accumulations of neutrophils mixed with fewer neoplastic cells in proteinaceous fluid. The lifted epidermis was often devitalized and bordered by hydropic degeneration and partial epidermal collapse. Similar neoplastic lymphocytes formed small masses in the lungs associated with broncho-invasion. Clonal rearrangement analysis of antigen receptor genes in samples from skin and lung lesions using primers specific for canine T-cell receptor gamma (TCRgamma) produced a single-sized amplicon of identical sequence, indicating that both lesions resulted from the expansion of the same neoplastic T-cell population. Macroscopic vesiculobullous lesions with devitalization of the lesional epidermis should be included in the broad spectrum of clinical signs presented by canine cutaneous epitheliotropic T-cell lymphoma.", "title": "" }, { "docid": "ead6596d7f368da713f36f572c79bf94", "text": "The total variation (TV) model is a classical and effective model in image denoising, but the weighted total variation (WTV) model has not attracted much attention. In this paper, we propose a new constrained WTV model for image denoising. A fast denoising dual method for the new constrained WTV model is also proposed. To achieve this task, we combines the well known gradient projection (GP) and the fast gradient projection (FGP) methods on the dual approach for the image denoising problem. Experimental results show that the proposed method outperforms currently known GP andFGP methods, and canbe applicable to both the isotropic and anisotropic WTV functions.", "title": "" }, { "docid": "89aa13fe76bf48c982e44b03acb0dd3d", "text": "Stock trading strategy plays a crucial role in investment companies. However, it is challenging to obtain optimal strategy in the complex and dynamic stock market. We explore the potential of deep reinforcement learning to optimize stock trading strategy and thus maximize investment return. 30 stocks are selected as our trading stocks and their daily prices are used as the training and trading market environment. We train a deep reinforcement learning agent and obtain an adaptive trading strategy. The agent’s performance is evaluated and compared with Dow Jones Industrial Average and the traditional min-variance portfolio allocation strategy. 
The proposed deep reinforcement learning approach is shown to outperform the two baselines in terms of both the Sharpe ratio and cumulative returns.", "title": "" }, { "docid": "04fe2706a8da54365e4125867613748b", "text": "We consider a sequence of multinomial data for which the probabilities associated with the categories are subject to abrupt changes of unknown magnitudes at unknown locations. When the number of categories is comparable to or even larger than the number of subjects allocated to these categories, conventional methods such as the classical Pearson’s chi-squared test and the deviance test may not work well. Motivated by high-dimensional homogeneity tests, we propose a novel change-point detection procedure that allows the number of categories to tend to infinity. The null distribution of our test statistic is asymptotically normal and the test performs well with finite samples. The number of change-points is determined by minimizing a penalized objective function based on segmentation, and the locations of the change-points are estimated by minimizing the objective function with the dynamic programming algorithm. Under some mild conditions, the consistency of the estimators of multiple change-points is established. Simulation studies show that the proposed method performs satisfactorily for identifying change-points in terms of power and estimation accuracy, and it is illustrated with an analysis of a real data set.", "title": "" }, { "docid": "2cebd2fd12160d2a3a541989293f10be", "text": "A compact Vivaldi antenna array printed on thick substrate and fed by a Substrate Integrated Waveguides (SIW) structure has been developed. The antenna array utilizes a compact SIW binary divider to significantly minimize the feed structure insertion losses. The low-loss SIW binary divider has a common novel Grounded Coplanar Waveguide (GCPW) feed to provide a wideband transition to the SIW and to sustain a good input match while preventing higher order modes excitation. The antenna array was designed, fabricated, and thoroughly investigated. Detailed simulations of the antenna and its feed, in addition to its relevant measurements, will be presented in this paper.", "title": "" }, { "docid": "6f94a57f7ae1a818c3bd5e7f6f2cea0f", "text": "We propose a novel hybrid metric learning approach to combine multiple heterogenous statistics for robust image set classification. Specifically, we represent each set with multiple statistics – mean, covariance matrix and Gaussian distribution, which generally complement each other for set modeling. However, it is not trivial to fuse them since the mean vector with d-dimension often lies in Euclidean space R, whereas the covariance matrix typically resides on Riemannian manifold Sym+d . Besides, according to information geometry, the space of Gaussian distribution can be embedded into another Riemannian manifold Sym+d+1. To fuse these statistics from heterogeneous spaces, we propose a Hybrid Euclidean-and-Riemannian Metric Learning (HERML) method to exploit both Euclidean and Riemannian metrics for embedding their original spaces into high dimensional Hilbert spaces and then jointly learn hybrid metrics with discriminant constraint. The proposed method is evaluated on two tasks: set-based object categorization and video-based face recognition. Extensive experimental results demonstrate that our method has a clear superiority over the state-of-the-art methods.", "title": "" } ]
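One passage earlier in this list (the natural value approximator one) defines a projected estimate G^p_t = (G^β_{t−1} − R_t)/γ_t and blends it with the direct estimate as G^β_t = β_t G^p_t + (1 − β_t) V_t. The sketch below just runs that recursion over a made-up trajectory to make the bookkeeping concrete; the β values are fixed here rather than learned, the fallback G^p_t = V_t at t = 0 (or when γ_t = 0) is my own simplification, and all numbers are invented.

```python
import numpy as np

def natural_value_estimates(rewards, discounts, v, beta):
    """Blend direct value estimates v[t] with reward-corrected projections of
    the previous blended estimate, following the G^p / G^beta recursion."""
    T = len(rewards)
    g_beta = np.zeros(T)
    for t in range(T):
        if t == 0 or discounts[t] <= 0.0:
            g_p = v[t]  # no usable projection at t = 0 or when gamma_t = 0 (assumed fallback)
        else:
            g_p = (g_beta[t - 1] - rewards[t]) / discounts[t]
        g_beta[t] = beta[t] * g_p + (1.0 - beta[t]) * v[t]
    return g_beta

if __name__ == "__main__":
    T = 6
    rewards   = np.array([0.0, 0.0, 1.0, 0.0, 0.0, 1.0])
    discounts = np.full(T, 0.9)
    v_direct  = np.array([1.5, 1.7, 1.9, 1.4, 1.6, 1.8])  # made-up network outputs
    beta      = np.full(T, 0.5)                            # fixed blend, for illustration only
    print(natural_value_estimates(rewards, discounts, v_direct, beta))
```

In training, V and β would be network outputs and the blended estimate would feed the two losses described in that passage; here the goal is only to show how past estimates are carried forward and corrected by observed rewards and discounts.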
scidocsrr
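Each row of this dump follows the same schema: a query_id, a query string, lists of positive and negative passages (each with docid, text, and title), and a subset tag. Purely as an illustration of that layout, the sketch below loads one such row and flattens it into (query, passage, label) pairs; the abbreviated row, docids, and function names are placeholders rather than entries copied verbatim from the dataset, and the one-object-per-row JSON framing is an assumption.

```python
import json

# One row of the retrieval dataset, abbreviated; the field names mirror the dump above.
row_json = '''{
  "query_id": "18148f5dc3b0b61ca640477c84dcd70e",
  "query": "Algorithms for Quantum Computers",
  "positive_passages": [{"docid": "8eac34d7", "text": "We review here ...", "title": ""}],
  "negative_passages": [{"docid": "6d825778", "text": "As the workforce ages ...", "title": ""}],
  "subset": "scidocsrr"
}'''

def iter_pairs(row):
    """Yield (query, passage_text, label) pairs: label 1 for positives, 0 for negatives."""
    for passage in row.get("positive_passages", []):
        yield row["query"], passage["text"], 1
    for passage in row.get("negative_passages", []):
        yield row["query"], passage["text"], 0

if __name__ == "__main__":
    row = json.loads(row_json)
    for query, text, label in iter_pairs(row):
        print(label, query, "->", text[:40], "...")
```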
f37758f413116485b19c3bd274d4d426
Learning Beyond Human Expertise with Generative Models for Dental Restorations
[ { "docid": "7aedb5ffa83448c21c33e0573a9a41a2", "text": "Current generative frameworks use end-to-end learning and generate images by sampling from uniform noise distribution. However, these approaches ignore the most basic principle of image formation: images are product of: (a) Structure: the underlying 3D model; (b) Style: the texture mapped onto structure. In this paper, we factorize the image generation process and propose Style and Structure Generative Adversarial Network (S-GAN). Our S-GAN has two components: the StructureGAN generates a surface normal map; the Style-GAN takes the surface normal map as input and generates the 2D image. Apart from a real vs. generated loss function, we use an additional loss with computed surface normals from generated images. The two GANs are first trained independently, and then merged together via joint learning. We show our S-GAN model is interpretable, generates more realistic images and can be used to learn unsupervised RGBD representations.", "title": "" } ]
[ { "docid": "4abceedb1f6c735a8bc91bc811ce4438", "text": "The study of school bullying has recently assumed an international dimension, but is faced with difficulties in finding terms in different languages to correspond to the English word bullying. To investigate the meanings given to various terms, a set of 25 stick-figure cartoons was devised, covering a range of social situations between peers. These cartoons were shown to samples of 8- and 14-year-old pupils (N = 1,245; n = 604 at 8 years, n = 641 at 14 years) in schools in 14 different countries, who judged whether various native terms cognate to bullying, applied to them. Terms from 10 Indo-European languages and three Asian languages were sampled. Multidimensional scaling showed that 8-year-olds primarily discriminated nonaggressive and aggressive cartoon situations; however, 14-year-olds discriminated fighting from physical bullying, and also discriminated verbal bullying and social exclusion. Gender differences were less appreciable than age differences. Based on the 14-year-old data, profiles of 67 words were then constructed across the five major cartoon clusters. The main types of terms used fell into six groups: bullying (of all kinds), verbal plus physical bullying, solely verbal bullying, social exclusion, solely physical aggression, and mainly physical aggression. The findings are discussed in relation to developmental trends in how children understand bullying, the inferences that can be made from cross-national studies, and the design of such studies.", "title": "" }, { "docid": "b861ea3b6ea6d29e1c225609db069fd5", "text": "A single probe feeding stacked microstrip antenna is presented to obtain dual-band circularly polarized (CP) characteristics using double layers of truncated square patches. The antenna operates at both the L1 and L2 frequencies of 1575 and 1227 MHz for the global positioning system (GPS). With the optimized design, the measured axial ratio (AR) bandwidths with the centre frequencies of L1 and L2 are both greater than 50 MHz, while the impedance characteristics within AR bandwidth satisfy the requirement of VSWR less than 2. At L1 and L2 frequencies, the AR measured is 0.7 dB and 0.3 dB, respectively.", "title": "" }, { "docid": "df4d0112eecfcc5c6c57784d1a0d010d", "text": "2 The design and measured results are reported on three prototype DC-DC converters which successfully demonstrate the design techniques of this thesis and the low-power enabling capabilities of DC-DC converters in portable applications. Voltage scaling for low-power throughput-constrained digital signal processing is reviewed and is shown to provide up to an order of magnitude power reduction compared to existing 3.3 V standards when enabled by high-efficiency low-voltage DC-DC conversion. A new ultra-low-swing I/O strategy, enabled by an ultra-low-voltage and low-power DCDC converter, is used to reduce the power of high-speed inter-chip communication by greater than two orders of magnitude. Dynamic voltage scaling is proposed to dynamically trade general-purpose processor throughput for energy-efficiency, yielding up to an order of magnitude improvement in the average energy per operation of the processor. This is made possible by a new class of voltage converter, called the dynamic DC-DC converter, whose primary performance objectives and design considerations are introduced in this thesis. Robert W. 
Brodersen, Chairman of Committee Table of", "title": "" }, { "docid": "01ccb35abf3eed71191dc8638e58f257", "text": "In this paper we describe several fault attacks on the Advanced Encryption Standard (AES). First, using optical fault induction attacks as recently publicly presented by Skorobogatov and Anderson [SA], we present an implementation independent fault attack on AES. This attack is able to determine the complete 128-bit secret key of a sealed tamper-proof smartcard by generating 128 faulty cipher texts. Second, we present several implementationdependent fault attacks on AES. These attacks rely on the observation that due to the AES's known timing analysis vulnerability (as pointed out by Koeune and Quisquater [KQ]), any implementation of the AES must ensure a data independent timing behavior for the so called AES's xtime operation. We present fault attacks on AES based on various timing analysis resistant implementations of the xtime-operation. Our strongest attack in this direction uses a very liberal fault model and requires only 256 faulty encryptions to determine a 128-bit key.", "title": "" }, { "docid": "8c12dfc5fa23d5eabb8ae29101cb6161", "text": "Purpose – Using Internet Archive’s Wayback Machine, higher education web sites were retrospectively analyzed to study the effects that technological advances in web design have had on accessibility for persons with disabilities. Design/methodology/approach – A convenience sample of higher education web sites was studied for years 1997-2002. The homepage and pages 1-level down were evaluated. Web accessibility barrier (WAB) and complexity scores were calculated. Repeated measures analysis of variance (ANOVA) was used to determine trends in the data and Pearson’s correlation (r) was computed to evaluate the relationship between accessibility and complexity. Findings – Higher education web sites become progressively inaccessible as complexity increases. Research limitations/implications – The WAB score is a proxy of web accessibility. While the WAB score can give an indication of the accessibility of a web site, it cannot differentiate between barriers posing minimal limitations and those posing absolute inaccessibility. A future study is planned to have users with disabilities examine web sites with differing WAB scores to correlate how well the WAB score is gauging accessibility of web sites from the perspective of the user. Practical implications – Findings from studies such as this can lead to improved guidelines, policies, and overall awareness of web accessibility for persons with disabilities. Originality/value – There are limited studies that have taken a longitudinal look at the accessibility of web sites and explored the reasons for the trend of decreasing accessibility.", "title": "" }, { "docid": "b765a75438d9abd381038e1b84128004", "text": "Implementing a complex spelling program using a steady-state visual evoked potential (SSVEP)-based brain-computer interface (BCI) remains a challenge due to difficulties in stimulus presentation and target identification. This study aims to explore the feasibility of mixed frequency and phase coding in building a high-speed SSVEP speller with a computer monitor. A frequency and phase approximation approach was developed to eliminate the limitation of the number of targets caused by the monitor refresh rate, resulting in a speller comprising 32 flickers specified by eight frequencies (8-15 Hz with a 1 Hz interval) and four phases (0°, 90°, 180°, and 270°). 
A multi-channel approach incorporating Canonical Correlation Analysis (CCA) and SSVEP training data was proposed for target identification. In a simulated online experiment, at a spelling rate of 40 characters per minute, the system obtained an averaged information transfer rate (ITR) of 166.91 bits/min across 13 subjects with a maximum individual ITR of 192.26 bits/min, the highest ITR ever reported in electroencephalogram (EEG)-based BCIs. The results of this study demonstrate great potential of a high-speed SSVEP-based BCI in real-life applications.", "title": "" }, { "docid": "41cf1b873d69f15cbc5fa25e849daa61", "text": "Methods for controlling the bias/variance tradeoff typically assume that overfitting or overtraining is a global phenomenon. For multi-layer perceptron (MLP) neural networks, global parameters such as the training time (e.g. based on validation tests), network size, or the amount of weight decay are commonly used to control the bias/variance tradeoff. However, the degree of overfitting can vary significantly throughout the input space of the model. We show that overselection of the degrees of freedom for an MLP trained with backpropagation can improve the approximation in regions of underfitting, while not significantly overfitting in other regions. This can be a significant advantage over other models. Furthermore, we show that “better” learning algorithms such as conjugate gradient can in fact lead to worse generalization, because they can be more prone to creating varying degrees of overfitting in different regions of the input space. While experimental results cannot cover all practical situations, our results do help to explain common behavior that does not agree with theoretical expectations. Our results suggest one important reason for the relative success of MLPs, bring into question common beliefs about neural network training regarding training algorithms, overfitting, and optimal network size, suggest alternate guidelines for practical use (in terms of the training algorithm and network size selection), and help to direct future work (e.g. regarding the importance of the MLP/BP training bias, the possibility of worse performance for “better” training algorithms, local “smoothness” criteria, and further investigation of localized overfitting).", "title": "" }, { "docid": "6c6206e330f0d9b7f9ed68f8af78b117", "text": "This paper deals with the design, manufacture and test of a high efficiency power amplifier for L-band space borne applications. The circuit operates with a single 36 mm gate periphery GaN HEMT power bar die allowing both improved integration and performance as compared with standard HPA design in a similar RF power range. A huge effort dedicated to the device's characterization and modeling has eased the circuit optimization leaning on the multi-harmonics impedances synthesis. Test results demonstrate performance up to 140 W RF output power with an associated 60% PAE for a limited 3.9 dB gain compression under 50 V supply voltage using a single GaN power bar.", "title": "" }, { "docid": "751b853f780fc8047ff73ce646b68cd6", "text": "This paper builds on previous research in the light field area of image-based rendering. We present a new reconstruction filter that significantly reduces the “ghosting” artifacts seen in undersampled light fields, while preserving important high-fidelity features such as sharp object boundaries and view-dependent reflectance. 
By improving the rendering quality achievable from undersampled light fields, our method allows acceptable images to be generated from smaller image sets. We present both frequency and spatial domain justifications for our techniques. We also present a practical framework for implementing the reconstruction filter in multiple rendering passes. CR Categories: I.3.3 [Computer Graphics]: Picture/Image Generation ― Viewing algorithms; I.3.6 [Computer Graphics]: Methodologies and Techniques ― Graphics data structures and data types; I.4.1 [Image Processing and Computer Vision]: Digitization and Image Capture ― Sampling", "title": "" }, { "docid": "2f389011aad9236f174b15e37dc73cd3", "text": "A new efficient optimization method, called ‘Teaching–Learning-Based Optimization (TLBO)’, is proposed in this paper for the optimization of mechanical design problems. This method works on the effect of influence of a teacher on learners. Like other nature-inspired algorithms, TLBO is also a population-based method and uses a population of solutions to proceed to the global solution. The population is considered as a group of learners or a class of learners. The process of TLBO is divided into two parts: the first part consists of the ‘Teacher Phase’ and the second part consists of the ‘Learner Phase’. ‘Teacher Phase’ means learning from the teacher and ‘Learner Phase’ means learning by the interaction between learners. The basic philosophy of the TLBO method is explained in detail. To check the effectiveness of the method it is tested on five different constrained benchmark test functions with different characteristics, four different benchmark mechanical design problems and six mechanical design optimization problems which have real world applications. The effectiveness of the TLBO method is compared with the other populationbased optimization algorithms based on the best solution, average solution, convergence rate and computational effort. Results show that TLBO is more effective and efficient than the other optimization methods for the mechanical design optimization problems considered. This novel optimization method can be easily extended to other engineering design optimization problems. © 2011 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "e8755da242b6252eb516aec6e74d42c0", "text": "Cloud data provenance, or \"what has happened to my data in the cloud\", is a critical data security component which addresses pressing data accountability and data governance issues in cloud computing systems. In this paper, we present Progger (Provenance Logger), a kernel-space logger which potentially empowers all cloud stakeholders to trace their data. Logging from the kernel space empowers security analysts to collect provenance from the lowest possible atomic data actions, and enables several higher-level tools to be built for effective end-to-end tracking of data provenance. Within the last few years, there has been an increasing number of proposed kernel space provenance tools but they faced several critical data security and integrity problems. Some of these prior tools' limitations include (1) the inability to provide log tamper-evidence and prevention of fake/manual entries, (2) accurate and granular timestamp synchronisation across several machines, (3) log space requirements and growth, and (4) the efficient logging of root usage of the system. Progger has resolved all these critical issues, and as such, provides high assurance of data security and data activity audit. 
With this in mind, the paper will discuss these elements of high-assurance cloud data provenance, describe the design of Progger and its efficiency, and present compelling results which paves the way for Progger being a foundation tool used for data activity tracking across all cloud systems.", "title": "" }, { "docid": "a490c396ff6d47e11f35d2f08776b7fc", "text": "The present study examined the nature of social support exchanged within an online HIV/AIDS support group. Content analysis was conducted with reference to five types of social support (information support, tangible assistance, esteem support, network support, and emotional support) on 85 threads (1,138 messages). Our analysis revealed that many of the messages offered informational and emotional support, followed by esteem support and network support, with tangible assistance the least frequently offered. Results suggest that this online support group is a popular forum through which individuals living with HIV/AIDS can offer social support. Our findings have implications for health care professionals who support individuals living with HIV/AIDS.", "title": "" }, { "docid": "ecab65461852051278a59482ad49c225", "text": "We show that a set of gates that consists of all one-bit quantum gates (U(2)) and the two-bit exclusive-or gate (that maps Boolean values (x; y) to (x; xy)) is universal in the sense that all unitary operations on arbitrarily many bits n (U(2 n)) can be expressed as compositions of these gates. We investigate the number of the above gates required to implement other gates, such as generalized Deutsch-Toooli gates, that apply a speciic U(2) transformation to one input bit if and only if the logical AND of all remaining input bits is satissed. These gates play a central role in many proposed constructions of quantum computational networks. We derive upper and lower bounds on the exact number of elementary gates required to build up a variety of two-and three-bit quantum gates, the asymptotic number required for n-bit Deutsch-Toooli gates, and make some observations about the number required for arbitrary n-bit unitary operations.", "title": "" }, { "docid": "3f2312e385fc1c9aafc6f9f08e2e2d4f", "text": "Entity relation detection is a form of information extraction that finds predefined relations between pairs of entities in text. This paper describes a relation detection approach that combines clues from different levels of syntactic processing using kernel methods. Information from three different levels of processing is considered: tokenization, sentence parsing and deep dependency analysis. Each source of information is represented by kernel functions. Then composite kernels are developed to integrate and extend individual kernels so that processing errors occurring at one level can be overcome by information from other levels. We present an evaluation of these methods on the 2004 ACE relation detection task, using Support Vector Machines, and show that each level of syntactic processing contributes useful information for this task. When evaluated on the official test data, our approach produced very competitive ACE value scores. We also compare the SVM with KNN on different kernels.", "title": "" }, { "docid": "191a81ae4b60c48a01deb8d64d5c7f42", "text": "This paper elaborates a study conducted to propose a folktale conceptual model based on folktale classification systems of type, motif, and function. 
Globally, three distinguish folktale classification systems exist and have been used for many years nonetheless not actually converge and achieve agreement on classification issues. The study aims to develop a conceptual model that visually depicts the combination and connection of the three folktale classification systems. The method opted for the conceptual model development is pictorial representation. It is hoped that the conceptual model developed would be an early platform to subsequently catalyze more robust and cohesive folktale classification system.", "title": "" }, { "docid": "c91df82c01cbf7d1f2666c43e96a5787", "text": "The past few years have witnessed an explosion in the availability of data from multiple sources and modalities. For example, millions of cameras have been installed in buildings, streets, airports and cities around the world. This has generated extraordinary advances on how to acquire, compress, store, transmit and process massive amounts of complex high-dimensional data. Many of these advances have relied on the observation that, even though these data sets are high-dimensional, their intrinsic dimension is often much smaller than the dimension of the ambient space. In computer vision, for example, the number of pixels in an image can be rather large, yet most computer vision models use only a few parameters to describe the appearance, geometry and dynamics of a scene. This has motivated the development of a number of techniques for finding a low-dimensional representation of a high-dimensional data set. Conventional techniques, such as Principal Component Analysis (PCA), assume that the data is drawn from a single low-dimensional subspace of a high-dimensional space. Such approaches have found widespread applications in many fields, e.g., pattern recognition, data compression, image processing, bioinformatics, etc. In practice, however, the data points could be drawn from multiple subspaces and the membership of the data points to the subspaces might be unknown. For instance, a video sequence could contain several moving objects and different subspaces might be needed to describe the motion of different objects in the scene. Therefore, there is a need to simultaneously cluster the data into multiple subspaces and find a low-dimensional subspace fitting each group of points. This problem, known as subspace clustering, has found numerous applications in computer vision (e.g., image segmentation [1], motion segmentation [2] and face clustering [3]), image processing (e.g., image representation and compression [4]) and systems theory (e.g., hybrid system identification [5]). A number of approaches to subspace clustering have been proposed in the past two decades. A review of methods from the data mining community can be found in [6]. This article will present methods from the machine learning and computer vision communities, including algebraic methods [7, 8, 9, 10], iterative methods [11, 12, 13, 14, 15], statistical methods [16, 17, 18, 19, 20], and spectral clustering-based methods [7, 21, 22, 23, 24, 25, 26, 27]. We review these methods, discuss their advantages and disadvantages, and evaluate their performance on the motion segmentation and face clustering problems. P", "title": "" }, { "docid": "3155879d5264ad723de6051075d47ee2", "text": "We have shown that there is a difference between individuals in their tendency to deposit DNA on an item when it is touched. 
While a good DNA shedder may leave behind a full DNA profile immediately after hand washing, poor DNA shedders may only do so when their hands have not been washed for a period of 6h. We have also demonstrated that transfer of DNA from one individual (A) to another (B) and subsequently to an object is possible under specific laboratory conditions using the AMPFISTR SGM Plus multiplex at both 28 and 34 PCR cycles. This is a form of secondary transfer. If a 30 min or 1h delay was introduced before contact of individual B with the object then at 34 cycles a mixture of profiles from both individuals was recovered. We have also determined that the quantity and quality of DNA profiles recovered is dependent upon the particular individuals involved in the transfer process. The findings reported here are preliminary and further investigations are underway in order to further add to understanding of the issues of DNA transfer and persistence.", "title": "" }, { "docid": "5c2f115e0159d15a87904e52879c1abf", "text": "Current approaches for visual--inertial odometry (VIO) are able to attain highly accurate state estimation via nonlinear optimization. However, real-time optimization quickly becomes infeasible as the trajectory grows over time; this problem is further emphasized by the fact that inertial measurements come at high rate, hence, leading to the fast growth of the number of variables in the optimization. In this paper, we address this issue by preintegrating inertial measurements between selected keyframes into single relative motion constraints. Our first contribution is a preintegration theory that properly addresses the manifold structure of the rotation group. We formally discuss the generative measurement model as well as the nature of the rotation noise and derive the expression for the maximum a posteriori state estimator. Our theoretical development enables the computation of all necessary Jacobians for the optimization and a posteriori bias correction in analytic form. The second contribution is to show that the preintegrated inertial measurement unit model can be seamlessly integrated into a visual--inertial pipeline under the unifying framework of factor graphs. This enables the application of incremental-smoothing algorithms and the use of a structureless model for visual measurements, which avoids optimizing over the 3-D points, further accelerating the computation. We perform an extensive evaluation of our monocular VIO pipeline on real and simulated datasets. The results confirm that our modeling effort leads to an accurate state estimation in real time, outperforming state-of-the-art approaches.", "title": "" }, { "docid": "dcd2bb092e7d4325a64d8f3b9d729f94", "text": "Uninterruptible power supplies (UPS) are widely used to provide reliable and high-quality power to critical loads in all grid conditions. This paper proposes a nonisolated online UPS system. The proposed system consists of bridgeless PFC boost rectifier, battery charger/discharger, and an inverter. A new battery charger/discharger has been implemented which ensures the bidirectional flow of power between dc link and battery bank, reducing the battery bank voltage to only 24V, and regulates the dc-link voltage during the battery power mode. Operating batteries in parallel improves the battery performance and resolve the problems related to conventional battery banks that arrange batteries in series. 
A new control method, integrating slide mode and proportional-resonant control, for the inverter has been proposed which regulates the output voltage for both linear and nonlinear loads. The controller exhibits excellent performance during transients and step changes in load. The operating principle and experimental results of 1-kVA prototype have been presented for validation of the proposed system.", "title": "" }, { "docid": "3509f90848c45ad34ebbd30b9d357c29", "text": "Explaining underlying causes or effects about events is a challenging but valuable task. We define a novel problem of generating explanations of a time series event by (1) searching cause and effect relationships of the time series with textual data and (2) constructing a connecting chain between them to generate an explanation. To detect causal features from text, we propose a novel method based on the Granger causality of time series between features extracted from text such as N-grams, topics, sentiments, and their composition. The generation of the sequence of causal entities requires a commonsense causative knowledge base with efficient reasoning. To ensure good interpretability and appropriate lexical usage we combine symbolic and neural representations, using a neural reasoning algorithm trained on commonsense causal tuples to predict the next cause step. Our quantitative and human analysis show empirical evidence that our method successfully extracts meaningful causality relationships between time series with textual features and generates appropriate explanation between them.", "title": "" } ]
scidocsrr
10e2cbfa32f8e2e6759561c28dfd1938
Constructing Thai Opinion Mining Resource: A Case Study on Hotel Reviews
[ { "docid": "8a2586b1059534c5a23bac9c1cc59906", "text": "The web contains a wealth of product reviews, but sifting through them is a daunting task. Ideally, an opinion mining tool would process a set of search results for a given item, generating a list of product attributes (quality, features, etc.) and aggregating opinions about each of them (poor, mixed, good). We begin by identifying the unique properties of this problem and develop a method for automatically distinguishing between positive and negative reviews. Our classifier draws on information retrieval techniques for feature extraction and scoring, and the results for various metrics and heuristics vary depending on the testing situation. The best methods work as well as or better than traditional machine learning. When operating on individual sentences collected from web searches, performance is limited due to noise and ambiguity. But in the context of a complete web-based tool and aided by a simple method for grouping sentences into attributes, the results are qualitatively quite useful.", "title": "" } ]
[ { "docid": "b1bb5751e409d0fe44754624a4145e70", "text": "Capacity planning determines the optimal product mix based on the available tool sets and allocates production capacity according to the forecasted demands for the next few months. MaxIt is the previous capacity planning system for Intel's Flash Product Group (FPG) Assembly & Test Manufacturing (ATM). It only applied to single product family scenarios with simple process routing. However, new Celluar Handhold Group (CHG) products need to go through flexible and reentrant ATM routes. In this paper, we introduce MaxItPlus, which is an enhanced MaxIt using MILP (mixed integer linear programming) to conduct capacity planning of multiple product families with mixed process routes in a multifactory ATM environment. We also present the detailed mathematical formulation, the system architecture, and implementation results. The project will help Intel global Flash ATM to achieve a single and efficient capacity planning process for all FPG and CHG products and gain $10 M in marginal profit (as determined by the finance department)", "title": "" }, { "docid": "dd11d7291d8f0ee2313b74dc5498acfa", "text": "Going further At this point, the theorem is proved. While for every summarizer σ there exists at least one tuple (θ,O), in practice there exist multiple tuples, and the one proposed by the proof would not be useful to rank models of summary quality. We can formulate an algorithm which constructs θ from σ and which yields an ordering of candidate summaries. Let σD\\{s1,...,sn} be the summarizer σ which still uses D as initial document collection, but which is not allowed to output sentences from {s1, . . . , sn} in the final summary. For a given summary S to score, let Rσ,S be the smallest set of sentences {s1, . . . , sn} that one has to remove fromD such that σD\\R outputs S. Then the definition of θσ follows:", "title": "" }, { "docid": "11b20602fc9d6e97a5bcc857da7902d0", "text": "This research investigates the Quality of Service (QoS) interaction at the edge of differentiated service (DiffServ) domain, denoted by video gateway (VG). VG is responsible for coordinating the QoS mapping between video applications and DiffServ enabled network. To accomplish the goal of achieving economical and high-quality end-to-end video streaming, which utilizes its awareness of relative service differentiation, the proposed QoS control framework includes the following three components: 1) the relative priority based indexing and categorization of streaming video content at sender, 2) the differentiated QoS levels with load variation in DiffServ networks, and 3) the feedforward and feedback mechanisms assisting QoS mapping of categorized index to DS level at the proposed VG. Especially, we focus on building a framework for dynamic QoS mapping, which intends to overcome both the QoS demand variations of CM applications (e.g., varying priorities from aggregated/categorized packets) and the QoS supply variations of DiffServ network (e.g., varying loss/delay due to fluctuating network loads). Thus, with the proposed QoS controls in both feedforward and feedback fashion, enhanced quality provisioning for CM applications (especially video streaming) is investigated under the given pricing model (e.g., DS level differentiated price/packet).", "title": "" }, { "docid": "c4f9c924963cadc658ad9c97560ea252", "text": "A novel broadband circularly polarized (CP) antenna is proposed. 
The operating principle of this CP antenna is different from those of conventional CP antennas. An off-center-fed dipole is introduced to achieve the 90° phase difference required for circular polarization. The new CP antenna consists of two off-center-fed dipoles. Combining such two new CP antennas leads to a bandwidth enhancement for circular polarization. A T-shaped microstrip probe is used to excite the broadband CP antenna, featuring a simple planar configuration. It is shown that the new broadband CP antenna achieves an axial ratio (AR) bandwidth of 55% (1.69-3.0 GHz) for AR <; 3 dB, an impedance bandwidth of 60% (1.7-3.14 GHz) for return loss (RL) > 15 dB, and an antenna gain of 6-9 dBi. The new mechanism for circular polarization is described and an experimental verification is presented.", "title": "" }, { "docid": "5268fd63c99f43d1a155c0078b2e5df5", "text": "With Docker gaining widespread popularity in the recent years, the container scheduler becomes a crucial role for the exploding containerized applications and services. In this work, the container host energy conservation, the container image pulling costs from the image registry to the container hosts and the workload network transition costs from the clients to the container hosts are evaluated in combination. By modeling the scheduling problem as an integer linear programming, an effective and adaptive scheduler is proposed. Impressive cost savings were achieved compared to Docker Swarm scheduler. Moreover, it can be easily integrated into the open-source container orchestration frameworks.", "title": "" }, { "docid": "4645d0d7b1dfae80657f75d3751ef72a", "text": "Machine learning approaches are increasingly successful in image-based diagnosis, disease prognosis, and risk assessment. This paper highlights new research directions and discusses three main challenges related to machine learning in medical imaging: coping with variation in imaging protocols, learning from weak labels, and interpretation and evaluation of results.", "title": "" }, { "docid": "203312195c3df688a594d0c05be72b5a", "text": "Convolutional Neural Networks (CNNs) have been recently introduced in the domain of session-based next item recommendation. An ordered collection of past items the user has interacted with in a session (or sequence) are embedded into a 2-dimensional latent matrix, and treated as an image. The convolution and pooling operations are then applied to the mapped item embeddings. In this paper, we first examine the typical session-based CNN recommender and show that both the generative model and network architecture are suboptimal when modeling long-range dependencies in the item sequence. To address the issues, we introduce a simple, but very effective generative model that is capable of learning high-level representation from both short- and long-range item dependencies. The network architecture of the proposed model is formed of a stack of holed convolutional layers, which can efficiently increase the receptive fields without relying on the pooling operation. Another contribution is the effective use of residual block structure in recommender systems, which can ease the optimization for much deeper networks. The proposed generative model attains state-of-the-art accuracy with less training time in the next item recommendation task. 
It accordingly can be used as a powerful recommendation baseline to beat in future, especially when there are long sequences of user feedback.", "title": "" }, { "docid": "4ab58e47f1f523ba3f48c37bc918696e", "text": "In this work, we design a neural network for recognizing emotions in speech, using the standard IEMOCAP dataset. Following the latest advances in audio analysis, we use an architecture involving both convolutional layers, for extracting highlevel features from raw spectrograms, and recurrent ones for aggregating long-term dependencies. Applying techniques of data augmentation, layerwise learning rate adjustment and batch normalization, we obtain highly competitive results, with 64.5% weighted accuracy and 61.7% unweighted accuracy on four emotions. Moreover, we show that the model performance is strongly correlated with the labeling confidence, which highlights a fundamental difficulty in emotion recognition.", "title": "" }, { "docid": "858a5ed092f02d057437885ad1387c9f", "text": "The current state-of-the-art singledocument summarization method generates a summary by solving a Tree Knapsack Problem (TKP), which is the problem of finding the optimal rooted subtree of the dependency-based discourse tree (DEP-DT) of a document. We can obtain a gold DEP-DT by transforming a gold Rhetorical Structure Theory-based discourse tree (RST-DT). However, there is still a large difference between the ROUGE scores of a system with a gold DEP-DT and a system with a DEP-DT obtained from an automatically parsed RST-DT. To improve the ROUGE score, we propose a novel discourse parser that directly generates the DEP-DT. The evaluation results showed that the TKP with our parser outperformed that with the state-of-the-art RST-DT parser, and achieved almost equivalent ROUGE scores to the TKP with the gold DEP-DT.", "title": "" }, { "docid": "ef95b5b3a0ff0ab0907565305d597a9d", "text": "Control flow defenses against ROP either use strict, expensive, but strong protection against redirected RET instructions with shadow stacks, or much faster but weaker protections without. In this work we study the inherent overheads of shadow stack schemes. We find that the overhead is roughly 10% for a traditional shadow stack. We then design a new scheme, the parallel shadow stack, and show that its performance cost is significantly less: 3.5%. Our measurements suggest it will not be easy to improve performance on current x86 processors further, due to inherent costs associated with RET and memory load/store instructions. We conclude with a discussion of the design decisions in our shadow stack instrumentation, and possible lighter-weight alternatives.", "title": "" }, { "docid": "64306a76b61bbc754e124da7f61a4fbe", "text": "For over 50 years, electron beams have been an important modality for providing an accurate dose of radiation to superficial cancers and disease and for limiting the dose to underlying normal tissues and structures. This review looks at many of the important contributions of physics and dosimetry to the development and utilization of electron beam therapy, including electron treatment machines, dose specification and calibration, dose measurement, electron transport calculations, treatment and treatment-planning tools, and clinical utilization, including special procedures. 
Also, future changes in the practice of electron therapy resulting from challenges to its utilization and from potential future technology are discussed.", "title": "" }, { "docid": "9d2a73c8eac64ed2e1af58a5883229c3", "text": "Tetyana Sydorenko Michigan State University This study examines the effect of input modality (video, audio, and captions, i.e., onscreen text in the same language as audio) on (a) the learning of written and aural word forms, (b) overall vocabulary gains, (c) attention to input, and (d) vocabulary learning strategies of beginning L2 learners. Twenty-six second-semester learners of Russian participated in this study. Group one (N = 8) saw video with audio and captions (VAC); group two (N = 9) saw video with audio (VA); group three (N = 9) saw video with captions (VC). All participants completed written and aural vocabulary tests and a final questionnaire.", "title": "" }, { "docid": "236d3cb8566d4ae72add4a4b8b1f1fcc", "text": "SAP HANA is a pioneering, and one of the best performing, data platform designed from the grounds up to heavily exploit modern hardware capabilities, including SIMD, and large memory and CPU footprints. As a comprehensive data management solution, SAP HANA supports the complete data life cycle encompassing modeling, provisioning, and consumption. This extended abstract outlines the vision and planned next step of the SAP HANA evolution growing from a core data platform into an innovative enterprise application platform as the foundation for current as well as novel business applications in both on-premise and on-demand scenarios. We argue that only a holistic system design rigorously applying co-design at di↵erent levels may yield a highly optimized and sustainable platform for modern enterprise applications. 1. THE BEGINNING: SAP HANA DATA PLATFORM A comprehensive data management solution has become one of the most critical assets in large enterprises. Modern data management solutions must cover a wide spectrum of additional data structures ranging from simple keyvalues models to complex graph structured data sets and document-centric data stores. Complex query and manipulation patterns are issued against the database reflecting the algorithmic side of complex enterprise applications. Additionally, data consumption activities with analytical query patterns are no longer reserved for decision makers or specialized data scientists but are increasingly becoming an integral part of complex operational business processes requiring support for analytical as well as transactional workloads managed within the same system [4]. Dealing with these challenges [5] demanded a complete re-thinking of traditional database architectures and data management approaches now made possible by advances in hardware architectures. The development of SAP HANA accepted this challenge head on and started a new generation Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Articles from this volume were invited to present their results at The 39th International Conference on Very Large Data Bases, August 26th 30th 2013, Riva del Garda, Trento, Italy. Proceedings of the VLDB Endowment, Vol. 6, No. 
11 Copyright 2013 VLDB Endowment 2150-8097/13/09... $ 10.00. Figure 1: The SAP HANA platform of database system design. The SAP HANA database server now comprises a centrally, and tightly, orchestrated collection of di↵erent processing capabilities, e.g., an in-memory columnar relational store, a graph engine, native support for text processing, comprehensive spatial support, etc., all running within a single system environment and, therefore, within a single transactional sphere of control without the need for data replication and synchronization [2]. Secondly, and most importantly, SAP HANA has triggered a major shift in the database industry from the classical disk-centric database system design to a ground breaking main-memory centric system design [3]. The mainstream availability of very large main memory and CPU core footprints within single compute nodes, combined with SIMD architectures and sophisticated cluster systems based on high speed interconnects, was and remains, the central design guideline of the SAP HANA database server. SAP HANA was the first commercial system to systematically reflect, and exploit, the shift in memory hierarchies and CPU architectures in order to optimize data structures and access paths. As a result, SAP HANA has yielded orders of magnitude performance gains thereby opening up completely novel application opportunities. Most of the core design advances behind SAP HANA are now finding their way into mainstream database system research and development, thereby reflecting its pioneering role. As a foundational tenet, we see rigorous application of Hardware/Database co-design principles as the main success factor to systematically exploit the underlying hardware platform: Literally every core SAP HANA data structure and routine has been systematically inspected, redesigned", "title": "" }, { "docid": "23583b155fc8ec3301cfef805f568e57", "text": "We address the problem of covering an environment with robots equipped with sensors. The robots are heterogeneous in that the sensor footprints are different. Our work uses the location optimization framework in with three significant extensions. First, we consider robots with different sensor footprints, allowing, for example, aerial and ground vehicles to collaborate. We allow for finite size robots which enables implementation on real robotic systems. Lastly, we extend the previous work allowing for deployment in non convex environments.", "title": "" }, { "docid": "cf0b98dfd188b7612577c975e08b0c92", "text": "Depression is a major cause of disability world-wide. The present paper reports on the results of our participation to the depression sub-challenge of the sixth Audio/Visual Emotion Challenge (AVEC 2016), which was designed to compare feature modalities (audio, visual, interview transcript-based) in gender-based and gender-independent modes using a variety of classification algorithms. In our approach, both high and low level features were assessed in each modality. Audio features were extracted from the low-level descriptors provided by the challenge organizers. Several visual features were extracted and assessed including dynamic characteristics of facial elements (using Landmark Motion History Histograms and Landmark Motion Magnitude), global head motion, and eye blinks. These features were combined with statistically derived features from pre-extracted features (emotions, action units, gaze, and pose). Both speech rate and word-level semantic content were also evaluated. 
Classification results are reported using four different classification schemes: i) gender-based models for each individual modality, ii) the feature fusion model, ii) the decision fusion model, and iv) the posterior probability classification model. Proposed approaches outperforming the reference classification accuracy include the one utilizing statistical descriptors of low-level audio features. This approach achieved f1-scores of 0.59 for identifying depressed and 0.87 for identifying not-depressed individuals on the development set and 0.52/0.81, respectively for the test set.", "title": "" }, { "docid": "cbc6bd586889561cc38696f758ad97d2", "text": "Introducing a new hobby for other people may inspire them to join with you. Reading, as one of mutual hobby, is considered as the very easy hobby to do. But, many people are not interested in this hobby. Why? Boring is the reason of why. However, this feel actually can deal with the book and time of you reading. Yeah, one that we will refer to break the boredom in reading is choosing design of experiments statistical principles of research design and analysis as the reading material.", "title": "" }, { "docid": "88285b058e6b93c2b31e9b1b8d6b657e", "text": "Corporate incubators for technology development are a recent phenomenon whose functioning and implications are not yet well understood. The resource-based view can offer an explanatory model on how corporate incubators function as specialised corporate units that hatch new businesses. While tangible resources, such as the financial, physical and even explicit knowledge flow, are all visible, and therefore easy to measure, intangible resources such as tacit knowledge and branding flow are harder to detect and localise. Managing the resource flow requires the initial allocation of resources to the corporate incubator during its set-up as well as a continuous resource flow to the technology venture and, during the harvest phase, also from it. Two levels of analysis need to be distinguished: (1) the resource flow between the corporate incubator and the technology venture and (2) the resource flow interface between the corporate incubator and the technology venture. Our empirical findings are based on two phases: First, in-depth case studies of 22 companies through 47 semi-structured interviews that were conducted with managers of large technology-intensive corporations’ corporate incubators in Europe and the U.S., and second, an analysis of the European Commission’s benchmarking survey of 77 incubators.", "title": "" }, { "docid": "e6e6eb1f1c0613a291c62064144ff0ba", "text": "Mobile phones have become the most popular way to communicate with other individuals. While cell phones have become less of a status symbol and more of a fashion statement, they have created an unspoken social dependency. Adolescents and young adults are more likely to engage in SMS messing, making phone calls, accessing the internet from their phone or playing a mobile driven game. Once pervaded by boredom, teenagers resort to instant connection, to someone, somewhere. Sensation seeking behavior has also linked adolescents and young adults to have the desire to take risks with relationships, rules and roles. Individuals seek out entertainment and avoid boredom at all times be it appropriate or inappropriate. Cell phones are used for entertainment, information and social connectivity. It has been demonstrated that individuals with low self – esteem use cell phones to form and maintain social relationships. 
They form an attachment with cell phone which molded their mind that they cannot function without their cell phone on a day-to-day basis. In this context, the study attempts to examine the extent of use of mobile phone and its influence on the academic performance of the students. A face to face survey using structured questionnaire was the method used to elicit the opinions of students between the age group of 18-25 years in three cities covering all the three regions the State of Andhra Pradesh in India. The survey was administered among 1200 young adults through two stage random sampling to select the colleges and respondents from the selected colleges, with 400 from each city. In Hyderabad, 201 males and 199 females participated in the survey. In Visakhapatnam, 192 males and 208 females participated. In Tirupati, 220 males and 180 females completed the survey. Two criteria were taken into consideration while choosing the participants for the survey. The participants are college-going and were mobile phone users. Each of the survey responses was entered and analyzed using SPSS software. The Statistical Package for Social Sciences (SPSS 16) had been used to work out the distribution of samples in terms of percentages for each specified parameter.", "title": "" }, { "docid": "4b3c69e446dcf1d237db63eb4f106dd7", "text": "Creating linguistic annotations requires more than just a reliable annotation scheme. Annotation can be a complex endeavour potentially involving many people, stages, and tools. This chapter outlines the process of creating end-toend linguistic annotations, identifying specific tasks that researchers often perform. Because tool support is so central to achieving high quality, reusable annotations with low cost, the focus is on identifying capabilities that are necessary or useful for annotation tools, as well as common problems these tools present that reduce their utility. Although examples of specific tools are provided in many cases, this chapter concentrates more on abstract capabilities and problems because new tools appear continuously, while old tools disappear into disuse or disrepair. The two core capabilities tools must have are support for the chosen annotation scheme and the ability to work on the language under study. Additional capabilities are organized into three categories: those that are widely provided; those that often useful but found in only a few tools; and those that have as yet little or no available tool support. 1 Annotation: More than just a scheme Creating manually annotated linguistic corpora requires more than just a reliable annotation scheme. A reliable scheme, of course, is a central ingredient to successful annotation; but even the most carefully designed scheme will not answer a number of practical questions about how to actually create the annotations, progressing from raw linguistic data to annotated linguistic artifacts that can be used to answer interesting questions or do interesting things. Annotation, especially high-quality annotation of large language datasets, can be a complex process potentially involving many people, stages, and tools, and the scheme only specifies the conceptual content of the annotation. By way of example, the following questions are relevant to a text annotation project and are not answered by a scheme:  How should linguistic artifacts be prepared? Will the originals be annotated directly, or will their textual content be extracted into separate files for annotation? 
In the latter case, what layout or formatting will be kept (lines, paragraphs page breaks, section headings, highlighted text)? What file format will be used? How will typographical errors be handled? Will typos be ignored, changed in the original, changed in extracted content, or encoded as an additional annotation? Who will be allowed to make corrections: the annotators themselves, adjudicators, or perhaps only the project manager?  How will annotators be provided artifacts to annotate? How will the order of annotation be specified (if at all), and how will this order be enforced? How will the project manager ensure that each document is annotated the appropriate number of times (e.g., by two different people for double annotation).  What inter-annotator agreement measures (IAAs) will be measured, and when? Will IAAs be measured continuously, on batches, or on other subsets of the corpus? How will their measurement at the right time be enforced? Will IAAs be used to track annotator training? If so, what level of IAA will be considered to indicate that training has succeeded? These questions are only a small selection of those that arise during the practical process of conducting annotation. The first goal of this chapter is to give an overview of the process of annotation from start to finish, pointing out these sorts of questions and subtasks for each stage. We will start with a known conceptual framework for the annotation process, the MATTER framework (Pustejovsky & Stubbs, 2013) and expand upon it. Our expanded framework is not guaranteed to be complete, but it will give a reader a very strong flavor of the kind of issues that arise so that they can start to anticipate them in the design of their own annotation project. The second goal is to explore the capabilities required by annotation tools. Tool support is central to effecting high quality, reusable annotations with low cost. The focus will be on identifying capabilities that are necessary or useful for annotation tools. Again, this list will not be exhaustive but it will be fairly representative, as the majority of it was generated by surveying a number of annotation experts about their opinions of available tools. Also listed are common problems that reduce tool utility (gathered during the same survey). Although specific examples of tools will be provided in many cases, the focus will be on more abstract capabilities and problems because new tools appear all the time while old tools disappear into disuse or disrepair. Before beginning, it is well to first introduce a few terms. By linguistic artifact, or just artifact, we mean the object to which annotations are being applied. These could be newspaper articles, web pages, novels, poems, TV 2 Mark A. Finlayson and Tomaž Erjavec shows, radio broadcasts, images, movies, or something else that involves language being captured in a semipermanent form. When we use the term document we will generally mean textual linguistic artifacts such as books, articles, transcripts, and the like. By annotation scheme, or just scheme, we follow the terminology as given in the early chapters of this volume, where a scheme comprises a linguistic theory, a derived model of a phenomenon of interest, a specification that defines the actual physical format of the annotation, and the guidelines that explain to an annotator how to apply the specification to linguistic artifacts. (citation to Chapter III by Ide et al.) 
By computing platform, or just platform, we mean any computational system on which an annotation tool can be run; classically this has meant personal computers, either desktops or laptops, but recently the range of potential computing platforms has expanded dramatically, to include on the one hand things like web browsers and mobile devices, and, on the other, internet-connected annotation servers and service oriented architectures. Choice of computing platform is driven by many things, including the identity of the annotators and their level of sophistication. We will speak of the annotation process or just process within an annotation project. By process, we mean any procedure or activity, at any level of granularity, involved in the production of annotation. This potentially encompasses everything from generating the initial idea, applying the annotation to the artifacts, to archiving the annotated documents for distribution. Although traditionally not considered part of annotation per se, we might also include here writing academic papers about the results of the annotation, as these activities also sometimes require annotation-focused tool support. We will also speak of annotation tools. By tool we mean any piece of computer software that runs on a computing platform that can be used to implement or carry out a process in the annotation project. Classically conceived annotation tools include software such as the Alembic workbench, Callisto, or brat (Day et al., 1997; Day, McHenry, Kozierok, & Riek, 2004; Stenetorp et al., 2012), but tools can also include software like Microsoft Word or Excel, Apache Tomcat (to run web servers), Subversion or Git (for document revision control), or mobile applications (apps). Tools usually have user interfaces (UIs), but they are not always graphical, fully functional, or even all that helpful. There is a useful distinction between a tool and a component (also called an NLP component, or an NLP algorithm; in UIMA (Apache, 2014) called an annotator), which are pieces of software that are intended to be integrated as libraries into software and can often be strung together in annotation pipelines for applying automatic annotations to linguistic artifacts. Software like tokenizers, part of speech taggers, parsers (Manning et al., 2014), multiword expression detectors (Kulkarni & Finlayson, 2011) or coreference resolvers (Pradhan et al., 2011) are all components. Sometimes the distinction between a tool and a component is not especially clear cut, but it is a useful one nonetheless. The main reason a chapter like this one is needed is that there is no one tool that does everything. There are multiple stages and tasks within every annotation project, typically requiring some degree of customization, and no tool does it all. That is why one needs multiple tools in annotation, and why a detailed consideration of the tool capabilities and problems is needed. 2 Overview of the Annotation Process The first step in an annotation project is, naturally, defining the scheme, but many other tasks must be executed to go from an annotation scheme to an actual set of cleanly annotated files useful for other tasks. 2.1 MATTER & MAMA A good starting place for organizing our conception of the various stages of the process of annotation is the MATTER cycle, proposed by Pustejovsky & Stubbs (2013). 
This framework outlines six major stages to annotation, corresponding to each letter in the word, defined as follows: M = Model: In this stage, the first of the process, the project leaders set up the conceptual framework for the project. Subtasks may include:  Search background work to understand existing theories of the phenomena  Create or adopt an abstract model of the phenomenon  Define an annotation scheme based on the model Overview of Annotation Creation: Processes & Tools 3  Search libraries, the web, and online repositories for potential linguistic artifacts  Create corpus artifacts if appropriate artifacts do not exist  Measure overall characteristics of artifacts to ground estimates of representativeness and balance  Collect the artifacts on which the annotation will be performed  Track artifact licenses  Measure various statistics of the collected corpus  Choose an annotation specification language  Build an annotation specification that disti", "title": "" }, { "docid": "9d5c258e4a2d315d3e462ab333f3a6df", "text": "The modern smart phone and car concepts provide a fertile ground for new location-aware applications, ranging from traffic management to social services. While the functionality is partly implemented at the mobile terminal, there is a rising need for efficient backend processing of high-volume, high update rate location streams. It is in this environment that geofencing, the detection of objects traversing virtual fences, is becoming a universal primitive required by an ever-growing number of applications. To satisfy the functionality and performance requirements of large-scale geofencing applications, we present in this work a backend system for indexing massive quantities of mobile objects and geofences. Our system runs on a cluster of servers, achieving a throughput of location updates that scales linearly with number of machines. The key ingredients to achieve a high performance are a specialized spatial index, a dynamic caching mechanism, and a load-sharing principle that reduces communication overhead to a minimum and enables a shared-nothing architecture. The throughput of the spatial index as well as the performance of the overall system are demonstrated by experiments using simulations of large-scale geofencing applications.", "title": "" } ]
scidocsrr
913cbf1c706a47094aabf3fc2f764150
The Impacts of Social Media on Bitcoin Performance
[ { "docid": "c02d207ed8606165e078de53a03bf608", "text": "School of Business, University of Maryland (e-mail: mtrusov@rhsmith. umd.edu). Anand V. Bodapati is Associate Professor of Marketing (e-mail: anand.bodapati@anderson.ucla.edu), and Randolph E. Bucklin is Peter W. Mullin Professor (e-mail: rbucklin@anderson.ucla.edu), Anderson School of Management, University of California, Los Angeles. The authors are grateful to Christophe Van den Bulte and Dawn Iacobucci for their insightful and thoughtful comments on this work. John Hauser served as associate editor for this article. MICHAEL TRUSOV, ANAND V. BODAPATI, and RANDOLPH E. BUCKLIN*", "title": "" } ]
[ { "docid": "7e40c98b9760e1f47a0140afae567b7f", "text": "Low-level saliency cues or priors do not produce good enough saliency detection results especially when the salient object presents in a low-contrast background with confusing visual appearance. This issue raises a serious problem for conventional approaches. In this paper, we tackle this problem by proposing a multi-context deep learning framework for salient object detection. We employ deep Convolutional Neural Networks to model saliency of objects in images. Global context and local context are both taken into account, and are jointly modeled in a unified multi-context deep learning framework. To provide a better initialization for training the deep neural networks, we investigate different pre-training strategies, and a task-specific pre-training scheme is designed to make the multi-context modeling suited for saliency detection. Furthermore, recently proposed contemporary deep models in the ImageNet Image Classification Challenge are tested, and their effectiveness in saliency detection are investigated. Our approach is extensively evaluated on five public datasets, and experimental results show significant and consistent improvements over the state-of-the-art methods.", "title": "" }, { "docid": "b78f1e6a5e93c1ad394b1cade293829f", "text": "This paper presents a novel approach for creation of topographical function and object markers used within watershed segmentation. Typically, marker-driven watershed segmentation extracts seeds indicating the presence of objects or background at specific image locations. The marker locations are then set to be regional minima within the topological surface (typically, the gradient of the original input image), and the watershed algorithm is applied. In contrast, our approach uses two classifiers, one trained to produce markers, the other trained to produce object boundaries. As a result of using machine-learned pixel classification, the proposed algorithm is directly applicable to both single channel and multichannel image data. Additionally, rather than flooding the gradient image, we use the inverted probability map produced by the second aforementioned classifier as input to the watershed algorithm. Experimental results demonstrate the superior performance of the classification-driven watershed segmentation algorithm for the tasks of 1) image-based granulometry and 2) remote sensing", "title": "" }, { "docid": "fb31ead676acdd048d699ddfb4ddd17a", "text": "Software defects prediction aims to reduce software testing efforts by guiding the testers through the defect classification of software systems. Defect predictors are widely used in many organizations to predict software defects in order to save time, improve quality, testing and for better planning of the resources to meet the timelines. The application of statistical software testing defect prediction model in a real life setting is extremely difficult because it requires more number of data variables and metrics and also historical defect data to predict the next releases or new similar type of projects. This paper explains our statistical model, how it will accurately predict the defects for upcoming software releases or projects. We have used 20 past release data points of software project, 5 parameters and build a model by applying descriptive statistics, correlation and multiple linear regression models with 95% confidence intervals (CI). 
In this appropriate multiple linear regression model the R-square value was 0.91 and its Standard Error is 5.90%. The Software testing defect prediction model is now being used to predict defects at various testing projects and operational releases. We have found 90.76% precision between actual and predicted defects.", "title": "" }, { "docid": "8e654ace264f8062caee76b0a306738c", "text": "We present a fully fledged practical working application for a rule-based NLG system that is able to create non-trivial, human sounding narrative from structured data, in any language (e.g., English, German, Arabic and Finnish) and for any topic.", "title": "" }, { "docid": "06672f6316878c80258ad53988a7e953", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use. Please contact the publisher regarding any further use of this work. Publisher contact information may be obtained at http://www.jstor.org/journals/astata.html. Each copy of any part of a JSTOR transmission must contain the same copyright notice that appears on the screen or printed page of such transmission.", "title": "" }, { "docid": "fe57e844c12f7392bdd29a2e2396fc50", "text": "With the help of modern information communication technology, mobile banking as a new type of financial services carrier can provide efficient and effective financial services for clients. Compare with Internet banking, mobile banking is more secure and user friendly. The implementation of wireless communication technologies may result in more complicated information security problems. Based on the principles of information security, this paper presented issues of information security of mobile banking and discussed the security protection measures such as: encryption technology, identity authentication, digital signature, WPKI technology.", "title": "" }, { "docid": "64ba4467dc4495c6828f2322e8f415f2", "text": "Due to the advancement of microoptoelectromechanical systems and microelectromechanical systems (MEMS) technologies, novel display architectures have emerged. One of the most successful and well-known examples is the Digital Micromirror Device from Texas Instruments, a 2-D array of bistable MEMS mirrors, which function as spatial light modulators for the projection display. This concept of employing an array of modulators is also seen in the grating light valve and the interferometric modulator display, where the modulation mechanism is based on optical diffraction and interference, respectively. Along with this trend comes the laser scanning display, which requires a single scanning device with a large scan angle and a high scan frequency. A special example in this category is the retinal scanning display, which is a head-up wearable module that laser-scans the image directly onto the retina. 
MEMS technologies are also found in other display-related research, such as stereoscopic (3-D) displays and plastic thin-film displays.", "title": "" }, { "docid": "10f3cafc05b3fb3b235df34aebbe0e23", "text": "To cope with monolithic controller replicas and the current unbalance situation in multiphase converters, a pseudo-ramp current balance technique is proposed to achieve time-multiplexing current balance in voltage-mode multiphase DC-DC buck converter. With only one modulation controller, silicon area and power consumption caused by the replicas of controller can be reduced significantly. Current balance accuracy can be further enhanced since the mismatches between different controllers caused by process, voltage, and temperature variations are removed. Moreover, the offset cancellation control embedded in the current matching unit is used to eliminate intrinsic offset voltage existing at the operational transconductance amplifier for improved current balance. An explicit model, which contains both voltage and current balance loops with non-ideal effects, is derived for analyzing system stability. Experimental results show that current difference between each phase can be decreased by over 83% under both heavy and light load conditions.", "title": "" }, { "docid": "358faa358eb07b8c724efcdb72334dc7", "text": "We present a novel simple technique for rapidly creating and presenting interactive immersive 3D exploration experiences of 2D pictures and images of natural and artificial landscapes. Various application domains, ranging from virtual exploration of works of art to street navigation systems, can benefit from the approach. The method, dubbed PEEP, is motivated by the perceptual characteristics of the human visual system in interpreting perspective cues and detecting relative angles between lines. It applies to the common perspective images with zero or one vanishing points, and does not require the extraction of a precise geometric description of the scene. Taking as input a single image without other information, an automatic analysis technique fits a simple but perceptually consistent parametric 3D representation of the viewed space, which is used to drive an indirect constrained exploration method capable to provide the illusion of 3D exploration with realistic monocular (perspective and motion parallax) and binocular (stereo) depth cues. The effectiveness of the method is demonstrated on a variety of casual pictures and exploration configurations, including mobile devices.", "title": "" }, { "docid": "c0440776fdd2adab39e9a9ba9dd56741", "text": "Corynebacterium glutamicum is an important industrial metabolite producer that is difficult to genetically engineer. Although the Streptococcus pyogenes (Sp) CRISPR-Cas9 system has been adapted for genome editing of multiple bacteria, it cannot be introduced into C. glutamicum. Here we report a Francisella novicida (Fn) CRISPR-Cpf1-based genome-editing method for C. glutamicum. CRISPR-Cpf1, combined with single-stranded DNA (ssDNA) recombineering, precisely introduces small changes into the bacterial genome at efficiencies of 86-100%. Large gene deletions and insertions are also obtained using an all-in-one plasmid consisting of FnCpf1, CRISPR RNA, and homologous arms. The two CRISPR-Cpf1-assisted systems enable N iterative rounds of genome editing in 3N+4 or 3N+2 days. A proof-of-concept, codon saturation mutagenesis at G149 of γ-glutamyl kinase relieves L-proline inhibition using Cpf1-assisted ssDNA recombineering. 
Thus, CRISPR-Cpf1-based genome editing provides a highly efficient tool for genetic engineering of Corynebacterium and other bacteria that cannot utilize the Sp CRISPR-Cas9 system.", "title": "" }, { "docid": "9a6ce56536585e54d3e15613b2fa1197", "text": "This paper discusses the Urdu script characteristics, Urdu Nastaleeq and a simple but a novel and robust technique to recognize the printed Urdu script without a lexicon. Urdu being a family of Arabic script is cursive and complex script in its nature, the main complexity of Urdu compound/connected text is not its connections but the forms/shapes the characters change when it is placed at initial, middle or at the end of a word. The characters recognition technique presented here is using the inherited complexity of Urdu script to solve the problem. A word is scanned and analyzed for the level of its complexity, the point where the level of complexity changes is marked for a character, segmented and feeded to Neural Networks. A prototype of the system has been tested on Urdu text and currently achieves 93.4% accuracy on the average. Keywords— Cursive Script, OCR, Urdu.", "title": "" }, { "docid": "de63a161a9539931f834908477fb5ad1", "text": "Network function virtualization introduces additional complexity for network management through the use of virtualization environments. The amount of managed data and the operational complexity increases, which makes service assurance and failure recovery harder to realize. In response to this challenge, the paper proposes a distributed management function, called virtualized network management function (vNMF), to detect failures related to virtualized services. vNMF detects the failures by monitoring physical-layer statistics that are processed with a self-organizing map algorithm. Experimental results show that memory leaks and network congestion failures can be successfully detected and that and the accuracy of failure detection can be significantly improved compared to common k-means clustering.", "title": "" }, { "docid": "5c40b6fadf2f8f4b39c7adf1e894e600", "text": "Monitoring the flow of traffic along network paths is essential for SDN programming and troubleshooting. For example, traffic engineering requires measuring the ingress-egress traffic matrix; debugging a congested link requires determining the set of sources sending traffic through that link; and locating a faulty device might involve detecting how far along a path the traffic makes progress. Past path-based monitoring systems operate by diverting packets to collectors that perform \"after-the-fact\" analysis, at the expense of large data-collection overhead. In this paper, we show how to do more efficient \"during-the-fact\" analysis. We introduce a query language that allows each SDN application to specify queries independently of the forwarding state or the queries of other applications. The queries use a regular-expression-based path language that includes SQL-like \"groupby\" constructs for count aggregation. We track the packet trajectory directly on the data plane by converting the regular expressions into an automaton, and tagging the automaton state (i.e., the path prefix) in each packet as it progresses through the network. The SDN policies that implement the path queries can be combined with arbitrary packet-forwarding policies supplied by other elements of the SDN platform. 
A preliminary evaluation of our prototype shows that our \"during-the-fact\" strategy reduces data-collection overhead over \"after-the-fact\" strategies.", "title": "" }, { "docid": "0499618380bc33d376160a770683e807", "text": "As multicore and manycore processor architectures are emerging and the core counts per chip continue to increase, it is important to evaluate and understand the performance and scalability of Parallel Discrete Event Simulation (PDES) on these platforms. Most existing architectures are still limited to a modest number of cores, feature simple designs and do not exhibit heterogeneity, making it impossible to perform comprehensive analysis and evaluations of PDES on these platforms. Instead, in this paper we evaluate PDES using a full-system cycle-accurate simulator of a multicore processor and memory subsystem. With this approach, it is possible to flexibly configure the simulator and perform exploration of the impact of architecture design choices on the performance of PDES. In particular, we answer the following four questions with respect to PDES performance and scalability: (1) For the same total chip area, what is the best design point in terms of the number of cores and the size of the on-chip cache? (2) What is the impact of using in-order vs. out-of-order cores? (3) What is the impact of a heterogeneous system with a mix of in-order and out-of-order cores? (4) What is the impact of object partitioning on PDES performance in heterogeneous systems? To answer these questions, we use MARSSx86 simulator for evaluating performance, and rely on Cacti and McPAT tools to derive the area and latency estimates for cores and caches.", "title": "" }, { "docid": "5a601e08824185bafeb94ac432b6e92e", "text": "Transforming a natural language (NL) question into a corresponding logical form (LF) is central to the knowledge-based question answering (KB-QA) task. Unlike most previous methods that achieve this goal based on mappings between lexicalized phrases and logical predicates, this paper goes one step further and proposes a novel embedding-based approach that maps NL-questions into LFs for KBQA by leveraging semantic associations between lexical representations and KBproperties in the latent space. Experimental results demonstrate that our proposed method outperforms three KB-QA baseline methods on two publicly released QA data sets.", "title": "" }, { "docid": "e58882a41c4335caf957105df192edc5", "text": "Credit card fraud is a serious problem in financial services. Billions of dollars are lost due to credit card fraud every year. There is a lack of research studies on analyzing real-world credit card data owing to confidentiality issues. In this paper, machine learning algorithms are used to detect credit card fraud. Standard models are first used. Then, hybrid methods which use AdaBoost and majority voting methods are applied. To evaluate the model efficacy, a publicly available credit card data set is used. Then, a real-world credit card data set from a financial institution is analyzed. In addition, noise is added to the data samples to further assess the robustness of the algorithms. 
The experimental results positively indicate that the majority voting method achieves good accuracy rates in detecting fraud cases in credit cards.", "title": "" }, { "docid": "3d5bbe4dcdc3ad787e57583f7b621e36", "text": "A miniaturized antenna employing a negative index metamaterial with modified split-ring resonator (SRR) and capacitance-loaded strip (CLS) unit cells is presented for Ultra wideband (UWB) microwave imaging applications. Four left-handed (LH) metamaterial (MTM) unit cells are located along one axis of the antenna as the radiating element. Each left-handed metamaterial unit cell combines a modified split-ring resonator (SRR) with a capacitance-loaded strip (CLS) to obtain a design architecture that simultaneously exhibits both negative permittivity and negative permeability, which ensures a stable negative refractive index to improve the antenna performance for microwave imaging. The antenna structure, with dimension of 16 × 21 × 1.6 mm³, is printed on a low dielectric FR4 material with a slotted ground plane and a microstrip feed. The measured reflection coefficient demonstrates that this antenna attains 114.5% bandwidth covering the frequency band of 3.4-12.5 GHz for a voltage standing wave ratio of less than 2 with a maximum gain of 5.16 dBi at 10.15 GHz. There is a stable harmony between the simulated and measured results that indicate improved nearly omni-directional radiation characteristics within the operational frequency band. The stable surface current distribution, negative refractive index characteristic, considerable gain and radiation properties make this proposed negative index metamaterial antenna optimal for UWB microwave imaging applications.", "title": "" }, { "docid": "406e06e00799733c517aff88c9c85e0b", "text": "Matrix rank minimization problem is in general NP-hard. The nuclear norm is used to substitute the rank function in many recent studies. Nevertheless, the nuclear norm approximation adds all singular values together and the approximation error may depend heavily on the magnitudes of singular values. This might restrict its capability in dealing with many practical problems. In this paper, an arctangent function is used as a tighter approximation to the rank function. We use it on the challenging subspace clustering problem. For this nonconvex minimization problem, we develop an effective optimization procedure based on a type of augmented Lagrange multipliers (ALM) method. Extensive experiments on face clustering and motion segmentation show that the proposed method is effective for rank approximation.", "title": "" }, { "docid": "cef4c47b512eb4be7dcadcee35f0b2ca", "text": "This paper presents a project that allows the Baxter humanoid robot to play chess against human players autonomously. The complete solution uses three main subsystems: computer vision based on a single camera embedded in Baxter's arm to perceive the game state, an open-source chess engine to compute the next move, and a mechatronics subsystem with a 7-DOF arm to manipulate the pieces. Baxter can play chess successfully in unconstrained environments by dynamically responding to changes in the environment. 
This implementation demonstrates Baxter's capabilities of vision-based adaptive control and small-scale manipulation, which can be applicable to numerous applications, while also contributing to the computer vision chess analysis literature.", "title": "" }, { "docid": "986a0b910a4674b3c4bf92a668780dd6", "text": "One of the most important attributes of the polymerase chain reaction (PCR) is its exquisite sensitivity. However, the high sensitivity of PCR also renders it prone to falsepositive results because of, for example, exogenous contamination. Good laboratory practice and specific anti-contamination strategies are essential to minimize the chance of contamination. Some of these strategies, for example, physical separation of the areas for the handling samples and PCR products, may need to be taken into consideration during the establishment of a laboratory. In this chapter, different strategies for the detection, avoidance, and elimination of PCR contamination will be discussed.", "title": "" } ]
scidocsrr
530f3888d99b1b7dd8a7446b3dfabb97
Requirements and languages for the semantic representation of manufacturing systems
[ { "docid": "2464b1f28815b6f502f06ce6b45ef8ed", "text": "In this paper we review and compare the main methodologies, tools and languages for building ontologies that have been reported in the literature, as well as the main relationships among them. Ontology technology is nowadays mature enough: many methodologies, tools and languages are already available. The future work in this field should be driven towards the creation of a common integrated workbench for ontology developers to facilitate ontology development, exchange, evaluation, evolution and management, to provide methodological support for these tasks, and translations to and from different ontology languages. This workbench should not be created from scratch, but instead integrating the technology components that are currently available. 2002 Elsevier Science B.V. All rights reserved.", "title": "" } ]
[ { "docid": "204df6c32bde81851ebdb0a0b4d18b93", "text": "Language experience systematically constrains perception of speech contrasts that deviate phonologically and/or phonetically from those of the listener’s native language. These effects are most dramatic in adults, but begin to emerge in infancy and undergo further development through at least early childhood. The central question addressed here is: How do nonnative speech perception findings bear on phonological and phonetic aspects of second language (L2) perceptual learning? A frequent assumption has been that nonnative speech perception can also account for the relative difficulties that late learners have with specific L2 segments and contrasts. However, evaluation of this assumption must take into account the fact that models of nonnative speech perception such as the Perceptual Assimilation Model (PAM) have focused primarily on naïve listeners, whereas models of L2 speech acquisition such as the Speech Learning Model (SLM) have focused on experienced listeners. This chapter probes the assumption that L2 perceptual learning is determined by nonnative speech perception principles, by considering the commonalities and complementarities between inexperienced listeners and those learning an L2, as viewed from PAM and SLM. Among the issues examined are how language learning may affect perception of phonetic vs. phonological information, how monolingual vs. multiple language experience may impact perception, and what these may imply for attunement of speech perception to changes in the listener’s language environment. Commonalities and complementarities 3", "title": "" }, { "docid": "702df543119d648be859233bfa2b5d03", "text": "We review more than 200 applications of neural networks in image processing and discuss the present and possible future role of neural networks, especially feed-forward neural networks, Kohonen feature maps and Hop1eld neural networks. The various applications are categorised into a novel two-dimensional taxonomy for image processing algorithms. One dimension speci1es the type of task performed by the algorithm: preprocessing, data reduction=feature extraction, segmentation, object recognition, image understanding and optimisation. The other dimension captures the abstraction level of the input data processed by the algorithm: pixel-level, local feature-level, structure-level, object-level,ion level of the input data processed by the algorithm: pixel-level, local feature-level, structure-level, object-level, object-set-level and scene characterisation. Each of the six types of tasks poses speci1c constraints to a neural-based approach. These speci1c conditions are discussed in detail. A synthesis is made of unresolved problems related to the application of pattern recognition techniques in image processing and speci1cally to the application of neural networks. Finally, we present an outlook into the future application of neural networks and relate them to novel developments. ? 2002 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.", "title": "" }, { "docid": "8230003e8be37867e0e4fc7320e24448", "text": "This document was approved as policy of the American Psychological Association (APA) by the APA Council of Representatives in August, 2002. This document was drafted by a joint Task Force of APA Divisions 17 (Counseling Psychology) and 45 (The Society for the Psychological Study of Ethnic Minority Issues). 
These guidelines have been in the process of development for 22 years, so many individuals and groups require acknowledgement. The Divisions 17/45 writing team for the present document included Nadya Fouad, PhD, Co–Chair, Patricia Arredondo, EdD, Co–Chair, Michael D'Andrea, EdD and Allen Ivey, EdD. These guidelines build on work related to multicultural counseling competencies by Division 17 (Sue et al., 1982) and the Association of Multicultural Counseling and Development (Arredondo et al., 1996; Sue, Arredondo, & McDavis, 1992). The Task Force acknowledges Allen Ivey, EdD, Thomas Parham, PhD, and Derald Wing Sue, PhD for their leadership related to the work on competencies. The Divisions 17/45 writing team for these guidelines was assisted in reviewing the relevant literature by Rod Goodyear, PhD, Jeffrey S. Mio, PhD, Ruperto (Toti) Perez, PhD, William Parham, PhD, and Derald Wing Sue, PhD. Additional writing contributions came from Gail Hackett, PhD, Jeanne Manese, PhD, Louise Douce, PhD, James Croteau, PhD, Janet Helms, PhD, Sally Horwatt, PhD, Kathleen Boggs, PhD, Gerald Stone, PhD, and Kathleen Bieschke, PhD. Editorial contributions were provided by Nancy Downing Hansen, PhD, Patricia Perez, Tiffany Rice, and Dan Rosen. The Task Force is grateful for the active support and contributions of a series of presidents of APA Divisions 17, 35, and 45, including Rosie Bingham, PhD, Jean Carter, PhD, Lisa Porche Burke, PhD, Gerald Stone, PhD, Joseph Trimble, PhD, Melba Vasquez, PhD, and Jan Yoder, PhD. Other individuals who contributed through their advocacy include Guillermo Bernal, PhD, Robert Carter, PhD, J. Manuel Casas, PhD, Don Pope–Davis, PhD, Linda Forrest, PhD, Margaret Jensen, PhD, Teresa LaFromboise, PhD, Joseph G. Ponterotto, PhD, and Ena Vazquez Nuttall, EdD.", "title": "" }, { "docid": "1314f4c6bafefd229f2a8b192ba881f7", "text": "Face recognition is an area that has attracted a lot of interest. Much of the research in this field was conducted using visible images. With visible cameras the recognition is prone to errors due to illumination changes. To avoid the problems encountered in the visible spectrum many authors have proposed the use of infrared. In this paper we give an overview of the state of the art in face recognition using infrared images. Emphasis is given to more recent works. A growing field in this area is multimodal fusion; work conducted in this field is also presented in this paper and publicly available infrared face image databases are introduced.", "title": "" }, { "docid": "e90755afe850d597ad7b3f4b7e590b66", "text": "Privacy is considered to be a fundamental human right (Movius and Krup, 2009). Around the world this has led to a large amount of legislation in the area of privacy. Nearly all national governments have imposed local privacy legislation. In the United States several states have imposed their own privacy legislation. In order to maintain a manageable scope this paper only addresses European Union wide and federal United States laws. In addition several US industry (self) regulations are also considered. Privacy regulations in emerging technologies are surrounded by uncertainty. This paper aims to clarify the uncertainty relating to privacy regulations with respect to Cloud Computing and to identify the main open issues that need to be addressed for further research. 
This paper is based on existing literature and a series of interviews and questionnaires with various Cloud Service Providers (CSPs) that have been performed for the first author’s MSc thesis (Ruiter, 2009). The interviews and questionnaires resulted in data on privacy and security procedures from ten CSPs and while this number is by no means large enough to make any definite conclusions the results are, in our opinion, interesting enough to publish in this paper. The remainder of the paper is organized as follows: the next section gives some basic background on Cloud Computing. Section 3 provides", "title": "" }, { "docid": "2e3cee13657129d26ec236f9d2641e6c", "text": "Due to the prevalence of social media websites, one challenge facing computer vision researchers is to devise methods to process and search for persons of interest among the billions of shared photos on these websites. Facebook revealed in a 2013 white paper that its users have uploaded more than 250 billion photos, and are uploading 350 million new photos each day. Due to this humongous amount of data, large-scale face search for mining web images is both important and challenging. Despite significant progress in face recognition, searching a large collection of unconstrained face images has not been adequately addressed. To address this challenge, we propose a face search system which combines a fast search procedure, coupled with a state-of-the-art commercial off the shelf (COTS) matcher, in a cascaded framework. Given a probe face, we first filter the large gallery of photos to find the top-k most similar faces using deep features generated from a convolutional neural network. The k retrieved candidates are re-ranked by combining similarities from deep features and the COTS matcher. We evaluate the proposed face search system on a gallery containing 80 million web-downloaded face images. Experimental results demonstrate that the deep features are competitive with state-of-the-art methods on unconstrained face recognition benchmarks (LFW and IJB-A). More specifically, on the LFW database, we achieve 98.23% accuracy under the standard protocol and a verification rate of 87.65% at FAR of 0.1% under the BLUFR protocol. For the IJB-A benchmark, our accuracies are as follows: TAR of 51.4% at FAR of 0.1% (verification); Rank 1 retrieval of 82.0% (closed-set search); FNIR of 61.7% at FPIR of 1% (open-set search). Further, the proposed face search system offers an excellent trade-off between accuracy and scalability on datasets consisting of millions of images. Additionally, in an experiment involving searching for face images of the Tsarnaev brothers, convicted of the Boston Marathon bombing, the proposed cascade face search system could find the younger brother’s (Dzhokhar Tsarnaev) photo at rank 1 in 1 second on a 5M gallery and at rank 8 in 7 seconds", "title": "" }, { "docid": "3fce18c6e1f909b91f95667a563aa194", "text": "In this paper, we describe an approach to content-based retrieval of medical images from a database, and provide a preliminary demonstration of our approach as applied to retrieval of digital mammograms. Content-based image retrieval (CBIR) refers to the retrieval of images from a database using information derived from the images themselves, rather than solely from accompanying text indices. In the medical-imaging context, the ultimate aim of CBIR is to provide radiologists with a diagnostic aid in the form of a display of relevant past cases, along with proven pathology and other suitable information. 
CBIR may also be useful as a training tool for medical students and residents. The goal of information retrieval is to recall from a database information that is relevant to the user's query. The most challenging aspect of CBIR is the definition of relevance (similarity), which is used to guide the retrieval machine. In this paper, we pursue a new approach, in which similarity is learned from training examples provided by human observers. Specifically, we explore the use of neural networks and support vector machines to predict the user's notion of similarity. Within this framework we propose using a hierarchal learning approach, which consists of a cascade of a binary classifier and a regression module to optimize retrieval effectiveness and efficiency. We also explore how to incorporate online human interaction to achieve relevance feedback in this learning framework. Our experiments are based on a database consisting of 76 mammograms, all of which contain clustered microcalcifications (MCs). Our goal is to retrieve mammogram images containing similar MC clusters to that in a query. The performance of the retrieval system is evaluated using precision-recall curves computed using a cross-validation procedure. Our experimental results demonstrate that: 1) the learning framework can accurately predict the perceptual similarity reported by human observers, thereby serving as a basis for CBIR; 2) the learning-based framework can significantly outperform a simple distance-based similarity metric; 3) the use of the hierarchical two-stage network can improve retrieval performance; and 4) relevance feedback can be effectively incorporated into this learning framework to achieve improvement in retrieval precision based on online interaction with users; and 5) the retrieved images by the network can have predicting value for the disease condition of the query.", "title": "" }, { "docid": "a91a57326a2d961e24d13b844a3556cf", "text": "This paper describes an interactive and adaptive streaming architecture that exploits temporal concatenation of H.264/AVC video bit-streams to dynamically adapt to both user commands and network conditions. The architecture has been designed to improve the viewing experience when accessing video content through individual and potentially bandwidth constrained connections. On the one hand, the user commands typically gives the client the opportunity to select interactively a preferred version among the multiple video clips that are made available to render the scene, e.g. using different view angles, or zoomed-in and slowmotion factors. On the other hand, the adaptation to the network bandwidth ensures effective management of the client buffer, which appears to be fundamental to reduce the client-server interaction latency, while maximizing video quality and preventing buffer underflow. In addition to user interaction and network adaptation, the deployment of fully autonomous infrastructures for interactive content distribution also requires the development of automatic versioning methods. Hence, the paper also surveys a number of approaches proposed for this purpose in surveillance and sport event contexts. Both objective metrics and subjective experiments are exploited to assess our system.", "title": "" }, { "docid": "1e1706e1bd58a562a43cc7719f433f4f", "text": "In this paper, we present the use of D-higraphs to perform HAZOP studies. 
D-higraphs is a formalism that includes in a single model the functional as well as the structural (ontological) components of any given system. A tool to perform a semi-automatic guided HAZOP study on a process plant is presented. The diagnostic system uses an expert system to predict the behavior modeled using D-higraphs. This work is applied to the study of an industrial case and its results are compared with other similar approaches proposed in previous studies. The analysis shows that the proposed methodology fits its purpose enabling causal reasoning that explains causes and consequences derived from deviations, it also fills some of the gaps and drawbacks existing in previous reported HAZOP assistant tools.", "title": "" }, { "docid": "d3a8457c4c65652855e734556652c6be", "text": "We consider a supervised learning problem in which data are revealed sequentially and the goal is to determine what will next be revealed. In the context of this problem, algorithms based on association rules have a distinct advantage over classical statistical and machine learning methods; however, there has not previously been a theoretical foundation established for using association rules in supervised learning. We present two simple algorithms that incorporate association rules, and provide generalization guarantees on these algorithms based on algorithmic stability analysis from statistical learning theory. We include a discussion of the strict minimum support threshold often used in association rule mining, and introduce an “adjusted confidence” measure that provides a weaker minimum support condition that has advantages over the strict minimum support. The paper brings together ideas from statistical learning theory, association rule mining and Bayesian analysis.", "title": "" }, { "docid": "a88c0d45ca7859c050e5e76379f171e6", "text": "Cancer and other chronic diseases have constituted (and will do so at an increasing pace) a significant portion of healthcare costs in the United States in recent years. Although prior research has shown that diagnostic and treatment recommendations might be altered based on the severity of comorbidities, chronic diseases are still being investigated in isolation from one another in most cases. To illustrate the significance of concurrent chronic diseases in the course of treatment, this study uses SEER’s cancer data to create two comorbid data sets: one for breast and female genital cancers and another for prostate and urinal cancers. Several popular machine learning techniques are then applied to the resultant data sets to build predictive models. Comparison of the results shows that having more information about comorbid conditions of patients can improve models’ predictive power, which in turn, can help practitioners make better diagnostic and treatment decisions. Therefore, proper identification, recording, and use of patients’ comorbidity status can potentially lower treatment costs and ease the healthcare related economic challenges.", "title": "" }, { "docid": "5227c1679d83168eeb4d82d9a94a3a0f", "text": "Driver decisions and behaviors regarding the surrounding traffic are critical to traffic safety. It is important for an intelligent vehicle to understand driver behavior and assist in driving tasks according to their status. In this paper, the consumer range camera Kinect is used to monitor drivers and identify driving tasks in a real vehicle. Specifically, seven common tasks performed by multiple drivers during driving are identified in this paper. 
The tasks include normal driving, left-, right-, and rear-mirror checking, mobile phone answering, texting using a mobile phone with one or both hands, and the setup of in-vehicle video devices. The first four tasks are considered safe driving tasks, while the other three tasks are regarded as dangerous and distracting tasks. The driver behavior signals collected from the Kinect consist of a color and depth image of the driver inside the vehicle cabin. In addition, 3-D head rotation angles and the upper body (hand and arm at both sides) joint positions are recorded. Then, the importance of these features for behavior recognition is evaluated using random forests and maximal information coefficient methods. Next, a feedforward neural network (FFNN) is used to identify the seven tasks. Finally, the model performance for task recognition is evaluated with different features (body only, head only, and combined). The final detection result for the seven driving tasks among five participants achieved an average of greater than 80% accuracy, and the FFNN tasks detector is proved to be an efficient model that can be implemented for real-time driver distraction and dangerous behavior recognition.", "title": "" }, { "docid": "222b853f23cbcea9794c83c1471273b8", "text": "Automatic summarisation is a popular approach to reduce a document to its main arguments. Recent research in the area has focused on neural approaches to summarisation, which can be very data-hungry. However, few large datasets exist and none for the traditionally popular domain of scientific publications, which opens up challenging research avenues centered on encoding large, complex documents. In this paper, we introduce a new dataset for summarisation of computer science publications by exploiting a large resource of author provided summaries and show straightforward ways of extending it further. We develop models on the dataset making use of both neural sentence encoding and traditionally used summarisation features and show that models which encode sentences as well as their local and global context perform best, significantly outperforming well-established baseline methods.", "title": "" }, { "docid": "84f1cdf2729e206bf56d336e0c09d9d9", "text": "Deep generative models have demonstrated great performance in image synthesis. However, results deteriorate in case of spatial deformations, since they generate images of objects directly, rather than modeling the intricate interplay of their inherent shape and appearance. We present a conditional U-Net [30] for shape-guided image generation, conditioned on the output of a variational autoencoder for appearance. The approach is trained end-to-end on images, without requiring samples of the same object with varying pose or appearance. Experiments show that the model enables conditional image generation and transfer. Therefore, either shape or appearance can be retained from a query image, while freely altering the other. Moreover, appearance can be sampled due to its stochastic latent representation, while preserving shape. 
In quantitative and qualitative experiments on COCO [20], DeepFashion [21, 23], shoes [43], Market-1501 [47] and handbags [49] the approach demonstrates significant improvements over the state-of-the-art.", "title": "" }, { "docid": "ccd5f02b97643b3c724608a4e4a67fdb", "text": "Modular robotic systems that integrate distally with commercially available endoscopic equipment have the potential to improve the standard-of-care in therapeutic endoscopy by granting clinicians with capabilities not present in commercial tools, such as precision dexterity and feedback sensing. With the desire to integrate both sensing and actuation distally for closed-loop position control in fully deployable, endoscope-based robotic modules, commercial sensor and actuator options that acquiesce to the strict form-factor requirements are sparse or nonexistent. Herein, we describe a proprioceptive angle sensor for potential closed-loop position control applications in distal robotic modules. Fabricated monolithically using printed-circuit MEMS, the sensor employs a kinematic linkage and the principle of light intensity modulation to sense the angle of articulation with a high degree of fidelity. Onboard temperature and environmental irradiance measurements, coupled with linear regression techniques, provide robust angle measurements that are insensitive to environmental disturbances. The sensor is capable of measuring $\\pm$45 degrees of articulation with an RMS error of 0.98 degrees. An ex vivo demonstration shows that the sensor can give real-time proprioceptive feedback when coupled with an actuator module, opening up the possibility of fully distal closed-loop control.", "title": "" }, { "docid": "17797efad4f13f961ed300316eb16b6b", "text": "Cellular senescence, which has been linked to age-related diseases, occurs during normal aging or as a result of pathological cell stress. Due to their incapacity to proliferate, senescent cells cannot contribute to normal tissue maintenance and tissue repair. Instead, senescent cells disturb the microenvironment by secreting a plethora of bioactive factors that may lead to inflammation, regenerative dysfunction and tumor progression. Recent understanding of stimuli and pathways that induce and maintain cellular senescence offers the possibility to selectively eliminate senescent cells. This novel strategy, which so far has not been tested in humans, has been coined senotherapy or senolysis. In mice, senotherapy proofed to be effective in models of accelerated aging and also during normal chronological aging. Senotherapy prolonged lifespan, rejuvenated the function of bone marrow, muscle and skin progenitor cells, improved vasomotor function and slowed down atherosclerosis progression. While initial studies used genetic approaches for the killing of senescent cells, recent approaches showed similar effects with senolytic drugs. These observations open up exciting possibilities with a great potential for clinical development. However, before the integration of senotherapy into patient care can be considered, we need further research to improve our insight into the safety and efficacy of this strategy during short- and long-term use.", "title": "" }, { "docid": "9f037fd53e6547b689f88fc1c1bed10a", "text": "We study feature selection as a means to optimize the baseline clickbait detector employed at the Clickbait Challenge 2017 [6]. The challenge’s task is to score the “clickbaitiness” of a given Twitter tweet on a scale from 0 (no clickbait) to 1 (strong clickbait). 
Unlike most other approaches submitted to the challenge, the baseline approach is based on manual feature engineering and does not compete out of the box with many of the deep learning-based approaches. We show that scaling up feature selection efforts to heuristically identify better-performing feature subsets catapults the performance of the baseline classifier to second rank overall, beating 12 other competing approaches and improving over the baseline performance by 20%. This demonstrates that traditional classification approaches can still keep up with deep learning on this task.", "title": "" }, { "docid": "81fc9abd3e2ad86feff7bd713cff5915", "text": "With the advance of the Internet, e-commerce systems have become extremely important and convenient to human being. More and more products are sold on the web, and more and more people are purchasing products online. As a result, an increasing number of customers post product reviews at merchant websites and express their opinions and experiences in any network space such as Internet forums, discussion groups, and blogs. So there is a large amount of data records related to products on the Web, which are useful for both manufacturers and customers. Mining product reviews becomes a hot research topic, and prior researches mostly base on product features to analyze the opinions. So mining product features is the first step to further reviews processing. In this paper, we present how to mine product features. The proposed extraction approach is different from the previous methods because we only mine the features of the product in opinion sentences which the customers have expressed their positive or negative experiences on. In order to find opinion sentence, a SentiWordNet-based algorithm is proposed. There are three steps to perform our task: (1) identifying opinion sentences in each review which is positive or negative via SentiWordNet; (2) mining product features that have been commented on by customers from opinion sentences; (3) pruning feature to remove those incorrect features. Compared to previous work, our experimental result achieves higher precision and recall.", "title": "" }, { "docid": "b4cb716b235ece6ee647fc17b6bb13b6", "text": "Prof. Jay W. Forrester pioneered industrial Dynamics. It enabled the management scientists to understand well enough the dynamics of change in economics/business systems. Four basic foundations on which System Dynamics rest were discussed. The thought process prevailing and their shortcomings are pointed out and the success story of System Dynamics was explained with the help of Production-Distribution model. System Dynamics graduated to Learning Organisations. Senge with his concept of integrating five distinct disciplines of Systems Thinking, Personal Mastery, Mental Models, Shared Vision and Team Learning succeeded in bringing forth the System Dynamics to the reach of large number of practitioners and teachers of management. However, Systems Thinking part of the Learning Organisation fails to reach out because it lacks the architecture needed to support it. Richmond provided the much-needed architectural support. It enables the mapping language to be economical, consistent and relate to the dynamic behaviour of the system. Progression from Industrial Dynamics to Systems Thinking has been slow due to different postures taken by the professionals. 
It is suggested that Systems Thinking has a lot to adopt from different disciplines and should celebrate synergies and avail cross-fertilisation or opportunities. Systems Thinking is transparent and can seamlessly leverage the way the business is performed. ★ A. K. Rao is Member of Faculty at Administrative Staff College of India, Bellavista, Hyderabad 500 082, India. E-mail: akrao@ascihyd.org and A.Subash Babu is Professor in Industrial Engineering and Operations Research at Indian Institute of Technology, Bombay 400 076, India E-mail: subash@me.iitb.ernet.in Industrial Dynamics to Systems Thinking A.K.Rao & A. Subash Babu Introduction: In the year 1958, the first words penned down by the pioneer of System Dynamics (then Industrial Dynamics) Jay W. Forrester were “Management is on the verge of a major breakthrough in understanding how industrial company success depends on the interaction between the flows of information, materials, manpower and capital equipment”. The article titled “Industrial Dynamics: A Major Breakthrough for Decision Makers” in Harvard Business Review attracted attention of management scientists. Several controversies arose when further articles appeared subsequently. Today, 40 years since the first article in the field of System Dynamics appeared in print, the progress when evaluated evokes mixed response. If it were a major breakthrough for decisionmakers, then why did it not proliferate into the curriculum of business schools as common as that of Principles of Management or Business Statistics or any other standard subjects of study? The purpose of this article is to critically review three seminal works in the field of System Dynamics: Industrial Dynamics by Jay W. Forrester (1960), Fifth Discipline: The Art and Practice of Learning Organisations by Peter Senge (1990) and Systems Thinking by Barry Richmond (1997) and to understand the pitfalls in reaching out to the large body of academia and practising managers. Forrester in his work raised a few fundamental issues way back in early 60’s that most of the corporate managers are able to comprehend only now. He clearly answered the question on what is the next frontier of our knowledge. The great advances and opportunities in the future he predicted would appear in the field of management and economics. The shift from technical to the social front was evidenced in the way global competition and the rules of the game changed. The test of leadership is to show the way to economic development and stability. The leading question therefore is whether we understand well enough the dynamics of change in economic/business systems to pioneer this new frontier? Forrester offered the much-needed solution: System Dynamics. The foundations for a body of knowledge called system dynamics were the concepts of servomechanism, controlled experiments, and digital computing and better understanding of control theory. Servomechanism of information feedback theory was evolved during the World War II. Till then, time delays, amplification effects and the structure of the system were taken for granted. The realisation that interaction between components is more crucial to the system behaviour than the components themselves are of recent origin. The thesis out of information-feedback study led to the conclusion that information-feedback system is all pervasive in the nature. It exists whenever the environment changes, and leads to a decision that results in action, which in-turn affects the environment. 
This leads us to an axiom that everything that we do as an individual, as an organisation, as an industry, as a nation, or even as a society irrespective of the divisibility of the unit is done in the context of an information-feedback system. This is the bedrock philosophy of system dynamics. The second foundation is the realisation of the importance of the experimental approach to understanding of system dynamics. The standard acceptable format of research study of going from general analytical solution to the particular special case was reversed to the empirical approach. In this format a number of particular situations were studied and from these generalisations were inferred. This is the basis for learning. The activity basis for learning is experience. Some of these generalisations were given a name by Senge (1990) as Nature’s Templates. The third foundation for progress of system dynamics was digital computing machines. By 1945, systems of twenty variables were difficult to handle. By 1955, the digital computer appeared, opening the way to the simulation of systems far beyond the capability of analogue machines. Models of 2000 and more variables without any restrictions on representing non-linear phenomena could easily be simulated on a digital computer at costs within the reach of the academia and the research organisations. The simulation of information feedback models of important managerial and economic questions is an area demanding high efficiency. A cost reduction factor of ten thousand or more in computation infrastructure placed one in a completely different environment than that existed a few years ago. The fourth foundation was better appreciation of policy and decision. There is an orderly basis that prescribes most of our present managerial decisions. These decisions are not entirely ad hoc but are strongly conditioned by the environment. This being so, policies governing decisions can be laid down and their effect on economic/business behaviour can be studied. Forrester’s Postulates and Applications: The idea that economic and industrial systems could be depicted through linear analysis was the major stumbling block to begin thinking dynamically. Most of the policy analysis goes on to define the problem on hand as narrowly as possible in the name of attaining the objective of being specific and crisp. On one hand, it makes the mathematics of such analysis tractable but unfortunately, it ignores the fact that almost every factor in the economic or industrial systems is non-linear. Much of the important behaviour of the system is the direct manifestation of the nonlinear characteristics of the system components. Social systems are assumed to be inherently stable and to constantly seek to achieve the equilibrium status. While it is the system’s tendency to reach the equilibrium in its inanimate consideration, the players in the system keep working towards disturbing the equilibrium conditions. Perfect market is the stated goal of the simple economic system, with its most important components, the supply and the demand, trying to equal each other in the long run. But during this period, the players in the market disturb the initial conditions by several means such as inducing technology, introducing substitutes, differentiating the products etc., which makes the seemingly achievable perfect market an impossible dream. Therefore, the notion of sustainable competitive advantage is only fleeting in nature.
The analysis used for solving the market problems with an assumption of stable systems is thus not valid. There appears to be ample evidence that much of our industrial and economic systems exhibit behaviours characterised by instability. Mathematical economics and management science have often been more closely allied to formal mathematics than to economics or management. The difference of orientation is glaringly evident on comparison of business literature with publications on management science. Further evidence of the bias towards mathematical rather than managerial motivation is seen in the preoccupation with optimum solutions. In the linear analysis, the first action that is performed is to define the objective function. Thus the purpose of a model of an economic system is taken to be its ability to predict specific future action. Further, it is used to validate the model. Models are required to predict the character and the nature of the system in question so that redesign could take place in congruence with the desired state. This is entirely different and more useful than the objective functions, which provide the events such as specific future times of peaks or valleys such as in a sales curve. It is a belief that a model must be limited to considering those variables which have generally accepted definitions and objective values attached to them. Many undefined concepts, known as soft variables, are of crucial importance to business systems. Linear models are not capable of capturing these details in the traditional methodology of problem solving. If the subjective matters are considered to be of crucial importance to the business system behaviour, it must be conceded that they must somehow be incorporated in the model. Therefore, it is necessary to provide legi", "title": "" }, { "docid": "6416eb9235954730b8788b7b744d9e5b", "text": "This paper presents a machine learning based handover management scheme for LTE to improve the Quality of Experience (QoE) of the user in the presence of obstacles. We show that, in this scenario, a state-of-the-art handover algorithm is unable to select the appropriate target cell for handover, since it always selects the target cell with the strongest signal without taking into account the perceived QoE of the user after the handover. In contrast, our scheme learns from past experience how the QoE of the user is affected when the handover was done to a certain eNB. Our performance evaluation shows that the proposed scheme substantially improves the number of completed downloads and the average download time compared to state-of-the-art. Furthermore, its performance is close to an optimal approach in the coverage region affected by an obstacle.", "title": "" } ]
scidocsrr
d509b37e02c7bac38510425ee7e46dd1
Appearance-based gaze estimation in the wild
[ { "docid": "3ce39c23ef5be4dd8fd10152ded95a6e", "text": "Head pose and eye location for gaze estimation have been separately studied in numerous works in the literature. Previous research shows that satisfactory accuracy in head pose and eye location estimation can be achieved in constrained settings. However, in the presence of nonfrontal faces, eye locators are not adequate to accurately locate the center of the eyes. On the other hand, head pose estimation techniques are able to deal with these conditions; hence, they may be suited to enhance the accuracy of eye localization. Therefore, in this paper, a hybrid scheme is proposed to combine head pose and eye location information to obtain enhanced gaze estimation. To this end, the transformation matrix obtained from the head pose is used to normalize the eye regions, and in turn, the transformation matrix generated by the found eye location is used to correct the pose estimation procedure. The scheme is designed to enhance the accuracy of eye location estimations, particularly in low-resolution videos, to extend the operative range of the eye locators, and to improve the accuracy of the head pose tracker. These enhanced estimations are then combined to obtain a novel visual gaze estimation system, which uses both eye location and head information to refine the gaze estimates. From the experimental results, it can be derived that the proposed unified scheme improves the accuracy of eye estimations by 16% to 23%. Furthermore, it considerably extends its operating range by more than 15° by overcoming the problems introduced by extreme head poses. Moreover, the accuracy of the head pose tracker is improved by 12% to 24%. Finally, the experimentation on the proposed combined gaze estimation system shows that it is accurate (with a mean error between 2° and 5°) and that it can be used in cases where classic approaches would fail without imposing restraints on the position of the head.", "title": "" } ]
[ { "docid": "4650411615ad68be9596e5de3c0613f1", "text": "Based on the limitations of traditional English class, an English listening class was designed by Edmodo platform through making use of the advantages of flipped classroom. On this class, students will carry out online autonomous learning before class, teacher will guide students learning collaboratively in class, as well as after-school reflection and summary will be realized. By analyzing teaching effect on flipped classroom, it can provide reference and teaching model for English listening classes in local universities.", "title": "" }, { "docid": "e78d88143d6a83ab5f43f06e406e5326", "text": "The mother–infant bond provides the foundation for the infant's future mental health and adaptation and depends on the provision of species-typical maternal behaviors that are supported by neuroendocrine and motivation-affective neural systems. Animal research has demonstrated that natural variations in patterns of maternal care chart discrete profiles of maternal brain–behavior relationships that uniquely shape the infant's lifetime capacities for stress regulation and social affiliation. Such patterns of maternal care are mediated by the neuropeptide Oxytocin and by stress- and reward-related neural systems. Human studies have similarly shown that maternal synchrony—the coordination of maternal behavior with infant signals—and intrusiveness—the excessive expression of maternal behavior—describe distinct and stable maternal styles that bear long-term consequences for infant well-being. To integrate brain, hormones, and behavior in the study of maternal–infant bonding, we examined the fMRI responses of synchronous vs intrusive mothers to dynamic, ecologically valid infant videos and their correlations with plasma Oxytocin. In all, 23 mothers were videotaped at home interacting with their infants and plasma OT assayed. Sessions were micro-coded for synchrony and intrusiveness. Mothers were scanned while observing several own and standard infant-related vignettes. Synchronous mothers showed greater activations in the left nucleus accumbens (NAcc) and intrusive mothers exhibited higher activations in the right amygdala. Functional connectivity analysis revealed that among synchronous mothers, left NAcc and right amygdala were functionally correlated with emotion modulation, theory-of-mind, and empathy networks. Among intrusive mothers, left NAcc and right amygdala were functionally correlated with pro-action areas. Sorting points into neighborhood (SPIN) analysis demonstrated that in the synchronous group, left NAcc and right amygdala activations showed clearer organization across time, whereas among intrusive mothers, activations of these nuclei exhibited greater cross-time disorganization. Correlations between Oxytocin with left NAcc and right amygdala activations were found only in the synchronous group. Well-adapted parenting appears to be underlay by reward-related motivational mechanisms, temporal organization, and affiliation hormones, whereas anxious parenting is likely mediated by stress-related mechanisms and greater neural disorganization. Assessing the integration of motivation and social networks into unified neural activity that reflects variations in patterns of parental care may prove useful for the study of optimal vs high-risk parenting.", "title": "" }, { "docid": "6e30761b695e22a29f98a051dbccac6f", "text": "This paper explores the use of clickthrough data for query spelling correction. 
First, large amounts of query-correction pairs are derived by analyzing users' query reformulation behavior encoded in the clickthrough data. Then, a phrase-based error model that accounts for the transformation probability between multi-term phrases is trained and integrated into a query speller system. Experiments are carried out on a human-labeled data set. Results show that the system using the phrase-based error model outperforms significantly its baseline systems.", "title": "" }, { "docid": "ed72c4d4bd7b4e063ebddf75127bb7db", "text": "Microfabrication of graphene devices used in many experimental studies currently relies on the fact that graphene crystallites can be visualized using optical microscopy if prepared on top of Si wafers with a certain thickness of SiO2. The authors study graphene’s visibility and show that it depends strongly on both thickness of SiO2 and light wavelength. They have found that by using monochromatic illumination, graphene can be isolated for any SiO2 thickness, albeit 300 nm the current standard and, especially, 100 nm are most suitable for its visual detection. By using a Fresnel-law-based model, they quantitatively describe the experimental data. © 2007 American Institute of Physics. DOI: 10.1063/1.2768624", "title": "" }, { "docid": "c5231a58c294d8580723070e638d3f44", "text": "This study employed Aaker's brand personality framework to empirically investigate the personality of denim jeans brands and to examine the impact of brand personality on consumer satisfaction and brand loyalty based on data collected from 474 college students. Results revealed that the personality of denim jeans brands can be described in six dimensions with 51 personality traits: attractiveness, practicality, ruggedness, flexibility, friendliness, and honesty. The results indicated that consumers associate particular brand personality dimensions with denim jeans brands. Also, the various dimensions of brand personality have different effects on consumer satisfaction and consumer brand loyalty.", "title": "" }, { "docid": "11afe3e3e94ca2ec411f38bf1b0b2e82", "text": "The requirements engineering program at Siemens Corporate Research has been involved with process improvement, training and project execution across many of the Siemens operating companies. We have been able to observe and assist with process improvement in mainly global software development efforts. Other researchers have reported extensively on various aspects of distributed requirements engineering, but issues specific to organizational structure have not been well categorized. Our experience has been that organizational and other management issues can overshadow technical problems caused by globalization. This paper describes some of the different organizational structures we have encountered, the problems introduced into requirements engineering processes by these structures, and techniques that were effective in mitigating some of the negative effects of global software development.", "title": "" }, { "docid": "c3f81c5e4b162564b15be399b2d24750", "text": "Although memory performance benefits from the spacing of information at encoding, judgments of learning (JOLs) are often not sensitive to the benefits of spacing. The present research examines how practice, feedback, and instruction influence JOLs for spaced and massed items. 
In Experiment 1, in which JOLs were made after the presentation of each item and participants were given multiple study-test cycles, JOLs were strongly influenced by the repetition of the items, but there was little difference in JOLs for massed versus spaced items. A similar effect was shown in Experiments 2 and 3, in which participants scored their own recall performance and were given feedback, although participants did learn to assign higher JOLs to spaced items with task experience. In Experiment 4, after participants were given direct instruction about the benefits of spacing, they showed a greater difference for JOLs of spaced vs massed items, but their JOLs still underestimated their recall for spaced items. Although spacing effects are very robust and have important implications for memory and education, people often underestimate the benefits of spaced repetition when learning, possibly due to the reliance on processing fluency during study and attending to repetition, and not taking into account the beneficial aspects of study schedule.", "title": "" }, { "docid": "fe2594f98faa2ceda8b2c25bddc722d1", "text": "This study aimed at investigating the effect of a suggested EFL Flipped Classroom Teaching Model (EFL-FCTM) on graduate students' English higher-order thinking skills (HOTS), engagement and satisfaction. Also, it investigated the relationship between higher-order thinking skills, engagement and satisfaction. The sample comprised (67) graduate female students; an experimental group (N=33) and a control group (N=34), studying an English course at Taif University, KSA. The study used mixed method design; a pre-post HOTS test was carried out and two 5-Likert scale questionnaires had been designed and distributed; an engagement scale and a satisfaction scale. The findings of the study revealed statistically significant differences between the two group in HOTS in favor of the experimental group. Also, there was significant difference between the pre and post administration of the engagement scale in favor of the post administration. Moreover, students satisfaction on the (EFL-FCTM) was high. Finally, there were high significant relationships between HOTS and student engagement, HOTS and satisfaction and between student engagement and satisfaction.", "title": "" }, { "docid": "ab2e9a230c9aeec350dff6e3d239c7d8", "text": "Expression and pose variations are major challenges for reliable face recognition (FR) in 2D. In this paper, we aim to endow state of the art face recognition SDKs with robustness to facial expression variations and pose changes by using an extended 3D Morphable Model (3DMM) which isolates identity variations from those due to facial expressions. Specifically, given a probe with expression, a novel view of the face is generated where the pose is rectified and the expression neutralized. We present two methods of expression neutralization. The first one uses prior knowledge to infer the neutral expression image from an input image. The second method, specifically designed for verification, is based on the transfer of the gallery face expression to the probe. 
Experiments using rectified and neutralized view with a standard commercial FR SDK on two 2D face databases, namely Multi-PIE and AR, show significant performance improvement of the commercial SDK to deal with expression and pose variations and demonstrates the effectiveness of the proposed approach.", "title": "" }, { "docid": "dc3495ec93462e68f606246205a8416d", "text": "State-of-the-art methods for zero-shot visual recognition formulate learning as a joint embedding problem of images and side information. In these formulations the current best complement to visual features are attributes: manually-encoded vectors describing shared characteristics among categories. Despite good performance, attributes have limitations: (1) finer-grained recognition requires commensurately more attributes, and (2) attributes do not provide a natural language interface. We propose to overcome these limitations by training neural language models from scratch, i.e. without pre-training and only consuming words and characters. Our proposed models train end-to-end to align with the fine-grained and category-specific content of images. Natural language provides a flexible and compact way of encoding only the salient visual aspects for distinguishing categories. By training on raw text, our model can do inference on raw text as well, providing humans a familiar mode both for annotation and retrieval. Our model achieves strong performance on zero-shot text-based image retrieval and significantly outperforms the attribute-based state-of-the-art for zero-shot classification on the Caltech-UCSD Birds 200-2011 dataset.", "title": "" }, { "docid": "64cefd949f61afe81fbbb9ca1159dd4a", "text": "Single carrier frequency division multiple access (SC-FDMA), which utilizes single carrier modulation and frequency domain equalization is a technique that has similar performance and essentially the same overall complexity as those of OFDM, in which high peak-to-average power ratio (PAPR) is a major drawback. An outstanding advantage of SC-FDMA is its lower PAPR due to its single carrier structure. In this paper, we analyze the PAPR of SC-FDMA signals with pulse shaping. We analytically derive the time domain SC-FDMA signals and numerically compare PAPR characteristics using the complementary cumulative distribution function (CCDF) of PAPR. The results show that SC-FDMA signals indeed have lower PAPR compared to those of OFDMA. Comparing the two forms of SC-FDMA, we find that localized FDMA (LFDMA) has higher PAPR than interleaved FDMA (IFDMA) but somewhat lower PAPR than OFDMA. Also noticeable is the fact that pulse shaping increases PAPR", "title": "" }, { "docid": "4b09424630d5e27f1ed32b5798674595", "text": "Tampering detection has been increasingly attracting attention in the field of digital forensics. As a popular nonlinear smoothing filter, median filtering is often used as a post-processing operation after image forgeries such as copy-paste forgery (including copy-move and image splicing), which is of particular interest to researchers. To implement the blind detection of median filtering, this paper proposes a novel approach based on a frequency-domain feature coined the annular accumulated points (AAP). 
Experimental results obtained on widely used databases, which consists of various real-world photos, show that the proposed method achieves outstanding performance in distinguishing median-filtered images from original images or images that have undergone other types of manipulations, especially in the scenarios of low resolution and JPEG compression with a low quality factor. Moreover, our approach remains reliable even when the feature dimension decreases to 5, which is significant to save the computing time required for classification, demonstrating its great advantage to be applied in real-time processing of big multimedia data.", "title": "" }, { "docid": "6f4fe7bc805c4b635d6c201d8ea1f53c", "text": "In this paper we focus on the automatic identification of bird species from their audio recorded song. Bird monitoring is important to perform several tasks, such as to evaluate the quality of their living environment or to monitor dangerous situations to planes caused by birds near airports. We deal with the bird species identification problem using signal processing and machine learning techniques. First, features are extracted from the bird recorded songs using specific audio treatment, next the problem is performed according to a classical machine learning scenario, where a labeled database of previously known bird songs are employed to create a decision procedure that is used to predict the species of a new bird song. Experiments are conducted in a dataset of recorded songs of bird species which appear in a specific region. The experimental results compare the performance obtained in different situations, encompassing the complete audio signals, as recorded in the field, and short audio segments (pulses) obtained from the signals by a split procedure. The influence of the number of classes (bird species) in the identification accuracy is also evaluated.", "title": "" }, { "docid": "7696178f143665fa726706e39b133cb8", "text": "This article describes the essential components of oral health information systems for the analysis of trends in oral disease and the evaluation of oral health programmes at the country, regional and global levels. Standard methodology for the collection of epidemiological data on oral health has been designed by WHO and used by countries worldwide for the surveillance of oral disease and health. Global, regional and national oral health databanks have highlighted the changing patterns of oral disease which primarily reflect changing risk profiles and the implementation of oral health programmes oriented towards disease prevention and health promotion. The WHO Oral Health Country/Area Profile Programme (CAPP) provides data on oral health from countries, as well as programme experiences and ideas targeted to oral health professionals, policy-makers, health planners, researchers and the general public. WHO has developed global and regional oral health databanks for surveillance, and international projects have designed oral health indicators for use in oral health information systems for assessing the quality of oral health care and surveillance systems. Modern oral health information systems are being developed within the framework of the WHO STEPwise approach to surveillance of noncommunicable, chronic disease, and data stored in the WHO Global InfoBase may allow advanced health systems research. 
Sound knowledge about progress made in prevention of oral and chronic disease and in health promotion may assist countries to implement effective public health programmes to the benefit of the poor and disadvantaged population groups worldwide.", "title": "" }, { "docid": "553de71fcc3e4e6660015632eee751b1", "text": "Data governance is an emerging research area getting attention from information systems (IS) scholars and practitioners. In this paper I take a look at existing literature and the current state of the art in data governance. I found that there is only a limited amount of existing scientific literature, but many practitioners are already treating data as a valuable corporate asset. The paper describes an action design research project that will be conducted in 2012-2016 and is expected to result in a generic data governance framework.", "title": "" }, { "docid": "fa246c15531c6426cccaf4d216dc8375", "text": "Proboscis lateralis is a rare craniofacial malformation characterized by absence of the nasal cavity on one side with a trunk-like nasal appendage protruding from the superomedial portion of the ipsilateral orbit. High-resolution computed tomography and magnetic resonance imaging are extremely useful in evaluating this congenital condition and the wide spectrum of associated anomalies occurring in the surrounding anatomical regions and brain. We present a case of proboscis lateralis in a 2-year-old girl with associated ipsilateral sinonasal aplasia, orbital cyst, absent olfactory bulb and olfactory tract. Absence of the ipsilateral olfactory pathway in this rare disorder has been documented on high-resolution computed tomography and magnetic resonance imaging by us for the first time in the English medical literature.", "title": "" }, { "docid": "fe8d20422454f095c5a14bce3523748d", "text": "This paper puts forward a glass crack detection algorithm based on digital image processing technology. Identification information about glass surface cracks is obtained by applying pre-processing, image segmentation, and feature extraction to the glass crack image, and the roundness index calculated from the target area and perimeter is used to judge whether the image contains a crack. A crack detection system covering each part of this process was developed in the Visual Basic 6.0 programming language.", "title": "" }, { "docid": "53afafd2fc1087989a975675ff4098d8", "text": "The sixth generation of IEEE 802.11 wireless local area networks is under development in Task Group 802.11ax. One main novel physical layer (PHY) feature in the IEEE 802.11ax amendment is the specification of orthogonal frequency division multiplexing (OFDM) uplink multi-user multiple-input multiple-output (UL MU-MIMO) techniques. A challenging issue in implementing UL MU-MIMO in the OFDM PHY is the mitigation of the relative carrier frequency offset (CFO), which can cause intercarrier interference and rotation of the constellation of received symbols and, consequently, degrade the system performance dramatically if it is not properly mitigated. In this paper, we show that a frequency domain CFO estimation and correction scheme implemented at both the transmitter (Tx) and receiver (Rx), coupled with a pre-compensation approach at the Tx, can decrease the negative effects of the relative CFO.", "title": "" }, { "docid": "51da4d5923b30db560227155edd0621d", "text": "The fifth generation (5G) wireless development initiative is based upon 4G, which at present is struggling to meet its performance goals. 
The comparison between 3G and 4G wireless communication systems in relation to their architecture, speed, frequency bands, switching design basis and forward error correction is studied, and it was found that their performance is still unable to solve the persistent problems of poor coverage, bad interconnectivity, poor quality of service and limited flexibility. An ideal 5G model to accommodate the challenges and shortfalls of 3G and 4G deployments is discussed, as well as the significant system improvements over the earlier wireless technologies. The radio channel propagation characteristics for 4G and 5G systems are discussed. Major advantages of the 5G network in providing myriad services to end users (personalization, terminal and network heterogeneity, intelligent networking and network convergence, among other benefits) are highlighted. The significance of the study lies in enabling fast and effective connection and communication of devices like mobile phones and computers, including the capability of supporting highly flexible network connectivity.", "title": "" }, { "docid": "7b552767a37a7d63591471195b2e002b", "text": "Point-of-interest (POI) recommendation, which helps mobile users explore new places, has become an important location-based service. Existing approaches for POI recommendation have been mainly focused on exploiting the information about user preferences, social influence, and geographical influence. However, these approaches cannot handle the scenario where users are expecting to have POI recommendation for a specific time period. To this end, in this paper, we propose a unified recommender system, named the 'Where and When to gO' (WWO) recommender system, to integrate the user interests and their evolving sequential preferences with temporal interval assessment. As a result, the WWO system can make recommendations dynamically for a specific time period and the traditional POI recommender system can be treated as the special case of the WWO system by setting this time period long enough. Specifically, to quantify users' sequential preferences, we consider the distributions of the temporal intervals between dependent POIs in the historical check-in sequences. Then, to estimate the distributions with only sparse observations, we develop the low-rank graph construction model, which identifies a set of bi-weighted graph bases so as to learn the static user preferences and the dynamic sequential preferences in a coherent way. Finally, we evaluate the proposed approach using real-world data sets from several location-based social networks (LBSNs). The experimental results show that our method outperforms the state-of-the-art approaches for POI recommendation in terms of various metrics, such as F-measure and NDCG, with a significant margin.", "title": "" } ]
scidocsrr
e1640b20b57f2db83b41db76947416dc
Data Mining in the Dark : Darknet Intelligence Automation
[ { "docid": "22bdd2c36ef72da312eb992b17302fbe", "text": "In this paper, we present an operational system for cyber threat intelligence gathering from various social platforms on the Internet particularly sites on the darknet and deepnet. We focus our attention to collecting information from hacker forum discussions and marketplaces offering products and services focusing on malicious hacking. We have developed an operational system for obtaining information from these sites for the purposes of identifying emerging cyber threats. Currently, this system collects on average 305 high-quality cyber threat warnings each week. These threat warnings include information on newly developed malware and exploits that have not yet been deployed in a cyber-attack. This provides a significant service to cyber-defenders. The system is significantly augmented through the use of various data mining and machine learning techniques. With the use of machine learning models, we are able to recall 92% of products in marketplaces and 80% of discussions on forums relating to malicious hacking with high precision. We perform preliminary analysis on the data collected, demonstrating its application to aid a security expert for better threat analysis.", "title": "" }, { "docid": "6d31ee4b0ad91e6500c5b8c7e3eaa0ca", "text": "A host of tools and techniques are now available for data mining on the Internet. The explosion in social media usage and people reporting brings a new range of problems related to trust and credibility. Traditional media monitoring systems have now reached such sophistication that real time situation monitoring is possible. The challenge though is deciding what reports to believe, how to index them and how to process the data. Vested interests allow groups to exploit both social media and traditional media reports for propaganda purposes. The importance of collecting reports from all sides in a conflict and of balancing claims and counter-claims becomes more important as ease of publishing increases. Today the challenge is no longer accessing open source information but in the tagging, indexing, archiving and analysis of the information. This requires the development of general-purpose and domain specific knowledge bases. Intelligence tools are needed which allow an analyst to rapidly access relevant data covering an evolving situation, ranking sources covering both facts and opinions.", "title": "" } ]
[ { "docid": "a854ee8cf82c4bd107e93ed0e70ee543", "text": "Although the memorial benefits of testing are well established empirically, the mechanisms underlying this benefit are not well understood. The authors evaluated the mediator shift hypothesis, which states that test-restudy practice is beneficial for memory because retrieval failures during practice allow individuals to evaluate the effectiveness of mediators and to shift from less effective to more effective mediators. Across a series of experiments, participants used a keyword encoding strategy to learn word pairs with test-restudy practice or restudy only. Robust testing effects were obtained in all experiments, and results supported predictions of the mediator shift hypothesis. First, a greater proportion of keyword shifts occurred during test-restudy practice versus restudy practice. Second, a greater proportion of keyword shifts occurred after retrieval failure trials versus retrieval success trials during test-restudy practice. Third, a greater proportion of keywords were recalled on a final keyword recall test after test-restudy versus restudy practice.", "title": "" }, { "docid": "bc6877a5a83531a794ac1c8f7a4c7362", "text": "A number of times when using cross-validation (CV) while trying to do classification/probability estimation we have observed surprisingly low AUC's on real data with very few positive examples. AUC is the area under the ROC and measures the ranking ability and corresponds to the probability that a positive example receives a higher model score than a negative example. Intuition seems to suggest that no reasonable methodology should ever result in a model with an AUC significantly below 0.5. The focus of this paper is not on the estimator properties of CV (bias/variance/significance), but rather on the properties of the 'holdout' predictions based on which the CV performance of a model is calculated. We show that CV creates predictions that have an 'inverse' ranking with AUC well below 0.25 using features that were initially entirely unpredictive and models that can only perform monotonic transformations. In the extreme, combining CV with bagging (repeated averaging of out-of-sample predictions) generates 'holdout' predictions with perfectly opposite rankings on random data. While this would raise immediate suspicion upon inspection, we would like to caution the data mining community against using CV for stacking or in currently popular ensemble methods. They can reverse the predictions by assigning negative weights and produce in the end a model that appears to have close to perfect predictability while in reality the data was random.", "title": "" }, { "docid": "a33486dfec199cd51e885d6163082a96", "text": "In this study, the aim is to examine the most popular eSport applications at a global scale. In this context, the App Store and Google Play Store application platforms which have the highest number of users at a global scale were focused on. For this reason, the eSport applications included in these two platforms constituted the sampling of the present study. A data collection form was developed by the researcher of the study in order to collect the data in the study. This form included the number of the countries, the popularity ratings of the application, the name of the application, the type of it, the age limit, the rating of the likes, the company that developed it, the version and the first appearance date. 
The study was conducted with the Qualitative Research Method, and the Case Study design was made use of in this process; and the Descriptive Analysis Method was used to analyze the data. As a result of the study, it was determined that the most popular eSport applications at a global scale were football, which ranked the first, basketball, billiards, badminton, skateboarding, golf and dart. It was also determined that the popularity of the mobile eSport applications changed according to countries and according to being free or paid. It was determined that the popularity of these applications differed according to the individuals using the App Store and Google Play Store application markets. As a result, it is possible to claim that mobile eSport applications have a wide usage area at a global scale and are accepted widely. In addition, it was observed that the interest in eSport applications was similar to that in traditional sports. However, in the present study, a certain date was set, and the interest in mobile eSport applications was analyzed according to this specific date. In future studies, different dates and different fields like educational sciences may be set to analyze the interest in mobile eSport applications. In this way, findings may be obtained on the change of the interest in mobile eSport applications according to time. The findings of the present study and similar studies may have the quality of guiding researchers and system/software developers in terms of showing the present status of the topic and revealing the relevant needs.", "title": "" }, { "docid": "7394f3000da8af0d4a2b33fed4f05264", "text": "We often base our decisions on uncertain data - for instance, when consulting the weather forecast before deciding what to wear. Due to their uncertainty, such forecasts can differ by provider. To make an informed decision, many people compare several forecasts, which is a time-consuming and cumbersome task. To facilitate comparison, we identified three aggregation mechanisms for forecasts: manual comparison and two mechanisms of computational aggregation. In a survey, we compared the mechanisms using different representations. We then developed a weather application to evaluate the most promising candidates in a real-world study. Our results show that aggregation increases users' confidence in uncertain data, independent of the type of representation. Further, we find that for daily events, users prefer to use computationally aggregated forecasts. However, for high-stakes events, they prefer manual comparison. We discuss how our findings inform the design of improved interfaces for comparison of uncertain data, including non-weather purposes.", "title": "" }, { "docid": "2216f853543186e73b1149bb5a0de297", "text": "Scaffolds have been utilized in tissue regeneration to facilitate the formation and maturation of new tissues or organs where a balance between temporary mechanical support and mass transport (degradation and cell growth) is ideally achieved. Polymers have been widely chosen as tissue scaffolding material having a good combination of biodegradability, biocompatibility, and porous structure. Metals that can degrade in physiological environment, namely, biodegradable metals, are proposed as potential materials for hard tissue scaffolding where biodegradable polymers are often considered as having poor mechanical properties. 
Biodegradable metal scaffolds have shown interesting mechanical properties close to those of human bone, with tailored degradation behaviour. Current promising fabrication techniques for making scaffolds, such as the computation-aided solid free-form method, can be easily applied to metals. With further optimization of topologically ordered porosity designs exploiting material properties and fabrication techniques, porous biodegradable metals could be potential materials for making hard tissue scaffolds.", "title": "" }, { "docid": "501f9cb511e820c881c389171487f0b4", "text": "An omnidirectional circularly polarized (CP) antenna array is proposed. The antenna array is composed of four identical CP antenna elements and one parallel strip-line feeding network. Each of the CP antenna elements comprises a dipole and a zero-phase-shift (ZPS) line loop. The in-phase fed dipole and the ZPS line loop generate vertically and horizontally polarized omnidirectional radiation, respectively. Furthermore, the vertically polarized dipole is positioned in the center of the horizontally polarized ZPS line loop. The size of the loop is designed such that a 90° phase difference is realized between the two orthogonal components because of the spatial difference and, therefore, generates CP omnidirectional radiation. A 1 × 4 antenna array at 900 MHz is prototyped and targeted to ultra-high frequency (UHF) radio frequency identification (RFID) applications. The measurement results show that the antenna array achieves a 10-dB return loss over a frequency range of 900-935 MHz and a 3-dB axial-ratio (AR) from 890 to 930 MHz. At the frequency of 915 MHz, a measured maximum AR of 1.53 dB, a maximum gain of 5.4 dBic, and an omnidirectionality of ±1 dB are achieved.", "title": "" }, { "docid": "58d19a5460ce1f830f7a5e2cb1c5ebca", "text": "In multi-source sequence-to-sequence tasks, the attention mechanism can be modeled in several ways. This topic has been thoroughly studied on recurrent architectures. In this paper, we extend the previous work to the encoder-decoder attention in the Transformer architecture. We propose four different input combination strategies for the encoder-decoder attention: serial, parallel, flat, and hierarchical. We evaluate our methods on tasks of multimodal translation and translation with multiple source languages. The experiments show that the models are able to use multiple sources and improve over single source baselines.", "title": "" }, { "docid": "54bdabea83e86d21213801c990c60f4d", "text": "A method of depicting crew climate using a group diagram based on behavioral ratings is described. Behavioral ratings were made of twelve three-person professional airline cockpit crews in full-mission simulations. These crews had been part of an earlier study in which captains had been grouped into three personality types, based on pencil and paper pre-tests. We found that low error rates were related to group climate variables as well as positive captain behaviors.", "title": "" }, { "docid": "b5babae9b9bcae4f87f5fe02459936de", "text": "The study evaluated the effects of formocresol (FC), ferric sulphate (FS), calcium hydroxide (Ca[OH](2)), and mineral trioxide aggregate (MTA) as pulp dressing agents in pulpotomized primary molars. Sixteen children each with at least four primary molars requiring pulpotomy were selected. Eighty selected teeth were divided into four groups and treated with one of the pulpotomy agents. 
The children were recalled for clinical and radiographic examination every 6 months during 2 years of follow-up. Eleven children with 56 teeth arrived for clinical and radiographic follow-up evaluation at 24 months. The follow-up evaluations revealed that the success rate was 76.9% for FC, 73.3% for FS, 46.1% for Ca(OH)(2), and 66.6% for MTA. In conclusion, Ca(OH)(2)is less appropriate for primary teeth pulpotomies than the other pulpotomy agents. FC and FS appeared to be superior to the other agents. However, there was no statistically significant difference between the groups.", "title": "" }, { "docid": "19b8acf4e5c68842a02e3250c346d09b", "text": "A dual-band dual-polarized microstrip antenna array for an advanced multi-function radio function concept (AMRFC) radar application operating at S and X-bands is proposed. Two stacked planar arrays with three different thin substrates (RT/Duroid 5880 substrates with εr=2.2 and three different thicknesses of 0.253 mm, 0.508 mm and 0.762 mm) are integrated to provide simultaneous operation at S band (3~3.3 GHz) and X band (9~11 GHz). To allow similar scan ranges for both bands, the S-band elements are selected as perforated patches to enable the placement of the X-band elements within them. Square patches are used as the radiating elements for the X-band. Good agreement exists between the simulated and the measured results. The measured impedance bandwidth (VSWR≤2) of the prototype array reaches 9.5 % and 25 % for the Sand X-bands, respectively. The measured isolation between the two orthogonal polarizations for both bands is better than 15 dB. The measured cross-polarization level is ≤—21 dB for the S-band and ≤—20 dB for the X-band.", "title": "" }, { "docid": "fe903498e0c3345d7e5ebc8bf3407c2f", "text": "This paper describes a general continuous-time framework for visual-inertial simultaneous localization and mapping and calibration. We show how to use a spline parameterization that closely matches the torque-minimal motion of the sensor. Compared to traditional discrete-time solutions, the continuous-time formulation is particularly useful for solving problems with high-frame rate sensors and multiple unsynchronized devices. We demonstrate the applicability of the method for multi-sensor visual-inertial SLAM and calibration by accurately establishing the relative pose and internal parameters of multiple unsynchronized devices. We also show the advantages of the approach through evaluation and uniform treatment of both global and rolling shutter cameras within visual and visual-inertial SLAM systems.", "title": "" }, { "docid": "07a6de40826f4c5bab4a8b8c51aba080", "text": "Prior studies on alternative work schedules have focused primarily on the main effects of compressed work weeks and shift work on individual outcomes. This study explores the combined effects of alternative and preferred work schedules on nurses' satisfaction with their work schedules, perceived patient care quality, and interferences with their personal lives.", "title": "" }, { "docid": "62ff5888ad0c8065097603da8ff79cd6", "text": "Modern Internet systems often combine different applications (e.g., DNS, web, and database), span different administrative domains, and function in the context of network mechanisms like tunnels, VPNs, NATs, and overlays. Diagnosing these complex systems is a daunting challenge. 
Although many diagnostic tools exist, they are typically designed for a specific layer (e.g., traceroute) or application, and there is currently no tool for reconstructing a comprehensive view of service behavior. In this paper we propose X-Trace, a tracing framework that provides such a comprehensive view for systems that adopt it. We have implemented X-Trace in several protocols and software systems, and we discuss how it works in three deployed scenarios: DNS resolution, a three-tiered photo-hosting website, and a service accessed through an overlay network.", "title": "" }, { "docid": "3910a3317ea9ff4ea6c621e562b1accc", "text": "Compaction of agricultural soils is a concern for many agricultural soil scientists and farmers since soil compaction, due to heavy field traffic, has resulted in yield reduction of most agronomic crops throughout the world. Soil compaction is a physical form of soil degradation that alters soil structure, limits water and air infiltration, and reduces root penetration in the soil. Consequences of soil compaction are still underestimated. A complete understanding of processes involved in soil compaction is necessary to meet the future global challenge of food security. We review here the advances in understanding, quantification, and prediction of the effects of soil compaction. We found the following major points: (1) When a soil is exposed to a vehicular traffic load, soil water contents, soil texture and structure, and soil organic matter are the three main factors which determine the degree of compactness in that soil. (2) Soil compaction has direct effects on soil physical properties such as bulk density, strength, and porosity; therefore, these parameters can be used to quantify the soil compactness. (3) Modified soil physical properties due to soil compaction can alter elements mobility and change nitrogen and carbon cycles in favour of more emissions of greenhouse gases under wet conditions. (4) Severe soil compaction induces root deformation, stunted shoot growth, late germination, low germination rate, and high mortality rate. (5) Soil compaction decreases soil biodiversity by decreasing microbial biomass, enzymatic activity, soil fauna, and ground flora. (6) Boussinesq equations and finite element method models, that predict the effects of the soil compaction, are restricted to elastic domain and do not consider existence of preferential paths of stress propagation and localization of deformation in compacted soils. (7) Recent advances in physics of granular media and soil mechanics relevant to soil compaction should be used to progress in modelling soil compaction.", "title": "" }, { "docid": "263c04402cfe80649b1d3f4a8578e99b", "text": "This paper presents M3Express (Modular-Mobile-Multirobot), a new design for a low-cost modular robot. The robot is self-mobile, with three independently driven wheels that also serve as connectors. The new connectors can be automatically operated, and are based on stationary magnets coupled to mechanically actuated ferromagnetic yoke pieces. Extensive use is made of plastic castings, laser cut plastic sheets, and low-cost motors and electronic components. Modules interface with a host PC via Bluetooth® radio. An off-board camera, along with a set of modules and a control PC form a convenient, low-cost system for rapidly developing and testing control algorithms for modular reconfigurable robots. 
Experimental results demonstrate mechanical docking, connector strength, and accuracy of dead reckoning locomotion.", "title": "" }, { "docid": "06755f8680ee8b43e0b3d512b4435de4", "text": "Stacked autoencoders (SAEs), as part of the deep learning (DL) framework, have been recently proposed for feature extraction in hyperspectral remote sensing. With the help of hidden nodes in deep layers, a high-level abstraction is achieved for data reduction whilst maintaining the key information of the data. As hidden nodes in SAEs have to deal simultaneously with hundreds of features from hypercubes as inputs, this increases the complexity of the process and leads to limited abstraction and performance. As such, segmented SAE (S-SAE) is proposed by confronting the original features into smaller data segments, which are separately processed by different smaller SAEs. This has resulted in reduced complexity but improved efficacy of data abstraction and accuracy of data classification.", "title": "" }, { "docid": "cc9f566eb8ef891d76c1c4eee7e22d47", "text": "In this study, a hybrid artificial intelligent (AI) system integrating neural network and expert system is proposed to support foreign exchange (forex) trading decisions. In this system, a neural network is used to predict the forex price in terms of quantitative data, while an expert system is used to handle qualitative factor and to provide forex trading decision suggestions for traders incorporating experts' knowledge and the neural network's results. The effectiveness of the proposed hybrid AI system is illustrated by simulation experiments", "title": "" }, { "docid": "3b5340113d583b138834119614046151", "text": "This paper presents the recent advancements in the control of multiple-degree-of-freedom hydraulic robotic manipulators. A literature review is performed on their control, covering both free-space and constrained motions of serial and parallel manipulators. Stability-guaranteed control system design is the primary requirement for all control systems. Thus, this paper pays special attention to such systems. An objective evaluation of the effectiveness of different methods and the state of the art in a given field is one of the cornerstones of scientific research and progress. For this purpose, the maximum position tracking error <inline-formula><tex-math notation=\"LaTeX\">$|e|_{\\rm max}$</tex-math></inline-formula> and a performance indicator <inline-formula><tex-math notation=\"LaTeX\">$\\rho$ </tex-math></inline-formula> (the ratio of <inline-formula><tex-math notation=\"LaTeX\">$|e|_{\\rm max}$</tex-math> </inline-formula> with respect to the maximum velocity) are used to evaluate and benchmark different free-space control methods in the literature. These indicators showed that stability-guaranteed nonlinear model based control designs have resulted in the most advanced control performance. In addition to stable closed-loop control, lack of energy efficiency is another significant challenge in hydraulic robotic systems. This paper pays special attention to these challenges in hydraulic robotic systems and discusses their reciprocal contradiction. Potential solutions to improve the system energy efficiency without control performance deterioration are discussed. 
Finally, for hydraulic robotic systems, open problems are defined and future trends are projected.", "title": "" }, { "docid": "3ea021309fd2e729ffced7657e3a6038", "text": "Physiological and pharmacological research undertaken on sloths during the past 30 years is comprehensively reviewed. This includes the numerous studies carried out upon the respiratory and cardiovascular systems, anesthesia, blood chemistry, neuromuscular responses, the brain and spinal cord, vision, sleeping and waking, water balance and kidney function and reproduction. Similarities and differences between the physiology of sloths and that of other mammals are discussed in detail.", "title": "" }, { "docid": "637e73416c1a6412eeeae63e1c73c2c3", "text": "Disgust, an emotion related to avoiding harmful substances, has been linked to moral judgments in many behavioral studies. However, the fact that participants report feelings of disgust when thinking about feces and a heinous crime does not necessarily indicate that the same mechanisms mediate these reactions. Humans might instead have separate neural and physiological systems guiding aversive behaviors and judgments across different domains. The present interdisciplinary study used functional magnetic resonance imaging (n = 50) and behavioral assessment to investigate the biological homology of pathogen-related and moral disgust. We provide evidence that pathogen-related and sociomoral acts entrain many common as well as unique brain networks. We also investigated whether morality itself is composed of distinct neural and behavioral subdomains. We provide evidence that, despite their tendency to elicit similar ratings of moral wrongness, incestuous and nonsexual immoral acts entrain dramatically separate, while still overlapping, brain networks. These results (i) provide support for the view that the biological response of disgust is intimately tied to immorality, (ii) demonstrate that there are at least three separate domains of disgust, and (iii) suggest strongly that morality, like disgust, is not a unified psychological or neurological phenomenon.", "title": "" } ]
scidocsrr
af2ea562f86464b226a770038a6a57b4
Automatic Liver Lesion Segmentation Using A Deep Convolutional Neural Network Method
[ { "docid": "257afbcb213cd7c1733bb31fea4aa25d", "text": "Automatic segmentation of the liver and its lesion is an important step towards deriving quantitative biomarkers for accurate clinical diagnosis and computer-aided decision support systems. This paper presents a method to automatically segment liver and lesions in CT abdomen images using cascaded fully convolutional neural networks (CFCNs) and dense 3D conditional random fields (CRFs). We train and cascade two FCNs for a combined segmentation of the liver and its lesions. In the first step, we train a FCN to segment the liver as ROI input for a second FCN. The second FCN solely segments lesions from the predicted liver ROIs of step 1. We refine the segmentations of the CFCN using a dense 3D CRF that accounts for both spatial coherence and appearance. CFCN models were trained in a 2-fold cross-validation on the abdominal CT dataset 3DIRCAD comprising 15 hepatic tumor volumes. Our results show that CFCN-based semantic liver and lesion segmentation achieves Dice scores over 94% for liver with computation times below 100s per volume. We experimentally demonstrate the robustness of the proposed method as a decision support system with a high accuracy and speed for usage in daily clinical routine.", "title": "" } ]
[ { "docid": "848c8ffaa9d58430fbdebd0e9694d531", "text": "This paper presents an application for studying the death records of WW2 casualties from a prosopograhical perspective, provided by the various local military cemeteries where the dead were buried. The idea is to provide the end user with a global visual map view on the places in which the casualties were buried as well as with a local historical perspective on what happened to the casualties that lay within a particular cemetery of a village or town. Plenty of data exists about the Second World War (WW2), but the data is typically archived in unconnected, isolated silos in different organizations. This makes it difficult to track down, visualize, and study information that is contained within multiple distinct datasets. In our work, this problem is solved using aggregated Linked Open Data provided by the WarSampo Data Service and SPARQL endpoint.", "title": "" }, { "docid": "611fdf1451bdd5c683c5be00f46460b8", "text": "Modeling a complex system is almost invariably a challenging task. The incorporation of experimental observations can be used to improve the quality of a model and thus to obtain better predictions about the behavior of the corresponding system. This approach, however, is affected by a variety of different errors, especially when a system simultaneously populates an ensemble of different states and experimental data are measured as averages over such states. To address this problem, we present a Bayesian inference method, called “metainference,” that is able to deal with errors in experimental measurements and with experimental measurements averaged over multiple states. To achieve this goal, metainference models a finite sample of the distribution of models using a replica approach, in the spirit of the replica-averaging modeling based on the maximum entropy principle. To illustrate the method, we present its application to a heterogeneous model system and to the determination of an ensemble of structures corresponding to the thermal fluctuations of a protein molecule. Metainference thus provides an approach to modeling complex systems with heterogeneous components and interconverting between different states by taking into account all possible sources of errors.", "title": "" }, { "docid": "ea2e03fc8e273e9d3627086ce4bd6bde", "text": "Augmented Reality (AR), a concept where the real word is being enhanced with computer generated objects and text, has evolved and become a popular tool to communicate information through. Research on how the technique can be optimized regarding the technical aspects has been made, but not regarding how typography in three dimensions should be designed and used in AR applications. Therefore this master’s thesis investigates three different design attributes of typography in three dimensions. The three attributes are: typeface style, color, and weight including depth, and how they affect the visibility of the text in an indoor AR environment. A user study was conducted, both with regular users but also with users that were considered experts in the field of typography and design, to investigate differences of the visibility regarding the typography’s design attributes. The result shows noteworthy differences between two pairs of AR simulations containing different typography among the regular users. This along with a slight favoritism of bright colored text against dark colored text, even though no notable different could be seen regarding color alone. 
The thesis also discusses how the design attributes of the typography affect the legibility of the text, and what could have been done differently to achieve an even more conclusive result. To summarize, the thesis resulted in design guidelines regarding typography for indoor mobile AR applications. (Original Swedish title: Skapande och användande av 3D-typografi i mobila Augmented Reality-applikationer för inomhusbruk, i.e., Creation and use of 3D typography in mobile Augmented Reality applications for indoor use.)", "title": "" }, { "docid": "58f6247a0958bf0087620921c99103b1", "text": "This paper addresses an information-theoretic aspect of k-means and spectral clustering. First, we revisit k-means clustering and show that its objective function is approximately derived from the minimum entropy principle when Renyi's quadratic entropy is used. Then we present a maximum within-clustering association that is derived using a quadratic distance measure in the framework of the minimum entropy principle, which is very similar to a class of spectral clustering algorithms based on the eigen-decomposition method.", "title": "" }, { "docid": "0857e32201b675c3e971c6caba8d2087", "text": "Western tonal music relies on a formal geometric structure that determines distance relationships within a harmonic or tonal space. In functional magnetic resonance imaging experiments, we identified an area in the rostromedial prefrontal cortex that tracks activation in tonal space. Different voxels in this area exhibited selectivity for different keys. Within the same set of consistently activated voxels, the topography of tonality selectivity rearranged itself across scanning sessions. The tonality structure was thus maintained as a dynamic topography in cortical areas known to be at a nexus of cognitive, affective, and mnemonic processing.", "title": "" }, { "docid": "6cb0c739d4cb0b8d59f17d2d37cb5caa", "text": "In this work, a context-based multisensor system, applied to pedestrian detection in urban environments, is presented. The proposed system comprises three main processing modules: (i) a LIDAR-based module acting as the primary object detector, (ii) a module which supplies the system with contextual information obtained from a semantic map of the roads, and (iii) an image-based detection module, using sliding-window detectors, with the role of validating the presence of pedestrians in regions of interest (ROIs) generated by the LIDAR module. A Bayesian strategy is used to combine information from sensors on-board the vehicle (‘local’ information) with information contained in a digital map of the roads (‘global’ information). To support experimental analysis, a multisensor dataset, named the Laser and Image Pedestrian Detection dataset (LIPD), is used. The LIPD dataset was collected in an urban environment, in daylight conditions, using an electrical vehicle driven at low speed. A down-sampling method, using support vectors extracted from multiple linear-SVMs, was used to reduce the cardinality of the training set and, as a consequence, to decrease the CPU-time during the training process of image-based classifiers. The performance of the system is evaluated, in terms of true positive rate and false positives per frame, using three image-detectors: a linear-SVM, an SVM-cascade, and a benchmark method. Additionally, experiments are performed to assess the impact of contextual information on the performance of the detection system.", "title": "" }, { "docid": "5132cf4fdbe55a47214f66738599df78", "text": "Users may strive to formulate an adequate textual query for their information need. 
Search engines assist the users by presenting query suggestions. To preserve the original search intent, suggestions should be context-aware and account for the previous queries issued by the user. Achieving context awareness is challenging due to data sparsity. We present a novel hierarchical recurrent encoder-decoder architecture that makes possible to account for sequences of previous queries of arbitrary lengths. As a result, our suggestions are sensitive to the order of queries in the context while avoiding data sparsity. Additionally, our model can suggest for rare, or long-tail, queries. The produced suggestions are synthetic and are sampled one word at a time, using computationally cheap decoding techniques. This is in contrast to current synthetic suggestion models relying upon machine learning pipelines and hand-engineered feature sets. Results show that our model outperforms existing context-aware approaches in a next query prediction setting. In addition to query suggestion, our architecture is general enough to be used in a variety of other applications.", "title": "" }, { "docid": "090af7b180f3e9d289d158f8ee385da9", "text": "Natural medicines were the only option for the prevention and treatment of human diseases for thousands of years. Natural products are important sources for drug development. The amounts of bioactive natural products in natural medicines are always fairly low. Today, it is very crucial to develop effective and selective methods for the extraction and isolation of those bioactive natural products. This paper intends to provide a comprehensive view of a variety of methods used in the extraction and isolation of natural products. This paper also presents the advantage, disadvantage and practical examples of conventional and modern techniques involved in natural products research.", "title": "" }, { "docid": "78e8f84224549b75584c59591a8febef", "text": "Our goal is to design architectures that retain the groundbreaking performance of Convolutional Neural Networks (CNNs) for landmark localization and at the same time are lightweight, compact and suitable for applications with limited computational resources. To this end, we make the following contributions: (a) we are the first to study the effect of neural network binarization on localization tasks, namely human pose estimation and face alignment. We exhaustively evaluate various design choices, identify performance bottlenecks, and more importantly propose multiple orthogonal ways to boost performance. (b) Based on our analysis, we propose a novel hierarchical, parallel and multi-scale residual architecture that yields large performance improvement over the standard bottleneck block while having the same number of parameters, thus bridging the gap between the original network and its binarized counterpart. (c) We perform a large number of ablation studies that shed light on the properties and the performance of the proposed block. (d) We present results for experiments on the most challenging datasets for human pose estimation and face alignment, reporting in many cases state-of-the-art performance. (e) We further provide additional results for the problem of facial part segmentation. Code can be downloaded from https://www.adrianbulat.com/binary-cnn-landmarks.", "title": "" }, { "docid": "c9a18fc3919462cc232b0840a4844ae2", "text": "Systematic gene expression analyses provide comprehensive information about the transcriptional response to different environmental and developmental conditions. 
With enough gene expression data points, computational biologists may eventually generate predictive computer models of transcription regulation. Such models will require computational methodologies consistent with the behavior of known biological systems that remain tractable. We represent regulatory relationships between genes as linear coefficients or weights, with the \"net\" regulation influence on a gene's expression being the mathematical summation of the independent regulatory inputs. Test regulatory networks generated with this approach display stable and cyclically stable gene expression levels, consistent with known biological systems. We include variables to model the effect of environmental conditions on transcription regulation and observed various alterations in gene expression patterns in response to environmental input. Finally, we use a derivation of this model system to predict the regulatory network from simulated input/output data sets and find that it accurately predicts all components of the model, even with noisy expression data.", "title": "" }, { "docid": "388101f40ff79f2543b111aad96c4180", "text": "Based on available literature, ecology and economy of light emitting diode (LED) lights in plant foods production were assessed and compared to high pressure sodium (HPS) and compact fluorescent light (CFL) lamps. The assessment summarises that LEDs are superior compared to other lamp types. LEDs are ideal in luminous efficiency, life span and electricity usage. Mercury, carbon dioxide and heat emissions are also lowest in comparison to HPS and CFL lamps. This indicates that LEDs are indeed economic and eco-friendly lighting devices. The present review indicates also that LEDs have many practical benefits compared to other lamp types. In addition, they are applicable in many purposes in plant foods production. The main focus of the review is the targeted use of LEDs in order to enrich phytochemicals in plants. This is an expedient to massive improvement in production efficiency, since it diminishes the number of plants per phytochemical unit. Consequently, any other production costs (e.g. growing space, water, nutrient and transport) may be reduced markedly. Finally, 24 research articles published between 2013 and 2017 were reviewed for targeted use of LEDs in the specific, i.e. blue range (400-500 nm) of spectrum. The articles indicate that blue light is efficient in enhancing the accumulation of health beneficial phytochemicals in various species. The finding is important for global food production. © 2017 Society of Chemical Industry.", "title": "" }, { "docid": "ad091e4f66adb26d36abfc40377ee6ab", "text": "This chapter provides a self-contained first introduction to description logics (DLs). The main concepts and features are explained with examples before syntax and semantics of the DL SROIQ are defined in detail. Additional sections review light-weight DL languages, discuss the relationship to the Web Ontology Language OWL and give pointers to further reading.", "title": "" }, { "docid": "60c36aa871aaa3a13ac3b51dbb12b668", "text": "We propose a novel approach for multi-view object detection in 3D scenes reconstructed from RGB-D sensor. We utilize shape based representation using local shape context descriptors along with the voting strategy which is supported by unsupervised object proposals generated from 3D point cloud data. 
Our algorithm starts with single-view object detection where object proposals generated in 3D space are combined with object-specific hypotheses generated by the voting strategy. To tackle the multi-view setting, data association between multiple views is enabled by view registration and 3D object proposals. The evidence from multiple views is combined in a simple Bayesian setting. The approach is evaluated on the Washington RGB-D scenes datasets [1], [2] containing several classes of objects in a table-top setting. We evaluated our approach against other state-of-the-art methods and demonstrated superior performance on the same dataset.", "title": "" }, { "docid": "2214493b373886c02f67ad9e411cfe66", "text": "We identify emerging phenomena of distributed liveness, involving new relationships among performers, audiences, and technology. Liveness is a recent, technology-based construct, which refers to experiencing an event in real-time with the possibility for shared social realities. Distributed liveness entails multiple forms of physical, spatial, and social co-presence between performers and audiences across physical and virtual spaces. We interviewed expert performers about how they experience liveness in physically co-present and distributed settings. Findings show that distributed performances and technology need to support flexible social co-presence and new methods for sensing subtle audience responses and conveying engagement abstractly.", "title": "" }, { "docid": "eebca83626e8568e8b92019541466873", "text": "There is a need for new spectrum access protocols that are opportunistic, flexible and efficient, yet fair. Game theory provides a framework for analyzing spectrum access, a problem that involves complex distributed decisions by independent spectrum users. We develop a cooperative game theory model to analyze a scenario where nodes in a multi-hop wireless network need to agree on a fair allocation of spectrum. We show that in high interference environments, the utility space of the game is non-convex, which may make some optimal allocations unachievable with pure strategies. However, we show that as the number of channels available increases, the utility space becomes close to convex and thus optimal allocations become achievable with pure strategies. We propose the use of the Nash Bargaining Solution and show that it achieves a good compromise between fairness and efficiency, using a small number of channels. Finally, we propose a distributed algorithm for spectrum sharing and show that it achieves allocations reasonably close to the Nash Bargaining Solution.", "title": "" }, { "docid": "c7eca96393cfd88bda265fb9bcaa4630", "text": "According to the World Health Organization, around 28–35% of people aged 65 and older fall each year. This number increases to around 32–42% for people over 70 years old. For this reason, this research targets the exploration of the role of Convolutional Neural Networks (CNN) in human fall detection. There are a number of current solutions related to fall detection; however, their detection accuracy remains low. Although CNN has proven a powerful technique for image recognition problems, and the CNN library in Matlab was designed to work with either images or matrices, this research explored how to apply CNN to streaming sensor data, collected from Body Sensor Networks (BSN), in order to improve the fall detection accuracy. The idea of this research is that, given the stream data sets as input, we convert them into images before applying CNN. 
The final accuracy result achieved is, to the best of our knowledge, the highest compared to other proposed methods: 92.3%.", "title": "" }, { "docid": "1b11df93de6688a4176a7ad88232918a", "text": "Classification of data is difficult if the data is imbalanced and classes are overlapping. In recent years, more research has started to focus on classification of imbalanced data since real-world data is often skewed. Traditional methods are more successful with classifying the class that has the most samples (majority class) compared to the other classes (minority classes). For the classification of imbalanced data sets, different methods are available, although each has some advantages and shortcomings. In this study, we propose a new hierarchical decomposition method for imbalanced data sets which is different from previously proposed solutions to the class imbalance problem. Additionally, it does not require any data pre-processing step as many other solutions do. The new method is based on clustering and outlier detection. The hierarchy is constructed using the similarity of labeled data subsets at each level of the hierarchy, with different levels being built by different data and feature subsets. Clustering is used to partition the data while outlier detection is utilized to detect minority class samples. The comparison of the proposed method with state-of-the-art methods using 20 public imbalanced data sets and 181 synthetic data sets showed that the proposed method's classification performance is better than that of the state-of-the-art methods. It is especially successful if the minority class is sparser than the majority class. It has accurate performance even when classes have sub-varieties and minority and majority classes are overlapping. Moreover, its performance is also good when the class imbalance ratio is low, i.e. classes are more imbalanced.", "title": "" }, { "docid": "dc867c305130e728aaaa00fef5b8b688", "text": "Large-scale surveillance video analysis is one of the most important components in the future artificial intelligence city. It is a very challenging but practical system, consisting of multiple functionalities such as object detection, tracking, identification and behavior analysis. In this paper, we try to address three tasks hosted in the NVIDIA AI City Challenge contest. First, a system that transforms image coordinates to world coordinates has been proposed, which is useful for estimating vehicle speed on the road. Second, anomalies like car crash events and stalled vehicles can be found by the proposed anomaly detector framework. Third, the multiple-camera vehicle re-identification problem has been investigated and a matching algorithm is explained. All these tasks are based on our proposed online single camera multiple object tracking (MOT) system, which has been evaluated on the widely used MOT16 challenge benchmark. We show that it achieves the best performance compared to the state-of-the-art methods. Besides MOT, we evaluate the proposed vehicle re-identification model on the VeRi-776 dataset, and it outperforms all other methods by a large margin.", "title": "" }, { "docid": "bbecbf907a81e988379fe61d8d8f9f17", "text": "In this paper, we address the problem of visual question answering by proposing a novel model, called VIBIKNet. Our model is based on integrating Kernelized Convolutional Neural Networks and Long Short-Term Memory units to generate an answer given a question about an image. 
We prove that VIBIKNet is an optimal trade-off between accuracy and computational load, in terms of memory and time consumption. We validate our method on the VQA challenge dataset and compare it to the top-performing methods in order to illustrate its performance and speed.", "title": "" }, { "docid": "7eb278200f80d5827b94cada79e54ac2", "text": "Thanks to the development of mobile mapping systems (MMS), street object recognition, classification, modelling and related studies have become hot topics recently. There has been increasing interest in detecting changes between mobile laser scanning (MLS) point clouds in complex urban areas. A method based on the consistency between the occupancies of space computed from different datasets is proposed. First, the occupancy of scan rays (empty, occupied, unknown) is defined while considering the accuracy of measurement and registration. Then the occupancies of scan rays are fused using the Weighted Dempster–Shafer theory (WDST). Finally, the consistency between different datasets is obtained by comparing the occupancy at points from one dataset with the fused occupancy of neighbouring rays from the other dataset. Change detection results are compared with a conventional point-to-triangle (PTT) distance method. Changes at the point level are detected fully automatically. The proposed approach makes it possible to detect changes at large scales in urban scenes with fine detail and, more importantly, to distinguish real changes from occlusions.", "title": "" } ]
scidocsrr
290db40768e847f187e056e0fa70c177
A Pattern-Based Approach for Multi-Class Sentiment Analysis in Twitter
[ { "docid": "6a96678b14ec12cb4bb3db4e1c4c6d4e", "text": "Emoticons are widely used to express positive or negative sentiment on Twitter. We report on a study with live users to determine whether emoticons are used to merely emphasize the sentiment of tweets, or whether they are the main elements carrying the sentiment. We found that the sentiment of an emoticon is in substantial agreement with the sentiment of the entire tweet. Thus, emoticons are useful as predictors of tweet sentiment and should not be ignored in sentiment classification. However, the sentiment expressed by an emoticon agrees with the sentiment of the accompanying text only slightly better than random. Thus, using the text accompanying emoticons to train sentiment models is not likely to produce the best results, a fact that we show by comparing lexicons generated using emoticons with others generated using simple textual features.", "title": "" }, { "docid": "e59d1a3936f880233001eb086032d927", "text": "In microblogging services such as Twitter, the users may become overwhelmed by the raw data. One solution to this problem is the classification of short text messages. As short texts do not provide sufficient word occurrences, traditional classification methods such as \"Bag-Of-Words\" have limitations. To address this problem, we propose to use a small set of domain-specific features extracted from the author's profile and text. The proposed approach effectively classifies the text to a predefined set of generic classes such as News, Events, Opinions, Deals, and Private Messages.", "title": "" }, { "docid": "6f4479d224c1546040bee39d50eaba55", "text": "Bag-of-words (BOW) is now the most popular way to model text in statistical machine learning approaches in sentiment analysis. However, the performance of BOW sometimes remains limited due to some fundamental deficiencies in handling the polarity shift problem. We propose a model called dual sentiment analysis (DSA), to address this problem for sentiment classification. We first propose a novel data expansion technique by creating a sentiment-reversed review for each training and test review. On this basis, we propose a dual training algorithm to make use of original and reversed training reviews in pairs for learning a sentiment classifier, and a dual prediction algorithm to classify the test reviews by considering two sides of one review. We also extend the DSA framework from polarity (positive-negative) classification to 3-class (positive-negative-neutral) classification, by taking the neutral reviews into consideration. Finally, we develop a corpus-based method to construct a pseudo-antonym dictionary, which removes DSA's dependency on an external antonym dictionary for review reversion. We conduct a wide range of experiments including two tasks, nine datasets, two antonym dictionaries, three classification algorithms, and two types of features. The results demonstrate the effectiveness of DSA in supervised sentiment classification.", "title": "" } ]
[ { "docid": "0f11d0d1047a79ee63896f382ae03078", "text": "Much of the visual cortex is organized into visual field maps: nearby neurons have receptive fields at nearby locations in the image. Mammalian species generally have multiple visual field maps with each species having similar, but not identical, maps. The introduction of functional magnetic resonance imaging made it possible to identify visual field maps in human cortex, including several near (1) medial occipital (V1,V2,V3), (2) lateral occipital (LO-1,LO-2, hMT+), (3) ventral occipital (hV4, VO-1, VO-2), (4) dorsal occipital (V3A, V3B), and (5) posterior parietal cortex (IPS-0 to IPS-4). Evidence is accumulating for additional maps, including some in the frontal lobe. Cortical maps are arranged into clusters in which several maps have parallel eccentricity representations, while the angular representations within a cluster alternate in visual field sign. Visual field maps have been linked to functional and perceptual properties of the visual system at various spatial scales, ranging from the level of individual maps to map clusters to dorsal-ventral streams. We survey recent measurements of human visual field maps, describe hypotheses about the function and relationships between maps, and consider methods to improve map measurements and characterize the response properties of neurons comprising these maps.", "title": "" }, { "docid": "d4954bab5fc4988141c509a6d6ab79db", "text": "Recent advances in neural autoregressive models have improve the performance of speech synthesis (SS). However, as they lack the ability to model global characteristics of speech (such as speaker individualities or speaking styles), particularly when these characteristics have not been labeled, making neural autoregressive SS systems more expressive is still an open issue. In this paper, we propose to combine VoiceLoop, an autoregressive SS model, with Variational Autoencoder (VAE). This approach, unlike traditional autoregressive SS systems, uses VAE to model the global characteristics explicitly, enabling the expressiveness of the synthesized speech to be controlled in an unsupervised manner. Experiments using the VCTK and Blizzard2012 datasets show the VAE helps VoiceLoop to generate higher quality speech and to control the expressions in its synthesized speech by incorporating global characteristics into the speech generating process.", "title": "" }, { "docid": "62d23e00d13903246cc7128fe45adf12", "text": "The uncomputable parts of thinking (if there are any) can be studied in much the same spirit that Turing (1950) suggested for the study of its computable parts. We can develop precise accounts of cognitive processes that, although they involve more than computing, can still be modelled on the machines we call ‘computers’. In this paper, I want to suggest some ways that this might be done, using ideas from the mathematical theory of uncomputability (or Recursion Theory). And I want to suggest some uses to which the resulting models might be put. (The reader more interested in the models and their uses than the mathematics and its theorems, might want to skim or skip the mathematical parts.)", "title": "" }, { "docid": "8fd97add7e3b48bad9fd82dc01422e59", "text": "Anaerobic nitrate-dependent Fe(II) oxidation is widespread in various environments and is known to be performed by both heterotrophic and autotrophic microorganisms. 
Although Fe(II) oxidation is predominantly biological under acidic conditions, to date most of the studies on nitrate-dependent Fe(II) oxidation were from environments of circumneutral pH. The present study was conducted in Lake Grosse Fuchskuhle, a moderately acidic ecosystem receiving humic acids from an adjacent bog, with the objective of identifying, characterizing and enumerating the microorganisms responsible for this process. The incubations of sediment under chemolithotrophic nitrate-dependent Fe(II)-oxidizing conditions have shown the enrichment of TM3 group of uncultured Actinobacteria. A time-course experiment done on these Actinobacteria showed a consumption of Fe(II) and nitrate in accordance with the expected stoichiometry (1:0.2) required for nitrate-dependent Fe(II) oxidation. Quantifications done by most probable number showed the presence of 1 × 104 autotrophic and 1 × 107 heterotrophic nitrate-dependent Fe(II) oxidizers per gram fresh weight of sediment. The analysis of microbial community by 16S rRNA gene amplicon pyrosequencing showed that these actinobacterial sequences correspond to ∼0.6% of bacterial 16S rRNA gene sequences. Stable isotope probing using 13CO2 was performed with the lake sediment and showed labeling of these Actinobacteria. This indicated that they might be important autotrophs in this environment. Although these Actinobacteria are not dominant members of the sediment microbial community, they could be of functional significance due to their contribution to the regeneration of Fe(III), which has a critical role as an electron acceptor for anaerobic microorganisms mineralizing sediment organic matter. To the best of our knowledge this is the first study to show the autotrophic nitrate-dependent Fe(II)-oxidizing nature of TM3 group of uncultured Actinobacteria.", "title": "" }, { "docid": "60e06e3eebafa9070eecf1ab1e9654f8", "text": "In most enterprises, databases are deployed on dedicated database servers. Often, these servers are underutilized much of the time. For example, in traces from almost 200 production servers from different organizations, we see an average CPU utilization of less than 4%. This unused capacity can be potentially harnessed to consolidate multiple databases on fewer machines, reducing hardware and operational costs. Virtual machine (VM) technology is one popular way to approach this problem. However, as we demonstrate in this paper, VMs fail to adequately support database consolidation, because databases place a unique and challenging set of demands on hardware resources, which are not well-suited to the assumptions made by VM-based consolidation.\n Instead, our system for database consolidation, named Kairos, uses novel techniques to measure the hardware requirements of database workloads, as well as models to predict the combined resource utilization of those workloads. We formalize the consolidation problem as a non-linear optimization program, aiming to minimize the number of servers and balance load, while achieving near-zero performance degradation. We compare Kairos against virtual machines, showing up to a factor of 12× higher throughput on a TPC-C-like benchmark. 
We also tested the effectiveness of our approach on real-world data collected from production servers at Wikia.com, Wikipedia, Second Life, and MIT CSAIL, showing absolute consolidation ratios ranging between 5.5:1 and 17:1.", "title": "" }, { "docid": "c85c3ef7100714d6d08f726aa8768bb9", "text": "An adaptive Kalman filter algorithm is adopted to estimate the state of charge (SOC) of a lithium-ion battery for application in electric vehicles (EVs). Generally, the Kalman filter algorithm is selected to dynamically estimate the SOC. However, it easily causes divergence due to the uncertainty of the battery model and system noise. To obtain a better convergent and robust result, an adaptive Kalman filter algorithm that can greatly improve the dependence of the traditional filter algorithm on the battery model is employed. In this paper, the typical characteristics of the lithium-ion battery are analyzed by experiment, such as hysteresis, polarization, Coulomb efficiency, etc. In addition, an improved Thevenin battery model is achieved by adding an extra RC branch to the Thevenin model, and model parameters are identified by using the extended Kalman filter (EKF) algorithm. Further, an adaptive EKF (AEKF) algorithm is adopted to the SOC estimation of the lithium-ion battery. Finally, the proposed method is evaluated by experiments with federal urban driving schedules. The proposed SOC estimation using AEKF is more accurate and reliable than that using EKF. The comparison shows that the maximum SOC estimation error decreases from 14.96% to 2.54% and that the mean SOC estimation error reduces from 3.19% to 1.06%.", "title": "" }, { "docid": "cc7a9ea0641544182f2d56e7414617c3", "text": "Findings showed that the nonconscious activation of a goal in memory led to increased positive implicit attitudes toward stimuli that could facilitate the goal. This evaluative readiness to pursue the nonconscious goal emerged even when participants were consciously unaware of the goal-relevant stimuli. The effect emerged the most strongly for those with some skill at the goal and for those for whom the goal was most currently important. The effect of implicit goal activation on implicit attitudes emerged in both an immediate condition as well as a delay condition, suggesting that a goal rather than a nonmotivational construct was activated. Participants' implicit attitudes toward a nonconscious goal also predicted their goal-relevant behavior. These findings suggest that people can become evaluatively ready to pursue a goal whenever it has been activated--a readiness that apparently does not require conscious awareness or deliberation about either the goal or the goal-relevant stimuli. Theoretical implications of this type of implicit goal readiness are discussed.", "title": "" }, { "docid": "367782d15691c3c1dfd25220643752f0", "text": "Music streaming services increasingly incorporate additional music taxonomies (i.e., mood, activity, and genre) to provide users different ways to browse through music collections. However, these additional taxonomies can distract the user from reaching their music goal, and influence choice satisfaction. We conducted an online user study with an application called \"Tune-A-Find,\" where we measured participants' music taxonomy choice (mood, activity, and genre). Among 297 participants, we found that the chosen taxonomy is related to personality traits. 
We found that openness to experience increased the choice for browsing music by mood, while conscientiousness increased the choice for browsing music by activity. In addition, those high in neuroticism were most likely to browse for music by activity or genre. Our findings can support music streaming services to further personalize user interfaces. By knowing the user's personality, the user interface can adapt to the user's preferred way of music browsing.", "title": "" }, { "docid": "e13fc2c9f5aafc6c8eb1909592c07a70", "text": "We introduce DropAll, a generalization of DropOut [1] and DropConnect [2], for regularization of fully-connected layers within convolutional neural networks. Applying these methods amounts to subsampling a neural network by dropping units. Training with DropOut, a randomly selected subset of activations are dropped, when training with DropConnect we drop a randomly subsets of weights. With DropAll we can perform both methods. We show the validity of our proposal by improving the classification error of networks trained with DropOut and DropConnect, on a common image classification dataset. To improve the classification, we also used a new method for combining networks, which was proposed in [3].", "title": "" }, { "docid": "4451f35b38f0b3af0ff006d8995b0265", "text": "Social media together with still growing social media communities has become a powerful and promising solution in crisis and emergency management. Previous crisis events have proved that social media and mobile technologies used by citizens (widely) and public services (to some extent) have contributed to the post-crisis relief efforts. The iSAR+ EU FP7 project aims at providing solutions empowering citizens and PPDR (Public Protection and Disaster Relief) organizations in online and mobile communications for the purpose of crisis management especially in search and rescue operations. This paper presents the results of survey aiming at identification of preliminary end-user requirements in the close interworking with end-users across Europe.", "title": "" }, { "docid": "be7cc41f9e8d3c9e08c5c5ff1ea79f59", "text": "A person’s emotions and state of mind are apparent in their face and eyes. As a Latin proverb states: “The face is the portrait of the mind; the eyes, its informers.”. This presents a huge challenge for computer graphics researchers in the generation of artificial entities that aim to replicate the movement and appearance of the human eye, which is so important in human-human interactions. This State of the Art Report provides an overview of the efforts made on tackling this challenging task. As with many topics in Computer Graphics, a cross-disciplinary approach is required to fully understand the workings of the eye in the transmission of information to the user. We discuss the movement of the eyeballs, eyelids, and the head from a physiological perspective and how these movements can be modelled, rendered and animated in computer graphics applications. Further, we present recent research from psychology and sociology that seeks to understand higher level behaviours, such as attention and eye-gaze, during the expression of emotion or during conversation, and how they are synthesised in Computer Graphics and", "title": "" }, { "docid": "4a609cf0c9f862f1c20155b239629b90", "text": "Intuitive access to information in habitual real-world environments is a challenge for information technology. 
An important question is how can we enhance established and well-functioning everyday environments rather than replace them by virtual environments (VEs)? Augmented reality (AR) technology has a lot of potential in this respect because it augments real-world environments with computer-generated imagery. Today, most AR systems use see-through head-mounted displays, which share most of the disadvantages of other head-attached display devices.", "title": "" }, { "docid": "f174469e907b60cd481da6b42bafa5f9", "text": "A static program checker that performs modular checking can check one program module for errors without needing to analyze the entire program. Modular checking requires that each module be accompanied by annotations that specify the module. To help reduce the cost of writing specifications, this paper presents Houdini, an annotation assistant for the modular checker ESC/Java. To infer suitable ESC/Java annotations for a given program, Houdini generates a large number of candidate annotations and uses ESC/Java to verify or refute each of these annotations. The paper describes the design, implementation, and preliminary evaluation of Houdini.", "title": "" }, { "docid": "17055a66f80354bf5a614a510a4ef689", "text": "People can recognize scenes across many different modalities beyond natural images. In this paper, we investigate how to learn cross-modal scene representations that transfer across modalities. To study this problem, we introduce a new cross-modal scene dataset. While convolutional neural networks can categorize cross-modal scenes well, they also learn an intermediate representation not aligned across modalities, which is undesirable for crossmodal transfer applications. We present methods to regularize cross-modal convolutional neural networks so that they have a shared representation that is agnostic of the modality. Our experiments suggest that our scene representation can help transfer representations across modalities for retrieval. Moreover, our visualizations suggest that units emerge in the shared representation that tend to activate on consistent concepts independently of the modality.", "title": "" }, { "docid": "70f672268ae0b3e0e344a4f515057e6b", "text": "Murder-suicide, homicide-suicide, and dyadic death all refer to an incident where a homicide is committed followed by the perpetrator's suicide almost immediately or soon after the homicide. Homicide-suicides are relatively uncommon and vary from region to region. In the selected literature that we reviewed, shooting was the common method of killing and suicide, and only 3 cases of homicidal hanging involving child victims were identified. We present a case of dyadic death where the method of killing and suicide was hanging, and the victim was a young woman.", "title": "" }, { "docid": "019854be19420ba5e6badcd9adbb7dea", "text": "We present a new shared-memory parallel algorithm and implementation called FASCIA for the problems of approximate sub graph counting and sub graph enumeration. The problem of sub graph counting refers to determining the frequency of occurrence of a given sub graph (or template) within a large network. This is a key graph analytic with applications in various domains. In bioinformatics, sub graph counting is used to detect and characterize local structure (motifs) in protein interaction networks. Exhaustive enumeration and exact counting is extremely compute-intensive, with running time growing exponentially with the number of vertices in the template. 
In this work, we apply the color coding technique to determine approximate counts of non-induced occurrences of the sub graph in the original network. Color coding gives a fixed-parameter algorithm for this problem, using a dynamic programming-based counting approach. Our new contributions are a multilevel shared-memory parallelization of the counting scheme and several optimizations to reduce the memory footprint. We show that approximate counts can be obtained for templates with up to 12 vertices, on networks with up to millions of vertices and edges. Prior work on this problem has only considered out-of-core parallelization on distributed platforms. With our new counting scheme, data layout optimizations, and multicore parallelism, we demonstrate a significant speedup over the current state-of-the-art for sub graph counting.", "title": "" }, { "docid": "ff24e5e100d26c9de2bde8ae8cd7fec4", "text": "The Global Positioning System (GPS) grows into a ubiquitous utility that provides positioning, navigation, and timing (PNT) services. As an essential element of the global information infrastructure, cyber security of GPS faces serious challenges. Some mission-critical systems even rely on GPS as a security measure. However, civilian GPS itself has no protection against malicious acts such as spoofing. GPS spoofing breaches authentication by forging satellite signals to mislead users with wrong location/timing data that threatens homeland security. In order to make civilian GPS secure and resilient for diverse applications, we must understand the nature of attacks. This paper proposes a novel attack modeling of GPS spoofing with event-driven simulation package. Simulation supplements usual experiments to limit incidental harms and to comprehend a surreptitious scenario. We also provide taxonomy of GPS spoofing through characterization. The work accelerates the development of defense technology against GPS-based attacks.", "title": "" }, { "docid": "411d3048bd13f48f0c31259c41ff2903", "text": "In computer vision, object detection is addressed as one of the most challenging problems as it is prone to localization and classification error. The current best-performing detectors are based on the technique of finding region proposals in order to localize objects. Despite having very good performance, these techniques are computationally expensive due to having large number of proposed regions. In this paper, we develop a high-confidence region-based object detection framework that boosts up the classification performance with less computational burden. In order to formulate our framework, we consider a deep network that activates the semantically meaningful regions in order to localize objects. These activated regions are used as input to a convolutional neural network (CNN) to extract deep features. With these features, we train a set of class-specific binary classifiers to predict the object labels. Our new region-based detection technique significantly reduces the computational complexity and improves the performance in object detection. We perform rigorous experiments on PASCAL, SUN, MIT-67 Indoor and MSRC datasets to demonstrate that our proposed framework outperforms other state-of-the-art methods in recognizing objects.", "title": "" }, { "docid": "5bb98a6655f823b38c3866e6d95471e9", "text": "This article describes the HR Management System in place at Sears. 
Key emphases of Sears' HR management infrastructure include: (1) formulating and communicating a corporate mission, vision, and goals, (2) employee education and development through the Sears University, (3) performance management and incentive compensation systems linked closely to the firm's strategy, (4) validated employee selection systems, and (5) delivering the \"HR Basics\" very competently. Key challenges for the future include: (1) maintaining momentum in the performance improvement process, (2) identifying barriers to success, and (3) clearly articulating HR's role in the change management process. © 1999 John Wiley & Sons, Inc.", "title": "" } ]
scidocsrr
1e92c8fb3c8b1435d830daeb255d9f41
DISTRIBUTED TRAINING
[ { "docid": "f2334ce1d717a8f6e91771f95a00b46e", "text": "High network communication cost for synchronizing gradients and parameters is the well-known bottleneck of distributed training. In this work, we propose TernGrad that uses ternary gradients to accelerate distributed deep learning in data parallelism. Our approach requires only three numerical levels {−1, 0, 1}, which can aggressively reduce the communication time. We mathematically prove the convergence of TernGrad under the assumption of a bound on gradients. Guided by the bound, we propose layer-wise ternarizing and gradient clipping to improve its convergence. Our experiments show that applying TernGrad on AlexNet doesn’t incur any accuracy loss and can even improve accuracy. The accuracy loss of GoogLeNet induced by TernGrad is less than 2% on average. Finally, a performance model is proposed to study the scalability of TernGrad. Experiments show significant speed gains for various deep neural networks. Our source code is available 1.", "title": "" }, { "docid": "6fdb3ae03e6443765c72197eb032f4a0", "text": "This dissertation describes a number of algorithms developed to increase the robustness of automatic speech recognition systems with respect to changes in the environment. These algorithms attempt to improve the recognition accuracy of speech recognition systems when they are trained and tested in different acoustical environments, and when a desk-top microphone (rather than a close-talking microphone) is used for speech input. Without such processing, mismatches between training and testing conditions produce an unacceptable degradation in recognition accuracy. Two kinds of environmental variability are introduced by the use of desk-top microphones and different training and testing conditions: additive noise and spectral tilt introduced by linear filtering. An important attribute of the novel compensation algorithms described in this thesis is that they provide joint rather than independent compensation for these two types of degradation. Acoustical compensation is applied in our algorithms as an additive correction in the cepstral domain. This allows a higher degree of integration within SPHINX, the Carnegie Mellon speech recognition system, that uses the cepstrum as its feature vector. Therefore, these algorithms can be implemented very efficiently. Processing in many of these algorithms is based on instantaneous signal-to-noise ratio (SNR), as the appropriate compensation represents a form of noise suppression at low SNRs and spectral equalization at high SNRs. The compensation vectors for additive noise and spectral transformations are estimated by minimizing the differences between speech feature vectors obtained from a \"standard\" training corpus of speech and feature vectors that represent the current acoustical environment. In our work this is accomplished by a minimizing the distortion of vector-quantized cepstra that are produced by the feature extraction module in SPHINX. In this dissertation we describe several algorithms including the SNR-Dependent Cepstral Normalization, (SDCN) and the Codeword-Dependent Cepstral Normalization (CDCN). With CDCN, the accuracy of SPHINX when trained on speech recorded with a close-talking microphone and tested on speech recorded with a desk-top microphone is essentially the same obtained when the system is trained and tested on speech from the desk-top microphone. 
An algorithm for frequency normalization has also been proposed in which the parameter of the bilinear transformation that is used by the signal-processing stage to produce frequency warping is adjusted for each new speaker and acoustical environment. The optimum value of this parameter is again chosen to minimize the vector-quantization distortion between the standard environment and the current one. In preliminary studies, use of this frequency normalization produced a moderate additional decrease in the observed error rate.", "title": "" } ]
[ { "docid": "538f1b131a9803db07ab20f202ecc96e", "text": "In this paper, we propose a direction-of-arrival (DOA) estimation method by combining multiple signal classification (MUSIC) of two decomposed linear arrays for the corresponding coprime array signal processing. The title “DECOM” means that, first, the nonlinear coprime array needs to be DECOMposed into two linear arrays, and second, Doa Estimation is obtained by COmbining the MUSIC results of the linear arrays, where the existence and uniqueness of the solution are proved. To reduce the computational complexity of DECOM, we design a two-phase adaptive spectrum search scheme, which includes a coarse spectrum search phase and then a fine spectrum search phase. Extensive simulations have been conducted and the results show that the DECOM can achieve accurate DOA estimation under different SNR conditions.", "title": "" }, { "docid": "6df12ee53551f4a3bd03bca4ca545bf1", "text": "We present a technique for automatically assigning a neuroanatomical label to each voxel in an MRI volume based on probabilistic information automatically estimated from a manually labeled training set. In contrast to existing segmentation procedures that only label a small number of tissue classes, the current method assigns one of 37 labels to each voxel, including left and right caudate, putamen, pallidum, thalamus, lateral ventricles, hippocampus, and amygdala. The classification technique employs a registration procedure that is robust to anatomical variability, including the ventricular enlargement typically associated with neurological diseases and aging. The technique is shown to be comparable in accuracy to manual labeling, and of sufficient sensitivity to robustly detect changes in the volume of noncortical structures that presage the onset of probable Alzheimer's disease.", "title": "" }, { "docid": "ecda448df7b28ea5e453c179206e91a4", "text": "The cloud infrastructure provider (CIP) in a cloud computing platform must provide security and isolation guarantees to a service provider (SP), who builds the service(s) for such a platform. We identify last level cache (LLC) sharing as one of the impediments to finer grain isolation required by a service, and advocate two resource management approaches to provide performance and security isolation in the shared cloud infrastructure - cache hierarchy aware core assignment and page coloring based cache partitioning. Experimental results demonstrate that these approaches are effective in isolating cache interference impacts a VM may have on another VM. We also incorporate these approaches in the resource management (RM) framework of our example cloud infrastructure, which enables the deployment of VMs with isolation enhanced SLAs.", "title": "" }, { "docid": "d130c6eed44a863e8c8e3bb9c392eb32", "text": "This study presents narrow-band measurements of the mobile vehicle-to-vehicle propagation channel at 5.9 GHz, under realistic suburban driving conditions in Pittsburgh, Pennsylvania. Our system includes differential Global Positioning System (DGPS) receivers, thereby enabling dynamic measurements of how large-scale path loss, Doppler spectrum, and coherence time depend on vehicle location and separation. A Nakagami distribution is used for describing the fading statistics. The speed-separation diagram is introduced as a new tool for analyzing and understanding the vehicle-to-vehicle propagation environment. 
We show that this diagram can be used to model and predict channel Doppler spread and coherence time using vehicle speed and separation.", "title": "" }, { "docid": "43c3d477fdadea837f74897facf496e4", "text": "Aerial robots provide valuable support in several high-risk scenarios thanks to their capability to quickly fly to locations dangerous or even inaccessible to humans. In order to fully benefit from these features, aerial robots should be easy to transport and rapid to deploy. With this aim, this paper focuses on the development of a novel pocket sized quadrotor with foldable arms. The quadrotor can be packaged for transportation by folding its arms around the main frame. Before flight, the quadrotor's arms self-deploy in 0.3 seconds thanks to the torque generated by the propellers. The paper describes the design strategies used for developing lightweight, stiff and self-deployable foldable arms for miniature quadrotors. The arms are manufactured according to an origami technique with a foldable multi-layer material. A prototype of the quadrotor is presented as a proof of concept and performance of the system is assessed.", "title": "" }, { "docid": "1ace2a8a8c6b4274ac0891e711d13190", "text": "Recent music information retrieval (MIR) research pays increasing attention to music classification based on moods expressed by music pieces. The first Audio Mood Classification (AMC) evaluation task was held in the 2007 running of the Music Information Retrieval Evaluation eXchange (MIREX). This paper describes important issues in setting up the task, including dataset construction and ground-truth labeling, and analyzes human assessments on the audio dataset, as well as system performances from various angles. Interesting findings include system performance differences with regard to mood clusters and the levels of agreement amongst human judgments regarding mood labeling. Based on these analyses, we summarize experiences learned from the first community scale evaluation of the AMC task and propose recommendations for future AMC and similar evaluation tasks.", "title": "" }, { "docid": "e0edda10185fcf75428d371116f37213", "text": "Building upon self-regulated learning theories, we examined the nature of student writing goals and the relationship of these writing goals to revision alone and in combination with two other important sources of students’ self-regulated revision—peer comments on their writing, and reflections for their own writing obtained from reviewing others’ writing. Data were obtained from a large introductory undergraduate class in the context of two 1000-word writing assignments involving online peer review and a required revision. We began with an investigation of students’ free response learning goals and a follow-up quantitative survey about the nature and structure of these writing goals. We found that: (a) students tended to create high-level substantive goals more often, (b) students change their writing goals across papers even for a very similar assignment, and (c) their writing goals divide into three dimensions: general writing goals, genre writing goals, and assignment goals. We then closely coded and analyzed the relative levels of association of revision changes with writing goals, peer comments, reflections from peer review, and combinations of these sources. 
Findings suggest that high-level revisions are commonly associated with writing goals, are especially likely to occur for combinations of the three sources, and peer comments alone appeared to make the largest contributions to revision.", "title": "" }, { "docid": "341b6ae3f5cf08b89fb573522ceeaba1", "text": "Neural parsers have benefited from automatically labeled data via dependencycontext word embeddings. We investigate training character embeddings on a word-based context in a similar way, showing that the simple method significantly improves state-of-the-art neural word segmentation models, beating tritraining baselines for leveraging autosegmented data.", "title": "" }, { "docid": "f028bf7bbaa4d182013771e9079b5e21", "text": "Hepatoblastoma (HB), a primary liver tumor in childhood, is often accompanied by alpha-fetoprotein (AFP) secretion, and sometimes by β-human chorionic gonadotropin hormone (β-hCG) secretion, and this can cause peripheral precocious puberty (PPP). We describe a case of PPP associated with HB. Laboratory tests showed an increase in AFP, β-hCG and testosterone values, and suppression of follicle-stimulating hormone and luteinizing hormone levels. After chemotherapy and surgery, AFP, β-hCG and testosterone levels normalized and signs of virilization did not progress further. The child did not show evidence for tumor recurrence after 16 months of follow-up. New therapeutic approaches and early diagnosis may ensure a better prognosis of virilizing HB, than reported in the past. Assessment of PPP should always take into account the possibility of a tumoral source.", "title": "" }, { "docid": "18beb6ddcc1c8bb3e45dbd56b34a8776", "text": "This paper discusses the minimization of the line- and motor-side harmonics in a high power current source drive system. The proposed control achieves speed regulation over the entire speed range with enhanced transient performance and minimal harmonic distortion of the line and motor currents while limiting the switching frequency of current source converters to a maximum of 540 Hz. To minimize the motor current harmonic distortion, space vector modulation (SVM) and selective harmonic elimination (SHE) schemes are optimally implemented according to different drive operating conditions. In order to suppress line- side resonant harmonics, an active damping method using a combination of a virtual harmonic resistor and a three-step modulation signal regulator is employed. The performance of the proposed current source drive is verified by simulation for a 1 MVA system and experiments on a 10 kVA gate-commutated thyristor (GCT) based laboratory drive system.", "title": "" }, { "docid": "648a1ff0ad5b2742ff54460555287c84", "text": "In the European academic and institutional debate, interoperability is predominantly seen as a means to enable public administrations to collaborate within Members State and across borders. The article presents a conceptual framework for ICT-enabled governance and analyses the role of interoperability in this regard. The article makes a specific reference to the exploratory research project carried out by the Information Society Unit of the Institute for Prospective Technological Studies (IPTS) of the European Commission’s Joint Research Centre on emerging ICT-enabled governance models in EU cities (EXPGOV). 
The aim of this project is to study the interplay between ICTs and governance processes at city level and formulate an interdisciplinary framework to assess the various dynamics emerging from the application of ICT-enabled service innovations in European cities. In this regard, the conceptual framework proposed in this article results from an action research perspective and investigation of e-governance experiences carried out in Europe. It aims to elicit the main value drivers that should orient how interoperable systems are implemented, considering the reciprocal influences that occur between these systems and different governance models in their specific context.", "title": "" }, { "docid": "6379d5330037a774f9ceed4c51bda1f6", "text": "Despite long-standing observations on diverse cytokinin actions, the discovery path to cytokinin signaling mechanisms was tortuous. Unyielding to conventional genetic screens, experimental innovations were paramount in unraveling the core cytokinin signaling circuitry, which employs a large repertoire of genes with overlapping and specific functions. The canonical two-component transcription circuitry involves His kinases that perceive cytokinin and initiate signaling, as well as His-to-Asp phosphorelay proteins that transfer phosphoryl groups to response regulators, transcriptional activators, or repressors. Recent advances have revealed the complex physiological functions of cytokinins, including interactions with auxin and other signal transduction pathways. This review begins by outlining the historical path to cytokinin discovery and then elucidates the diverse cytokinin functions and key signaling components. Highlights focus on the integration of cytokinin signaling components into regulatory networks in specific contexts, ranging from molecular, cellular, and developmental regulations in the embryo, root apical meristem, shoot apical meristem, stem and root vasculature, and nodule organogenesis to organismal responses underlying immunity, stress tolerance, and senescence.", "title": "" }, { "docid": "6b97884f9bc253e1291d816d38608093", "text": "The World Health Organization (WHO) is currently updating the tenth version of their diagnostic tool, the International Classification of Diseases (ICD, WHO, 1992). Changes have been proposed for the diagnosis of Transsexualism (ICD-10) with regard to terminology, placement and content. The aim of this study was to gather the opinions of transgender individuals (and their relatives/partners) and clinicians in the Netherlands, Flanders (Belgium) and the United Kingdom regarding the proposed changes and the clinical applicability and utility of the ICD-11 criteria of 'Gender Incongruence of Adolescence and Adulthood' (GIAA). A total of 628 participants were included in the study: 284 from the Netherlands (45.2%), 8 from Flanders (Belgium) (1.3%), and 336 (53.5%) from the UK. Most participants were transgender people (or their partners/relatives) (n = 522), 89 participants were healthcare providers (HCPs) and 17 were both healthcare providers and (partners/relatives of) transgender people. Participants completed an online survey developed for this study. Most participants were in favor of the proposed diagnostic term of 'Gender Incongruence' and thought that this was an improvement on the ICD-10 diagnostic term of 'Transsexualism'. 
Placement in a separate chapter dealing with Sexual- and Gender-related Health or as a Z-code was preferred by many and only a small number of participants stated that this diagnosis should be excluded from the ICD-11. In the UK, most transgender participants thought there should be a diagnosis related to being trans. However, if it were to be removed from the chapter on \"psychiatric disorders\", many transgender respondents indicated that they would prefer it to be removed from the ICD in its entirety. There were no large differences between the responses of the transgender participants (or their partners and relatives) and HCPs. HCPs were generally positive about the GIAA diagnosis; most thought the diagnosis was clearly defined and easy to use in their practice or work. The duration of gender incongruence (several months) was seen by many as too short and required a clearer definition. If the new diagnostic term of GIAA is retained, it should not be stigmatizing to individuals. Moving this diagnosis away from the mental and behavioral chapter was generally supported. Access to healthcare was one area where retaining a diagnosis seemed to be of benefit.", "title": "" }, { "docid": "f7ba998d8f4eb51619673edb66f7b3e3", "text": "We propose an extension of Convolutional Neural Networks (CNNs) to graph-structured data, including strided convolutions and data augmentation defined from inferred graph translations. Our method matches the accuracy of state-of-the-art CNNs when applied on images, without any prior about their 2D regular structure. On fMRI data, we obtain a significant gain in accuracy compared with existing graph-based alternatives.", "title": "" }, { "docid": "ac1d1bf198a178cb5655768392c3d224", "text": "-This paper discusses the two major query evaluation strategies used in large text retrieval systems and analyzes the performance of these strategies. We then discuss several optimization techniques that can be used to reduce evaluation costs and present simulation results to compare the performance of these optimization techniques when evaluating natural language queries with a collection of full text legal materials.", "title": "" }, { "docid": "b492c624d1593515d55b3d9b6ac127a7", "text": "We introduce a type of Deep Boltzmann Machine (DBM) that is suitable for extracting distributed semantic representations from a large unstructured collection of documents. We overcome the apparent difficulty of training a DBM with judicious parameter tying. This enables an efficient pretraining algorithm and a state initialization scheme for fast inference. The model can be trained just as efficiently as a standard Restricted Boltzmann Machine. Our experiments show that the model assigns better log probability to unseen data than the Replicated Softmax model. Features extracted from our model outperform LDA, Replicated Softmax, and DocNADE models on document retrieval and document classification tasks.", "title": "" }, { "docid": "40dc2dc28dca47137b973757cdf3bf34", "text": "In this paper we propose a new word-order based graph representation for text. In our graph representation vertices represent words or phrases and edges represent relations between contiguous words or phrases. The graph representation also includes dependency information. Our text representation is suitable for applications involving the identification of relevance or paraphrases across texts, where word-order information would be useful. 
We show that this word-order based graph representation performs better than a dependency tree representation while identifying the relevance of one piece of text to another.", "title": "" }, { "docid": "74e2fc764e93b5678a3d17cbca436c9f", "text": "B cells have a fundamental role in the pathogenesis of various autoimmune neurological disorders, not only as precursors of antibody-producing cells, but also as important regulators of the T-cell activation process through their participation in antigen presentation, cytokine production, and formation of ectopic germinal centers in the intermeningeal spaces. Two B-cell trophic factors—BAFF (B-cell-activating factor) and APRIL (a proliferation-inducing ligand)—and their receptors are strongly upregulated in many immunological disorders of the CNS and PNS, and these molecules contribute to clonal expansion of B cells in situ. The availability of monoclonal antibodies or fusion proteins against B-cell surface molecules and trophic factors provides a rational approach to the treatment of autoimmune neurological diseases. This article reviews the role of B cells in autoimmune neurological disorders and summarizes the experience to date with rituximab, a B-cell-depleting monoclonal antibody against CD20, for the treatment of relapsing–remitting multiple sclerosis, autoimmune neuropathies, neuromyelitis optica, paraneoplastic neurological disorders, myasthenia gravis, and inflammatory myopathies. It is expected that ongoing controlled trials will establish the efficacy and long-term safety profile of anti-B-cell agents in several autoimmune neurological disorders, as well as exploring the possibility of a safe and synergistic effect with other immunosuppressants or immunomodulators.", "title": "" }, { "docid": "55bdb8b6f4dd3dc836e9751ae8d721e3", "text": "Influence maximization in social networks has been widely studied motivated by applications like spread of ideas or innovations in a network and viral marketing of products. Current studies focus almost exclusively on unsigned social networks containing only positive relationships (e.g. friend or trust) between users. Influence maximization in signed social networks containing both positive relationships and negative relationships (e.g. foe or distrust) between users is still a challenging problem that has not been studied. Thus, in this paper, we propose the polarity-related influence maximization (PRIM) problem which aims to find the seed node set with maximum positive influence or maximum negative influence in signed social networks. To address the PRIM problem, we first extend the standard Independent Cascade (IC) model to the signed social networks and propose a Polarity-related Independent Cascade (named IC-P) diffusion model. We prove that the influence function of the PRIM problem under the IC-P model is monotonic and submodular Thus, a greedy algorithm can be used to achieve an approximation ratio of 1-1/e for solving the PRIM problem in signed social networks. Experimental results on two signed social network datasets, Epinions and Slashdot, validate that our approximation algorithm for solving the PRIM problem outperforms state-of-the-art methods.", "title": "" }, { "docid": "bf079d5c13d37a57e835856df572a306", "text": "Paraphrase Detection is the task of examining if two sentences convey the same meaning or not. Here, in this paper, we have chosen a sentence embedding by unsupervised RAE vectors for capturing syntactic as well as semantic information. 
The RAEs learn features from the nodes of the parse tree and chunk information along with unsupervised word embeddings. These learnt features are used for measuring phrase-wise similarity between two sentences. Since sentences are of varying length, we use dynamic pooling to obtain a fixed-sized representation for each sentence. This fixed-sized sentence representation is the input to the classifier. The DPIL (Detecting Paraphrases in Indian Languages) dataset is used for paraphrase identification here. Initially, paraphrase identification is defined as a 2-class problem, and it is later extended to a 3-class problem. Word2vec and Glove embedding techniques producing 100, 200 and 300 dimensional vectors are used to check the variation in accuracies. The baseline system accuracy obtained using word2vec is 77.67% for the 2-class problem and 66.07% for the 3-class problem. Glove gave an accuracy of 77.33% for the 2-class problem and 65.42% for the 3-class problem. The results are also compared with existing open-source word embeddings, and our system using Word2vec embeddings is found to perform better. This is a first attempt at using a chunking-based approach for the identification of Malayalam paraphrases.", "title": "" } ]
scidocsrr
a3b97a7c122b6065d637951d9ce67691
Cross-scenario clothing retrieval and fine-grained style recognition
[ { "docid": "c1f6052ecf802f1b4b2e9fd515d7ea15", "text": "In recent years there has been a growing interest in the study of sparse representation of signals. Using an overcomplete dictionary that contains prototype signal-atoms, signals are described by sparse linear combinations of these atoms. Applications that use sparse representation are many and include compression, regularization in inverse problems, feature extraction, and more. Recent activity in this field concentrated mainly on the study of pursuit algorithms that decompose signals with respect to a given dictionary. Designing dictionaries to better fit the above model can be done by either selecting one from a pre-specified set of linear transforms, or by adapting the dictionary to a set of training signals. Both these techniques have been considered, but this topic is largely still open. In this paper we propose a novel algorithm for adapting dictionaries in order to achieve sparse signal representations. Given a set of training signals, we seek the dictionary that leads to the best representation for each member in this set, under strict sparsity constraints. We present a new method – the K-SVD algorithm – generalizing the K-Means clustering process. K-SVD is an iterative method that alternates between sparse coding of the examples based on the current dictionary, and a process of updating the dictionary atoms to better fit the data. The update of the dictionary columns is combined with an update of the sparse representations, thereby accelerating convergence. The K-SVD algorithm is flexible and can work with any pursuit method (e.g., basis pursuit, FOCUSS, or matching pursuit). We analyze this algorithm and demonstrate its results on both synthetic tests and in applications on real image data.", "title": "" }, { "docid": "ed5fad1aee50a98f16a6e6d2ced7fe2e", "text": "We propose a novel semantic segmentation algorithm by learning a deep deconvolution network. We learn the network on top of the convolutional layers adopted from VGG 16-layer net. The deconvolution network is composed of deconvolution and unpooling layers, which identify pixelwise class labels and predict segmentation masks. We apply the trained network to each proposal in an input image, and construct the final semantic segmentation map by combining the results from all proposals in a simple manner. The proposed algorithm mitigates the limitations of the existing methods based on fully convolutional networks by integrating deep deconvolution network and proposal-wise prediction, our segmentation method typically identifies detailed structures and handles objects in multiple scales naturally. Our network demonstrates outstanding performance in PASCAL VOC 2012 dataset, and we achieve the best accuracy (72.5%) among the methods trained without using Microsoft COCO dataset through ensemble with the fully convolutional network.", "title": "" } ]
[ { "docid": "0acd46f97e516e5b6fc15a7716d4247b", "text": "Proposing that the algorithms of social life are acquired as a domain-based process, the author offers distinctions between social domains preparing the individual for proximity-maintenance within a protective relationship (attachment domain), use and recognition of social dominance (hierarchical power domain), identification and maintenance of the lines dividing \"us\" and \"them\" (coalitional group domain), negotiation of matched benefits with functional equals (reciprocity domain), and selection and protection of access to sexual partners (mating domain). Flexibility in the implementation of domains occurs at 3 different levels: versatility at a bioecological level, variations in the cognitive representation of individual experience, and cultural and individual variations in the explicit management of social life. Empirical evidence for domain specificity was strongest for the attachment domain; supportive evidence was also found for the distinctiveness of the 4 other domains. Implications are considered at theoretical and applied levels.", "title": "" }, { "docid": "eb2d29417686cc86a45c33694688801f", "text": "We present a method to incorporate global orientation information from the sun into a visual odometry pipeline using only the existing image stream, where the sun is typically not visible. We leverage recent advances in Bayesian Convolutional Neural Networks to train and implement a sun detection model that infers a three-dimensional sun direction vector from a single RGB image. Crucially, our method also computes a principled uncertainty associated with each prediction, using a Monte Carlo dropout scheme. We incorporate this uncertainty into a sliding window stereo visual odometry pipeline where accurate uncertainty estimates are critical for optimal data fusion. Our Bayesian sun detection model achieves a median error of approximately 12 degrees on the KITTI odometry benchmark training set, and yields improvements of up to 42% in translational ARMSE and 32% in rotational ARMSE compared to standard VO. An open source implementation of our Bayesian CNN sun estimator (Sun-BCNN) using Caffe is available at https://github.com/utiasSTARS/sun-bcnn-vo.", "title": "" }, { "docid": "f773798785419625b8f283fc052d4ab2", "text": "The increasing interest in energy storage for the grid can be attributed to multiple factors, including the capital costs of managing peak demands, the investments needed for grid reliability, and the integration of renewable energy sources. Although existing energy storage is dominated by pumped hydroelectric, there is the recognition that battery systems can offer a number of high-value opportunities, provided that lower costs can be obtained. The battery systems reviewed here include sodium-sulfur batteries that are commercially available for grid applications, redox-flow batteries that offer low cost, and lithium-ion batteries whose development for commercial electronics and electric vehicles is being applied to grid storage.", "title": "" }, { "docid": "1feaf48291b7ea83d173b70c23a3b7c0", "text": "Machine learning plays a critical role in extracting meaningful information out of the zetabytes of sensor data collected every day. 
For some applications, the goal is to analyze and understand the data to identify trends (e.g., surveillance, portable/wearable electronics); in other applications, the goal is to take immediate action based the data (e.g., robotics/drones, self-driving cars, smart Internet of Things). For many of these applications, local embedded processing near the sensor is preferred over the cloud due to privacy or latency concerns, or limitations in the communication bandwidth. However, at the sensor there are often stringent constraints on energy consumption and cost in addition to throughput and accuracy requirements. Furthermore, flexibility is often required such that the processing can be adapted for different applications or environments (e.g., update the weights and model in the classifier). In many applications, machine learning often involves transforming the input data into a higher dimensional space, which, along with programmable weights, increases data movement and consequently energy consumption. In this paper, we will discuss how these challenges can be addressed at various levels of hardware design ranging from architecture, hardware-friendly algorithms, mixed-signal circuits, and advanced technologies (including memories and sensors).", "title": "" }, { "docid": "3c891452e416c5faa3da8b6e32a57b3f", "text": "Linear support vector machines (svms) have become popular for solving classification tasks due to their fast and simple online application to large scale data sets. However, many problems are not linearly separable. For these problems kernel-based svms are often used, but unlike their linear variant they suffer from various drawbacks in terms of computational and memory efficiency. Their response can be represented only as a function of the set of support vectors, which has been experimentally shown to grow linearly with the size of the training set. In this paper we propose a novel locally linear svm classifier with smooth decision boundary and bounded curvature. We show how the functions defining the classifier can be approximated using local codings and show how this model can be optimized in an online fashion by performing stochastic gradient descent with the same convergence guarantees as standard gradient descent method for linear svm. Our method achieves comparable performance to the state-of-the-art whilst being significantly faster than competing kernel svms. We generalise this model to locally finite dimensional kernel svm.", "title": "" }, { "docid": "04e4c1b80bcf1a93cafefa73563ea4d3", "text": "The last decade has produced an explosion in neuroscience research examining young children's early processing of language. Noninvasive, safe functional brain measurements have now been proven feasible for use with children starting at birth. The phonetic level of language is especially accessible to experimental studies that document the innate state and the effect of learning on the brain. The neural signatures of learning at the phonetic level can be documented at a remarkably early point in development. Continuity in linguistic development from infants' earliest brain responses to phonetic stimuli is reflected in their language and prereading abilities in the second, third, and fifth year of life, a finding with theoretical and clinical impact. There is evidence that early mastery of the phonetic units of language requires learning in a social context. 
Neuroscience on early language learning is beginning to reveal the multiple brain systems that underlie the human language faculty.", "title": "" }, { "docid": "a693eeae7abe600c11da8d5dedabbcf9", "text": "Objectives: This study was designed to investigate psychometric properties of the Jefferson Scale of Patient Perceptions of Physician Empathy (JSPPPE), and to examine correlations between its scores and measures of overall satisfaction with physicians, personal trust, and indicators of patient compliance. Methods: Research participants included 535 out-patients (between 18-75 years old, 66% female). A survey was mailed to participants which included the JSPPPE (5-item), a scale for measuring overall satisfaction with the primary care physician (10-item), and demographic questions. Patients were also asked about compliance with their physician’s recommendation for preventive tests (colonoscopy, mammogram, and PSA for age and gender appropriate patients). Results: Factor analysis of the JSPPPE resulted in one prominent component. Corrected item-total score correlations ranged from .88 to .94. Correlation between scores of the JSPPPE and scores on the patient satisfaction scale was 0.93. Scores of the JSPPPE were highly correlated with measures of physician-patient trust (r >.73). Higher scores of the JSPPPE were significantly associated with physicians’ recommendations for preventive tests (colonoscopy, mammogram, and PSA) and with compliance rates which were > .80). Cronbach’s coefficient alpha for the JSPPPE ranged from .97 to .99 for the total sample and for patients in different gender and age groups. Conclusions: Empirical evidence supported the psychometrics of the JSPPPE, and confirmed significant links with patients’ satisfaction with their physicians, interpersonal trust, and compliance with physicians’ recommendations. Availability of this psychometrically sound instrument will facilitate empirical research on empathy in patient care in different countries.", "title": "" }, { "docid": "3d7fabdd5f56c683de20640abccafc44", "text": "The capacity to exercise control over the nature and quality of one's life is the essence of humanness. Human agency is characterized by a number of core features that operate through phenomenal and functional consciousness. These include the temporal extension of agency through intentionality and forethought, self-regulation by self-reactive influence, and self-reflectiveness about one's capabilities, quality of functioning, and the meaning and purpose of one's life pursuits. Personal agency operates within a broad network of sociostructural influences. In these agentic transactions, people are producers as well as products of social systems. Social cognitive theory distinguishes among three modes of agency: direct personal agency, proxy agency that relies on others to act on one's behest to secure desired outcomes, and collective agency exercised through socially coordinative and interdependent effort. Growing transnational embeddedness and interdependence are placing a premium on collective efficacy to exercise control over personal destinies and national life.", "title": "" }, { "docid": "720778ca4d6d8eb0fa78eecb1ebbb527", "text": "Address spoofing attacks like ARP spoofing and DDoS attacks are mostly launched in a networking environment to degrade the performance. These attacks sometimes break down the network services before the administrator comes to know about the attack condition. 
Software Defined Networking (SDN) has emerged as a novel network architecture in which the data plane is isolated from the control plane. The control plane is implemented at a central device called the controller. However, the SDN paradigm is not yet commonly used due to constraints such as budget, limited skills to operate SDN, and the flexibility of traditional protocols. To get SDN benefits in a traditional network, a limited number of SDN devices can be deployed among legacy devices. This technique is called hybrid SDN. In this paper, we propose a new approach to automatically detect the attack condition and mitigate that attack in hybrid SDN. We represent the network topology in the form of a graph. A graph-based traversal mechanism is adopted to indicate the location of the attacker. Simulation results show that our approach enhances the network efficiency and improves network security. Keywords—Communication system security; Network security; ARP spoofing", "title": "" }, { "docid": "27cc510f79a4ed76da42046b49bbb9fd", "text": "This article reports the orthodontic treatment of a 25-year-old female patient whose chief complaint was the inclination of the maxillary occlusal plane in front view. The individualized vertical placement of brackets is described. This placement made it possible to achieve a symmetrical occlusal plane in a rather straightforward manner without the need for further technical resources.", "title": "" }, { "docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.", "title": "" }, { "docid": "304b4cee4006e87fc4172a3e9de88ed1", "text": "Recently, graph neural networks (GNNs) have revolutionized the field of graph representation learning through effectively learned node embeddings, and achieved state-of-the-art results in tasks such as node classification and link prediction. However, current GNN methods are inherently flat and do not learn hierarchical representations of graphs—a limitation that is especially problematic for the task of graph classification, where the goal is to predict the label associated with an entire graph. Here we propose DIFFPOOL, a differentiable graph pooling module that can generate hierarchical representations of graphs and can be combined with various graph neural network architectures in an end-to-end fashion. DIFFPOOL learns a differentiable soft cluster assignment for nodes at each layer of a deep GNN, mapping nodes to a set of clusters, which then form the coarsened input for the next GNN layer. Our experimental results show that combining existing GNN methods with DIFFPOOL yields an average improvement of 5–10% accuracy on graph classification benchmarks, compared to all existing pooling approaches, achieving a new state-of-the-art on four out of five benchmark data sets.", "title": "" }, { "docid": "6a55a097f27609ad50e94f0947d0e72c", "text": "This study develops an antenatal care information system to assist women during pregnancy. We designed and implemented the system as both a web-based service and a multi-platform application for smartphones and tablets. 
The proposed system has three novel features: (1) web-based maternity records, which contain concise explanations of various antenatal screening and diagnostic tests; (2) self-care journals, which allow pregnant women to keep track of their gestational weight gains, blood pressure, fetal movements, and contractions; and (3) health education, which automatically presents detailed information on antenatal care and other pregnancy-related knowledge according to the women's gestational age. A survey was conducted among pregnant women to evaluate the usability and acceptance of the proposed system. Since clinical outcomes would be needed to prove that the antenatal care itself was effective, the reported results focus on the usability evaluation.", "title": "" }, { "docid": "6fab26c4c8fa05390aa03998a748f87d", "text": "Click prediction is one of the fundamental problems in sponsored search. Most existing studies took advantage of machine learning approaches to predict ad clicks for each ad view event independently. However, as observed in a real-world sponsored search system, a user’s behavior on ads depends strongly on how the user behaved in the past, especially in terms of what queries she submitted, what ads she clicked or ignored, and how long she spent on the landing pages of clicked ads, etc. Inspired by these observations, we introduce a novel framework based on Recurrent Neural Networks (RNN). Compared to traditional methods, this framework directly models the dependency on the user’s sequential behaviors in the click prediction process through the recurrent structure of the RNN. Large scale evaluations on the click-through logs from a commercial search engine demonstrate that our approach can significantly improve the click prediction accuracy, compared to sequence-independent approaches.", "title": "" }, { "docid": "55dfc0e1fae2ca1fed295bc9aa270157", "text": "The rapid development of driver fatigue detection technology is of great significance for traffic safety. The authors' main goals in this Letter are threefold: (i) A middleware architecture, defined as a process unit (PU), which can communicate with a personal electroencephalography (EEG) node (PEN) and a cloud server (CS). The PU receives EEG signals from the PEN, recognises the fatigue state of the driver, and transfers this information to the CS. The CS sends notification messages to the surrounding vehicles. (ii) An Android application for fatigue detection is built. The application can be used by the driver to detect his/her fatigue state based on EEG signals and to warn neighbouring vehicles. (iii) The driver fatigue detection algorithm is based on fuzzy entropy; 10-fold cross-validation and a support vector machine are used for classification. Experimental results show that the average accuracy of detecting driver fatigue is about 95%, implying that the algorithm is valid for detecting the driver's fatigue state.", "title": "" }, { "docid": "269e2f8bca42d5369f9337aea6191795", "text": "Today, exposure to new and unfamiliar environments is a necessary part of daily life. Effective communication of location-based information through location-based services has become a key concern for cartographers, geographers, human-computer interaction and professional designers alike. Recently, much attention has been directed towards Augmented Reality (AR) interfaces. 
Current research, however, focuses primarily on computer vision and tracking, or investigates the needs of urban residents, already familiar with their environment. Adopting a user-centred design approach, this paper reports findings from an empirical mobile study investigating how tourists acquire knowledge about an unfamiliar urban environment through AR browsers. Qualitative and quantitative data was used in the development of a framework that shifts the perspective towards a more thorough understanding of the overall design space for such interfaces. The authors analysis provides a frame of reference for the design and evaluation of mobile AR interfaces. The authors demonstrate the application of the framework with respect to optimization of current design of AR.", "title": "" }, { "docid": "fcd9a80d35a24c7222392c11d3376c72", "text": "A dual-band coplanar waveguide (CPW)-fed hybrid antenna consisting of a 5.4 GHz high-band CPW-fed inductive slot antenna and a 2.4 GHz low-band bifurcated F-shaped monopole antenna is proposed and investigated experimentally. This antenna possesses an appealing characteristic that the CPW-fed inductive slot antenna reinforces and thus improves the radiation efficiency of the bifurcated monopole antenna. Moreover, due to field orthogonality, one band resonant frequency and return loss bandwidth of the proposed hybrid antenna allows almost independent optimization without noticeably affecting those of the other band.", "title": "" }, { "docid": "37e936c375d34f356e195f844125ae84", "text": "LEARNING OBJECTIVES\nThe reader is presumed to have a basic understanding of facial anatomy and facial rejuvenation procedures. After reading this article, the reader should also be able to: 1. Identify the essential anatomy of the face as it relates to facelift surgery. 2. Describe the common types of facelift procedures, including their strengths and weaknesses. 3. Apply appropriate preoperative and postoperative management for facelift patients. 4. Describe common adjunctive procedures. Physicians may earn 1.0 AMA PRA Category 1 Credit by successfully completing the examination based on material covered in this article. This activity should take one hour to complete. The examination begins on page 464. As a measure of the success of the education we hope you will receive from this article, we encourage you to log on to the Aesthetic Society website and take the preexamination before reading this article. Once you have completed the article, you may then take the examination again for CME credit. The Aesthetic Society will be able to compare your answers and use these data for future reference as we attempt to continually improve the CME articles we offer. ASAPS members can complete this CME examination online by logging on to the ASAPS members-only website (http://www.surgery.org/members) and clicking on \"Clinical Education\" in the menu bar. Modern aesthetic surgery of the face began in the first part of the 20th century in the United States and Europe. Initial limited excisions gradually progressed to skin undermining and eventually to a variety of methods for contouring the subcutaneous facial tissue. This particular review focuses on the cheek and neck. While the lid-cheek junction, eyelids, and brow must also be considered to obtain a harmonious appearance, those elements are outside the scope of this article. 
Overall patient management, including patient selection, preoperative preparation, postoperative care, and potential complications are discussed.", "title": "" }, { "docid": "cc05dca89bf1e3f53cf7995e547ac238", "text": "Ensembles of randomized decision trees, known as Random Forests, have become a valuable machine learning tool for addressing many computer vision problems. Despite their popularity, few works have tried to exploit contextual and structural information in random forests in order to improve their performance. In this paper, we propose a simple and effective way to integrate contextual information in random forests, which is typically reflected in the structured output space of complex problems like semantic image labelling. Our paper has several contributions: We show how random forests can be augmented with structured label information and be used to deliver structured low-level predictions. The learning task is carried out by employing a novel split function evaluation criterion that exploits the joint distribution observed in the structured label space. This allows the forest to learn typical label transitions between object classes and avoid locally implausible label configurations. We provide two approaches for integrating the structured output predictions obtained at a local level from the forest into a concise, global, semantic labelling. We integrate our new ideas also in the Hough-forest framework with the view of exploiting contextual information at the classification level to improve the performance on the task of object detection. Finally, we provide experimental evidence for the effectiveness of our approach on different tasks: Semantic image labelling on the challenging MSRCv2 and CamVid databases, reconstruction of occluded handwritten Chinese characters on the Kaist database and pedestrian detection on the TU Darmstadt databases.", "title": "" }, { "docid": "77335856af8b62ae2e1fcd10654ed9a1", "text": "Instrumenting and collecting annotated visual grasping datasets to train modern machine learning algorithms can be extremely time-consuming and expensive. An appealing alternative is to use off-the-shelf simulators to render synthetic data for which ground-truth annotations are generated automatically. Unfortunately, models trained purely on simulated data often fail to generalize to the real world. We study how randomized simulated environments and domain adaptation methods can be extended to train a grasping system to grasp novel objects from raw monocular RGB images. We extensively evaluate our approaches with a total of more than 25,000 physical test grasps, studying a range of simulation conditions and domain adaptation methods, including a novel extension of pixel-level domain adaptation that we term the GraspGAN. We show that, by using synthetic data and domain adaptation, we are able to reduce the number of real-world samples needed to achieve a given level of performance by up to 50 times, using only randomly generated simulated objects. We also show that by using only unlabeled real-world data and our GraspGAN methodology, we obtain real-world grasping performance without any real-world labels that is similar to that achieved with 939,777 labeled real-world samples.", "title": "" } ]
scidocsrr
f4476a230ad3cff29a1ace5d2e4f3987
Clustering of streaming time series is meaningless
[ { "docid": "e10886264acb1698b36c4d04cf2d9df6", "text": "† This work was supported by the RGC CERG project PolyU 5065/98E and the Departmental Grant H-ZJ84 ‡ Corresponding author ABSTRACT Pattern discovery from time series is of fundamental importance. Particularly when the domain expert derived patterns do not exist or are not complete, an algorithm to discover specific patterns or shapes automatically from the time series data is necessary. Such an algorithm is noteworthy in that it does not assume prior knowledge of the number of interesting structures, nor does it require an exhaustive explanation of the patterns being described. In this paper, a clustering approach is proposed for pattern discovery from time series. In view of its popularity and superior clustering performance, the self-organizing map (SOM) was adopted for pattern discovery in temporal data sequences. It is a special type of clustering algorithm that imposes a topological structure on the data. To prepare for the SOM algorithm, data sequences are segmented from the numerical time series using a continuous sliding window. Similar temporal patterns are then grouped together using SOM into clusters, which may subsequently be used to represent different structures of the data or temporal patterns. Attempts have been made to tackle the problem of representing patterns in a multi-resolution manner. With the increase in the number of data points in the patterns (the length of patterns), the time needed for the discovery process increases exponentially. To address this problem, we propose to compress the input patterns by a perceptually important point (PIP) identification algorithm. The idea is to replace the original data segment by its PIP’s so that the dimensionality of the input pattern can be reduced. Encouraging results are observed and reported for the application of the proposed methods to the time series collected from the Hong Kong stock market.", "title": "" }, { "docid": "b336b95e53ba0d804060d2cee84f5fb4", "text": "Discovering unexpected and useful patterns in databases is a fundamental data mining task. In recent years, a trend in data mining has been to design algorithms for discovering patterns in sequential data. One of the most popular data mining tasks on sequences is sequential pattern mining. It consists of discovering interesting subsequences in a set of sequences, where the interestingness of a subsequence can be measured in terms of various criteria such as its occurrence frequency, length, and profit. Sequential pattern mining has many real-life applications since data is encoded as sequences in many fields such as bioinformatics, e-learning, market basket analysis, text analysis, and webpage click-stream analysis. This paper surveys recent studies on sequential pattern mining and its applications. The goal is to provide both an introduction to sequential pattern mining, and a survey of recent advances and research opportunities. The paper is divided into four main parts. First, the task of sequential pattern mining is defined and its applications are reviewed. Key concepts and terminology are introduced. Moreover, main approaches and strategies to solve sequential pattern mining problems are presented. Limitations of traditional sequential pattern mining approaches are also highlighted, and popular variations of the task of sequential pattern mining are presented. The paper also presents research opportunities and the relationship to other popular pattern mining problems. 
Lastly, the paper also discusses open-source implementations of sequential pattern mining algorithms.", "title": "" } ]
[ { "docid": "83fbffec2e727e6ed6be1e02f54e1e47", "text": "Large dc and ac electric currents are often measured by open-loop sensors without a magnetic yoke. A widely used configuration uses a differential magnetic sensor inserted into a hole in a flat busbar. The use of a differential sensor offers the advantage of partial suppression of fields coming from external currents. Hall sensors and AMR sensors are currently used in this application. In this paper, we present a current sensor of this type that uses novel integrated fluxgate sensors, which offer a greater range than magnetoresistors and better stability than Hall sensors. The frequency response of this type of current sensor is limited due to the eddy currents in the solid busbar. We present a novel amphitheater geometry of the hole in the busbar of the sensor, which reduces the frequency dependence from 15% error at 1 kHz to 9%.", "title": "" }, { "docid": "777cbf7e5c5bdf4457ce24520bbc8036", "text": "Recently, both industry and academia have proposed many different roadmaps for the future of DRAM. Consequently, there is a growing need for an extensible DRAM simulator, which can be easily modified to judge the merits of today's DRAM standards as well as those of tomorrow. In this paper, we present Ramulator, a fast and cycle-accurate DRAM simulator that is built from the ground up for extensibility. Unlike existing simulators, Ramulator is based on a generalized template for modeling a DRAM system, which is only later infused with the specific details of a DRAM standard. Thanks to such a decoupled and modular design, Ramulator is able to provide out-of-the-box support for a wide array of DRAM standards: DDR3/4, LPDDR3/4, GDDR5, WIO1/2, HBM, as well as some academic proposals (SALP, AL-DRAM, TL-DRAM, RowClone, and SARP). Importantly, Ramulator does not sacrifice simulation speed to gain extensibility: according to our evaluations, Ramulator is 2.5× faster than the next fastest simulator. Ramulator is released under the permissive BSD license.", "title": "" }, { "docid": "2176518448c89ba977d849f71c86e6a6", "text": "iii I certify that I have read this dissertation and that in my opinion it is fully adequate, in scope and quality, as a dissertation for the degree of Doctor of Philosophy. I certify that I have read this dissertation and that in my opinion it is fully adequate, in scope and quality, as a dissertation for the degree of Doctor of Philosophy. I certify that I have read this dissertation and that in my opinion it is fully adequate, in scope and quality, as a dissertation for the degree of Doctor of Philosophy. _______________________________________ L. Peter Deutsch I certify that I have read this dissertation and that in my opinion it is fully adequate, in scope and quality, as a dissertation for the degree of Doctor of Philosophy. Abstract Object-oriented programming languages confer many benefits, including abstraction, which lets the programmer hide the details of an object's implementation from the object's clients. Unfortunately, crossing abstraction boundaries often incurs a substantial run-time overhead in the form of frequent procedure calls. Thus, pervasive use of abstraction , while desirable from a design standpoint, may be impractical when it leads to inefficient programs. Aggressive compiler optimizations can reduce the overhead of abstraction. However, the long compilation times introduced by optimizing compilers delay the programming environment's responses to changes in the program. 
Furthermore, optimization also conflicts with source-level debugging. Thus, programmers are caught on the horns of two dilemmas: they have to choose between abstraction and efficiency, and between responsive programming environments and efficiency. This dissertation shows how to reconcile these seemingly contradictory goals by performing optimizations lazily. Four new techniques work together to achieve high performance and high responsiveness: • Type feedback achieves high performance by allowing the compiler to inline message sends based on information extracted from the runtime system. On average, programs run 1.5 times faster than the previous SELF system; compared to a commercial Smalltalk implementation, two medium-sized benchmarks run about three times faster. This level of performance is obtained with a compiler that is both simpler and faster than previous SELF compilers. • Adaptive optimization achieves high responsiveness without sacrificing performance by using a fast non-optimizing compiler to generate initial code while automatically recompiling heavily used parts of the program with an optimizing compiler. On a previous-generation workstation like the SPARCstation-2, fewer than 200 pauses exceeded 200 ms during a 50-minute interaction, and 21 pauses exceeded one second. …", "title": "" }, { "docid": "c1ca3f495400a898da846bdf20d23833", "text": "It is very useful to integrate human knowledge and experience into traditional neural networks for faster learning speed, fewer training samples and better interpretability. However, due to the obscured and indescribable black box model of neural networks, it is very difficult to design its architecture, interpret its features and predict its performance. Inspired by human visual cognition process, we propose a knowledge-guided semantic computing network which includes two modules: a knowledge-guided semantic tree and a data-driven neural network. The semantic tree is pre-defined to describe the spatial structural relations of different semantics, which just corresponds to the tree-like description of objects based on human knowledge. The object recognition process through the semantic tree only needs simple forward computing without training. Besides, to enhance the recognition ability of the semantic tree in aspects of the diversity, randomicity and variability, we use the traditional neural network to aid the semantic tree to learn some indescribable features. Only in this case, the training process is needed. The experimental results on MNIST and GTSRB datasets show that compared with the traditional data-driven network, our proposed semantic computing network can achieve better performance with fewer training samples and lower computational complexity. Especially, Our model also has better adversarial robustness than traditional neural network with the help of human knowledge.", "title": "" }, { "docid": "4c05d5add4bd2130787fd894ce74323a", "text": "Although semi-supervised model can extract the event mentions matching frequent event patterns, it suffers much from those event mentions, which match infrequent patterns or have no matching pattern. To solve this issue, this paper introduces various kinds of linguistic knowledge-driven event inference mechanisms to semi-supervised Chinese event extraction. These event inference mechanisms can capture linguistic knowledge from four aspects, i.e. 
semantics of argument role, compositional semantics of trigger, consistency on coreference events and relevant events, to further recover missing event mentions from unlabeled texts. Evaluation on the ACE 2005 Chinese corpus shows that our event inference mechanisms significantly outperform the refined state-of-the-art semi-supervised Chinese event extraction system in F1-score by 8.5%.", "title": "" }, { "docid": "225a492370efee6eca39f713026efe12", "text": "Researchers in the social and behavioral sciences routinely rely on quasi-experimental designs to discover knowledge from large data-bases. Quasi-experimental designs (QEDs) exploit fortuitous circumstances in non-experimental data to identify situations (sometimes called \"natural experiments\") that provide the equivalent of experimental control and randomization. QEDs allow researchers in domains as diverse as sociology, medicine, and marketing to draw reliable inferences about causal dependencies from non-experimental data. Unfortunately, identifying and exploiting QEDs has remained a painstaking manual activity, requiring researchers to scour available databases and apply substantial knowledge of statistics. However, recent advances in the expressiveness of databases, and increases in their size and complexity, provide the necessary conditions to automatically identify QEDs. In this paper, we describe the first system to discover knowledge by applying quasi-experimental designs that were identified automatically. We demonstrate that QEDs can be identified in a traditional database schema and that such identification requires only a small number of extensions to that schema, knowledge about quasi-experimental design encoded in first-order logic, and a theorem-proving engine. We describe several key innovations necessary to enable this system, including methods for automatically constructing appropriate experimental units and for creating aggregate variables on those units. We show that applying the resulting designs can identify important causal dependencies in real domains, and we provide examples from academic publishing, movie making and marketing, and peer-production systems. Finally, we discuss the integration of QEDs with other approaches to causal discovery, including joint modeling and directed experimentation.", "title": "" }, { "docid": "38a0f56e760b0e7a2979c90a8fbcca68", "text": "The Rubik’s Cube is perhaps the world’s most famous and iconic puzzle, well-known to have a rich underlying mathematical structure (group theory). In this paper, we show that the Rubik’s Cube also has a rich underlying algorithmic structure. Specifically, we show that the n×n×n Rubik’s Cube, as well as the n×n×1 variant, has a “God’s Number” (diameter of the configuration space) of Θ(n/ logn). The upper bound comes from effectively parallelizing standard Θ(n) solution algorithms, while the lower bound follows from a counting argument. The upper bound gives an asymptotically optimal algorithm for solving a general Rubik’s Cube in the worst case. Given a specific starting state, we show how to find the shortest solution in an n×O(1)×O(1) Rubik’s Cube. Finally, we show that finding this optimal solution becomes NPhard in an n×n×1 Rubik’s Cube when the positions and colors of some cubies are ignored (not used in determining whether the cube is solved).", "title": "" }, { "docid": "e181f73c36c1d8c9463ef34da29d9e03", "text": "This paper examines prospects and limitations of citation studies in the humanities. 
We begin by presenting an overview of bibliometric analysis, noting several barriers to applying this method in the humanities. Following that, we present an experimental tool for extracting and classifying citation contexts in humanities journal articles. This tool reports the bibliographic information about each reference, as well as three features about its context(s): frequency, locationin-document, and polarity. We found that extraction was highly successful (above 85%) for three of the four journals, and statistics for the three citation figures were broadly consistent with previous research. We conclude by noting several limitations of the sentiment classifier and suggesting future areas for refinement. .................................................................................................................................................................................", "title": "" }, { "docid": "3a466fd05c021b8bd48600246086aaa2", "text": "Recent empirical work has examined the extent to which international trade fosters international “spillovers” of technological information. FDI is an alternate, potentially equally important channel for the mediation of such knowledge spillovers. I introduce a framework for measuring international knowledge spillovers at the firm level, and I use this framework to directly test the hypothesis that FDI is a channel of knowledge spillovers for Japanese multinationals undertaking direct investments in the United States. Using an original firm-level panel data set on Japanese firms’ FDI and innovative activity, I find evidence that FDI increases the flow of knowledge spillovers both from and to the investing Japanese firms. ∗ This paper is a revision of Branstetter (2000a). I would like to thank Natasha Hsieh, Masami Imai,Yoko Kusaka, Grace Lin, Kentaro Minato, Kaoru Nabeshima, and Yoshiaki Ogura for excellent research assistance. I also thank Paul Almeida, Jonathan Eaton, Bronwyn Hall, Takatoshi Ito, Adam Jaffe, Wolfgang Keller, Yoshiaki Nakamura, James Rauch, Mariko Sakakibara, Ryuhei Wakasugi, two anonymous referees, and seminar participants at UC-Davis, UC-Berkeley, Boston University, UC-Boulder, Brandeis University, Columbia University, Cornell University, Northwestern University, UC-San Diego, the World Bank, the University of Michigan, the Research Institute of Economy, Trade, and Industry, and the NBER for valuable comments. Funding was provided by a University of California Faculty Research Grant, a grant from the Japan Foundation Center for Global Partnership, and the NBER Project on Industrial Technology and Productivity. Note that parts of this paper borrow from Branstetter (2000b) and from Branstetter and Nakamura (2003). I am solely responsible for any errors. ** Lee Branstetter, Columbia Business School, Uris Hall 815, 3022 Broadway, New York, NY 10027; TEL 212-854-2722; FAX 212-854-9895; E-mail lgb2001@columbia.edu", "title": "" }, { "docid": "a58930da8179d71616b8b6ef01ed1569", "text": "Collecting sensor data results in large temporal data sets which need to be visualized, analyzed, and presented. One-dimensional time-series charts are used, but these present problems when screen resolution is small in comparison to the data. This can result in severe over-plotting, giving rise for the requirement to provide effective rendering and methods to allow interaction with the detailed data. Common solutions can be categorized as multi-scale representations, frequency based, and lens based interaction techniques. 
In this paper, we comparatively evaluate existing methods, such as Stack Zoom [15] and ChronoLenses [38], giving a graphical overview of each and classifying their ability to explore and interact with data. We propose new visualizations and other extensions to the existing approaches. We undertake and report an empirical study and a field study using these techniques.", "title": "" }, { "docid": "560a19017dcc240d48bb879c3165b3e1", "text": "Battery management systems in hybrid electric vehicle battery packs must estimate values descriptive of the pack’s present operating condition. These include: battery state of charge, power fade, capacity fade, and instantaneous available power. The estimation mechanism must adapt to changing cell characteristics as cells age and therefore provide accurate estimates over the lifetime of the pack. In a series of three papers, we propose a method, based on extended Kalman filtering (EKF), that is able to accomplish these goals on a lithium ion polymer battery pack. We expect that it will also work well on other battery chemistries. These papers cover the required mathematical background, cell modeling and system identification requirements, and the final solution, together with results. In order to use EKF to estimate the desired quantities, we first require a mathematical model that can accurately capture the dynamics of a cell. In this paper we “evolve” a suitable model from one that is very primitive to one that is more advanced and works well in practice. The final model includes terms that describe the dynamic contributions due to open-circuit voltage, ohmic loss, polarization time constants, electro-chemical hysteresis, and the effects of temperature. We also give a means, based on EKF, whereby the constant model parameters may be determined from cell test data. Results are presented that demonstrate it is possible to achieve root-mean-squared modeling error smaller than the level of quantization error expected in an implementation. © 2004 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "80a61f27dab6a8f71a5c27437254778b", "text": "5G will have to cope with a high degree of heterogeneity in terms of services and requirements. Among these latter, the flexible and efficient use of non-contiguous unused spectrum for different network deployment scenarios is considered a key challenge for 5G systems. To maximize spectrum efficiency, the 5G air interface technology will also need to be flexible and capable of mapping various services to the best suitable combinations of frequency and radio resources. In this work, we propose a comparison of several 5G waveform candidates (OFDM, UFMC, FBMC and GFDM) under a common framework. We assess spectral efficiency, power spectral density, peak-to-average power ratio and robustness to asynchronous multi-user uplink transmission. Moreover, we evaluate and compare the complexity of the different waveforms. In addition to the complexity analysis, in this work, we also demonstrate the suitability of FBMC for specific 5G use cases via two experimental implementations. The benefits of these new waveforms for the foreseen 5G use cases are clearly highlighted on representative criteria and experiments.", "title": "" }, { "docid": "be689d89e1e5182895a473a52a1950cd", "text": "This paper designs a Continuous Data Level Auditing system utilizing business process based analytical procedures and evaluates the system’s performance using disaggregated transaction records of a large healthcare management firm. 
An important innovation in the proposed architecture of the CDA system is the utilization of analytical monitoring as the second (rather than the first) stage of data analysis. The first component of the system utilizes automatic transaction verification to filter out exceptions, defined as transactions violating formal business process rules. The second component of the system utilizes business process based analytical procedures, denoted here ―Continuity Equations‖, as the expectation models for creating business process audit benchmarks. Our first objective is to examine several expectation models that can serve as the continuity equation benchmarks: a Linear Regression Model, a Simultaneous Equation Model, two Vector Autoregressive models, and a GARCH model. The second objective is to examine the impact of the choice of the level of data aggregation on anomaly detection performance. The third objective is to design a set of online learning and error correction protocols for automatic model inference and updating. Using a seeded error simulation approach, we demonstrate that the use of disaggregated business process data allows the detection of anomalies that slip through the analytical procedures applied to more aggregated data. Furthermore, the results indicate that under most circumstances the use of real time error correction results in superior performance, thus showing the benefit of continuous auditing.", "title": "" }, { "docid": "d558f980b85bf970a7b57c00df361591", "text": "URL shortener services today have come to play an important role in our social media landscape. They direct user attention and disseminate information in online social media such as Twitter or Facebook. Shortener services typically provide short URLs in exchange for long URLs. These short URLs can then be shared and diffused by users via online social media, e-mail or other forms of electronic communication. When another user clicks on the shortened URL, she will be redirected to the underlying long URL. Shortened URLs can serve many legitimate purposes, such as click tracking, but can also serve illicit behavior such as fraud, deceit and spam. Although usage of URL shortener services today is ubiquituous, our research community knows little about how exactly these services are used and what purposes they serve. In this paper, we study usage logs of a URL shortener service that has been operated by our group for more than a year. We expose the extent of spamming taking place in our logs, and provide first insights into the planetary-scale of this problem. Our results are relevant for researchers and engineers interested in understanding the emerging phenomenon and dangers of spamming via URL shortener services.", "title": "" }, { "docid": "d18d4780cc259da28da90485bd3f0974", "text": "L'ostéogenèse imparfaite (OI) est un groupe hétérogène de maladies affectant le collagène de type I et caractérisées par une fragilité osseuse. Les formes létales sont rares et se caractérisent par une micromélie avec déformation des membres. Un diagnostic anténatal d'OI létale a été fait dans deux cas, par échographie à 17 et à 25 semaines d'aménorrhée, complélées par un scanner du squelette fœtal dans un cas. Une interruption thérapeutique de grossesse a été indiquée dans les deux cas. Pan African Medical Journal. 2016; 25:88 doi:10.11604/pamj.2016.25.88.5871 This article is available online at: http://www.panafrican-med-journal.com/content/article/25/88/full/ © Houda EL Mhabrech et al. 
The Pan African Medical Journal ISSN 1937-8688. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Pan African Medical Journal – ISSN: 19378688 (www.panafrican-med-journal.com) Published in partnership with the African Field Epidemiology Network (AFENET). (www.afenet.net) Case report Open Access", "title": "" }, { "docid": "3bb48e5bf7cc87d635ab4958553ef153", "text": "This paper presents an in-depth study of young Swedish consumers and their impulsive online buying behaviour for clothing. The aim of the study is to develop the understanding of what factors affect impulse buying of clothing online and what feelings emerge when buying online. The study carried out was exploratory in nature, aiming to develop an understanding of impulse buying behaviour online before, under and after the actual purchase. The empirical data was collected through personal interviews. In the study, a pattern of the consumers recurrent feelings are identified through the impulse buying process; escapism, pleasure, reward, scarcity, security and anticipation. The escapism is particularly occurring since the study revealed that the consumers often carried out impulse purchases when they initially were bored, as opposed to previous studies. 1 University of Borås, Swedish Institute for Innovative Retailing, School of Business and IT, Allégatan 1, S-501 90 Borås, Sweden. Phone: +46732305934 Mail: malin.sundstrom@hb.se", "title": "" }, { "docid": "1313abe909877b95557c51bb3b378cdb", "text": "To evaluate the effect of early systematic soccer training on postural control we measured center-of-pressure (COP) variability, range, mean velocity and frequency in bipedal quiet stance with eyes open (EO) and closed (EC) in 44 boys aged 13 (25 boys who practiced soccer for 5–6 years and 19 healthy boys who did not practice sports). The soccer players had better stability, particularly in the medial–lateral plane (M/L); their COP variability and range were lower than in controls in both EO (p < 0.05) and EC (p < 0.0005) condition indicating that the athletes were less dependent on vision than non-athletes. Improved stability of athletes was accompanied by a decrease in COP frequency (p < 0.001 in EO, and p < 0.04 in EC) which accounted for lower regulatory activity of balance system in soccer players. The athletes had lower COP mean velocity than controls (p < 0.0001 in both visual condition), with larger difference in the M/L than A/P plane (p < 0.00001 and p < 0.05, respectively). Postural behavior was more variable within the non-athletes than soccer players, mainly in the EC stances (p < 0.005 for all COP parameters). We conclude that: (1) soccer training described was efficient in improving the M/L postural control in young boys; (2) athletes developed specific postural strategies characterized by decreased COP frequency and lower reliance on vision.", "title": "" }, { "docid": "7cebca46f584b2f31fd9d2c8ef004f17", "text": "Wirelessly networked systems of intra-body sensors and actuators could enable revolutionary applications at the intersection between biomedical science, networking, and control with a strong potential to advance medical treatment of major diseases of our times. 
Yet, most research to date has focused on communications along the body surface among devices interconnected through traditional electromagnetic radio-frequency (RF) carrier waves; while the underlying root challenge of enabling networked intra-body miniaturized sensors and actuators that communicate through body tissues is substantially unaddressed. The main obstacle to enabling this vision of networked implantable devices is posed by the physical nature of propagation in the human body. The human body is composed primarily (65 percent) of water, a medium through which RF electromagnetic waves do not easily propagate, even at relatively low frequencies. Therefore, in this article we take a different perspective and propose to investigate and study the use of ultrasonic waves to wirelessly internetwork intra-body devices. We discuss the fundamentals of ultrasonic propagation in tissues, and explore important tradeoffs, including the choice of a transmission frequency, transmission power, and transducer size. Then, we discuss future research challenges for ultrasonic networking of intra-body devices at the physical, medium access and network layers of the protocol stack.", "title": "" }, { "docid": "9a47ac8b2a5de779909f15bde96c283c", "text": "We study lender behavior in the peer-to-peer (P2P) lending market, where individuals bid on unsecured microloans requested by other individual borrowers. Online P2P exchanges are growing, but lenders in this market are not professional investors. In addition, lenders have to take big risks because loans in P2P lending are granted without collateral. While the P2P lending market shares some characteristics of online markets with respect to herding behavior, it also has characteristics that may discourage it. This study empirically investigates herding behavior in the P2P lending market where seemingly conflicting conditions and features of herding are present. Using a large sample of daily data from one of the largest P2P lending platforms in Korea, we find strong evidence of herding and its diminishing marginal effect as bidding advances. We employ a multinomial logit market-share model in which relevant variables from prior studies on P2P lending are assessed. 2012 Elsevier B.V. All rights reserved.", "title": "" } ]
scidocsrr
c8f59650002f716fa244065bdee10466
A Sarcasm Extraction Method Based on Patterns of Evaluation Expressions
[ { "docid": "65b34f78e3b8d54ad75d32cdef487dac", "text": "Recognizing polarity requires a list of polar words and phrases. For the purpose of building such lexicon automatically, a lot of studies have investigated (semi-) unsupervised method of learning polarity of words and phrases. In this paper, we explore to use structural clues that can extract polar sentences from Japanese HTML documents, and build lexicon from the extracted polar sentences. The key idea is to develop the structural clues so that it achieves extremely high precision at the cost of recall. In order to compensate for the low recall, we used massive collection of HTML documents. Thus, we could prepare enough polar sentence corpus.", "title": "" }, { "docid": "b485b27da4b17469a5c519538f4dcf1b", "text": "The research described in this work focuses on identifying key components for the task of irony detection. By means of analyzing a set of customer reviews, which are considered as ironic both in social and mass media, we try to find hints about how to deal with this task from a computational point of view. Our objective is to gather a set of discriminating elements to represent irony. In particular, the kind of irony expressed in such reviews. To this end, we built a freely available data set with ironic reviews collected from Amazon. Such reviews were posted on the basis of an online viral effect; i.e. contents whose effect triggers a chain reaction on people. The findings were assessed employing three classifiers. The results show interesting hints regarding the patterns and, especially, regarding the implications for sentiment analysis.", "title": "" } ]
[ { "docid": "fac476744429cacfe1c07ec19ee295eb", "text": "One effort to protect the network from the threats of hackers, crackers and security experts is to build the Intrusion Detection System (IDS) on the network. The problem arises when new attacks emerge in a relatively fast, so a network administrator must create their own signature and keep updated on new types of attacks that appear. In this paper, it will be made an Intelligence Intrusion Detection System (IIDS) where the Hierarchical Clustering algorithm as an artificial intelligence is used as pattern recognition and implemented on the Snort IDS. Hierarchical clustering applied to the training data to determine the number of desired clusters. Labeling cluster is then performed; there are three labels of cluster, namely Normal, High Risk and Critical. Centroid Linkage Method used for the test data of new attacks. Output system is used to update the Snort rule database. This research is expected to help the Network Administrator to monitor and learn some new types of attacks. From the result, this system is already quite good to recognize certain types of attacks like exploit, buffer overflow, DoS and IP Spoofing. Accuracy performance of this system for the mentioned above type of attacks above is 90%.", "title": "" }, { "docid": "b5215ddc7768f75fe72cdaaad9e3cdb8", "text": "Visual saliency analysis detects salient regions/objects that attract human attention in natural scenes. It has attracted intensive research in different fields such as computer vision, computer graphics, and multimedia. While many such computational models exist, the focused study of what and how applications can be beneficial is still lacking. In this article, our ultimate goal is thus to provide a comprehensive review of the applications using saliency cues, the so-called attentive systems. We would like to provide a broad vision about saliency applications and what visual saliency can do. We categorize the vast amount of applications into different areas such as computer vision, computer graphics, and multimedia. Intensively covering 200+ publications we survey (1) key application trends, (2) the role of visual saliency, and (3) the usability of saliency into different tasks.", "title": "" }, { "docid": "2833dbe3c3e576a3ba8f175a755b6964", "text": "The accuracy and granularity of network flow measurement play a critical role in many network management tasks, especially for anomaly detection. Despite its important, traffic monitoring often introduces overhead to the network, thus, operators have to employ sampling and aggregation to avoid overloading the infrastructure. However, such sampled and aggregated information may affect the accuracy of traffic anomaly detection. In this work, we propose a novel method that performs adaptive zooming in the aggregation of flows to be measured. In order to better balance the monitoring overhead and the anomaly detection accuracy, we propose a prediction based algorithm that dynamically change the granularity of measurement along both the spatial and the temporal dimensions. To control the load on each individual switch, we carefully delegate monitoring rules in the network wide. 
Using real-world data and three simple anomaly detectors, we show that the adaptive based counting can detect anomalies more accurately with less overhead.", "title": "" }, { "docid": "2a76205b80c90ff9a4ca3ccb0434bb03", "text": "Finding out which e-shops offer a specific product is a central challenge for building integrated product catalogs and comparison shopping portals. Determining whether two offers refer to the same product involves extracting a set of features (product attributes) from the web pages containing the offers and comparing these features using a matching function. The existing gold standards for product matching have two shortcomings: (i) they only contain offers from a small number of e-shops and thus do not properly cover the heterogeneity that is found on the Web. (ii) they only provide a small number of generic product attributes and therefore cannot be used to evaluate whether detailed product attributes have been correctly extracted from textual product descriptions. To overcome these shortcomings, we have created two public gold standards: The WDC Product Feature Extraction Gold Standard consists of over 500 product web pages originating from 32 different websites on which we have annotated all product attributes (338 distinct attributes) which appear in product titles, product descriptions, as well as tables and lists. The WDC Product Matching Gold Standard consists of over 75 000 correspondences between 150 products (mobile phones, TVs, and headphones) in a central catalog and offers for these products on the 32 web sites. To verify that the gold standards are challenging enough, we ran several baseline feature extraction and matching methods, resulting in F-score values in the range 0.39 to 0.67. In addition to the gold standards, we also provide a corpus consisting of 13 million product pages from the same websites which might be useful as background knowledge for training feature extraction and matching methods.", "title": "" }, { "docid": "14724ca410a07d97857bf730624644a5", "text": "We introduce a highly scalable approach for open-domain question answering with no dependence on any data set for surface form to logical form mapping or any linguistic analytic tool such as POS tagger or named entity recognizer. We define our approach under the Constrained Conditional Models framework which lets us scale up to a full knowledge graph with no limitation on the size. On a standard benchmark, we obtained near 4 percent improvement over the state-of-the-art in open-domain question answering task.", "title": "" }, { "docid": "86f93e5facbcf5ac96ba68a8d91dda63", "text": "Lawvere theories and monads have been the two main category theoretic formulations of universal algebra, Lawvere theories arising in 1963 and the connection with monads being established a few years later. Monads, although mathematically the less direct and less malleable formulation, rapidly gained precedence. A generation later, the definition of monad began to appear extensively in theoretical computer science in order to model computational effects, without reference to universal algebra. But since then, the relevance of universal algebra to computational effects has been recognised, leading to renewed prominence of the notion of Lawvere theory, now in a computational setting. 
This development has formed a major part of Gordon Plotkin’s mature work, and we study its history here, in particular asking why Lawvere theories were eclipsed by monads in the 1960’s, and how the renewed interest in them in a computer science setting might develop in future.", "title": "" }, { "docid": "6224f4f3541e9cd340498e92a380ad3f", "text": "A personal story: From philosophy to software.", "title": "" }, { "docid": "5931169b6433d77496dfc638988399eb", "text": "Image annotation has been an important task for visual information retrieval. It usually involves a multi-class multi-label classification problem. To solve this problem, many researches have been conducted during last two decades, although most of the proposed methods rely on the training data with the ground truth. To prepare such a ground truth is an expensive and laborious task that cannot be easily scaled, and “semantic gaps” between low-level visual features and high-level semantics still remain. In this paper, we propose a novel approach, ontology based supervised learning for multi-label image annotation, where classifiers' training is conducted using easily gathered Web data. Moreover, it takes advantage of both low-level visual features and high-level semantic information of given images. Experimental results using 0.507 million Web images database show effectiveness of the proposed framework over existing method.", "title": "" }, { "docid": "58eebe0e55f038fea268b6a7a6960939", "text": "The classic answer to what makes a decision good concerns outcomes. A good decision has high outcome benefits (it is worthwhile) and low outcome costs (it is worth it). I propose that, independent of outcomes or value from worth, people experience a regulatory fit when they use goal pursuit means that fit their regulatory orientation, and this regulatory fit increases the value of what they are doing. The following postulates of this value from fit proposal are examined: (a) People will be more inclined toward goal means that have higher regulatory fit, (b) people's motivation during goal pursuit will be stronger when regulatory fit is higher, (c) people's (prospective) feelings about a choice they might make will be more positive for a desirable choice and more negative for an undesirable choice when regulatory fit is higher, (d) people's (retrospective) evaluations of past decisions or goal pursuits will be more positive when regulatory fit was higher, and (e) people will assign higher value to an object that was chosen with higher regulatory fit. Studies testing each of these postulates support the value-from-fit proposal. How value from fit can enhance or diminish the value of goal pursuits and the quality of life itself is discussed.", "title": "" }, { "docid": "025932fa63b24d65f3b61e07864342b7", "text": "The realization of the Internet of Things (IoT) paradigm relies on the implementation of systems of cooperative intelligent objects with key interoperability capabilities. One of these interoperability features concerns the cooperation among nodes towards a collaborative deployment of applications taking into account the available resources, such as electrical energy, memory, processing, and object capability to perform a given task, which are", "title": "" }, { "docid": "c075c26fcfad81865c58a284013c0d33", "text": "A novel pulse compression technique is developed that improves the axial resolution of an ultrasonic imaging system and provides a boost in the echo signal-to-noise ratio (eSNR). 
The new technique, called the resolution enhancement compression (REC) technique, was validated with simulations and experimental measurements. Image quality was examined in terms of three metrics: the cSNR, the bandwidth, and the axial resolution through the modulation transfer function (MTF). Simulations were conducted with a weakly-focused, single-element ultrasound source with a center frequency of 2.25 MHz. Experimental measurements were carried out with a single-element transducer (f/3) with a center frequency of 2.25 MHz from a planar reflector and wire targets. In simulations, axial resolution of the ultrasonic imaging system was almost doubled using the REC technique (0.29 mm) versus conventional pulsing techniques (0.60 mm). The -3 dB pulse/echo bandwidth was more than doubled from 48% to 97%, and maximum range sidelobes were -40 dB. Experimental measurements revealed an improvement in axial resolution using the REC technique (0.31 mm) versus conventional pulsing (0.44 mm). The -3 dB pulse/echo bandwidth was doubled from 56% to 113%, and maximum range sidelobes were observed at -45 dB. In addition, a significant gain in eSNR (9 to 16.2 dB) was achieved", "title": "" }, { "docid": "405bae0d413aa4b5fef0ac8b8c639235", "text": "Leukocyte adhesion deficiency (LAD) type III is a rare syndrome characterized by severe recurrent infections, leukocytosis, and increased bleeding tendency. All integrins are normally expressed yet a defect in their activation leads to the observed clinical manifestations. Less than 20 patients have been reported world wide and the primary genetic defect was identified in some of them. Here we describe the clinical features of patients in whom a mutation in the calcium and diacylglycerol-regulated guanine nucleotide exchange factor 1 (CalDAG GEF1) was found and compare them to other cases of LAD III and to animal models harboring a mutation in the CalDAG GEF1 gene. The hallmarks of the syndrome are recurrent infections accompanied by severe bleeding episodes distinguished by osteopetrosis like bone abnormalities and neurodevelopmental defects.", "title": "" }, { "docid": "27b5e0594305a81c6fad15567ba1f3b9", "text": "A novel approach to the design of series-fed antenna arrays has been presented, in which a modified three-way slot power divider is applied. In the proposed coupler, the power division is adjusted by changing the slot inclination with respect to the transmission line, whereas coupled transmission lines are perpendicular. The proposed modification reduces electrical length of the feeding line to <formula formulatype=\"inline\"><tex Notation=\"TeX\">$1 \\lambda$</tex></formula>, hence results in dissipation losses' reduction. The theoretical analysis and measurement results of the 2<formula formulatype=\"inline\"> <tex Notation=\"TeX\">$\\, \\times \\,$</tex></formula>8 microstrip antenna array operating within 10.5-GHz frequency range are shown in the letter, proving the novel inclined-slot power divider's capability to provide appropriate power distribution and its potential application in the large antenna arrays.", "title": "" }, { "docid": "491ad4b4ab179db2efd54f3149d08db5", "text": "In robotics, Air Muscle is used as the analogy of the biological motor for locomotion or manipulation. It has advantages like the passive Damping, good power-weight ratio and usage in rough environments. An experimental test set up is designed to test both contraction and volume trapped in Air Muscle. 
This paper gives the characteristics of Air Muscle in terms of contraction of Air Muscle with variation of pressure at different loads and also in terms of volume of air trapped in it with variation in pressure at different loads. Braid structure of the Muscle has been described and its theoretical and experimental aspects of the characteristics of an Air Muscle are analysed.", "title": "" }, { "docid": "9d9086fbdfa46ded883b14152df7f5a5", "text": "This paper presents a low power continuous time 2nd order Low Pass Butterworth filter operating at power supply of 0.5V suitably designed for biomedical applications. A 3-dB bandwidth of 100 Hz using technology node of 0.18μm is achieved. The operational transconductance amplifier is a significant building block in continuous time filter design. To achieve necessary voltage headroom a pseudo-differential architecture is used to design bulk driven transconductor. In contrast, to the gate-driven OTA bulk-driven have the ability to operate over a wide input range. The output common mode voltage of the transconductor is set by a Common Mode Feedback (CMFB) circuit. The simulation results show that the filter has a peak-to-peak signal swing of 150mV (differential) for 1% THD, a dynamic range of 74.62 dB and consumes a total power of 0.225μW when operating at a supply voltage of 0.5V. The Figure of Merit (FOM) achieved by the filter is 0.055 fJ, lowest among similar low-voltage filters found in the literature.", "title": "" }, { "docid": "4f222d326bdbf006c3d8e54d2d97ba3f", "text": "Designing autonomous vehicles for urban environments remains an unresolved problem. One major dilemma faced by autonomous cars is understanding the intention of other road users and communicating with them. To investigate one aspect of this, specifically pedestrian crossing behavior, we have collected a large dataset of pedestrian samples at crosswalks under various conditions (e.g., weather) and in different types of roads. Using the data, we analyzed pedestrian behavior from two different perspectives: the way they communicate with drivers prior to crossing and the factors that influence their behavior. Our study shows that changes in head orientation in the form of looking or glancing at the traffic is a strong indicator of crossing intention. We also found that context in the form of the properties of a crosswalk (e.g., its width), traffic dynamics (e.g., speed of the vehicles) as well as pedestrian demographics can alter pedestrian behavior after the initial intention of crossing has been displayed. Our findings suggest that the contextual elements can be interrelated, meaning that the presence of one factor may increase/decrease the influence of other factors. Overall, our work formulates the problem of pedestrian-driver interaction and sheds light on its complexity in typical traffic scenarios.", "title": "" }, { "docid": "51e6db842735ae89419612bf831fce54", "text": "In this work, we focus on automatically recognizing social conversational strategies that in human conversation contribute to building, maintaining or sometimes destroying a budding relationship. These conversational strategies include self-disclosure, reference to shared experience, praise and violation of social norms. By including rich contextual features drawn from verbal, visual and vocal modalities of the speaker and interlocutor in the current and previous turn, we can successfully recognize these dialog phenomena with an accuracy of over 80% and kappa ranging from 60-80%. 
Our findings have been successfully integrated into an end-to-end socially aware dialog system, with implications for virtual agents that can use rapport between user and system to improve task-oriented assistance.", "title": "" }, { "docid": "71c7c98b55b2b2a9c475d4522310cfaa", "text": "This paper studies an active underground economy which specializes in the commoditization of activities such as credit card fraud, identity theft, spamming, phishing, online credential theft, and the sale of compromised hosts. Using a seven month trace of logs collected from an active underground market operating on public Internet chat networks, we measure how the shift from “hacking for fun” to “hacking for profit” has given birth to a societal substrate mature enough to steal wealth into the millions of dollars in less than one year.", "title": "" }, { "docid": "f7f6f01e2858e03ae9a1313e0bb7b25f", "text": "This paper addresses the problem of planning under uncertainty in large Markov Decision Processes (MDPs). Factored MDPs represent a complex state space using state variables and the transition model using a dynamic Bayesian network. This representation often allows an exponential reduction in the representation size of structured MDPs, but the complexity of exact solution algorithms for such MDPs can grow exponentially in the representation size. In this paper, we present two approximate solution algorithms that exploit structure in factored MDPs. Both use an approximate value function represented as a linear combination of basis functions, where each basis function involves only a small subset of the domain variables. A key contribution of this paper is that it shows how the basic operations of both algorithms can be performed efficiently in closed form, by exploiting both additive and context-specific structure in a factored MDP. A central element of our algorithms is a novel linear program decomposition technique, analogous to variable elimination in Bayesian networks, which reduces an exponentially large LP to a provably equivalent, polynomial-sized one. One algorithm uses approximate linear programming, and the second approximate dynamic programming. Our dynamic programming algorithm is novel in that it uses an approximation based on max-norm, a technique that more directly minimizes the terms that appear in error bounds for approximate MDP algorithms. We provide experimental results on problems with over 10 states, demonstrating a promising indication of the scalability of our approach, and compare our algorithm to an existing state-of-the-art approach, showing, in some problems, exponential gains in computation time.", "title": "" }, { "docid": "fe11fc1282a7efc34a9efe0e81fb21d6", "text": "Increased complexity in modern embedded systems has presented various important challenges with regard to side-channel attacks. In particular, it is common to deploy SoC-based target devices with high clock frequencies in security-critical scenarios; understanding how such features align with techniques more often deployed against simpler devices is vital from both destructive (i.e., attack) and constructive (i.e., evaluation and/or countermeasure) perspectives. In this paper, we investigate electromagnetic-based leakage from three different means of executing cryptographic workloads (including the general purpose ARM core, an on-chip co-processor, and the NEON core) on the AM335x SoC. 
Our conclusion is that addressing challenges of the type above is feasible, and that key recovery attacks can be conducted with modest resources.", "title": "" } ]
scidocsrr
c693172c8adb20fab73f1efd786dbf8e
Being with virtual others: Neural correlates of social interaction
[ { "docid": "d6f322f4dd7daa9525f778ead18c8b5e", "text": "Face perception, perhaps the most highly developed visual skill in humans, is mediated by a distributed neural system in humans that is comprised of multiple, bilateral regions. We propose a model for the organization of this system that emphasizes a distinction between the representation of invariant and changeable aspects of faces. The representation of invariant aspects of faces underlies the recognition of individuals, whereas the representation of changeable aspects of faces, such as eye gaze, expression, and lip movement, underlies the perception of information that facilitates social communication. The model is also hierarchical insofar as it is divided into a core system and an extended system. The core system is comprised of occipitotemporal regions in extrastriate visual cortex that mediate the visual analysis of faces. In the core system, the representation of invariant aspects is mediated more by the face-responsive region in the fusiform gyrus, whereas the representation of changeable aspects is mediated more by the face-responsive region in the superior temporal sulcus. The extended system is comprised of regions from neural systems for other cognitive functions that can be recruited to act in concert with the regions in the core system to extract meaning from faces.", "title": "" } ]
[ { "docid": "2366ab0736d4d88cd61a578b9287f9f5", "text": "Scientific curiosity and fascination have played a key role in human research with psychedelics along with the hope that perceptual alterations and heightened insight could benefit well-being and play a role in the treatment of various neuropsychiatric disorders. These motivations need to be tempered by a realistic assessment of the hurdles to be cleared for therapeutic use. Development of a psychedelic drug for treatment of a serious psychiatric disorder presents substantial although not insurmountable challenges. While the varied psychedelic agents described in this chapter share some properties, they have a range of pharmacologic effects that are reflected in the gradation in intensity of hallucinogenic effects from the classical agents to DMT, MDMA, ketamine, dextromethorphan and new drugs with activity in the serotonergic system. The common link seems to be serotonergic effects modulated by NMDA and other neurotransmitter effects. The range of hallucinogens suggest that they are distinct pharmacologic agents and will not be equally safe or effective in therapeutic targets. Newly synthesized specific and selective agents modeled on the legacy agents may be worth considering. Defining therapeutic targets that represent unmet medical need, addressing market and commercial issues, and finding treatment settings to safely test and use such drugs make the human testing of psychedelics not only interesting but also very challenging. This article is part of the Special Issue entitled 'Psychedelics: New Doors, Altered Perceptions'.", "title": "" }, { "docid": "36c4b2ab451c24d2d0d6abcbec491116", "text": "A key advantage of scientific workflow systems over traditional scripting approaches is their ability to automatically record data and process dependencies introduced during workflow runs. This information is often represented through provenance graphs, which can be used by scientists to better understand, reproduce, and verify scientific results. However, while most systems record and store data and process dependencies, few provide easy-to-use and efficient approaches for accessing and querying provenance information. Instead, users formulate provenance graph queries directly against physical data representations (e.g., relational, XML, or RDF), leading to queries that are difficult to express and expensive to evaluate. We address these problems through a high-level query language tailored for expressing provenance graph queries. The language is based on a general model of provenance supporting scientific workflows that process XML data and employ update semantics. Query constructs are provided for querying both structure and lineage information. Unlike other languages that return sets of nodes as answers, our query language is closed, i.e., answers to lineage queries are sets of lineage dependencies (edges) allowing answers to be further queried. We provide a formal semantics for the language and present novel techniques for efficiently evaluating lineage queries. Experimental results on real and synthetic provenance traces demonstrate that our lineage based optimizations outperform an in-memory and standard database implementation by orders of magnitude. 
We also show that our strategies are feasible and can significantly reduce both provenance storage size and query execution time when compared with standard approaches.", "title": "" }, { "docid": "6f2162f883fce56eaa6bd8d0fbcedc0b", "text": "While data from Massive Open Online Courses (MOOCs) offers the potential to gain new insights into the ways in which online communities can contribute to student learning, much of the richness of the data trace is still yet to be mined. In particular, very little work has attempted fine-grained content analyses of the student interactions in MOOCs. Survey research indicates the importance of student goals and intentions in keeping them involved in a MOOC over time. Automated fine-grained content analyses offer the potential to detect and monitor evidence of student engagement and how it relates to other aspects of their behavior. Ultimately these indicators reflect their commitment to remaining in the course. As a methodological contribution, in this paper we investigate using computational linguistic models to measure learner motivation and cognitive engagement from the text of forum posts. We validate our techniques using survival models that evaluate the predictive validity of these variables in connection with attrition over time. We conduct this evaluation in three MOOCs focusing on very different types of learning materials. Prior work demonstrates that participation in the discussion forums at all is a strong indicator of student commitment. Our methodology allows us to differentiate better among these students, and to identify danger signs that a struggling student is in need of support within a population whose interaction with the course offers the opportunity for effective support to be administered. Theoretical and practical implications will be discussed.", "title": "" }, { "docid": "dd6b50a56b740d07f3d02139d16eeec4", "text": "Mitochondria play a central role in the aging process. Studies in model organisms have started to integrate mitochondrial effects on aging with the maintenance of protein homeostasis. These findings center on the mitochondrial unfolded protein response (UPR(mt)), which has been implicated in lifespan extension in worms, flies, and mice, suggesting a conserved role in the long-term maintenance of cellular homeostasis. Here, we review current knowledge of the UPR(mt) and discuss its integration with cellular pathways known to regulate lifespan. We highlight how insight into the UPR(mt) is revolutionizing our understanding of mitochondrial lifespan extension and of the aging process.", "title": "" }, { "docid": "95b9bed09e52824f74dd81d4b0cfcff2", "text": "Short circuit current and transient recovery voltage arising in power systems under fault conditions can develop thermal and dielectric failures in the system and may create severe damage to the critical components. Therefore, main devices in our power system especially like circuit breaker extremely need to be tested first. Testing can be done by two ways; direct testing, and synthetic testing. For testing high voltage circuit breakers, direct testing is not economical because of high power generating capability requirement of laboratory, high installation cost, and more space. Synthetic testing is an economical method for testing of high voltage circuit breakers. In synthetic test circuit, it is quite complex to choose the circuit components value for a desired transient recovery voltage (TRV) envelope. 
It is because, modification of any component value may cause change in all parameters of output waveform. This paper proposes a synthesis process to design synthetic test circuit to generate four-parameter transient recovery voltage (TRV) envelope for circuit breaker testing. A synthetic test circuit has been simulated in PSCAD to generate four-parameter TRV envelope for 145kV rating of circuit breaker.", "title": "" }, { "docid": "0277fd19009088f84ce9f94a7e942bc1", "text": "These study it is necessary to can be used as a theoretical foundation upon which to base decision-making and strategic thinking about e-learning system. This paper proposes a new framework for assessing readiness of an organization to implement the e-learning system project on the basis of McKinsey 7S model using fuzzy logic analysis. The study considers 7 dimensions as approach to assessing the current situation of the organization prior to system implementation to identify weakness areas which may encounter the project with failure. Adopted was focus on Questionnaires and group interviews to specific data collection from three colleges in Mosul University in Iraq. This can be achieved success in building an e-learning system at the University of Mosul by readiness assessment according to the model of multidimensional based on the framework of 7S is selected by 23 factors, and thus can avoid failures or weaknesses facing the implementation process before the start of the project and a step towards enabling the administration to make decisions that achieve success in this area, as well as to avoid the high cost associated with the implementation process.", "title": "" }, { "docid": "ab9416aaed78f3b1d6706ecd59c83db8", "text": "The ArchiMate modelling language provides a coherent and a holistic view of an enterprise in terms of its products, services, business processes, actors, business units, software applications and more. Yet, ArchiMate currently lacks (1) expressivity in modelling an enterprise from a value exchange perspective, and (2) rigour and guidelines in modelling business processes that realize the transactions relevant from a value perspective. To address these issues, we show how to connect e $$^{3}$$ value, a technique for value modelling, to ArchiMate via transaction patterns from the DEMO methodology. Using ontology alignment techniques, we show a transformation between the meta models underlying e $$^{3}$$ value, DEMO and ArchiMate. Furthermore, we present a step-wise approach that shows how this model transformation is achieved and, in doing so, we also show the of such a transformation. We exemplify the transformation of DEMO and e $$^{3}$$ value into ArchiMate by means of a case study in the insurance industry. As a proof of concept, we present a software tool supporting our transformation approach. Finally, we discuss the functionalities and limitations of our approach; thereby, we analyze its and practical applicability.", "title": "" }, { "docid": "815355c0a4322fa15af3a1112e56fc50", "text": "People believe that depth plays an important role in success of deep neural networks (DNN). However, this belief lacks solid theoretical justifications as far as we know. We investigate role of depth from perspective of margin bound. In margin bound, expected error is upper bounded by empirical margin error plus Rademacher Average (RA) based capacity term. First, we derive an upper bound for RA of DNN, and show that it increases with increasing depth. 
This indicates negative impact of depth on test performance. Second, we show that deeper networks tend to have larger representation power (measured by Betti numbers based complexity) than shallower networks in multi-class setting, and thus can lead to smaller empirical margin error. This implies positive impact of depth. The combination of these two results shows that for DNN with restricted number of hidden units, increasing depth is not always good since there is a tradeoff between positive and negative impacts. These results inspire us to seek alternative ways to achieve positive impact of depth, e.g., imposing margin-based penalty terms to cross entropy loss so as to reduce empirical margin error without increasing depth. Our experiments show that in this way, we achieve significantly better test performance.", "title": "" }, { "docid": "aca04e624f1c3dcd3f0ab9f9be1ef384", "text": "In this paper, a novel three-phase parallel grid-connected multilevel inverter topology with a novel switching strategy is proposed. This inverter is intended to feed a microgrid from renewable energy sources (RES) to overcome the problem of the polluted sinusoidal output in classical inverters and to reduce component count, particularly for generating a multilevel waveform with a large number of levels. The proposed power converter consists of <inline-formula><tex-math notation=\"LaTeX\">$n$</tex-math></inline-formula> two-level <inline-formula> <tex-math notation=\"LaTeX\">$(n+1)$</tex-math></inline-formula> phase inverters connected in parallel, where <inline-formula><tex-math notation=\"LaTeX\">$n$</tex-math></inline-formula> is the number of RES. The more the number of RES, the more the number of voltage levels, the more faithful is the output sinusoidal waveform. In the proposed topology, both voltage pulse width and height are modulated and precalculated by using a pulse width and height modulation so as to reduce the number of switching states (i.e., switching losses) and the total harmonic distortion. The topology is investigated through simulations and validated experimentally with a laboratory prototype. Compliance with the <inline-formula><tex-math notation=\"LaTeX\">$\\text{IEEE 519-1992}$</tex-math></inline-formula> and <inline-formula><tex-math notation=\"LaTeX\">$\\text{IEC 61000-3-12}$</tex-math></inline-formula> standards is presented and an exhaustive comparison of the proposed topology is made against the classical cascaded H-bridge topology.", "title": "" }, { "docid": "8ec871d495cf8d796654015896e2dcd2", "text": "Artificial intelligence research is ushering in a new era of sophisticated, mass-market transportation technology. While computers can already fly a passenger jet better than a trained human pilot, people are still faced with the dangerous yet tedious task of driving automobiles. Intelligent Transportation Systems (ITS) is the field that focuses on integrating information technology with vehicles and transportation infrastructure to make transportation safer, cheaper, and more efficient. Recent advances in ITS point to a future in which vehicles themselves handle the vast majority of the driving task. Once autonomous vehicles become popular, autonomous interactions amongst multiple vehicles will be possible. Current methods of vehicle coordination, which are all designed to work with human drivers, will be outdated. The bottleneck for roadway efficiency will no longer be the drivers, but rather the mechanism by which those drivers’ actions are coordinated. 
While open-road driving is a well-studied and more-or-less-solved problem, urban traffic scenarios, especially intersections, are much more challenging. We believe current methods for controlling traffic, specifically at intersections, will not be able to take advantage of the increased sensitivity and precision of autonomous vehicles as compared to human drivers. In this article, we suggest an alternative mechanism for coordinating the movement of autonomous vehicles through intersections. Drivers and intersections in this mechanism are treated as autonomous agents in a multiagent system. In this multiagent system, intersections use a new reservation-based approach built around a detailed communication protocol, which we also present. We demonstrate in simulation that our new mechanism has the potential to significantly outperform current intersection control technology—traffic lights and stop signs. Because our mechanism can emulate a traffic light or stop sign, it subsumes the most popular current methods of intersection control. This article also presents two extensions to the mechanism. The first extension allows the system to control human-driven vehicles in addition to autonomous vehicles. The second gives priority to emergency vehicles without significant cost to civilian vehicles. The mechanism, including both extensions, is implemented and tested in simulation, and we present experimental results that strongly attest to the efficacy of this approach.", "title": "" }, { "docid": "fac03559daded831095dfc9e083b794d", "text": "Multi-label classification is prevalent in many real-world applications, where each example can be associated with a set of multiple labels simultaneously. The key challenge of multi-label classification comes from the large space of all possible label sets, which is exponential to the number of candidate labels. Most previous work focuses on exploiting correlations among different labels to facilitate the learning process. It is usually assumed that the label correlations are given beforehand or can be derived directly from data samples by counting their label co-occurrences. However, in many real-world multi-label classification tasks, the label correlations are not given and can be hard to learn directly from data samples within a moderate-sized training set. Heterogeneous information networks can provide abundant knowledge about relationships among different types of entities including data samples and class labels. In this paper, we propose to use heterogeneous information networks to facilitate the multi-label classification process. By mining the linkage structure of heterogeneous information networks, multiple types of relationships among different class labels and data samples can be extracted. Then we can use these relationships to effectively infer the correlations among different class labels in general, as well as the dependencies among the label sets of data examples inter-connected in the network. Empirical studies on real-world tasks demonstrate that the performance of multi-label classification can be effectively boosted using heterogeneous information net- works.", "title": "" }, { "docid": "15881d5448e348c6e1a63e195daa68eb", "text": "Bottleneck autoencoders have been actively researched as a solution to image compression tasks. However, we observed that bottleneck autoencoders produce subjectively low quality reconstructed images. 
In this work, we explore the ability of sparse coding to improve reconstructed image quality for the same degree of compression. We observe that sparse image compression produces visually superior reconstructed images and yields higher values of pixel-wise measures of reconstruction quality (PSNR and SSIM) compared to bottleneck autoencoders. In addition, we find that using alternative metrics that correlate better with human perception, such as feature perceptual loss and the classification accuracy, sparse image compression scores up to 18.06% and 2.7% higher, respectively, compared to bottleneck autoencoders. Although computationally much more intensive, we find that sparse coding is otherwise superior to bottleneck autoencoders for the same degree of compression.", "title": "" }, { "docid": "c618caa277af7a0a64dd676bffab9cd3", "text": "Theoretical and empirical research documents a negative relation between the cross-section of stock returns and individual skewness. Individual skewness has been defined with coskewness, industry groups, predictive models, and even with options’ skewness. However, measures of skewness computed only from stock returns, such as historical skewness, do not confirm this negative relation. In this paper, we propose a model-free measure of individual stock skewness directly obtained from high-frequency intraday prices, which we call realized skewness. We hypothesize that realized skewness predicts future stock returns. To test this hypothesis, we sort stocks every week according to realized skewness, form five portfolios and analyze subsequent weekly returns. We find a negative relation between realized skewness and stock returns in the cross section. A trading strategy that buys stocks in the lowest realized skewness quintile and sells stocks in the highest realized skewness quintile generates an average raw return of 38 basis points per week with a t-statistic of 9.15. This result is robust to different market periods, portfolio weightings, firm characteristics and is not explained by linear factor models. Comments are welcome. We both want to thank IFM for financial support. Any remaining inadequacies are ours alone. Correspondence to: Aurelio Vasquez, Faculty of Management, McGill University, 1001 Sherbrooke Street West, Montreal, Quebec, Canada, H3A 1G5; Tel: (514) 398-4000 x.00231; E-mail: Aurelio.Vasquez@mcgill.ca.", "title": "" }, { "docid": "2708052c26111d54ba2c235afa26f71f", "text": "Reinforcement Learning (RL) has been an interesting research area in Machine Learning and AI. Hierarchical Reinforcement Learning (HRL) that decomposes the RL problem into sub-problems where solving each of which will be more powerful than solving the entire problem will be our concern in this paper. A review of the state-of-the-art of HRL has been investigated. Different HRL-based domains have been highlighted. Different problems in such different domains along with some proposed solutions have been addressed. It has been observed that HRL has not yet been surveyed in the current existing research; the reason that motivated us to work on this paper. Concluding remarks are presented. 
Some ideas have been emerged during the work on this research and have been proposed for pursuing a future research.", "title": "" }, { "docid": "9e3a7ae57f7faf984bdf8559e7e49850", "text": "In the late 1960s Brazil was experiencing a boom in its television and record industries, as part of the so-called “Economic Miracle” (1968 74) brought about by the military dictatorship’s opening up of the market to international capital. Censorship was introduced more or less simultaneously and responded in part to the military’s recognition of the potential power of the audio-visual media in a country in which over half of the population was illiterate or semi-literate. After the 1964 coup and until the infamous 5 Institutional Act (AI-5), introduced in 1968 to silence opposition to the regime, the left wing cultural production that had characterised the period under the government of the deposed populist president, João Goulart, had continued to flourish. Until 1968, the military had largely left the cultural scene alone to face up to the failure of its revolutionary political and cultural projects. Instead the generals focused on the brutal repression of student, trade union and grassroots activists who had collaborated with the cultural left, thus effectively depriving these artists of their public. Chico Buarque, one of the most censored performers of the period, maintains that at this moment he was saved from retreating into an introspective formalism in his songs and musical dramas by the emergence in 1965 of the televised music festivals, which became one of the most talked about events in the country (Buarque, 1979, 48). Sponsored by the television stations, which were themselves closely monitored and regulated by the government, the festivals still provided oppositional songwriters with an opportunity to re-", "title": "" }, { "docid": "87f3c12df54f395b9a24ccfc4dd10aa8", "text": "The ever increasing interest in semantic technologies and the availability of several open knowledge sources have fueled recent progress in the field of recommender systems. In this paper we feed recommender systems with features coming from the Linked Open Data (LOD) cloud - a huge amount of machine-readable knowledge encoded as RDF statements - with the aim of improving recommender systems effectiveness. In order to exploit the natural graph-based structure of RDF data, we study the impact of the knowledge coming from the LOD cloud on the overall performance of a graph-based recommendation algorithm. In more detail, we investigate whether the integration of LOD-based features improves the effectiveness of the algorithm and to what extent the choice of different feature selection techniques influences its performance in terms of accuracy and diversity. The experimental evaluation on two state of the art datasets shows a clear correlation between the feature selection technique and the ability of the algorithm to maximize a specific evaluation metric. Moreover, the graph-based algorithm leveraging LOD-based features is able to overcome several state of the art baselines, such as collaborative filtering and matrix factorization, thus confirming the effectiveness of the proposed approach.", "title": "" }, { "docid": "eae92d06d00d620791e6b247f8e63c36", "text": "Tagging systems have become major infrastructures on the Web. They allow users to create tags that annotate and categorize content and share them with other users, very helpful in particular for searching multimedia content. 
However, as tagging is not constrained by a controlled vocabulary and annotation guidelines, tags tend to be noisy and sparse. Especially new resources annotated by only a few users have often rather idiosyncratic tags that do not reflect a common perspective useful for search. In this paper we introduce an approach based on Latent Dirichlet Allocation (LDA) for recommending tags of resources in order to improve search. Resources annotated by many users and thus equipped with a fairly stable and complete tag set are used to elicit latent topics to which new resources with only a few tags are mapped. Based on this, other tags belonging to a topic can be recommended for the new resource. Our evaluation shows that the approach achieves significantly better precision and recall than the use of association rules, suggested in previous work, and also recommends more specific tags. Moreover, extending resources with these recommended tags significantly improves search for new resources.", "title": "" }, { "docid": "34c1910dbd746368671b2b795114edfe", "text": "Article history: Received: 4.7.2015. Received in revised form: 9.1.2016. Accepted: 29.1.2016. This paper presents a design of a distributed switched reluctance motor for an integrated motorfan system. Unlike a conventional compact motor structure, the rotor is distributed into the ends of the impeller blades. This distributed structure of motor makes more space for airflow to pass through so that the system efficiency is highly improved. Simultaneously, the distributed structure gives the motor a higher torque, better efficiency and heat dissipation. The paper first gives an initial design of a switched reluctance motor based on system structure constraints and output equations, then it predicts the machine performance and determines phase current and winding turns based on equivalent magnetic circuit analysis; finally it validates and refines the analytical design with 3D transient finite element analysis. It is found that the analytical performance prediction agrees well with finite element analysis results except for the weakness on core losses estimation. The results of the design shows that the distributed switched reluctance motor can produce a large torque of pretty high efficiency at specified speeds.", "title": "" }, { "docid": "8a92594dbd75885002bad0dc2e658e10", "text": "Exposure to some music, in particular classical music, has been reported to produce transient increases in cognitive performance. The authors investigated the effect of listening to an excerpt of Vivaldi's Four Seasons on category fluency in healthy older adult controls and Alzheimer's disease patients. In a counterbalanced repeated-measure design, participants completed two, 1-min category fluency tasks whilst listening to an excerpt of Vivaldi and two, 1-min category fluency tasks without music. The authors report a positive effect of music on category fluency, with performance in the music condition exceeding performance without music in both the healthy older adult control participants and the Alzheimer's disease patients. In keeping with previous reports, the authors conclude that music enhances attentional processes, and that this can be demonstrated in Alzheimer's disease.", "title": "" }, { "docid": "51ecd734744b42a5fd770231d9e84785", "text": "Within the last few years a lot of research has been done on large social and information networks. One of the principal challenges concerning complex networks is link prediction. 
Most link prediction algorithms are based on the underlying network structure in terms of traditional graph theory. In order to design efficient algorithms for large scale networks, researchers increasingly adapt methods from advanced matrix and tensor computations. This paper proposes a novel approach of link prediction for complex networks by means of multi-way tensors. In addition to structural data we furthermore consider temporal evolution of a network. Our approach applies the canonical Parafac decomposition to reduce tensor dimensionality and to retrieve latent trends. For the development and evaluation of our proposed link prediction algorithm we employed various popular datasets of online social networks like Facebook and Wikipedia. Our results show significant improvements for evolutionary networks in terms of prediction accuracy measured through mean average precision.", "title": "" } ]
scidocsrr
a96fd40bc8fa60ddf253889bc2d2ab65
End-to-End Memory Networks with Knowledge Carryover for Multi-Turn Spoken Language Understanding
[ { "docid": "ea200dc100d77d8c156743bede4a965b", "text": "We present a contextual spoken language understanding (contextual SLU) method using Recurrent Neural Networks (RNNs). Previous work has shown that context information, specifically the previously estimated domain assignment, is helpful for domain identification. We further show that other context information such as the previously estimated intent and slot labels are useful for both intent classification and slot filling tasks in SLU. We propose a step-n-gram model to extract sentence-level features from RNNs, which extract sequential features. The step-n-gram model is used together with a stack of Convolution Networks for training domain/intent classification. Our method therefore exploits possible correlations among domain/intent classification and slot filling and incorporates context information from the past predictions of domain/intent and slots. The proposed method obtains new state-of-the-art results on ATIS and improved performances over baseline techniques such as conditional random fields (CRFs) on a large context-sensitive SLU dataset.", "title": "" } ]
[ { "docid": "82acc0bf0fc3860255c77af5e45a31a0", "text": "We propose a mobile food recognition system the poses of which are estimating calorie and nutritious of foods and recording a user's eating habits. Since all the processes on image recognition performed on a smart-phone, the system does not need to send images to a server and runs on an ordinary smartphone in a real-time way. To recognize food items, a user draws bounding boxes by touching the screen first, and then the system starts food item recognition within the indicated bounding boxes. To recognize them more accurately, we segment each food item region by GrubCut, extract a color histogram and SURF-based bag-of-features, and finally classify it into one of the fifty food categories with linear SVM and fast 2 kernel. In addition, the system estimates the direction of food regions where the higher SVM output score is expected to be obtained, show it as an arrow on the screen in order to ask a user to move a smartphone camera. This recognition process is performed repeatedly about once a second. We implemented this system as an Android smartphone application so as to use multiple CPU cores effectively for real-time recognition. In the experiments, we have achieved the 81.55% classification rate for the top 5 category candidates when the ground-truth bounding boxes are given. In addition, we obtained positive evaluation by user study compared to the food recording system without object recognition.", "title": "" }, { "docid": "e189f36ba0fcb91d0608d0651c60516e", "text": "In this paper, we describe the progressive design of the gesture recognition module of an automated food journaling system -- Annapurna. Annapurna runs on a smartwatch and utilises data from the inertial sensors to first identify eating gestures, and then captures food images which are presented to the user in the form of a food journal. We detail the lessons we learnt from multiple in-the-wild studies, and show how eating recognizer is refined to tackle challenges such as (i) high gestural diversity, and (ii) non-eating activities with similar gestural signatures. Annapurna is finally robust (identifying eating across a wide diversity in food content, eating styles and environments) and accurate (false-positive and false-negative rates of 6.5% and 3.3% respectively)", "title": "" }, { "docid": "2e7a88fb1eef478393a99366ff7089c8", "text": "Asbestos has been described as a physical carcinogen in that long thin fibers are generally more carcinogenic than shorter thicker ones. It has been hypothesized that long thin fibers disrupt chromosome behavior during mitosis, causing chromosome abnormalities which lead to cell transformation and neoplastic progression. Using high-resolution time lapse video-enhanced light microscopy and the uniquely suited lung epithelial cells of the newt Taricha granulosa, we have characterized for the first time the behavior of crocidolite asbestos fibers, and their interactions with chromosomes, during mitosis in living cells. We found that the keratin cage surrounding the mitotic spindle inhibited fiber migration, resulting in spindles with few fibers. As in interphase, fibers displayed microtubule-mediated saltatory movements. Fiber position was only slightly affected by the ejection forces of the spindle asters. Physical interactions between crocidolite fibers and chromosomes occurred randomly within the spindle and along its edge. 
Crocidolite fibers showed no affinity toward chromatin and most encounters ended with the fiber passively yielding to the chromosome. In a few encounters along the spindle edge the chromosome yielded to the fiber, which remained stationary as if anchored to the keratin cage. We suggest that fibers thin enough to be caught in the keratin cage and long enough to protrude into the spindle are those fibers with the ability to snag or block moving chromosomes.", "title": "" }, { "docid": "09c5fdbd76b7e81ef95c8edcc367bce7", "text": "Convolutional Neural Networks (CNNs), known as ConvNets, are widely used in many visual imagery applications, such as object classification and speech recognition. After the implementation and demonstration of the deep convolutional neural network in ImageNet classification in 2012 by Krizhevsky, the architecture of deep Convolutional Neural Networks has attracted many researchers. This has led to major development in deep learning frameworks such as TensorFlow, Caffe, Keras, and Theano. Though the implementation of deep learning is quite possible by employing deep learning frameworks, the mathematical theory and concepts are harder to understand for new learners and practitioners. This article is intended to provide an overview of the ConvNet architecture and to explain the mathematical theory behind it, including the activation function, loss function, feedforward and backward propagation. In this article, a grey scale image is taken as the input, ReLU and Sigmoid activation functions are considered for developing the architecture, and the cross-entropy loss function is used for computing the difference between the predicted value and the actual value. The architecture is developed in such a way that it can contain one convolution layer, one pooling layer, and multiple dense layers.", "title": "" }, { "docid": "8c0c7d6554f21b4cb5e155cf1e33a165", "text": "Despite progress, early childhood development (ECD) remains a neglected issue, particularly in resource-poor countries. We analyse the challenges and opportunities that ECD proponents face in advancing global priority for the issue. We triangulated among several data sources, including 19 semi-structured interviews with individuals involved in global ECD leadership, practice, and advocacy, as well as peer-reviewed research, organisation reports, and grey literature. We undertook a thematic analysis of the collected data, drawing on social science scholarship on collective action and a policy framework that elucidates why some global initiatives are more successful in generating political priority than others. The analysis indicates that the ECD community faces two primary challenges in advancing global political priority. The first pertains to framing: generation of internal consensus on the definition of the problem and solutions, agreement that could facilitate the discovery of a public positioning of the issue that could generate political support. The second concerns governance: building of effective institutions to achieve collective goals. However, there are multiple opportunities to advance political priority for ECD, including an increasingly favourable political environment, advances in ECD metrics, and the existence of compelling arguments for investment in ECD.
To advance global priority for ECD, proponents will need to surmount the framing and governance challenges and leverage these opportunities.", "title": "" }, { "docid": "c495fadfd4c3e17948e71591e84c3398", "text": "A real-time, digital algorithm for pulse width modulation (PWM) with distortion-free baseband is developed in this paper. The algorithm not only eliminates the intrinsic baseband distortion of digital PWM but also avoids the appearance of side-band components of the carrier in the baseband even for low switching frequencies. Previous attempts to implement digital PWM with these spectral properties required several processors due to their complexity; the proposed algorithm uses only several FIR filters and a few multiplications and additions and therefore is implemented in real time on a standard DSP. The performance of the algorithm is compared with that of a uniform, double-edge PWM modulator via experimental measurements for several bandlimited modulating signals.", "title": "" }, { "docid": "7b526ab92e31c2677fd20022a8b46189", "text": "Close physical interaction between robots and humans is a particularly challenging aspect of robot development. For successful interaction and cooperation, the robot must have the ability to adapt its behavior to the human counterpart. Based on our earlier work, we present and evaluate a computationally efficient machine learning algorithm that is well suited for such close-contact interaction scenarios. We show that this algorithm helps to improve the quality of the interaction between a robot and a human caregiver. To this end, we present two human-in-the-loop learning scenarios that are inspired by human parenting behavior, namely, an assisted standing-up task and an assisted walking task.", "title": "" }, { "docid": "cd0d425c8315a22ed9e52b8bdd489b52", "text": "Data mining is an essential phase in knowledge discovery in databases which is used to extract hidden patterns from large databases. Data mining concepts and methods can be applied in various fields like marketing, medicine, real estate, customer relationship management, engineering, web mining, etc. The main objective of this paper is to compare the performance accuracy of the Multilayer Perceptron (MLP) Artificial Neural Network and the ID3 (Iterative Dichotomiser 3) and C4.5 (also known as J48) Decision Tree algorithms in the Weka data mining software in predicting Typhoid fever. The data used is the patients' dataset collected from a well-known Nigerian hospital. ID3, C4.5 Decision Tree and MLP Artificial Neural Network in the WEKA data mining software were used for the implementation. The data collected were transformed into a form that is acceptable to the data mining software and split into two sets: the training dataset and the testing dataset, so that they can be imported into the system. The training set was used to enable the system to observe relationships between input data and the resulting outcomes in order to perform the prediction. The testing dataset contains data used to test the performance of the model. This model can be used by medical experts both in private and public hospitals to make more timely and consistent diagnoses of typhoid fever cases, which will reduce the death rate in our country. The MLP ANN model exhibits good performance in the prediction of typhoid fever disease in general because of the low values generated in the Mean Absolute Error (MAE), Root Mean Squared Error (RMSE) and Relative Absolute Error (RAE) error performance measures.
Keywords: ID3, C4.5, MLP, Decision Tree, Artificial Neural Network, Typhoid fever. African Journal of Computing & ICT Reference Format: O.O. Adeyemo, T.O. Adeyeye & D. Ogunbiyi (2015). Comparative Study of ID3/C4.5 Decision Tree and Multilayer Perceptron Algorithms for the Prediction of Typhoid Fever. Afr J. of Comp & ICTs. Vol 8, No. 1. Pp 103-112.", "title": "" }, { "docid": "2ea12a279b2a059399dcc62db2957ce5", "text": "Alkaline pretreatment with NaOH under mild operating conditions was used to improve ethanol and biogas production from softwood spruce and hardwood birch. The pretreatments were carried out at different temperatures between minus 15 and 100°C with 7.0% w/w NaOH solution for 2 h. The pretreated materials were then enzymatically hydrolyzed and subsequently fermented to ethanol or anaerobically digested to biogas. In general, the pretreatment was more successful for both ethanol and biogas production from the hardwood birch than the softwood spruce. The pretreatment resulted in significant reduction of hemicellulose and the crystallinity of cellulose, which might be responsible for improved enzymatic hydrolyses of birch from 6.9% to 82.3% and spruce from 14.1% to 35.7%. These results were obtained with pretreatment at 100°C for birch and 5°C for spruce. Subsequently, the best ethanol yield obtained was 0.08 g/g of the spruce while pretreated at 100°C, and 0.17 g/g of the birch treated at 100°C. On the other hand, digestion of untreated birch and spruce resulted in methane yields of 250 and 30 l/kg VS of the wood species, respectively. The pretreatment of the wood species at the best conditions for enzymatic hydrolysis resulted in 83% and 74% improvement in methane production from birch and spruce.", "title": "" }, { "docid": "d0b29493c64e787ed88ad8166d691c3d", "text": "Mobile apps have to satisfy various privacy requirements. Notably, app publishers are often obligated to provide a privacy policy and notify users of their apps' privacy practices. But how can a user tell whether an app behaves as its policy promises? In this study we introduce a scalable system to help analyze and predict Android apps' compliance with privacy requirements. We discuss how we customized our system in a collaboration with the California Office of the Attorney General. Beyond its use by regulators and activists our system is also meant to assist app publishers and app store owners in their internal assessments of privacy requirement compliance. Our analysis of 17,991 free Android apps shows the viability of combining machine learning-based privacy policy analysis with static code analysis of apps. Results suggest that 71% of apps that lack a privacy policy should have one. Also, for 9,050 apps that have a policy, we find many instances of potential inconsistencies between what the app policy seems to state and what the code of the app appears to do. In particular, as many as 41% of these apps could be collecting location information and 17% could be sharing it with third parties without disclosing this in their policies. Overall, each app exhibits a mean of 1.83 potential privacy requirement inconsistencies.", "title": "" }, { "docid": "befc74d8dc478a67c009894c3ef963d3", "text": "In this paper, we demonstrate that the essentials of image classification and retrieval are the same, since both tasks could be tackled by measuring the similarity between images. To this end, we propose ONE (Online Nearest-neighbor Estimation), a unified algorithm for both image classification and retrieval.
ONE is surprisingly simple, which only involves manual object definition, regional description and nearest-neighbor search. We take advantage of PCA and PQ approximation and GPU parallelization to scale our algorithm up to large-scale image search. Experimental results verify that ONE achieves state-of-the-art accuracy in a wide range of image classification and retrieval benchmarks.", "title": "" }, { "docid": "d4b6be1c4d8dd37b71bf536441449ad5", "text": "Why should wait for some days to get or receive the distributed computing fundamentals simulations and advanced topics book that you order? Why should you take it if you can get the faster one? You can find the same book that you order right here. This is it the book that you can receive directly after purchasing. This distributed computing fundamentals simulations and advanced topics is well known book in the world, of course many people will try to own it. Why don't you become the first? Still confused with the way?", "title": "" }, { "docid": "9df0df8eb4f71d8c6952e07a179b2ec4", "text": "In interpersonal interactions, speech and body gesture channels are internally coordinated towards conveying communicative intentions. The speech-gesture relationship is influenced by the internal emotion state underlying the communication. In this paper, we focus on uncovering the emotional effect on the interrelation between speech and body gestures. We investigate acoustic features describing speech prosody (pitch and energy) and vocal tract configuration (MFCCs), as well as three types of body gestures, viz., head motion, lower and upper body motions. We employ mutual information to measure the coordination between the two communicative channels, and analyze the quantified speech-gesture link with respect to distinct levels of emotion attributes, i.e., activation and valence. The results reveal that the speech-gesture coupling is generally tighter for low-level activation and high-level valence, compared to high-level activation and low-level valence. We further propose a framework for modeling the dynamics of speech-gesture interaction. Experimental studies suggest that such quantified coupling representations can well discriminate different levels of activation and valence, reinforcing that emotions are encoded in the dynamics of the multimodal link. We also verify that the structures of the coupling representations are emotiondependent using subspace-based analysis.", "title": "" }, { "docid": "97b7065942b53f2d873c80f32242cd00", "text": "Hierarchical multilabel classification (HMC) allows an instance to have multiple labels residing in a hierarchy. A popular loss function used in HMC is the H-loss, which penalizes only the first classification mistake along each prediction path. However, the H-loss metric can only be used on tree-structured label hierarchies, but not on DAG hierarchies. Moreover, it may lead to misleading predictions as not all misclassifications in the hierarchy are penalized. In this paper, we overcome these deficiencies by proposing a hierarchy-aware loss function that is more appropriate for HMC. Using Bayesian decision theory, we then develop a Bayes-optimal classifier with respect to this loss function. Instead of requiring an exhaustive summation and search for the optimal multilabel, the proposed classification problem can be efficiently solved using a greedy algorithm on both tree-and DAG-structured label hierarchies. 
Experimental results on a large number of real-world data sets show that the proposed algorithm outperforms existing HMC methods.", "title": "" }, { "docid": "350d1717a5192873ef9e0ac9ed3efc7b", "text": "OBJECTIVE\nTo describe the effects of percutaneously implanted valve-in-valve in the tricuspid position for patients with pre-existing transvalvular device leads.\n\n\nMETHODS\nIn this case series, we describe implantation of the Melody valve and SAPIEN XT valve within dysfunctional bioprosthetic tricuspid valves in three patients with transvalvular device leads.\n\n\nRESULTS\nIn all cases, the valve was successfully deployed and device lead function remained unchanged. In 1/3 cases with 6-month follow-up, device lead parameters remain unchanged and transcatheter valve-in-valve function remains satisfactory.\n\n\nCONCLUSIONS\nTranscatheter tricuspid valve-in-valve is feasible in patients with pre-existing transvalvular devices leads. Further study is required to determine the long-term clinical implications of this treatment approach.", "title": "" }, { "docid": "fa22819c73c9f9cd2d0ee243a7450e76", "text": "This dissertation describes a simulated autonomous car capable of driving on urbanstyle roads. The system is built around TORCS, an open source racing car simulator. Two real-time solutions are implemented; a reactive prototype using a neural network and a more complex deliberative approach using a sense, plan, act architecture. The deliberative system uses vision data fused with simulated laser range data to reliably detect road markings. The detected road markings are then used to plan a parabolic path and compute a safe speed for the vehicle. The vehicle uses a simulated global positioning/inertial measurement sensor to guide it along the desired path with the throttle, brakes, and steering being controlled using proportional controllers. The vehicle is able to reliably navigate the test track maintaining a safe road position at speeds of up to 40km/h.", "title": "" }, { "docid": "b25b7100c035ad2953fb43087ede1625", "text": "In this paper, a novel 10W substrate integrated waveguide (SIW) high power amplifier (HPA) designed with SIW matching network (MN) is presented. The SIW MN is connected with microstrip line using microstrip-to-SIW transition. An inductive metallized post in SIW is employed to realize impedance matching. At the fundamental frequency of 2.14 GHz, the impedance matching is realized by moving the position of the inductive metallized post in the SIW. Both the input and output MNs are designed with the proposed SIW-based MN concept. One SIW-based 10W HPA using GaN HEMT at 2.14 GHz is designed, fabricated, and measured. The proposed SIW-based HPA can be easily connected with any microstrip circuit with microstrip-to-SIW transition. Measured results show that the maximum power added efficiency (PAE) is 65.9 % with 39.8 dBm output power and the maximum gain is 20.1 dB with 30.9 dBm output power at 2.18 GHz. The size of the proposed SIW-based HPA is comparable with other microstrip-based PAs designed at the operating frequency.", "title": "" }, { "docid": "9664431f0cfc22567e1e5c945f898595", "text": "Anomaly detection aims to detect abnormal events by a model of normality. It plays an important role in many domains such as network intrusion detection, criminal activity identity and so on. With the rapidly growing size of accessible training data and high computation capacities, deep learning based anomaly detection has become more and more popular. 
In this paper, a new domain-based anomaly detection method based on generative adversarial networks (GAN) is proposed. Minimum likelihood regularization is proposed to make the generator produce more anomalies and prevent it from converging to normal data distribution. Proper ensemble of anomaly scores is shown to improve the stability of discriminator effectively. The proposed method has achieved significant improvement than other anomaly detection methods on Cifar10 and UCI datasets.", "title": "" }, { "docid": "281b0a108c1e8507f26381cc905ce9d1", "text": "Extraction–Transform–Load (ETL) processes comprise complex data workflows, which are responsible for the maintenance of a Data Warehouse. A plethora of ETL tools is currently available constituting a multi-million dollar market. Each ETL tool uses its own technique for the design and implementation of an ETL workflow, making the task of assessing ETL tools extremely difficult. In this paper, we identify common characteristics of ETL workflows in an effort of proposing a unified evaluation method for ETL. We also identify the main points of interest in designing, implementing, and maintaining ETL workflows. Finally, we propose a principled organization of test suites based on the TPC-H schema for the problem of experimenting with ETL workflows.", "title": "" }, { "docid": "a7b8986dbfde4a7ccc3a4ad6e07319a7", "text": "This article tests expectations generated by the veto players theory with respect to the over time composition of budgets in a multidimensional policy space. The theory predicts that countries with many veto players (i.e., coalition governments, bicameral political systems, presidents with veto) will have difficulty altering the budget structures. In addition, countries that tend to make significant shifts in government composition will have commensurate modifications of the budget. Data collected from 19 advanced industrialized countries from 1973 to 1995 confirm these expectations, even when one introduces socioeconomic controls for budget adjustments like unemployment variations, size of retired population and types of government (minimum winning coalitions, minority or oversized governments). The methodological innovation of the article is the use of empirical indicators to operationalize the multidimensional policy spaces underlying the structure of budgets. The results are consistent with other analyses of macroeconomic outcomes like inflation, budget deficits and taxation that are changed at a slower pace by multiparty governments. The purpose of this article is to test empirically the expectations of the veto players theory in a multidimensional setting. The theory defines ‘veto players’ as individuals or institutions whose agreement is required for a change of the status quo. The basic prediction of the theory is that when the number of veto players and their ideological distances increase, policy stability also increases (only small departures from the status quo are possible) (Tsebelis 1995, 1999, 2000, 2002). The theory was designed for the study of unidimensional and multidimensional policy spaces. While no policy domain is strictly unidimensional, existing empirical tests have only focused on analyzing political economy issues in a single dimension. These studies have confirmed the veto players theory’s expectations (see Bawn (1999) on budgets; Hallerberg & Basinger (1998) on taxes; Tsebelis (1999) on labor legislation; Treisman (2000) on inflation; Franzese (1999) on budget deficits). 
This article is the first attempt to test whether the predictions of the veto players theory hold in multidimensional policy spaces. We will study a phenomenon that cannot be considered unidimensional: the ‘structure’ of budgets – that is, their percentage composition, and the change in this composition over © European Consortium for Political Research 2004 Published by Blackwell Publishing Ltd., 9600 Garsington Road, Oxford, OX4 2DQ, UK and 350 Main Street, Malden, MA 02148, USA", "title": "" } ]
scidocsrr
c210f30c1e3255ffe2487adf19bfd6b0
ICDAR 2003 robust reading competitions: entries, results, and future directions
[ { "docid": "f3d86ca456bb9e97b090ea68a82be93b", "text": "Many images—especially those used for page design on web pages—as well as videos contain visible text. If these text occurrences could be detected, segmented, and recognized automatically, they would be a valuable source of high-level semantics for indexing and retrieval. In this paper, we propose a novel method for localizing and segmenting text in complex images and videos. Text lines are identified by using a complex-valued multilayer feed-forward network trained to detect text at a fixed scale and position. The network’s output at all scales and positions is integrated into a single text-saliency map, serving as a starting point for candidate text lines. In the case of video, these candidate text lines are refined by exploiting the temporal redundancy of text in video. Localized text lines are then scaled to a fixed height of 100 pixels and segmented into a binary image with black characters on white background. For videos, temporal redundancy is exploited to improve segmentation performance. Input images and videos can be of any size due to a true multiresolution approach. Moreover, the system is not only able to locate and segment text occurrences into large binary images, but is also able to track each text line with sub-pixel accuracy over the entire occurrence in a video, so that one text bitmap is created for all instances of that text line. Therefore, our text segmentation results can also be used for object-based video encoding such as that enabled by MPEG-4.", "title": "" } ]
[ { "docid": "dddec8d72a4ed68ee47c0cc7f4f31dbd", "text": "Probabilistic topic modeling of text collections is a powerful tool for statistical text analysis. In this tutorial we introduce a novel non-Bayesian approach, called Additive Regularization of Topic Models. ARTM is free of redundant probabilistic assumptions and provides a simple inference for many combined and multi-objective topic models.", "title": "" }, { "docid": "8775af6029924a390cfb51aa17f99a2a", "text": "Machine learning is increasingly used to make sense of the physical world yet may suffer from adversarial manipulation. We examine the Viola-Jones 2D face detection algorithm to study whether images can be created that humans do not notice as faces yet the algorithm detects as faces. We show that it is possible to construct images that Viola-Jones recognizes as containing faces yet no human would consider a face. Moreover, we show that it is possible to construct images that fool facial detection even when they are printed and then photographed.", "title": "" }, { "docid": "44a84af55421c88347034d6dc14e4e30", "text": "Anomaly detection plays an important role in protecting computer systems from unforeseen attack by automatically recognizing and filter atypical inputs. However, it can be difficult to balance the sensitivity of a detector – an aggressive system can filter too many benign inputs while a conservative system can fail to catch anomalies. Accordingly, it is important to rigorously test anomaly detectors to evaluate potential error rates before deployment. However, principled systems for doing so have not been studied – testing is typically ad hoc, making it difficult to reproduce results or formally compare detectors. To address this issue we present a technique and implemented system, Fortuna, for obtaining probabilistic bounds on false positive rates for anomaly detectors that process Internet data. Using a probability distribution based on PageRank and an efficient algorithm to draw samples from the distribution, Fortuna computes an estimated false positive rate and a probabilistic bound on the estimate’s accuracy. By drawing test samples from a well defined distribution that correlates well with data seen in practice, Fortuna improves on ad hoc methods for estimating false positive rate, giving bounds that are reproducible, comparable across different anomaly detectors, and theoretically sound. Experimental evaluations of three anomaly detectors (SIFT, SOAP, and JSAND) show that Fortuna is efficient enough to use in practice — it can sample enough inputs to obtain tight false positive rate bounds in less than 10 hours for all three detectors. These results indicate that Fortuna can, in practice, help place anomaly detection on a stronger theoretical foundation and help practitioners better understand the behavior and consequences of the anomaly detectors that they deploy. As part of our work, we obtain a theoretical result that may be of independent interest: We give a simple analysis of the convergence rate of the random surfer process defining PageRank that guarantees the same rate as the standard, second-eigenvalue analysis, but does not rely on any assumptions about the link structure of the web.", "title": "" }, { "docid": "e19d53b7ebccb3a1354bb6411182b1d3", "text": "ERP implementation projects affect large parts of an implementing organization and lead to changes in the way an organization performs its tasks. The costs needed for the effort to implement these systems are hard to estimate. 
Research indicates that the size of an ERP project can be a useful measurement for predicting the effort required to complete an ERP implementation project. However, such a metric does not yet exist. Therefore research should be carried out to find a set of variables which can define the size of an ERP project. This paper describes a first step in such a project. It shows 21 logical clusters of ERP implementation project activities based on 405 ERP implementation project activities retrieved from literature. Logical clusters of ERP project activities can be used in further research to find variables for defining the size of an ERP project. IntroductIon Globalization has put pressure on organizations to perform as efficiently and effectively as possible in order to compete in the market. Structuring their internal processes and making them most efficient by integrated information systems is very important for that reason. In the 1990s, organizations started implementing ERP systems in order to replace their legacy systems and improve their business processes. This change is still being implemented. ERP is a key ingredient for gaining competitive advantage, streamlining operations, and having “lean” manufacturing (Mabert, Soni, & Venkataramanan, 2003). A study of Hendricks indicates that research shows some evidence of improvements in profitability after implementing ERP systems (Hendricks, Singhal, & Stratman, 2006). Forecasters predict a growth in the ERP market. 1847 Copyright © 2011, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited. 1848 Sizing ERP Implementation Projects Several researchers also indicate that much research is still being carried out in this area ( Møller, Kræmmergaard, & Rikhardsson, 2004; Botta-Genoulaz, Millet, & Grabot, 2005). Although the research area is rather clearly defined, many topics still have to be researched and the usefulness of results for actual projects has to be designed. ERP projects are large and risky projects for organizations, because they affect great parts of the implementing organization and lead to changes in the way the organization performs its tasks. The costs needed for the effort to implement these systems are usually very high and also very hard to estimate. Many cases are documented where the actual required time and costs exceeded the budget, that is to say the estimated costs, many times. There are even cases where ERP implementation projects led to bankruptcy (Holland & Light, 1999; Scott, 1999). Francalanci states that software costs only represent a fraction of the overall cost of ERP projects within the total costs of the implementation project, that is to say, less than 10% over a 5-year period (Francalanci, 2001). In addition, Willis states that consultants alone can cost as much as or more than five times the cost of the software (Willis, Willis-Brown, & McMillan, 2001). This is confirmed by von Arb, who indicates that consultancy costs can be 2 to 4 times as much as software license costs (Arb, 1997). This indicates that the effort required for implementing an ERP system largely consists of effort-related costs. Von Arb also argues that license and hardware costs are fairly constant and predictable and that only a focus on reducing these effort-related costs is realistic. The conclusion is legitimate that the total effort is the most important and difficult factor to estimate in an ERP implementation project. 
Therefore, the main research of the authors only focuses on the estimation of the total effort required for implementing an ERP system. In every project there is a great uncertainty at the start, while at the end there is only a minor uncertainty (Meredith & Mantel, 2003). In the planning phase, the most important decisions are made that will affect the future of the organization as a whole. As described earlier, a failure to implement an ERP system can seriously affect the health of an organization and even lead to bankruptcy. This means that it would be of great help if a method would exist that could predict the effort required for implementing the ERP system within reasonable boundaries. The method should not be too complex and should be quick. Its outcomes should support the rough estimation of the project and serve as a starting point for the detailed planning in the set-up phase of the project phase and for the first allocation of the resources. Moreover, if conditions greatly change during a project, the method could be used to estimate the consequences for the remaining effort required for implementing the ERP system. The aim of this article is to answer which activities exist in ERP projects according to literature and how these can be clustered as a basis for defining the size of an ERP project. In the article, the approach and main goal of our research will first be described, followed by a literature review on ERP project activities. After that, we will present the clustering approach and results followed by conclusions and discussion.", "title": "" }, { "docid": "b11a161588bd1a3d4d7cd78ecce4aa64", "text": "This article analyses different types of reference models applicable to support the set up and (re)configuration of Virtual Enterprises (VEs). Reference models are models capturing concepts common to VEs aiming to convert the task of setting up a VE into a configuration task, and hence reducing the time needed for VE creation. The reference models are analysed through a mapping onto the Virtual Enterprise Reference Architecture (VERA) based upon GERAM and created in the IMS GLOBEMEN project.", "title": "" }, { "docid": "691cdea5cf3fae2713c721c1cfa8c132", "text": "of the Dissertation Addressing the Challenges of Underspecification in Web Search", "title": "" }, { "docid": "d40ac2e9a896e13ece11d7429fab3d80", "text": "We present our recent work (ICS 2011) on dynamic environments in which computational nodes, or decision makers, follow simple and unsophisticated rules of behavior (e.g., repeatedly \"best replying\" to others' actions, and minimizing \"regret\") that have been extensively studied in game theory and economics. We aim to understand when convergence of the resulting dynamics to an equilibrium point is guaranteed if nodes' interaction is not synchronized (e.g., as in Internet protocols and large-scale markets). We take the first steps of this research agenda. We exhibit a general non-convergence result and consider its implications across a wide variety of interesting and timely applications: routing, congestion control, game theory, social networks and circuit design. 
We also consider the relationship between classical nontermination results in distributed computing theory and our result, explore the impact of scheduling on convergence, study the computational and communication complexity of asynchronous dynamics and present some basic observations regarding the effects of asynchrony on no-regret dynamics.", "title": "" }, { "docid": "043306203de8365bd1930a9c0b4138c7", "text": "In this paper, we compare two different methods for automatic Arabic speech recognition for isolated words and sentences. Isolated word/sentence recognition was performed using cepstral feature extraction by linear predictive coding, as well as Hidden Markov Models (HMM) for pattern training and classification. We implemented a new pattern classification method, where we used Neural Networks trained using the Al-Alaoui Algorithm. This new method gave comparable results to the already implemented HMM method for the recognition of words, and it has overcome HMM in the recognition of sentences. The speech recognition system implemented is part of the Teaching and Learning Using Information Technology (TLIT) project which would implement a set of reading lessons to assist adult illiterates in developing better reading capabilities.", "title": "" }, { "docid": "a7f046dcc5e15ccfbe748fa2af400c98", "text": "INTRODUCTION\nSmoking and alcohol use (beyond social norms) by health sciences students are behaviors contradictory to the social function they will perform as health promoters in their eventual professions.\n\n\nOBJECTIVES\nIdentify prevalence of tobacco and alcohol use in health sciences students in Mexico and Cuba, in order to support educational interventions to promote healthy lifestyles and development of professional competencies to help reduce the harmful impact of these legal drugs in both countries.\n\n\nMETHODS\nA descriptive cross-sectional study was conducted using quantitative and qualitative techniques. Data were collected from health sciences students on a voluntary basis in both countries using the same anonymous self-administered questionnaire, followed by an in-depth interview.\n\n\nRESULTS\nPrevalence of tobacco use was 56.4% among Mexican students and 37% among Cuban. It was higher among men in both cases, but substantial levels were observed in women as well. The majority of both groups were regularly exposed to environmental tobacco smoke. Prevalence of alcohol use was 76.9% in Mexican students, among whom 44.4% were classified as at-risk users. Prevalence of alcohol use in Cuban students was 74.1%, with 3.7% classified as at risk.\n\n\nCONCLUSIONS\nThe high prevalence of tobacco and alcohol use in these health sciences students is cause for concern, with consequences not only for their individual health, but also for their professional effectiveness in helping reduce these drugs' impact in both countries.", "title": "" }, { "docid": "c5731d7290f1ab073c12bf67101a386a", "text": "Convolutional neural networks have emerged as the leading method for the classification and segmentation of images. In some cases, it is desirable to focus the attention of the net on a specific region in the image; one such case is the recognition of the contents of transparent vessels, where the vessel region in the image is already known. This work presents a valve filter approach for focusing the attention of the net on a region of interest (ROI). In this approach, the ROI is inserted into the net as a binary map. 
The net uses a different set of convolution filters for the ROI and background image regions, resulting in a different set of features being extracted from each region. More accurately, for each filter used on the image, a corresponding valve filter exists that acts on the ROI map and determines the regions in which the corresponding image filter will be used. This valve filter effectively acts as a valve that inhibits specific features in different image regions according to the ROI map. In addition, a new data set for images of materials in glassware vessels in a chemistry laboratory setting is presented. This data set contains a thousand images with pixel-wise annotation according to categories ranging from filled and empty to the exact phase of the material inside the vessel. The results of the valve filter approach and fully convolutional neural nets (FCN) with no ROI input are compared based on this data set.", "title": "" }, { "docid": "e1e1fcc7a732e5b2835c5a137722b3ee", "text": "Regular expression matching is a crucial task in several networking applications. Current implementations are based on one of two types of finite state machines. Non-deterministic finite automata (NFAs) have minimal storage demand but have high memory bandwidth requirements. Deterministic finite automata (DFAs) exhibit low and deterministic memory bandwidth requirements at the cost of increased memory space. It has already been shown how the presence of wildcards and repetitions of large character classes can render DFAs and NFAs impractical. Additionally, recent security-oriented rule-sets include patterns with advanced features, namely back-references, which add to the expressive power of traditional regular expressions and cannot therefore be supported through classical finite automata.\n In this work, we propose and evaluate an extended finite automaton designed to address these shortcomings. First, the automaton provides an alternative approach to handle character repetitions that limits memory space and bandwidth requirements. Second, it supports back-references without the need for back-tracking in the input string. In our discussion of this proposal, we address practical implementation issues and evaluate the automaton on real-world rule-sets. To our knowledge, this is the first high-speed automaton that can accommodate all the Perl-compatible regular expressions present in the Snort network intrusion and detection system.", "title": "" }, { "docid": "7875910ad044232b4631ecacfec65656", "text": "In this study, a questionnaire (Cyberbullying Questionnaire, CBQ) was developed to assess the prevalence of numerous modalities of cyberbullying (CB) in adolescents. The association of CB with the use of other forms of violence, exposure to violence, acceptance and rejection by peers was also examined. In the study, participants were 1431 adolescents, aged between 12 and 17 years (726 girls and 682 boys). The adolescents responded to the CBQ, measures of reactive and proactive aggression, exposure to violence, justification of the use of violence, and perceived social support of peers. Sociometric measures were also used to assess the use of direct and relational aggression and the degree of acceptance and rejection by peers. The results revealed excellent psychometric properties for the CBQ. Of the adolescents, 44.1% responded affirmatively to at least one act of CB. Boys used CB to a greater extent than girls.
Lastly, CB was significantly associated with the use of proactive aggression, justification of violence, exposure to violence, and less perceived social support of friends.", "title": "" }, { "docid": "3300e4e29d160fb28861ac58740834b5", "text": "To facilitate proactive fault management in large-scale systems such as IBM Blue Gene/P, online failure prediction is of paramount importance. While many techniques have been presented for online failure prediction, questions arise regarding two commonly used approaches: period-based and event-driven. Which one has better accuracy? What is the best observation window (i.e., the time interval used to collect evidence before making a prediction)? How does the lead time (i.e., the time interval from the prediction to the failure occurrence) impact prediction accuracy? To answer these questions, we analyze and compare period-based and event-driven prediction approaches via a Bayesian prediction model. We evaluate these prediction approaches, under a variety of testing parameters, by means of RAS logs collected from a production supercomputer at Argonne National Laboratory. Experimental results show that the period-based Bayesian model and the event-driven Bayesian model can achieve up to 65.0% and 83.8% prediction accuracy, respectively. Furthermore, our sensitivity study indicates that the event-driven approach seems more suitable for proactive fault management in large-scale systems like Blue Gene/P.", "title": "" }, { "docid": "807b1a6a389788d598c5c0ec11b336ab", "text": "One of the distinguishing aspects of human language is its compositionality, which allows us to describe complex environments with limited vocabulary. Previously, it has been shown that neural network agents can learn to communicate in a highly structured, possibly compositional language based on disentangled input (e.g. hand-engineered features). Humans, however, do not learn to communicate based on well-summarized features. In this work, we train neural agents to simultaneously develop visual perception from raw image pixels, and learn to communicate with a sequence of discrete symbols. The agents play an image description game where the image contains factors such as colors and shapes. We train the agents using the obverter technique where an agent introspects to generate messages that maximize its own understanding. Through qualitative analysis, visualization and a zero-shot test, we show that the agents can develop, out of raw image pixels, a language with compositional properties, given a proper pressure from the environment.", "title": "" }, { "docid": "0879399fcb38c103a0e574d6d9010215", "text": "We present a content-based method for recommending citations in an academic paper draft. We embed a given query document into a vector space, then use its nearest neighbors as candidates, and rerank the candidates using a discriminative model trained to distinguish between observed and unobserved citations. Unlike previous work, our method does not require metadata such as author names which can be missing, e.g., during the peer review process. Without using metadata, our method outperforms the best reported results on PubMed and DBLP datasets with relative improvements of over 18% in F1@20 and over 22% in MRR. We show empirically that, although adding metadata improves the performance on standard metrics, it favors self-citations which are less useful in a citation recommendation setup.
We release an online portal for citation recommendation based on our method, and a new dataset OpenCorpus of 7 million research articles to facilitate future research on this task.", "title": "" }, { "docid": "42cf4bd800000aed5e0599cba52ba317", "text": "There is a significant amount of controversy related to the optimal amount of dietary carbohydrate. This review summarizes the health-related positives and negatives associated with carbohydrate restriction. On the positive side, there is substantive evidence that for many individuals, low-carbohydrate, high-protein diets can effectively promote weight loss. Low-carbohydrate diets (LCDs) also can lead to favorable changes in blood lipids (i.e., decreased triacylglycerols, increased high-density lipoprotein cholesterol) and decrease the severity of hypertension. These positives should be balanced by consideration of the likelihood that LCDs often lead to decreased intakes of phytochemicals (which could increase predisposition to cardiovascular disease and cancer) and nondigestible carbohydrates (which could increase risk for disorders of the lower gastrointestinal tract). Diets restricted in carbohydrates also are likely to lead to decreased glycogen stores, which could compromise an individual's ability to maintain high levels of physical activity. LCDs that are high in saturated fat appear to raise low-density lipoprotein cholesterol and may exacerbate endothelial dysfunction. However, for the significant percentage of the population with insulin resistance or those classified as having metabolic syndrome or prediabetes, there is much experimental support for consumption of a moderately restricted carbohydrate diet (i.e., one providing approximately 26%-44% of calories from carbohydrate) that emphasizes high-quality carbohydrate sources. This type of dietary pattern would likely lead to favorable changes in the aforementioned cardiovascular disease risk factors, while minimizing the potential negatives associated with consumption of the more restrictive LCDs.", "title": "" }, { "docid": "aefa758e6b5681c213150ed674eae915", "text": "This paper presents a solution to automatically recognize the correct left/right and upright/upside-down orientation of iris images. This solution can be used to counter spoofing attacks directed to generate fake identities by rotating an iris image or the iris sensor during the acquisition. Two approaches are compared on the same data, using the same evaluation protocol: 1) feature engineering, using hand-crafted features classified by a support vector machine (SVM) and 2) feature learning, using data-driven features learned and classified by a convolutional neural network (CNN). A data set of 20,750 iris images, acquired for 103 subjects using four sensors, was used for development. An additional subject-disjoint data set of 1,939 images, from 32 additional subjects, was used for testing purposes. Both same-sensor and cross-sensor tests were carried out to investigate how the classification approaches generalize to unknown hardware. The SVM-based approach achieved an average correct classification rate above 95% (89%) for recognition of left/right (upright/upside-down) orientation when tested on subject-disjoint data and camera-disjoint data, and 99% (97%) if the images were acquired by the same sensor. The CNN-based approach performed better for same-sensor experiments, and presented slightly worse generalization capabilities to unknown sensors when compared with the SVM.
We are not aware of any other papers on the automatic recognition of upright/upside-down orientation of iris images, or studying both hand-crafted and data-driven features in same-sensor and cross-sensor subject-disjoint experiments. The data sets used in this paper, along with random splits of the data used in cross-validation, are being made available.", "title": "" }, { "docid": "26db4ecbc2ad4b8db0805b06b55fe27d", "text": "The advent of high voltage (HV) wide band-gap power semiconductor devices has enabled the medium voltage (MV) grid tied operation of non-cascaded neutral point clamped (NPC) converters. This results in increased power density, efficiency as well as lesser control complexity. The multi-chip 15 kV/40 A SiC IGBT and 15 kV/20 A SiC MOSFET are two such devices which have gained attention for MV grid interface applications. Such converters based on these devices find application in active power filters, STATCOM or as active front end converters for solid state transformers. This paper presents an experimental comparative evaluation of these two SiC devices for 3-phase grid connected applications using a 3-level NPC converter as reference. The IGBTs are generally used for high power applications due to their lower conduction loss while MOSFETs are used for high frequency applications due to their lower switching loss. The thermal performance of these devices are compared based on device loss characteristics, device heat-run tests, 3-level pole heat-run tests, PLECS thermal simulation based loss comparison and MV experiments on developed hardware prototypes. The impact of switching frequency on the harmonic control of the grid connected converter is also discussed and suitable device is selected for better grid current THD.", "title": "" }, { "docid": "d9160f2cc337de729af34562d77a042e", "text": "Ontologies proliferate with the progress of the Semantic Web. Ontology matching is an important way of establishing interoperability between (Semantic) Web applications that use different but related ontologies. Due to their sizes and monolithic nature, large ontologies regarding real world domains bring a new challenge to the state of the art ontology matching technology. In this paper, we propose a divide-and-conquer approach to matching large ontologies. We develop a structure-based partitioning algorithm, which partitions entities of each ontology into a set of small clusters and constructs blocks by assigning RDF Sentences to those clusters. Then, the blocks from different ontologies are matched based on precalculated anchors, and the block mappings holding high similarities are selected. Finally, two powerful matchers, V-DOC and GMO, are employed to discover alignments in the block mappings. Comprehensive evaluation on both synthetic and real world data sets demonstrates that our approach both solves the scalability problem and achieves good precision and recall with significant reduction of execution time. 2008 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "4cff5279110ff2e45060f3ccec7d51ba", "text": "Web site usability is a critical metric for assessing the quality of a firm’s Web presence. A measure of usability must not only provide a global rating for a specific Web site, ideally it should also illuminate specific strengths and weaknesses associated with site design. In this paper, we describe a heuristic evaluation procedure for examining the usability of Web sites. The procedure utilizes a comprehensive set of usability guidelines developed by Microsoft. 
We present the categories and subcategories comprising these guidelines, and discuss the development of an instrument that operationalizes the measurement of usability. The proposed instrument was tested in a heuristic evaluation study where 1,475 users rated multiple Web sites from four different industry sectors: airlines, online bookstores, automobile manufacturers, and car rental agencies. To enhance the external validity of the study, users were asked to assume the role of a consumer or an investor when assessing usability. Empirical results suggest that the evaluation procedure, the instrument, as well as the usability metric exhibit good properties. Implications of the findings for researchers, for Web site designers, and for heuristic evaluation methods in usability testing are offered. (Usability; Heuristic Evaluation; Microsoft Usability Guidelines; Human-Computer Interaction; Web Interface)", "title": "" } ]
scidocsrr
9075dc1f6297ae56988ab18f77b78e9f
Activity Recognition using Actigraph Sensor
[ { "docid": "d62bded822aff38333a212ed1853b53c", "text": "The design of an activity recognition and monitoring system based on the eWatch, multi-sensor platform worn on different body positions, is presented in this paper. The system identifies the user's activity in realtime using multiple sensors and records the classification results during a day. We compare multiple time domain feature sets and sampling rates, and analyze the tradeoff between recognition accuracy and computational complexity. The classification accuracy on different body positions used for wearing electronic devices was evaluated", "title": "" } ]
[ { "docid": "ca8b1080c8e1d6d234d12370f47d7874", "text": "Alcelaphine herpesvirus-1 (AlHV-1), a causative agent of malignant catarrhal fever in cattle, was detected in wildebeest (Connochaetes taurinus) placenta tissue for the first time. Although viral load was low, the finding of viral DNA in over 50% of 94 samples tested lends support to the possibility that placental tissue could play a role in disease transmission and that wildebeest calves are infected in utero. Two viral loci were sequenced to examine variation among virus samples obtained from wildebeest and cattle: the ORF50 gene, encoding the lytic cycle transactivator protein, and the A9.5 gene, encoding a novel polymorphic viral glycoprotein. ORF50 was well conserved with six newly discovered alleles differing at only one or two base positions. In contrast, while only three new A9.5 alleles were discovered, these differed by up to 13% at the nucleotide level and up to 20% at the amino acid level. Structural homology searching performed with the additional A9.5 sequences determined in this study adds power to recent analysis identifying the four-helix bundle cytokine interleukin-4 (IL4) as the major homologue. The majority of MCF virus samples obtained from Tanzanian cattle and wildebeest encoded A9.5 polypeptides identical to the previously characterized A9.5 allele present in the laboratory maintained AlHV-1 C500 strain. This supports the view that AlHV-1 C500 is suitable for the development of a vaccine for wildebeest-associated MCF.", "title": "" }, { "docid": "2a487ff4b9218900e9a0e480c23e4c25", "text": "5.1 CONVENTIONAL ACTUATORS, SHAPE MEMORY ALLOYS, AND ELECTRORHEOLOGICAL FLUIDS ............................................................................................................................................................. 1 5.1.", "title": "" }, { "docid": "6da632d61dbda324da5f74b38f25b1b9", "text": "Deep neural networks have shown good data modelling capabilities when dealing with challenging and large datasets from a wide range of application areas. Convolutional Neural Networks (CNNs) offer advantages in selecting good features and Long Short-Term Memory (LSTM) networks have proven good abilities of learning sequential data. Both approaches have been reported to provide improved results in areas such image processing, voice recognition, language translation and other Natural Language Processing (NLP) tasks. Sentiment classification for short text messages from Twitter is a challenging task, and the complexity increases for Arabic language sentiment classification tasks because Arabic is a rich language in morphology. In addition, the availability of accurate pre-processing tools for Arabic is another current limitation, along with limited research available in this area. In this paper, we investigate the benefits of integrating CNNs and LSTMs and report obtained improved accuracy for Arabic sentiment analysis on different datasets. Additionally, we seek to consider the morphological diversity of particular Arabic words by using different sentiment classification levels.", "title": "" }, { "docid": "e1fb515f0f5bbec346098f1ee2aaefdc", "text": "Observing failures and other – desired or undesired – behavior patterns in large scale software systems of specific domains (telecommunication systems, information systems, online web applications, etc.) is difficult. Very often, it is only possible by examining the runtime behavior of these systems through operational logs or traces. 
However, these systems can generate data in order of gigabytes every day, which makes a challenge to process in the course of predicting upcoming critical problems or identifying relevant behavior patterns. We can say that there is a gap between the amount of information we have and the amount of information we need to make a decision. Low level data has to be processed, correlated and synthesized in order to create high level, decision helping data. The actual value of this high level data lays in its availability at the time of decision making (e.g., do we face a virus attack?). In other words high level data has to be available real-time or near real-time. The research area of event processing deals with processing such data that are viewed as events and with making alerts to the administrators (users) of the systems about relevant behavior patterns based on the rules that are determined in advance. The rules or patterns describe the typical circumstances of the events which have been experienced by the administrators. Normally, these experts improve their observation capabilities over time as they experience more and more critical events and the circumstances preceding them. However, there is a way to aid this manual process by applying the results from a related (and from many aspects, overlapping) research area, predictive analytics, and thus improving the effectiveness of event processing. Predictive analytics deals with the prediction of future events based on previously observed historical data by applying sophisticated methods like machine learning, the historical data is often collected and transformed by using techniques similar to the ones of event processing, e.g., filtering, correlating the data, and so on. In this paper, we are going to examine both research areas and offer a survey on terminology, research achievements, existing solutions, and open issues. We discuss the applicability of the research areas to the telecommunication domain. We primarily base our survey on articles published in international conferences and journals, but we consider other sources of information as well, like technical reports, tools or web-logs.", "title": "" }, { "docid": "7210c2e82441b142f722bcc01bfe9aca", "text": "In the beginning of the last decade, agile methodologies emerged as a response to software development processes that were based on rigid approaches. In fact, the flexible characteristics of agile methods are expected to be suitable to the less-defined and uncertain nature of software development. However, many studies in this area lack empirical evaluation in order to provide more confident evidences about which contexts the claims are true. This paper reports an empirical study performed to analyze the impact of Scrum adoption on customer satisfaction as an external success perspective for software development projects in a software intensive organization. The study uses data from real-life projects executed in a major software intensive organization located in a nation wide software ecosystem. The empirical method applied was a cross-sectional survey using a sample of 19 real-life software development projects involving 156 developers. The survey aimed to determine whether there is any impact on customer satisfaction caused by the Scrum adoption. 
However, considering that sample, our results indicate that it was not possible to establish any evidence that using Scrum may help to achieve customer satisfaction and, consequently, increase the success rates in software projects, in contrary to general claims made by Scrum's advocates.", "title": "" }, { "docid": "f0242a2a54b1c4538abdd374c74f69f6", "text": "Background: An increasing research effort has devoted to just-in-time (JIT) defect prediction. A recent study by Yang et al. at FSE'16 leveraged individual change metrics to build unsupervised JIT defect prediction model. They found that many unsupervised models performed similarly to or better than the state-of-the-art supervised models in effort-aware JIT defect prediction. Goal: In Yang et al.'s study, code churn (i.e. the change size of a code change) was neglected when building unsupervised defect prediction models. In this study, we aim to investigate the effectiveness of code churn based unsupervised defect prediction model in effort-aware JIT defect prediction. Methods: Consistent with Yang et al.'s work, we first use code churn to build a code churn based unsupervised model (CCUM). Then, we evaluate the prediction performance of CCUM against the state-of-the-art supervised and unsupervised models under the following three prediction settings: cross-validation, time-wise cross-validation, and cross-project prediction. Results: In our experiment, we compare CCUM against the state-of-the-art supervised and unsupervised JIT defect prediction models. Based on six open-source projects, our experimental results show that CCUM performs better than all the prior supervised and unsupervised models. Conclusions: The result suggests that future JIT defect prediction studies should use CCUM as a baseline model for comparison when a novel model is proposed.", "title": "" }, { "docid": "96c3c7f605f7ca763df0710629edd726", "text": "This study underlines the importance of cinnamon, a widely-used food spice and flavoring material, and its metabolite sodium benzoate (NaB), a widely-used food preservative and a FDA-approved drug against urea cycle disorders in humans, in increasing the levels of neurotrophic factors [e.g., brain-derived neurotrophic factor (BDNF) and neurotrophin-3 (NT-3)] in the CNS. NaB, but not sodium formate (NaFO), dose-dependently induced the expression of BDNF and NT-3 in primary human neurons and astrocytes. Interestingly, oral administration of ground cinnamon increased the level of NaB in serum and brain and upregulated the levels of these neurotrophic factors in vivo in mouse CNS. Accordingly, oral feeding of NaB, but not NaFO, also increased the level of these neurotrophic factors in vivo in the CNS of mice. NaB induced the activation of protein kinase A (PKA), but not protein kinase C (PKC), and H-89, an inhibitor of PKA, abrogated NaB-induced increase in neurotrophic factors. Furthermore, activation of cAMP response element binding (CREB) protein, but not NF-κB, by NaB, abrogation of NaB-induced expression of neurotrophic factors by siRNA knockdown of CREB and the recruitment of CREB and CREB-binding protein to the BDNF promoter by NaB suggest that NaB exerts its neurotrophic effect through the activation of CREB. Accordingly, cinnamon feeding also increased the activity of PKA and the level of phospho-CREB in vivo in the CNS. 
These results highlight a novel neurotrophic property of cinnamon and its metabolite NaB via the PKA – CREB pathway, which may be of benefit for various neurodegenerative disorders.", "title": "" }, { "docid": "20adf89d9301cdaf64d8bf684886de92", "text": "A standard planar Kernel Density Estimation (KDE) aims to produce a smooth density surface of spatial point events over a 2-D geographic space. However, the planar KDE may not be suited for characterizing certain point events, such as traffic accidents, which usually occur inside a 1-D linear space, the roadway network. This paper presents a novel network KDE approach to estimating the density of such spatial point events. One key feature of the new approach is that the network space is represented with basic linear units of equal network length, termed lixel (linear pixel), and related network topology. The use of lixel not only facilitates the systematic selection of a set of regularly spaced locations along a network for density estimation, but also makes the practical application of the network KDE feasible by significantly improving the computation efficiency. The approach is implemented in the ESRI ArcGIS environment and tested with the year 2005 traffic accident data and a road network in the Bowling Green, Kentucky area. The test results indicate that the new network KDE is more appropriate than standard planar KDE for density estimation of traffic accidents, since the latter covers space beyond the event context (network space) and is likely to overestimate the density values. The study also investigates the impacts on density calculation from two kernel functions, lixel lengths, and search bandwidths. It is found that the kernel function is least important in structuring the density pattern over network space, whereas the lixel length critically impacts the local variation details of the spatial density pattern. The search bandwidth imposes the highest influence by controlling the smoothness of the spatial pattern, showing local effects at a narrow bandwidth and revealing \" hot spots \" at larger or global scales with a wider bandwidth. More significantly, the idea of representing a linear network by a network system of equal-length lixels may potentially lead the way to developing a suite of other network related spatial analysis and modeling methods.", "title": "" }, { "docid": "2d4c99f3ff7a19580f9f012da99a8348", "text": "OBJECTIVES\nTo compare the effectiveness of a mixture of acacia fiber, psyllium fiber, and fructose (AFPFF) with polyethylene glycol 3350 combined with electrolytes (PEG+E) in the treatment of children with chronic functional constipation (CFC); and to evaluate the safety and effectiveness of AFPFF in the treatment of children with CFC.\n\n\nSTUDY DESIGN\nThis was a randomized, open label, prospective, controlled, parallel-group study involving 100 children (M/F: 38/62; mean age ± SD: 6.5 ± 2.7 years) who were diagnosed with CFC according to the Rome III Criteria. Children were randomly divided into 2 groups: 50 children received AFPFF (16.8 g daily) and 50 children received PEG+E (0.5 g/kg daily) for 8 weeks. Primary outcome measures were frequency of bowel movements, stool consistency, fecal incontinence, and improvement of other associated gastrointestinal symptoms. Safety was assessed with evaluation of clinical adverse effects and growth measurements.\n\n\nRESULTS\nCompliance rates were 72% for AFPFF and 96% for PEG+E. A significant improvement of constipation was seen in both groups. 
After 8 weeks, 77.8% of children treated with AFPFF and 83% of children treated with PEG+E had improved (P = .788). Neither PEG+E nor AFPFF caused any clinically significant side effects during the entire course of the study period.\n\n\nCONCLUSIONS\nIn this randomized study, we did not find any significant difference between the efficacy of AFPFF and PEG+E in the treatment of children with CFC. Both medications were proved to be safe for CFC treatment, but PEG+E was better accepted by children.", "title": "" }, { "docid": "61096a0d1e94bb83f7bd067b06d69edd", "text": "A main puzzle of deep neural networks (DNNs) revolves around the apparent absence of “overfitting”, defined in this paper as follows: the expected error does not get worse when increasing the number of neurons or of iterations of gradient descent. This is surprising because of the large capacity demonstrated by DNNs to fit randomly labeled data and the absence of explicit regularization. Recent results by Srebro et al. provide a satisfying solution of the puzzle for linear networks used in binary classification. They prove that minimization of loss functions such as the logistic, the cross-entropy and the exp-loss yields asymptotic, “slow” convergence to the maximum margin solution for linearly separable datasets, independently of the initial conditions. Here we prove a similar result for nonlinear multilayer DNNs near zero minima of the empirical loss. The result holds for exponential-type losses but not for the square loss. In particular, we prove that the normalized weight matrix at each layer of a deep network converges to a minimum norm solution (in the separable case). Our analysis of the dynamical system corresponding to gradient descent of a multilayer network suggests a simple criterion for predicting the generalization performance of different zero minimizers of the empirical loss. This material is based upon work supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF-1231216.", "title": "" }, { "docid": "76ede41b63f6c960729228c505026851", "text": "Although the hip musculature is found to be very important in connecting the core to the lower extremities and in transferring forces from and to the core, it is proposed to leave the hip musculature out of consideration when talking about the concept of core stability. A low level of co-contraction of the trunk muscles is important for core stability. It provides a level of stiffness, which gives sufficient stability against minor perturbations. Next to this stiffness, direction-specific muscle reflex responses are also important in providing core stability, particularly when encountering sudden perturbations. It appears that most trunk muscles, both the local and global stabilization system, must work coherently to achieve core stability. The contributions of the various trunk muscles depend on the task being performed. In the search for a precise balance between the amount of stability and mobility, the role of sensory-motor control is much more important than the role of strength or endurance of the trunk muscles. The CNS creates a stable foundation for movement of the extremities through co-contraction of particular muscles. 
Appropriate muscle recruitment and timing is extremely important in providing core stability. No clear evidence has been found for a positive relationship between core stability and physical performance and more research in this area is needed. On the other hand, with respect to the relationship between core stability and injury, several studies have found an association between a decreased stability and a higher risk of sustaining a low back or knee injury. Subjects with such injuries have been shown to demonstrate impaired postural control, delayed muscle reflex responses following sudden trunk unloading and abnormal trunk muscle recruitment patterns. In addition, various relationships have been demonstrated between core stability, balance performance and activation characteristics of the trunk muscles. Most importantly, a significant correlation was found between poor balance performance in a sitting balance task and delayed firing of the trunk muscles during sudden perturbation. It was suggested that both phenomena are caused by proprioceptive deficits. The importance of sensory-motor control has implications for the development of measurement and training protocols. It has been shown that challenging propriocepsis during training activities, for example, by making use of unstable surfaces, leads to increased demands on trunk muscles, thereby improving core stability and balance. Various tests to directly or indirectly measure neuromuscular control and coordination have been developed and are discussed in the present article. Sitting balance performance and trunk muscle response times may be good indicators of core stability. In light of this, it would be interesting to quantify core stability using a sitting balance task, for example by making use of accelerometry. Further research is required to develop training programmes and evaluation methods that are suitable for various target groups.", "title": "" }, { "docid": "ce32b34898427802abd4cc9c99eac0bc", "text": "A circular polarizer is a single layer or multi-layer structure that converts linearly polarized waves into circularly polarized ones and vice versa. In this communication, a simple method based on transmission line circuit theory is proposed to model and design circular polarizers. This technique is more flexible than those previously presented in the way that it permits to design polarizers with the desired spacing between layers, while obtaining surfaces that may be easier to fabricate and less sensitive to fabrication errors. As an illustrating example, a modified version of the meander-line polarizer being twice as thin as its conventional counterpart is designed. Then, both polarizers are fabricated and measured. Results are shown and compared for normal and oblique incidence angles in the planes φ = 0° and φ = 90°.", "title": "" }, { "docid": "9504571e66ea9071c6c227f61dfba98f", "text": "Recent research has shown that although Reinforcement Learning (RL) can benefit from expert demonstration, it usually takes considerable efforts to obtain enough demonstration. The efforts prevent training decent RL agents with expert demonstration in practice. In this work, we propose Active Reinforcement Learning with Demonstration (ARLD), a new framework to streamline RL in terms of demonstration efforts by allowing the RL agent to query for demonstration actively during training. 
Under the framework, we propose Active Deep Q-Network, a novel query strategy which adapts to the dynamically-changing distributions during the RL training process by estimating the uncertainty of recent states. The expert demonstration data within Active DQN are then utilized by optimizing supervised max-margin loss in addition to temporal difference loss within usual DQN training. We propose two methods of estimating the uncertainty based on two state-of-the-art DQN models, namely the divergence of bootstrapped DQN and the variance of noisy DQN. The empirical results validate that both methods not only learn faster than other passive expert demonstration methods with the same amount of demonstration and but also reach super-expert level of performance across four different tasks.", "title": "" }, { "docid": "1c9eb6b002b36e2607cc63e08151ee65", "text": "Qualitative trend analysis (QTA) is a process-history-based data-driven technique that works by extracting important features (trends) from the measured signals and evaluating the trends. QTA has been widely used for process fault detection and diagnosis. Recently, Dash et al. (2001, 2003) presented an intervalhalving-based algorithm for off-line automatic trend extraction from a record of data, a fuzzy-logic based methodology for trend-matching and a fuzzy-rule-based framework for fault diagnosis (FD). In this article, an algorithm for on-line extraction of qualitative trends is proposed. A framework for on-line fault diagnosis using QTA also has been presented. Some of the issues addressed are (i) development of a robust and computationally efficient QTA-knowledge-base, (ii) fault detection, (iii) estimation of the fault occurrence time, (iv) on-line trend-matching and (v) updating the QTA-knowledge-base when a novel fault is diagnosed manually. Some results for FD of the Tennessee Eastman (TE) process using the developed framework are presented. Copyright c 2003 IFAC.", "title": "" }, { "docid": "490114176c31592da4cac2bcf75f31f3", "text": "In this letter, we present a compact ultrawideband (UWB) antenna printed on a 50.8-μm Kapton polyimide substrate. The antenna is fed by a linearly tapered coplanar waveguide (CPW) that provides smooth transitional impedance for improved matching. The proposed design is tuned to cover the 2.2-14.3-GHz frequency range that encompasses both the 2.45-GHz Industrial, Scientific, Medical (ISM) band and the standard 3.1-10.6-GHz UWB band. Furthermore, the antenna is compared to a conventional CPW-fed antenna to demonstrate the significance of the proposed design. A parametric study is first performed on the feed of the proposed design to achieve the desired impedance matching. Next, a prototype is fabricated; measurement results show good agreement with the simulated model. Moreover, the antenna demonstrates a very low susceptibility to performance degradation due to bending effects in terms of impedance matching and far-field radiation patterns, which makes it suitable for integration within modern flexible electronic devices.", "title": "" }, { "docid": "e43814f288e1c5a84fb9d26b46fc7e37", "text": "Achieving good performance in bytecoded language interpreters is difficult without sacrificing both simplicity and portability. 
This is due to the complexity of dynamic translation (\"just-in-time compilation\") of bytecodes into native code, which is the mechanism employed universally by high-performance interpreters. We demonstrate that a few simple techniques make it possible to create highly-portable dynamic translators that can attain as much as 70% of the performance of optimized C for certain numerical computations. Translators based on such techniques can offer respectable performance without sacrificing either the simplicity or portability of much slower \"pure\" bytecode interpreters.", "title": "" }, { "docid": "7419fa101c2471e225c976da196ed813", "text": "A 4×40 Gb/s collaborative digital CDR is implemented in 28nm CMOS. The CDR is capable of recovering a low jitter clock from a partially-equalized or un-equalized eye by using a phase detection scheme that inherently filters out ISI edges. The CDR uses split feedback that simultaneously allows wider bandwidth and lower recovered clock jitter. A shared frequency tracking is also introduced that results in lower periodic jitter. Combining these techniques, the CDR recovers a 10GHz clock from an eye containing 0.8UIpp DDJ and still achieves 1-10 MHz of tracking bandwidth while adding < 300fs of jitter. Per lane CDR occupies only 0.06 mm2 and consumes 175 mW.", "title": "" }, { "docid": "2f60e3d89966d4680796c1e4355de4bc", "text": "This letter addresses the problem of energy detection of an unknown signal over a multipath channel. It starts with the no-diversity case, and presents some alternative closed-form expressions for the probability of detection to those recently reported in the literature. Detection capability is boosted by implementing both square-law combining and square-law selection diversity schemes.", "title": "" }, { "docid": "956ffd90cc922e77632b8f9f79f42a98", "text": "Energy efficient actuators with adjustable stiffness: a review on AwAS, AwAS-II and CompACT VSA changing stiffness based on lever mechanism. Amir Jafari, Nikos Tsagarakis and Darwin G. Caldwell (2015), Industrial Robot: An International Journal, Vol. 42 Iss 3. Permanent link: http://dx.doi.org/10.1108/IR-12-2014-0433", "title": "" }, { "docid": "d7ec0f978b066686edf9b930492dae71", "text": "The association between MMORPG play (World of Warcraft) and psychological wellbeing was explored through a cross sectional, online questionnaire design testing the relationship between average hours playing per week and psychological wellbeing. Play motivation including achievement, social interaction and immersion as well as problematic use were tested as mediating variables. Participants (N = 565) completed online measures including demographics and play time, health, motivations to play and problematic use. Analysis revealed a negative correlation between playing time and psychological wellbeing. A Multiple Mediation Model showed the relationship specifically occurred where play was motivated by Immersion and/or where play was likely to have become problematic. No evidence of a direct effect of play on psychological wellbeing was found when taking these mediating pathways into account. Clinical and research implications are discussed.", "title": "" } ]
scidocsrr
728b67b9387e9c182a914936ed0c9f88
Tree-based Bayesian Mixture Model for Competing Risks
[ { "docid": "56db9e027eb9ca536a2ef8cec9b53beb", "text": "Multiple hypothesis testing is concerned with controlling the rate of false positives when testing several hypotheses simultaneously. One multiple hypothesis testing error measure is the false discovery rate (FDR), which is loosely defined to be the expected proportion of false positives among all significant hypotheses. The FDR is especially appropriate for exploratory analyses in which one is interested in finding several significant results among many tests. In this work, we introduce a modified version of the FDR called the “positive false discovery rate” (pFDR). We discuss the advantages and disadvantages of the pFDR and investigate its statistical properties. When assuming the test statistics follow a mixture distribution, we show that the pFDR can be written as a Bayesian posterior probability and can be connected to classification theory. These properties remain asymptotically true under fairly general conditions, even under certain forms of dependence. Also, a new quantity called the “q-value” is introduced and investigated, which is a natural “Bayesian posterior p-value,” or rather the pFDR analogue of the p-value.", "title": "" } ]
[ { "docid": "271f6291ab2c97b5e561cf06b9131f9d", "text": "Recently, substantial research effort has focused on how to apply CNNs or RNNs to better capture temporal patterns in videos, so as to improve the accuracy of video classification. In this paper, however, we show that temporal information, especially longer-term patterns, may not be necessary to achieve competitive results on common trimmed video classification datasets. We investigate the potential of a purely attention based local feature integration. Accounting for the characteristics of such features in video classification, we propose a local feature integration framework based on attention clusters, and introduce a shifting operation to capture more diverse signals. We carefully analyze and compare the effect of different attention mechanisms, cluster sizes, and the use of the shifting operation, and also investigate the combination of attention clusters for multimodal integration. We demonstrate the effectiveness of our framework on three real-world video classification datasets. Our model achieves competitive results across all of these. In particular, on the large-scale Kinetics dataset, our framework obtains an excellent single model accuracy of 79.4% in terms of the top-1 and 94.0% in terms of the top-5 accuracy on the validation set.", "title": "" }, { "docid": "89318aa5769daa08a67ae7327c458e8e", "text": "The present thesis is concerned with the development and evaluation (in terms of accuracy and utility) of systems using hand postures and hand gestures for enhanced Human-Computer Interaction (HCI). In our case, these systems are based on vision techniques, thus only requiring cameras, and no other specific sensors or devices. When dealing with hand movements, it is necessary to distinguish two aspects of these hand movements : the static aspect and the dynamic aspect. The static aspect is characterized by a pose or configuration of the hand in an image and is related to the Hand Posture Recognition (HPR) problem. The dynamic aspect is defined either by the trajectory of the hand, or by a series of hand postures in a sequence of images. This second aspect is related to the Hand Gesture Recognition (HGR) task. Given the recognized lack of common evaluation databases in the HGR field, a first contribution of this thesis was the collection and public distribution of two databases, containing both oneand two-handed gestures, which part of the results reported here will be based upon. On these databases, we compare two state-of-the-art models for the task of HGR. As a second contribution, we propose a HPR technique based on a new feature extraction. This method has the advantage of being faster than conventional methods while yielding good performances. In addition, we provide comparison results of this method with other state-of-the-art technique. Finally, the most important contribution of this thesis lies in the thorough study of the state-of-the-art not only in HGR and HPR but also more generally in the field of HCI. The first chapter of the thesis provides an extended study of the state-of-the-art. The second chapter of this thesis contributes to HPR. We propose to apply for HPR a technique employed with success for face detection. This method is based on the Modified Census Transform (MCT) to extract relevant features in images. We evaluate this technique on an existing benchmark database and provide comparison results with other state-of-the-art approaches. The third chapter is related to HGR. 
In this chapter we describe the first recorded database, containing both oneand two-handed gestures in the 3D space. We propose to compare two models used with success in HGR, namely Hidden Markov Models (HMM) and Input-Output Hidden Markov Model (IOHMM). The fourth chapter is also focused on HGR but more precisely on two-handed gesture recognition. For that purpose, a second database has been recorded using two cameras. The goal of these gestures is to manipulate virtual objects on a screen. We propose to investigate on this second database the state-of-the-art sequence processing techniques we used in the previous chapter. We then discuss the results obtained using different features, and using images of one or two cameras. In conclusion, we propose a method for HPR based on new feature extraction. For HGR, we provide two databases and comparison results of two major sequence processing techniques. Finally, we present a complete survey on recent state-of-the-art techniques for both HPR and HGR. We also present some possible applications of these techniques, applied to two-handed gesture interaction. We hope this research will open new directions in the field of hand posture and gesture recognition.", "title": "" }, { "docid": "4f323f6591079882eed52a1549f6e66a", "text": "General Video Game Artificial Intelligence is a general game playing framework for Artificial General Intelligence research in the video-games domain. In this paper, we propose for the first time a screen capture learning agent for General Video Game AI framework. A Deep Q-Network algorithm was applied and improved to develop an agent capable of learning to play different games in the framework. After testing this algorithm using various games of different categories and difficulty levels, the results suggest that our proposed screen capture learning agent has the potential to learn many different games using only a single learning algorithm.", "title": "" }, { "docid": "5b84008df77e2ff8929cd759ae92de7d", "text": "Purpose – Organizations invest in enterprise systems (ESs) with an expectation to share digital information from disparate sources to improve organizational effectiveness. This study aims to examine how organizations realize digital business strategies using an ES. It does so by evaluating the ES data support activities for knowledge creation, particularly how ES data are transformed into corporate knowledge in relevance to business strategies sought. Further, how this knowledge leads to realization of the business benefits. The linkage between establishing digital business strategy, utilization of ES data in decision-making processes, and realized or unrealized benefits provides the reason for this study. Design/methodology/approach – This study develops and utilizes a transformational model of how ES data are transformed into knowledge and results to evaluate the role of digital business strategies in achieving benefits using an ES. Semi-structured interviews are first conducted with ES vendors, consultants and IT research firms to understand the process of ES data transformation for realizing business strategies from their perspective. This is followed by three in-depth cases (two large and one medium-sized organization) who have implemented ESs. The empirical data are analyzed using the condensation approach. This method condenses the data into multiple groups according to pre-defined categories, which follow the scope of the research questions. 
Findings – The key findings emphasize that strategic benefit realization from an ES implementation is a holistic process that not only includes the essential data and technology factors, but also includes factors such as digital business strategy deployment, people and process management, and skills and competency development. Although many companies are mature with their ES implementation, these firms have only recently started aligning their ES capabilities with digital business strategies correlating data, decisions, and actions to maximize business value from their ES investment. Research limitations/implications – The findings reflect the views of two large and one mediumsized organization in the manufacturing sector. Although the evidence of the benefit realization process success and its results is more prominent in larger organizations than medium-sized, it may not be generalized that smaller firms cannot achieve these results. Exploration of these aspects in smaller firms or a different industry sector such as retail/service would be of value. Practical implications – The paper highlights the importance of tools and practices for accessing relevant information through an integrated ES so that competent decisions can be established towards achieving digital business strategies, and optimizing organizational performance. Knowledge is a key factor in this process. Originality/value – The paper evaluates a holistic framework for utilization of ES data in realizing digital business strategies. Thus, it develops an enhanced transformational cycle model for ES data transformation into knowledge and results, which maintains to build up the transformational process success in the long term.", "title": "" }, { "docid": "301fc0a18bec8128165ec73e15e66eb1", "text": "data structure queries (A). Some queries check properties of abstract data struct [11][131] such as stacks, hash tables, trees, and so on. These queries are not domain because the data structures can hold data of any domain. These queries are also differ the programming construct queries, because they check the constraints of well-defined a data structures. For example, a query about a binary tree may find the number of its nod have only one child. On the other hand, programming construct queries usually span di data structures. Abstract data structure queries can usually be expressed as class invar could be packaged with the class that implements an ADT. However, the queries that p information rather than detect violations are best answered by dynamic queries. For ex monitoring B+ trees using queries may indicate whether this data structure is efficient f underlying problem. Program construct queries (P). Program construct queries verify object relationships that related to the program implementation and not directly to the problem domain. Such q verify and visualize groups of objects that have to conform to some constraints because lower level of program design and implementation. For example, in a graphical user int implementation, every window object has a parent window, and this window referenc children widgets through the widget_collection collection (section 5.2.2). Such construct is n", "title": "" }, { "docid": "32775ba6d1a26274eaa6ce92513d9850", "text": "Data reduction plays an important role in machine learning and pattern recognition with a high-dimensional data. In real-world applications data usually exists with hybrid formats, and a unified data reducing technique for hybrid data is desirable. 
In this paper, an information measure is proposed to computing discernibility power of a crisp equivalence relation or a fuzzy one, which is the key concept in classical rough set model and fuzzy-rough set model. Based on the information measure, a general definition of significance of nominal, numeric and fuzzy attributes is presented. We redefine the independence of hybrid attribute subset, reduct, and relative reduct. Then two greedy reduction algorithms for unsupervised and supervised data dimensionality reduction based on the proposed information measure are constructed. Experiments show the reducts found by the proposed algorithms get a better performance compared with classical rough set approaches. 2005 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "827c9d65c2c3a2a39d07c9df7a21cfe2", "text": "A worldwide movement in advanced manufacturing countries is seeking to reinvigorate (and revolutionize) the industrial and manufacturing core competencies with the use of the latest advances in information and communications technology. Visual computing plays an important role as the \"glue factor\" in complete solutions. This article positions visual computing in its intrinsic crucial role for Industrie 4.0 and provides a general, broad overview and points out specific directions and scenarios for future research.", "title": "" }, { "docid": "accad42ca98cd758fd1132e51942cba8", "text": "The accuracy of face alignment affects the performance of a face recognition system. Since face alignment is usually conducted using eye positions, an accurate eye localization algorithm is therefore essential for accurate face recognition. In this paper, we first study the impact of eye locations on face recognition accuracy, and then introduce an automatic technique for eye detection. The performance of our automatic eye detection technique is subsequently validated using FRGC 1.0 database. The validation shows that our eye detector has an overall 94.5% eye detection rate, with the detected eyes very close to the manually provided eye positions. In addition, the face recognition performance based on the automatic eye detection is shown to be comparable to that of using manually given eye positions.", "title": "" }, { "docid": "ee0a1a7c7a8f2c42969b2beb09d7f94e", "text": "Currently, electric vehicle technology is becoming more and more mature. Although the anti-lock braking system (ABS) has been commonly applied, most electric vehicles (EVs) still use traditional hydraulic-based disc braking, which has the drawbacks that vehicle wheels are is easy to skid in the rainy day, and easy to be abraded during emergency brake. As a novel method of braking, regenerative braking has the advantages of compact structure, sensitive response, reliability and controllable braking distance. In this research task, a regenerative driving and braking control system for EVs with satisfactory braking performance is proposed. When braking, a motor is converted into a generator-the acquired energy can be used to generate reverse magnetic braking torque with fast response. On this basis, an anti-lock braking controller is realize. A PID controller is also designed to drive the motor and a fuzzy slip ratio controller is designed and used to obtain the optimal slip ratio. Finally, real-world experiments are conducted to verify the proposed method.", "title": "" }, { "docid": "b146013415b3ca19eee9ffef15155fe4", "text": "48 nm pitch dual damascene interconnects are patterned and filled with ruthenium. 
Ru interconnect has comparable high yield for line and via macros. Electrical results show minimal impact for via resistance and around 2 times higher line resistance. Resistivity and cross section area of Ru interconnects are measured by temperature coefficient of resistivity method and the area was verified by TEM. Reliability results show non-failure in electromigration and longer time dependent dielectric breakdown. Based on the data collected, Ru could be a metallization contender at linewidth of 16 nm and below.", "title": "" }, { "docid": "96ee31337d66b8ccd3876c1575f9b10c", "text": "Although different modeling techniques have been proposed during the last 300 years, the differential equation formalism proposed by Newton and Leibniz has been the tool of choice for modeling and problem solving Taylor (1996); Wainer (2009). Differential equations provide a formal mathematical method (sometimes also called an analytical method) for studying the entity of interest. Computational methods based on differential equations could not be easily applied in studying human-made dynamic systems (e.g., traffic controllers, robotic arms, automated factories, production plants, computer networks, VLSI circuits). These systems are usually referred to as discrete event systems because their states do not change continuously but, rather, because of the occurrence of events. This makes them asynchronous, inherently concurrent, and highly nonlinear, rendering their modeling and simulation different from that used in traditional approaches. In order to improve the model definition for this class of systems, a number of techniques were introduced, including Petri Nets, Finite State Machines, min-max algebra, Timed Automata, etc. Banks & Nicol. (2005); Cassandras (1993); Cellier & Kofman. (2006); Fishwick (1995); Law & Kelton (2000); Toffoli & Margolus. (1987). Wireless Sensor Network (WSN) is a discrete event system which consists of a network of sensor nodes equipped with sensing, computing, power, and communication modules to monitor certain phenomenon such as environmental data or object tracking Zhao & Guibas (2004). Emerging applications of wireless sensor networks are comprised of asset and warehouse *madani@ciit.net.pk †jawhaikaz@ciit.net.pk ‡mahlknecht@ict.tuwien.ac.at 1", "title": "" }, { "docid": "b91c387335e7f63b720525d0ee28dbd6", "text": "Road condition acquisition and assessment are the key to guarantee their permanent availability. In order to maintain a country's whole road network, millions of high-resolution images have to be analyzed annually. Currently, this requires cost and time excessive manual labor. We aim to automate this process to a high degree by applying deep neural networks. Such networks need a lot of data to be trained successfully, which are not publicly available at the moment. In this paper, we present the GAPs dataset, which is the first freely available pavement distress dataset of a size, large enough to train high-performing deep neural networks. It provides high quality images, recorded by a standardized process fulfilling German federal regulations, and detailed distress annotations. For the first time, this enables a fair comparison of research in this field. 
Furthermore, we present a first evaluation of the state of the art in pavement distress detection and an analysis of the effectiveness of state of the art regularization techniques on this dataset.", "title": "" }, { "docid": "8c95392ab3cc23a7aa4f621f474d27ba", "text": "Designing agile locomotion for quadruped robots often requires extensive expertise and tedious manual tuning. In this paper, we present a system to automate this process by leveraging deep reinforcement learning techniques. Our system can learn quadruped locomotion from scratch using simple reward signals. In addition, users can provide an open loop reference to guide the learning process when more control over the learned gait is needed. The control policies are learned in a physics simulator and then deployed on real robots. In robotics, policies trained in simulation often do not transfer to the real world. We narrow this reality gap by improving the physics simulator and learning robust policies. We improve the simulation using system identification, developing an accurate actuator model and simulating latency. We learn robust controllers by randomizing the physical environments, adding perturbations and designing a compact observation space. We evaluate our system on two agile locomotion gaits: trotting and galloping. After learning in simulation, a quadruped robot can successfully perform both gaits in the real world.", "title": "" }, { "docid": "29c91c8d6f7faed5d23126482a2f553b", "text": "In this article, we present an account of the state of the art in acoustic scene classification (ASC), the task of classifying environments from the sounds they produce. Starting from a historical review of previous research in this area, we define a general framework for ASC and present different implementations of its components. We then describe a range of different algorithms submitted for a data challenge that was held to provide a general and fair benchmark for ASC techniques. The data set recorded for this purpose is presented along with the performance metrics that are used to evaluate the algorithms and statistical significance tests to compare the submitted methods.", "title": "" }, { "docid": "415c43b39543f2889eca11cbc3669784", "text": "The fabrication of electronic devices based on organic materials, known as ’printed electronics’, is an emerging technology due to its unprecedented advantages involving fl exibility, light weight, and portability, which will ultimately lead to future ubiquitous applications. [ 1 ] The solution processability of semiconducting and metallic polymers enables the cost-effective fabrication of optoelectronic devices via high-throughput printing techniques. [ 2 ] These techniques require high-performance fl exible and transparent electrodes (FTEs) fabricated on plastic substrates, but currently, they depend on indium tin oxide (ITO) coated on plastic substrates. However, its intrinsic mechanical brittleness and inferior physical properties arising from lowtemperature ( T ) processing below the melting T of the plastic substrates (i.e., typically below 150 °C) have increased the demand for alternative FTE materials. [ 3 ]", "title": "" }, { "docid": "dd9f6ef9eafdef8b29c566bcea8ded57", "text": "A recent trend in saliency algorithm development is large-scale benchmarking and algorithm ranking with ground truth provided by datasets of human fixations. 
In order to accommodate the strong bias humans have toward central fixations, it is common to replace traditional ROC metrics with a shuffled ROC metric which uses randomly sampled fixations from other images in the database as the negative set. However, the shuffled ROC introduces a number of problematic elements, including a fundamental assumption that it is possible to separate visual salience and image spatial arrangement. We argue that it is more informative to directly measure the effect of spatial bias on algorithm performance rather than try to correct for it. To capture and quantify these known sources of bias, we propose a novel metric for measuring saliency algorithm performance: the spatially binned ROC (spROC). This metric provides direct in-sight into the spatial biases of a saliency algorithm without sacrificing the intuitive raw performance evaluation of traditional ROC measurements. By quantitatively measuring the bias in saliency algorithms, researchers will be better equipped to select and optimize the most appropriate algorithm for a given task. We use a baseline measure of inherent algorithm bias to show that Adaptive Whitening Saliency (AWS) [14], Attention by Information Maximization (AIM) [8], and Dynamic Visual Attention (DVA) [20] provide the least spatially biased results, suiting them for tasks in which there is no information about the underlying spatial bias of the stimuli, whereas algorithms such as Graph Based Visual Saliency (GBVS) [18] and Context-Aware Saliency (CAS) [15] have a significant inherent central bias.", "title": "" }, { "docid": "00bcce935ca2e4d443941b7e90d644c9", "text": "Nairovirus, one of five bunyaviral genera, includes seven species. Genomic sequence information is limited for members of the Dera Ghazi Khan, Hughes, Qalyub, Sakhalin, and Thiafora nairovirus species. We used next-generation sequencing and historical virus-culture samples to determine 14 complete and nine coding-complete nairoviral genome sequences to further characterize these species. Previously unsequenced viruses include Abu Mina, Clo Mor, Great Saltee, Hughes, Raza, Sakhalin, Soldado, and Tillamook viruses. In addition, we present genomic sequence information on additional isolates of previously sequenced Avalon, Dugbe, Sapphire II, and Zirqa viruses. Finally, we identify Tunis virus, previously thought to be a phlebovirus, as an isolate of Abu Hammad virus. Phylogenetic analyses indicate the need for reassignment of Sapphire II virus to Dera Ghazi Khan nairovirus and reassignment of Hazara, Tofla, and Nairobi sheep disease viruses to novel species. We also propose new species for the Kasokero group (Kasokero, Leopards Hill, Yogue viruses), the Ketarah group (Gossas, Issyk-kul, Keterah/soft tick viruses) and the Burana group (Wēnzhōu tick virus, Huángpí tick virus 1, Tǎchéng tick virus 1). Our analyses emphasize the sister relationship of nairoviruses and arenaviruses, and indicate that several nairo-like viruses (Shāyáng spider virus 1, Xīnzhōu spider virus, Sānxiá water strider virus 1, South Bay virus, Wǔhàn millipede virus 2) require establishment of novel genera in a larger nairovirus-arenavirus supergroup.", "title": "" }, { "docid": "5fe036906302ab4131c7f9afc662df3f", "text": "Plant peptide hormones play an important role in regulating plant developmental programs via cell-to-cell communication in a non-cell autonomous manner. 
To characterize the biological relevance of C-TERMINALLY ENCODED PEPTIDE (CEP) genes in rice, we performed a genome-wide search against public databases using a bioinformatics approach and identified six additional CEP members. Expression analysis revealed a spatial-temporal pattern of OsCEP6.1 gene in different tissues and at different developmental stages of panicle. Interestingly, the expression level of the OsCEP6.1 was also significantly up-regulated by exogenous cytokinin. Application of a chemically synthesized 15-amino acid OsCEP6.1 peptide showed that OsCEP6.1 had a negative role in regulating root and seedling growth, which was further confirmed by transgenic lines. Furthermore, the constitutive expression of OsCEP6.1 was sufficient to lead to panicle architecture and grain size variations. Scanning electron microscopy analysis revealed that the phenotypic variation of OsCEP6.1 overexpression lines resulted from decreased cell size but not reduced cell number. Moreover, starch accumulation was not significantly affected. Taken together, these data suggest that the OsCEP6.1 peptide might be involved in regulating the development of panicles and grains in rice.", "title": "" }, { "docid": "af3e8e26ec6f56a8cd40e731894f5993", "text": "Probiotic bacteria are sold mainly in fermented foods, and dairy products play a predominant role as carriers of probiotics. These foods are well suited to promoting the positive health image of probiotics for several reasons: 1) fermented foods, and dairy products in particular, already have a positive health image; 2) consumers are familiar with the fact that fermented foods contain living microorganisms (bacteria); and 3) probiotics used as starter organisms combine the positive images of fermentation and probiotic cultures. When probiotics are added to fermented foods, several factors must be considered that may influence the ability of the probiotics to survive in the product and become active when entering the consumer's gastrointestinal tract. These factors include 1) the physiologic state of the probiotic organisms added (whether the cells are from the logarithmic or the stationary growth phase), 2) the physical conditions of product storage (eg, temperature), 3) the chemical composition of the product to which the probiotics are added (eg, acidity, available carbohydrate content, nitrogen sources, mineral content, water activity, and oxygen content), and 4) possible interactions of the probiotics with the starter cultures (eg, bacteriocin production, antagonism, and synergism). The interactions of probiotics with either the food matrix or the starter culture may be even more intensive when probiotics are used as a component of the starter culture. Some of these aspects are discussed in this article, with an emphasis on dairy products such as milk, yogurt, and cheese.", "title": "" }, { "docid": "6c31a285d3548bfb6cbe9ea72f0d5192", "text": "PURPOSE\nTo compare the effects of a 10-week training program with two different exercises -- traditional hamstring curl (HC) and Nordic hamstrings (NH), a partner exercise focusing the eccentric phase -- on muscle strength among male soccer players.\n\n\nMETHODS\nSubjects were 21 well-trained players who were randomized to NH training (n = 11) or HC training (n = 10). The programs were similar, with a gradual increase in the number of repetitions from two sets of six reps to three sets of eight to 12 reps over 4 weeks, and then increasing load during the final 6 weeks of training. 
Strength was measured as maximal torque on a Cybex dynamometer before and after the training period.\n\n\nRESULTS\nIn the NH group, there was an 11% increase in eccentric hamstring torque measured at 60 degrees s(-1), as well as a 7% increase in isometric hamstring strength at 90 degrees, 60 degrees and 30 degrees of knee flexion. Since there was no effect on concentric quadriceps strength, there was a significant increase in the hamstrings:quadriceps ratio from 0.89 +/- 0.12 to 0.98 +/- 0.17 (11%) in the NH group. No changes were observed in the HC group.\n\n\nCONCLUSION\nNH training for 10 weeks more effectively develops maximal eccentric hamstring strength in well-trained soccer players than a comparable program based on traditional HC.", "title": "" } ]
scidocsrr
551ac2dfdd05c6885d65f68b4039181b
Background subtraction and human detection in outdoor videos using fuzzy logic
[ { "docid": "af752d0de962449acd9a22608bd7baba", "text": "Ї R is a real time visual surveillance system for detecting and tracking multiple people and monitoring their activities in an outdoor environment. It operates on monocular gray-scale video imagery, or on video imagery from an infrared camera. ‡ R employs a combination of shape analysis and tracking to locate people and their parts (head, hands, feet, torso) and to create models of people's appearance so that they can be tracked through interactions such as occlusions. It can determine whether a foreground region contains multiple people and can segment the region into its constituent people and track them. ‡ R can also determine whether people are carrying objects, and can segment objects from their silhouettes, and construct appearance models for them so they can be identified in subsequent frames. ‡ R can recognize events between people and objects, such as depositing an object, exchanging bags, or removing an object. It runs at 25 Hz for 320Â240 resolution images on a 400 Mhz dual-Pentium II PC.", "title": "" } ]
[ { "docid": "e269585a133a138b2ba11c7fb2d025ec", "text": "Concept and design of a low cost two-axes MEMS scanning mirror with an aperture size of 7 millimetres for a compact automotive LIDAR sensor is presented. Hermetic vacuum encapsulation and stacked vertical comb drives are the key features to enable a large tilt angle of 15 degrees. A tripod MEMS mirror design provides an advantageous ratio of mirror aperture and chip size and allows circular laser scanning.", "title": "" }, { "docid": "e5abde9ecd6e50c60306411fc011db2d", "text": "We present a user study for two different automatic strategies that simplify text content for people with dyslexia. The strategies considered are the standard one (replacing a complex word with the most simpler synonym) and a new one that presents several synonyms for a complex word if the user requests them. We compare texts transformed by both strategies with the original text and to a gold standard manually built. The study was undertook by 96 participants, 47 with dyslexia plus a control group of 49 people without dyslexia. To show device independence, for the new strategy we used three different reading devices. Overall, participants with dyslexia found texts presented with the new strategy significantly more readable and comprehensible. To the best of our knowledge, this is the largest user study of its kind.", "title": "" }, { "docid": "b12947614198d639aef0d3a26b83a215", "text": "In the era of mobile Internet, mobile operators are facing pressure on ever-increasing capital expenditures and operating expenses with much less growth of income. Cloud Radio Access Network (C-RAN) is expected to be a candidate of next generation access network techniques that can solve operators' puzzle. In this article, on the basis of a general survey of C-RAN, we present a novel logical structure of C-RAN that consists of a physical plane, a control plane, and a service plane. Compared to traditional architecture, the proposed C-RAN architecture emphasizes the notion of service cloud, service-oriented resource scheduling and management, thus it facilitates the utilization of new communication and computer techniques. With the extensive computation resource offered by the cloud platform, a coordinated user scheduling algorithm and parallel optimum precoding scheme are proposed, which can achieve better performance. The proposed scheme opens another door to design new algorithms matching well with C-RAN architecture, instead of only migrating existing algorithms from traditional architecture to C-RAN.", "title": "" }, { "docid": "e485aca373cf4543e1a8eeadfa0e6772", "text": "Identifying peer-review helpfulness is an important task for improving the quality of feedback that students receive from their peers. As a first step towards enhancing existing peerreview systems with new functionality based on helpfulness detection, we examine whether standard product review analysis techniques also apply to our new context of peer reviews. In addition, we investigate the utility of incorporating additional specialized features tailored to peer review. 
Our preliminary results show that the structural features, review unigrams and meta-data combined are useful in modeling the helpfulness of both peer reviews and product reviews, while peer-review specific auxiliary features can further improve helpfulness prediction.", "title": "" }, { "docid": "598dbf48c54bcea6e74d85a8393dada1", "text": "With the fast development of social media, the information overload problem becomes increasingly severe and recommender systems play an important role in helping online users find relevant information by suggesting information of potential interests. Social activities for online users produce abundant social relations. Social relations provide an independent source for recommendation, presenting both opportunities and challenges for traditional recommender systems. Users are likely to seek suggestions from both their local friends and users with high global reputations, motivating us to exploit social relations from local and global perspectives for online recommender systems in this paper. We develop approaches to capture local and global social relations, and propose a novel framework LOCABAL taking advantage of both local and global social context for recommendation. Empirical results on real-world datasets demonstrate the effectiveness of our proposed framework and further experiments are conducted to understand how local and global social context work for the proposed framework.", "title": "" }, { "docid": "e81f197acf7e3b7590d93481a4a4b5b3", "text": "Naive T cells have long been regarded as a developmentally synchronized and fairly homogeneous and quiescent cell population, the size of which depends on age, thymic output and prior infections. However, there is increasing evidence that naive T cells are heterogeneous in phenotype, function, dynamics and differentiation status. Current strategies to identify naive T cells should be adjusted to take this heterogeneity into account. Here, we provide an integrated, revised view of the naive T cell compartment and discuss its implications for healthy ageing, neonatal immunity and T cell reconstitution following haematopoietic stem cell transplantation. Evidence is increasing that naive T cells are heterogeneous in phenotype, function, dynamics and differentiation status. Here, van den Broek et al. provide a revised view of the naive T cell compartment and then discuss the implications for ageing, neonatal immunity and T cell reconstitution following haematopoietic stem cell transplantation.", "title": "" }, { "docid": "4fa0a60eb5ae8bd84e4a88c6eada4af4", "text": "Image retrieval can be considered as a classification problem. Classification is usually based on some image features. In the feature extraction image segmentation is commonly used. In this paper we introduce a new feature for image classification for retrieval purposes. This feature is based on the gray level histogram of the image. The feature is called binary histogram and it can be used for image classification without segmentation. Binary histogram can be used for image retrieval as such by using similarity calculation. Another approach is to extract some features from it. In both cases indexing and retrieval do not require much computational time. We test the similarity measurement and the feature-based retrieval by making classification experiments. 
The proposed features are tested using a set of paper defect images, which are acquired from an industrial imaging application.", "title": "" }, { "docid": "b1f348ff63eaa97f6eeda5fcd81330a9", "text": "The recent expansion of the cloud computing paradigm has motivated educators to include cloud-related topics in computer science and computer engineering curricula. While programming and algorithm topics have been covered in different undergraduate and graduate courses, cloud architecture/system topics are still not usually studied in academic contexts. But design, deployment and management of datacenters, virtualization technologies for cloud, cloud management tools and similar issues should be addressed in current computer science and computer engineering programs. This work presents our approach and experiences in designing and implementing a curricular module covering all these topics. In this approach the utilization of a simulation tool, CloudSim, is essential to allow the students a practical approximation to the course contents.", "title": "" }, { "docid": "4ca4ccd53064c7a9189fef3e801612a0", "text": "workflows, data warehousing, business intelligence Process design and automation technologies are being increasingly used by both traditional and newly-formed, Internet-based enterprises in order to improve the quality and efficiency of their administrative and production processes, to manage e-commerce transactions, and to rapidly and reliably deliver services to businesses and individual customers.", "title": "" }, { "docid": "1ebaa8de358a160024c07470dd48943a", "text": "This study introduces and evaluates the robustness of different volumetric, sentiment, and social network approaches to predict the elections in three Asian countries – Malaysia, India, and Pakistan from Twitter posts. We find that predictive power of social media performs well for India and Pakistan but is not effective for Malaysia. Overall, we find that it is useful to consider the recency of Twitter posts while using it to predict a real outcome, such as an election result. Sentiment information mined using machine learning models was the most accurate predictor of election outcomes. Social network information is stable despite sudden surges in political discussions, for e.g. around electionsrelated news events. Methods combining sentiment and volume information, or sentiment and social network information, are effective at predicting smaller vote shares, for e.g. vote shares in the case of independent candidates and regional parties. We conclude with a detailed discussion on the caveats of social media analysis for predicting real-world outcomes and recommendations for future work. ARTICLE HISTORY Received 1 August 2017 Revised 12 February 2018 Accepted 12 March 2018", "title": "" }, { "docid": "7fe86801de04054ffca61eb1b3334872", "text": "Images rendered with traditional computer graphics techniques, such as scanline rendering and ray tracing, appear focused at all depths. However, there are advantages to having blur, such as adding realism to a scene or drawing attention to a particular place in a scene. In this paper we describe the optics underlying camera models that have been used in computer graphics, and present object space techniques for rendering with those models. In our companion paper [3], we survey image space techniques to simulate these models. 
These techniques vary in both speed and accuracy.", "title": "" }, { "docid": "6df55b88150f5d52aa30ab770f464546", "text": "OBJECTIVES\nThe objective of this study has been to review the incidence of biological and technical complications in case of tooth-implant-supported fixed partial denture (FPD) treatments on the basis of survival data regarding clinical cases.\n\n\nMATERIAL AND METHODS\nBased on the treatment documentations of a Bundeswehr dental clinic (Cologne-Wahn German Air Force Garrison), the medical charts of 83 patients with tooth-implant-supported FPDs were completely recorded. The median follow-up time was 4.73 (time range: 2.2-8.3) years. In the process, survival curves according to Kaplan and Meier were applied in addition to frequency counts.\n\n\nRESULTS\nA total of 84 tooth-implant (83 patients) connected prostheses were followed (132 abutment teeth, 142 implant abutments (Branemark, Straumann). FPDs: the time-dependent illustration reveals that after 5 years, as many as 10% of the tooth-implant-supported FPDs already had to be subjected to a technical modification (renewal (n=2), reintegration (n=4), veneer fracture (n=5), fracture of frame (n=2)). In contrast to non-rigid connection of teeth and implants, technical modification measures were rarely required in case of tooth-implant-supported FPDs with a rigid connection. There was no statistical difference between technical complications and the used implant system. Abutment teeth and implants: during the observation period, none of the functionally loaded implants (n=142) had to be removed. Three of the overall 132 abutment teeth were lost because of periodontal inflammation. The time-dependent illustration reveals, that after 5 years as many as 8% of the abutment teeth already required corresponding therapeutic measures (periodontal treatment (5%), filling therapy (2.5%), endodontic treatment (0.5%)). After as few as 3 years, the connection related complications of implant abutments (abutment or occlusal screw loosening, loss of cementation) already had to be corrected in approximately 8% of the cases. In the utilization period there was no screw or abutment fracture.\n\n\nCONCLUSION\nTechnical complications of implant-supported FPDs are dependent on the different bridge configurations. When using rigid functional connections, similarly favourable values will be achieved as in case of solely implant-supported FPDs. In this study other characteristics like different fixation systems (screwed vs. cemented) or various implant systems had no significant effect to the rate of technical complications.", "title": "" }, { "docid": "751bde322930a292e2ddc8ba06e24f17", "text": "Machine Learning has been a big success story during the AI resurgence. One particular stand out success relates to learning from a massive amount of data. In spite of early assertions of the unreasonable effectiveness of data, there is increasing recognition for utilizing knowledge whenever it is available or can be created purposefully. In this paper, we discuss the indispensable role of knowledge for deeper understanding of content where (i) large amounts of training data are unavailable, (ii) the objects to be recognized are complex, (e.g., implicit entities and highly subjective content), and (iii) applications need to use complementary or related data in multiple modalities/media. 
What brings us to the cusp of rapid progress is our ability to (a) create relevant and reliable knowledge and (b) carefully exploit knowledge to enhance ML/NLP techniques. Using diverse examples, we seek to foretell unprecedented progress in our ability for deeper understanding and exploitation of multimodal data and continued incorporation of knowledge in learning techniques.", "title": "" }, { "docid": "768a8cfff3f127a61f12139466911a94", "text": "The metabolism of NAD has emerged as a key regulator of cellular and organismal homeostasis. Being a major component of both bioenergetic and signaling pathways, the molecule is ideally suited to regulate metabolism and major cellular events. In humans, NAD is synthesized from vitamin B3 precursors, most prominently from nicotinamide, which is the degradation product of all NAD-dependent signaling reactions. The scope of NAD-mediated regulatory processes is wide including enzyme regulation, control of gene expression and health span, DNA repair, cell cycle regulation and calcium signaling. In these processes, nicotinamide is cleaved from NAD(+) and the remaining ADP-ribosyl moiety used to modify proteins (deacetylation by sirtuins or ADP-ribosylation) or to generate calcium-mobilizing agents such as cyclic ADP-ribose. This review will also emphasize the role of the intermediates in the NAD metabolome, their intra- and extra-cellular conversions and potential contributions to subcellular compartmentalization of NAD pools.", "title": "" }, { "docid": "42d27f1a6ad81e13c449a08a6ada34d6", "text": "Face detection of comic characters is a necessary step in most applications, such as comic character retrieval, automatic character classification and comic analysis. However, the existing methods were developed for simple cartoon images or small size comic datasets, and detection performance remains to be improved. In this paper, we propose a Faster R-CNN based method for face detection of comic characters. Our contribution is twofold. First, for the binary classification task of face detection, we empirically find that the sigmoid classifier shows a slightly better performance than the softmax classifier. Second, we build two comic datasets, JC2463 and AEC912, consisting of 3375 comic pages in total for characters face detection evaluation. Experimental results have demonstrated that the proposed method not only performs better than existing methods, but also works for comic images with different drawing styles.", "title": "" }, { "docid": "0c8fb6cc1d252429c7e1dc5b01c14910", "text": "We present a generative attribute controller (GAC), a novel functionality for generating or editing an image while intuitively controlling large variations of an attribute. This controller is based on a novel generative model called the conditional filtered generative adversarial network (CFGAN), which is an extension of the conventional conditional GAN (CGAN) that incorporates a filtering architecture into the generator input. Unlike the conventional CGAN, which represents an attribute directly using an observable variable (e.g., the binary indicator of attribute presence) so its controllability is restricted to attribute labeling (e.g., restricted to an ON or OFF control), the CFGAN has a filtering architecture that associates an attribute with a multi-dimensional latent variable, enabling latent variations of the attribute to be represented. 
We also define the filtering architecture and training scheme considering controllability, enabling the variations of the attribute to be intuitively controlled using typical controllers (radio buttons and slide bars). We evaluated our CFGAN on MNIST, CUB, and CelebA datasets and show that it enables large variations of an attribute to be not only represented but also intuitively controlled while retaining identity. We also show that the learned latent space has enough expressive power to conduct attribute transfer and attribute-based image retrieval.", "title": "" }, { "docid": "6ad711fa60e05c8fb08b6f1c2c3a87d9", "text": "An algorithm proposed by Dinic for finding maximum flows in networks and by Hopcroft and Karp for finding maximum bipartite matchings is applied to graph connectivity problems. It is shown that the algorithm requires 0(V<supscrpt>1/2</supscrpt>E) time to find a maximum set of node-disjoint paths in a graph, and 0(V<supscrpt>2/3</supscrpt>E) time to find a maximum set of edge disjoint paths. These bounds are tight. Thus the node connectivity of a graph may be tested in 0(V<supscrpt>5/2</supscrpt>E) time, and the edge connectivity of a graph may be tested in 0(V<supscrpt>5/3</supscrpt>E) time.", "title": "" }, { "docid": "0102e5661220268902544401dedf70fc", "text": "It was hypothesized that playfulness in adults relates positively to different indicators of subjective but also physical well-being. A sample of 255 adults completed subjective measures of playfulness along with self-ratings for different facets of well-being and the endorsement to enjoyable activities. Adult playfulness demonstrated robust positive relations with life satisfaction and an inclination to enjoyable activities and an active way of life. There were also minor positive relations with physical fitness. Leading an active way of life partially mediated the relation between playfulness and life satisfaction. The study provides further evidence on the contribution of adult playfulness to different aspects of well-being.", "title": "" }, { "docid": "b16bb73155af7f141127617a7e9fdde1", "text": "Organizing code into coherent programs and relating different programs to each other represents an underlying requirement for scaling genetic programming to more difficult task domains. Assuming a model in which policies are defined by teams of programs, in which team and program are represented using independent populations and coevolved, has previously been shown to support the development of variable sized teams. In this work, we generalize the approach to provide a complete framework for organizing multiple teams into arbitrarily deep/wide structures through a process of continuous evolution; hereafter the Tangled Program Graph (TPG). Benchmarking is conducted using a subset of 20 games from the Arcade Learning Environment (ALE), an Atari 2600 video game emulator. The games considered here correspond to those in which deep learning was unable to reach a threshold of play consistent with that of a human. Information provided to the learning agent is limited to that which a human would experience. That is, screen capture sensory input, Atari joystick actions, and game score. The performance of the proposed approach exceeds that of deep learning in 15 of the 20 games, with 7 of the 15 also exceeding that associated with a human level of competence. Moreover, in contrast to solutions from deep learning, solutions discovered by TPG are also very ‘sparse’. 
Rather than assuming that all of the state space contributes to every decision, each action in TPG is resolved following execution of a subset of an individual’s graph. This results in significantly lower computational requirements for model building than is presently the case for deep learning.", "title": "" }, { "docid": "78e4395a6bd6b4424813e20633d140b8", "text": "This paper introduces a high-speed CMOS comparator. The comparator consists of a differential input stage, two regenerative flip-flops, and an S-R latch. No offset cancellation is exploited, which reduces the power consumption as well as the die area and increases the comparison speed. An experimental version of the comparator has been integrated in a standard double-poly double-metal 1.5-μm n-well process with a die area of only 140 × 100 μm². This circuit, operating under a +2.5/−2.5-V power supply, performs comparison to a precision of 8 b with a symmetrical input dynamic range of 2.5 V (therefore ~0.5 LSB resolution is equal to ~4.9 mV).", "title": "" } ]
scidocsrr
2c554093795422c9e5d50673adcf88da
Information Retrieval as Statistical Translation
[ { "docid": "6eb9d8f22237bdc49570e219150d50b4", "text": "Researchers in both machine translation (e.g., Brown et al., 1990) and bilingual lexicography (e.g., Klavans and Tzoukermann, 1990) have recently become interested in studying parallel texts (also known as bilingual corpora), bodies of text such as the Canadian Hansards (parliamentary debates) which are available in multiple languages (such as French and English). Much of the current excitement surrounding parallel texts was initiated by Brown et al. (1990), who outline a self-organizing method for using these parallel texts to build a machine translation system.", "title": "" } ]
[ { "docid": "44bd9d0b66cb8d4f2c4590b4cb724765", "text": "AIM\nThis paper is a description of inductive and deductive content analysis.\n\n\nBACKGROUND\nContent analysis is a method that may be used with either qualitative or quantitative data and in an inductive or deductive way. Qualitative content analysis is commonly used in nursing studies but little has been published on the analysis process and many research books generally only provide a short description of this method.\n\n\nDISCUSSION\nWhen using content analysis, the aim was to build a model to describe the phenomenon in a conceptual form. Both inductive and deductive analysis processes are represented as three main phases: preparation, organizing and reporting. The preparation phase is similar in both approaches. The concepts are derived from the data in inductive content analysis. Deductive content analysis is used when the structure of analysis is operationalized on the basis of previous knowledge.\n\n\nCONCLUSION\nInductive content analysis is used in cases where there are no previous studies dealing with the phenomenon or when it is fragmented. A deductive approach is useful if the general aim was to test a previous theory in a different situation or to compare categories at different time periods.", "title": "" }, { "docid": "ce5c5d0d0cb988c96f0363cfeb9610d4", "text": "Due to deep automation, the configuration of many Cloud infrastructures is static and homogeneous, which, while easing administration, significantly decreases a potential attacker's uncertainty on a deployed Cloud-based service and hence increases the chance of the service being compromised. Moving-target defense (MTD) is a promising solution to the configuration staticity and homogeneity problem. This paper presents our findings on whether and to what extent MTD is effective in protecting a Cloud-based service with heterogeneous and dynamic attack surfaces - these attributes, which match the reality of current Cloud infrastructures, have not been investigated together in previous works on MTD in general network settings. We 1) formulate a Cloud-based service security model that incorporates Cloud-specific features such as VM migration/snapshotting and the diversity/compatibility of migration, 2) consider the accumulative effect of the attacker's intelligence on the target service's attack surface, 3) model the heterogeneity and dynamics of the service's attack surfaces, as defined by the (dynamic) probability of the service being compromised, as an S-shaped generalized logistic function, and 4) propose a probabilistic MTD service deployment strategy that exploits the dynamics and heterogeneity of attack surfaces for protecting the service against attackers. Through simulation, we identify the conditions and extent of the proposed MTD strategy's effectiveness in protecting Cloud-based services. Namely, 1) MTD is more effective when the service deployment is dense in the replacement pool and/or when the attack is strong, and 2) attack-surface heterogeneity-and-dynamics awareness helps in improving MTD's effectiveness.", "title": "" }, { "docid": "6620d6177ed14871321314f307746d85", "text": "Global software engineering increases coordination, communication, and control challenges in software development. The testing phase in this context is not a widely researched subject. In this paper, we study the outsourcing of software testing in the Oulu area, research the ways in which it is used, and determine the observable benefits and obstacles. 
The companies that participated in this study were found to use the outsourcing possibility of software testing with good efficiency and their testing process was considered to be mature. The most common benefits, in addition to the companies' cost savings, included the utilization of time zone differences for around-the-clock productivity, a closer proximity to the market, an improved record of communication and the tools that record the audit materials. The most commonly realized difficulties consisted of teamwork challenges, a disparate tool infrastructure, tool expense, and often-elevated coordination costs. We utilized in our study two matrices that consist in one dimension of the three distances, control, coordination, and communication, and in another dimension of four distances, temporal, geographical, socio-cultural and technical. The technical distance was our extension to the matrix that has been used as the basis for many other studies about global software development and outsourcing efforts. Our observations justify the extension of matrices with respect to the technical distance.", "title": "" }, { "docid": "97a2cc4cb07b0fbfb880984ca42d9553", "text": "While today many online platforms employ complex algorithms to curate content, these algorithms are rarely highlighted in interfaces, preventing users from understanding these algorithms' operation or even existence. Here, we study how knowledgeable users are about these algorithms, showing that providing insight to users about an algorithm's existence or functionality through design facilitates rapid processing of the underlying algorithm models and increases users' engagement with the system. We also study algorithmic systems that might introduce bias to users' online experience to gain insight into users' behavior around biased algorithms. We will leverage these insights to build an algorithm-aware design that shapes a more informed interaction between users and algorithmic systems.", "title": "" }, { "docid": "370c012ce6ebb22fe793a307b2a88abc", "text": "In this paper, we present a novel approach to model arguments, their components and relations in persuasive essays in English. We propose an annotation scheme that includes the annotation of claims and premises as well as support and attack relations for capturing the structure of argumentative discourse. We further conduct a manual annotation study with three annotators on 90 persuasive essays. The obtained inter-rater agreement of αU = 0.72 for argument components and α = 0.81 for argumentative relations indicates that the proposed annotation scheme successfully guides annotators to substantial agreement. The final corpus and the annotation guidelines are freely available to encourage future research in argument recognition.", "title": "" }, { "docid": "791314f5cee09fc8e27c236018a0927f", "text": "© The Author(s) 2018. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creat iveco mmons .org/licen ses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creat iveco mmons .org/ publi cdoma in/zero/1.0/) applies to the data made available in this article, unless otherwise stated. 
Oral presentations", "title": "" }, { "docid": "0c24b767705b3a88acf9fe128c0e3477", "text": "The studied camera is basically just a line of pixel sensors, which can be rotated on a full circle, describing a cylindrical surface this way. During a rotation we take individual shots, line by line. All these line images define a panoramic image on a cylindrical surface. This camera architecture (in contrast to the plane segment of the pinhole camera) comes with new challenges, and this report is about a classification of different models of such cameras and their calibration. Acknowledgment. The authors acknowledge comments, collaboration or support by various students and colleagues at CITR Auckland and DLR Berlin-Adlershof. report1_HWK.tex; 22/03/2006; 9:47; p.1", "title": "" }, { "docid": "023dd7b74feead464f2e643c70aef43e", "text": "Technological advances are bringing connected and autonomous vehicles (CAVs) to the everevolving transportation system. Anticipating the public acceptance and adoption of these technologies is important. A recent internet-based survey was conducted polling 347 Austinites to understand their opinions on smart-car technologies and strategies. Ordered-probit and other model results indicate that respondents perceive fewer crashes to be the primary benefit of autonomous vehicles (AVs), with equipment failure being their top concern. Their average willingness to pay (WTP) for adding full (Level 4) automation ($7,253) appears to be much higher than that for adding partial (Level 3) automation ($3,300) to their current vehicles. This study estimates the impact of demographics, built-environment variables, and travel characteristics on Austinites’ WTP for adding such automations and connectivity to their current and coming vehicles. It also estimates adoption rates of shared autonomous vehicles (SAVs) under different pricing scenarios ($1, $2, and $3 per mile), choice dependence on friends’ and neighbors’ adoption rates, home-location decisions after AVs and SAVs become a common mode of transport, and preferences regarding how congestion-toll revenues are used. Higherincome, technology-savvy males, living in urban areas, and those who have experienced more crashes have a greater interest in and higher WTP for the new technologies, with less dependence on others’ adoption rates. Such behavioral models are useful to simulate long-term adoption of CAV technologies under different vehicle pricing and demographic scenarios. These results can be used to develop smarter transportation systems for more efficient and sustainable travel.", "title": "" }, { "docid": "8dfeae1304eb97bc8f7d872af7aaa795", "text": "Encouraged by the recent progress in pedestrian detection, we investigate the gap between current state-of-the-art methods and the \"perfect single frame detector\". We enable our analysis by creating a human baseline for pedestrian detection (over the Caltech dataset), and by manually clustering the recurrent errors of a top detector. Our results characterise both localisation and background-versusforeground errors. To address localisation errors we study the impact of training annotation noise on the detector performance, and show that we can improve even with a small portion of sanitised training data. To address background/foreground discrimination, we study convnets for pedestrian detection, and discuss which factors affect their performance. 
Other than our in-depth analysis, we report top performance on the Caltech dataset, and provide a new sanitised set of training and test annotations.", "title": "" }, { "docid": "4afdb551efb88711ffe3564763c3806a", "text": "This article applied GARCH model instead AR or ARMA model to compare with the standard BP and SVM in forecasting of the four international including two Asian stock markets indices.These models were evaluated on five performance metrics or criteria. Our experimental results showed the superiority of SVM and GARCH models, compared to the standard BP in forecasting of the four international stock markets indices.", "title": "" }, { "docid": "b719b861a5bb6cc349ccbcd260f45054", "text": "Road accident analysis is very challenging task and investigating the dependencies between the attributes become complex because of many environmental and road related factors. In this research work we applied data mining classification techniques to carry out gender based classification of which RndTree and C4.5 using AdaBoost Meta classifier gives high accurate results. The training dataset used for the research work is obtained from Fatality Analysis Reporting System (FARS) which is provided by the University of Alabama's Critical Analysis Reporting Environment (CARE) system. The results reveal that AdaBoost used with RndTree improvised the classifier's accuracy.", "title": "" }, { "docid": "04b7d1197e9e5d78e948e0c30cbdfcfe", "text": "Context: Software development depends significantly on team performance, as does any process that involves human interaction. Objective: Most current development methods argue that teams should self-manage. Our objective is thus to provide a better understanding of the nature of self-managing agile teams, and the teamwork challenges that arise when introducing such teams. Method: We conducted extensive fieldwork for 9 months in a software development company that introduced Scrum. We focused on the human sensemaking, on how mechanisms of teamwork were understood by the people involved. Results: We describe a project through Dickinson and McIntyre’s teamwork model, focusing on the interrelations between essential teamwork components. Problems with team orientation, team leadership and coordination in addition to highly specialized skills and corresponding division of work were important barriers for achieving team effectiveness. Conclusion: Transitioning from individual work to self-managing teams requires a reorientation not only by developers but also by management. This transition takes time and resources, but should not be neglected. In addition to Dickinson and McIntyre’s teamwork components, we found trust and shared mental models to be of fundamental importance. 2009 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "fc9699b4382b1ddc6f60fc6ec883a6d3", "text": "Applications hosted in today's data centers suffer from internal fragmentation of resources, rigidity, and bandwidth constraints imposed by the architecture of the network connecting the data center's servers. Conventional architectures statically map web services to Ethernet VLANs, each constrained in size to a few hundred servers owing to control plane overheads. The IP routers used to span traffic across VLANs and the load balancers used to spray requests within a VLAN across servers are realized via expensive customized hardware and proprietary software. 
Bisection bandwidth is low, severly constraining distributed computation Further, the conventional architecture concentrates traffic in a few pieces of hardware that must be frequently upgraded and replaced to keep pace with demand - an approach that directly contradicts the prevailing philosophy in the rest of the data center, which is to scale out (adding more cheap components) rather than scale up (adding more power and complexity to a small number of expensive components).\n Commodity switching hardware is now becoming available with programmable control interfaces and with very high port speeds at very low port cost, making this the right time to redesign the data center networking infrastructure. In this paper, we describe monsoon, a new network architecture, which scales and commoditizes data center networking monsoon realizes a simple mesh-like architecture using programmable commodity layer-2 switches and servers. In order to scale to 100,000 servers or more,monsoon makes modifications to the control plane (e.g., source routing) and to the data plane (e.g., hot-spot free multipath routing via Valiant Load Balancing). It disaggregates the function of load balancing into a group of regular servers, with the result that load balancing server hardware can be distributed amongst racks in the data center leading to greater agility and less fragmentation. The architecture creates a huge, flexible switching domain, supporting any server/any service and unfragmented server capacity at low cost.", "title": "" }, { "docid": "44327eaaabf489d5deaf97a5bb041985", "text": "Convolutional neural networks with deeply trained make a significant performance improvement in face detection. However, the major shortcomings, i.e. need of high computational cost and slow calculation, make the existing CNN-based face detectors impractical in many applications. In this paper, a real-time approach for face detection was proposed by utilizing a single end-to-end deep neural network with multi-scale feature maps, multi-scale prior aspect ratios as well as confidence rectification. Multi-scale feature maps overcome the difficulties of detecting small face, and meanwhile, multiscale prior aspect ratios reduce the computing cost and the confidence rectification, which is in line with the biological intuition and can further improve the detection rate. Evaluated on the public benchmark, FDDB, the proposed algorithm, gained a performance as good as the state-of-the-art CNNbased methods, however, with much faster speed.", "title": "" }, { "docid": "853ac793e92b97d41e5ef6d1bc16d504", "text": "We present a systematic study of parameters used in the construction of semantic vector space models. Evaluation is carried out on a variety of similarity tasks, including a compositionality dataset, using several source corpora. In addition to recommendations for optimal parameters, we present some novel findings, including a similarity metric that outperforms the alternatives on all tasks considered.", "title": "" }, { "docid": "c5cfe386f6561eab1003d5572443612e", "text": "Agri-Food is the largest manufacturing sector in the UK. It supports a food chain that generates over {\\pounds}108bn p.a., with 3.9m employees in a truly international industry and exports {\\pounds}20bn of UK manufactured goods. However, the global food chain is under pressure from population growth, climate change, political pressures affecting migration, population drift from rural to urban regions and the demographics of an aging global population. 
These challenges are recognised in the UK Industrial Strategy white paper and backed by significant investment via a Wave 2 Industrial Challenge Fund Investment (\"Transforming Food Production: from Farm to Fork\"). Robotics and Autonomous Systems (RAS) and associated digital technologies are now seen as enablers of this critical food chain transformation. To meet these challenges, this white paper reviews the state of the art in the application of RAS in Agri-Food production and explores research and innovation needs to ensure these technologies reach their full potential and deliver the necessary impacts in the Agri-Food sector.", "title": "" }, { "docid": "8e28f1561b3a362b2892d7afa8f2164c", "text": "Inference based techniques are one of the major approaches to analyze DNS data and detecting malicious domains. The key idea of inference techniques is to first define associations between domains based on features extracted from DNS data. Then, an inference algorithm is deployed to infer potential malicious domains based on their direct/indirect associations with known malicious ones. The way associations are defined is key to the effectiveness of an inference technique. It is desirable to be both accurate (i.e., avoid falsely associating domains with no meaningful connections) and with good coverage (i.e., identify all associations between domains with meaningful connections). Due to the limited scope of information provided by DNS data, it becomes a challenge to design an association scheme that achieves both high accuracy and good coverage. In this paper, we propose a new association scheme to identify domains controlled by the same entity. Our key idea is an indepth analysis of active DNS data to accurately separate public IPs from dedicated ones, which enables us to build high-quality associations between domains. Our scheme avoids the pitfall of naive approaches that rely on weak “co-IP” relationship of domains (i.e., two domains are resolved to the same IP) that results in low detection accuracy, and, meanwhile, identifies many meaningful connections between domains that are discarded by existing state-of-the-art approaches. Our experimental results show that the proposed association scheme not only significantly improves the domain coverage compared to existing approaches but also achieves better detection accuracy. Existing path-based inference algorithm is specifically designed for DNS data analysis. It is effective but computationally expensive. To further demonstrate the strength of our domain association scheme as well as improving inference efficiency, we investigate the effectiveness of combining our association scheme with the generic belief propagation algorithm. Through comprehensive experiments, we show that this approach offers significant efficiency and scalability improvement with only minor negative impact of detection accuracy, which suggests that such a combination could offer a good tradeoff for malicious domain detection in practice.", "title": "" }, { "docid": "9c92a9409cd3ce2b2c546f7ef156e1f3", "text": "We describe a decorrelation network training method for improving the quality of regression learning in \\en-semble\" neural networks that are composed of linear combinations of individual neural networks. In this method, individual networks are trained by backpropagation to not only reproduce a desired output, but also to have their errors be linearly decorrelated with the other networks. 
Outputs from the individual networks are then linearly combined to produce the output of the ensemble network. We demonstrate the performances of decorrelated network training on learning the \\3 Parity\" logic function, a noisy sine function, and a one dimensional nonlinear function, and compare the results with the ensemble networks composed of independently trained individual networks (without decorrelation training). Empirical results show that when individual networks are forced to be decorrelated with one another the resulting ensemble neural networks have lower mean squared errors than the ensemble networks having independently trained individual networks. This method is particularly applicable when there is insuucient data to train each individual network on disjoint subsets of training patterns.", "title": "" }, { "docid": "c71b4a8d6d9ffc64c9e86aab40d9784f", "text": "Voice impersonation is not the same as voice transformation, although the latter is an essential element of it. In voice impersonation, the resultant voice must convincingly convey the impression of having been naturally produced by the target speaker, mimicking not only the pitch and other perceivable signal qualities, but also the style of the target speaker. In this paper, we propose a novel neural-network based speech quality- and style-mimicry framework for the synthesis of impersonated voices. The framework is built upon a fast and accurate generative adversarial network model. Given spectrographic representations of source and target speakers' voices, the model learns to mimic the target speaker's voice quality and style, regardless of the linguistic content of either's voice, generating a synthetic spectrogram from which the time-domain signal is reconstructed using the Griffin-Lim method. In effect, this model reframes the well-known problem of style-transfer for images as the problem of style-transfer for speech signals, while intrinsically addressing the problem of durational variability of speech sounds. Experiments demonstrate that the model can generate extremely convincing samples of impersonated speech. It is even able to impersonate voices across different genders effectively. Results are qualitatively evaluated using standard procedures for evaluating synthesized voices.", "title": "" }, { "docid": "8400fd3ffa3cdfd54e92370b8627c7e8", "text": "A number of computer vision problems such as human age estimation, crowd density estimation and body/face pose (view angle) estimation can be formulated as a regression problem by learning a mapping function between a high dimensional vector-formed feature input and a scalar-valued output. Such a learning problem is made difficult due to sparse and imbalanced training data and large feature variations caused by both uncertain viewing conditions and intrinsic ambiguities between observable visual features and the scalar values to be estimated. Encouraged by the recent success in using attributes for solving classification problems with sparse training data, this paper introduces a novel cumulative attribute concept for learning a regression model when only sparse and imbalanced data are available. More precisely, low-level visual features extracted from sparse and imbalanced image samples are mapped onto a cumulative attribute space where each dimension has clearly defined semantic interpretation (a label) that captures how the scalar output value (e.g. age, people count) changes continuously and cumulatively. 
Extensive experiments show that our cumulative attribute framework gains a notable advantage in accuracy for both age estimation and crowd counting when compared against conventional regression models, especially when the labelled training data is sparse with imbalanced sampling.", "title": "" } ]
scidocsrr
a1ede71923b1a94dff46f1c8d67dfb20
Real-Time Bidding by Reinforcement Learning in Display Advertising
[ { "docid": "d8982dd146a28c7d2779c781f7110ed5", "text": "We consider the budget optimization problem faced by an advertiser participating in repeated sponsored search auctions, seeking to maximize the number of clicks attained under that budget. We cast the budget optimization problem as a Markov Decision Process (MDP) with censored observations, and propose a learning algorithm based on the well-known Kaplan-Meier or product-limit estimator. We validate the performance of this algorithm by comparing it to several others on a large set of search auction data from Microsoft adCenter, demonstrating fast convergence to optimal performance.", "title": "" }, { "docid": "e9eefe7d683a8b02a8456cc5ff0ebe9d", "text": "Real-time bidding (RTB), aka programmatic buying, has recently become the fastest growing area in online advertising. Instead of bulk buying and inventory-centric buying, RTB mimics stock exchanges and utilises computer algorithms to automatically buy and sell ads in real-time. It uses per-impression context and targets the ads to specific people based on data about them, and hence dramatically increases the effectiveness of display advertising. In this paper, we provide an empirical analysis and measurement of a production ad exchange. Using the data sampled from both demand and supply side, we aim to provide first-hand insights into the emerging new impression selling infrastructure and its bidding behaviours, and help identify research and design issues in such systems. From our study, we observed that periodic patterns occur in various statistics including impressions, clicks, bids, and conversion rates (both post-view and post-click), which suggest time-dependent models would be appropriate for capturing the repeated patterns in RTB. We also found that despite the claimed second price auction, the first price payment in fact accounts for 55.4% of the total cost due to the arrangement of the soft floor price. As such, we argue that the setting of the soft floor price in the current RTB systems puts advertisers in a less favourable position. Furthermore, our analysis of the conversion rates shows that the current bidding strategy is far from optimal, indicating a significant need for optimisation algorithms that incorporate factors such as the temporal behaviours, the frequency and recency of the ad displays, which have not been well considered in the past.", "title": "" } ]
[ { "docid": "548ca7ecd778bc64e4a3812acd73dcfb", "text": "Inference algorithms of latent Dirichlet allocation (LDA), either for small or big data, can be broadly categorized into expectation-maximization (EM), variational Bayes (VB) and collapsed Gibbs sampling (GS). Looking for a unified understanding of these different inference algorithms is currently an important open problem. In this paper, we revisit these three algorithms from the entropy perspective, and show that EM can achieve the best predictive perplexity (a standard performance metric for LDA accuracy) by minimizing directly the cross entropy between the observed word distribution and LDA's predictive distribution. Moreover, EM can change the entropy of LDA's predictive distribution through tuning priors of LDA, such as the Dirichlet hyperparameters and the number of topics, to minimize the cross entropy with the observed word distribution. Finally, we propose the adaptive EM (AEM) algorithm that converges faster and more accurate than the current state-of-the-art SparseLDA [20] and AliasLDA [12] from small to big data and LDA models. The core idea is that the number of active topics, measured by the residuals between E-steps at successive iterations, decreases significantly, leading to the amortized σ(1) time complexity in terms of the number of topics. The open source code of AEM is available at GitHub.", "title": "" }, { "docid": "759a4737f3774c1487670597f5e011d1", "text": "Indoor positioning systems (IPS) based on Wi-Fi signals are gaining popularity recently. IPS based on Received Signal Strength Indicator (RSSI) could only achieve a precision of several meters due to the strong temporal and spatial variation of indoor environment. On the other hand, IPS based on Channel State Information (CSI) drive the precision into the sub-meter regime with several access points (AP). However, the performance degrades with fewer APs mainly due to the limit of bandwidth. In this paper, we propose a Wi-Fi-based time-reversal indoor positioning system (WiFi-TRIPS) using the location-specific fingerprints generated by CSIs with a total bandwidth of 1 GHz. WiFi-TRIPS consists of an offline phase and an online phase. In the offline phase, CSIs are collected in different 10 MHz bands from each location-of-interest and the timing and frequency synchronization errors are compensated. We perform a bandwidth concatenation to combine CSIs in different bands into a single fingerprint of 1 GHz. In the online phase, we evaluate the time-reversal resonating strength using the fingerprint from an unknown location and those in the database for location estimation. Extensive experiment results demonstrate a perfect 5cm precision in an 20cm × 70cm area in a non-line-of-sight office environment with one link measurement.", "title": "" }, { "docid": "48544ec3225799c82732db7b3215833b", "text": "Christian M Jones Laura Scholes Daniel Johnson Mary Katsikitis Michelle C. 
Carras University of the Sunshine Coast University of the Sunshine Coast Queensland University of Technology University of the Sunshine Coast Johns Hopkins University Queensland, Australia Queensland, Australia Queensland, Australia Queensland, Australia Baltimore, MD, USA cmjones@usc.edu.au l.scholes@usc.edu.au dm.johnson@qut.edu.au mkatsiki@usc.edu.au mcarras@jhsph.edu", "title": "" }, { "docid": "65580dfc9bdf73ef72b6a133ab19ccdd", "text": "A rotary piezoelectric motor design with simple structural components and the potential for miniaturization using a pretwisted beam stator is demonstrated in this paper. The beam acts as a vibration converter to transform axial vibration input from a piezoelectric element into combined axial-torsional vibration. The axial vibration of the stator modulates the torsional friction forces transmitted to the rotor. Prototype stators measuring 6.5 times 6.5 times 67.5 mm were constructed using aluminum (2024-T6) twisted beams with rectangular cross-section and multilayer piezoelectric actuators. The stall torque and no-load speed attained for a rectangular beam with an aspect ratio of 1.44 and pretwist helix angle of 17.7deg were 0.17 mNm and 840 rpm with inputs of 184.4 kHz and 149 mW, respectively. Operation in both clockwise and counterclockwise directions was obtained by choosing either 70.37 or 184.4 kHz for the operating frequency. The effects of rotor preload and power input on motor performance were investigated experimentally. The results suggest that motor efficiency is higher at low power input, and that efficiency increases with preload to a maximum beyond which it begins to drop.", "title": "" }, { "docid": "610629d3891c10442fe5065e07d33736", "text": "We investigate in this paper deep learning (DL) solutions for prediction of driver's cognitive states (drowsy or alert) using EEG data. We discussed the novel channel-wise convolutional neural network (CCNN) and CCNN-R which is a CCNN variation that uses Restricted Boltzmann Machine in order to replace the convolutional filter. We also consider bagging classifiers based on DL hidden units as an alternative to the conventional DL solutions. To test the performance of the proposed methods, a large EEG dataset from 3 studies of driver's fatigue that includes 70 sessions from 37 subjects is assembled. All proposed methods are tested on both raw EEG and Independent Component Analysis (ICA)-transformed data for cross-session predictions. The results show that CCNN and CCNN-R outperform deep neural networks (DNN) and convolutional neural networks (CNN) as well as other non-DL algorithms and DL with raw EEG inputs achieves better performance than ICA features.", "title": "" }, { "docid": "b3a9ad04e7df1b2250f0a7b625509efd", "text": "Emotions are very important in human-human communication but are usually ignored in human-computer interaction. Recent work focuses on recognition and generation of emotions as well as emotion driven behavior. Our work focuses on the use of emotions in dialogue systems that can be used with speech input or as well in multi-modal environments.This paper describes a framework for using emotional cues in a dialogue system and their informational characterization. We describe emotion models that can be integrated into the dialogue system and can be used in different domains and tasks. 
Our application of the dialogue system is planned to model multi-modal human-computer-interaction with a humanoid robotic system.", "title": "" }, { "docid": "1d5624ab9e2e69cd7a96619b25db3e1c", "text": "Face detection is a fundamental problem in computer vision. It is still a challenging task in unconstrained conditions due to significant variations in scale, pose, expressions, and occlusion. In this paper, we propose a multi-branch fully convolutional network (MB-FCN) for face detection, which considers both efficiency and effectiveness in the design process. Our MB-FCN detector can deal with faces at all scale ranges with only a single pass through the backbone network. As such, our MB-FCN model saves computation and thus is more efficient, compared to previous methods that make multiple passes. For each branch, the specific skip connections of the convolutional feature maps at different layers are exploited to represent faces in specific scale ranges. Specifically, small faces can be represented with both shallow fine-grained and deep powerful coarse features. With this representation, superior improvement in performance is registered for the task of detecting small faces. We test our MB-FCN detector on two public face detection benchmarks, including FDDB and WIDER FACE. Extensive experiments show that our detector outperforms state-of-the-art methods on all these datasets in general and by a substantial margin on the most challenging among them (e.g. WIDER FACE Hard subset). Also, MB-FCN runs at 15 FPS on a GPU for images of size 640 × 480 with no assumption on the minimum detectable face size.", "title": "" }, { "docid": "3f1161fa81b19a15b0d4ff882b99b60a", "text": "INTRODUCTION\nDupilumab is a fully human IgG4 monoclonal antibody directed against the α subunit of the interleukin (IL)-4 receptor (IL-4Rα). Since the activation of IL-4Rα is utilized by both IL-4 and IL-13 to mediate their pathophysiological effects, dupilumab behaves as a dual antagonist of these two sister cytokines, which blocks IL-4/IL-13-dependent signal transduction. Areas covered: Herein, the authors review the cellular and molecular pathways activated by IL-4 and IL-13, which are relevant to asthma pathobiology. They also review: the mechanism of action of dupilumab, the phase I, II and III studies evaluating the pharmacokinetics as well as the safety, tolerability and clinical efficacy of dupilumab in asthma therapy. Expert opinion: Supported by a strategic mechanism of action, as well as by convincing preliminary clinical results, dupilumab currently appears to be a very promising biological drug for the treatment of severe uncontrolled asthma. It also may have benefits to comorbidities of asthma including atopic dermatitis, chronic sinusitis and nasal polyposis.", "title": "" }, { "docid": "254f437f82e14d889fe6ba15df8369ad", "text": "In academia, scientific research achievements would be inconceivable without academic collaboration and cooperation among researchers. Previous studies have discovered that productive scholars tend to be more collaborative. However, it is often difficult and time-consuming for researchers to find the most valuable collaborators (MVCs) from a large volume of big scholarly data. In this paper, we present MVCWalker, an innovative method that stands on the shoulders of random walk with restart (RWR) for recommending collaborators to scholars. 
Three academic factors, i.e., coauthor order, latest collaboration time, and times of collaboration, are exploited to define link importance in academic social networks for the sake of recommendation quality. We conducted extensive experiments on DBLP data set in order to compare MVCWalker to the basic model of RWR and the common neighbor-based model friend of friends in various aspects, including, e.g., the impact of critical parameters and academic factors. Our experimental results show that incorporating the above factors into random walk model can improve the precision, recall rate, and coverage rate of academic collaboration recommendations.", "title": "" }, { "docid": "69ced55a44876f7cc4e57f597fcd5654", "text": "A wideband circularly polarized (CP) antenna with a conical radiation pattern is investigated. It consists of a feeding probe and parasitic dielectric parallelepiped elements that surround the probe. Since the structure of the antenna looks like a bird nest, it is named as bird-nest antenna. The probe, which protrudes from a circular ground plane, operates in its fundamental monopole mode that generates omnidirectional linearly polarized (LP) fields. The dielectric parallelepipeds constitute a wave polarizer that converts omnidirectional LP fields of the probe into omnidirectional CP fields. To verify the design, a prototype operating in C band was fabricated and measured. The reflection coefficient, axial ratio (AR), radiation pattern, and antenna gain are studied, and reasonable agreement between the measured and simulated results is observed. The prototype has a 10-dB impedance bandwidth of 41.0% and a 3-dB AR bandwidth of as wide as 54.9%. A parametric study was carried out to characterize the proposed antenna. Also, a design guideline is given to facilitate designs of the antenna.", "title": "" }, { "docid": "64f4ee1e5397b1a5dd35f7908ead0429", "text": "Online user feedback is principally used as an information source for evaluating customers’ satisfaction for a given goods, service or software application. The increasing attitude of people towards sharing comments through the social media is making online user feedback a resource containing different types of valuable information. The huge amount of available user feedback has drawn the attention of researchers from different fields. For instance, data mining techniques have been developed to enable information extraction for different purposes, or the use of social techniques for involving users in the innovation of services and processes. Specifically, current research and technological efforts are put into the definition of platforms to gather and/or analyze multi-modal feedback. But we believe that the understanding of the type of concepts instantiated as information contained in user feedback would be beneficial to define new methods for its better exploitation. In our research, we focus on online explicit user feedback that can be considered as a powerful means for user-driven evolution of software services and applications. Up to our knowledge, a conceptualization of user feedback is still missing. With the purpose of contributing to fill up this gap we propose an ontology, for explicit online user feedback that is founded on a foundational ontology and has been proposed to describe artifacts and processes in software engineering. 
Our contribution in this paper concerns a novel user feedback ontology founded on a Unified Foundational Ontology (UFO) that supports the description of analysis processes of user feedback in software engineering. We describe the ontology together with an evaluation of its quality, and discuss some application scenarios.", "title": "" }, { "docid": "5940949b1fd6f6b8ab2c45dcb1ece016", "text": "Despite significant work on the problem of inferring a Twitter user’s gender from her online content, no systematic investigation has been made into leveraging the most obvious signal of a user’s gender: first name. In this paper, we perform a thorough investigation of the link between gender and first name in English tweets. Our work makes several important contributions. The first and most central contribution is two different strategies for incorporating the user’s self-reported name into a gender classifier. We find that this yields a 20% increase in accuracy over a standard baseline classifier. These classifiers are the most accurate gender inference methods for Twitter data developed to date. In order to evaluate our classifiers, we developed a novel way of obtaining gender-labels for Twitter users that does not require analysis of the user’s profile or textual content. This is our second contribution. Our approach eliminates the troubling issue of a label being somehow derived from the same text that a classifier will use to", "title": "" }, { "docid": "27034289da290734ec5136656573ca11", "text": "Iris recognition as a reliable method for personal identification has been well-studied with the objective to assign the class label of each iris image to a unique subject. In contrast, iris image classification aims to classify an iris image to an application specific category, e.g., iris liveness detection (classification of genuine and fake iris images), race classification (e.g., classification of iris images of Asian and non-Asian subjects), coarse-to-fine iris identification (classification of all iris images in the central database into multiple categories). This paper proposes a general framework for iris image classification based on texture analysis. A novel texture pattern representation method called Hierarchical Visual Codebook (HVC) is proposed to encode the texture primitives of iris images. The proposed HVC method is an integration of two existing Bag-of-Words models, namely Vocabulary Tree (VT), and Locality-constrained Linear Coding (LLC). The HVC adopts a coarse-to-fine visual coding strategy and takes advantages of both VT and LLC for accurate and sparse representation of iris texture. Extensive experimental results demonstrate that the proposed iris image classification method achieves state-of-the-art performance for iris liveness detection, race classification, and coarse-to-fine iris identification. A comprehensive fake iris image database simulating four types of iris spoof attacks is developed as the benchmark for research of iris liveness detection.", "title": "" }, { "docid": "f81dd0c86a7b45e743e4be117b4030c2", "text": "Stock market prediction is of great importance for financial analysis. Traditionally, many studies only use the news or numerical data for the stock market prediction. In the recent years, in order to explore their complementary, some studies have been conducted to equally treat dual sources of information. However, numerical data often play a much more important role compared with the news. 
In addition, the existing simple combination cannot exploit their complementarity. In this paper, we propose a numerical-based attention (NBA) method for dual sources stock market prediction. Our major contributions are summarized as follows. First, we propose an attention-based method to effectively exploit the complementarity between news and numerical data in predicting the stock prices. The stock trend information hidden in the news is transformed into the importance distribution of numerical data. Consequently, the news is encoded to guide the selection of numerical data. Our method can effectively filter the noise and make full use of the trend information in news. Then, in order to evaluate our NBA model, we collect news corpus and numerical data to build three datasets from two sources: the China Security Index 300 (CSI300) and the Standard & Poor’s 500 (S&P500). Extensive experiments are conducted, showing that our NBA is superior to previous models in dual sources stock price prediction.", "title": "" }, { "docid": "ddb2fb53f0ead327d064d9b34af9b335", "text": "We seek to automate the design of molecules based on specific chemical properties. In computational terms, this task involves continuous embedding and generation of molecular graphs. Our primary contribution is the direct realization of molecular graphs, a task previously approached by generating linear SMILES strings instead of graphs. Our junction tree variational autoencoder generates molecular graphs in two phases, by first generating a tree-structured scaffold over chemical substructures, and then combining them into a molecule with a graph message passing network. This approach allows us to incrementally expand molecules while maintaining chemical validity at every step. We evaluate our model on multiple tasks ranging from molecular generation to optimization. Across these tasks, our model outperforms previous state-of-the-art baselines by a significant margin.", "title": "" }, { "docid": "87c33e325d074d8baefd56f6396f1c7a", "text": "We present a recurrent model for semantic instance segmentation that sequentially generates binary masks and their associated class probabilities for every object in an image. Our proposed system is trainable end-to-end from an input image to a sequence of labeled masks and, compared to methods relying on object proposals, does not require postprocessing steps on its output. We study the suitability of our recurrent model on three different instance segmentation benchmarks, namely Pascal VOC 2012, CVPPP Plant Leaf Segmentation and Cityscapes. Further, we analyze the object sorting patterns generated by our model and observe that it learns to follow a consistent pattern, which correlates with the activations learned in the encoder part of our network.", "title": "" }, { "docid": "1c576cf604526b448f0264f2c39f705a", "text": "This paper introduces a high-security post-quantum stateless hash-based signature scheme that signs hundreds of messages per second on a modern 4-core 3.5GHz Intel CPU. Signatures are 41 KB, public keys are 1 KB, and private keys are 1 KB. The signature scheme is designed to provide long-term 2 security even against attackers equipped with quantum computers. 
Unlike most hash-based designs, this signature scheme is stateless, allowing it to be a drop-in replacement for current signature schemes.", "title": "" }, { "docid": "c474df285da8106b211dc7fe62733423", "text": "In this paper, we propose an effective method to recognize human actions using 3D skeleton joints recovered from 3D depth data of RGBD cameras. We design a new action feature descriptor for action recognition based on differences of skeleton joints, i.e., EigenJoints which combine action information including static posture, motion property, and overall dynamics. Accumulated Motion Energy (AME) is then proposed to perform informative frame selection, which is able to remove noisy frames and reduce computational cost. We employ non-parametric Naïve-Bayes-Nearest-Neighbor (NBNN) to classify multiple actions. The experimental results on several challenging datasets demonstrate that our approach outperforms the state-of-the-art methods. In addition, we investigate how many frames are necessary for our method to perform classification in the scenario of online action recognition. We observe that the first 30% to 40% frames are sufficient to achieve comparable results to that using the entire video sequences on the MSR Action3D dataset.", "title": "" }, { "docid": "9d175a211ec3b0ee7db667d39c240e1c", "text": "In recent years, there has been an increased effort to introduce coding and computational thinking in early childhood education. In accordance with the international trend, programming has become an increasingly growing focus in European education. With over 9.5 million iOS downloads, ScratchJr is the most popular freely available introductory programming language for young children (ages 5-7). This paper provides an overview of ScratchJr, and the powerful ideas from computer science it is designed to teach. In addition, data analytics are presented to show trends of usage in Europe and and how it compares to the rest of the world. Data reveals that countries with robust computer science initiatives such as the UK and the Nordic countries have high usage of ScratchJr.", "title": "" }, { "docid": "d464711e6e07b61896ba6efe2bbfa5e4", "text": "This paper presents a simple model for body-shadowing in off-body and body-to-body channels. The model is based on a body shadowing pattern associated with the on-body antenna, represented by a cosine function whose amplitude parameter is calculated from measurements. This parameter, i.e the maximum body-shadowing loss, is found to be linearly dependent on distance. The model was evaluated against a set of off-body channel measurements at 2.45 GHz in an indoor office environment, showing a good fit. The coefficient of determination obtained for the linear model of the maximum body-shadowing loss is greater than 0.6 in all considered scenarios, being higher than 0.8 for the ones with a static user.", "title": "" } ]
scidocsrr
ae6eae748436bd9099d1b047c04e39c4
EDGE DETECTION TECHNIQUES FOR IMAGE SEGMENTATION
[ { "docid": "68990d2cb2ed45e1c8d30b2d7cb45926", "text": "Methods for histogram thresholding based on the minimization of a threshold-dependent criterion function might not work well for images having multimodal histograms. We propose an approach to threshold the histogram according to the similarity between gray levels. Such a similarity is assessed through a fuzzy measure. In this way, we overcome the local minima that affect most of the conventional methods. The experimental results demonstrate the effectiveness of the proposed approach for both bimodal and multimodal histograms.", "title": "" }, { "docid": "e14234696124c47d1860301c873f6685", "text": "We propose a novel image segmentation technique using the robust, adaptive least k-th order squares (ALKS) estimator which minimizes the k-th order statistics of the squared of residuals. The optimal value of k is determined from the data and the procedure detects the homogeneous surface patch representing the relative majority of the pixels. The ALKS shows a better tolerance to structured outliers than other recently proposed similar techniques: Minimize the Probability of Randomness (MINPRAN) and Residual Consensus (RESC). The performance of the new, fully autonomous, range image segmentation algorithm is compared to several other methods. Index Terms|robust methods, range image segmentation, surface tting", "title": "" }, { "docid": "6a96e3680d3d25fc8bcffe3b7e70968f", "text": "All rights reserved. No part of this book may be reproduced or transmitted in any form or by any means, without permission in writing from the publisher. The author and publisher of this book have used their best efforts in preparing this book. These efforts include the development, research, and testing of the theories and programs to determine their effectiveness. The author and publisher shall not be liable in any event for incidental or consequential damages with, or arising out of, the furnishing, performance, or use of these programs. 1 1 Introduction Preview Digital image processing is an area characterized by the need for extensive experimental work to establish the viability of proposed solutions to a given problem. In this chapter we outline how a theoretical base and state-of-the-art software can be integrated into a prototyping environment whose objective is to provide a set of well-supported tools for the solution of a broad class of problems in digital image processing. Background An important characteristic underlying the design of image processing systems is the significant level of testing and experimentation that normally is required before arriving at an acceptable solution. This characteristic implies that the ability to formulate approaches and quickly prototype candidate solutions generally plays a major role in reducing the cost and time required to arrive at a viable system implementation. Little has been written in the way of instructional material to bridge the gap between theory and application in a well-supported software environment. The main objective of this book is to integrate under one cover a broad base of theoretical concepts with the knowledge required to implement those concepts using state-of-the-art image processing software tools. The theoretical underpinnings of the material in the following chapters are mainly from the leading textbook in the field: Digital Image Processing, by Gonzalez and Woods, published by Prentice Hall. 
The software code and supporting tools are based on the leading software package in the field: The MATLAB Image Processing Toolbox, † 1.1 † In the following discussion and in subsequent chapters we sometimes refer to Digital Image Processing by Gonzalez and Woods as \" the Gonzalez-Woods book, \" and to the Image Processing Toolbox as \" IPT \" or simply as the \" toolbox. \" 2 Chapter 1 I Introduction from The MathWorks, Inc. (see Section 1.3). The material in the present book shares the same design, notation, and style of presentation …", "title": "" } ]
[ { "docid": "6bb1914cbbaf0ba27a8ab52dbec2152a", "text": "This paper presents a novel local feature for 3D range image data called `the line image'. It is designed to be highly viewpoint invariant by exploiting the range image to efficiently detect 3D occupancy, producing a representation of the surface, occlusions and empty spaces. We also propose a strategy for defining keypoints with stable orientations which define regions of interest in the scan for feature computation. The feature is applied to the task of object classification on sparse urban data taken with a Velodyne laser scanner, producing good results.", "title": "" }, { "docid": "7be0d43664c4ebb3c66f58c485a517ce", "text": "We consider problems requiring to allocate a set of rectangular items to larger rectangular standardized units by minimizing the waste. In two-dimensional bin packing problems these units are finite rectangles, and the objective is to pack all the items into the minimum number of units, while in two-dimensional strip packing problems there is a single standardized unit of given width, and the objective is to pack all the items within the minimum height. We discuss mathematical models, and survey lower bounds, classical approximation algorithms, recent heuristic and metaheuristic methods and exact enumerative approaches. The relevant special cases where the items have to be packed into rows forming levels are also discussed in detail. 2002 Elsevier Science B.V. All rights reserved.", "title": "" }, { "docid": "ed41127bf43b4f792f8cbe1ec652f7b2", "text": "Today, more than 100 blockchain projects created to transform government systems are being conducted in more than 30 countries. What leads countries rapidly initiate blockchain projects? I argue that it is because blockchain is a technology directly related to social organization; Unlike other technologies, a consensus mechanism form the core of blockchain. Traditionally, consensus is not the domain of machines but rather humankind. However, blockchain operates through a consensus algorithm with human intervention; once that consensus is made, it cannot be modified or forged. Through utilization of Lawrence Lessig’s proposition that “Code is law,” I suggest that blockchain creates “absolute law” that cannot be violated. This characteristic of blockchain makes it possible to implement social technology that can replace existing social apparatuses including bureaucracy. In addition, there are three close similarities between blockchain and bureaucracy. First, both of them are defined by the rules and execute predetermined rules. Second, both of them work as information processing machines for society. Third, both of them work as trust machines for society. Therefore, I posit that it is possible and moreover unavoidable to replace bureaucracy with blockchain systems. In conclusion, I suggest five principles that should be adhered to when we replace bureaucracy with the blockchain system: 1) introducing Blockchain Statute law; 2) transparent disclosure of data and source code; 3) implementing autonomous executing administration; 4) building a governance system based on direct democracy and 5) making Distributed Autonomous Government(DAG).", "title": "" }, { "docid": "7e8976250bd67e07fb71c6dd8b5be414", "text": "With the rapid growth of product review forums, discussion groups, and Blogs, it is almost impossible for a customer to make an informed purchase decision. 
Different and possibly contradictory opinions written by different reviewers can even make customers more confused. In the last few years, mining customer reviews (opinion mining) has emerged as an interesting new research direction to address this need. One of the interesting problem in opinion mining is Opinion Question Answering (Opinion QA). While traditional QA can only answer factual questions, opinion QA aims to find the authors' sentimental opinions on a specific target. Current opinion QA systems suffers from several weaknesses. The main cause of these weaknesses is that these methods can only answer a question if they find a content similar to the given question in the given documents. As a result, they cannot answer majority questions like \"What is the best digital camera?\" nor comparative questions, e.g. \"Does SamsungY work better than CanonX?\". In this paper we address the problem of opinion question answering to answer opinion questions about products by using reviewers' opinions. Our proposed method, called Aspect-based Opinion Question Answering (AQA), support answering of opinion-based questions while improving the weaknesses of current techniques. AQA contains five phases: question analysis, question expansion, high quality review retrieval, subjective sentence extraction, and answer grouping. AQA adopts an opinion mining technique in the preprocessing phase to identify target aspects and estimate their quality. Target aspects are attributes or components of the target product that have been commented on in the review, e.g. 'zoom' and 'battery life' for a digital camera. We conduct experiments on a real life dataset, Epinions.com, demonstrating the improved effectiveness of the AQA in terms of the accuracy of the retrieved answers.", "title": "" }, { "docid": "85908a576c13755e792d52d02947f8b3", "text": "Quick Response Code has been widely used in the automatic identification fields. In order to adapting various sizes, a little dirty or damaged, and various lighting conditions of bar code image, this paper proposes a novel implementation of real-time Quick Response Code recognition using mobile, which is an efficient technology used for data transferring. An image processing system based on mobile is described to be able to binarize, locate, segment, and decode the QR Code. Our experimental results indicate that these algorithms are robust to real world scene image.", "title": "" }, { "docid": "18b3328725661770be1f408f37c7eb64", "text": "Researchers have proposed various machine learning algorithms for traffic sign recognition, which is a supervised multicategory classification problem with unbalanced class frequencies and various appearances. We present a novel graph embedding algorithm that strikes a balance between local manifold structures and global discriminative information. A novel graph structure is designed to depict explicitly the local manifold structures of traffic signs with various appearances and to intuitively model between-class discriminative information. Through this graph structure, our algorithm effectively learns a compact and discriminative subspace. Moreover, by using L2, 1-norm, the proposed algorithm can preserve the sparse representation property in the original space after graph embedding, thereby generating a more accurate projection matrix. 
Experiments demonstrate that the proposed algorithm exhibits better performance than the recent state-of-the-art methods.", "title": "" }, { "docid": "511c4a62c32b32eb74761b0585564fe4", "text": "In the previous chapters, we proposed several features for writer identification, historical manuscript dating and localization separately. In this chapter, we present a summarization of the proposed features for different applications by proposing a joint feature distribution (JFD) principle to design novel discriminative features which could be the joint distribution of features on adjacent positions or the joint distribution of different features on the same location. Following the proposed JFD principle, we introduce seventeen features, including twelve textural-based and five grapheme-based features. We evaluate these features for different applications from four different perspectives to understand handwritten documents beyond OCR, by writer identification, script recognition, historical manuscript dating and localization.", "title": "" }, { "docid": "bd125a32cba00b4071c87aa42e7f3236", "text": "With the advent of affordable depth sensors, 3D capture becomes more and more ubiquitous and already has made its way into commercial products. Yet, capturing the geometry or complete shapes of everyday objects using scanning devices (eg. Kinect) still comes with several challenges that result in noise or even incomplete shapes. Recent success in deep learning has shown how to learn complex shape distributions in a data-driven way from large scale 3D CAD Model collections and to utilize them for 3D processing on volumetric representations and thereby circumventing problems of topology and tessellation. Prior work has shown encouraging results on problems ranging from shape completion to recognition. We provide an analysis of such approaches and discover that training as well as the resulting representation are strongly and unnecessarily tied to the notion of object labels. Furthermore, deep learning research argues [1] that learning representation with over-complete model are more prone to overfitting compared to the approach that learns from noisy data. Thus, we investigate a full convolutional volumetric denoising auto encoder that is trained in a unsupervised fashion. It outperforms prior work on recognition as well as more challenging tasks like denoising and shape completion. In addition, our approach is atleast two order of magnitude faster at test time and thus, provides a path to scaling up 3D deep learning.", "title": "" }, { "docid": "a6a2c027b809a98430ad80b837fa8090", "text": "This paper presents a 60-GHz CMOS direct-conversion Doppler radar RF sensor with a clutter canceller for single-antenna noncontact human vital-signs detection. A high isolation quasi-circulator (QC) is designed to reduce the transmitting (Tx) power leakage (to the receiver). The clutter canceller performs cancellation for the Tx leakage power (from the QC) and the stationary background reflection clutter to enhance the detection sensitivity of weak vital signals. The integration of the 60-GHz RF sensor consists of the voltage-controlled oscillator, divided-by-2 frequency divider, power amplifier, QC, clutter canceller (consisting of variable-gain amplifier and 360 ° phase shifter), low-noise amplifier, in-phase/quadrature-phase sub-harmonic mixer, and three couplers. 
In the human vital-signs detection experimental measurement, at a distance of 75 cm, the detected heartbeat (1-1.3 Hz) and respiratory (0.35-0.45 Hz) signals can be clearly observed with a 60-GHz 17-dBi patch-array antenna. The RF sensor is fabricated in 90-nm CMOS technology with a chip size of 2 mm×2 mm and a consuming power of 217 mW.", "title": "" }, { "docid": "78ccfdac121daaae3abe3f8f7c73482b", "text": "We present a method for constructing smooth n-direction fields (line fields, cross fields, etc.) on surfaces that is an order of magnitude faster than state-of-the-art methods, while still producing fields of equal or better quality. Fields produced by the method are globally optimal in the sense that they minimize a simple, well-defined quadratic smoothness energy over all possible configurations of singularities (number, location, and index). The method is fully automatic and can optionally produce fields aligned with a given guidance field such as principal curvature directions. Computationally the smoothest field is found via a sparse eigenvalue problem involving a matrix similar to the cotan-Laplacian. When a guidance field is present, finding the optimal field amounts to solving a single linear system.", "title": "" }, { "docid": "e3739a934ecd7b99f2d35a19f2aed5cf", "text": "We consider distributed algorithms for solving dynamic programming problems whereby several processors participate simultaneously in the computation while maintaining coordination by information exchange via communication links. A model of asynchronous distributed computation is developed which requires very weak assumptions on the ordering of computations, the timing of information exchange, the amount of local information needed at each computation node, and the initial conditions for the algorithm. The class of problems considered is very broad and includes shortest path problems, and finite and infinite horizon stochastic optimal control problems. When specialized to a shortest path problem the algorithm reduces to the algorithm originally implemented for routing of messages in the ARPANET.", "title": "" }, { "docid": "bbb4f7b90ade0ffbf7ba3e598c18a78f", "text": "In this paper, an analysis of the resistance of multi-track coils in printed circuit board (PCB) implementations, where the conductors have rectangular cross-section, for spiral planar coils is carried out. For this purpose, different analytical losses models for the mentioned conductors have been reviewed. From this review, we conclude that for the range of frequencies, the coil dimensions and the planar configuration typically used in domestic induction heating, the application in which we focus, these analysis are unsatisfactory. Therefore, in this work the resistance of multi-track winding has been calculated by means of finite element analysis (FEA) tool. These simulations provide us some design guidelines that allow us to optimize the design of multi-track coils for domestic induction heating. Furthermore, several prototypes are used to verify the simulated results, both single-turn coils and multi-turn coils.", "title": "" }, { "docid": "96bd149346554dac9e3889f0b1569be7", "text": "BACKGROUND\nFlight related low back pain (LBP) among helicopter pilots is frequent and may influence flight performance. Prolonged confined sitting during flights seems to weaken lumbar trunk (LT) muscles with associated secondary transient pain. 
Aim of the study was to investigate if structured training could improve muscular function and thus improve LBP related to flying.\n\n\nMETHODS\n39 helicopter pilots (35 men and 4 women), who reported flying related LBP on at least 1 of 3 missions last month, were allocated to two training programs over a 3-month period. Program A consisted of 10 exercises recommended for general LBP. Program B consisted of 4 exercises designed specifically to improve LT muscular endurance. The pilots were examined before and after the training using questionnaires for pain, function, quality of health and tests of LT muscular endurance as well as ultrasound measurements of the contractility of the lumbar multifidus muscle (LMM).\n\n\nRESULTS\nApproximately half of the participants performed the training per-protocol. Participants in this subset group had comparable baseline characteristics as the total study sample. Pre and post analysis of all pilots included, showed participants had marked improvement in endurance and contractility of the LMM following training. Similarly, participants had improvement in function and quality of health. Participants in program B had significant improvement in pain, function and quality of health.\n\n\nCONCLUSIONS\nThis study indicates that participants who performed a three months exercise program had improved muscle endurance at the end of the program. The helicopter pilots also experienced improved function and quality of health.\n\n\nTRIAL REGISTRATION\nIdentifier: NCT01788111 Registration date; February 5th, 2013, verified April 2016.", "title": "" }, { "docid": "31dbedbcdb930ead1f8274ff2c181fcb", "text": "This paper sums up lessons learned from a sequence of cooperative design workshops where end users were enabled to design mobile systems through scenario building, role playing, and low-fidelity prototyping. We present a resulting fixed workshop structure with well-chosen constraints that allows for end users to explore and design new technology and work practices. In these workshops, the systems developers get input to design from observing how users stage and act out current and future use scenarios and improvise new technology to fit their needs. A theoretical framework is presented to explain the creative processes involved and the workshop as a user-centered design method. Our findings encourage us to recommend the presented workshop structure for design projects involving mobility and computer-mediated communication, in particular project where the future use of the resulting products and services also needs to be designed.", "title": "" }, { "docid": "0048b244bd55a724f9bcf4dbf5e551a8", "text": "In the research reported here, we investigated the debiasing effect of mindfulness meditation on the sunk-cost bias. We conducted four studies (one correlational and three experimental); the results suggest that increased mindfulness reduces the tendency to allow unrecoverable prior costs to influence current decisions. Study 1 served as an initial correlational demonstration of the positive relationship between trait mindfulness and resistance to the sunk-cost bias. Studies 2a and 2b were laboratory experiments examining the effect of a mindfulness-meditation induction on increased resistance to the sunk-cost bias. 
In Study 3, we examined the mediating mechanisms of temporal focus and negative affect, and we found that the sunk-cost bias was attenuated by drawing one's temporal focus away from the future and past and by reducing state negative affect, both of which were accomplished through mindfulness meditation.", "title": "" }, { "docid": "f83d8a69a4078baf4048b207324e505f", "text": "Low-dose computed tomography (LDCT) has attracted major attention in the medical imaging field, since CT-associated X-ray radiation carries health risks for patients. The reduction of the CT radiation dose, however, compromises the signal-to-noise ratio, which affects image quality and diagnostic performance. Recently, deep-learning-based algorithms have achieved promising results in LDCT denoising, especially convolutional neural network (CNN) and generative adversarial network (GAN) architectures. This paper introduces a conveying path-based convolutional encoder-decoder (CPCE) network in 2-D and 3-D configurations within the GAN framework for LDCT denoising. A novel feature of this approach is that an initial 3-D CPCE denoising model can be directly obtained by extending a trained 2-D CNN, which is then fine-tuned to incorporate 3-D spatial information from adjacent slices. Based on the transfer learning from 2-D to 3-D, the 3-D network converges faster and achieves a better denoising performance when compared with a training from scratch. By comparing the CPCE network with recently published work based on the simulated Mayo data set and the real MGH data set, we demonstrate that the 3-D CPCE denoising model has a better performance in that it suppresses image noise and preserves subtle structures.", "title": "" }, { "docid": "b16407fc67058110b334b047bcfea9ac", "text": "In Educational Psychology (1997/1926), Vygotsky pleaded for a realistic approach to children’s literature. He is, among other things, critical of Chukovsky’s story “Crocodile” and maintains that this story deals with nonsense and gibberish, without social relevance. This approach Vygotsky would leave soon, and, in Psychology of Art (1971/1925), in which he develops his theory of art, he talks about connections between nursery rhymes and children’s play, exactly as the story of Chukovsky had done with the following argument: By dragging a child into a topsy-turvy world, we help his intellect work and his perception of reality. In his book Imagination and Creativity in Childhood (1995/1930), Vygotsky goes further and develops his theory of creativity. The book describes how Vygotsky regards the creative process of the human consciousness, the link between emotion and thought, and the role of the imagination. To Vygotsky, this brings to the fore the issue of the link between reality and imagination, and he discusses the issue of reproduction and creativity, both of which relate to the entire scope of human activity. Interpretations of Vygotsky in the 1990s have stressed the role of literature and the development of a cultural approach to psychology and education. It has been overlooked that Vygotsky started his career with work on the psychology of art. In this article, I want to describe Vygotsky’s theory of creativity and how he developed it. He started with a realistic approach to imagination, and he ended with a dialectical attitude to imagination. Criticism of Chukovsky’s “Crocodile” In 1928, the “Crocodile” story was forbidden. It was written by Korney Chukovsky (1882–1969). 
In his book From Two to Five Years, there is a chapter with the title “Struggle for the Fairy-Tale,” in which he attacks his antagonists, the pedologists, whom he described as a miserable group of theoreticians who studied children’s reading and maintained that the children of the proletarians needed neither “fairy-tales nor toys, or songs” (Chukovsky, 1975, p. 129). He describes how the pedologists let the word imagination become a term of abuse and how several stories were forbidden, for example, “Crocodile.” One of the slogans of the antagonists of fantasy literature was chukovskies, a term connoting anthropomorphism and bourgeois values. In 1928, Krupskaja criticized Chukovsky, the same year as Stalin was in power. Krupskaja maintained that the content of children’s literature ought to be concrete and realistic to inspire the children to be conscious communists. As an atheist, she was against everything that smelled of mysticism and religion. She pointed out, in an article in Pravda, that “Crocodile” did not live up to the demands that one could make on children’s literature. Many authors, however, came to Chukovsky’s defense, among them A. Tolstoy (Chukovsky, 1975). Ten years earlier, in 1918, only a few months after the October Revolution, the first demands were made that children’s literature should be put in the service of communist ideology. It was necessary to replace old bourgeois books, and new writers were needed. In the first attempts to create a new children’s literature, a significant role was played by Maksim Gorky. His ideal was realistic literature with such moral ideals as heroism and optimism.", "title": "" }, { "docid": "c84d41e54b12cca847135dfc2e9e13f8", "text": "PURPOSE\nBaseline restraint prevalence for surgical step-down unit was 5.08%, and for surgical intensive care unit, it was 25.93%, greater than the National Database of Nursing Quality Indicators (NDNQI) mean. Project goal was sustained restraint reduction below the NDNQI mean and maintaining patient safety.\n\n\nBACKGROUND/RATIONALE\nSoft wrist restraints are utilized for falls reduction and preventing device removal but are not universally effective and may put patients at risk of injury. Decreasing use of restrictive devices enhances patient safety and decreases risk of injury.\n\n\nDESCRIPTION\nPhase 1 consisted of advanced practice nurse-facilitated restraint rounds on each restrained patient including multidisciplinary assessment and critical thinking with bedside clinicians including reevaluation for treatable causes of agitation and restraint indications. Phase 2 evaluated less restrictive mitts, padded belts, and elbow splint devices. Following a 4-month trial, phase 3 expanded the restraint initiative including critical care requiring education and collaboration among advanced practice nurses, physician team members, and nurse champions.\n\n\nEVALUATION AND OUTCOMES\nPhase 1 decreased surgical step-down unit restraint prevalence from 5.08% to 3.57%. Phase 2 decreased restraint prevalence from 3.57% to 1.67%, less than the NDNQI mean. 
Phase 3 expansion in surgical intensive care units resulted in wrist restraint prevalence from 18.19% to 7.12% within the first year, maintained less than the NDNQI benchmarks while preserving patient safety.\n\n\nINTERPRETATION/CONCLUSION\nThe initiative produced sustained reduction in acute/critical care well below the NDNQI mean without corresponding increase in patient medical device removal.\n\n\nIMPLICATIONS\nBy managing causes of agitation, need for restraints is decreased, protecting patients from injury and increasing patient satisfaction. Follow-up research may explore patient experiences with and without restrictive device use.", "title": "" }, { "docid": "41cfa1840ef8b6f35865b220c087302b", "text": "Ultra-high voltage (>10 kV) power devices based on SiC are gaining significant attentions since Si power devices are typically at lower voltage levels. In this paper, a world record 22kV Silicon Carbide (SiC) p-type ETO thyristor is developed and reported as a promising candidate for ultra-high voltage applications. The device is based on a 2cm2 22kV p type gate turn off thyristor (p-GTO) structure. Its static as well as dynamic performances are analyzed, including the anode to cathode blocking characteristics, forward conduction characteristics at different temperatures, turn-on and turn-off dynamic performances. The turn-off energy at 6kV, 7kV and 8kV respectively is also presented. In addition, theoretical boundary of the reverse biased safe operation area (RBSOA) of the 22kV SiC ETO is obtained by simulations and the experimental test also demonstrated a wide RBSOA.", "title": "" }, { "docid": "945bf7690169b5f2e615324fb133bc19", "text": "Exponential growth in the number of scientific publications yields the need for effective automatic analysis of rhetorical aspects of scientific writing. Acknowledging the argumentative nature of scientific text, in this work we investigate the link between the argumentative structure of scientific publications and rhetorical aspects such as discourse categories or citation contexts. To this end, we (1) augment a corpus of scientific publications annotated with four layers of rhetoric annotations with argumentation annotations and (2) investigate neural multi-task learning architectures combining argument extraction with a set of rhetorical classification tasks. By coupling rhetorical classifiers with the extraction of argumentative components in a joint multi-task learning setting, we obtain significant performance gains for different rhetorical analysis tasks.", "title": "" } ]
scidocsrr
0edabeebbf0365b18eeacd6d81e02853
A Stress Sensor Based on Galvanic Skin Response (GSR) Controlled by ZigBee
[ { "docid": "1d51506f851a8b125edd7edcd8c6bd1b", "text": "A stress-detection system is proposed based on physiological signals. Concretely, galvanic skin response (GSR) and heart rate (HR) are proposed to provide information on the state of mind of an individual, due to their nonintrusiveness and noninvasiveness. Furthermore, specific psychological experiments were designed to induce properly stress on individuals in order to acquire a database for training, validating, and testing the proposed system. Such system is based on fuzzy logic, and it described the behavior of an individual under stressing stimuli in terms of HR and GSR. The stress-detection accuracy obtained is 99.5% by acquiring HR and GSR during a period of 10 s, and what is more, rates over 90% of success are achieved by decreasing that acquisition period to 3-5 s. Finally, this paper comes up with a proposal that an accurate stress detection only requires two physiological signals, namely, HR and GSR, and the fact that the proposed stress-detection system is suitable for real-time applications.", "title": "" }, { "docid": "963eb2a6225a1f320489a504f8010e94", "text": "A method for recognizing the emotion states of subjects based on 30 features extracted from their Galvanic Skin Response (GSR) signals was proposed. GSR signals were acquired by means of experiments attended by those subjects. Next the data was normalized with the calm signal of the same subject after being de-noised. Then the normalized data were extracted features before the step of feature selection. Immune Hybrid Particle Swarm Optimization (IH-PSO) was proposed to select the feature subsets of different emotions. Classifier for feature selection was evaluated on the correct recognition as well as number of the selected features. At last, this paper verified the effectiveness of the feature subsets selected with another new data. All performed in this paper illustrate that IH-PSO can achieve much effective results, and further more, demonstrate that there is significant emotion information in GSR signal.", "title": "" } ]
[ { "docid": "3b07476ebb8b1d22949ec32fc42d2d05", "text": "We provide a systematic review of the adaptive comanagement (ACM) literature to (i) investigate how the concept of governance is considered and (ii) examine what insights ACM offers with reference to six key concerns in environmental governance literature: accountability and legitimacy; actors and roles; fit, interplay, and scale; adaptiveness, flexibility, and learning; evaluation and monitoring; and, knowledge. Findings from the systematic review uncover a complicated relationship with evidence of conceptual closeness as well as relational ambiguities. The findings also reveal several specific contributions from the ACM literature to each of the six key environmental governance concerns, including applied strategies for sharing power and responsibility and value of systems approaches in understanding problems of fit. More broadly, the research suggests a dissolving or fuzzy boundary between ACM and governance, with implications for understanding emerging approaches to navigate social-ecological system change. Future research opportunities may be found at the confluence of ACM and environmental governance scholarship, such as identifying ways to build adaptive capacity and encouraging the development of more flexible governance arrangements.", "title": "" }, { "docid": "dbfb89ae6abef4d3dd9fa7591f0c57b1", "text": "While everyday document search is done by keyword-based queries to search engines, we have situations that need deep search of documents such as scrutinies of patents, legal documents, and so on. In such cases, using document queries, instead of keyword-based queries, can be more helpful because it exploits more information from the query document. This paper studies a scheme of document search based on document queries. In particular, it uses centrality vectors, instead of tf-idf vectors, to represent query documents, combined with the Word2vec method to capture the semantic similarity in contained words. This scheme improves the performance of document search and provides a way to find documents not only lexically, but semantically close to a query document.", "title": "" }, { "docid": "01ea2d3c28382459aafa064e70e582d3", "text": "* In recent decades, an intriguing view of human cognition has garnered increasing support. According to this view, which I will call 'the hypothesis of extended cognition' ('HEC', hereafter), human cognitive processing literally extends into the environment surrounding the organism, and human cognitive states literally comprise—as wholes do their proper parts— elements in that environment; in consequence, while the skin and scalp may encase the human organism, they do not delimit the thinking subject. 1 The hypothesis of extended cognition should provoke our critical interest. Acceptance of HEC would alter our approach to research and theorizing in cognitive science and, it would seem, significantly change our conception of persons. Thus, if HEC faces substantive difficulties, these should be brought to light; this paper is meant to do just that, exposing some of the problems HEC must overcome if it is to stand among leading views of the nature of human cognition. The essay unfolds as follows: The first section consists of preliminary remarks, mostly about the scope and content of HEC as I will construe it. Sections II and III clarify HEC by situating it with respect to related theses one finds in the literature—the hypothesis of embedded cognition Association. 
I would like to express my appreciation to members of all three audiences for their useful feedback (especially William Lycan at the Mountain-Plains and David Chalmers at the APA), as well as to my conference commentators, Robert Welshon and Tadeusz Zawidzki. I also benefited from discussing extended cognition with 2 and content-externalism. The remaining sections develop a series of objections to HEC and the arguments that have been offered in its support. The first objection appeals to common sense: HEC implies highly counterintuitive attributions of belief. Of course, HEC-theorists can take, and have taken, a naturalistic stand. They claim that HEC need not be responsive to commonsense objections, for HEC is being offered as a theoretical postulate of cognitive science; whether we should accept HEC depends, they say, on the value of the empirical work premised upon it. Thus, I consider a series of arguments meant to show that HEC is a promising causal-explanatory hypothesis, concluding that these arguments fail and that, ultimately, HEC appears to be of marginal interest as part of a philosophical foundation for cognitive science. If the cases canvassed here are any indication, adopting HEC results in a significant loss of explanatory power or, at the …", "title": "" }, { "docid": "a4b57037235e306034211e07e8500399", "text": "As wireless devices boom and bandwidth-hungry applications (e.g., video and cloud uploading) get popular, today's wireless local area networks (WLANs) become not only crowded but also stressed at throughput. Multiuser multiple-input-multiple-output (MU-MIMO), an advanced form of MIMO, has gained attention due to its huge potential in improving the performance of WLANs. This paper surveys random access-based medium access control (MAC) protocols for MU-MIMO-enabled WLANs. It first provides background information about the evolution and the fundamental MAC schemes of IEEE 802.11 Standards and Amendments, and then identifies the key requirements of designing MU-MIMO MAC protocols for WLANs. After this, the most representative MU-MIMO MAC proposals in the literature are overviewed by benchmarking their MAC procedures and examining the key components, such as the channel state information acquisition, decoding/precoding, and scheduling schemes. Classifications and discussions on important findings of the surveyed MAC protocols are provided, based on which, the research challenges for designing effective MU-MIMO MAC protocols, as well as the envisaged MAC's role in the future heterogeneous networks, are highlighted.", "title": "" }, { "docid": "16db60e96604f65f8b6f4f70e79b8ae5", "text": "Yahoo! Answers is currently one of the most popular question answering systems. We claim however that its user experience could be significantly improved if it could route the \"right question\" to the \"right user.\" Indeed, while some users would rush answering a question such as \"what should I wear at the prom?,\" others would be upset simply being exposed to it. We argue here that Community Question Answering sites in general and Yahoo! Answers in particular, need a mechanism that would expose users to questions they can relate to and possibly answer.\n We propose here to address this need via a multi-channel recommender system technology for associating questions with potential answerers on Yahoo! Answers. One novel aspect of our approach is exploiting a wide variety of content and social signals users regularly provide to the system and organizing them into channels. 
Content signals relate mostly to the text and categories of questions and associated answers, while social signals capture the various user interactions with questions, such as asking, answering, voting, etc. We fuse and generalize known recommendation approaches within a single symmetric framework, which incorporates and properly balances multiple types of signals according to channels. Tested on a large scale dataset, our model exhibits good performance, clearly outperforming standard baselines.", "title": "" }, { "docid": "6a9e30fd08b568ef6607158cab4f82b2", "text": "Expertise with unfamiliar objects (‘greebles’) recruits face-selective areas in the fusiform gyrus (FFA) and occipital lobe (OFA). Here we extend this finding to other homogeneous categories. Bird and car experts were tested with functional magnetic resonance imaging during tasks with faces, familiar objects, cars and birds. Homogeneous categories activated the FFA more than familiar objects. Moreover, the right FFA and OFA showed significant expertise effects. An independent behavioral test of expertise predicted relative activation in the right FFA for birds versus cars within each group. The results suggest that level of categorization and expertise, rather than superficial properties of objects, determine the specialization of the FFA.", "title": "" }, { "docid": "1cde5c2c4e4fe5d791242da86d4dd06d", "text": "Recent years have seen an increasing interest in micro aerial vehicles (MAVs) and flapping flight in connection to that. The Delft University of Technology has developed a flapping wing MAV, “DelFly II”, which relies on a flapping bi-plane wing configuration for thrust and lift. The ultimate aim of the present research is to improve the flight performance of the DelFly II from both an aerodynamic and constructional perspective. This is pursued by a parametric wing geometry study in combination with a detailed aerodynamic and aeroelastic investigation. In the geometry study an improved wing geometry was found, where stiffeners are placed more outboard for a more rigid in-flight wing shape. The improved wing shows a 10% increase in the thrust-to-power ratio. Investigations into the swirling strength around the DelFly wing in hovering flight show a leading edge vortex (LEV) during the inand out-stroke. The LEV appears to be less stable than in insect flight, since some shedding of LEV is present. Nomenclature Symbol Description Unit f Wing flapping frequency Hz P Power W R DelFly wing length (semi-span) mm T Thrust N λci Positive imaginary part of eigenvalue τ Dimensionless time Abbreviations LEV Leading Edge Vortex MAV Micro Aerial Vehicle UAV Unmanned Aerial Vehicle", "title": "" }, { "docid": "358adb9e7fb3507d8cfe8af85e028686", "text": "An under-recognized inflammatory dermatosis characterized by an evolution of distinctive clinicopathological features\" (2016).", "title": "" }, { "docid": "968ea2dcfd30492a81a71be25f16e350", "text": "Tree-structured data are becoming ubiquitous nowadays and manipulating them based on similarity is essential for many applications. The generally accepted similarity measure for trees is the edit distance. Although similarity search has been extensively studied, searching for similar trees is still an open problem due to the high complexity of computing the tree edit distance. In this paper, we propose to transform tree-structured data into an approximate numerical multidimensional vector which encodes the original structure information. 
We prove that the L1 distance of the corresponding vectors, whose computational complexity is O(|T1| + |T2|), forms a lower bound for the edit distance between trees. Based on the theoretical analysis, we describe a novel algorithm which embeds the proposed distance into a filter-and-refine framework to process similarity search on tree-structured data. The experimental results show that our algorithm reduces dramatically the distance computation cost. Our method is especially suitable for accelerating similarity query processing on large trees in massive datasets.", "title": "" }, { "docid": "4c563b09a10ce0b444edb645ce411d42", "text": "Privacy and security are two important but seemingly contradictory objectives in a pervasive computing environment (PCE). On one hand, service providers want to authenticate legitimate users and make sure they are accessing their authorized services in a legal way. On the other hand, users want to maintain the necessary privacy without being tracked down for wherever they are and whatever they are doing. In this paper, a novel privacy preserving authentication and access control scheme to secure the interactions between mobile users and services in PCEs is proposed. The proposed scheme seamlessly integrates two underlying cryptographic primitives, namely blind signature and hash chain, into a highly flexible and lightweight authentication and key establishment protocol. The scheme provides explicit mutual authentication between a user and a service while allowing the user to anonymously interact with the service. Differentiated service access control is also enabled in the proposed scheme by classifying mobile users into different service groups. The correctness of the proposed authentication and key establishment protocol is formally verified based on Burrows-Abadi-Needham logic", "title": "" }, { "docid": "1a819d090746e83676b0fc3ee94fd526", "text": "Brain-computer interfaces (BCIs) use signals recorded from the brain to operate robotic or prosthetic devices. Both invasive and noninvasive approaches have proven effective. Achieving the speed, accuracy, and reliability necessary for real-world applications remains the major challenge for BCI-based robotic control.", "title": "" }, { "docid": "b05fc1f939ff50dc07dbbc170cd28478", "text": "A compact multiresonant antenna for octaband LTE/WWAN operation in the internal smartphone applications is proposed and discussed in this letter. With a small volume of 15×25×4 mm3, the presented antenna comprises two direct feeding strips and a chip-inductor-loaded two-branch shorted strip. The two direct feeding strips can provide two resonant modes at around 1750 and 2650 MHz, and the two-branch shorted strip can generate a double-resonance mode at about 725 and 812 MHz. Moreover, a three-element bandstop matching circuit is designed to generate an additional resonance for bandwidth enhancement of the lower band. Ultimately, up to five resonances are achieved to cover the desired 704-960- and 1710-2690-MHz bands. Simulated and measured results are presented to demonstrate the validity of the proposed antenna.", "title": "" }, { "docid": "c497964a942cc4187ab5dd8c8ea1c6d4", "text": "De novo sequencing is an important task in proteomics to identify novel peptide sequences. 
Traditionally, only one MS/MS spectrum is used for the sequencing of a peptide; however, the use of multiple spectra of the same peptide with different types of fragmentation has the potential to significantly increase the accuracy and practicality of de novo sequencing. Research into the use of multiple spectra is in a nascent stage. We propose a general framework to combine the two different types of MS/MS data. Experiments demonstrate that our method significantly improves the de novo sequencing of existing software.", "title": "" }, { "docid": "f6826b5983bc4af466e42e149ac19ba8", "text": "Automatic violence detection from video is a hot topic for many video surveillance applications. However, there has been little success in developing an algorithm that can detect violence in surveillance videos with high performance. In this paper, following our recently proposed idea of motion Weber local descriptor (WLD), we make two major improvements and propose a more effective and efficient algorithm for detecting violence from motion images. First, we propose an improved WLD (IWLD) to better depict low-level image appearance information, and then extend the spatial descriptor IWLD by adding a temporal component to capture local motion information and hence form the motion IWLD. Second, we propose a modified sparse-representation-based classification model to both control the reconstruction error of coding coefficients and minimize the classification error. Based on the proposed sparse model, a class-specific dictionary containing dictionary atoms corresponding to the class labels is learned using class labels of training samples. With this learned dictionary, not only the representation residual but also the representation coefficients become discriminative. A classification scheme integrating the modified sparse model is developed to exploit such discriminative information. The experimental results on three benchmark data sets have demonstrated the superior performance of the proposed approach over the state of the arts.", "title": "" }, { "docid": "b850d522f3283e638a5995242ebe2b08", "text": "Agile methods may produce software faster but we also need to know how they meet our quality requirements. In this paper we compare the waterfall model with agile processes to show how agile methods achieve software quality under time pressure and in an unstable requirements environment, i.e. we analyze agile software quality assurance. We present a detailed waterfall model showing its software quality support processes. We then show the quality practices that agile methods have integrated into their processes. This allows us to answer the question \"can agile methods ensure quality even though they develop software faster and can handle unstable requirements?\".", "title": "" }, { "docid": "b23230f0386f185b7d5eb191034d58ec", "text": "Risk management in global information technology (IT) projects is becoming a critical area of concern for practitioners. Global IT projects usually span multiple locations involving various culturally diverse groups that use multiple standards and technologies. These multiplicities cause dynamic risks through interactions among internal (i.e., people, process, and technology) and external elements (i.e., business and natural environments) of global IT projects. This study proposes an agile risk-management framework for global IT project settings. 
By analyzing the dynamic interactions among multiplicities (e.g., multi-locations, multi-cultures, multi-groups, and multi-interests) embedded in the project elements, we identify the dynamic risks threatening the success of a global IT project. Adopting the principles of service-oriented architecture (SOA), we further propose a set of agile management strategies for mitigating the dynamic risks. The mitigation strategies are conceptually validated. The proposed framework will help practitioners understand the potential risks in their global IT projects and resolve their complex situations when certain types of dynamic risks arise.", "title": "" }, { "docid": "b91204ac8a118fcde9a774e925f24a7e", "text": "Document clustering has been recognized as a central problem in text data management. Such a problem becomes particularly challenging when document contents are characterized by subtopical discussions that are not necessarily relevant to each other. Existing methods for document clustering have traditionally assumed that a document is an indivisible unit for text representation and similarity computation, which may not be appropriate to handle documents with multiple topics. In this paper, we address the problem of multi-topic document clustering by leveraging the natural composition of documents in text segments that are coherent with respect to the underlying subtopics. We propose a novel document clustering framework that is designed to induce a document organization from the identification of cohesive groups of segment-based portions of the original documents. We empirically give evidence of the significance of our segment-based approach on large collections of multi-topic documents, and we compare it to conventional methods for document clustering.", "title": "" }, { "docid": "95d8b83eadde6d6da202341c0b9238c8", "text": "Numerous studies have demonstrated that water-based compost preparations, referred to as compost tea and compost-water extract, can suppress phytopathogens and plant diseases. Despite its potential, compost tea has generally been considered as inadequate for use as a biocontrol agent in conventional cropping systems but important to organic producers who have limited disease control options. The major impediments to the use of compost tea have been the lessthan-desirable and inconsistent levels of plant disease suppression as influenced by compost tea production and application factors including compost source and maturity, brewing time and aeration, dilution and application rate and application frequency. Although the mechanisms involved in disease suppression are not fully understood, sterilization of compost tea has generally resulted in a loss in disease suppressiveness. This indicates that the mechanisms of suppression are often, or predominantly, biological, although physico-chemical factors have also been implicated. Increasing the use of molecular approaches, such as metagenomics, metaproteomics, metatranscriptomics and metaproteogenomics should prove useful in better understanding the relationships between microbial abundance, diversity, functions and disease suppressive efficacy of compost tea. Such investigations are crucial in developing protocols for optimizing the compost tea production process so as to maximize disease suppressive effect without exposing the manufacturer or user to the risk of human pathogens. 
To this end, it is recommended that compost tea be used as part of an integrated disease management system.", "title": "" }, { "docid": "72a86b52797d61bf631d75cd7109e9d9", "text": "We introduce Olympus, a freely available framework for research in conversational interfaces. Olympus’ open, transparent, flexible, modular and scalable nature facilitates the development of large-scale, real-world systems, and enables research leading to technological and scientific advances in conversational spoken language interfaces. In this paper, we describe the overall architecture, several systems spanning different domains, and a number of current research efforts supported by Olympus.", "title": "" }, { "docid": "3b302ce4b5b8b42a61c7c4c25c0f3cbf", "text": "This paper describes quorum leases, a new technique that allows Paxos-based systems to perform reads with high throughput and low latency. Quorum leases do not sacrifice consistency and have only a small impact on system availability and write latency. Quorum leases allow a majority of replicas to perform strongly consistent local reads, which substantially reduces read latency at those replicas (e.g., by two orders of magnitude in wide-area scenarios). Previous techniques for performing local reads in Paxos systems either (a) sacrifice consistency; (b) allow only one replica to read locally; or (c) decrease the availability of the system and increase the latency of all updates by requiring all replicas to be notified synchronously. We describe the design of quorum leases and evaluate their benefits compared to previous approaches through an implementation running in five geo-distributed Amazon EC2 datacenters.", "title": "" } ]
scidocsrr
6f75cbc55edf5728ea099300c7dedca0
Summarization of Egocentric Videos: A Comprehensive Survey
[ { "docid": "0ff159433ed8958109ba8006822a2d67", "text": "In this paper we present VideoSET, a method for Video Summary Evaluation through Text that can evaluate how well a video summary is able to retain the semantic information contained in its original video. We observe that semantics is most easily expressed in words, and develop a text-based approach for the evaluation. Given a video summary, a text representation of the video summary is first generated, and an NLP-based metric is then used to measure its semantic distance to ground-truth text summaries written by humans. We show that our technique has higher agreement with human judgment than pixel-based distance metrics. We also release text annotations and ground-truth text summaries for a number of publicly available video datasets, for use by the computer vision community.", "title": "" }, { "docid": "c2f1750b668ec7acdd53249773081927", "text": "Video indexing and retrieval have a wide spectrum of promising applications, motivating the interest of researchers worldwide. This paper offers a tutorial and an overview of the landscape of general strategies in visual content-based video indexing and retrieval, focusing on methods for video structure analysis, including shot boundary detection, key frame extraction and scene segmentation, extraction of features including static key frame features, object features and motion features, video data mining, video annotation, video retrieval including query interfaces, similarity measure and relevance feedback, and video browsing. Finally, we analyze future research directions.", "title": "" } ]
[ { "docid": "79a16052e5e6a44ca6f9fef8ebac3c2d", "text": "Plants are among the earth's most useful and beautiful products of nature. Plants have been crucial to mankind's survival. The urgent need is that many plants are at the risk of extinction. About 50% of ayurvedic medicines are prepared using plant leaves and many of these plant species belong to the endanger group. So it is indispensable to set up a database for plant protection. We believe that the first step is to teach a computer how to classify plants. Leaf /plant identification has been a challenge for many researchers. Several researchers have proposed various techniques. In this paper we have proposed a novel framework for recognizing and identifying plants using shape, vein, color, texture features which are combined with Zernike movements. Radial basis probabilistic neural network (RBPNN) has been used as a classifier. To train RBPNN we use a dual stage training algorithm which significantly enhances the performance of the classifier. Simulation results on the Flavia leaf dataset indicates that the proposed method for leaf recognition yields an accuracy rate of 93.82%", "title": "" }, { "docid": "ef62b0e14f835a36c3157c1ae0f858e5", "text": "Algorithms based on Convolutional Neural Network (CNN) have recently been applied to object detection applications, greatly improving their performance. However, many devices intended for these algorithms have limited computation resources and strict power consumption constraints, and are not suitable for algorithms designed for GPU workstations. This paper presents a novel method to optimise CNN-based object detection algorithms targeting embedded FPGA platforms. Given parameterised CNN hardware modules, an optimisation flow takes network architectures and resource constraints as input, and tunes hardware parameters with algorithm-specific information to explore the design space and achieve high performance. The evaluation shows that our design model accuracy is above 85% and, with optimised configuration, our design can achieve 49.6 times speed-up compared with software implementation.", "title": "" }, { "docid": "b70032a5ca8382ac6853535b499f4937", "text": "Centroid and spread are commonly used approaches in ranking fuzzy numbers. Some experts rank fuzzy numbers using centroid or spread alone while others tend to integrate them together. Although a lot of methods for ranking fuzzy numbers that are related to both approaches have been presented, there are still limitations whereby the ranking obtained is inconsistent with human intuition. This paper proposes a novel method for ranking fuzzy numbers that integrates the centroid point and the spread approaches and overcomes the limitations and weaknesses of most existing methods. Proves and justifications with regard to the proposed ranking method are also presented. 5", "title": "" }, { "docid": "f8082d18f73bee4938ab81633ff02391", "text": "Against the background of Moreno’s “cognitive-affective theory of learning with media” (CATLM) (Moreno, 2006), three papers on cognitive and affective processes in learning with multimedia are discussed in this commentary. The papers provide valuable insights in how cognitive processing and learning results can be affected by constructs such as “situational interest”, “positive emotions”, or “confusion”, and they suggest questions for further research in this field. 2013 Elsevier Ltd. 
All rights reserved.", "title": "" }, { "docid": "f45b7caf3c599a6de835330c39599570", "text": "Describes an automated method to locate and outline blood vessels in images of the ocular fundus. Such a tool should prove useful to eye care specialists for purposes of patient screening, treatment evaluation, and clinical study. The authors' method differs from previously known methods in that it uses local and global vessel features cooperatively to segment the vessel network. The authors evaluate their method using hand-labeled ground truth segmentations of 20 images. A plot of the operating characteristic shows that the authors' method reduces false positives by as much as 15 times over basic thresholding of a matched filter response (MFR), at up to a 75% true positive rate. For a baseline, they also compared the ground truth against a second hand-labeling, yielding a 90% true positive and a 4% false positive detection rate, on average. These numbers suggest there is still room for a 15% true positive rate improvement, with the same false positive rate, over the authors' method. They are making all their images and hand labelings publicly available for interested researchers to use in evaluating related methods.", "title": "" }, { "docid": "ff71838a3f8f44e30dc69ed2f9371bfc", "text": "The idea that video games or computer-based applications can improve cognitive function has led to a proliferation of programs claiming to \"train the brain.\" However, there is often little scientific basis in the development of commercial training programs, and many research-based programs yield inconsistent or weak results. In this study, we sought to better understand the nature of cognitive abilities tapped by casual video games and thus reflect on their potential as a training tool. A moderately large sample of participants (n=209) played 20 web-based casual games and performed a battery of cognitive tasks. We used cognitive task analysis and multivariate statistical techniques to characterize the relationships between performance metrics. We validated the cognitive abilities measured in the task battery, examined a task analysis-based categorization of the casual games, and then characterized the relationship between game and task performance. We found that games categorized to tap working memory and reasoning were robustly related to performance on working memory and fluid intelligence tasks, with fluid intelligence best predicting scores on working memory and reasoning games. We discuss these results in the context of overlap in cognitive processes engaged by the cognitive tasks and casual games, and within the context of assessing near and far transfer. While this is not a training study, these findings provide a methodology to assess the validity of using certain games as training and assessment devices for specific cognitive abilities, and shed light on the mixed transfer results in the computer-based training literature. Moreover, the results can inform design of a more theoretically-driven and methodologically-sound cognitive training program.", "title": "" }, { "docid": "5701585d5692b4b28da3132f4094fc9f", "text": "We propose a novel neural method to extract drug-drug interactions (DDIs) from texts using external drug molecular structure information. We encode textual drug pairs with convolutional neural networks and their molecular pairs with graph convolutional networks (GCNs), and then we concatenate the outputs of these two networks. 
In the experiments, we show that GCNs can predict DDIs from the molecular structures of drugs in high accuracy and the molecular information can enhance text-based DDI extraction by 2.39 percent points in the F-score on the DDIExtraction 2013 shared task data set.", "title": "" }, { "docid": "d8f6f4bef57e26e9d2dc3684ea07a2f4", "text": "Alzheimer's disease is a progressive neurodegenerative disease that typically manifests clinically as an isolated amnestic deficit that progresses to a characteristic dementia syndrome. Advances in neuroimaging research have enabled mapping of diverse molecular, functional, and structural aspects of Alzheimer's disease pathology in ever increasing temporal and regional detail. Accumulating evidence suggests that distinct types of imaging abnormalities related to Alzheimer's disease follow a consistent trajectory during pathogenesis of the disease, and that the first changes can be detected years before the disease manifests clinically. These findings have fuelled clinical interest in the use of specific imaging markers for Alzheimer's disease to predict future development of dementia in patients who are at risk. The potential clinical usefulness of single or multimodal imaging markers is being investigated in selected patient samples from clinical expert centres, but additional research is needed before these promising imaging markers can be successfully translated from research into clinical practice in routine care.", "title": "" }, { "docid": "f11dc9f1978544823aeb61114d4f927f", "text": "This paper presents a passive radar system using GSM as illuminator of opportunity. The new feature is the used high performance uniform linear antenna (ULA) for extracting both the reference and the echo signal in a software defined radar. The signal processing steps used by the proposed scheme are detailed and the feasibility of the whole system is proved by measurements.", "title": "" }, { "docid": "ab4cada23ae2142e52c98a271c128c58", "text": "We introduce an interactive technique for manipulating simple 3D shapes based on extracting them from a single photograph. Such extraction requires understanding of the components of the shape, their projections, and relations. These simple cognitive tasks for humans are particularly difficult for automatic algorithms. Thus, our approach combines the cognitive abilities of humans with the computational accuracy of the machine to solve this problem. Our technique provides the user the means to quickly create editable 3D parts---human assistance implicitly segments a complex object into its components, and positions them in space. In our interface, three strokes are used to generate a 3D component that snaps to the shape's outline in the photograph, where each stroke defines one dimension of the component. The computer reshapes the component to fit the image of the object in the photograph as well as to satisfy various inferred geometric constraints imposed by its global 3D structure. We show that with this intelligent interactive modeling tool, the daunting task of object extraction is made simple. Once the 3D object has been extracted, it can be quickly edited and placed back into photos or 3D scenes, permitting object-driven photo editing tasks which are impossible to perform in image-space. 
We show several examples and present a user study illustrating the usefulness of our technique.", "title": "" }, { "docid": "119dd2c7eb5533ece82cff7987f21dba", "text": "Despite the word's common usage by gamers and reviewers alike, it is still not clear what immersion means. This paper explores immersion further by investigating whether immersion can be defined quantitatively, describing three experiments in total. The first experiment investigated participants' abilities to switch from an immersive to a non-immersive task. The second experiment investigated whether there were changes in participants' eye movements during an immersive task. The third experiment investigated the effect of an externally imposed pace of interaction on immersion and affective measures (state-anxiety, positive affect, negative affect). Overall the findings suggest that immersion can be measured subjectively (through questionnaires) as well as objectively (task completion time, eye movements). Furthermore, immersion is not only viewed as a positive experience: negative emotions and uneasiness (i.e. anxiety) also run high.", "title": "" }, { "docid": "bb94ef2ab26fddd794a5b469f3b51728", "text": "This study examines the treatment outcome of a ten weeks dance movement therapy intervention on quality of life (QOL). The multicentred study used a subject-design with pre-test, post-test, and six months follow-up test. 162 participants who suffered from stress were randomly assigned to the dance movement therapy treatment group (TG) (n = 97) and the wait-listed control group (WG) (65). The World Health Organization Quality of Life Questionnaire 100 (WHOQOL-100) and Munich Life Dimension List were used in both groups at all three measurement points. Repeated measures ANOVA revealed that dance movement therapy participants in all QOL dimensions always more than the WG. In the short term, they significantly improved in the Psychological domain (p > .001, WHOQOL; p > .01, Munich Life Dimension List), Social relations/life (p > .10, WHOQOL; p > .10, Munich Life Dimension List), Global value (p > .05, WHOQOL), Physical health (p > .05, Munich Life Dimension List), and General life (p > .10, Munich Life Dimension List). In the long term, dance movement therapy significantly enhanced the psychological domain (p > .05, WHOQOL; p > .05, Munich Life Dimension List), Spirituality (p > .10, WHOQOL), and General life (p > .05, Munich Life Dimension List). Dance movement therapy is effective in the shortand long-term to improve QOL. © 2012 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "ee732b213767471c29f12e7d00f4ded3", "text": "The increasing interest in scene text reading in multilingual environments raises the need to recognize and distinguish between different writing systems. In this paper, we propose a novel method for script identification in scene text using triplets of local convolutional features in combination with the traditional bag-of-visual-words model. Feature triplets are created by making combinations of descriptors extracted from local patches of the input images using a convolutional neural network. This approach allows us to generate a more descriptive codeword dictionary for the bag-of-visual-words model, as the low discriminative power of weak descriptors is enhanced by other descriptors in a triplet. The proposed method is evaluated on two public benchmark datasets for scene text script identification and a public dataset for script identification in video captions. 
The experiments demonstrate that our method outperforms the baseline and yields competitive results on all three datasets.", "title": "" }, { "docid": "7f711c94920e0bfa8917ad1b5875813c", "text": "With the increasing acceptance of Network Function Virtualization (NFV) and Software Defined Networking (SDN) technologies, a radical transformation is currently occurring inside network providers infrastructures. The trend of Software-based networks foreseen with the 5th Generation of Mobile Network (5G) is drastically changing requirements in terms of how networks are deployed and managed. One of the major changes requires the transaction towards a distributed infrastructure, in which nodes are built with standard commodity hardware. This rapid deployment of datacenters is paving the way towards a different type of environment in which the computational resources are deployed up to the edge of the network, referred to as Multi-access Edge Computing (MEC) nodes. However, MEC nodes do not usually provide enough resources for executing standard virtualization technologies typically used in large datacenters. For this reason, software containerization represents a lightweight and viable virtualization alternative for such scenarios. This paper presents an architecture based on the Open Baton Management and Orchestration (MANO) framework combining different infrastructural technologies supporting the deployment of container-based network services even at the edge of the network.", "title": "" }, { "docid": "d537214f407128585d6a4e6bab55a45b", "text": "It is well known that how to extract dynamical features is a key issue for video based face analysis. In this paper, we present a novel approach of facial action units (AU) and expression recognition based on coded dynamical features. In order to capture the dynamical characteristics of facial events, we design the dynamical haar-like features to represent the temporal variations of facial events. Inspired by the binary pattern coding, we further encode the dynamic haar-like features into binary pattern features, which are useful to construct weak classifiers for boosting learning. Finally the Adaboost is performed to learn a set of discriminating coded dynamic features for facial active units and expression recognition. Experiments on the CMU expression database and our own facial AU database show its encouraging performance.", "title": "" }, { "docid": "fff9e38c618a6a644e3795bdefd74801", "text": "Several code smell detection tools have been developed providing different results, because smells can be subjectively interpreted, and hence detected, in different ways. In this paper, we perform the largest experiment of applying machine learning algorithms to code smells to the best of our knowledge. We experiment 16 different machine-learning algorithms on four code smells (Data Class, Large Class, Feature Envy, Long Method) and 74 software systems, with 1986 manually validated code smell samples. We found that all algorithms achieved high performances in the cross-validation data set, yet the highest performances were obtained by J48 and Random Forest, while the worst performance were achieved by support vector machines. However, the lower prevalence of code smells, i.e., imbalanced data, in the entire data set caused varying performances that need to be addressed in the future studies. 
We conclude that the application of machine learning to the detection of these code smells can provide high accuracy (>96 %), and only a hundred training examples are needed to reach at least 95 % accuracy.", "title": "" }, { "docid": "04c8ed83fce5c5052a23d02082a11f00", "text": "Usually, well-being has been measured by means of questionnaires or scales. Although most of these methods have a high level of reliability and validity, they present some limitations. In order to try to improve well-being assessment, in the present work, the authors propose a new complementary instrument: The Implicit Overall Well-Being Measure (IOWBM). The Implicit Association Test (IAT) was adapted to measure wellbeing by assessing associations of the self with well-being-related words. In the first study, the IOWBM showed good internal consistency and adequate temporal reliability. In the second study, it presented weak correlations with explicit well-being measures. The third study examined the validity of the measure, analyzing the effect of traumatic memories on implicit well-being. The results showed that people who remember a traumatic event presented low levels of implicit well-being compared with people in the control condition.", "title": "" }, { "docid": "28fb1491be87cc850200eddd5011315d", "text": "While Salsa and ChaCha are well known software oriented stream ciphers, since the work of Aumasson et al in FSE 2008 there aren’t many significant results against them. The basic model of their attack was to introduce differences in the IV bits, obtain biases after a few forward rounds, as well as to look at the Probabilistic Neutral Bits (PNBs) while reverting back. In this paper we first consider the biases in the forward rounds, and estimate an upper bound on the number of rounds till such biases can be observed. For this, we propose a hybrid model (under certain assumptions), where initially the nonlinear rounds as proposed by the designer are considered, and then we employ their linearized counterpart. The effect of reverting the rounds with the idea of PNBs is also considered. Based on the assumptions and analysis, we conclude that 12 rounds of Salsa and ChaCha should be considered sufficient for 256-bit keys under the current best known attack models.", "title": "" }, { "docid": "53bed9c8e439ed9dcb64b8724a3fc389", "text": "This paper presents the outcomes of research into an automatic classification system based on the lingual part of music. Two novel kinds of short features are extracted from lyrics using tf*idf and rhyme. Meta-learning algorithm is adapted to combine these two sets of features. Results show that our features promote the accuracy of classification and meta-learning algorithm is effective in fusing the two features.", "title": "" }, { "docid": "45dfa7f6b1702942b5abfb8de920d1c2", "text": "Loneliness is a common condition in older adults and is associated with increased morbidity and mortality, decreased sleep quality, and increased risk of cognitive decline. Assessing loneliness in older adults is challenging due to the negative desirability biases associated with being lonely. Thus, it is necessary to develop more objective techniques to assess loneliness in older adults. In this paper, we describe a system to measure loneliness by assessing in-home behavior using wireless motion and contact sensors, phone monitors, and computer software as well as algorithms developed to assess key behaviors of interest. 
We then present results showing the accuracy of the system in detecting loneliness in a longitudinal study of 16 older adults who agreed to have the sensor platform installed in their own homes for up to 8 months. We show that loneliness is significantly associated with both time out-of-home (β = -0.88 and p < 0.01) and number of computer sessions (β = 0.78 and p < 0.05). R2 for the model was 0.35. We also show the model's ability to predict out-of-sample loneliness, demonstrating that the correlation between true loneliness and predicted out-of-sample loneliness is 0.48. When compared with the University of California at Los Angeles loneliness score, the normalized mean absolute error of the predicted loneliness scores was 0.81 and the normalized root mean squared error was 0.91. These results represent first steps toward an unobtrusive, objective method for the prediction of loneliness among older adults, and mark the first time multiple objective behavioral measures have been related to this key health outcome.", "title": "" } ]
scidocsrr
02ce80dc277237d28e5b16de1f8a14d3
Mobile-D: an agile approach for mobile application development
[ { "docid": "67d704317471c71842a1dfe74ddd324a", "text": "Agile software development methods have caught the attention of software engineers and researchers worldwide. Scientific research is yet scarce. This paper reports results from a study, which aims to organize, analyze and make sense out of the dispersed field of agile software development methods. The comparative analysis is performed using the method's life-cycle coverage, project management support, type of practical guidance, fitness-for-use and empirical evidence as the analytical lenses. The results show that agile software development methods, without rationalization, cover certain/different phases of the software development life-cycle and most of the them do not offer adequate support for project management. Yet, many methods still attempt to strive for universal solutions (as opposed to situation appropriate) and the empirical evidence is still very limited Based on the results, new directions are suggested In principal it is suggested to place emphasis on methodological quality -- not method quantity.", "title": "" }, { "docid": "83637dc7109acc342d50366f498c141a", "text": "With the further development of computer technology, the software development process has some new goals and requirements. In order to adapt to these changes, people has optimized and improved the previous method. At the same time, some of the traditional software development methods have been unable to adapt to the requirements of people. Therefore, in recent years there have been some new lightweight software process development methods, That is agile software development, which is widely used and promoted. In this paper the author will firstly introduces the background and development about agile software development, as well as comparison to the traditional software development. Then the second chapter gives the definition of agile software development and characteristics, principles and values. In the third chapter the author will highlight several different agile software development methods, and characteristics of each method. In the fourth chapter the author will cite a specific example, how agile software development is applied in specific areas.Finally the author will conclude his opinion. This article aims to give readers a overview of agile software development and how people use it in practice.", "title": "" } ]
[ { "docid": "3f06fc0b50a1de5efd7682b4ae9f5a46", "text": "We present ShadowDraw, a system for guiding the freeform drawing of objects. As the user draws, ShadowDraw dynamically updates a shadow image underlying the user's strokes. The shadows are suggestive of object contours that guide the user as they continue drawing. This paradigm is similar to tracing, with two major differences. First, we do not provide a single image from which the user can trace; rather ShadowDraw automatically blends relevant images from a large database to construct the shadows. Second, the system dynamically adapts to the user's drawings in real-time and produces suggestions accordingly. ShadowDraw works by efficiently matching local edge patches between the query, constructed from the current drawing, and a database of images. A hashing technique enforces both local and global similarity and provides sufficient speed for interactive feedback. Shadows are created by aggregating the edge maps from the best database matches, spatially weighted by their match scores. We test our approach with human subjects and show comparisons between the drawings that were produced with and without the system. The results show that our system produces more realistically proportioned line drawings.", "title": "" }, { "docid": "74972989924aef7d8923d3297d221e23", "text": "Emerging evidence suggests that a traumatic brain injury (TBI) in childhood may disrupt the ability to abstract the central meaning or gist-based memory from connected language (discourse). The current study adopts a novel approach to elucidate the role of immediate and working memory processes in producing a cohesive and coherent gist-based text in the form of a summary in children with mild and severe TBI as compared to typically developing children, ages 8-14 years at test. Both TBI groups showed decreased performance on a summary production task as well as retrieval of specific content from a long narrative. Working memory on n-back tasks was also impaired in children with severe TBI, whereas immediate memory performance for recall of a simple word list in both TBI groups was comparable to controls. Interestingly, working memory, but not simple immediate memory for a word list, was significantly correlated with summarization ability and ability to recall discourse content.", "title": "" }, { "docid": "54df0e1a435d673053f9264a4c58e602", "text": "Next location prediction anticipates a person’s movement based on the history of previous sojourns. It is useful for proactive actions taken to assist the person in an ubiquitous environment. This paper evaluates next location prediction methods: dynamic Bayesian network, multi-layer perceptron, Elman net, Markov predictor, and state predictor. For the Markov and state predictor we use additionally an optimization, the confidence counter. The criterions for the comparison are the prediction accuracy, the quantity of useful predictions, the stability, the learning, the relearning, the memory and computing costs, the modelling costs, the expandability, and the ability to predict the time of entering the next location. 
For evaluation we use the same benchmarks containing movement sequences of real persons within an office building.", "title": "" }, { "docid": "919d86270951a89a14398ee796b4e542", "text": "The role of the circadian clock in skin and the identity of genes participating in its chronobiology remain largely unknown, leading us to define the circadian transcriptome of mouse skin at two different stages of the hair cycle, telogen and anagen. The circadian transcriptomes of telogen and anagen skin are largely distinct, with the former dominated by genes involved in cell proliferation and metabolism. The expression of many metabolic genes is antiphasic to cell cycle-related genes, the former peaking during the day and the latter at night. Consistently, accumulation of reactive oxygen species, a byproduct of oxidative phosphorylation, and S-phase are antiphasic to each other in telogen skin. Furthermore, the circadian variation in S-phase is controlled by BMAL1 intrinsic to keratinocytes, because keratinocyte-specific deletion of Bmal1 obliterates time-of-day-dependent synchronicity of cell division in the epidermis leading to a constitutively elevated cell proliferation. In agreement with higher cellular susceptibility to UV-induced DNA damage during S-phase, we found that mice are most sensitive to UVB-induced DNA damage in the epidermis at night. Because in the human epidermis maximum numbers of keratinocytes go through S-phase in the late afternoon, we speculate that in humans the circadian clock imposes regulation of epidermal cell proliferation so that skin is at a particularly vulnerable stage during times of maximum UV exposure, thus contributing to the high incidence of human skin cancers.", "title": "" }, { "docid": "0cfac94bf56f39386802571ecd45cd3b", "text": "Cloud Computing provides functionality for managing information data in a distributed, ubiquitous and pervasive manner supporting several platforms, systems and applications. This work presents the implementation of a mobile system that enables electronic healthcare data storage, update and retrieval using Cloud Computing. The mobile application is developed using Google's Android operating system and provides management of patient health records and medical images (supporting DICOM format and JPEG2000 coding). The developed system has been evaluated using the Amazon's S3 cloud service. This article summarizes the implementation details and presents initial results of the system in practice.", "title": "" }, { "docid": "76b081d26dc339218652cd6d7e0dfe4c", "text": "Software developers working on change tasks commonly experience a broad range of emotions, ranging from happiness all the way to frustration and anger. Research, primarily in psychology, has shown that for certain kinds of tasks, emotions correlate with progress and that biometric measures, such as electro-dermal activity and electroencephalography data, might be used to distinguish between emotions. In our research, we are building on this work and investigate developers' emotions, progress and the use of biometric measures to classify them in the context of software change tasks. We conducted a lab study with 17 participants working on two change tasks each. Participants were wearing three biometric sensors and had to periodically assess their emotions and progress. The results show that the wide range of emotions experienced by developers is correlated with their perceived progress on the change tasks. 
Our analysis also shows that we can build a classifier to distinguish between positive and negative emotions in 71.36% and between low and high progress in 67.70% of all cases. These results open up opportunities for improving a developer's productivity. For instance, one could use such a classifier for providing recommendations at opportune moments when a developer is stuck and making no progress.", "title": "" }, { "docid": "abd026e3f71c7e2a2b8d4fc8900b800f", "text": "Text Summarization aims to generate concise and compressed form of original documents. The techniques used for text summarization may be categorized as extractive summarization and abstractive summarization. We consider extractive techniques which are based on selection of important sentences within a document. A major issue in extractive summarization is how to select important sentences, i.e., what criteria should be defined for selection of sentences which are eventually part of the summary. We examine this issue using rough sets notion of reducts. A reduct is an attribute subset which essentially contains the same information as the original attribute set. In particular, we defined and examined three types of matrices based on an information table, namely, discernibility matrix, indiscernibility matrix and equal to one matrix. Each of these matrices represents a certain type of relationship between the objects of an information table. Three types of reducts are determined based on these matrices. The reducts are used to select sentences and consequently generate text summaries. Experimental results and comparisons with existing approaches advocates for the use of the proposed approach in generating text summaries.", "title": "" }, { "docid": "7cf8e2555cfccc1fc091272559ad78d7", "text": "This paper presents a multimodal emotion recognition method that uses a feature-level combination of three-dimensional (3D) geometric features (coordinates, distance and angle of joints), kinematic features such as velocity and displacement of joints, and features extracted from daily behavioral patterns such as frequency of head nod, hand wave, and body gestures that represent specific emotions. Head, face, hand, body, and speech data were captured from 15 participants using an infrared sensor (Microsoft Kinect). The 3D geometric and kinematic features were developed using raw feature data from the visual channel. Human emotional behavior-based features were developed using inter-annotator agreement and commonly observed expressions, movements and postures associated to specific emotions. The features from each modality and the behavioral pattern-based features (head shake, arm retraction, body forward movement depicting anger) were combined to train the multimodal classifier for the emotion recognition system. The classifier was trained using 10-fold cross validation and support vector machine (SVM) to predict six basic emotions. The results showed improvement in emotion recognition accuracy (The precision increased by 3.28% and the recall rate by 3.17%) when the 3D geometric, kinematic, and human behavioral pattern-based features were combined for multimodal emotion recognition using supervised classification.", "title": "" }, { "docid": "bf2f9a0387de2b2aa3136a2879a07e83", "text": "Rich representations in reinforcement learning have been studied for the purpose of enabling generalization and making learning feasible in large state spaces. 
We introduce Object-Oriented MDPs (OO-MDPs), a representation based on objects and their interactions, which is a natural way of modeling environments and offers important generalization opportunities. We introduce a learning algorithm for deterministic OO-MDPs and prove a polynomial bound on its sample complexity. We illustrate the performance gains of our representation and algorithm in the well-known Taxi domain, plus a real-life videogame.", "title": "" }, { "docid": "25c80c2fe20576ca6f94d5abac795521", "text": "BACKGROUND\nIntelligence theory research has illustrated that people hold either \"fixed\" (intelligence is immutable) or \"growth\" (intelligence can be improved) mindsets and that these views may affect how people learn throughout their lifetime. Little is known about the mindsets of physicians, and how mindset may affect their lifetime learning and integration of feedback. Our objective was to determine if pediatric physicians are of the \"fixed\" or \"growth\" mindset and whether individual mindset affects perception of medical error reporting. \n\n\nMETHODS\nWe sent an anonymous electronic survey to pediatric residents and attending pediatricians at a tertiary care pediatric hospital. Respondents completed the \"Theories of Intelligence Inventory\" which classifies individuals on a 6-point scale ranging from 1 (Fixed Mindset) to 6 (Growth Mindset). Subsequent questions collected data on respondents' recall of medical errors by self or others.\n\n\nRESULTS\nWe received 176/349 responses (50 %). Participants were equally distributed between mindsets with 84 (49 %) classified as \"fixed\" and 86 (51 %) as \"growth\". Residents, fellows and attendings did not differ in terms of mindset. Mindset did not correlate with the small number of reported medical errors.\n\n\nCONCLUSIONS\nThere is no dominant theory of intelligence (mindset) amongst pediatric physicians. The distribution is similar to that seen in the general population. Mindset did not correlate with error reports.", "title": "" }, { "docid": "082a077db6f8b0d41c613f9a50934239", "text": "Traceability is recognized to be important for supporting agile development processes. However, after analyzing many of the existing traceability approaches it can be concluded that they strongly depend on traditional development process characteristics. Within this paper it is justified that this is a drawback to support adequately agile processes. As it is discussed, some concepts do not have the same semantics for traditional and agile methodologies. This paper proposes three features that traceability models should support to be less dependent on a specific development process: (1) user-definable traceability links, (2) roles, and (3) linkage rules. To present how these features can be applied, an emerging traceability metamodel (TmM) will be used within this paper. TmM supports the definition of traceability methodologies adapted to the needs of each project. As it is shown, after introducing these three features into traceability models, two main advantages are obtained: 1) the support they can provide to agile process stakeholders is significantly more extensive, and 2) it will be possible to achieve a higher degree of automation. 
In this sense it will be feasible to have a methodical trace acquisition and maintenance process adapted to agile processes.", "title": "" }, { "docid": "2d7963a209ec1c7f38c206a0945a1a7e", "text": "We present a system which enables a user to remove a file from both the file system and all the backup tapes on which the file is stored. The ability to remove files from all backup tapes is desirable in many cases. Our system erases information from the backup tape without actually writing on the tape. This is achieved by applying cryptography in a new way: a block cipher is used to enable the system to \"forget\" information rather than protect it. Our system is easy to install and is transparent to the end user. Further, it introduces no slowdown in system performance and little slowdown in the backup procedure.", "title": "" }, { "docid": "de8045598fe808788aca455eee4a1126", "text": "This paper presents an efficient and practical approach for automatic, unsupervised object detection and segmentation in two-texture images based on the concept of Gabor filter optimization. The entire process occurs within a hierarchical framework and consists of the steps of detection, coarse segmentation, and fine segmentation. In the object detection step, the image is first processed using a Gabor filter bank. Then, the histograms of the filtered responses are analyzed using the scale-space approach to predict the presence/absence of an object in the target image. If the presence of an object is reported, the proposed approach proceeds to the coarse segmentation stage, wherein the best Gabor filter (among the bank of filters) is automatically chosen, and used to segment the image into two distinct regions. Finally, in the fine segmentation step, the coefficients of the best Gabor filter (output from the previous stage) are iteratively refined in order to further fine-tune and improve the segmentation map produced by the coarse segmentation step. In the validation study, the proposed approach is applied as part of a machine vision scheme with the goal of quantifying the stain-release property of fabrics. To that end, the presented hierarchical scheme is used to detect and segment stains on a sizeable set of digitized fabric images, and the performance evaluation of the detection, coarse segmentation, and fine segmentation steps is conducted using appropriate metrics. The promising nature of these results bears testimony to the efficacy of the proposed approach.", "title": "" }, { "docid": "72d75ebfc728d3b287bcaf429a6b2ee5", "text": "We present a fully integrated 7nm CMOS platform featuring a 3rd generation finFET architecture, SAQP for fin formation, and SADP for BEOL metallization. This technology reflects an improvement of 2.8X routed logic density and >40% performance over the 14nm reference technology described in [1-3]. A full range of Vts is enabled on-chip through a unique multi-workfunction process. This enables both excellent low voltage SRAM response and highly scaled memory area simultaneously. The HD 6-T bitcell size is 0.0269um2. This 7nm technology is fully enabled by immersion lithography and advanced optical patterning techniques (like SAQP and SADP). However, the technology platform is also designed to leverage EUV insertion for specific multi-patterned (MP) levels for cycle time benefit and manufacturing efficiency. 
A complete set of foundation and complex IP is available in this advanced CMOS platform to enable both High Performance Compute (HPC) and mobile applications.", "title": "" }, { "docid": "deedf390faeef304bf0479a844297113", "text": "A compact 24-GHz Doppler radar module is developed in this paper for non-contact human vital-sign detection. The 24-GHz radar transceiver chip, transmitting and receiving antennas, baseband circuits, microcontroller, and Bluetooth transmission module have been integrated and implemented on a printed circuit board. For a measurement range of 1.5 m, the developed radar module can successfully detect the respiration and heartbeat of a human adult.", "title": "" }, { "docid": "f15a7d48f3c42ccc97480204dc5c8622", "text": "We have developed a wearable upper limb support system (ULSS) for support during heavy overhead tasks. The purpose of this study is to develop the voluntary motion support algorithm for the ULSS, and to confirm the effectiveness of the ULSS with the developed algorithm through dynamic evaluation experiments. The algorithm estimates the motor intention of the wearer based on a bioelectrical signal (BES). The ULSS measures the BES via electrodes attached onto the triceps brachii, deltoid, and clavicle. The BES changes in synchronization with the motion of the wearer's upper limbs. The algorithm changes a control phase by comparing the BES and threshold values. The algorithm achieves voluntary motion support for dynamic tasks by changing support torques of the ULSS in synchronization with the control phase. Five healthy adult males moved heavy loads vertically overhead in the evaluation experiments. In a random instruction experiment, the volunteers moved in synchronization with random instructions, and we confirmed that the control phase changes in synchronization with the random instructions. In a motion support experiment, we confirmed that the average number of the vertical motion with the ULSS increased 2.3 times compared to the average number without the ULSS. As a result, the ULSS with the algorithm supports the motion voluntarily, and it has a positive effect on the support. In conclusion, we could develop the novel voluntary motion support algorithm of the ULSS.", "title": "" }, { "docid": "b2470ecd83971aa877d8a38a5b88a6dc", "text": "In this paper, we improve the attention or alignment accuracy of neural machine translation by utilizing the alignments of training sentence pairs. We simply compute the distance between the machine attentions and the “true” alignments, and minimize this cost in the training procedure. Our experiments on large-scale Chinese-to-English task show that our model improves both translation and alignment qualities significantly over the large-vocabulary neural machine translation system, and even beats a state-of-the-art traditional syntax-based system.", "title": "" }, { "docid": "e9b3ddc114998e25932819e3281e2e0c", "text": "We study the problem of jointly aligning sentence constituents and predicting their similarities. While extensive sentence similarity data exists, manually generating reference alignments and labeling the similarities of the aligned chunks is comparatively onerous. This prompts the natural question of whether we can exploit easy-to-create sentence level data to train better aligners. In this paper, we present a model that learns to jointly align constituents of two sentences and also predict their similarities. 
By taking advantage of both sentence and constituent level data, we show that our model achieves state-of-the-art performance at predicting alignments and constituent similarities.", "title": "" }, { "docid": "bffbc725b52468b41c53b156f6eadedb", "text": "This paper presents the design and experimental evaluation of an underwater robot that is propelled by a pair of lateral undulatory fins, inspired by the locomotion of rays and cuttlefish. Each fin mechanism is comprised of three individually actuated fin rays, which are interconnected by an elastic membrane. An on-board microcontroller generates the rays’ motion pattern that result in the fins’ undulations, through which propulsion is generated. The prototype, which is fully untethered and energetically autonomous, also integrates an Inertial Measurement Unit for navigation purposes, a wireless communication module, and a video camera for recording underwater footage. Due to its small size and low manufacturing cost, the developed prototype can also serve as an educational platform for underwater robotics.", "title": "" } ]
scidocsrr
b8ee5b956f08ba90cb14316a18b551cf
A Multihop Peer-Communication Protocol With Fairness Guarantee for IEEE 802.16-Based Vehicular Networks
[ { "docid": "23c8dd52480d1193b2728b05c9458080", "text": "This article presents an overview of highway cooperative collision avoidance (CCA), which is an emerging vehicular safety application using the IEEE- and ASTM-adopted Dedicated Short Range Communication (DSRC) standard. Along with a description of the DSRC architecture, we introduce the concept of CCA and its implementation requirements in the context of a vehicle-to-vehicle wireless network, primarily at the Medium Access Control (MAC) and the routing layer. An overview is then provided to establish that the MAC and routing protocols from traditional Mobile Ad Hoc networks are not directly applicable for CCA and similar safety-critical applications. Specific constraints and future research directions are then identified for packet routing protocols used to support such applications in the DSRC environment. In order to further explain the interactions between CCA and its underlying networking protocols, we present an example of the safety performance of CCA using simulated vehicle crash experiments. The results from these experiments are also used to demonstrate the need for network data prioritization for safety-critical applications such as CCA. Finally, the performance sensitivity of CCA to unreliable wireless channels is discussed based on the experimental results.", "title": "" } ]
[ { "docid": "88cf953ba92b54f89cdecebd4153bee3", "text": "In this paper, we propose a novel object detection framework named \"Deep Regionlets\" by establishing a bridge between deep neural networks and conventional detection schema for accurate generic object detection. Motivated by the abilities of regionlets for modeling object deformation and multiple aspect ratios, we incorporate regionlets into an end-to-end trainable deep learning framework. The deep regionlets framework consists of a region selection network and a deep regionlet learning module. Specifically, given a detection bounding box proposal, the region selection network provides guidance on where to select regions to learn the features from. The regionlet learning module focuses on local feature selection and transformation to alleviate local variations. To this end, we first realize non-rectangular region selection within the detection framework to accommodate variations in object appearance. Moreover, we design a \"gating network\" within the regionlet learning module to enable soft regionlet selection and pooling. The Deep Regionlets framework is trained end-to-end without additional efforts. We perform ablation studies and conduct extensive experiments on the PASCAL VOC and Microsoft COCO datasets. The proposed framework outperforms state-of-the-art algorithms, such as RetinaNet and Mask R-CNN, even without additional segmentation labels.", "title": "" }, { "docid": "69ad93c7b6224321d69456c23a4185ce", "text": "Modeling fashion compatibility is challenging due to its complexity and subjectivity. Existing work focuses on predicting compatibility between product images (e.g. an image containing a t-shirt and an image containing a pair of jeans). However, these approaches ignore real-world ‘scene’ images (e.g. selfies); such images are hard to deal with due to their complexity, clutter, variations in lighting and pose (etc.) but on the other hand could potentially provide key context (e.g. the user’s body type, or the season) for making more accurate recommendations. In this work, we propose a new task called ‘Complete the Look’, which seeks to recommend visually compatible products based on scene images. We design an approach to extract training data for this task, and propose a novel way to learn the scene-product compatibility from fashion or interior design images. Our approach measures compatibility both globally and locally via CNNs and attention mechanisms. Extensive experiments show that our method achieves significant performance gains over alternative systems. Human evaluation and qualitative analysis are also conducted to further understand model behavior. We hope this work could lead to useful applications which link large corpora of real-world scenes with shoppable products.", "title": "" }, { "docid": "824480b0f5886a37ca1930ce4484800d", "text": "Conduction loss reduction technique using a small resonant capacitor for a phase shift full bridge converter with clamp diodes is proposed in this paper. The proposed technique can be implemented simply by adding a small resonant capacitor beside the leakage inductor of transformer. Since the voltage across the small resonant capacitor is applied to the small leakage inductor of transformer during freewheeling period, the primary current can be decreased rapidly. This results in the reduced conduction loss on the secondary side of transformer while the proposed technique can still guarantee the wide ZVS ranges. The operational principles and analysis are presented. 
Experimental results show that the proposed reduction technique of conduction loss can be operated properly.", "title": "" }, { "docid": "7c9c047055d123aff65c9c7a3db59dfc", "text": "Organizations publish the individual’s information in order to utilize the data for the research purpose. But the confidential information about the individual is revealed by the adversary by combining the various releases of the several organizations. This is called as linkage attacks. This attack can be avoided by the SLOMS method which vertically partitions the single quasi table and multiple sensitive tables. The SLOMS method uses MSB-KACA algorithm to generalize the quasi identifier table in order to implement k-Anonymity and bucketizes the sensitive attribute table to implement l-diversity. But there is a chance of probabilistic inference attack due to bucketization. So, the method called t-closeness can be applied over MSB-KACA algorithm which compute the value using Earth Mover Distance(EMD) and set the minimum value as threshold in order to equally distribute the attributes in the table based on the threshold ’t’. Such that the probabilistic inference attack can be avoided. The performance of t-closeness gets improved and evaluated by Disclosure rate which becomes minimal while comparing with MSB-KACA algorithm.", "title": "" }, { "docid": "5732967997a3914e0a9ef37305d18ee4", "text": "Protein palmitoylation is an essential post-translational lipid modification of proteins, and reversibly orchestrates a variety of cellular processes. Identification of palmitoylated proteins with their sites is the foundation for understanding molecular mechanisms and regulatory roles of palmitoylation. Contrasting to the labor-intensive and time-consuming experimental approaches, in silico prediction of palmitoylation sites has attracted much attention as a popular strategy. In this work, we updated our previous CSS-Palm into version 2.0. An updated clustering and scoring strategy (CSS) algorithm was employed with great improvement. The leave-one-out validation and 4-, 6-, 8- and 10-fold cross-validations were adopted to evaluate the prediction performance of CSS-Palm 2.0. Also, an additional new data set not included in training was used to test the robustness of CSS-Palm 2.0. By comparison, the performance of CSS-Palm was much better than previous tools. As an application, we performed a small-scale annotation of palmitoylated proteins in budding yeast. The online service and local packages of CSS-Palm 2.0 were freely available at: http://bioinformatics.lcd-ustc.org/css_palm.", "title": "" }, { "docid": "5a248466c2e82b8453baa483a05bc25b", "text": "Early severe stress and maltreatment produces a cascade of neurobiological events that have the potential to cause enduring changes in brain development. These changes occur on multiple levels, from neurohumoral (especially the hypothalamic-pituitary-adrenal [HPA] axis) to structural and functional. The major structural consequences of early stress include reduced size of the mid-portions of the corpus callosum and attenuated development of the left neocortex, hippocampus, and amygdala. Major functional consequences include increased electrical irritability in limbic structures and reduced functional activity of the cerebellar vermis. There are also gender differences in vulnerability and functional consequences. 
The neurobiological sequelae of early stress and maltreatment may play a significant role in the emergence of psychiatric disorders during development.", "title": "" }, { "docid": "7d1bdb84425d344155d30f4c26ce47da", "text": "In the information age, data is pervasive. In some applications, data explosion is a significant phenomenon. The massive data volume poses challenges to both human users and computers. In this project, we propose a new model for identifying representative set from a large database. A representative set is a special subset of the original dataset, which has three main characteristics: It is significantly smaller in size compared to the original dataset. It captures the most information from the original dataset compared to other subsets of the same size. It has low redundancy among the representatives it contains. We use information-theoretic measures such as mutual information and relative entropy to measure the representativeness of the representative set. We first design a greedy algorithm and then present a heuristic algorithm that delivers much better performance. We run experiments on two real datasets and evaluate the effectiveness of our representative set in terms of coverage and accuracy. The experiments show that our representative set attains expected characteristics and captures information more efficiently.", "title": "" }, { "docid": "c453dc8f6e69ff218ecb1684fc26b550", "text": "We suppose collections of XML data described by Document Type Definitions (DTDs). This data has been generated by applications and plays a role of OLTP database(s). A star schema, a well-known technique used in data warehousing, can be applied. Then dimension information is supposed to be contained in XML data. We will use the notions of subDTD and view, and formulate referential integrity constraints in XML environment. We use simple pattern matching capabilities of current XML query languages for XML view specification and tree embedding algorithms for these purposes. A dimension hierarchy is defined as a set of logically connected collections of XML data. Facts may be also conceived as elements of an XML document. Due to the structural complexity of XML data the approach requires subtler formal model than it is done with conventional dimension and fact tables described by classical star schemes. In consequence, our approach captures more from heterogeneity of source databases than it is done in classical relational approaches to data warehousing.", "title": "" }, { "docid": "db637c4e90111ebe0218fa4ccc2ce759", "text": "Existing datasets for natural language inference (NLI) have propelled research on language understanding. We propose a new method for automatically deriving NLI datasets from the growing abundance of largescale question answering datasets. Our approach hinges on learning a sentence transformation model which converts question-answer pairs into their declarative forms. Despite being primarily trained on a single QA dataset, we show that it can be successfully applied to a variety of other QA resources. Using this system, we automatically derive a new freely available dataset of over 500k NLI examples (QA-NLI), and show that it exhibits a wide range of inference phenomena rarely seen in previous NLI datasets.", "title": "" }, { "docid": "8e23dc265f4d48caae7a333db72d887e", "text": "We introduce a new mechanism for rooting trust in a cloud computing environment called the Trusted Virtual Environment Module (TVEM). 
The TVEM helps solve the core security challenge of cloud computing by enabling parties to establish trust relationships where an information owner creates and runs a virtual environment on a platform owned by a separate service provider. The TVEM is a software appliance that provides enhanced features for cloud virtual environments over existing Trusted Platform Module virtualization techniques, which includes an improved application program interface, cryptographic algorithm flexibility, and a configurable modular architecture. We define a unique Trusted Environment Key that combines trust from the information owner and the service provider to create a dual root of trust for the TVEM that is distinct for every virtual environment and separate from the platform’s trust. This paper presents the requirements, design, and architecture of our approach.", "title": "" }, { "docid": "dbdda952c63b7b7a4f8ce68f806e5238", "text": "This paper examines how real-time information gathered as part of intelligent transportation systems can be used to predict link travel times for one through five time periods ahead (of 5-min duration). The study employed a spectral basis artificial neural network (SNN) that utilizes a sinusoidal transformation technique to increase the linear separability of the input features. Link travel times from Houston that had been collected as part of the automatic vehicle identification system of the TranStar system were used as a test bed. It was found that the SNN outperformed a conventional artificial neural network and gave similar results to that of modular neural networks. However, the SNN requires significantly less effort on the part of the modeler than modular neural networks. The results of the best SNN were compared with conventional link travel time prediction techniques including a Kalman filtering model, exponential smoothing model, historical profile, and realtime profile. It was found that the SNN gave the best overall results.", "title": "" }, { "docid": "210e22e098340e4f858b4ceab1c643e6", "text": "Dimethylsulfoxide (DMSO) controlled puff induction and repression (or non-induction) in larval polytene chromosomes of Chironomus tentans were studied for the case of the Balbiani rings (BR). A characteristic reaction pattern, involving BR 1, BR 2 and BR 3, all in salivary gland chromosome IV was found. In vivo exposure of 4th instar larvae (not prepupae) to 10% DMSO at 18° C first evokes an over-stimulation of BR 3 while DMSO-stimulation of puffing at BR 1 and BR 2 always follows that of BR 3. After removal of the drug, a rapid uniform collapse of all puffs occurs, thus more or less restoring the banding pattern of all previously decondensed chromosome segments. Recovery proceeds as BR's and other puffs reappear. By observing the restoration, one can locate the site from which a BR (puff) originates. BR 2, which is normally the most active non-ribosomal gene locus in untreated larvae, here serves as an example. As the sizes of BR 3, BR 1 and BR 2 change, so do the quantities of the transcriptional products in these gene loci (and vice versa), as estimated electron-microscopically in ultrathin sections and autoradiographically in squash preparations. In autoradiograms, the DMSO-stimulated BRs exhibit the most dense concentration of silver grains and therefore the highest rate of transcriptional activity. In DMSO-repressed BRs (and other puffs) the transcription of the locus specific genes is not completely shut off. 
In chromosomes from nuclei with high labelling intensities the repressed BRs (and other puffs) always exhibit a low level of 3H-uridine incorporation in vivo. The absence of cytologically visible BR (puff) formation therefore does not necessarily indicate complete transcriptional inactivity. Typically, before the stage of puff formation the 3H-uridine labelling first appears in the interband-like regions.", "title": "" }, { "docid": "7cebca85b555c6312f14cfa90fb1b50b", "text": "This paper describes a new evolutionary algorithm that is especially well suited to AI-Assisted Game Design. The approach adopted in this paper is to use observations of AI agents playing the game to estimate the game's quality. Some of best agents for this purpose are General Video Game AI agents, since they can be deployed directly on a new game without game-specific tuning; these agents tend to be based on stochastic algorithms which give robust but noisy results and tend to be expensive to run. This motivates the main contribution of the paper: the development of the novel N-Tuple Bandit Evolutionary Algorithm, where a model is used to estimate the fitness of unsampled points and a bandit approach is used to balance exploration and exploitation of the search space. Initial results on optimising a Space Battle game variant suggest that the algorithm offers far more robust results than the Random Mutation Hill Climber and a Biased Mutation variant, which are themselves known to offer competitive performance across a range of problems. Subjective observations are also given by human players on the nature of the evolved games, which indicate a preference towards games generated by the N-Tuple algorithm.", "title": "" }, { "docid": "461dd4ba16e8a006d4a26470d8a9e10c", "text": "Control-Flow Integrity (CFI) is an intensively studied technique for hardening software security. It enforces a Control-Flow Graph (CFG) by inlining runtime checks into target programs. Many methods have been proposed to construct the enforced CFG, with different degrees of precision and sets of assumptions. However, past CFI work has not made attempt at justifying their CFG construction soundness using formal semantics and proofs. In this paper, we formalize the CFG construction in two major CFI systems, identify their assumptions, and prove their soundness; the soundness proof shows that their computed sets of targets for indirect calls are safe over-approximations.", "title": "" }, { "docid": "d0a854f4994695fbf521e94f82bd1201", "text": "S 2018 PLATFORM & POSTER PRESENTATIONS", "title": "" }, { "docid": "d6c3896357022a27513f63a5e3f8b4d3", "text": "The aging of the world's population presents vast societal and individual challenges. The relatively shrinking workforce to support the growing population of the elderly leads to a rapidly increasing amount of technological innovations in the field of elderly care. In this paper, we present an integrated framework consisting of various intelligent agents with their own expertise and responsibilities working in a holistic manner to assist, care, and accompany the elderly around the clock in the home environment. To support the independence of the elderly for Aging-In-Place (AIP), the intelligent agents must well understand the elderly, be fully aware of the home environment, possess high-level reasoning and learning capabilities, and provide appropriate tender care in the physical, cognitive, emotional, and social aspects. 
The intelligent agents sense in non-intrusive ways from different sources and provide wellness monitoring, recommendations, and services across diverse platforms and locations. They collaborate together and interact with the elderly in a natural and holistic manner to provide all-around tender care reactively and proactively. We present our implementation of the collaboration framework with a number of realized functionalities of the intelligent agents, highlighting its feasibility and importance in addressing various challenges in AIP.", "title": "" }, { "docid": "2858f5d05b08e0db02ccfab17c52a168", "text": "In the field of predictive modeling, variable selection methods can significantly drive the final outcome. While the focus of the analysis may generally be to get the most accurate predictions, it is incomplete without key driver analysis. These drivers could be demographics, geography, credit worthiness, payments history, usage, pricing, and potentially a host of many other key characteristics. Due to a large number of dimensions, many features of these broad categories are bound to remain untested. A million dollar question is how to get to a subset of effects that must definitely be tested. In this paper, we highlight what we have found to be the most effective ways of feature selection along with illustrative applications and best practices on implementation in SAS®. These methods range from simple correlation procedure (PROC CORR) to more complex techniques involving variable clustering (PROC VARCLUS), decision tree importance list (PROC SPLIT) and EXL‟s proprietary process of random feature selection from models developed on bootstrapped samples. By applying these techniques, we have been able to deliver robust and high quality statistical models with the right mix of dimensions.", "title": "" }, { "docid": "509075d64990cf7258c13dd0dfd5e282", "text": "In recent years we have seen a tremendous growth in applications of passive sensor-enabled RFID technology by researchers; however, their usability in applications such as activity recognition is limited by a key issue associated with their incapability to handle unintentional brownout events leading to missing significant sensed events such as a fall from a chair. Furthermore, due to the need to power and sample a sensor the practical operating range of passive-sensor enabled RFID tags are also limited with respect to passive RFID tags. Although using active or semi-passive tags can provide alternative solutions, they are not without the often undesirable maintenance and limited lifespan issues due to the need for batteries. In this article we propose a new hybrid powered sensor-enabled RFID tag concept which can sustain the supply voltage to the tag circuitry during brownouts and increase the operating range of the tag by combining the concepts from passive RFID tags and semipassive RFID tags, while potentially eliminating shortcomings of electric batteries. We have designed and built our concept, evaluated its desirable properties through extensive experiments and demonstrate its significance in the context of a human activity recognition application.", "title": "" }, { "docid": "0a2a39149013843b0cece63687ebe9e9", "text": "177Lu-labeled PSMA-617 is a promising new therapeutic agent for radioligand therapy (RLT) of patients with metastatic castration-resistant prostate cancer (mCRPC). 
Initiated by the German Society of Nuclear Medicine, a retrospective multicenter data analysis was started in 2015 to evaluate efficacy and safety of 177Lu-PSMA-617 in a large cohort of patients.\n\n\nMETHODS\nOne hundred forty-five patients (median age, 73 y; range, 43-88 y) with mCRPC were treated with 177Lu-PSMA-617 in 12 therapy centers between February 2014 and July 2015 with 1-4 therapy cycles and an activity range of 2-8 GBq per cycle. Toxicity was categorized by the common toxicity criteria for adverse events (version 4.0) on the basis of serial blood tests and the attending physician's report. The primary endpoint for efficacy was biochemical response as defined by a prostate-specific antigen decline ≥ 50% from baseline to at least 2 wk after the start of RLT.\n\n\nRESULTS\nA total of 248 therapy cycles were performed in 145 patients. Data for biochemical response in 99 patients as well as data for physician-reported and laboratory-based toxicity in 145 and 121 patients, respectively, were available. The median follow-up was 16 wk (range, 2-30 wk). Nineteen patients died during the observation period. Grade 3-4 hematotoxicity occurred in 18 patients: 10%, 4%, and 3% of the patients experienced anemia, thrombocytopenia, and leukopenia, respectively. Xerostomia occurred in 8%. The overall biochemical response rate was 45% after all therapy cycles, whereas 40% of patients already responded after a single cycle. Elevated alkaline phosphatase and the presence of visceral metastases were negative predictors and the total number of therapy cycles positive predictors of biochemical response.\n\n\nCONCLUSION\nThe present retrospective multicenter study of 177Lu-PSMA-617 RLT demonstrates favorable safety and high efficacy exceeding those of other third-line systemic therapies in mCRPC patients. Future phase II/III studies are warranted to elucidate the survival benefit of this new therapy in patients with mCRPC.", "title": "" } ]
scidocsrr
b19be2d05a21a644912b86a5362899fa
Detecting Multipliers of Jihadism on Twitter
[ { "docid": "d49d099d3f560584f2d080e7a1e2711f", "text": "Dark Web forums are heavily used by extremist and terrorist groups for communication, recruiting, ideology sharing, and radicalization. These forums often have relevance to the Iraqi insurgency or Al-Qaeda and are of interest to security and intelligence organizations. This paper presents an automated approach to sentiment and affect analysis of selected radical international Ahadist Dark Web forums. The approach incorporates a rich textual feature representation and machine learning techniques to identify and measure the sentiment polarities and affect intensities expressed in forum communications. The results of sentiment and affect analysis performed on two large-scale Dark Web forums are presented, offering insight into the communities and participants.", "title": "" }, { "docid": "fbcc3a5535d63e5a6dfb4e66bd5d7ad5", "text": "Jihadist groups such as ISIS are spreading online propaganda using various forms of social media such as Twitter and YouTube. One of the most common approaches to stop these groups is to suspend accounts that spread propaganda when they are discovered. This approach requires that human analysts manually read and analyze an enormous amount of information on social media. In this work we make a first attempt to automatically detect messages released by jihadist groups on Twitter. We use a machine learning approach that classifies a tweet as containing material that is supporting jihadists groups or not. Even tough our results are preliminary and more tests needs to be carried out we believe that results indicate that an automated approach to aid analysts in their work with detecting radical content on social media is a promising way forward. It should be noted that an automatic approach to detect radical content should only be used as a support tool for human analysts in their work.", "title": "" } ]
[ { "docid": "c504800ce08654fb5bf49356d2f7fce3", "text": "Memristive synapses, the most promising passive devices for synaptic interconnections in artificial neural networks, are the driving force behind recent research on hardware neural networks. Despite significant efforts to utilize memristive synapses, progress to date has only shown the possibility of building a neural network system that can classify simple image patterns. In this article, we report a high-density cross-point memristive synapse array with improved synaptic characteristics. The proposed PCMO-based memristive synapse exhibits the necessary gradual and symmetrical conductance changes, and has been successfully adapted to a neural network system. The system learns, and later recognizes, the human thought pattern corresponding to three vowels, i.e. /a /, /i /, and /u/, using electroencephalography signals generated while a subject imagines speaking vowels. Our successful demonstration of a neural network system for EEG pattern recognition is likely to intrigue many researchers and stimulate a new research direction.", "title": "" }, { "docid": "c1d0497c80ffd6cf84b5ce5b09d841af", "text": "Besides sensory characteristics of food, food-evoked emotion is a crucial factor in predicting consumer's food preference and therefore in developing new products. Many measures have been developed to assess food-evoked emotions. The aim of this literature review is (i) to give an exhaustive overview of measures used in current research and (ii) to categorize these methods along measurement level (physiological, behavioral, and cognitive) and emotional processing level (unconscious sensory, perceptual/early cognitive, and conscious/decision making) level. This 3 × 3 categorization may help researchers to compile a set of complementary measures (\"toolbox\") for their studies. We included 101 peer-reviewed articles that evaluate consumer's emotions and were published between 1997 and 2016, providing us with 59 different measures. More than 60% of these measures are based on self-reported, subjective ratings and questionnaires (cognitive measurement level) and assess the conscious/decision-making level of emotional processing. This multitude of measures and their overrepresentation in a single category hinders the comparison of results across studies and building a complete multi-faceted picture of food-evoked emotions. We recommend (1) to use widely applied, validated measures only, (2) to refrain from using (highly correlated) measures from the same category but use measures from different categories instead, preferably covering all three emotional processing levels, and (3) to acquire and share simultaneously collected physiological, behavioral, and cognitive datasets to improve the predictive power of food choice and other models.", "title": "" }, { "docid": "0d23f763744f39614ecef498ed4c2c31", "text": "Deep Neural Networks (DNNs) have achieved remarkable performance in a myriad of realistic applications. However, recent studies show that welltrained DNNs can be easily misled by adversarial examples (AE) – the maliciously crafted inputs by introducing small and imperceptible input perturbations. Existing mitigation solutions, such as adversarial training and defensive distillation, suffer from expensive retraining cost and demonstrate marginal robustness improvement against the stateof-the-art attacks like CW family adversarial examples. 
In this work, we propose a novel low-cost “feature distillation” strategy to purify the adversarial input perturbations of AEs by redesigning the popular image compression framework “JPEG”. The proposed “feature distillation” wisely maximizes the malicious feature loss of AE perturbations during image compression while suppressing the distortions of benign features essential for high accurate DNN classification. Experimental results show that our method can drastically reduce the success rate of various state-of-the-art AE attacks by ∼ 60% on average for both CIFAR-10 and ImageNet benchmarks without harming the testing accuracy, outperforming existing solutions like default JPEG compression and “feature squeezing”.", "title": "" }, { "docid": "18316f4f3928fd49f852090e2396ff77", "text": "OBJECTIVE\nTo provide a conceptual and clinical review of the physiology of the venous system as it is related to cardiac function in health and disease.\n\n\nDATA\nAn integration of venous and cardiac physiology under normal conditions, critical illness, and resuscitation.\n\n\nSUMMARY\nThe usual clinical teaching of cardiac physiology focuses on left ventricular pathophysiology and pathology. Due to the wide array of shock states dealt with by intensivists, an integrated approach that takes into account the function of the venous system and its interaction with the right heart may be more useful. In part II of this two-part review, we describe the physiology of venous return and its interaction with the right heart function as it relates to mechanical ventilation and various shock states including hypovolemic, cardiogenic, obstructive, and septic shock. In particular, we demonstrate how these shock states perturb venous return/right heart interactions. We also show how compensatory mechanisms and therapeutic interventions can tend to return venous return and cardiac output to appropriate values.\n\n\nCONCLUSION\nAn improved understanding of the role of the venous system in pathophysiologic conditions will allow intensivists to better appreciate the complex circulatory physiology of shock and related therapies. This should enable improved hemodynamic management of this disorder.", "title": "" }, { "docid": "00828c9f8d8e0ef17505973d84f92dbf", "text": "A new modeling approach for the design of planar multilayered meander-line polarizers is presented. For the first time a multielement equivalent circuit is adopted to characterize the meander-line unit cell. This equivalent circuit significantly improves the bandwidth performance with respect to the state-of-the-art. In addition to this, a polynomial interpolation matrix approach is employed to take into account the dependence on the meander-line geometrical parameters. This leads to an accuracy comparable to that of a full-wave analysis. At the same time, the computational cost is minimized so as to make this model suitable for real-time tuning and fast optimizations. A four-layer polarizer is designed to validate the presented modeling procedure. Comparison with full-wave simulations confirms its high accuracy over a wide frequency range.", "title": "" }, { "docid": "f2a8396de66221e2a98d8e5fcb74d90d", "text": "Clothoid splines are gaining popularity as a curve representation due to their intrinsically pleasing curvature, which varies piecewise linearly over arc length. However, constructing them from hand-drawn strokes remains difficult. 
Building on recent results, we describe a novel algorithm for approximating a sketched stroke with a fair (i.e., visually pleasing) clothoid spline. Fairness depends on proper segmentation of the stroke into curve primitives — lines, arcs, and clothoids. Our main idea is to cast the segmentation as a shortest path problem on a carefully constructed weighted graph. The nodes in our graph correspond to a vastly overcomplete set of curve primitives that are fit to every subsegment of the sketch, and edges correspond to transitions of a specified degree of continuity between curve primitives. The shortest path in the graph corresponds to a desirable segmentation of the input curve. Once the segmentation is found, the primitives are fit to the curve using non-linear constrained optimization. We demonstrate that the curves produced by our method have good curvature profiles, while staying close to the user sketch.", "title": "" }, { "docid": "6abd94555aa69d5d27f75db272952a0e", "text": "Text recognition in images is an active research area which attempts to develop a computer application with the ability to automatically read the text from images. Nowadays there is a huge demand of storing the information available on paper documents in to a computer readable form for later use. One simple way to store information from these paper documents in to computer system is to first scan the documents and then store them as images. However to reuse this information it is very difficult to read the individual contents and searching the contents form these documents line-by-line and word-by-word. The challenges involved are: font characteristics of the characters in paper documents and quality of the images. Due to these challenges, computer is unable to recognize the characters while reading them. Thus, there is a need of character recognition mechanisms to perform document image analysis which transforms documents in paper format to electronic format. In this paper, we have reviewed and analyzed different methods for text recognition from images. The objective of this review paper is to summarize the well-known methods for better understanding of the reader.", "title": "" }, { "docid": "39bae837ee110a9ccb572ab50c91b624", "text": "UNLABELLED\nCombined cup and stem anteversion in THA based on femoral anteversion has been suggested as a method to compensate for abnormal femoral anteversion. We investigated the combined anteversion technique using computer navigation. In 47 THAs, the surgeon first estimated the femoral broach anteversion and validated the position by computer navigation. The broach was then measured with navigation. The navigation screen was blocked while the surgeon estimated the anteversion of the broach. This provided two estimates of stem anteversion. The navigated stem anteversion was validated by postoperative CT scans. All cups were implanted using navigation alone. We determined precision (the reproducibility) and bias (how close the average test number is to the true value) of the stem position. Comparing the surgeon estimate to navigation anteversion, the precision of the surgeon was 16.8 degrees and bias was 0.2 degrees ; comparing the navigation of the stem to postoperative CT anteversion, the precision was 4.8 degrees and bias was 0.2 degrees , meaning navigation is accurate. Combined anteversion by postoperative CT scan was 37.6 degrees +/- 7 degrees (standard deviation) (range, 19 degrees -50 degrees ). 
The combined anteversion with computer navigation was within the safe zone of 25 degrees to 50 degrees for 45 of 47 (96%) hips. Femoral stem anteversion had a wide variability.\n\n\nLEVEL OF EVIDENCE\nLevel II, therapeutic study. See the Guidelines for Authors for a complete description of levels of evidence.", "title": "" }, { "docid": "245d0644ff531177db0a09c1ba3f303d", "text": "This paper presents, a new current mode four-quadrant CMOS analog multiplier/divider based on dual translinear loops. Compared with the previous works this circuit has a simpler structure resulting in lower power consumption and higher frequency response. Simulation results, performed using HSPICE with 0.25um technology, confirm performance of the proposed circuit.", "title": "" }, { "docid": "fadabf5ba39d455ca59cc9dc0b37f79b", "text": "We propose a speech enhancement algorithm based on single- and multi-microphone processing techniques. The core of the algorithm estimates a time-frequency mask which represents the target speech and use masking-based beamforming to enhance corrupted speech. Specifically, in single-microphone processing, the received signals of a microphone array are treated as individual signals and we estimate a mask for the signal of each microphone using a deep neural network (DNN). With these masks, in multi-microphone processing, we calculate a spatial covariance matrix of noise and steering vector for beamforming. In addition, we propose a masking-based post-filter to further suppress the noise in the output of beamforming. Then, the enhanced speech is sent back to DNN for mask re-estimation. When these steps are iterated for a few times, we obtain the final enhanced speech. The proposed algorithm is evaluated as a frontend for automatic speech recognition (ASR) and achieves a 5.05% average word error rate (WER) on the real environment test set of CHiME-3, outperforming the current best algorithm by 13.34%.", "title": "" }, { "docid": "4ae0bb75493e5d430037ba03fcff4054", "text": "David Moher is at the Ottawa Methods Centre, Ottawa Hospital Research Institute, and the Department of Epidemiology and Community Medicine, Faculty of Medicine, University of Ottawa, Ottawa, Ontario, Canada. Alessandro Liberati is at the Università di Modena e Reggio Emilia, Modena, and the Centro Cochrane Italiano, Istituto Ricerche Farmacologiche Mario Negri, Milan, Italy. Jennifer Tetzlaff is at the Ottawa Methods Centre, Ottawa Hospital Research Institute, Ottawa, Ontario. Douglas G Altman is at the Centre for Statistics in Medicine, University of Oxford, Oxford, United Kingdom. Membership of the PRISMA Group is provided in the Acknowledgements.", "title": "" }, { "docid": "99b4a9cc7e579972d771783adcba149e", "text": "This article reports on a generalizable system model design that analyzes the unstructured customer reviews inside the posts about electronic products on social networking websites. For the purposes of this study, posts on social networking websites have been mined and the keywords are extracted from such posts. The extracted keywords and the ontologies of electronic products and emotions form the base for the sentiment analysis model, which is used to understand online consumer behavior in the market. In order to enhance system accuracy, negating and enhancing terms are considered in the proposed model. 
As a result, it allows online businesses to use query to analyze the market trends of each product accurately based on the comments from user posts in social networking sites.", "title": "" }, { "docid": "aa9a73ce240dd792ac815405b8ac3bc7", "text": "This paper describes a real-time Speech Emotion Recognition (SER) task formulated as an image classification problem. The shift to an image classification paradigm provided the advantage of using an existing Deep Neural Network (AlexNet) pre-trained on a very large number of images, and thus eliminating the need for a lengthy network training process. Two alternative multi-class SER systems, AlexNet-SVM and FTAlexNet, were investigated. Both systems were shown to achieve state-of-the-art results when tested on a popular Berlin Emotional Speech (EMO-DB) database. Transformation from speech to image classification was achieved by creating RGB images depicting speech spectrograms. The ALEXNet-SVM method passes the spectrogram images as inputs to a pre-trained Convolutional Neural Network (AlexNet) to provide features for the Support Vector Machine (SVM) classifier, whereas the FTAlexNet method simply applies the images to a fine tuned AlexNet to provide emotional class labels. The FTAlexNet offers slightly higher accuracy compared to the AlexNet-SVM, while the AlexNet-SVM requires a lower number of computations due to the elimination of the neural network training procedure. A real-time demo is given on: https://www.youtube.com/watch?v=fuMpF3cUqDU&t=6s.", "title": "" }, { "docid": "ac88402eb0ce5c4edc5b28655991e3da", "text": "Reinforcement learning algorithms enable an agent to optimize its behavior from interacting with a specific environment. Although some very successful applications of reinforcement learning algorithms have been developed, it is still an open research question how to scale up to large dynamic environments. In this paper we will study the use of reinforcement learning on the popular arcade video game Ms. Pac-Man. In order to let Ms. Pac-Man quickly learn, we designed particular smart feature extraction algorithms that produce higher-order inputs from the game-state. These inputs are then given to a neural network that is trained using Q-learning. We constructed higher-order features which are relative to the action of Ms. Pac-Man. These relative inputs are then given to a single neural network which sequentially propagates the action-relative inputs to obtain the different Q-values of different actions. The experimental results show that this approach allows the use of only 7 input units in the neural network, while still quickly obtaining very good playing behavior. Furthermore, the experiments show that our approach enables Ms. Pac-Man to successfully transfer its learned policy to a different maze on which it was not trained before.", "title": "" }, { "docid": "4c9313e27c290ccc41f3874108593bf6", "text": "Very few standards exist for fitting products to people. Footwear is a noteworthy example. This study is an attempt to evaluate the quality of footwear fit using two-dimensional foot outlines. Twenty Hong Kong Chinese students participated in an experiment that involved three pairs of dress shoes and one pair of athletic shoes. The participants' feet were scanned using a commercial laser scanner, and each participant wore and rated the fit of each region of each shoe. The shoe lasts were also scanned and were used to match the foot scans with the last scans. 
The ANOVA showed significant (p < 0.05) differences among the four pairs of shoes for the overall, fore-foot and rear-foot fit ratings. There were no significant differences among shoes for mid-foot fit rating. These perceived differences were further analysed after matching the 2D outlines of both last and feet. The point-wise dimensional difference between foot and shoe outlines were computed and analysed after normalizing with foot perimeter. The dimensional difference (DD) plots along the foot perimeter showed that fore-foot fit was strongly correlated (R(2) > 0.8) with two of the minimums in the DD-plot while mid-foot fit was strongly correlated (R(2) > 0.9) with the dimensional difference around the arch region and a point on the lateral side of the foot. The DD-plots allow the designer to determine the critical locations that may affect footwear fit in addition to quantifying the nature of misfit so that design changes to shape and material may be possible.", "title": "" }, { "docid": "20c2aea79b80c93783aa3f82a8aa2625", "text": "The performance of deep learning in natural language processing has been spectacular, but the reasons for this success remain unclear because of the inherent complexity of deep learning. This paper provides empirical evidence of its effectiveness and of a limitation of neural networks for language engineering. Precisely, we demonstrate that a neural language model based on long short-term memory (LSTM) effectively reproduces Zipf's law and Heaps' law, two representative statistical properties underlying natural language. We discuss the quality of reproducibility and the emergence of Zipf's law and Heaps' law as training progresses. We also point out that the neural language model has a limitation in reproducing long-range correlation, another statistical property of natural language. This understanding could provide a direction for improving the architectures of neural networks.", "title": "" }, { "docid": "1616d9fb3fb2b2a3c97f0bf1d36d8b79", "text": "Platt’s probabilistic outputs for Support Vector Machines (Platt, J. in Smola, A., et al. (eds.) Advances in large margin classifiers. Cambridge, 2000) has been popular for applications that require posterior class probabilities. In this note, we propose an improved algorithm that theoretically converges and avoids numerical difficulties. A simple and ready-to-use pseudo code is included.", "title": "" }, { "docid": "5a912359338b6a6c011e0d0a498b3e8d", "text": "Learning Granger causality for general point processes is a very challenging task. In this paper, we propose an effective method, learning Granger causality, for a special but significant type of point processes — Hawkes process. According to the relationship between Hawkes process’s impact function and its Granger causality graph, our model represents impact functions using a series of basis functions and recovers the Granger causality graph via group sparsity of the impact functions’ coefficients. We propose an effective learning algorithm combining a maximum likelihood estimator (MLE) with a sparsegroup-lasso (SGL) regularizer. Additionally, the flexibility of our model allows to incorporate the clustering structure event types into learning framework. We analyze our learning algorithm and propose an adaptive procedure to select basis functions. 
Experiments on both synthetic and real-world data show that our method can learn the Granger causality graph and the triggering patterns of the Hawkes processes simultaneously.", "title": "" }, { "docid": "9f786e59441784d821da00d07d2fc42e", "text": "Employees are the most important asset of the organization. It’s a major challenge for the organization to retain its workforce as a lot of cost is incurred on them directly or indirectly. In order to have competitive advantage over the other organizations, the focus has to be on the employees. As ultimately the employees are the face of the organization as they are the building blocks of the organization. Thus their retention is a major area of concern. So attempt has been made to reduce the turnover rate of the organization. Therefore this paper attempts to review the various antecedents of turnover which affect turnover intentions of the employees.", "title": "" }, { "docid": "52fe696242f399d830d0a675bd766128", "text": "Humans are adept at inferring the mental states underlying other agents' actions, such as goals, beliefs, desires, emotions and other thoughts. We propose a computational framework based on Bayesian inverse planning for modeling human action understanding. The framework represents an intuitive theory of intentional agents' behavior based on the principle of rationality: the expectation that agents will plan approximately rationally to achieve their goals, given their beliefs about the world. The mental states that caused an agent's behavior are inferred by inverting this model of rational planning using Bayesian inference, integrating the likelihood of the observed actions with the prior over mental states. This approach formalizes in precise probabilistic terms the essence of previous qualitative approaches to action understanding based on an \"intentional stance\" [Dennett, D. C. (1987). The intentional stance. Cambridge, MA: MIT Press] or a \"teleological stance\" [Gergely, G., Nádasdy, Z., Csibra, G., & Biró, S. (1995). Taking the intentional stance at 12 months of age. Cognition, 56, 165-193]. In three psychophysical experiments using animated stimuli of agents moving in simple mazes, we assess how well different inverse planning models based on different goal priors can predict human goal inferences. The results provide quantitative evidence for an approximately rational inference mechanism in human goal inference within our simplified stimulus paradigm, and for the flexible nature of goal representations that human observers can adopt. We discuss the implications of our experimental results for human action understanding in real-world contexts, and suggest how our framework might be extended to capture other kinds of mental state inferences, such as inferences about beliefs, or inferring whether an entity is an intentional agent.", "title": "" } ]
scidocsrr
d329a8777725e85d84e5ef4d16d84a8c
Modelling Competitive Sports: Bradley-Terry-Élő Models for Supervised and On-Line Learning of Paired Competition Outcomes
[ { "docid": "8c043576bd1a73b783890cdba3a5e544", "text": "We present a novel approach to collaborative prediction, using low-norm instead of low-rank factorizations. The approach is inspired by, and has strong connections to, large-margin linear discrimination. We show how to learn low-norm factorizations by solving a semi-definite program, and discuss generalization error bounds for them.", "title": "" } ]
[ { "docid": "bceb4e66638fba85a5b5d94e8546e4ee", "text": "Data grows at the impressive rate of 50% per year, and 75% of the digital world is a copy! Although keeping multiple copies of data is necessary to guarantee their availability and long term durability, in many situations the amount of data redundancy is immoderate. By keeping a single copy of repeated data, data deduplication is considered as one of the most promising solutions to reduce the storage costs, and improve users experience by saving network bandwidth and reducing backup time. However, this solution must now solve many security issues to be completely satisfying. In this paper we target the attacks from malicious clients that are based on the manipulation of data identifiers and those based on backup time and network traffic observation. We present a deduplication scheme mixing an intraand an inter-user deduplication in order to build a storage system that is secure against the aforementioned type of attacks by controlling the correspondence between files and their identifiers, and making the inter-user deduplication unnoticeable to clients using deduplication proxies. Our method provides global storage space savings, per-client bandwidth network savings between clients and deduplication proxies, and global network bandwidth savings between deduplication proxies and the storage server. The evaluation of our solution compared to a classic system shows that the overhead introduced by our scheme is mostly due to data encryption which is necessary to ensure confidentiality.", "title": "" }, { "docid": "f95e568513847369eba15e154461a3c1", "text": "We address the problem of identifying the domain of onlinedatabases. More precisely, given a set F of Web forms automaticallygathered by a focused crawler and an online databasedomain D, our goal is to select from F only the formsthat are entry points to databases in D. Having a set ofWebforms that serve as entry points to similar online databasesis a requirement for many applications and techniques thataim to extract and integrate hidden-Web information, suchas meta-searchers, online database directories, hidden-Webcrawlers, and form-schema matching and merging.We propose a new strategy that automatically and accuratelyclassifies online databases based on features that canbe easily extracted from Web forms. By judiciously partitioningthe space of form features, this strategy allows theuse of simpler classifiers that can be constructed using learningtechniques that are better suited for the features of eachpartition. Experiments using real Web data in a representativeset of domains show that the use of different classifiersleads to high accuracy, precision and recall. This indicatesthat our modular classifier composition provides an effectiveand scalable solution for classifying online databases.", "title": "" }, { "docid": "fb46f67ba94cb4d7dd7620e2bdf5f00e", "text": "We design and implement TwinsCoin, the first cryptocurrency based on a provably secure and scalable public blockchain design using both proof-of-work and proof-of-stake mechanisms. Different from the proof-of-work based Bitcoin, our construction uses two types of resources, computing power and coins (i.e., stake). The blockchain in our system is more robust than that in a pure proof-of-work based system; even if the adversary controls the majority of mining power, we can still have the chance to secure the system by relying on honest stake. 
In contrast, Bitcoin blockchain will be insecure if the adversary controls more than 50% of mining power.\n Our design follows a recent provably secure proof-of-work/proof-of-stake hybrid blockchain[11]. In order to make our construction practical, we considerably enhance its design. In particular, we introduce a new strategy for difficulty adjustment in the hybrid blockchain and provide a theoretical analysis of it. We also show how to construct a light client for proof-of-stake cryptocurrencies and evaluate the proposal practically.\n We implement our new design. Our implementation uses a recent modular development framework for blockchains, called Scorex. It allows us to change only certain parts of an application leaving other codebase intact. In addition to the blockchain implementation, a testnet is deployed. Source code is publicly available.", "title": "" }, { "docid": "44368062de68f6faed57d43b8e691e35", "text": "In this paper we explore one of the key aspects in building an emotion recognition system: generating suitable feature representations. We generate feature representations from both acoustic and lexical levels. At the acoustic level, we first extract low-level features such as intensity, F0, jitter, shimmer and spectral contours etc. We then generate different acoustic feature representations based on these low-level features, including statistics over these features, a new representation derived from a set of low-level acoustic codewords, and a new representation from Gaussian Supervectors. At the lexical level, we propose a new feature representation named emotion vector (eVector). We also use the traditional Bag-of-Words (BoW) feature. We apply these feature representations for emotion recognition and compare their performance on the USC-IEMOCAP database. We also combine these different feature representations via early fusion and late fusion. Our experimental results show that late fusion of both acoustic and lexical features achieves four-class emotion recognition accuracy of 69.2%.", "title": "" }, { "docid": "b66a2ce976a145827b5b9a5dd2ad2495", "text": "Compared to previous head-mounted displays, the compact and low-cost Oculus Rift has claimed to offer improved virtual reality experiences. However, how and what kinds of user experiences are encountered by people when using the Rift in actual gameplay has not been examined. We present an exploration of 10 participants' experiences of playing a first-person shooter game using the Rift. Despite cybersickness and a lack of control, participants experienced heightened experiences, a richer engagement with passive game elements, a higher degree of flow and a deeper immersion on the Rift than on a desktop setup. Overly demanding movements, such as the large range of head motion required to navigate the game environment were found to adversely affect gaming experiences. Based on these and other findings, we also present some insights for designing games for the Rift.", "title": "" }, { "docid": "bb770a0cb686fbbb4ea1adb6b4194967", "text": "Parental refusal of vaccines is a growing a concern for the increased occurrence of vaccine preventable diseases in children. A number of studies have looked into the reasons that parents refuse, delay, or are hesitant to vaccinate their child(ren). These reasons vary widely between parents, but they can be encompassed in 4 overarching categories. 
The 4 categories are religious reasons, personal beliefs or philosophical reasons, safety concerns, and a desire for more information from healthcare providers. Parental concerns about vaccines in each category lead to a wide spectrum of decisions varying from parents completely refusing all vaccinations to only delaying vaccinations so that they are more spread out. A large subset of parents admits to having concerns and questions about childhood vaccinations. For this reason, it can be helpful for pharmacists and other healthcare providers to understand the cited reasons for hesitancy so they are better prepared to educate their patients' families. Education is a key player in equipping parents with the necessary information so that they can make responsible immunization decisions for their children.", "title": "" }, { "docid": "3b06ce783d353cff3cdbd9a60037162e", "text": "The ability to abstract principles or rules from direct experience allows behaviour to extend beyond specific circumstances to general situations. For example, we learn the ‘rules’ for restaurant dining from specific experiences and can then apply them in new restaurants. The use of such rules is thought to depend on the prefrontal cortex (PFC) because its damage often results in difficulty in following rules. Here we explore its neural basis by recording from single neurons in the PFC of monkeys trained to use two abstract rules. They were required to indicate whether two successively presented pictures were the same or different depending on which rule was currently in effect. The monkeys performed this task with new pictures, thus showing that they had learned two general principles that could be applied to stimuli that they had not yet experienced. The most prevalent neuronal activity observed in the PFC reflected the coding of these abstract rules.", "title": "" }, { "docid": "d3e35963e85ade6e3e517ace58cb3911", "text": "In this paper, we present the design and evaluation of PeerDB, a peer-to-peer (P2P) distributed data sharing system. PeerDB distinguishes itself from existing P2P systems in several ways. First, it is a full-fledge data management system that supports fine-grain content-based searching. Second, it facilitates sharing of data without shared schema. Third, it combines the power of mobile agents into P2P systems to perform operations at peers’ sites. Fourth, PeerDB network is self-configurable, i.e., a node can dynamically optimize the set of peers that it can communicate directly with based on some optimization criterion. By keeping peers that provide most information or services in close proximity (i.e, direct communication), the network bandwidth can be better utilized and system performance can be optimized. We implemented and evaluated PeerDB on a cluster of 32 Pentium II PCs. Our experimental results show that PeerDB can effectively exploit P2P technologies for distributed data sharing.", "title": "" }, { "docid": "96ee31337d66b8ccd3876c1575f9b10c", "text": "Although different modeling techniques have been proposed during the last 300 years, the differential equation formalism proposed by Newton and Leibniz has been the tool of choice for modeling and problem solving Taylor (1996); Wainer (2009). Differential equations provide a formal mathematical method (sometimes also called an analytical method) for studying the entity of interest. 
Computational methods based on differential equations could not be easily applied in studying human-made dynamic systems (e.g., traffic controllers, robotic arms, automated factories, production plants, computer networks, VLSI circuits). These systems are usually referred to as discrete event systems because their states do not change continuously but, rather, because of the occurrence of events. This makes them asynchronous, inherently concurrent, and highly nonlinear, rendering their modeling and simulation different from that used in traditional approaches. In order to improve the model definition for this class of systems, a number of techniques were introduced, including Petri Nets, Finite State Machines, min-max algebra, Timed Automata, etc. Banks & Nicol. (2005); Cassandras (1993); Cellier & Kofman. (2006); Fishwick (1995); Law & Kelton (2000); Toffoli & Margolus. (1987). Wireless Sensor Network (WSN) is a discrete event system which consists of a network of sensor nodes equipped with sensing, computing, power, and communication modules to monitor certain phenomenon such as environmental data or object tracking Zhao & Guibas (2004). Emerging applications of wireless sensor networks are comprised of asset and warehouse *madani@ciit.net.pk †jawhaikaz@ciit.net.pk ‡mahlknecht@ict.tuwien.ac.at 1", "title": "" }, { "docid": "a412f5facafdb2479521996c05143622", "text": "A temperature and supply independent on-chip reference relaxation oscillator for low voltage design is described. The frequency of oscillation is mainly a function of a PVT robust biasing current. The comparator for the relaxation oscillator is replaced with a high speed common-source stage to eliminate the temperature dependency of the comparator delay. The current sources and voltages are biased by a PVT robust references derived from a bandgap circuitry. This oscillator is designed in TSMC 65 nm CMOS process to operate with a minimum supply voltage of 1.4 V and consumes 100 μW at 157 MHz frequency of oscillation. The oscillator exhibits frequency variation of 1.6% for supply changes from 1.4 V to 1.9 V, and ±1.2% for temperature changes from 20°C to 120°C.", "title": "" }, { "docid": "dc33d2edcfb124af607bcb817589f6e9", "text": "In this letter, a novel coaxial line to substrate integrated waveguide (SIW) broadband transition is presented. The transition is designed by connecting the inner conductor of a coaxial line to an open-circuited SIW. The configuration directly transforms the TEM mode of a coaxial line to the fundamental TE10 mode of the SIW. A prototype back-to-back transition is fabricated for X-band operation using a 0.508 mm thick RO 4003C substrate with dielectric constant 3.55. Comparison with other reported transitions shows that the present structure provides lower passband insertion loss, wider bandwidth and most compact. The area of each transition is 0.08λg2 where λg is the guided wavelength at passband center frequency of f0 = 10.5 GHz. Measured 15 dB and 20 dB matching bandwidths are over 48% and 20%, respectively, at f0.", "title": "" }, { "docid": "9734f4395c306763e6cc5bf13b0ca961", "text": "Generating descriptions for videos has many applications including assisting blind people and human-robot interaction. The recent advances in image captioning as well as the release of large-scale movie description datasets such as MPII-MD [28] allow to study this task in more depth. 
Many of the proposed methods for image captioning rely on pre-trained object classifier CNNs and Long-Short Term Memory recurrent networks (LSTMs) for generating descriptions. While image description focuses on objects, we argue that it is important to distinguish verbs, objects, and places in the challenging setting of movie description. In this work we show how to learn robust visual classifiers from the weak annotations of the sentence descriptions. Based on these visual classifiers we learn how to generate a description using an LSTM. We explore different design choices to build and train the LSTM and achieve the best performance to date on the challenging MPII-MD dataset. We compare and analyze our approach and prior work along various dimensions to better understand the key challenges of the movie description task.", "title": "" }, { "docid": "2575bad473ef55281db460617e0a37c8", "text": "Automated license plate recognition (ALPR) has been applied to identify vehicles by their license plates and is critical in several important transportation applications. In order to achieve the recognition accuracy levels typically required in the market, it is necessary to obtain properly segmented characters. A standard method, projection-based segmentation, is challenged by substantial variation across the plate in the regions surrounding the characters. In this paper a reinforcement learning (RL) method is adapted to create a segmentation agent that can find appropriate segmentation paths that avoid characters, traversing from the top to the bottom of a cropped license plate image. Then a hybrid approach is proposed, leveraging the speed and simplicity of the projection-based segmentation technique along with the power of the RL method. The results of our experiments show significant improvement over the histogram projection currently used for character segmentation.", "title": "" }, { "docid": "f38554695eb3ca5b6d62b1445d8826b7", "text": "Recent advances in deep neuroevolution have demonstrated that evolutionary algorithms, such as evolution strategies (ES) and genetic algorithms (GA), can scale to train deep neural networks to solve difficult reinforcement learning (RL) problems. However, it remains a challenge to analyze and interpret the underlying process of neuroevolution in such high dimensions. To begin to address this challenge, this paper presents an interactive data visualization tool called VINE (Visual Inspector for NeuroEvolution) aimed at helping neuroevolution researchers and end-users better understand and explore this family of algorithms. VINE works seamlessly with a breadth of neuroevolution algorithms, including ES and GA, and addresses the difficulty of observing the underlying dynamics of the learning process through an interactive visualization of the evolving agent's behavior characterizations over generations. As neuroevolution scales to neural networks with millions or more connections, visualization tools like VINE that offer fresh insight into the underlying dynamics of evolution become increasingly valuable and important for inspiring new innovations and applications.", "title": "" }, { "docid": "ab2159730f00662ba29e25a0e27d1799", "text": "This paper proposes a novel and efficient re-ranking technque to solve the person re-identification problem in the surveillance application. 
Previous methods treat person re-identification as a special object retrieval problem, and compute the retrieval result purely based on a unidirectional matching between the probe and all gallery images. However, the correct matching may not be included in the top-k ranking result due to appearance changes caused by variations in illumination, pose, viewpoint and occlusion. To obtain more accurate re-identification results, we propose to reversely query every gallery person image in a new gallery composed of the original probe person image and other gallery person images, and revise the initial query result according to bidirectional ranking lists. The philosophy behind our method is that images of the same person should not only have similar visual content, referred to as content similarity, but also possess similar k-nearest neighbors, referred to as context similarity. Furthermore, the proposed bidirectional re-ranking method can be divided into offline and online parts, where the majority of the computation load is accomplished by the offline part and the online computation complexity is only proportional to the size of the gallery data set, which makes the method especially suited to video investigation tasks with real-time requirements. Extensive experiments conducted on a series of standard data sets have validated the effectiveness and efficiency of our proposed method.", "title": "" }, { "docid": "e6cae5bec5bb4b82794caca85d3412a2", "text": "Detection of abusive language in user-generated online content has become an issue of increasing importance in recent years. Most current commercial methods make use of blacklists and regular expressions; however, these measures fall short when contending with more subtle, less ham-fisted examples of hate speech. In this work, we develop a machine-learning-based method to detect hate speech on online user comments from two domains, which outperforms a state-of-the-art deep learning approach. We also develop a corpus of user comments annotated for abusive language, the first of its kind. Finally, we use our detection tool to analyze abusive language over time and in different settings to further enhance our knowledge of this behavior.", "title": "" }, { "docid": "d6976361b44aab044c563e75056744d6", "text": "Five adrenoceptor subtypes are involved in the adrenergic regulation of white and brown fat cell function. The effects on cAMP production and cAMP-related cellular responses are mediated through the control of adenylyl cyclase activity by the stimulatory beta 1-, beta 2-, and beta 3-adrenergic receptors and the inhibitory alpha 2-adrenoceptors. Activation of alpha 1-adrenoceptors stimulates phosphoinositidase C activity leading to inositol 1,4,5-triphosphate and diacylglycerol formation with a consequent mobilization of intracellular Ca2+ stores and protein kinase C activation which trigger cell responsiveness. The balance between the various adrenoceptor subtypes is the point of regulation that determines the final effect of physiological amines on adipocytes in vitro and in vivo. Large species-specific differences exist in brown and white fat cell adrenoceptor distribution and in their relative importance in the control of the fat cell. Functional beta 3-adrenoceptors coexist with beta 1- and beta 2-adrenoceptors in a number of fat cells; they are weakly active in guinea pig, primate, and human fat cells.
Physiological hormones and transmitters operate, in fact, through differential recruitment of all these multiple alpha- and beta-adrenoceptors on the basis of their relative affinity for the different subtypes. The affinity of the beta 3-adrenoceptor for catecholamines is less than that of the classical beta 1- and beta 2-adrenoceptors. Conversely, epinephrine and norepinephrine have a higher affinity for the alpha 2-adrenoceptors than for beta 1-, 2-, or 3-adrenoceptors. Antagonistic actions exist between alpha 2- and beta-adrenoceptor-mediated effects in white fat cells, while positive cooperation has been revealed between alpha 1- and beta-adrenoceptors in brown fat cells. Homologous down-regulation of beta 1- and beta 2-adrenoceptors is observed after administration of physiological amines and beta-agonists. Conversely, beta 3- and alpha 2-adrenoceptors are much more resistant to agonist-induced desensitization and down-regulation. Heterologous regulation of beta-adrenoceptors was reported with glucocorticoids, while sex-steroid hormones were shown to regulate alpha 2-adrenoceptor expression (androgens) and to alter adenylyl cyclase activity (estrogens).", "title": "" }, { "docid": "5b57eb0b695a1c85d77db01e94904fb1", "text": "Depth map super-resolution is an emerging topic due to the increasing needs and applications using RGB-D sensors. Together with the color image, the corresponding range data provides additional information and makes visual analysis tasks more tractable. However, since the depth maps captured by such sensors typically have limited resolution, it is preferable to enhance their resolution for improved recognition. In this paper, we present a novel joint trilateral filtering (JTF) algorithm for solving depth map super-resolution (SR) problems. Inspired by bilateral filtering, our JTF utilizes and preserves edge information from the associated high-resolution (HR) image by taking into account spatial and range information of local pixels. Our proposed method further integrates local gradient information of the depth map when synthesizing its HR output, which alleviates textural artifacts like edge discontinuities. Quantitative and qualitative experimental results demonstrate the effectiveness and robustness of our approach over prior depth map upsampling works.", "title": "" }, { "docid": "61ba52f205c8b497062995498816b60f", "text": "The past century experienced a proliferation of retail formats in the marketplace. However, as a new century begins, these retail formats are being threatened by the emergence of a new kind of store, the online or Internet store. From being almost a novelty in 1995, online retailing sales were expected to reach $7 billion by 2000 [9]. In this increasingly time-constrained world, Internet stores allow consumers to shop from the convenience of remote locations. Yet most of these Internet stores are losing money [6]. Why is such a counterintuitive phenomenon prevailing? The explanation may lie in the risks associated with Internet shopping. These risks may arise because consumers are concerned about the security of transmitting credit card information over the Internet. Consumers may also be apprehensive about buying something without touching or feeling it and being unable to return it if it fails to meet their approval. Having said this, however, we must point out that consumers are buying goods on the Internet. This is reflected in the fact that total sales on the Internet are on the increase [8, 11]. Who are the consumers that are patronizing the Internet?
Evidently, for them, the perception of the risk associated with shopping on the Internet is low or is overshadowed by its relative convenience. This article attempts to determine why certain consumers are drawn to the Internet and why others are not. Since the pioneering research done by Becker [3], it has been accepted that the consumer maximizes his utility subject to not only income constraints but also time constraints. A consumer seeks out his best decision given that he has a limited budget of time and money. While purchasing a product from a store, a consumer has to expend both money and time. Therefore, the consumer patronizes the retail store where his total costs, or the money and time spent in the entire process, are the least. Since the util-", "title": "" }, { "docid": "1c1988ae64bef3475f36eceaffda0b7d", "text": "Criminologists have long contended that neighborhoods are important determinants of how individuals perceive their risk of criminal victimization. Yet, despite the theoretical importance and policy-relevance of these claims, the empirical evidence-base is surprisingly thin and inconsistent. Drawing on data from a national probability sample of individuals, linked to independent measures of neighborhood demographic characteristics, visual signs of physical disorder, and reported crime, we test four hypotheses about the mechanisms through which neighborhoods influence fear of crime. Our large sample size, analytical approach and the independence of our empirical measures enable us to overcome some of the limitations that have hampered much previous research into this question. We find that neighborhood structural characteristics, visual signs of disorder, and recorded crime all have direct and independent effects on individual-level fear of crime. Additionally, we demonstrate that individual differences in fear of crime are strongly moderated by neighborhood socioeconomic characteristics; between-group differences in expressed fear of crime are both exacerbated and ameliorated by the characteristics of the areas in which people live.", "title": "" } ]
scidocsrr
5df68dcfb86b34f85a01916e74852a7b
Attending to the present: mindfulness meditation reveals distinct neural modes of self-reference.
[ { "docid": "c6e1c8aa6633ec4f05240de1a3793912", "text": "Medial prefrontal cortex (MPFC) is among those brain regions having the highest baseline metabolic activity at rest and one that exhibits decreases from this baseline across a wide variety of goal-directed behaviors in functional imaging studies. This high metabolic rate and this behavior suggest the existence of an organized mode of default brain function, elements of which may be either attenuated or enhanced. Extant data suggest that these MPFC regions may contribute to the neural instantiation of aspects of the multifaceted \"self.\" We explore this important concept by targeting and manipulating elements of MPFC default state activity. In this functional magnetic resonance imaging (fMRI) study, subjects made two judgments, one self-referential, the other not, in response to affectively normed pictures: pleasant vs. unpleasant (an internally cued condition, ICC) and indoors vs. outdoors (an externally cued condition, ECC). The ICC was preferentially associated with activity increases along the dorsal MPFC. These increases were accompanied by decreases in both active task conditions in ventral MPFC. These results support the view that dorsal and ventral MPFC are differentially influenced by attention-demanding tasks and explicitly self-referential tasks. The presence of self-referential mental activity appears to be associated with increases from the baseline in dorsal MPFC. Reductions in ventral MPFC occurred consistent with the fact that attention-demanding tasks attenuate emotional processing. We posit that both self-referential mental activity and emotional processing represent elements of the default state as represented by activity in MPFC. We suggest that a useful way to explore the neurobiology of the self is to explore the nature of default state activity.", "title": "" }, { "docid": "a55eed627afaf39ee308cc9e0e10a698", "text": "Perspective-taking is a complex cognitive process involved in social cognition. This positron emission tomography (PET) study investigated by means of a factorial design the interaction between the emotional and the perspective factors. Participants were asked to adopt either their own (first person) perspective or the (third person) perspective of their mothers in response to situations involving social emotions or to neutral situations. The main effect of third-person versus first-person perspective resulted in hemodynamic increase in the medial part of the superior frontal gyrus, the left superior temporal sulcus, the left temporal pole, the posterior cingulate gyrus, and the right inferior parietal lobe. A cluster in the postcentral gyrus was detected in the reverse comparison. The amygdala was selectively activated when subjects were processing social emotions, both related to self and other. Interaction effects were identified in the left temporal pole and in the right postcentral gyrus. These results support our prediction that the frontopolar, the somatosensory cortex, and the right inferior parietal lobe are crucial in the process of self/other distinction. In addition, this study provides important building blocks in our understanding of social emotion processing and human empathy.", "title": "" }, { "docid": "4b284736c51435f9ab6f52f174dc7def", "text": "Recognition of emotion draws on a distributed set of structures that include the occipitotemporal neocortex, amygdala, orbitofrontal cortex and right frontoparietal cortices.
Recognition of fear may draw especially on the amygdala and the detection of disgust may rely on the insula and basal ganglia. Two important mechanisms for recognition of emotions are the construction of a simulation of the observed emotion in the perceiver, and the modulation of sensory cortices via top-down influences.", "title": "" }, { "docid": "34257e8924d8f9deec3171589b0b86f2", "text": "The topics treated in The brain and emotion include the definition, nature, and functions of emotion (Ch. 3); the neural bases of emotion (Ch. 4); reward, punishment, and emotion in brain design (Ch. 10); a theory of consciousness and its application to understanding emotion and pleasure (Ch. 9); and neural networks and emotion-related learning (Appendix). The approach is that emotions can be considered as states elicited by reinforcers (rewards and punishers). This approach helps with understanding the functions of emotion, with classifying different emotions, and in understanding what information-processing systems in the brain are involved in emotion, and how they are involved. The hypothesis is developed that brains are designed around reward- and punishment-evaluation systems, because this is the way that genes can build a complex system that will produce appropriate but flexible behavior to increase fitness (Ch. 10). By specifying goals rather than particular behavioral patterns of responses, genes leave much more open the possible behavioral strategies that might be required to increase fitness. The importance of reward and punishment systems in brain design also provides a basis for understanding the brain mechanisms of motivation, as described in Chapters 2 for appetite and feeding, 5 for brain-stimulation reward, 6 for addiction, 7 for thirst, and 8 for sexual behavior.", "title": "" } ]
[ { "docid": "5e453defd762bb4ecfae5dcd13182b4a", "text": "We present a comprehensive lifetime prediction methodology for both intrinsic and extrinsic Time-Dependent Dielectric Breakdown (TDDB) failures to provide adequate Design-for-Reliability. For intrinsic failures, we propose applying the √E model and estimating the Weibull slope using dedicated single-via test structures. This effectively prevents lifetime underestimation, and thus relaxes design restrictions. For extrinsic failures, we propose applying the thinning model and Critical Area Analysis (CAA). In the thinning model, random defects reduce effective spaces between interconnects, causing TDDB failures. We can quantify the failure probabilities by using CAA for any design layouts of various LSI products.", "title": "" }, { "docid": "9ff76c8500a15d1c9b4a980b37bca505", "text": "The thesis is about linear genetic programming (LGP), a machine learning approach that evolves computer programs as sequences of imperative instructions. Two fundamental differences to the more common tree-based variant (TGP) may be identified. These are the graph-based functional structure of linear genetic programs, on the one hand, and the existence of structurally noneffective code, on the other hand. The two major objectives of this work comprise (1) the development of more advanced methods and variation operators to produce better and more compact program solutions and (2) the analysis of general EA/GP phenomena in linear GP, including intron code, neutral variations, and code growth, among others. First, we introduce efficient algorithms for extracting features of the imperative and functional structure of linear genetic programs. In doing so, especially the detection and elimination of noneffective code during runtime will turn out as a powerful tool to accelerate the time-consuming step of fitness evaluation in GP. Variation operators are discussed systematically for the linear program representation. We will demonstrate that so called effective instruction mutations achieve the best performance in terms of solution quality. These mutations operate only on the (structurally) effective code and restrict the mutation step size to one instruction. One possibility to further improve their performance is to explicitly increase the probability of neutral variations. As a second, more time-efficient alternative we explicitly control the mutation step size on the effective code (effective step size). Minimum steps do not allow more than one effective instruction to change its effectiveness status. That is, only a single node may be connected to or disconnected from the effective graph component. It is an interesting phenomenon that, to some extent, the effective code becomes more robust against destructions over the generations already implicitly. A special concern of this thesis is to convince the reader that there are some serious arguments for using a linear representation. In a crossover-based comparison LGP has been found superior to TGP over a set of benchmark problems. Furthermore, linear solutions turned out to be more compact than tree solutions due to (1) multiple usage of subgraph results and (2) implicit parsimony pressure by structurally noneffective code. The phenomenon of code growth is analyzed for different linear genetic operators. When applying instruction mutations exclusively almost only neutral variations may be held responsible for the emergence and propagation of intron code. 
It is noteworthy that linear genetic programs may not grow if all neutral variation effects are rejected and if the variation step size is minimal. For the same reasons, effective instruction mutations realize an implicit complexity control in linear GP which reduces a possible negative effect of code growth to a minimum. Another noteworthy result in this context is that program size is strongly increased by crossover while it is hardly influenced by mutation even if step sizes are not explicitly restricted.", "title": "" }, { "docid": "664b9bb1f132a87e2f579945a31852b7", "text": "Major efforts have been conducted on ontology learning, that is, semiautomatic processes for the construction of domain ontologies from diverse sources of information. In the past few years, a research trend has focused on the construction of educational ontologies, that is, ontologies to be used for educational purposes. The identification of the terminology is crucial to build ontologies. Term extraction techniques allow the identification of the domain-related terms from electronic resources. This paper presents LiTeWi, a novel method that combines current unsupervised term extraction approaches for creating educational ontologies for technology-supported learning systems from electronic textbooks. LiTeWi uses Wikipedia as an additional information source. Wikipedia contains more than 30 million articles covering the terminology of nearly every domain in 288 languages, which makes it an appropriate generic corpus for term extraction. Furthermore, given that its content is available in several languages, it promotes both domain and language independence. LiTeWi is aimed at being used by teachers, who usually develop their didactic material from textbooks. To evaluate its performance, LiTeWi was tuned up using a textbook on object-oriented programming and then tested with two textbooks of different domains—astronomy and molecular biology.", "title": "" }, { "docid": "ddff0a3c6ed2dc036cf5d6b93d2da481", "text": "Dense video captioning is a newly emerging task that aims at both localizing and describing all events in a video. We identify and tackle two challenges in this task, namely, (1) how to utilize both past and future contexts for accurate event proposal predictions, and (2) how to construct informative input to the decoder for generating natural event descriptions. First, previous works predominantly generate temporal event proposals in the forward direction, which neglects future video context. We propose a bidirectional proposal method that effectively exploits both past and future contexts to make proposal predictions. Second, different events ending at (nearly) the same time are indistinguishable in the previous works, resulting in the same captions. We solve this problem by representing each event with an attentive fusion of hidden states from the proposal module and video contents (e.g., C3D features). We further propose a novel context gating mechanism to balance the contributions from the current event and its surrounding contexts dynamically. We empirically show that our attentively fused event representation is superior to the proposal hidden states or video contents alone.
By coupling proposal and captioning modules into one unified framework, our model outperforms the state-of-the-arts on the ActivityNet Captions dataset with a relative gain of over 100% (Meteor score increases from 4.82 to 9.65).", "title": "" }, { "docid": "89dbc16a2510e3b0e4a248f428a9ffc0", "text": "Complex networks are ubiquitous in our daily life, with the World Wide Web, social networks, and academic citation networks being some of the common examples. It is well understood that modeling and understanding the network structure is of crucial importance to revealing the network functions. One important problem, known as community detection, is to detect and extract the community structure of networks. More recently, the focus in this research topic has been switched to the detection of overlapping communities. In this paper, based on the matrix factorization approach, we propose a method called bounded nonnegative matrix tri-factorization (BNMTF). Using three factors in the factorization, we can explicitly model and learn the community membership of each node as well as the interaction among communities. Based on a unified formulation for both directed and undirected networks, the optimization problem underlying BNMTF can use either the squared loss or the generalized KL-divergence as its loss function. In addition, to address the sparsity problem as a result of missing edges, we also propose another setting in which the loss function is defined only on the observed edges. We report some experiments on real-world datasets to demonstrate the superiority of BNMTF over other related matrix factorization methods.", "title": "" }, { "docid": "fd0defe3aaabd2e27c7f9d3af47dd635", "text": "A fast test for triangle-triangle intersection by computing signed vertex-plane distances (sufficient if one triangle is wholly to one side of the other) and signed line-line distances of selected edges (otherwise) is presented. This algorithm is faster than previously published algorithms and the code is available online.", "title": "" }, { "docid": "0e600cedfbd143fe68165e20317c46d4", "text": "We propose an efficient real-time automatic license plate recognition (ALPR) framework, particularly designed to work on CCTV video footage obtained from cameras that are not dedicated to the use in ALPR. At present, in license plate detection, tracking and recognition are reasonably well-tackled problems with many successful commercial solutions being available. However, the existing ALPR algorithms are based on the assumption that the input video will be obtained via a dedicated, high-resolution, high-speed camera and is/or supported by a controlled capture environment, with appropriate camera height, focus, exposure/shutter speed and lighting settings. However, typical video forensic applications may require searching for a vehicle having a particular number plate on noisy CCTV video footage obtained via non-dedicated, medium-to-low resolution cameras, working under poor illumination conditions. ALPR in such video content faces severe challenges in license plate localization, tracking and recognition stages. This paper proposes a novel approach for efficient localization of license plates in video sequence and the use of a revised version of an existing technique for tracking and recognition. 
A special feature of the proposed approach is that it is intelligent enough to automatically adjust for varying camera distances and diverse lighting conditions, a requirement for a video forensic tool that may operate on videos obtained by a diverse set of unspecified, distributed CCTV cameras.", "title": "" }, { "docid": "75952b1d2c9c2f358c4c2e3401a00245", "text": "This book is an outstanding contribution to the philosophical study of language and mind, by one of the most influential thinkers of our time. In a series of penetrating essays, Noam Chomsky cuts through the confusion and prejudice which has infected the study of language and mind, bringing new solutions to traditional philosophical puzzles and fresh perspectives on issues of general interest, ranging from the mind–body problem to the unification of science. Using a range of imaginative and deceptively simple linguistic analyses, Chomsky argues that there is no coherent notion of “language” external to the human mind, and that the study of language should take as its focus the mental construct which constitutes our knowledge of language. Human language is therefore a psychological, ultimately a “biological object,” and should be analysed using the methodology of the natural sciences. His examples and analyses come together in this book to give a unique and compelling perspective on language and the mind.", "title": "" }, { "docid": "3bff3136e5e2823d0cca2f864fe9e512", "text": "Cloud computing provides variety of services with the growth of their offerings. Due to efficient services, it faces numerous challenges. It is based on virtualization, which provides users a plethora computing resources by internet without managing any infrastructure of Virtual Machine (VM). With network virtualization, Virtual Machine Manager (VMM) gives isolation among different VMs. But, sometimes the levels of abstraction involved in virtualization have been reducing the workload performance which is also a concern when implementing virtualization to the Cloud computing domain. In this paper, it has been explored how the vendors in cloud environment are using Containers for hosting their applications and also the performance of VM deployments. It also compares VM and Linux Containers with respect to the quality of service, network performance and security evaluation.", "title": "" }, { "docid": "7c3b5470398a219875ba1a6443119c8e", "text": "Semantic role labeling (SRL) identifies the predicate-argument structure in text with semantic labels. It plays a key role in understanding natural language. In this paper, we present POLYGLOT, a multilingual semantic role labeling system capable of semantically parsing sentences in 9 different languages from 4 different language groups. The core of POLYGLOT are SRL models for individual languages trained with automatically generated Proposition Banks (Akbik et al., 2015). The key feature of the system is that it treats the semantic labels of the English Proposition Bank as “universal semantic labels”: Given a sentence in any of the supported languages, POLYGLOT applies the corresponding SRL and predicts English PropBank frame and role annotation. The results are then visualized to facilitate the understanding of multilingual SRL with this unified semantic representation.", "title": "" }, { "docid": "5bca58cbd1ef80ebf040529578d2a72a", "text": "In this letter, a printable chipless tag with electromagnetic code using split ring resonators is proposed. 
A 4 b chipless tag that can be applied to paper/plastic-based items such as ID cards, tickets, banknotes and security documents is designed. The chipless tag generates distinct electromagnetic characteristics by various combinations of a split ring resonator. Furthermore, a reader system is proposed to digitize electromagnetic characteristics and convert chipless tag to electromagnetic code.", "title": "" }, { "docid": "b2c03d8e54a2a6840f6688ab9682e24b", "text": "Path following and follow-the-leader motion is particularly desirable for minimally-invasive surgery in confined spaces which can only be reached using tortuous paths, e.g. through natural orifices. While path following and follow-the-leader motion can be achieved by hyper-redundant snake robots, their size is usually not applicable for medical applications. Continuum robots, such as tendon-driven or concentric tube mechanisms, fulfill the size requirements for minimally invasive surgery, but yet follow-the-leader motion is not inherently provided. In fact, parameters of the manipulator's section curvatures and translation have to be chosen wisely a priori. In this paper, we consider a tendon-driven continuum robot with extensible sections. After reformulating the forward kinematics model, we formulate prerequisites for follow-the-leader motion and present a general approach to determine a sequence of robot configurations to achieve follow-the-leader motion along a given 3D path. We evaluate our approach in a series of simulations with 3D paths composed of constant curvature arcs and general 3D paths described by B-spline curves. Our results show that mean path errors <0.4mm and mean tip errors <1.6mm can theoretically be achieved for constant curvature paths and <2mm and <3.1mm for general B-spline curves, respectively.", "title": "" }, { "docid": "25bcbb44c843d71b7422905e9dbe1340", "text": "INTRODUCTION\nThe purpose of this study was to evaluate the effect of using the transverse analysis developed at Case Western Reserve University (CWRU) in Cleveland, Ohio. The hypotheses were based on the following: (1) Does following CWRU's transverse analysis improve the orthodontic results? (2) Does following CWRU's transverse analysis minimize the active treatment duration?\n\n\nMETHODS\nA retrospective cohort research study was conducted on a randomly selected sample of 100 subjects. The sample had CWRU's analysis performed retrospectively, and the sample was divided according to whether the subjects followed what CWRU's transverse analysis would have suggested. The American Board of Orthodontics discrepancy index was used to assess the pretreatment records, and quality of the result was evaluated using the American Board of Orthodontics cast/radiograph evaluation. The Mann-Whitney test was used for the comparison.\n\n\nRESULTS\nCWRU's transverse analysis significantly improved the total cast/radiograph evaluation scores (P = 0.041), especially the buccolingual inclination component (P = 0.001). However, it did not significantly affect treatment duration (P = 0.106).\n\n\nCONCLUSIONS\nCWRU's transverse analysis significantly improves the orthodontic results but does not have significant effects on treatment duration.", "title": "" }, { "docid": "e81f1caa398de7f56a70cc4db18d58db", "text": "UNLABELLED\nThis study aimed to investigate the association of facial proportion and its relation to the golden ratio with the evaluation of facial appearance among Malaysian population.
This was a cross-sectional study with 286 randomly selected from Universiti Sains Malaysia (USM) Health Campus students (150 females and 136 males; 100 Malaysian Chinese, 100 Malaysian Malay and 86 Malaysian Indian), with the mean age of 21.54 ± 1.56 (Age range, 18-25). Facial indices obtained from direct facial measurements were used for the classification of facial shape into short, ideal and long. A validated structured questionnaire was used to assess subjects' evaluation of their own facial appearance. The mean facial indices of Malaysian Indian (MI), Malaysian Chinese (MC) and Malaysian Malay (MM) were 1.59 ± 0.19, 1.57 ± 0.25 and 1.54 ± 0.23 respectively. Only MC showed significant sexual dimorphism in facial index (P = 0.047; P<0.05) but no significant difference was found between races. Out of the 286 subjects, 49 (17.1%) were of ideal facial shape, 156 (54.5%) short and 81 (28.3%) long. The facial evaluation questionnaire showed that MC had the lowest satisfaction with mean score of 2.18 ± 0.97 for overall impression and 2.15 ± 1.04 for facial parts, compared to MM and MI, with mean score of 1.80 ± 0.97 and 1.64 ± 0.74 respectively for overall impression; 1.75 ± 0.95 and 1.70 ± 0.83 respectively for facial parts.\n\n\nIN CONCLUSION\n1) Only 17.1% of Malaysian facial proportion conformed to the golden ratio, with majority of the population having short face (54.5%); 2) Facial index did not depend significantly on races; 3) Significant sexual dimorphism was shown among Malaysian Chinese; 4) All three races are generally satisfied with their own facial appearance; 5) No significant association was found between golden ratio and facial evaluation score among Malaysian population.", "title": "" }, { "docid": "31cd031708856490f756d4399d7709d5", "text": "Inspecting objects in the industry aims to guarantee product quality allowing problems to be corrected and damaged products to be discarded. Inspection is also widely used in railway maintenance, where wagon components need to be checked due to efficiency and safety concerns. In some organizations, hundreds of wagons are inspected visually by a human inspector, which leads to quality issues and safety risks for the inspectors. This paper describes a wagon component inspection approach using Deep Learning techniques to detect a particular damaged component: the shear pad. We compared our approach for convolutional neural networks with the state of art classification methods to distinguish among three shear pads conditions: absent, damaged, and undamaged shear pad. Our results are very encouraging showing empirical evidence that our approach has better performance than other classification techniques.", "title": "" }, { "docid": "a697f85ad09699ddb38994bd69b11103", "text": "We show how to perform sparse approximate Gaussian elimination for Laplacian matrices. We present a simple, nearly linear time algorithm that approximates a Laplacian by the product of a sparse lower triangular matrix with its transpose. This gives the first nearly linear time solver for Laplacian systems that is based purely on random sampling, and does not use any graph theoretic constructions such as low-stretch trees, sparsifiers, or expanders. Our algorithm performs a subsampled Cholesky factorization, which we analyze using matrix martingales. 
As part of the analysis, we give a proof of a concentration inequality for matrix martingales where the differences are sums of conditionally independent variables.", "title": "" }, { "docid": "d8f8931af18f3e0a6424916dfac717ee", "text": "Twitter data have brought new opportunities to know what happens in the world in real-time, and conduct studies on the human subjectivity on a diversity of issues and topics at large scale, which would not be feasible using traditional methods. However, as well as these data represent a valuable source, a vast amount of noise can be found in them. Because of the brevity of texts and the widespread use of mobile devices, non-standard word forms abound in tweets, which degrade the performance of Natural Language Processing tools. In this paper, a lexical normalization system of tweets written in Spanish is presented. The system suggests normalization candidates for out-of-vocabulary (OOV) words based on similarity of graphemes or phonemes. Using contextual information, the best correction candidate for a word is selected. Experimental results show that the system correctly detects OOV words and the most of cases suggests the proper corrections. Together with this, results indicate a room for improvement in the correction candidate selection. Compared with other methods, the overall performance of the system is above-average and competitive to different approaches in the literature.", "title": "" }, { "docid": "da5c1445453853e23477bfea79fd4605", "text": "This paper presents an 8-bit column-driver IC with improved deviation of voltage output (DVO) for thin-film-transistor (TFT) liquid crystal displays (LCDs). The various DVO results contributed by the output buffer of a column driver are predicted by using Monte Carlo simulation under different variation conditions. Relying on this prediction, a better compromise can be achieved between DVO and chip size. This work was implemented using 0.35-μm CMOS technology and the measured maximum DVO is only 6.2 mV.", "title": "" }, { "docid": "f598677e19789c92c31936440e709c4d", "text": "Temporal datasets, in which data evolves continuously, exist in a wide variety of applications, and identifying anomalous or outlying objects from temporal datasets is an important and challenging task. Different from traditional outlier detection, which detects objects that have quite different behavior compared with the other objects, temporal outlier detection tries to identify objects that have different evolutionary behavior compared with other objects. Usually objects form multiple communities, and most of the objects belonging to the same community follow similar patterns of evolution. However, there are some objects which evolve in a very different way relative to other community members, and we define such objects as evolutionary community outliers. This definition represents a novel type of outliers considering both temporal dimension and community patterns. We investigate the problem of identifying evolutionary community outliers given the discovered communities from two snapshots of an evolving dataset. To tackle the challenges of community evolution and outlier detection, we propose an integrated optimization framework which conducts outlier-aware community matching across snapshots and identification of evolutionary outliers in a tightly coupled way. A coordinate descent algorithm is proposed to improve community matching and outlier detection performance iteratively. 
Experimental results on both synthetic and real datasets show that the proposed approach is highly effective in discovering interesting evolutionary community outliers.", "title": "" }, { "docid": "04271124470c613da4dd4136ceb61a18", "text": "In this paper, we propose the deep reinforcement relevance network (DRRN), a novel deep architecture, for handling an unbounded action space with applications to language understanding for text-based games. For a particular class of games, a user must choose among a variable number of actions described by text, with the goal of maximizing long-term reward. In these games, the best action is typically that which fits the best to the current situation (modeled as a state in the DRRN), also described by text. Because of the exponential complexity of natural language with respect to sentence length, there is typically an unbounded set of unique actions. Therefore, it is very difficult to pre-define the action set as in the deep Q-network (DQN). To address this challenge, the DRRN extracts high-level embedding vectors from the texts that describe states and actions, respectively, and computes the inner products between the state and action embedding vectors to approximate the Q-function. We evaluate the DRRN on two popular text games, showing superior performance over the DQN.", "title": "" } ]
scidocsrr
5a3ca7db556a984972d8a3c90fc4ba34
A 7-µW 2.4-GHz wake-up receiver with -80 dBm sensitivity and high co-channel interferer tolerance
[ { "docid": "e30cedcb4cb99c4c3b2743c5359cf823", "text": "This paper presents a 116nW wake-up radio complete with crystal reference, interference compensation, and baseband processing, such that a selectable 31-bit code is required to toggle a wake-up signal. The front-end operates over a broad frequency range, tuned by an off-chip band-select filter and matching network, and is demonstrated in the 402-405MHz MICS band and the 915MHz and 2.4GHz ISM bands with sensitivities of -45.5dBm, -43.4dBm, and -43.2dBm, respectively. Additionally, the baseband processor implements automatic threshold feedback to detect the presence of interferers and dynamically adjust the receiver's sensitivity, mitigating the jamming problem inherent to previous energy-detection wake-up radios. The wake-up radio has a raw OOK chip-rate of 12.5kbps, an active area of 0.35mm2 and operates using a 1.2V supply for the crystal reference and RF demodulation, and a 0.5V supply for subthreshold baseband processing.", "title": "" } ]
[ { "docid": "caa30379a2d0b8be2e1b4ddf6e6602c2", "text": "Multi-Processor Systems-on-Chip (MPSoCs) are increasingly popular in embedded systems. Due to their complexity and huge design space to explore for such systems, CAD tools and frameworks to customize MPSoCs are mandatory. Some academic and industrial frameworks are available to support bus-based MPSoCs, but few works target NoCs as underlying communication architecture. A framework targeting MPSoC customization must provide abstract models to enable fast design space exploration, flexible application mapping strategies, all coupled to features to evaluate the performance of running applications. This paper proposes a framework to customize NoC-based MPSoCs with support to static and dynamic task mapping and C/SystemC simulation models for processors and memories. A simple, specifically designed microkernel executes in each processor, enabling multitasking at the processor level. Graphical tools enable debug and system verification, individualizing data for each task. Practical results highlight the benefit of using dynamic mapping strategies (total execution time reduction) and abstract models (total simulation time reduction without losing accuracy).", "title": "" }, { "docid": "0ef6e54d7190dde80ee7a30c5ecae0c3", "text": "Games have been an important tool for motivating undergraduate students majoring in computer science and engineering. However, it is difficult to build an entire game for education from scratch, because the task requires high-level programming skills and expertise to understand the graphics and physics. Recently, there have been many different game artificial intelligence (AI) competitions, ranging from board games to the state-of-the-art video games (car racing, mobile games, first-person shooting games, real-time strategy games, and so on). The competitions have been designed such that participants develop their own AI module on top of public/commercial games. Because the materials are open to the public, it is quite useful to adopt them for an undergraduate course project. In this paper, we report our experiences using the Angry Birds AI Competition for such a project-based course. In the course, teams of students consider computer vision, strategic decision-making, resource management, and bug-free coding for their outcome. To promote understanding of game contents generation and extensive testing on the generalization abilities of the student's AI program, we developed software to help them create user-created levels. Students actively participated in the project and the final outcome was comparable with that of successful entries in the 2013 International Angry Birds AI Competition. Furthermore, it leads to the development of a new parallelized Angry Birds AI Competition platform with undergraduate students aiming to use advanced optimization algorithms for their controllers.", "title": "" }, { "docid": "8b0e62dd3a6241eaaa64c40728c2c259", "text": "This thesis discusses aspects of a novel solar concentrating photovoltaic / thermal (PV/T) collector that has been designed to produce both electricity and hot water. The motivation for the development of the Combined Heat and Power Solar (CHAPS) collector is twofold: in the short term, to produce photovoltaic power and solar hot water at a cost which is competitive with other renewable energy technologies, and in the longer term, at a cost which is lower than possible with current technologies. 
To the author’ s knowledge, the CHAPS collector is the first PV/T system using a reflective linear concentrator with a concentration ratio in the range 20-40x. The work contained in this thesis is a thorough study of all facets of the CHAPS collector, through a combination of theoretical and experimental investigation. A theoretical discussion of the concept of ‘energy value’ is presented, with the aim of developing methodologies that could be used in optimisation studies to compare the value of electrical and thermal energy. Three approaches are discussed; thermodynamic methods, using second law concepts of energy usefulness; economic valuation of the hot water and electricity through levelised energy costs; and environmental valuation, based on the greenhouse gas emissions associated with the generation of hot water and electricity. It is proposed that the value of electrical energy and thermal energy is best compared using a simple ratio. Experimental measurement of the thermal and electrical efficiency of a CHAPS receiver was carried out for a range of operating temperatures and fluid flow rates. The effectiveness of internal fins incorporated to augment heat transfer was examined. The glass surface temperature was measured using an infrared camera, to assist in the calculation of thermal losses, and to help determine the extent of radiation absorbed in the cover materials. FEA analysis, using the software package Strand7, examines the conductive heat transfer within the receiver body to obtain a temperature profile under operating conditions. Electrical efficiency is not only affected by temperature, but by non-uniformities in the radiation flux profile. Highly non-uniform illumination across the cells was found to reduce the efficiency by about 10% relative. The radiation flux profile longitudinal to the receivers was measured by a custom-built flux scanning device. The results show significant fluctuations in the flux profile and, at worst, the minimum flux intensity is as much as 27% lower than the median. A single cell with low flux intensity limits the current and performance of all cells in series, causing a significant drop in overall output. Therefore, a detailed understanding of the causes of flux non-uniformities is essential for the design of a single-axis tracking PV trough concentrator. Simulation of the flux profile was carried out", "title": "" }, { "docid": "035f780309fc777ece17cbfe4aabc01b", "text": "The phenolic composition and antibacterial and antioxidant activities of the green alga Ulva rigida collected monthly for 12 months were investigated. Significant differences in antibacterial activity were observed during the year with the highest inhibitory effect in samples collected during spring and summer. The highest free radical scavenging activity and phenolic content were detected in U. rigida extracts collected in late winter (February) and early spring (March). The investigation of the biological properties of U. rigida fractions collected in spring (April) revealed strong antimicrobial and antioxidant activities. Ethyl acetate and n-hexane fractions exhibited substantial acetylcholinesterase inhibitory capacity with EC50 of 6.08 and 7.6 μg mL−1, respectively. The total lipid, protein, ash, and individual fatty acid contents of U. rigida were investigated. 
The four most abundant fatty acids were palmitic, oleic, linolenic, and eicosenoic acids.", "title": "" }, { "docid": "53007a9a03b7db2d64dd03973717dc0f", "text": "We present two children with hypoplasia of the left trapezius muscle and a history of ipsilateral transient neonatal brachial plexus palsy without documented trapezius weakness. Magnetic resonance imaging in these patients with unilateral left hypoplasia of the trapezius revealed decreased muscles in the left side of the neck and left supraclavicular region on coronal views, decreased muscle mass between the left splenius capitis muscle and the subcutaneous tissue at the level of the neck on axial views, and decreased size of the left paraspinal region on sagittal views. Three possibilities can explain the association of hypoplasia of the trapezius and obstetric brachial plexus palsy: increased vulnerability of the brachial plexus to stretch injury during delivery because of intrauterine trapezius weakness, a casual association of these two conditions, or an erroneous diagnosis of brachial plexus palsy in patients with trapezial weakness. Careful documentation of neck and shoulder movements can distinguish among shoulder weakness because of trapezius hypoplasia, brachial plexus palsy, or brachial plexus palsy with trapezius hypoplasia. Hence, we recommend precise documentation of neck movements in the initial description of patients with suspected neonatal brachial plexus palsy.", "title": "" }, { "docid": "78cda62ca882bb09efc08f7d4ea1801e", "text": "Open Domain: There are nearly an unbounded number of classes, objects and relations Missing Data: Many useful facts are never explicitly stated No Negative Examples: Labeling positive and negative examples for all interesting relations is impractical Learning First-Order Horn Clauses from Web Text Stefan Schoenmackers Oren Etzioni Daniel S. Weld Jesse Davis Turing Center, University of Washington Katholieke Universiteit Leuven", "title": "" }, { "docid": "d4fb664caa02b81909bc51291d3fafd7", "text": "This paper offers the first variational approach to the problem of dense 3D reconstruction of non-rigid surfaces from a monocular video sequence. We formulate non-rigid structure from motion (nrsfm) as a global variational energy minimization problem to estimate dense low-rank smooth 3D shapes for every frame along with the camera motion matrices, given dense 2D correspondences. Unlike traditional factorization based approaches to nrsfm, which model the low-rank non-rigid shape using a fixed number of basis shapes and corresponding coefficients, we minimize the rank of the matrix of time-varying shapes directly via trace norm minimization. In conjunction with this low-rank constraint, we use an edge preserving total-variation regularization term to obtain spatially smooth shapes for every frame. Thanks to proximal splitting techniques the optimization problem can be decomposed into many point-wise sub-problems and simple linear systems which can be easily solved on GPU hardware. 
We show results on real sequences of different objects (face, torso, beating heart) where, despite challenges in tracking, illumination changes and occlusions, our method reconstructs highly deforming smooth surfaces densely and accurately directly from video, without the need for any prior models or shape templates.", "title": "" }, { "docid": "d32a9b0b4f470f99cdd6a57d18395582", "text": "Information technology (IT) has a tremendous impact on the discipline of accounting by introducing new ways of retrieving and processing information about performance deviations and control effectiveness. This paper explores the role of IT for managing organizational controls by analyzing value drivers for particular accounting information systems that commonly run under the label of Governance, Risk Management, and Compliance (GRC IS). We apply a grounded theory approach to structure the value drivers of GRC IS into a research framework. In order to understand the impact of IT, we relate the GRC IS value drivers to control theories. Practical implications include understanding GRC IS benefits beyond compliance and providing clear strategic reasoning for GRC IS depending on the individual company's situation. Research implications include the fact that integrating IT into the context of accounting leaves several unsolved yet promising issues in theory which future research might address. This paper is the first to use the lens of organizational control theories on Governance, Risk Management, and Compliance information systems and establishes a potentially fruitful research agenda for GRC IS as a highly relevant topic for information systems research.", "title": "" }, { "docid": "3eb419ef59ad59e60bf357cfb2e69fba", "text": "Heterogeneous information network (HIN) has been widely adopted in recommender systems due to its excellence in modeling complex context information. Although existing HIN based recommendation methods have achieved performance improvement to some extent, they have two major shortcomings. First, these models seldom learn an explicit representation for path or meta-path in the recommendation task. Second, they do not consider the mutual effect between the meta-path and the involved user-item pair in an interaction. To address these issues, we develop a novel deep neural network with the co-attention mechanism for leveraging rich meta-path based context for top-N recommendation. We elaborately design a three-way neural interaction model by explicitly incorporating meta-path based context. To construct the meta-path based context, we propose to use a priority based sampling technique to select high-quality path instances. Our model is able to learn effective representations for users, items and meta-path based context for implementing a powerful interaction function. The co-attention mechanism improves the representations for meta-path based context, users and items in a mutual enhancement way. Extensive experiments on three real-world datasets have demonstrated the effectiveness of the proposed model. In particular, the proposed model performs well in the cold-start scenario and has potentially good interpretability for the recommendation results.", "title": "" }, { "docid": "c44420fbcf9e6da8e22c616a14707f45", "text": "This article discusses the impact of artificially intelligent computers on the process of design, play and educational activities.
A computational process which has the necessary intelligence and creativity to take a proactive role in such activities can not only support human creativity but also foster it and prompt lateral thinking. The argument is made both from the perspective of human creativity, where the computational input is treated as an external stimulus which triggers re-framing of humans' routines and mental associations, and from the perspective of computational creativity, where human input and initiative constrains the search space of the algorithm, enabling it to focus on specific possible solutions to a problem rather than globally searching for the optimal one. The article reviews four mixed-initiative tools (for design and educational play) based on how they contribute to human-machine co-creativity. These paradigms serve different purposes, afford different human interaction methods and incorporate different computationally creative processes. Assessing how co-creativity is facilitated on a per-paradigm basis strengthens the theoretical argument and provides an initial seed for future work in the burgeoning domain of mixed-initiative interaction.", "title": "" }, { "docid": "4fb6e2a74562e0442fb7bce743ccd95a", "text": "Multiple-group confirmatory factor analysis (MG-CFA) is among the most productive extensions of structural equation modeling. Many researchers conducting cross-cultural or longitudinal studies are interested in testing for measurement and structural invariance. The aim of the present paper is to provide a tutorial in MG-CFA using the freely available R-packages lavaan, semTools, and semPlot. The combination of these packages enables a highly efficient analysis of the measurement models for both normally distributed and ordinal data. Data from two freely available datasets – the first with continuous and the second with ordered indicators – will be used to provide a walk-through of the individual steps.", "title": "" }, { "docid": "2595b6e8c505ae7c2799c2e5272d9e22", "text": "High resolution imaging modalities, combined with advances in computer technology, have prompted renewed interest and led to significant progress in volumetric reconstruction of medical images. Clinical assessment of this technique and whether it can provide enhanced diagnostic interpretation is currently under investigation by various medical and scientific groups. The purpose of this panel is to evaluate the clinical utility of two major 3D rendering techniques that allow the user to "fly through" and around medical data-sets.", "title": "" }, { "docid": "4e7ce0c3696838f77bffd4ddeb1574a9", "text": "Kidney segmentation in 3D CT images allows extracting useful information for nephrologists. For practical use in clinical routine, such an algorithm should be fast, automatic and robust to contrast-agent enhancement and fields of view. By combining and refining state-of-the-art techniques (random forests and template deformation), we demonstrate the possibility of building an algorithm that meets these requirements. Kidneys are localized with random forests following a coarse-to-fine strategy. Their initial positions detected with global contextual information are refined with a cascade of local regression forests. A classification forest is then used to obtain a probabilistic segmentation of both kidneys. The final segmentation is performed with an implicit template deformation algorithm driven by these kidney probability maps. Our method has been validated on a highly heterogeneous database of 233 CT scans from 89 patients.
80% of the kidneys were accurately detected and segmented (Dice coefficient > 0.90) in a few seconds per volume.", "title": "" }, { "docid": "1fb8701f0ad0a9e894e4195bc02d5c25", "text": "As graphics processing units (GPUs) are broadly adopted, running multiple applications on a GPU at the same time is beginning to attract wide attention. Recent proposals on multitasking GPUs have focused on either spatial multitasking, which partitions GPU resources at streaming multiprocessor (SM) granularity, or simultaneous multikernel (SMK), which runs multiple kernels on the same SM. However, multitasking performance varies heavily depending on the resource partitions within each scheme and the application mixes. In this paper, we propose GPU Maestro, which performs dynamic resource management for efficient utilization of multitasking GPUs. GPU Maestro can discover the best-performing GPU resource partition exploiting both spatial multitasking and SMK. Furthermore, dynamism within a kernel and interference between kernels are automatically taken into account because GPU Maestro finds the best-performing partition through direct measurements. Evaluations show that GPU Maestro can improve average system throughput by 20.2% and 13.9% over the baseline spatial multitasking and SMK, respectively.", "title": "" }, { "docid": "ee96b4c7d15008f4b8831ecf2d337b1d", "text": "This paper proposes the identification of regions of interest in biospeckle patterns using unsupervised neural networks of the Self-Organizing Map type. Segmented images are obtained from the acquisition and processing of laser speckle sequences. Dynamic speckle is a phenomenon that occurs when a beam of coherent light illuminates a sample in which some non-visible activity takes place, producing a pattern that varies over time. In this particular case the method is applied to the evaluation of bacterial chemotaxis. Image stacks provided by a set of experiments are processed to extract features of the intensity dynamics. A Self-Organizing Map is trained and its cells are colored according to a similarity criterion. During the recall stage, the features of patterns belonging to a new biospeckle sample impact the map, generating a new image that uses the color of the map cells impacted by the sample patterns. This method has shown better performance in identifying regions of interest than approaches that use a single descriptor. To test the method, a chemotaxis assay experiment was performed, in which regions were differentiated according to the bacterial motility within the sample.", "title": "" }, { "docid": "e1c927d7fbe826b741433c99fff868d0", "text": "Multiclass maps are scatterplots, multidimensional projections, or thematic geographic maps where data points have a categorical attribute in addition to two quantitative attributes. This categorical attribute is often rendered using shape or color, which does not scale when overplotting occurs. When the number of data points increases, multiclass maps must resort to data aggregation to remain readable. We present multiclass density maps: multiple 2D histograms, one computed for each of the category values. Multiclass density maps are meant as a building block to improve the expressiveness and scalability of multiclass map visualization. In this article, we first present a short survey of aggregated multiclass maps, mainly from cartography.
We then introduce a declarative model (a simple yet expressive JSON grammar associated with visual semantics) that specifies a wide design space of visualizations for multiclass density maps. Our declarative model is expressive and can be efficiently implemented in visualization front-ends such as modern web browsers. Furthermore, it can be reconfigured dynamically to support data exploration tasks without recomputing the raw data. Finally, we demonstrate how our model can be used to reproduce examples from the past and to support exploring data at scale.", "title": "" }, { "docid": "a48622ff46323acf1c40345d3e61b636", "text": "In this paper we present a novel dataset for a critical aspect of autonomous driving: the joint attention that must occur between drivers and pedestrians, cyclists or other drivers. This dataset is produced with the intention of demonstrating the behavioral variability of traffic participants. We also show how the visual complexity of the behaviors and of scene understanding is affected by various factors, such as weather conditions, geographical location, traffic and the demographics of the people involved. The ground truth data conveys information regarding the location of participants (bounding boxes), the physical conditions (e.g. lighting and speed) and the behavior of the parties involved.", "title": "" }, { "docid": "d2e6aa2ab48cdd1907f3f373e0627fa8", "text": "We address the issue of speeding up the training of convolutional networks. Here we study a distributed method adapted to stochastic gradient descent (SGD). The parallel optimization setup uses several threads, each applying individual gradient descents on a local variable. We propose a new way to share information between the threads, inspired by gossip algorithms, that shows good consensus convergence properties. Our method, called GoSGD, has the advantage of being fully asynchronous and decentralized. Comparisons with the recent EASGD [17] on CIFAR-10 show encouraging results.", "title": "" }, { "docid": "d53726710ce73fbcf903a1537f149419", "text": "In this paper we treat Linear Programming (LP) problems with uncertain data. The focus is on uncertainty associated with hard constraints: those which must be satisfied, whatever the actual realization of the data (within a prescribed uncertainty set). We suggest a modeling methodology whereby an uncertain LP is replaced by its Robust Counterpart (RC). We then develop the analytical and computational optimization tools to obtain robust solutions of an uncertain LP problem by solving the corresponding explicitly stated convex RC program. In particular, it is shown that the RC of an LP with an ellipsoidal uncertainty set is computationally tractable, since it leads to a conic quadratic program, which can be solved in polynomial time.", "title": "" }, { "docid": "007f741a718d0c4a4f181676a39ed54a", "text": "Following the development of computing and communication technologies, the idea of the Internet of Things (IoT) has been realized not only at the research level but also at the application level. Among various IoT-related application fields, biometrics applications, especially face recognition, are widely applied in video-based surveillance, access control, law enforcement and many other scenarios. In this paper, we introduce a Face in Video Recognition (FivR) framework which performs real-time key-frame extraction on IoT edge devices and then conducts face recognition using the extracted key-frames on the Cloud back-end.
With our key-frame extraction engine, we are able to reduce the data volume and hence dramatically relieve the processing pressure on the cloud back-end. Our experimental results show that, with IoT edge device acceleration, it is possible to implement a face-in-video recognition application without introducing a middleware or cloudlet layer, while still achieving real-time processing speed.", "title": "" } ]
scidocsrr