query_id (string, 32 chars) | query (string, 5–5.38k chars) | positive_passages (list, 1–23 items) | negative_passages (list, 4–100 items) | subset (string, 7 classes)
---|---|---|---|---
d91a21842162444aca9a7924048a8291 | Comparing Extant Story Classifiers: Results & New Directions | [
{
"docid": "f042dd6b78c65541e657c48452a1e0e4",
"text": "We present a general framework for semantic role labeling. The framework combines a machine-learning technique with an integer linear programming-based inference procedure, which incorporates linguistic and structural constraints into a global decision process. Within this framework, we study the role of syntactic parsing information in semantic role labeling. We show that full syntactic parsing information is, by far, most relevant in identifying the argument, especially, in the very first stagethe pruning stage. Surprisingly, the quality of the pruning stage cannot be solely determined based on its recall and precision. Instead, it depends on the characteristics of the output candidates that determine the difficulty of the downstream problems. Motivated by this observation, we propose an effective and simple approach of combining different semantic role labeling systems through joint inference, which significantly improves its performance. Our system has been evaluated in the CoNLL-2005 shared task on semantic role labeling, and achieves the highest F1 score among 19 participants.",
"title": ""
},
{
"docid": "75321b85809e5954e78675c8827fefd5",
"text": "Text annotations are of great use to researchers in the language sciences, and much effort has been invested in creating annotated corpora for an wide variety of purposes. Unfortunately, software support for these corpora tends to be quite limited: it is usually ad-hoc, poorly designed and documented, or not released for public use. I describe an annotation tool, the Story Workbench, which provides a generic platform for text annotation. It is free, open-source, cross-platform, and user friendly. It provides a number of common text annotation operations, including representations (e.g., tokens, sentences, parts of speech), functions (e.g., generation of initial annotations by algorithm, checking annotation validity by rule, fully manual manipulation of annotations) and tools (e.g., distributing texts to annotators via version control, merging doubly-annotated texts into a single file). The tool is extensible at many different levels, admitting new representations, algorithm, and tools. I enumerate ten important features and illustrate how they support the annotation process at three levels: (1) annotation of individual texts by a single annotator, (2) double-annotation of texts by two annotators and an adjudicator, and (3) annotation scheme development. The Story Workbench is scheduled for public release in March 2012. Text annotations are of great use to researchers in the language sciences: a large fraction of that work relies on annotated data to build, train, or test their systems. Good examples are the Penn Treebank, which catalyzed work in developing statistical syntactic parsers, and PropBank, which did the same for semantic role labeling. It is not an exaggeration to say that annotated corpora are a central resource for these fields, and are only growing in importance. Work on narrative shares many of the same problems, and as a consequence has much to gain from advances in language annotation tools and techniques. Despite the importance of annotated data, there remains a missing link: software support is not given nearly the same amount of attention as the annotations themselves. Researchers usually release only the data; if they release any tools at all, they are usually ad-hoc, poorly designed and Copyright c © 2011, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. documented, or just not released for public use. Tools do not build on one another. The language sciences need to move to a standard where, if annotated data is released, software for accessing and creating the data are released as a matter of course. Researchers should prepare for it, reviewers should demand it, and readers should expect it. One way of facilitating this is to lower the barrier for creating tools. Many of the phases of the annotation cycle are the same no matter what sort of annotation you are doing a freely available tool, or suite of tools, to support these phases would go a long way. I describe the Story Workbench (Finlayson 2008), a major step toward just such a tool suite. The Story Workbench is free, open-source, extensible, cross-platform, and user friendly. It is a working piece of software, having been in beta testing for over three years, with a public release scheduled for March 2012. It has been used by more than 12 annotators to annotate over 100k words across 17 representations. 
Two corpora have been created so far with it: the UMIREC corpus (Hervas and Finlayson 2010) comprising 25k words of news and folktales annotated for referring expression structure, and 18k words of Russian folktales annotated in all 17 different representations. The Story Workbench is especially interesting to researchers working on narrative. Understanding a narrative requires not just one representation, not just two, but a dozen or more. The Story Workbench was created specifically to overcome that problem, but is now finding application beyond the realm of narrative research. In particular, in the next section I describe three phases of the annotation process; many, if not most, annotation studies move through these phases. In the next section I enumerate some of the more important features of the Story Workbench, and show how these support the phases. Three Loops of the Annotation Process Conceptually, the process of producing a gold-standard annotated corpus can be split into at least three nested loops. In the widest, top-most loop the researchers design and vet the annotation scheme and annotation tool; embedded therein is the middle loop, where annotation teams produce goldannotated texts; embedded within that is the loop of the individual annotator working on individual texts. These nested loops are illustrated in Figure 1. 21 AAAI Technical Report WS-11-18",
"title": ""
},
{
"docid": "48cdea9a78353111d236f6d0f822dc3a",
"text": "Support vector machines (SVMs) with the gaussian (RBF) kernel have been popular for practical use. Model selection in this class of SVMs involves two hyper parameters: the penalty parameter C and the kernel width . This letter analyzes the behavior of the SVM classifier when these hyper parameters take very small or very large values. Our results help in understanding the hyperparameter space that leads to an efficient heuristic method of searching for hyperparameter values with small generalization errors. The analysis also indicates that if complete model selection using the gaussian kernel has been conducted, there is no need to consider linear SVM.",
"title": ""
},
{
"docid": "1ec9b98f0f7509088e7af987af2f51a2",
"text": "In this paper, we describe an automated learning approach to text categorization based on perception learning and a new feature selection metric, called correlation coefficient. Our approach has been teated on the standard Reuters text categorization collection. Empirical results indicate that our approach outperforms the best published results on this % uters collection. In particular, our new feature selection method yields comiderable improvement. We also investigate the usability of our automated hxu-n~ approach by actually developing a system that categorizes texts into a treeof categories. We compare tbe accuracy of our learning approach to a rrddmsed, expert system ap preach that uses a text categorization shell built by Cams gie Group. Although our automated learning approach still gives a lower accuracy, by appropriately inmrporating a set of manually chosen worda to use as f~ures, the combined, semi-automated approach yields accuracy close to the * baaed approach.",
"title": ""
},
{
"docid": "104c9ef558234250d56ef941f09d6a7c",
"text": "The first of these questions is in the province of sensory physiology, and is the only one for which appreciable understanding has been achieved. This article will be concerned primarily with the second and third questions, which are still subject to a vast amount of speculation, and where the few relevant facts currently supplied by neurophysiology have not yet been integrated into an acceptable theory. With regard to the second question, two alternative positions have been maintained. The first suggests that storage of sensory information is in the form of coded representations or images, with some sort of one-to-one mapping between the sensory stimulus",
"title": ""
}
] | [
{
"docid": "5f513e3d58a10d2748983bfa06c11df2",
"text": "AIM\nThe aim of this study is to report a clinical case of oral nevus.\n\n\nBACKGROUND\nNevus is a congenital or acquired benign neoplasia that can be observed in the skin or mucous membranes. It is an uncommon condition in the oral mucosa. When it does occur, the preferred location is on the palate, followed by the cheek mucosa, lip and tongue.\n\n\nCASE REPORT\nIn this case study, we relate the diagnosis and treatment of a 23-year-old female patient with an irregular, pigmented lesion of the oral mucosa that underwent excisional biopsy resulting in a diagnosis of intramucosal nevus.\n\n\nCONCLUSION\nNevus can appear in the oral mucosa and should be removed.\n\n\nCLINICAL SIGNIFICANCE\nIt is important for dental professionals to adequately categorize and treat pigmented lesions in the mouth.",
"title": ""
},
{
"docid": "c50cf41ef8cc85be0558f9132c60b1f5",
"text": "A System Architecture for Context-Aware Mobile Computing William Noah Schilit Computer applications traditionally expect a static execution environment. However, this precondition is generally not possible for mobile systems, where the world around an application is constantly changing. This thesis explores how to support and also exploit the dynamic configurations and social settings characteristic of mobile systems. More specifically, it advances the following goals: (1) enabling seamless interaction across devices; (2) creating physical spaces that are responsive to users; and (3) and building applications that are aware of the context of their use. Examples of these goals are: continuing in your office a program started at home; using a PDA to control someone else’s windowing UI; automatically canceling phone forwarding upon return to your office; having an airport overheaddisplay highlight the flight information viewers are likely to be interested in; easily locating and using the nearest printer or fax machine; and automatically turning off a PDA’s audible e-mail notification when in a meeting. The contribution of this thesis is an architecture to support context-aware computing; that is, application adaptation triggered by such things as the location of use, the collection of nearby people, the presence of accessible devices and other kinds of objects, as well as changes to all these things over time. Three key issues are addressed: (1) the information needs of applications, (2) where applications get various pieces of information and (3) how information can be efficiently distributed. A dynamic environment communication model is introduced as a general mechanism for quickly and efficiently learning about changes occurring in the environment in a fault tolerant manner. For purposes of scalability, multiple dynamic environment servers store user, device, and, for each geographic region, context information. In order to efficiently disseminate information from these components to applications, a dynamic collection of multicast groups is employed. The thesis also describes a demonstration system based on the Xerox PARCTAB, a wireless palmtop computer.",
"title": ""
},
{
"docid": "6c106d560d8894d941851386d96afe2b",
"text": "Cooperative vehicular networks require the exchange of positioning and basic status information between neighboring nodes to support higher layer protocols and applications, including active safety applications. The information exchange is based on the periodic transmission/reception of 1-hop broadcast messages on the so called control channel. The dynamic adaptation of the transmission parameters of such messages will be key for the reliable and efficient operation of the system. On one hand, congestion control protocols need to be applied to control the channel load, typically through the adaptation of the transmission parameters based on certain channel load metrics. On the other hand, awareness control protocols are also required to adequately support cooperative vehicular applications. Such protocols typically adapt the transmission parameters of periodic broadcast messages to ensure each vehicle's capacity to detect, and possibly communicate, with the relevant vehicles and infrastructure nodes present in its local neighborhood. To date, congestion and awareness control protocols have been normally designed and evaluated separately, although both will be required for the reliable and efficient operation of the system. To this aim, this paper proposes and evaluates INTERN, a new control protocol that integrates two congestion and awareness control processes. The simulation results obtained demonstrate that INTERN is able to satisfy the application's requirements of all vehicles, while effectively controlling the channel load.",
"title": ""
},
{
"docid": "4d1be9aebf7534cce625b95bde4696c6",
"text": "BlockChain (BC) has attracted tremendous attention due to its immutable nature and the associated security and privacy benefits. BC has the potential to overcome security and privacy challenges of Internet of Things (IoT). However, BC is computationally expensive, has limited scalability and incurs significant bandwidth overheads and delays which are not suited to the IoT context. We propose a tiered Lightweight Scalable BC (LSB) that is optimized for IoT requirements. We explore LSB in a smart home setting as a representative example for broader IoT applications. Low resource devices in a smart home benefit from a centralized manager that establishes shared keys for communication and processes all incoming and outgoing requests. LSB achieves decentralization by forming an overlay network where high resource devices jointly manage a public BC that ensures end-to-end privacy and security. The overlay is organized as distinct clusters to reduce overheads and the cluster heads are responsible for managing the public BC. LSB incorporates several optimizations which include algorithms for lightweight consensus, distributed trust and throughput management. Qualitative arguments demonstrate that LSB is resilient to several security attacks. Extensive simulations show that LSB decreases packet overhead and delay and increases BC scalability compared to relevant baselines.",
"title": ""
},
{
"docid": "417ce84b9a4359ac3fb59b6c6497b7db",
"text": "OBJECTIVE\nWe describe a novel human-machine interface for the control of a two-dimensional (2D) computer cursor using four inertial measurement units (IMUs) placed on the user's upper-body.\n\n\nAPPROACH\nA calibration paradigm where human subjects follow a cursor with their body as if they were controlling it with their shoulders generates a map between shoulder motions and cursor kinematics. This map is used in a Kalman filter to estimate the desired cursor coordinates from upper-body motions. We compared cursor control performance in a centre-out reaching task performed by subjects using different amounts of information from the IMUs to control the 2D cursor.\n\n\nMAIN RESULTS\nOur results indicate that taking advantage of the redundancy of the signals from the IMUs improved overall performance. Our work also demonstrates the potential of non-invasive IMU-based body-machine interface systems as an alternative or complement to brain-machine interfaces for accomplishing cursor control in 2D space.\n\n\nSIGNIFICANCE\nThe present study may serve as a platform for people with high-tetraplegia to control assistive devices such as powered wheelchairs using a joystick.",
"title": ""
},
{
"docid": "5c2b7f85bba45905c324f7d6a10e5e53",
"text": "We use the Sum of Squares method to develop new efficient algorithms for learning well-separated mixtures of Gaussians and robust mean estimation, both in high dimensions, that substantially improve upon the statistical guarantees achieved by previous efficient algorithms. Our contributions are: \n Mixture models with separated means: We study mixtures of poly(<i>k</i>)-many <i>k</i>-dimensional distributions where the means of every pair of distributions are separated by at least <i>k</i><sup>ε</sup>. In the special case of spherical Gaussian mixtures, we give a <i>k</i><sup><i>O</i>(1/ε)</sup>-time algorithm that learns the means assuming separation at least <i>k</i><sup>ε</sup>, for any ε> 0. This is the first algorithm to improve on greedy (“single-linkage”) and spectral clustering, breaking a long-standing barrier for efficient algorithms at separation <i>k</i><sup>1/4</sup>. \n Robust estimation: When an unknown (1−ε)-fraction of <i>X</i><sub>1</sub>,…,<i>X</i><sub><i>n</i></sub> are chosen from a sub-Gaussian distribution with mean µ but the remaining points are chosen adversarially, we give an algorithm recovering µ to error ε<sup>1−1/<i>t</i></sup> in time <i>k</i><sup><i>O</i>(<i>t</i>)</sup>, so long as sub-Gaussian-ness up to <i>O</i>(<i>t</i>) moments can be certified by a Sum of Squares proof. This is the first polynomial-time algorithm with guarantees approaching the information-theoretic limit for non-Gaussian distributions. Previous algorithms could not achieve error better than ε<sup>1/2</sup>. As a corollary, we achieve similar results for robust covariance estimation. \n Both of these results are based on a unified technique. Inspired by recent algorithms of Diakonikolas et al. in robust statistics, we devise an SDP based on the Sum of Squares method for the following setting: given <i>X</i><sub>1</sub>,…,<i>X</i><sub><i>n</i></sub> ∈ ℝ<sup><i>k</i></sup> for large <i>k</i> and <i>n</i> = poly(<i>k</i>) with the promise that a subset of <i>X</i><sub>1</sub>,…,<i>X</i><sub><i>n</i></sub> were sampled from a probability distribution with bounded moments, recover some information about that distribution.",
"title": ""
},
{
"docid": "443191f41aba37614c895ba3533f80ed",
"text": "De novo engineering of gene circuits inside cells is extremely difficult, and efforts to realize predictable and robust performance must deal with noise in gene expression and variation in phenotypes between cells. Here we demonstrate that by coupling gene expression to cell survival and death using cell–cell communication, we can programme the dynamics of a population despite variability in the behaviour of individual cells. Specifically, we have built and characterized a ‘population control’ circuit that autonomously regulates the density of an Escherichia coli population. The cell density is broadcasted and detected by elements from a bacterial quorum-sensing system, which in turn regulate the death rate. As predicted by a simple mathematical model, the circuit can set a stable steady state in terms of cell density and gene expression that is easily tunable by varying the stability of the cell–cell communication signal. This circuit incorporates a mechanism for programmed death in response to changes in the environment, and allows us to probe the design principles of its more complex natural counterparts.",
"title": ""
},
{
"docid": "b5e762a71f0b65c099410e081865d8cb",
"text": "In this paper we discuss a notation to describe task models, which can specify a wide range of temporal relationships among tasks. It is a compact and graphical notation, immediate both to use and understand. Its logical structure and the related automatic tool make it suitable for designing even large sized applications.",
"title": ""
},
{
"docid": "4ddad3c97359faf4b927167800fe77be",
"text": "Micro-expressions are facial expressions which are fleeting and reveal genuine emotions that people try to conceal. These are important clues for detecting lies and dangerous behaviors and therefore have potential applications in various fields such as the clinical field and national security. However, recognition through the naked eye is very difficult. Therefore, researchers in the field of computer vision have tried to develop micro-expression detection and recognition algorithms but lack spontaneous micro-expression databases. In this study, we attempted to create a database of spontaneous micro-expressions which were elicited from neutralized faces. Based on previous psychological studies, we designed an effective procedure in lab situations to elicit spontaneous micro-expressions and analyzed the video data with care to offer valid and reliable codings. From 1500 elicited facial movements filmed under 60fps, 195 micro-expressions were selected. These samples were coded so that the first, peak and last frames were tagged. Action units (AUs) were marked to give an objective and accurate description of the facial movements. Emotions were labeled based on psychological studies and participants' self-report to enhance the validity.",
"title": ""
},
{
"docid": "bc1f7e30b8dcef97c1d8de2db801c4f6",
"text": "In this paper a novel method is introduced based on the use of an unsupervised version of kernel least mean square (KLMS) algorithm for solving ordinary differential equations (ODEs). The algorithm is unsupervised because here no desired signal needs to be determined by user and the output of the model is generated by iterating the algorithm progressively. However, there are several new implementation, fast convergence and also little error. Furthermore, it is also a KLMS with obvious characteristics. In this paper the ability of KLMS is used to estimate the answer of ODE. First a trial solution of ODE is written as a sum of two parts, the first part satisfies the initial condition and the second part is trained using the KLMS algorithm so as the trial solution solves the ODE. The accuracy of the method is illustrated by solving several problems. Also the sensitivity of the convergence is analyzed by changing the step size parameters and kernel functions. Finally, the proposed method is compared with neuro-fuzzy [21] approach. Crown Copyright & 2011 Published by Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "7b239e83dea095bad2229d66596982c5",
"text": "In this paper, we discuss the application of concept of data quality to big data by highlighting how much complex is to define it in a general way. Already data quality is a multidimensional concept, difficult to characterize in precise definitions even in the case of well-structured data. Big data add two further dimensions of complexity: (i) being “very” source specific, and for this we adopt the interesting UNECE classification, and (ii) being highly unstructured and schema-less, often without golden standards to refer to or very difficult to access. After providing a tutorial on data quality in traditional contexts, we analyze big data by providing insights into the UNECE classification, and then, for each type of data source, we choose a specific instance of such a type (notably deep Web data, sensor-generated data, and Twitters/short texts) and discuss how quality dimensions can be defined in these cases. The overall aim of the paper is therefore to identify further research directions in the area of big data quality, by providing at the same time an up-to-date state of the art on data quality.",
"title": ""
},
{
"docid": "0aaed4206e4155c1535357a98a3d5119",
"text": "OBJECTIVES\nTo investigate the appearance, location and morphology of mandibular lingual foramina (MLF) in the Chinese Han population using cone beam computed tomography (CBCT).\n\n\nMETHODS\nCBCT images of the mandibular body in 200 patients (103 female patients and 97 male patients, age range 10-70 years) were retrospectively analysed to identify MLF. The canal number, location and direction were assessed. Additionally, the diameter of the lingual foramen, the distance between the alveolar crest and the lingual foramen, the distance between the tooth apex and the lingual foramen and the distance from the mandibular border to the lingual foramen were examined to describe the MLF characteristics. Gender and age differences with respect to foramina were also studied.\n\n\nRESULTS\nCBCT can be utilized to visualise lingual foramina. In this study, 683 lingual foramina were detected in 200 CBCT scans, with 538 (78.77%) being ≤1 mm in diameter and 145 (21.23%) being >1 mm. In total, 85.07% of MLF are median lingual canals (MLC) and 14.93% are lateral lingual canals (LLC). Two typical types of lingual foramina were identified according to their relationship with the tooth apex. Most lingual foramina (74.08%) were found below the tooth apex, and those above the tooth apex were much smaller in diameter. Male patients had statistically larger lingual foramina. The distance between the lingual foramen and the tooth apex changed with increasing age.\n\n\nCONCLUSIONS\nDetermination of the presence, position and size of lingual foramina is important before performing a surgical procedure. Careful implant-prosthetic treatment planning is particularly important in male and/or elderly patients because of the structural characteristics of their lingual foramina.",
"title": ""
},
{
"docid": "bc269e27e99f8532c7bd41b9ad45ac9a",
"text": "There are millions of users who tag multimedia content, generating a large vocabulary of tags. Some tags are frequent, while other tags are rarely used following a long tail distribution. For frequent tags, most of the multimedia methods that aim to automatically understand audio-visual content, give excellent results. It is not clear, however, how these methods will perform on rare tags. In this paper we investigate what social tags constitute the long tail and how they perform on two multimedia retrieval scenarios, tag relevance and detector learning. We show common valuable tags within the long tail, and by augmenting them with semantic knowledge, the performance of tag relevance and detector learning improves substantially.",
"title": ""
},
{
"docid": "770ec09c3a1da31ca983ad4398a7d5d0",
"text": "Plant growth and productivity are often limited by high root-zone temperatures (RZT) which restricts the growth of subtropical and temperate crops in the tropics. High RZT temperature coupled with low growth irradiances during cloudy days which mainly lead to poor root development and thus causes negative impact on the mineral uptake and assimilation. However, certain subtropical and temperate crops have successfully been grown aeroponically in the tropics by simply cooling their roots while their aerial portions are subjected to hot fluctuating ambient temperatures. This review first discusses the effects of RZT and growth irradiance on root morphology and its biomass, the effect of RZT on uptake and transport of several macro nutrients such as N [nitrogen, mainly nitrate, (NO3 )], P (H2PO4 , phosphate), K (potassium) and Ca (calcium), and micro nutrient Fe (iron) under different growth irradiances. The impact of RZT and growth irradiance on the assimilation of NO3 (the form of N nutrient given to the aeroponically grown plants) and the site of NO3 assimilation are also addressed. _____________________________________________________________________________________________________________",
"title": ""
},
{
"docid": "6b4b33878553d4b36a583b56c9b13c02",
"text": "BACKGROUND\nIn this study we investigated gastrointestinal (GI) bleeding and its relationship to arteriovenous malformations (AVMs) in patients with the continuous-flow HeartMate II (HMII) left ventricular assist device (LVAD).\n\n\nMETHODS\nThe records of 172 patients who received HMII support between November 2003 and June 2010 were reviewed. Patients were considered to have GI bleeding if they had 1 or more of the following symptoms: guaiac-positive stool; hematemesis; melena; active bleeding at the time of endoscopy or colonoscopy; and blood within the stomach at endoscopy or colonoscopy. The symptom(s) had to be accompanied by a decrease of >1 g/dl in the patient's hemoglobin level. The location of the bleeding was identified as upper GI tract, lower GI tract or both according to esophagogastroduodenoscopy, colonoscopy, small-bowel enteroscopy or mesenteric angiography. Post-LVAD implantation anti-coagulation therapy consisted of warfarin, aspirin and dipyridamole.\n\n\nRESULTS\nThirty-two of the 172 patients (19%) had GI bleeding after 63 ± 62 (range 8 to 241) days of HMII support. Ten patients had GI bleeding from an AVM; these included 3 patients who had 2 bleeding episodes and 2 patients who had 5 episodes each. Sixteen patients had upper GI bleeding (10 hemorrhagic gastritis, 4 gastric AVM, 2 Mallory-Weiss syndrome), 15 had lower GI bleeding (6 diverticulosis, 6 jejunal AVM, 1 drive-line erosion of the colon, 1 sigmoid polyp, 1 ischemic colitis) and 1 had upper and lower GI bleeding (1 colocutaneous and gastrocutaneous fistula). All GI bleeding episodes were successfully managed medically.\n\n\nCONCLUSIONS\nArteriovenous malformations can cause GI bleeding in patients with continuous-flow LVADs. In all cases in this series, GI bleeding was successfully managed without the need for surgical intervention.",
"title": ""
},
{
"docid": "5b31efe9dc8e79d975a488c2b9084aea",
"text": "Person identification in TV series has been a popular research topic over the last decade. In this area, most approaches either use manually annotated data or extract character supervision from a combination of subtitles and transcripts. However, both approaches have key drawbacks that hinder application of these methods at a large scale - manual annotation is expensive and transcripts are often hard to obtain. We investigate the topic of automatically labeling all character appearances in TV series using information obtained solely from subtitles. This task is extremely difficult as the dialogs between characters provide very sparse and weakly supervised data. We address these challenges by exploiting recent advances in face descriptors and Multiple Instance Learning methods. We propose methods to create MIL bags and evaluate and discuss several MIL techniques. The best combination achieves an average precision over 80% on three diverse TV series. We demonstrate that only using subtitles provides good results on identifying characters in TV series and wish to encourage the community towards this problem.",
"title": ""
},
{
"docid": "1b27922ab1693a15d230301c3a868afd",
"text": "Model based iterative reconstruction (MBIR) algorithms for low-dose X-ray CT are computationally complex because of the repeated use of the forward and backward projection. Inspired by this success of deep learning in computer vision applications, we recently proposed a deep convolutional neural network (CNN) for low-dose X-ray CT and won the second place in 2016 AAPM Low-Dose CT Grand Challenge. However, some of the texture are not fully recovered, which was unfamiliar to the radiologists. To cope with this problem, here we propose a direct residual learning approach on directional wavelet domain to solve this problem and to improve the performance against previous work. In particular, the new network estimates the noise of each input wavelet transform, and then the de-noised wavelet coefficients are obtained by subtracting the noise from the input wavelet transform bands. The experimental results confirm that the proposed network has significantly improved performance, preserving the detail texture of the original images.",
"title": ""
},
{
"docid": "bb7ba369cd3baf1f5ba26aef7b5574fb",
"text": "Static computer vision techniques enable non-intrusive observation and analysis of biometrics such as eye blinks. However, ambiguous eye behaviors such as partial blinks and asymmetric eyelid movements present problems that computer vision techniques relying on static appearance alone cannot solve reliably. Image flow analysis enables reliable and efficient interpretation of these ambiguous eye blink behaviors. In this paper we present a method for using image flow analysis to compute problematic eye blink parameters. The flow analysis produces the magnitude and direction of the eyelid movement. A deterministic finite state machine uses the eyelid movement data to compute blink parameters (e.g., blink count, blink rate, and other transitional statistics) for use in human computer interaction applications across a wide range of disciplines. We conducted extensive experiments employing this method on approximately 750K color video frames of five subjects",
"title": ""
},
{
"docid": "95a376ec68ac3c4bd6b0fd236dca5bcd",
"text": "Long-term suppression of postprandial glucose concentration is an important dietary strategy for the prevention and treatment of type 2 diabetes. Because previous reports have suggested that seaweed may exert anti-diabetic effects in animals, the effects of Wakame or Mekabu intake with 200 g white rice, 50 g boiled soybeans, 60 g potatoes, and 40 g broccoli on postprandial glucose, insulin and free fatty acid levels were investigated in healthy subjects. Plasma glucose levels at 30 min and glucose area under the curve (AUC) at 0-30 min after the Mekabu meal were significantly lower than that after the control meal. Plasma glucose and glucose AUC were not different between the Wakame and control meals. Postprandial serum insulin and its AUC and free fatty acid concentration were not different among the three meals. In addition, fullness, satisfaction, and wellness scores were not different among the three meals. Thus, consumption of 70 g Mekabu with a white rice-based breakfast reduces postprandial glucose concentration.",
"title": ""
},
{
"docid": "de6cd32ceadfd5f4ddd0a20cc4ce36e1",
"text": "This paper presents a novel time-domain algorithm for detecting and attenuating the acoustic effect of wind noise in speech signals originating from mobile terminals. The detection part makes use of metrics that exploits the properties of the spectral envelop of wind noise as well as its non-periodic and non-harmonic nature. LPC analyses of various orders are carried out and the results used to distinguish between wind and speech frames and to estimate the magnitude and location of the wind noise ‘resonance’. The suppression part entails constructing a parameterized postfilter of an appropriate order having a ‘null’ where the wind noise ‘resonance’ is. Wind-only frames are used to estimate the wind noise energy, from which the emphasis parameters of the post-filter are adjusted to provide an appropriate attenuation. The proposed scheme may be combined with background-noise suppression algorithms, or with speech-formant-enhancing post-filters in the context of a speech codec.",
"title": ""
}
] | scidocsrr |
f9c78ed4a0d9436847d63c2d620452d7 | Automatic Learning of Fine Operating Rules for Online Power System Security Control | [
{
"docid": "56785d7f01cb2e1ab8754cbb931a9d0b",
"text": "This paper describes an online dynamic security assessment scheme for large-scale interconnected power systems using phasor measurements and decision trees. The scheme builds and periodically updates decision trees offline to decide critical attributes as security indicators. Decision trees provide online security assessment and preventive control guidelines based on real-time measurements of the indicators from phasor measurement units. The scheme uses a new classification method involving each whole path of a decision tree instead of only classification results at terminal nodes to provide more reliable security assessment results for changes in system conditions. The approaches developed are tested on a 2100-bus, 2600-line, 240-generator operational model of the Entergy system. The test results demonstrate that the proposed scheme is able to identify key security indicators and give reliable and accurate online dynamic security predictions.",
"title": ""
}
] | [
{
"docid": "438e934fd2b149c0c756bbf97216cb1f",
"text": "NoSQL databases manage the bulk of data produced by modern Web applications such as social networks. This stems from their ability to partition and spread data to all available nodes, allowing NoSQL systems to scale. Unfortunately, current solutions' scale out is oblivious to the underlying data access patterns, resulting in both highly skewed load across nodes and suboptimal node configurations.\n In this paper, we first show that judicious placement of HBase partitions taking into account data access patterns can improve overall throughput by 35%. Next, we go beyond current state of the art elastic systems limited to uninformed replica addition and removal by: i) reconfiguring existing replicas according to access patterns and ii) adding replicas specifically configured to the expected access pattern.\n MeT is a prototype for a Cloud-enabled framework that can be used alone or in conjunction with OpenStack for the automatic and heterogeneous reconfiguration of a HBase deployment. Our evaluation, conducted using the YCSB workload generator and a TPC-C workload, shows that MeT is able to i) autonomously achieve the performance of a manual configured cluster and ii) quickly reconfigure the cluster according to unpredicted workload changes.",
"title": ""
},
{
"docid": "f2daa3fd822be73e3663520cc6afe741",
"text": "Low health literacy (LHL) remains a formidable barrier to improving health care quality and outcomes. Given the lack of precision of single demographic characteristics to predict health literacy, and the administrative burden and inability of existing health literacy measures to estimate health literacy at a population level, LHL is largely unaddressed in public health and clinical practice. To help overcome these limitations, we developed two models to estimate health literacy. We analyzed data from the 2003 National Assessment of Adult Literacy (NAAL), using linear regression to predict mean health literacy scores and probit regression to predict the probability of an individual having ‘above basic’ proficiency. Predictors included gender, age, race/ethnicity, educational attainment, poverty status, marital status, language spoken in the home, metropolitan statistical area (MSA) and length of time in U.S. All variables except MSA were statistically significant, with lower educational attainment being the strongest predictor. Our linear regression model and the probit model accounted for about 30% and 21% of the variance in health literacy scores, respectively, nearly twice as much as the variance accounted for by either education or poverty alone. Multivariable models permit a more accurate estimation of health literacy than single predictors. Further, such models can be applied to readily available administrative or census data to produce estimates of average health literacy and identify communities that would benefit most from appropriate, targeted interventions in the clinical setting to address poor quality care and outcomes related to LHL.",
"title": ""
},
{
"docid": "86222b9b30111a606c8d0df2143f70cd",
"text": "Wylie C. Hembree, Peggy T. Cohen-Kettenis, Louis Gooren, Sabine E. Hannema, Walter J. Meyer, M. Hassan Murad, Stephen M. Rosenthal, Joshua D. Safer, Vin Tangpricha, and Guy G. T’Sjoen New York Presbyterian Hospital, Columbia University Medical Center, New York, New York 10032 (Retired); VU University Medical Center, 1007 MB Amsterdam, Netherlands (Retired); VU University Medical Center, 1007 MB Amsterdam, Netherlands (Retired); Leiden University Medical Center, 2300 RC Leiden, Netherlands; University of Texas Medical Branch, Galveston, Texas 77555; Mayo Clinic EvidenceBased Practice Center, Rochester, Minnesota 55905; University of California San Francisco, Benioff Children’s Hospital, San Francisco, California 94143; Boston University School of Medicine, Boston, Massachusetts 02118; Emory University School of Medicine and the Atlanta VA Medical Center, Atlanta, Georgia 30322; and Ghent University Hospital, 9000 Ghent, Belgium",
"title": ""
},
{
"docid": "7bd091ed5539b90e5864308895b0d5d4",
"text": "We discuss the design of a high-performance field programmable gate array (FPGA) architecture that efficiently prototypes asynchronous (clockless) logic. In this FPGA architecture, low-level application logic is described using asynchronous dataflow functions that obey a token-based compute model. We implement these dataflow functions using finely pipelined asynchronous circuits that achieve high computation rates. This asynchronous dataflow FPGA architecture maintains most of the performance benefits of a custom asynchronous design, while also providing postfabrication logic reconfigurability. We report results for two asynchronous dataflow FPGA designs that operate at up to 400 MHz in a typical TSMC 0.25 /spl mu/m CMOS process.",
"title": ""
},
{
"docid": "b262ea4a0a8880d044c77acc84b0c859",
"text": "Online social networks may be important avenues for building and maintaining social capital as adult’s age. However, few studies have explicitly examined the role online communities play in the lives of seniors. In this exploratory study, U.S. seniors were interviewed to assess the impact of Facebook on social capital. Interpretive thematic analysis reveals Facebook facilitates connections to loved ones and may indirectly facilitate bonding social capital. Awareness generated via Facebook often lead to the sharing and receipt of emotional support via other channels. As such, Facebook acted as a catalyst for increasing social capital. The implication of “awareness” as a new dimension of social capital theory is discussed. Additionally, Facebook was found to have potential negative impacts on seniors’ current relationships due to open access to personal information. Finally, common concerns related to privacy, comfort with technology, and inappropriate content were revealed.",
"title": ""
},
{
"docid": "b715631367001fb60b4aca9607257923",
"text": "This paper describes a new predictive algorithm that can be used for programming large arrays of analog computational memory elements within 0.2% of accuracy for 3.5 decades of currents. The average number of pulses required are 7-8 (20 mus each). This algorithm uses hot-electron injection for accurate programming and Fowler-Nordheim tunneling for global erase. This algorithm has been tested for programming 1024times16 and 96times16 floating-gate arrays in 0.25 mum and 0.5 mum n-well CMOS processes, respectively",
"title": ""
},
{
"docid": "889e20ac7d27caeb0c7158f194161d03",
"text": "We show that standard ResNet architectures can be made invertible, allowing the same model to be used for classification, density estimation, and generation. Typically, enforcing invertibility requires partitioning dimensions or restricting network architectures. In contrast, our approach only requires adding a simple normalization step during training, already available in standard frameworks. Invertible ResNets define a generative model which can be trained by maximum likelihood on unlabeled data. To compute likelihoods, we introduce a tractable approximation to the Jacobian log-determinant of a residual block. Our empirical evaluation shows that invertible ResNets perform competitively with both stateof-the-art image classifiers and flow-based generative models, something that has not been previously achieved with a single architecture.",
"title": ""
},
{
"docid": "8722d618a6bd890826aa48311e85915c",
"text": "Recent non-volatile memory (NVM) technologies, such as PCM, STT-MRAM and ReRAM, can act as both main memory and storage. This has led to research into NVM programming models, where persistent data structures remain in memory and are accessed directly through CPU loads and stores. Existing mechanisms for transactional updates are not appropriate in such a setting as they are optimized for block-based storage. We present REWIND, a usermode library approach to managing transactional updates directly from user code written in an imperative generalpurpose language. REWIND relies on a custom persistent in-memory data structure for the log that supports recoverable operations on itself. The scheme also employs a combination of non-temporal updates, persistent memory fences, and lightweight logging. Experimental results on synthetic transactional workloads and TPC-C show the overhead of REWIND compared to its non-recoverable equivalent to be within a factor of only 1.5 and 1.39 respectively. Moreover, REWIND outperforms state-of-the-art approaches for data structure recoverability as well as general purpose and NVM-aware DBMS-based recovery schemes by up to two orders of magnitude.",
"title": ""
},
{
"docid": "28cba5bf535dabdfadfd1f634a574d52",
"text": "There are several complex business processes in the higher education. As the number of university students has been tripled in Hungary the automation of these task become necessary. The Near Field Communication (NFC) technology provides a good opportunity to support the automated execution of several education related processes. Recently a new challenge is identified at the Budapest University of Technology and Economics. As most of the lecture notes had become available in electronic format the students especially the inexperienced freshman ones did not attend to the lectures significantly decreasing the rate of successful exams. This drove to the decision to elaborate an accurate and reliable information system for monitoring the student's attendance at the lectures. Thus we have developed a novel, NFC technology based business use case of student attendance monitoring. In order to meet the requirements of the use case we have implemented a highly autonomous distributed environment assembled by NFC enabled embedded devices, so-called contactless terminals and a scalable backoffice. Beside the opportunity of contactless card based student identification the terminals support biometric identification by fingerprint reading. These features enable the implementation of flexible and secure identification scenarios. The attendance monitoring use case has been tested in a pilot project involving about 30 access terminals and more that 1000 students. In this paper we are introducing the developed attendance monitoring use case, the implemented NFC enabled system, and the experiences gained during the pilot project.",
"title": ""
},
{
"docid": "ae95673f736e76b4089ba839b19925de",
"text": "Cloud computing is emerging as a promising field offering a variety of computing services to end users. These services are offered at different prices using various pricing schemes and techniques. End users will favor the service provider offering the best QoS with the lowest price. Therefore, applying a fair pricing model will attract more customers and achieve higher revenues for service providers. This work focuses on comparing many employed and proposed pricing models techniques and highlights the pros and cons of each. The comparison is based on many aspects such as fairness, pricing approach, and utilization period. Such an approach provides a solid ground for designing better models in the future. We have found that most approaches are theoretical and not implemented in the real market, although their simulation results are very promising. Moreover, most of these approaches are biased toward the service provider.",
"title": ""
},
{
"docid": "0486fd4a26a6afb2611da907e7fd3627",
"text": "3D Morphable Face Models are a powerful tool in computer vision. They consist of a PCA model of face shape and colour information and allow to reconstruct a 3D face from a single 2D image. 3D Morphable Face Models are used for 3D head pose estimation, face analysis, face recognition, and, more recently, facial landmark detection and tracking. However, they are not as widely used as 2D methods the process of building and using a 3D model is much more involved. In this paper, we present the Surrey Face Model, a multi-resolution 3D Morphable Model that we make available to the public for non-commercial purposes. The model contains different mesh resolution levels and landmark point annotations as well as metadata for texture remapping. Accompanying the model is a lightweight open-source C++ library designed with simplicity and ease of integration as its foremost goals. In addition to basic functionality, it contains pose estimation and face frontalisation algorithms. With the tools presented in this paper, we aim to close two gaps. First, by offering different model resolution levels and fast fitting functionality, we enable the use of a 3D Morphable Model in time-critical applications like tracking. Second, the software library makes it easy for the community to adopt the 3D Morphable Face Model in their research, and it offers a public place for collaboration.",
"title": ""
},
{
"docid": "68649624bbd2aa73acd98df12f06fd28",
"text": "Grey wolf optimizer (GWO) is one of recent metaheuristics swarm intelligence methods. It has been widely tailored for a wide variety of optimization problems due to its impressive characteristics over other swarm intelligence methods: it has very few parameters, and no derivation information is required in the initial search. Also it is simple, easy to use, flexible, scalable, and has a special capability to strike the right balance between the exploration and exploitation during the search which leads to favourable convergence. Therefore, the GWO has recently gained a very big research interest with tremendous audiences from several domains in a very short time. Thus, in this review paper, several research publications using GWO have been overviewed and summarized. Initially, an introductory information about GWO is provided which illustrates the natural foundation context and its related optimization conceptual framework. The main operations of GWO are procedurally discussed, and the theoretical foundation is described. Furthermore, the recent versions of GWO are discussed in detail which are categorized into modified, hybridized and paralleled versions. The main applications of GWO are also thoroughly described. The applications belong to the domains of global optimization, power engineering, bioinformatics, environmental applications, machine learning, networking and image processing, etc. The open source software of GWO is also provided. The review paper is ended by providing a summary conclusion of the main foundation of GWO and suggests several possible future directions that can be further investigated.",
"title": ""
},
{
"docid": "3f9baef82df0a5bd2f486a6bb62c8949",
"text": "There is an explosion in the number of labs analyzing cannabinoids in marijuana (Cannabis sativa L., Cannabaceae) but existing methods are inefficient, require expert analysts, and use large volumes of potentially environmentally damaging solvents. The objective of this work was to develop and validate an accurate method for analyzing cannabinoids in cannabis raw materials and finished products that is more efficient and uses fewer toxic solvents. An HPLC-DAD method was developed for eight cannabinoids in cannabis flowers and oils using a statistically guided optimization plan based on the principles of green chemistry. A single-laboratory validation determined the linearity, selectivity, accuracy, repeatability, intermediate precision, limit of detection, and limit of quantitation of the method. Amounts of individual cannabinoids above the limit of quantitation in the flowers ranged from 0.02 to 14.9% w/w, with repeatability ranging from 0.78 to 10.08% relative standard deviation. The intermediate precision determined using HorRat ratios ranged from 0.3 to 2.0. The LOQs for individual cannabinoids in flowers ranged from 0.02 to 0.17% w/w. This is a significant improvement over previous methods and is suitable for a wide range of applications including regulatory compliance, clinical studies, direct patient medical services, and commercial suppliers.",
"title": ""
},
{
"docid": "dc736509fbed0afcebc967ca31ffc4d5",
"text": "and William K. Wootters IBM Research Division, T. J. Watson Research Center, Yorktown Heights, New York 10598 Norman Bridge Laboratory of Physics 12-33, California Institute of Technology, Pasadena, California 91125 Département d’Informatique et de Recherche Ope ́rationelle, Succursale Centre-Ville, Montre ́al, Canada H3C 3J7 AT&T Shannon Laboratory, 180 Park Avenue, Building 103, Florham Park, New Jersey 07932 Physics Department, Williams College, Williamstown, Massachusetts 01267 ~Received 17 June 1998 !",
"title": ""
},
{
"docid": "c7d17145605864aa28106c14954dcae5",
"text": "Person re-identification (ReID) is to identify pedestrians observed from different camera views based on visual appearance. It is a challenging task due to large pose variations, complex background clutters and severe occlusions. Recently, human pose estimation by predicting joint locations was largely improved in accuracy. It is reasonable to use pose estimation results for handling pose variations and background clutters, and such attempts have obtained great improvement in ReID performance. However, we argue that the pose information was not well utilized and hasn't yet been fully exploited for person ReID. In this work, we introduce a novel framework called Attention-Aware Compositional Network (AACN) for person ReID. AACN consists of two main components: Pose-guided Part Attention (PPA) and Attention-aware Feature Composition (AFC). PPA is learned and applied to mask out undesirable background features in pedestrian feature maps. Furthermore, pose-guided visibility scores are estimated for body parts to deal with part occlusion in the proposed AFC module. Extensive experiments with ablation analysis show the effectiveness of our method, and state-of-the-art results are achieved on several public datasets, including Market-1501, CUHK03, CUHK01, SenseReID, CUHK03-NP and DukeMTMC-reID.",
"title": ""
},
{
"docid": "fea1bc4b60abe7435c4953f2eb4b5dae",
"text": "Facing a large number of personal photos and limited resource of mobile devices, cloud plays an important role in photo storing, sharing and searching. Meanwhile, some recent reputation damage and stalk events caused by photo leakage increase people's concern about photo privacy. Though most would agree that photo search function and privacy are both valuable, few cloud system supports both of them simultaneously. The center of such an ideal system is privacy-preserving outsourced image similarity measurement, which is extremely challenging when the cloud is untrusted and a high extra overhead is disliked. In this work, we introduce a framework POP, which enables privacy-seeking mobile device users to outsource burdensome photo sharing and searching safely to untrusted servers. Unauthorized parties, including the server, learn nothing about photos or search queries. This is achieved by our carefully designed architecture and novel non-interactive privacy-preserving protocols for image similarity computation. Our framework is compatible with the state-of-the-art image search techniques, and it requires few changes to existing cloud systems. For efficiency and good user experience, our framework allows users to define personalized private content by a simple check-box configuration and then enjoy the sharing and searching services as usual. All privacy protection modules are transparent to users. The evaluation of our prototype implementation with 31,772 real-life images shows little extra communication and computation overhead caused by our system.",
"title": ""
},
{
"docid": "e63836b5053b7f56d5ad5081a7ef79b7",
"text": "This paper presents interfaces for exploring large collections of fonts for design tasks. Existing interfaces typically list fonts in a long, alphabetically-sorted menu that can be challenging and frustrating to explore. We instead propose three interfaces for font selection. First, we organize fonts using high-level descriptive attributes, such as \"dramatic\" or \"legible.\" Second, we organize fonts in a tree-based hierarchical menu based on perceptual similarity. Third, we display fonts that are most similar to a user's currently-selected font. These tools are complementary; a user may search for \"graceful\" fonts, select a reasonable one, and then refine the results from a list of fonts similar to the selection. To enable these tools, we use crowdsourcing to gather font attribute data, and then train models to predict attribute values for new fonts. We use attributes to help learn a font similarity metric using crowdsourced comparisons. We evaluate the interfaces against a conventional list interface and find that our interfaces are preferred to the baseline. Our interfaces also produce better results in two real-world tasks: finding the nearest match to a target font, and font selection for graphic designs.",
"title": ""
},
{
"docid": "d00957d93af7b2551073ba84b6c0f2a6",
"text": "High demand for computation resources severely hinders deployment of large-scale Deep Neural Networks (DNN) in resource constrained devices. In this work, we propose a Structured Sparsity Learning (SSL) method to regularize the structures (i.e., filters, channels, filter shapes, and layer depth) of DNNs. SSL can: (1) learn a compact structure from a bigger DNN to reduce computation cost; (2) obtain a hardware-friendly structured sparsity of DNN to efficiently accelerate the DNN’s evaluation. Experimental results show that SSL achieves on average 5.1× and 3.1× speedups of convolutional layer computation of AlexNet against CPU and GPU, respectively, with off-the-shelf libraries. These speedups are about twice speedups of non-structured sparsity; (3) regularize the DNN structure to improve classification accuracy. The results show that for CIFAR-10, regularization on layer depth can reduce 20 layers of a Deep Residual Network (ResNet) to 18 layers while improve the accuracy from 91.25% to 92.60%, which is still slightly higher than that of original ResNet with 32 layers. For AlexNet, structure regularization by SSL also reduces the error by ∼ 1%. Our source code can be found at https://github.com/wenwei202/caffe/tree/scnn",
"title": ""
},
{
"docid": "d03390ba2dacef4b657e724c019b2b66",
"text": "Recent efforts to add new services to the Internet have increased interest in software-based routers that are easy to extend and evolve. This paper describes our experiences using emerging network processors---in particular, the Intel IXP1200---to implement a router. We show it is possible to combine an IXP1200 development board and a PC to build an inexpensive router that forwards minimum-sized packets at a rate of 3.47Mpps. This is nearly an order of magnitude faster than existing pure PC-based routers, and sufficient to support 1.77Gbps of aggregate link bandwidth. At lesser aggregate line speeds, our design also allows the excess resources available on the IXP1200 to be used robustly for extra packet processing. For example, with 8 × 100Mbps links, 240 register operations and 96 bytes of state storage are available for each 64-byte packet. Using a hierarchical architecture we can guarantee line-speed forwarding rates for simple packets with the IXP1200, and still have extra capacity to process exceptional packets with the Pentium. Up to 310Kpps of the traffic can be routed through the Pentium to receive 1510 cycles of extra per-packet processing.",
"title": ""
},
{
"docid": "239a257b6f4962118f590f76114233b0",
"text": "Many problems of computer vision are (mathematically) ill-posed in the sense that there are many solutions; those problems are therefore in need of some form of regularization that guarantees a sensible and unique solution. This is also true for problems in low-level vision, which are addressing visual information at a basic level (e.g. pixels of an image), and are of interest for this work. Markov Random Fields (MRFs) are widely used probabilistic models of “prior knowledge”, which are used for regularization in a variety of computer vision problems, in particular those in low-level vision; we focus on generic MRF models for natural images and apply them to image restoration tasks. Learning MRFs from training data with a popular approach like the generic maximum likelihood (ML) method is often difficult, however, because of its computational complexity and the requirement to draw samples from the MRF. Because of these difficulties, a number of alternative learning methods have been proposed over the years, of which score matching (SM) is a promising one that has not been properly explored in the context of MRF models. Armed with an efficient sampler, we propose a flexible MRF model for natural images that we train under various circumstances. Instead of evaluating MRFs using a specific application and inference technique, as is common in the literature, we compare them in a fully application-neutral setting by means of their generative properties, i.e. how well they capture the statistics of natural images. We find that estimation with score matching is problematic for MRF image priors, and tentatively attribute this to the use of heavy-tailed potentials, which are required for MRF models to match the statistics of natural images. Hence, we also take a different route and exploit our efficient sampler to improve learning with contrastive divergence (CD), an efficient learning method closely related to ML, which has successfully been applied to MRF parameter learning in the past. We let score matching and contrastive divergence compete to learn the parameters of MRFs, which enables us to better understand the weaknesses and strengths of both methods. Using contrastive divergence, we learn MRFs that capture the statistics of natural images very well. We additionally find that popular MRF models from the literature exhibit poor generative properties, despite their good application performance in the context of maximum a-posteriori (MAP) estimation; they surprisingly even outperform our good generative models. By computing the posterior mean (MMSE) using sampling, we are able to achieve excellent results in image restoration tasks with our applicationneutral generative MRFs, that can even compete with application-specific discriminative approaches. Zusammenfassung Viele Probleme des Maschinellen Sehens sind (mathematisch) nicht korrekt gestellt in dem Sinne, dass es meist viele Lösungen gibt; solche Probleme benötigen deshalb eine gewisse Form der Regularisierung, die eine vernünftige und eindeutige Lösung garantiert. Das gilt auch für Probleme im Bereich “Low-Level Vision”, die sich mit visuellen Information auf einem niedrigen Level befassen (z.B. Pixel eines Bildes) und von Belang für diese Arbeit sind. 
Markov Random Fields (MRFs) sind weithin genutzte probabilistische Modelle von “Vorwissen”, die für Regularisierung in vielfältigen Problemen des Maschinellen Sehens verwendet werden, insbesondere jene in “Low-Level Vision”; wir konzentrieren uns auf generische MRF-Modelle für natürliche Bilder und wenden diese auf Probleme der Bildwiederherstellung an. MRFs mit beliebten Ansätzen wie der allgemeinen Maximum-Likelihood (ML) Methode von Trainingsdaten zu lernen ist jedoch oft schwer, angesichts des Rechenaufwands und der Anforderung Stichproben des MRF-Modells zu produzieren (“Sampling”). Diese Schwierigkeiten haben dazu geführt dass im Laufe der Jahre einige alternative iii Lernverfahren vorgeschlagen wurden, von denen Score Matching (SM) ein vielversprechendes ist, das jedoch im Kontext von MRFs nicht gründlich erforscht wurde. Ausgerüstet mit einem effizienten Sampler schlagen wir ein flexibles MRF-Modell für natürliche Bilder vor, welches wir unter verschiedenen Gegebenheiten trainieren. Anstatt MRFs anhand einer Kombination von spezifischer Anwendung und Inferenzverfahren zu bewerten, wie in der Literatur üblich, vergleichen wir sie in einem vollkommen anwendungsneutralem Rahmen durch ihre generativen Eigenschaften, d.h. wie gut sie die statistischen Eigenschaften von natürlichen Bildern modellieren. Wir stellen fest dass Score Matching problematisch für das Lernen von MRF-Modellen von Bildern ist, und schreiben dies vorläufig der Verwendung von Heavy-tailed-Verteilungen zu, welche benötigt werden um die statistischen Eigenschaften von natürlichen Bildern mit MRFs zu modellieren. Deshalb schlagen wir auch einen anderen Weg ein und verwenden unseren effizienten Sampler um das Lernen mit Contrastive Divergence (CD) zu verbessern, welches ein effizientes Lernverfahren ähnlich der MLMethode ist und bereits in der Vergangenheit erfolgreich zum Lernen von MRFs verwendet wurde. Wir lassen Score Matching und Contrastive Divergence gegeneinander antreten die Parameter von MRFs zu lernen, was uns ermöglicht die Stärken und Schwächen beider Verfahren besser zu verstehen. Mittels Contrastive Divergence lernen wir MRFs welche die statistischen Eigenschaften von natürlichen Bildern sehr gut modellieren. Wir stellen zudem fest dass populäre MRF-Modelle aus der Literatur schlechte generative Eigenschaften aufweisen, ungeachtet ihrer guten Anwendungs-Ergebnisse im Zusammenhang mit Maximum-A-Posteriori (MAP) Schätzung; sie sind erstaunlicherweise sogar besser als unsere guten generativen Modelle. Durch Berechnung des Erwartungswertes der A-posterioriVerteilung (MMSE) mittels Sampling erzielen unsere anwendungsneutralen generativen MRFs exzellente Resultate in Bildwiederherstellungs-Aufgaben und können sogar mit anwendungsspezifischen diskriminativen Ansätzen konkurrieren.",
"title": ""
}
] | scidocsrr |
741eff4391228e2d1bbcd1937f7b9170 | The Space of Possible Mind Designs | [
{
"docid": "f76808350f95de294c2164feb634465a",
"text": "By far the greatest danger of Artificial Intelligence is that people conclude too early that they understand it. Of course this problem is not limited to the field of AI. Jacques Monod wrote: \"A curious aspect of the theory of evolution is that everybody thinks he understands it.\" (Monod 1974.) My father, a physicist, complained about people making up their own theories of physics; he wanted to know why people did not make up their own theories of chemistry. (Answer: They do.) Nonetheless the problem seems to be unusually acute in Artificial Intelligence. The field of AI has a reputation for making huge promises and then failing to deliver on them. Most observers conclude that AI is hard; as indeed it is. But the embarrassment does not stem from the difficulty. It is difficult to build a star from hydrogen, but the field of stellar astronomy does not have a terrible reputation for promising to build stars and then failing. The critical inference is not that AI is hard, but that, for some reason, it is very easy for people to think they know far more about Artificial Intelligence than they actually do.",
"title": ""
}
] | [
{
"docid": "3180f7bd813bcd64065780bc9448dc12",
"text": "This paper reports on email classification and filtering, more specifically on spam versus ham and phishing versus spam classification, based on content features. We test the validity of several novel statistical feature extraction methods. The methods rely on dimensionality reduction in order to retain the most informative and discriminative features. We successfully test our methods under two schemas. The first one is a classic classification scenario using a 10-fold cross-validation technique for several corpora, including four ground truth standard corpora: Ling-Spam, SpamAssassin, PU1, and a subset of the TREC 2007 spam corpus, and one proprietary corpus. In the second schema, we test the anticipatory properties of our extracted features and classification models with two proprietary datasets, formed by phishing and spam emails sorted by date, and with the public TREC 2007 spam corpus. The contributions of our work are an exhaustive comparison of several feature selection and extraction methods in the frame of email classification on different benchmarking corpora, and the evidence that especially the technique of biased discriminant analysis offers better discriminative features for the classification, gives stable classification results notwithstanding the amount of features chosen, and robustly retains their discriminative value over time and data setups. These findings are especially useful in a commercial setting, where short profile rules are built based on a limited number of features for filtering emails.",
"title": ""
},
{
"docid": "ec58915a7fd321bcebc748a369153509",
"text": "For wireless charging of electric vehicle (EV) batteries, high-frequency magnetic fields are generated from magnetically coupled coils. The large air-gap between two coils may cause high leakage of magnetic fields and it may also lower the power transfer efficiency (PTE). For the first time, in this paper, we propose a new set of coil design formulas for high-efficiency and low harmonic currents and a new design procedure for low leakage of magnetic fields for high-power wireless power transfer (WPT) system. Based on the proposed design procedure, a pair of magnetically coupled coils with magnetic field shielding for a 1-kW-class golf-cart WPT system is optimized via finite-element simulation and the proposed design formulas. We built a 1-kW-class wireless EV charging system for practical measurements of the PTE, the magnetic field strength around the golf cart, and voltage/current spectrums. The fabricated system has achieved a PTE of 96% at the operating frequency of 20.15 kHz with a 156-mm air gap between the coils. At the same time, the highest magnetic field strength measured around the golf cart is 19.8 mG, which is far below the relevant electromagnetic field safety guidelines (ICNIRP 1998/2010). In addition, the third harmonic component of the measured magnetic field is 39 dB lower than the fundamental component. These practical measurement results prove the effectiveness of the proposed coil design formulas and procedure of a WPT system for high-efficiency and low magnetic field leakage.",
"title": ""
},
{
"docid": "fa04e8e2e263d18ee821c7aa6ebed08e",
"text": "In this study we examined the effect of physical activity based labels on the calorie content of meals selected from a sample fast food menu. Using a web-based survey, participants were randomly assigned to one of four menus which differed only in their labeling schemes (n=802): (1) a menu with no nutritional information, (2) a menu with calorie information, (3) a menu with calorie information and minutes to walk to burn those calories, or (4) a menu with calorie information and miles to walk to burn those calories. There was a significant difference in the mean number of calories ordered based on menu type (p=0.02), with an average of 1020 calories ordered from a menu with no nutritional information, 927 calories ordered from a menu with only calorie information, 916 calories ordered from a menu with both calorie information and minutes to walk to burn those calories, and 826 calories ordered from the menu with calorie information and the number of miles to walk to burn those calories. The menu with calories and the number of miles to walk to burn those calories appeared the most effective in influencing the selection of lower calorie meals (p=0.0007) when compared to the menu with no nutritional information provided. The majority of participants (82%) reported a preference for physical activity based menu labels over labels with calorie information alone and no nutritional information. Whether these labels are effective in real-life scenarios remains to be tested.",
"title": ""
},
{
"docid": "1571e5a85d837a0878362473aadce808",
"text": "Image or video exchange over the Internet of Things (IoT) is a requirement in diverse applications, including smart health care, smart structures, and smart transportations. This paper presents a modular and extensible quadrotor architecture and its specific prototyping for automatic tracking applications. The architecture is extensible and based on off-the-shelf components for easy system prototyping. A target tracking and acquisition application is presented in detail to demonstrate the power and flexibility of the proposed design. Complete design details of the platform are also presented. The designed module implements the basic proportional-integral-derivative control and a custom target acquisition algorithm. Details of the sliding-window-based algorithm are also presented. This algorithm performs $20\\times $ faster than comparable approaches in OpenCV with equal accuracy. Additional modules can be integrated for more complex applications, such as search-and-rescue, automatic object tracking, and traffic congestion analysis. A hardware architecture for the newly introduced Better Portable Graphics (BPG) compression algorithm is also introduced in the framework of the extensible quadrotor architecture. Since its introduction in 1987, the Joint Photographic Experts Group (JPEG) graphics format has been the de facto choice for image compression. However, the new compression technique BPG outperforms the JPEG in terms of compression quality and size of the compressed file. The objective is to present a hardware architecture for enhanced real-time compression of the image. Finally, a prototyping platform of a hardware architecture for a secure digital camera (SDC) integrated with the secure BPG (SBPG) compression algorithm is presented. The proposed architecture is suitable for high-performance imaging in the IoT and is prototyped in Simulink. To the best of our knowledge, this is the first ever proposed hardware architecture for SBPG compression integrated with an SDC.",
"title": ""
},
{
"docid": "3790949d99130d7222c689ac9295d931",
"text": "Enterprise resource planning (ERP) systems have been used in integrating information and accelerating its distribution across functions and departments with the aim to increase organizations’ operational performance. Thus, it is worth measuring ERP system performance based on its impact to critical performance of an organization: this requires a systematic method that bridges ERP performance measurement and key organizational performance. The hierarchical balanced scorecard (HBSC) model with respect to multiple criteria decision-making is such a systematic approach to ERP performance measurement. An ERP evaluation framework that integrates the balanced scorecard dimensions, linguistic variables, and non-additive fuzzy integral provides an objective approach to measuring both the performance level of the ERP system and its contribution to the strategic objectives of high-tech firms. Taking Taiwan’s high-tech firms as an example, this study demonstrates the effectiveness of this integrated approach to measure the performance of ERP systems at the post-implementation stage under evaluators’ subjective, uncertainty, and vagueness judgments. ã 2015 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "a271371ba28be10b67e31ecca6f3aa88",
"text": "The toxicity and repellency of the bioactive chemicals of clove (Syzygium aromaticum) powder, eugenol, eugenol acetate, and beta-caryophyllene were evaluated against workers of the red imported fire ant, Solenopsis invicta Buren. Clove powder applied at 3 and 12 mg/cm2 provided 100% ant mortality within 6 h, and repelled 99% within 3 h. Eugenol was the fastest acting compound against red imported fire ant compared with eugenol acetate, beta-caryophyllene, and clove oil. The LT50 values inclined exponentially with the increase in the application rate of the chemical compounds tested. However, repellency did not increase with the increase in the application rate of the chemical compounds tested, but did with the increase in exposure time. Eugenol, eugenol acetate, as well as beta-caryophyllene and clove oil may provide another tool for red imported fire ant integrated pest management, particularly in situations where conventional insecticides are inappropriate.",
"title": ""
},
{
"docid": "09b77e632fb0e5dfd7702905e51fc706",
"text": "Most natural videos contain numerous events. For example, in a video of a “man playing a piano”, the video might also contain “another man dancing” or “a crowd clapping”. We introduce the task of dense-captioning events, which involves both detecting and describing events in a video. We propose a new model that is able to identify all events in a single pass of the video while simultaneously describing the detected events with natural language. Our model introduces a variant of an existing proposal module that is designed to capture both short as well as long events that span minutes. To capture the dependencies between the events in a video, our model introduces a new captioning module that uses contextual information from past and future events to jointly describe all events. We also introduce ActivityNet Captions, a large-scale benchmark for dense-captioning events. ActivityNet Captions contains 20k videos amounting to 849 video hours with 100k total descriptions, each with its unique start and end time. Finally, we report performances of our model for dense-captioning events, video retrieval and localization.",
"title": ""
},
{
"docid": "c200b79726ca0b441bc1311975bf0008",
"text": "This article introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90nm to 22nm and beyond. At microarchitectural level, McPAT includes models for the fundamental components of a complete chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, and integrated system components such as memory controllers and Ethernet controllers. At circuit level, McPAT supports detailed modeling of critical-path timing, area, and power. At technology level, McPAT models timing, area, and power for the device types forecast in the ITRS roadmap. McPAT has a flexible XML interface to facilitate its use with many performance simulators.\n Combined with a performance simulator, McPAT enables architects to accurately quantify the cost of new ideas and assess trade-offs of different architectures using new metrics such as Energy-Delay-Area2 Product (EDA2P) and Energy-Delay-Area Product (EDAP). This article explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting trade-offs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies from cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks for manycore designs at the 22nm technology shows that 8-core clustering gives the best energy-delay product, whereas when die area is taken into account, 4-core clustering gives the best EDA2P and EDAP.",
"title": ""
},
{
"docid": "00223ccf5b5aebfc23c76afb7192e3f7",
"text": "Computer Security System / technology have passed through several changes. The trends have been from what you know (e.g. password, PIN, etc) to what you have (ATM card, Driving License, etc) and presently to who you are (Biometry) or combinations of two or more of the trios. This technology (biometry) has come to solve the problems identified with knowledge-based and token-based authentication systems. It is possible to forget your password and what you have can as well be stolen. The security of determining who you are is referred to as BIOMETRIC. Biometric, in a nutshell, is the use of your body as password. This paper explores the various methods of biometric identification that have evolved over the years and the features used for each modality.",
"title": ""
},
{
"docid": "6e130fa88972e0e33e23beb14c522900",
"text": "Myricetin is a flavonoid that is abundant in fruits and vegetables and has protective effects against cancer and diabetes. However, the mechanism of action of myricetin against gastric cancer (GC) is not fully understood. We researched myricetin on the proliferation, apoptosis, and cell cycle in GC HGC-27 and SGC7901 cells, to explore the underlying mechanism of action. Cell Counting Kit (CCK)-8 assay, Western blotting, cell cycle analysis, and apoptosis assay were used to evaluate the effects of myricetin on cell proliferation, apoptosis, and the cell cycle. To analyze the binding properties of ribosomal S6 kinase 2 (RSK2) with myricetin, surface plasmon resonance (SPR) analysis was performed. CCK8 assay showed that myricetin inhibited GC cell proliferation. Flow cytometry analysis showed that myricetin induces apoptosis and cell cycle arrest in GC cells. Western blotting indicated that myricetin influenced apoptosis and cell cycle arrest of GC cells by regulating related proteins. SPR analysis showed strong binding affinity of RSK2 and myricetin. Myricetin bound to RSK2, leading to increased expression of Mad1, and contributed to inhibition of HGC-27 and SGC7901 cell proliferation. Our results suggest the therapeutic potential of myricetin in GC.",
"title": ""
},
{
"docid": "066d3a381ffdb2492230bee14be56710",
"text": "The third generation partnership project released its first 5G security specifications in March 2018. This paper reviews the proposed security architecture and its main requirements and procedures and evaluates them in the context of known and new protocol exploits. Although security has been improved from previous generations, our analysis identifies potentially unrealistic 5G system assumptions and protocol edge cases that can render 5G communication systems vulnerable to adversarial attacks. For example, null encryption and null authentication are still supported and can be used in valid system configurations. With no clear proposal to tackle pre-authentication message-based exploits, mobile devices continue to implicitly trust any serving network, which may or may not enforce a number of optional security features, or which may not be legitimate. Moreover, several critical security and key management functions are considered beyond the scope of the specifications. The comparison with known 4G long-term evolution protocol exploits reveals that the 5G security specifications, as of Release 15, Version 1.0.0, do not fully address the user privacy and network availability challenges.",
"title": ""
},
{
"docid": "0de38657b70acdaead3226d6ebd2f7ff",
"text": "We present the results of a parametric study devised to allow us to optimally design a patch fed planar dielectric slab waveguide extended hemi-elliptical lens antenna. The lens antenna, 11lambda times 13lambda in the lens plane and 0.6lambda thick, constructed from polystyrene and weighing only 90 g is fabricated and characterized at 28.5 GHz for both single and multiple operating configurations. The lens when optimized for single beam operation achieves 18.5 dB measured gain (85% aperture efficiency), 40deg and 4.1deg half power beam width for E plane and H plane respectively and 10% impedance bandwidth for -10 dB return loss. While for optimized for multiple beam operation it is shown that the lens can accommodate up to 9 feeds and that beam symmetry can be maintained over a scan angle of 27deg with a gain of 14.9 to 17.7 dB, and first side lobe levels of -11 to -7 dB respectively. Over the frequency range 26 to 30 GHz the lens maintains a worst case return loss of -10 dB and port to port feed isolation of better than -25 dB. Further it is shown that residual leaked energy from the structure is less than -48 dBm at 1 cm, thus making a low profile enclosure possible. We also show that by simultaneous excitation of two adjacent ports we can obtain difference patterns with null depths of up to -36 dB.",
"title": ""
},
{
"docid": "3a68175de0dbc4c89b66678976898d1f",
"text": "The rapid accumulation of data in social media (in million and billion scales) has imposed great challenges in information extraction, knowledge discovery, and data mining, and texts bearing sentiment and opinions are one of the major categories of user generated data in social media. Sentiment analysis is the main technology to quickly capture what people think from these text data, and is a research direction with immediate practical value in big data era. Learning such techniques will allow data miners to perform advanced mining tasks considering real sentiment and opinions expressed by users in additional to the statistics calculated from the physical actions (such as viewing or purchasing records) user perform, which facilitates the development of real-world applications. However, the situation that most tools are limited to the English language might stop academic or industrial people from doing research or products which cover a wider scope of data, retrieving information from people who speak different languages, or developing applications for worldwide users. More specifically, sentiment analysis determines the polarities and strength of the sentiment-bearing expressions, and it has been an important and attractive research area. In the past decade, resources and tools have been developed for sentiment analysis in order to provide subsequent vital applications, such as product reviews, reputation management, call center robots, automatic public survey, etc. However, most of these resources are for the English language. Being the key to the understanding of business and government issues, sentiment analysis resources and tools are required for other major languages, e.g., Chinese. In this tutorial, audience can learn the skills for retrieving sentiment from texts in another major language, Chinese, to overcome this obstacle. The goal of this tutorial is to introduce the proposed sentiment analysis technologies and datasets in the literature, and give the audience the opportunities to use resources and tools to process Chinese texts from the very basic preprocessing, i.e., word segmentation and part of speech tagging, to sentiment analysis, i.e., applying sentiment dictionaries and obtaining sentiment scores, through step-by-step instructions and a hand-on practice. The basic processing tools are from CKIP Participants can download these resources, use them and solve the problems they encounter in this tutorial. This tutorial will begin from some background knowledge of sentiment analysis, such as how sentiment are categorized, where to find available corpora and which models are commonly applied, especially for the Chinese language. Then a set of basic Chinese text processing tools for word segmentation, tagging and parsing will be introduced for the preparation of mining sentiment and opinions. After bringing the idea of how to pre-process the Chinese language to the audience, I will describe our work on compositional Chinese sentiment analysis from words to sentences, and an application on social media text (Facebook) as an example. All our involved and recently developed related resources, including Chinese Morphological Dataset, Augmented NTU Sentiment Dictionary (ANTUSD), E-hownet with sentiment information, Chinese Opinion Treebank, and the CopeOpi Sentiment Scorer, will also be introduced and distributed in this tutorial. The tutorial will end by a hands-on session of how to use these materials and tools to process Chinese sentiment.",
"title": ""
},
{
"docid": "7ee557666c7a2ace5b9bab2b32e8b406",
"text": "In order to realize a massive MIMO concept, small size and low power consumption over the wide-band frequency range are challenges for RF frontends module. This paper describes a highly integrated RF frontend module for high SHF wide-band massive MIMO in 5G. The RF frontend module is designed with 0.15 µm GaAs process and assembled on 5 × 5 mm2 QFN package. By employing Doherty amplifier configuration using a parasitic output capacitance neutralization technique, it achieves low power consumption over wide frequency band. The integrated frontend architecture is an attractive solution for massive MIMO systems in 5G, and will contribute to 5G deployment. Additionally, the use of digital techniques is attractive future option beyond 4G (toward 5G), and amplifiers with them potentially lead to reduction of power consumption. Some prototyped switching-mode amplifiers are also presented.",
"title": ""
},
{
"docid": "f226d14c95fca32dc55b554619ec8691",
"text": "Motivation to learn is affected by a student’s self-efficacy, goal orientation, locus of control and perceived task difficulty. In the classroom, teachers know how to motivate their students and how to exploit this knowledge to adapt or optimize their instruction when a student shows signs of demotivation. In on-line learning environments it is much more difficult to assess the level of motivation of the student and to have adaptive intervention strategies and rules of application to help prevent attrition. We have developed MotSaRT – a motivational strategies recommender tool to support on-line teachers in motivating learners. The design is informed by the Social Cognitive Theory constructs outlined above and a survey on motivation intervention strategies carried out with sixty on-line teachers. The survey results were analysed using a data mining algorithm (J48 decision trees) which resulted in a set of decision rules for recommending motivational strategies. The recommender tool, MotSaRT, has been developed based on these decision rules. Its functionality enables the teacher to specify the learner’s motivation profile. MotSaRT then recommends the most likely intervention strategies to increase motivation. A pilot study is currently being carried out using the MotSaRT tool.",
"title": ""
},
{
"docid": "d9edc458cee2261b78214132c2e4b811",
"text": "Since its discovery, the asymmetric Fano resonance has been a characteristic feature of interacting quantum systems. The shape of this resonance is distinctively different from that of conventional symmetric resonance curves. Recently, the Fano resonance has been found in plasmonic nanoparticles, photonic crystals, and electromagnetic metamaterials. The steep dispersion of the Fano resonance profile promises applications in sensors, lasing, switching, and nonlinear and slow-light devices.",
"title": ""
},
{
"docid": "7a7d43299511f5852080b4a5989c4b0c",
"text": "Precision phenotyping, especially the use of image analysis, allows researchers to gain information on plant properties and plant health. Aerial image detection with unmanned aerial vehicles (UAVs) provides new opportunities in precision farming and precision phenotyping. Precision farming has created a critical need for spatial data on plant density. The plant number reflects not only the final field emergence but also allows a more precise assessment of the final yield parameters. The aim of this work is to advance UAV use and image analysis as a possible highthroughput phenotyping technique. In this study, four different maize cultivars were planted in plots with different seeding systems (in rows and equidistantly spaced) and different nitrogen fertilization levels (applied at 50, 150 and 250 kg N/ha). The experimental field, encompassing 96 plots, was overflown at a 50-m height with an octocopter equipped with a 10-megapixel camera taking a picture every 5 s. Images were recorded between BBCH 13–15 (it is a scale to identify the phenological development stage of a plant which is here the 3to 5-leaves development stage) when the color of young leaves differs from older leaves. Close correlations up to R2 = 0.89 were found between in situ and image-based counted plants adapting a decorrelation stretch contrast enhancement procedure, which enhanced color differences in the images. On average, the error between visually and digitally counted plants was ≤5%. Ground cover, as determined by analyzing green pixels, ranged between 76% and 83% at these stages. However, the correlation between ground cover and digitally counted plants was very low. The presence of weeds and blurry effects on the images represent possible errors in counting plants. In conclusion, the final field emergence of maize can rapidly be assessed and allows more precise assessment of the final yield parameters. The use of UAVs and image processing has the potential to optimize farm management and to support field experimentation for agronomic and breeding purposes.",
"title": ""
},
{
"docid": "dad7dbbb31f0d9d6268bfdc8303d1c9c",
"text": "This letter proposes a reconfigurable microstrip patch antenna with polarization states being switched among linear polarization (LP), left-hand (LH) and right-hand (RH) circular polarizations (CP). The CP waves are excited by two perturbation elements of loop slots in the ground plane. A p-i-n diode is placed on every slot to alter the current direction, which determines the polarization state. The influences of the slots and p-i-n diodes on antenna performance are minimized because the slots and diodes are not on the patch. The simulated and measured results verified the effectiveness of the proposed antenna configuration. The experimental bandwidths of the -10-dB reflection coefficient for LHCP and RHCP are about 60 MHz, while for LP is about 30 MHz. The bandwidths of the 3-dB axial ratio for both CP states are 20 MHz with best value of 0.5 dB at the center frequency on the broadside direction. Gains for two CP operations are 6.4 dB, and that for the LP one is 5.83 dB. This reconfigurable patch antenna with agile polarization has good performance and concise structure, which can be used for 2.4 GHz wireless communication systems.",
"title": ""
},
{
"docid": "b51e706aacdf95819e5f6747f7dd6b12",
"text": "The goal of this research is to develop a functional adhesive robot skin with micro suction cups, which realizes two new functions: adaptive adhesion to rough/curved surfaces and anisotropic adhesion. Both functions are realized by integration of asymmetric micro cups. This skin can be applied to various robot mechanisms such as robot hands, wall-climbing robot feet and so on as a kind of robot skins. This paper especially reports the concept of this adhesive robot skin and its fundamental characteristics. The experiments show the developed skin realizes novel characteristics, high adhesion even on rough surface and anisotropic adhesion.",
"title": ""
},
{
"docid": "671bcd8c52fd6ad3cb2806ffa0cedfda",
"text": "In this paper we present a class of soft-robotic systems with superior load bearing capacity and expanded degrees of freedom. Spatial parallel soft robotic systems utilize spatial arrangement of soft actuators in a manner similar to parallel kinematic machines. In this paper we demonstrate that such an arrangement of soft actuators enhances stiffness and yield dramatic motions. The current work utilizes tri-chamber actuators made from silicone rubber to demonstrate the viability of the concept.",
"title": ""
}
] | scidocsrr |
a37d97d7daf1e1719341b7c4d52f5a89 | Ring Resonator Bandpass Filter With Switchable Bandwidth Using Stepped-Impedance Stubs | [
{
"docid": "076118903a99feababc0b4edcbf5686b",
"text": "A reconfigurable bandpass filter is demonstrated using a dual-mode microstrip triangular patch resonator. The proposed circuit uses a single switch to control its fractional bandwidth while keeping a fixed center frequency of 10 GHz. The switching mechanism is accomplished by a p-i-n diode that connects and isolates a tuning stub from the rest of the circuit. The on and off state of the diode effectively controls the resonance frequencies of the two poles, produced by the dual-mode behavior of the resonator. The filter achieves a measured 1.9:1 tunable passband ratio. The circuit presented here represents the first single switch bandwidth reconfigurable filter that requires no size compensation to maintain a fixed center frequency.",
"title": ""
}
] | [
{
"docid": "8af3b1f6b06ff91dee4473bfb50c420d",
"text": "Crowdsensing technologies are rapidly evolving and are expected to be utilized on commercial applications such as location-based services. Crowdsensing collects sensory data from daily activities of users without burdening users, and the data size is expected to grow into a population scale. However, quality of service is difficult to ensure for commercial use. Incentive design in crowdsensing with monetary rewards or gamifications is, therefore, attracting attention for motivating participants to collect data to increase data quantity. In contrast, we propose Steered Crowdsensing, which controls the incentives of users by using the game elements on location-based services for directly improving the quality of service rather than data size. For a feasibility study of steered crowdsensing, we deployed a crowdsensing system focusing on application scenarios of building processes on wireless indoor localization systems. In the results, steered crowdsensing realized deployments faster than non-steered crowdsensing while having half as many data.",
"title": ""
},
{
"docid": "c6642eb97aafc069056dcb42d7bf5b71",
"text": "An improved technique for electroejaculation is described, with the results of applying it to 84 men with spinal injuries and five men with ejaculatory failure from other causes. Semen was obtained from most patients, but good semen from very few. Only one pregnancy has yet been achieved. The technique has diagnostic applications.",
"title": ""
},
{
"docid": "204ae059e0856f8531b67b707ee3f068",
"text": "In highly regulated industries such as aerospace, the introduction of new quality standard can provide the framework for developing and formulating innovative novel business models which become the foundation to build a competitive, customer-centric enterprise. A number of enterprise modeling methods have been developed in recent years mainly to offer support for enterprise design and help specify systems requirements and solutions. However, those methods are inefficient in providing sufficient support for quality systems links and assessment. The implementation parts of the processes linked to the standards remain unclear and ambiguous for the practitioners as a result of new standards introduction. This paper proposed to integrate new revision of AS/EN9100 aerospace quality elements through systematic integration approach which can help the enterprises in business re-engineering process. The assessment capability model is also presented to identify impacts on the existing system as a result of introducing new standards.",
"title": ""
},
{
"docid": "d2a30b640306c878297f656e308c1279",
"text": "We consolidate an unorganized point cloud with noise, outliers, non-uniformities, and in particular interference between close-by surface sheets as a preprocess to surface generation, focusing on reliable normal estimation. Our algorithm includes two new developments. First, a weighted locally optimal projection operator produces a set of denoised, outlier-free and evenly distributed particles over the original dense point cloud, so as to improve the reliability of local PCA for initial estimate of normals. Next, an iterative framework for robust normal estimation is introduced, where a priority-driven normal propagation scheme based on a new priority measure and an orientation-aware PCA work complementarily and iteratively to consolidate particle normals. The priority setting is reinforced with front stopping at thin surface features and normal flipping to enable robust handling of the close-by surface sheet problem. We demonstrate how a point cloud that is well-consolidated by our method steers conventional surface generation schemes towards a proper interpretation of the input data.",
"title": ""
},
{
"docid": "a75a1d34546faa135f74aa5e6142de05",
"text": "Boosting is a popular way to derive powerful learners from simpler hypothesis classes. Following previous work (Mason et al., 1999; Friedman, 2000) on general boosting frameworks, we analyze gradient-based descent algorithms for boosting with respect to any convex objective and introduce a new measure of weak learner performance into this setting which generalizes existing work. We present the weak to strong learning guarantees for the existing gradient boosting work for strongly-smooth, strongly-convex objectives under this new measure of performance, and also demonstrate that this work fails for non-smooth objectives. To address this issue, we present new algorithms which extend this boosting approach to arbitrary convex loss functions and give corresponding weak to strong convergence results. In addition, we demonstrate experimental results that support our analysis and demonstrate the need for the new algorithms we present.",
"title": ""
},
{
"docid": "99bd908e217eb9f56c40abd35839e9b3",
"text": "How does the physical structure of an arithmetic expression affect the computational processes engaged in by reasoners? In handwritten arithmetic expressions containing both multiplications and additions, terms that are multiplied are often placed physically closer together than terms that are added. Three experiments evaluate the role such physical factors play in how reasoners construct solutions to simple compound arithmetic expressions (such as \"2 + 3 × 4\"). Two kinds of influence are found: First, reasoners incorporate the physical size of the expression into numerical responses, tending to give larger responses to more widely spaced problems. Second, reasoners use spatial information as a cue to hierarchical expression structure: More narrowly spaced subproblems within an expression tend to be solved first and tend to be multiplied. Although spatial relationships besides order are entirely formally irrelevant to expression semantics, reasoners systematically use these relationships to support their success with various formal properties.",
"title": ""
},
{
"docid": "bd4234dc626b4c56d0170948ac5d5de3",
"text": "ISSN: 1049-4820 (Print) 1744-5191 (Online) Journal homepage: http://www.tandfonline.com/loi/nile20 Gamification and student motivation Patrick Buckley & Elaine Doyle To cite this article: Patrick Buckley & Elaine Doyle (2016) Gamification and student motivation, Interactive Learning Environments, 24:6, 1162-1175, DOI: 10.1080/10494820.2014.964263 To link to this article: https://doi.org/10.1080/10494820.2014.964263",
"title": ""
},
{
"docid": "d7485845d35bdc7e05de89e63a21fc71",
"text": "Traditional access control models are often found to be inadequate for digital libraries. This is because the user population for digital libraries is very dynamic and not completely known in advance. In addition, the objects stored in a digital library are characterized by fine-grained behavioral interfaces and highly-contextualized access restrictions that require a user’s access privileges to be updated dynamically. These motivate us to propose a trust-based authorization model for digital libraries. Access privileges can be associated with both objects and content classes. Trust levels associated with these specify the minimum acceptable level of trust needed of a user to allow access to the objects. We use a vector trust model to calculate the system’s trust about a user. The model uses a number of different types of information about a user, for example, prior usage history, credentials, recommendations etc., to calculate the trust level in a dynamic manner and thus achieve a fine-grained access control.",
"title": ""
},
{
"docid": "552bff56ebd53cf07748bea4175178db",
"text": "In this paper we discuss how conventional business contracts can be converted into smart contracts—their electronic equivalents that can be used to systematically monitor and enforce contractual rights, obligations and prohibitions at run time. We explain that emerging blockchain technology is certainly a promising platform for implementing smart contracts but argue that there is a large class of applications, where blockchain is inadequate due to performance, scalability, and consistency requirements, and also due to language expressiveness and cost issues that are hard to solve. We explain that in some situations a centralised approach that does not rely on blockchain is a better alternative due to its simplicity, scalability, and performance. We suggest that in applications where decentralisation and transparency are essential, developers can advantageously combine the two approaches into hybrid solutions where some operations are enforced by enforcers deployed on–blockchains and the rest by enforcers deployed on trusted third parties.",
"title": ""
},
{
"docid": "3b9c3d39c4e023a6960822516261a70c",
"text": "Power electronic converters enable wind turbines, operating at variable speed, to generate electricity more efficiently. Among variable speed operating turbine generators, permanent magnetic synchronous generator (PMSG) has got more attentions due to low cost and maintenance requirements. In addition, the converter in a wind turbine with PMSG decouples the turbine from the power grid, which favors them for grid codes. In this paper, the performance of back-to-back (B2B) converter control of a wind turbine system with PMSG is investigated on a faulty grid. The switching strategy of the grid side converter is designed to improve voltage drop caused by the fault in the grid while maximum available active power of wind turbine system is injected to the grid and the DC link voltage in the converter is regulated. The methodology of the converter control is elaborated in details and its performance on a sample faulty grid is assessed through simulation.",
"title": ""
},
{
"docid": "254f2ef4608ea3c959e049073ad063f8",
"text": "Recently, the long-term evolution (LTE) is considered as one of the most promising 4th generation (4G) mobile standards to increase the capacity and speed of mobile handset networks [1]. In order to realize the LTE wireless communication system, the diversity and multiple-input multiple-output (MIMO) systems have been introduced [2]. In a MIMO mobile user terminal such as handset or USB dongle, at least two uncorrelated antennas should be placed within an extremely restricted space. This task becomes especially difficult when a MIMO planar antenna is designed for LTE band 13 (the corresponding wavelength is 390 mm). Due to the limited space available for antenna elements, the antennas are strongly coupled with each other and have narrow bandwidth.",
"title": ""
},
{
"docid": "03c74ae78bfe862499c4cb1e18a58ae7",
"text": "Age-associated disease and disability are placing a growing burden on society. However, ageing does not affect people uniformly. Hence, markers of the underlying biological ageing process are needed to help identify people at increased risk of age-associated physical and cognitive impairments and ultimately, death. Here, we present such a biomarker, ‘brain-predicted age’, derived using structural neuroimaging. Brain-predicted age was calculated using machine-learning analysis, trained on neuroimaging data from a large healthy reference sample (N=2001), then tested in the Lothian Birth Cohort 1936 (N=669), to determine relationships with age-associated functional measures and mortality. Having a brain-predicted age indicative of an older-appearing brain was associated with: weaker grip strength, poorer lung function, slower walking speed, lower fluid intelligence, higher allostatic load and increased mortality risk. Furthermore, while combining brain-predicted age with grey matter and cerebrospinal fluid volumes (themselves strong predictors) not did improve mortality risk prediction, the combination of brain-predicted age and DNA-methylation-predicted age did. This indicates that neuroimaging and epigenetics measures of ageing can provide complementary data regarding health outcomes. Our study introduces a clinically-relevant neuroimaging ageing biomarker and demonstrates that combining distinct measurements of biological ageing further helps to determine risk of age-related deterioration and death.",
"title": ""
},
{
"docid": "12cd45e8832650d620695d4f5680148f",
"text": "OBJECTIVE\nCurrent systems to evaluate outcomes from tissue-engineered cartilage (TEC) are sub-optimal. The main purpose of our study was to demonstrate the use of second harmonic generation (SHG) microscopy as a novel quantitative approach to assess collagen deposition in laboratory made cartilage constructs.\n\n\nMETHODS\nScaffold-free cartilage constructs were obtained by condensation of in vitro expanded Hoffa's fat pad derived stromal cells (HFPSCs), incubated in the presence or absence of chondrogenic growth factors (GF) during a period of 21 d. Cartilage-like features in constructs were assessed by Alcian blue staining, transmission electron microscopy (TEM), SHG and two-photon excited fluorescence microscopy. A new scoring system, using second harmonic generation microscopy (SHGM) index for collagen density and distribution, was adapted to the existing \"Bern score\" in order to evaluate in vitro TEC.\n\n\nRESULTS\nSpheroids with GF gave a relative high Bern score value due to appropriate cell morphology, cell density, tissue-like features and proteoglycan content, whereas spheroids without GF did not. However, both TEM and SHGM revealed striking differences between the collagen framework in the spheroids and native cartilage. Spheroids required a four-fold increase in laser power to visualize the collagen matrix by SHGM compared to native cartilage. Additionally, collagen distribution, determined as the area of tissue generating SHG signal, was higher in spheroids with GF than without GF, but lower than in native cartilage.\n\n\nCONCLUSION\nSHG represents a reliable quantitative approach to assess collagen deposition in laboratory engineered cartilage, and may be applied to improve currently established scoring systems.",
"title": ""
},
{
"docid": "efde28bc545de68dbb44f85b198d85ff",
"text": "Blockchain technology is regarded as highly disruptive, but there is a lack of formalization and standardization of terminology. Not only because there are several (sometimes propriety) implementation platforms, but also because the academic literature so far is predominantly written from either a purely technical or an economic application perspective. The result of the confusion is an offspring of blockchain solutions, types, roadmaps and interpretations. For blockchain to be accepted as a technology standard in established industries, it is pivotal that ordinary internet users and business executives have a basic yet fundamental understanding of the workings and impact of blockchain. This conceptual paper provides a theoretical contribution and guidance on what blockchain actually is by taking an ontological approach. Enterprise Ontology is used to make a clear distinction between the datalogical, infological and essential level of blockchain transactions and smart contracts.",
"title": ""
},
{
"docid": "e83c2fb41895329f8ce6d57ec6908f6a",
"text": "This paper studies how to conduct efficiency assessment using data envelopment analysis (DEA) in interval and/or fuzzy input–output environments. A new pair of interval DEA models is constructed on the basis of interval arithmetic, which differs from the existing DEA models handling interval data in that the former is a linear CCR model without the need of extra variable alternations and uses a fixed and unified production frontier (i.e. the same constraint set) to measure the efficiencies of decision-making units (DMUs) with interval input and output data, while the latter is usually a nonlinear optimization problem with the need of extra variable alternations or scale transformations and utilizes variable production frontiers (i.e. different constraint sets) to measure interval efficiencies. Ordinal preference information and fuzzy data are converted into interval data through the estimation of permissible intervals and -levelsets, respectively, and are incorporated into the interval DEA models. The proposed interval DEA models are developed for measuring the lower and upper bounds of the best relative efficiency of each DMU with interval input and output data, which are different from the interval formed by the worst and the best relative efficiencies of each DMU. A minimax regret-based approach (MRA) is introduced to compare and rank This research was supported by the project on Human Social Science of MOE, P.R.China under the Grant No. 01JA790082, the National Natural Science Foundation of China (NSFC) under the Grant No: 70271056, and also in part by the European Commission under the Grant No: IPS-2000-00030, the UK Engineering and Physical Science Research Council (EPSRC) under the Grant No: GR/R32413/01, and Fok Ying Tung Education Foundation under the Grant No: 71080. ∗ Corresponding author. Manchester Business School East, The University of Manchester, P.O. Box 88, Manchester M60 1QD, UK. Tel.: +44 161 2750788; fax: +44 161 2003505. E-mail addresses: msymwang@hotmail.com , Yingming.Wang@Manchester.ac.uk (Y.-M. Wang). 0165-0114/$ see front matter © 2005 Elsevier B.V. All rights reserved. doi:10.1016/j.fss.2004.12.011 348 Y.-M. Wang et al. / Fuzzy Sets and Systems 153 (2005) 347–370 the efficiency intervals of DMUs. Two numerical examples are provided to show the applications of the proposed interval DEA models and the preference ranking approach. © 2005 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "062c80d81f5abef7d2a138f151da8729",
"text": "Removal of noise from an image is an extensively studied problem in image processing. Indeed, the recent advent of sophisticated and highly effective denoising algorithms lead some to believe that existing methods are touching the ceiling in terms of noise removal performance. Can we leverage this impressive achievement to treat other tasks in image processing? Recent work has answered this question positively, in the form of the Plug-and-Play Prior (P 3) method, showing that any inverse problem can be handled by sequentially applying image denoising steps. This relies heavily on the ADMM optimization technique in order to obtain this chained denoising interpretation. Is this the only way in which tasks in image processing can exploit the image denoising engine? In this paper we provide an alternative, more powerful and more flexible framework for achieving the same goal. As opposed to the P 3 method, we offer Regularization by Denoising (RED): using the denoising engine in defining the regularization of the inverse problem. We propose an explicit image-adaptive Laplacian-based regularization functional, making the overall objective functional clearer and better defined. With a complete flexibility to choose the iterative optimization procedure for minimizing the above functional, RED is capable of incorporating any image denoising algorithm, treat general inverse problems very effectively, and is guaranteed to converge to the globally optimal result. We test this approach and demonstrate state-of-the-art results in the image deblurring and super-resolution problems. keywords: Image Denoising Engine, Plug-and-Play Prior, Laplacian Regularization, Inverse Problems. ∗The Electrical Engineering Department, The Technion Israel Institute of Technology. ‡Google Research, Mountain View, California. 1 ar X iv :1 61 1. 02 86 2v 3 [ cs .C V ] 3 S ep 2 01 7",
"title": ""
},
{
"docid": "5acf7238bfeec18e2af314485b285aa3",
"text": "Convolutional neural networks (CNNs) have achieved remarkable success in various computer vision tasks, which are extremely powerful to deal with massive training data by using tens of millions of parameters. However, CNNs often cost significant memory and computation consumption, which prohibits their usage in resource-limited environments such as mobile or embedded devices. To address the above issues, the existing approaches typically focus on either accelerating the convolutional layers or compressing the fully-connected layers separatedly, without pursuing a joint optimum. In this paper, we overcome such a limitation by introducing a holistic CNN compression framework, termed LRDKT, which works throughout both convolutional and fully-connected layers. First, a low-rank decomposition (LRD) scheme is proposed to remove redundancies across both convolutional kernels and fullyconnected matrices, which has a novel closed-form solver to significantly improve the efficiency of the existing iterative optimization solvers. Second, a novel knowledge transfer (KT) based training scheme is introduced. To recover the accumulated accuracy loss and overcome the vanishing gradient, KT explicitly aligns outputs and intermediate responses from a teacher (original) network to its student (compressed) network. We have comprehensively analyzed and evaluated the compression and speedup ratios of the proposed model on MNIST and ILSVRC 2012 benchmarks. In both benchmarks, the proposed scheme has demonstrated superior performance gains over the state-of-the-art methods. We also demonstrate the proposed compression scheme for the task of transfer learning, including domain adaptation and object detection, which show exciting performance gains over the state-of-the-arts. Our source code and compressed models are available at https://github.com/ShaohuiLin/LRDKT.",
"title": ""
},
{
"docid": "5d6e1a7dfa5bc4cc1332d225342a01f7",
"text": "Hashing seeks an embedding of high-dimensional objects into a similarity-preserving low-dimensional Hamming space such that similar objects are indexed by binary codes with small Hamming distances. A variety of hashing methods have been developed, but most of them resort to a single view (representation) of data. However, objects are often described by multiple representations. For instance, images are described by a few different visual descriptors (such as SIFT, GIST, and HOG), so it is desirable to incorporate multiple representations into hashing, leading to multi-view hashing. In this paper we present a deep network for multi-view hashing, referred to as deep multi-view hashing, where each layer of hidden nodes is composed of view-specific and shared hidden nodes, in order to learn individual and shared hidden spaces from multiple views of data. Numerical experiments on image datasets demonstrate the useful behavior of our deep multi-view hashing (DMVH), compared to recently-proposed multi-modal deep network as well as existing shallow models of hashing.",
"title": ""
},
{
"docid": "c7070e41e6ac244ee7155fd88444cbaf",
"text": "A System-on-Chip (SoC) integrates multiple discrete components into a single chip, for example by placing CPU cores, network interfaces and I/O controllers on the same die. While SoCs have dominated high-end embedded products for over a decade, system-level integration is a relatively new trend in servers, and is driven by the opportunity to lower cost (by reducing the number of discrete parts) and power (by reducing the pin crossings from the cores to the I/O). Today, the mounting cost pressures in scale-out dat-acenters demand technologies that can decrease the Total Cost of Ownership (TCO). At the same time, the diminshing return of dedicating the increasing number of available transistors to more cores and caches is creating a stronger case for SoC-based servers.\n This paper examines system-level integration design options for the scale-out server market, specifically targeting datacenter-scale throughput computing workloads. We develop tools to model the area and power of a variety of discrete and integrated server configurations. We evaluate the benefits, trade-offs, and trends of system-level integration for warehouse-scale datacenter servers, and identify the key \"uncore\" components that reduce cost and power. We perform a comprehensive design space exploration at both SoC and datacenter level, identify the sweet spots, and highlight important scaling trends of performance, power, area, and cost from 45nm to 16nm. Our results show that system integration yields substantial benefits, enables novel aggregated configurations with a much higher compute density, and significantly reduces total chip area and dynamic power versus a discrete-component server.\n Finally, we use utilization traces and architectural profiles of real machines to evaluate the dynamic power consumption of typical scale-out cloud applications, and combine them in an overall TCO model. Our results show that, for example at 16nm, SoC-based servers can achieve more than a 26% TCO reduction at datacenter scale.",
"title": ""
},
{
"docid": "fb7e323f5d63161884858052b20e5dc3",
"text": "We propose a novel exoskeleton for grasping hand rehabilitation based on anthropometry. The proposed design has one degree of freedom (DOF) for each finger, yielding coordinated movement across the distal interphalangeal (DIP), proximal interphalangeal (PIP), and metacarpophalangeal (MCP) joints for each finger. The dimension of each segment is determined by hand anthropometric data obtained from measurements. Each finger is controlled by one motor to allow for independent movement of each finger, which is fundamental for hand dexterity. The design was guided by a proposed mechanical model (the exo-finger model) which was verified by simulation and validated by the movement recorded by prototype fingers. It is concluded that, in the present study, anthropometry-based structural design provides a framework for the development of exoskeletal robotic devices.",
"title": ""
}
] | scidocsrr |
cfb4be29303dce5949787eac62887389 | Fine-Grained Entity Type Classification by Jointly Learning Representations and Label Embeddings | [
{
"docid": "001574691c427d235bd6d86a98fb9227",
"text": "Entity linking systems link noun-phrase mentions in text to their corresponding Wikipedia articles. However, NLP applications would gain from the ability to detect and type all entities mentioned in text, including the long tail of entities not prominent enough to have their own Wikipedia articles. In this paper we show that once the Wikipedia entities mentioned in a corpus of textual assertions are linked, this can further enable the detection and fine-grained typing of the unlinkable entities. Our proposed method for detecting unlinkable entities achieves 24% greater accuracy than a Named Entity Recognition baseline, and our method for fine-grained typing is able to propagate over 1,000 types from linked Wikipedia entities to unlinkable entities. Detection and typing of unlinkable entities can increase yield for NLP applications such as typed question answering.",
"title": ""
},
{
"docid": "1eef5f6e1b6903c46b8ae625ac1beec4",
"text": "Distant supervision has become the leading method for training large-scale relation extractors, with nearly universal adoption in recent TAC knowledge-base population competitions. However, there are still many questions about the best way to learn such extractors. In this paper we investigate four orthogonal improvements: integrating named entity linking (NEL) and coreference resolution into argument identification for training and extraction, enforcing type constraints of linked arguments, and partitioning the model by relation type signature. We evaluate sentential extraction performance on two datasets: the popular set of NY Times articles partially annotated by Hoffmann et al. (2011) and a new dataset, called GORECO, that is comprehensively annotated for 48 common relations. We find that using NEL for argument identification boosts performance over the traditional approach (named entity recognition with string match), and there is further improvement from using argument types. Our best system boosts precision by 44% and recall by 70%.",
"title": ""
}
] | [
{
"docid": "bd518e3748cc5af8d7ef5b686d2f3c5b",
"text": "Authorship identification is the task of identifying the author of a given text from a set of suspects. The main concern of this task is to define an appropriate characterization of texts that captures the writing style of authors. Although deep learning was recently used in different natural language processing tasks, it has not been used in author identification (to the best of our knowledge). In this paper, deep learning is used for feature extraction of documents represented using variable size character n-grams. We apply A Stacked Denoising Auto-Encoder (SDAE) for extracting document features with different settings, and then a support vector machine classifier is used for classification. The results show that the proposed system outperforms its counterparts.",
"title": ""
},
{
"docid": "ffea50948eab00d47f603d24bcfc1bfd",
"text": "A statistical pattern-recognition technique was applied to the classification of musical instrument tones within a taxonomic hierarchy. Perceptually salient acoustic features— related to the physical properties of source excitation and resonance structure—were measured from the output of an auditory model (the log-lag correlogram) for 1023 isolated tones over the full pitch ranges of 15 orchestral instruments. The data set included examples from the string (bowed and plucked), woodwind (single, double, and air reed), and brass families. Using 70%/30% splits between training and test data, maximum a posteriori classifiers were constructed based on Gaussian models arrived at through Fisher multiplediscriminant analysis. The classifiers distinguished transient from continuant tones with approximately 99% correct performance. Instrument families were identified with approximately 90% performance, and individual instruments were identified with an overall success rate of approximately 70%. These preliminary analyses compare favorably with human performance on the same task and demonstrate the utility of the hierarchical approach to classification.",
"title": ""
},
{
"docid": "6a1e614288a7977b72c8037d9d7725fb",
"text": "We introduce the dense captioning task, which requires a computer vision system to both localize and describe salient regions in images in natural language. The dense captioning task generalizes object detection when the descriptions consist of a single word, and Image Captioning when one predicted region covers the full image. To address the localization and description task jointly we propose a Fully Convolutional Localization Network (FCLN) architecture that processes an image with a single, efficient forward pass, requires no external regions proposals, and can be trained end-to-end with a single round of optimization. The architecture is composed of a Convolutional Network, a novel dense localization layer, and Recurrent Neural Network language model that generates the label sequences. We evaluate our network on the Visual Genome dataset, which comprises 94,000 images and 4,100,000 region-grounded captions. We observe both speed and accuracy improvements over baselines based on current state of the art approaches in both generation and retrieval settings.",
"title": ""
},
{
"docid": "9e3d3783aa566b50a0e56c71703da32b",
"text": "Heterogeneous networks are widely used to model real-world semi-structured data. The key challenge of learning over such networks is the modeling of node similarity under both network structures and contents. To deal with network structures, most existing works assume a given or enumerable set of meta-paths and then leverage them for the computation of meta-path-based proximities or network embeddings. However, expert knowledge for given meta-paths is not always available, and as the length of considered meta-paths increases, the number of possible paths grows exponentially, which makes the path searching process very costly. On the other hand, while there are often rich contents around network nodes, they have hardly been leveraged to further improve similarity modeling. In this work, to properly model node similarity in content-rich heterogeneous networks, we propose to automatically discover useful paths for pairs of nodes under both structural and content information. To this end, we combine continuous reinforcement learning and deep content embedding into a novel semi-supervised joint learning framework. Specifically, the supervised reinforcement learning component explores useful paths between a small set of example similar pairs of nodes, while the unsupervised deep embedding component captures node contents and enables inductive learning on the whole network. The two components are jointly trained in a closed loop to mutually enhance each other. Extensive experiments on three real-world heterogeneous networks demonstrate the supreme advantages of our algorithm.",
"title": ""
},
{
"docid": "cfe92b50318c2df44ce169b3dc818211",
"text": "As illegal and unhealthy content on the Internet has gradually increased in recent years, there have been constant calls for Internet content regulation. But any regulation comes at a cost. Based on the principles of the cost-benefit theory, this article conducts an in-depth discussion on China’s current Internet content regulation, so as to reveal its latent patterns.",
"title": ""
},
{
"docid": "3c5a5ee0b855625c959593a08d6e1e24",
"text": "We present Scalable Host-tree Embeddings for Efficient Partitioning (Sheep), a distributed graph partitioning algorithm capable of handling graphs that far exceed main memory. Sheep produces high quality edge partitions an order of magnitude faster than both state of the art offline (e.g., METIS) and streaming partitioners (e.g., Fennel). Sheep’s partitions are independent of the input graph distribution, which means that graph elements can be assigned to processing nodes arbitrarily without affecting the partition quality. Sheep transforms the input graph into a strictly smaller elimination tree via a distributed map-reduce operation. By partitioning this tree, Sheep finds an upper-bounded communication volume partitioning of the original graph. We describe the Sheep algorithm and analyze its spacetime requirements, partition quality, and intuitive characteristics and limitations. We compare Sheep to contemporary partitioners and demonstrate that Sheep creates competitive partitions, scales to larger graphs, and has better runtime.",
"title": ""
},
{
"docid": "9a38fec3cddffd75a05950fdcfaebe1c",
"text": "This paper focuses on the use of magnetoresistive and sonar sensors for imminent collision detection in cars. The magnetoresistive sensors are used to measure the magnetic field from another vehicle in close proximity, to estimate relative position, velocity, and orientation of the vehicle from the measurements. First, an analytical formulation is developed for the planar variation of the magnetic field from a car as a function of 2-D position and orientation. While this relationship can be used to estimate position and orientation, a challenge is posed by the fact that the parameters in the analytical function vary with the type and model of the encountered car. Since the type of vehicle encountered is not known a priori, the parameters in the magnetic field function are unknown. The use of both sonar and magnetoresistive sensors and an adaptive estimator is shown to address this problem. While the sonar sensors do not work at very small intervehicle distance and have low refresh rates, their use during a short initial time duration leads to a reliable estimator. Experimental results are presented for both a laboratory wheeled car door and for a full-scale passenger sedan. The results show that planar position and orientation can be accurately estimated for a range of relative motions at different oblique angles.",
"title": ""
},
{
"docid": "340f4f9336dd0884bb112345492b47f9",
"text": "Inspired by how humans summarize long documents, we propose an accurate and fast summarization model that first selects salient sentences and then rewrites them abstractively (i.e., compresses and paraphrases) to generate a concise overall summary. We use a novel sentence-level policy gradient method to bridge the nondifferentiable computation between these two neural networks in a hierarchical way, while maintaining language fluency. Empirically, we achieve the new state-of-theart on all metrics (including human evaluation) on the CNN/Daily Mail dataset, as well as significantly higher abstractiveness scores. Moreover, by first operating at the sentence-level and then the word-level, we enable parallel decoding of our neural generative model that results in substantially faster (10-20x) inference speed as well as 4x faster training convergence than previous long-paragraph encoder-decoder models. We also demonstrate the generalization of our model on the test-only DUC2002 dataset, where we achieve higher scores than a state-of-the-art model.",
"title": ""
},
{
"docid": "bd44d64b17aeeac66fa35b557c16c39e",
"text": "The development of autonomous vehicles is a highly relevant research topic in mobile robotics. Road recognition using visual information is an important capability for autonomous navigation in urban environments. Over the last three decades, a large number of visual road recognition approaches have been appeared in the literature. This paper proposes a novel visual road detection system based on multiple artificial neural networks that can identify the road based on color and texture. Several features are used as inputs of the artificial neural network such as: average, entropy, energy and variance from different color channels (RGB, HSV, YUV). As a result, our system is able to estimate the classification and the confidence factor of each part of the environment detected by the camera. Experimental tests have been performed in several situations in order to validate the proposed approach.",
"title": ""
},
{
"docid": "a45818ee6b078e3b153aae7995558e4f",
"text": "The reliability of the transmission of the switching signal of IGBT in a static converter is crucial. In fact, if the switching signals are badly transmitted, the power converter can be short-circuited with dramatic consequences. Thus, the operating of such a system can be stopped with heavy economic consequences, as it is the case for an electric train. Many techniques have been developed to achieve solutions for a safe transmission of switching signals with a good galvanic insulation. In very high-voltage, over 10 kV, an optimal solution is to use optic fibres. This technology is limited by the fibre degradation in high temperature atmosphere. Actually, this problem exists in trains. The common use of the radio frequency transmission (RFT) can be exploited to achieve an original IGBT wireless driver. This solution seems to be interesting because high temperature do not interfere with radio frequency transmission. However, radiated electromagnetic interferences (EMI) are drastically important in such an electrical environment, EMI can disturb the RFT. In order to optimise the transmission of switching signals, we have decided to transmit the signals through the energy supplying link. This last device is constituted by a double galvanic insulation transformer (DGIT). The difficulty is to transmit the energy, which is used for the IGBT driver supply and the switching signals in the same loop wire. The paper will highlight this aspect",
"title": ""
},
{
"docid": "8ed1f9194914b5529b4e89444b5feb45",
"text": "support for the camera interfaces. My colleagues Felix Woelk and Kevin Köser I would like to thank for many fruitful discussions. I thank our system administrator Torge Storm for always fixing my machine and providing enough data space for all my sequences which was really a hard job. Of course I also would like to thank the other members of the group Jan Woetzel, Daniel Grest, Birger Streckel and Renate Staecker for their help, the discussions and providing the exciting working environment. Last but not least, I would like to express my gratitude to my wife Miriam for always supporting me and my work. I also want to thank my sons Joshua and Noah for suffering under my paper writing. Finally I thank my parents for always supporting my education and my work.",
"title": ""
},
{
"docid": "8a5e4a6f418975f352a6b9e3d8958d50",
"text": "BACKGROUND\nDysphagia is associated with poor outcome in stroke patients. Studies investigating the association of dysphagia and early dysphagia screening (EDS) with outcomes in patients with acute ischemic stroke (AIS) are rare. The aims of our study are to investigate the association of dysphagia and EDS within 24 h with stroke-related pneumonia and outcomes.\n\n\nMETHODS\nOver a 4.5-year period (starting November 2007), all consecutive AIS patients from 15 hospitals in Schleswig-Holstein, Germany, were prospectively evaluated. The primary outcomes were stroke-related pneumonia during hospitalization, mortality, and disability measured on the modified Rankin Scale ≥2-5, in which 2 indicates an independence/slight disability to 5 severe disability.\n\n\nRESULTS\nOf 12,276 patients (mean age 73 ± 13; 49% women), 9,164 patients (74%) underwent dysphagia screening; of these patients, 55, 39, 4.7, and 1.5% of patients had been screened for dysphagia within 3, 3 to <24, 24 to ≤72, and >72 h following admission. Patients who underwent dysphagia screening were likely to be older, more affected on the National Institutes of Health Stroke Scale score, and to have higher rates of neurological symptoms and risk factors than patients who were not screened. A total of 3,083 patients (25.1%; 95% CI 24.4-25.8) had dysphagia. The frequency of dysphagia was higher in patients who had undergone dysphagia screening than in those who had not (30 vs. 11.1%; p < 0.001). During hospitalization (mean 9 days), 1,271 patients (10.2%; 95% CI 9.7-10.8) suffered from stroke-related pneumonia. Patients with dysphagia had a higher rate of pneumonia than those without dysphagia (29.7 vs. 3.7%; p < 0.001). Logistic regression revealed that dysphagia was associated with increased risk of stroke-related pneumonia (OR 3.4; 95% CI 2.8-4.2; p < 0.001), case fatality during hospitalization (OR 2.8; 95% CI 2.1-3.7; p < 0.001) and disability at discharge (OR 2.0; 95% CI 1.6-2.3; p < 0.001). EDS within 24 h of admission appeared to be associated with decreased risk of stroke-related pneumonia (OR 0.68; 95% CI 0.52-0.89; p = 0.006) and disability at discharge (OR 0.60; 95% CI 0.46-0.77; p < 0.001). Furthermore, dysphagia was independently correlated with an increase in mortality (OR 3.2; 95% CI 2.4-4.2; p < 0.001) and disability (OR 2.3; 95% CI 1.8-3.0; p < 0.001) at 3 months after stroke. The rate of 3-month disability was lower in patients who had received EDS (52 vs. 40.7%; p = 0.003), albeit an association in the logistic regression was not found (OR 0.78; 95% CI 0.51-1.2; p = 0.2).\n\n\nCONCLUSIONS\nDysphagia exposes stroke patients to a higher risk of pneumonia, disability, and death, whereas an EDS seems to be associated with reduced risk of stroke-related pneumonia and disability.",
"title": ""
},
{
"docid": "2f41595a29363f78a46d5488e1011371",
"text": "Increasing numbers of software vulnerabilities are discovered every year whether they are reported publicly or discovered internally in proprietary code. These vulnerabilities can pose serious risk of exploit and result in system compromise, information leaks, or denial of service. We leveraged the wealth of C and C++ open-source code available to develop a largescale function-level vulnerability detection system using machine learning. To supplement existing labeled vulnerability datasets, we compiled a vast dataset of millions of open-source functions and labeled it with carefully-selected findings from three different static analyzers that indicate potential exploits. Using these datasets, we developed a fast and scalable vulnerability detection tool based on deep feature representation learning that directly interprets lexed source code. We evaluated our tool on code from both real software packages and the NIST SATE IV benchmark dataset. Our results demonstrate that deep feature representation learning on source code is a promising approach for automated software vulnerability detection.",
"title": ""
},
{
"docid": "ccafd3340850c5c1a4dfbedd411f1d62",
"text": "The paper predicts changes in global and regional incidences of armed conflict for the 2010–2050 period. The predictions are based on a dynamic multinomial logit model estimation on a 1970–2009 cross-sectional dataset of changes between no armed conflict, minor conflict, and major conflict. Core exogenous predictors are population size, infant mortality rates, demographic composition, education levels, oil dependence, ethnic cleavages, and neighborhood characteristics. Predictions are obtained through simulating the behavior of the conflict variable implied by the estimates from this model. We use projections for the 2011–2050 period for the predictors from the UN World Population Prospects and the International Institute for Applied Systems Analysis. We treat conflicts, recent conflict history, and neighboring conflicts as endogenous variables. Out-of-sample validation of predictions for 2007–2009 (based on estimates for the 1970–2000 period) indicates that the model predicts well, with an AUC of 0.937. Using a p > 0.30 threshold for positive prediction, the True Positive Rate 7–9 years into the future is 0.79 and the False Positive Rate 0.085. We predict a continued decline in the proportion of the world’s countries that have internal armed conflict, from about 15% in 2009 to 7% in 2050. The decline is particularly strong in the Western Asia and North Africa region, and less clear in Africa South of Sahara. The remaining conflict countries will increasingly be concentrated in East, Central, and Southern Africa and in East and South Asia. ∗An earlier version of this paper was presented to the ISA Annual Convention 2009, New York, 15–18 Feb. The research was funded by the Norwegian Research Council grant no. 163115/V10. Thanks to Ken Benoit, Mike Colaresi, Scott Gates, Nils Petter Gleditsch, Joe Hewitt, Bjørn Høyland, Andy Mack, Näıma Mouhleb, Gerald Schneider, and Phil Schrodt for valuable comments.",
"title": ""
},
{
"docid": "6d26e03468a9d9c5b9952a5c07743db3",
"text": "Graphs are a powerful tool to model structured objects, but it is nontrivial to measure the similarity between two graphs. In this paper, we construct a two-graph model to represent human actions by recording the spatial and temporal relationships among local features. We also propose a novel family of context-dependent graph kernels (CGKs) to measure similarity between graphs. First, local features are used as the vertices of the two-graph model and the relationships among local features in the intra-frames and inter-frames are characterized by the edges. Then, the proposed CGKs are applied to measure the similarity between actions represented by the two-graph model. Graphs can be decomposed into numbers of primary walk groups with different walk lengths and our CGKs are based on the context-dependent primary walk group matching. Taking advantage of the context information makes the correctly matched primary walk groups dominate in the CGKs and improves the performance of similarity measurement between graphs. Finally, a generalized multiple kernel learning algorithm with a proposed l12-norm regularization is applied to combine these CGKs optimally together and simultaneously train a set of action classifiers. We conduct a series of experiments on several public action datasets. Our approach achieves a comparable performance to the state-of-the-art approaches, which demonstrates the effectiveness of the two-graph model and the CGKs in recognizing human actions.",
"title": ""
},
{
"docid": "76498da43cc81fed8f4b3f9350147e62",
"text": "Lifted inference algorithms exploit repeated structure in probabilistic models to answer queries efficiently. Previous work such as de Salvo Braz et al.’s first-order variable elimination (FOVE) has focused on the sharing of potentials across interchangeable random variables. In this paper, we also exploit interchangeability within individual potentials by introducing counting formulas, which indicate how many of the random variables in a set have each possible value. We present a new lifted inference algorithm, C-FOVE, that not only handles counting formulas in its input, but also creates counting formulas for use in intermediate potentials. C-FOVE can be described succinctly in terms of six operators, along with heuristics for when to apply them. Because counting formulas capture dependencies among large numbers of variables compactly, C-FOVE achieves asymptotic speed improvements compared to FOVE.",
"title": ""
},
{
"docid": "39cea9dd78cb90e6b9bc0a73e862c8cd",
"text": "Selecting high discriminative genes from gene expression data has become an important research. Not only can this improve the performance of cancer classification, but it can also cut down the cost of medical diagnoses when a large number of noisy, redundant genes are filtered. In this paper, a hybrid Particle Swarm Optimization (PSO) and Genetic Algorithm (GA) method is used for gene selection, and Support Vector Machine (SVM) is adopted as the classifier. The proposed approach is tested on three benchmark gene expression datasets: Leukemia, Colon and breast cancer data. Experimental results show that the proposed method can reduce the dimensionality of the dataset, and confirm the most informative gene subset and improve classification accuracy.",
"title": ""
},
{
"docid": "9844969d343ab61e5c8548475a2f19c0",
"text": "Portability is recognized as a desirable attribute for the vast majority of software products. Yet the literature on portability techniques is sparse and largely anecdotal, and portability is typically achieved by ad hoc methods. This paper proposes a framework for incorporating portability considerations into the software process. Unlike reuse, portability can be effectively attained for individual projects, both large and small. Maximizing portability, however, is more than an implementation detail; it requires reexamination of every phase of the software lifecycle. Here we identify issues and propose guidelines for increasing and exploiting portability during each of the key activities of software development and maintenance.",
"title": ""
},
{
"docid": "740e103c2f1a8283476a9e901f719be8",
"text": "The design of a novel practical 28 GHz beam steering phased array antenna for future fifth generation mobile device applications is presented in this communication. The proposed array antenna has 16 cavity-backed slot antenna elements that are implemented via the metallic back casing of the mobile device, in which two eight-element phased arrays are built on the left- and right-side edges of the mobile device. Each eight-element phased array can yield beam steering at broadside and gain of >15 dBi can be achieved at boresight. The measured 10 dB return loss bandwidth of the proposed cavity-backed slot antenna element was approximately 27.5–30 GHz. In addition, the impacts of user’s hand effects are also investigated.",
"title": ""
},
{
"docid": "cd2a59ce2615ce630ddd75c23e9843d5",
"text": "Despite of their potential value as collaborative knowledge editing systems, semantic wikis present a number of usability challenges. In particular, there are several mismatches between the simple user interaction mechanisms of wikis (which are the key to the success of wikis) and the need for users to create, edit and understand structured knowledge content (e.g., in the form of RDF or OWL ontologies). In this paper, we present a Controlled Natural Language (CNL) approach to support collaborative ontology development on Semantic MediaWiki (SMW). In order to support the expressivity required for OWL ontology development we were obliged to extend the representational substructure of the SMW system with an OWL meta model using a template-based mechanism. To improve usability, we provided a guided input interface based on semantic forms and output several CNL verbalisers including Rabbit English, Rabbit Chinese (Yayan) and Ace English. As such, this work may provide a potentially effective mechanism for encouraging the largescale collaborative creation of semantically-enriched online content.",
"title": ""
}
] | scidocsrr |
ea689b16762355c7b04d5e33c8451e4d | Inside the Mind of the Insider: Towards Insider Threat Detection Using Psychophysiological Signals | [
{
"docid": "d2b5f28a7f32de167ec4c907472af90b",
"text": "Brain-computer interfacing (BCI) is a steadily growing area of research. While initially BCI research was focused on applications for paralyzed patients, increasingly more alternative applications in healthy human subjects are proposed and investigated. In particular, monitoring of mental states and decoding of covert user states have seen a strong rise of interest. Here, we present some examples of such novel applications which provide evidence for the promising potential of BCI technology for non-medical uses. Furthermore, we discuss distinct methodological improvements required to bring non-medical applications of BCI technology to a diversity of layperson target groups, e.g., ease of use, minimal training, general usability, short control latencies.",
"title": ""
},
{
"docid": "2864c9a396910aedbfa79ed54da6ab3e",
"text": "This paper describes the design for a content based approach to detecting insider misuse by an analyst producing reports in an environment supported by a document control system. The approach makes use of Hidden Markov Models to represent stages in the EvidenceBased Intelligence Analysis Process Model (EBIAPM). This approach is seen as a potential application for the Process Query System / Tracking and Fusion Engine (PQS/TRAFEN). Actions taken by the insider are viewed as processes that can be detected in PQS/TRAFEN. Text categorization of the content of analyst’s queries, documents accessed, and work product are used to disambiguate multiple EBIAPM processes.",
"title": ""
}
] | [
{
"docid": "83d50f7c66b14116bfa627600ded28d6",
"text": "Diet can affect cognitive ability and behaviour in children and adolescents. Nutrient composition and meal pattern can exert immediate or long-term, beneficial or adverse effects. Beneficial effects mainly result from the correction of poor nutritional status. For example, thiamin treatment reverses aggressiveness in thiamin-deficient adolescents. Deleterious behavioural effects have been suggested; for example, sucrose and additives were once suspected to induce hyperactivity, but these effects have not been confirmed by rigorous investigations. In spite of potent biological mechanisms that protect brain activity from disruption, some cognitive functions appear sensitive to short-term variations of fuel (glucose) availability in certain brain areas. A glucose load, for example, acutely facilitates mental performance, particularly on demanding, long-duration tasks. The mechanism of this often described effect is not entirely clear. One aspect of diet that has elicited much research in young people is the intake/omission of breakfast. This has obvious relevance to school performance. While effects are inconsistent in well-nourished children, breakfast omission deteriorates mental performance in malnourished children. Even intelligence scores can be improved by micronutrient supplementation in children and adolescents with very poor dietary status. Overall, the literature suggests that good regular dietary habits are the best way to ensure optimal mental and behavioural performance at all times. Then, it remains controversial whether additional benefit can be gained from acute dietary manipulations. In contrast, children and adolescents with poor nutritional status are exposed to alterations of mental and/or behavioural functions that can be corrected, to a certain extent, by dietary measures.",
"title": ""
},
{
"docid": "5249a94aa9d9dbb211bb73fa95651dfd",
"text": "Power and energy have become increasingly important concerns in the design and implementation of today's multicore/manycore chips. In this paper, we present two priority-based CPU scheduling algorithms, Algorithm Cache Miss Priority CPU Scheduler (CM-PCS) and Algorithm Context Switch Priority CPU Scheduler (CS-PCS), which take advantage of often ignored dynamic performance data, in order to reduce power consumption by over 20 percent with a significant increase in performance. Our algorithms utilize Linux cpusets and cores operating at different fixed frequencies. Many other techniques, including dynamic frequency scaling, can lower a core's frequency during the execution of a non-CPU intensive task, thus lowering performance. Our algorithms match processes to cores better suited to execute those processes in an effort to lower the average completion time of all processes in an entire task, thus improving performance. They also consider a process's cache miss/cache reference ratio, number of context switches and CPU migrations, and system load. Finally, our algorithms use dynamic process priorities as scheduling criteria. We have tested our algorithms using a real AMD Opteron 6134 multicore chip and measured results directly using the “KillAWatt” meter, which samples power periodically during execution. Our results show not only a power (energy/execution time) savings of 39 watts (21.43 percent) and 38 watts (20.88 percent), but also a significant improvement in the performance, performance per watt, and execution time · watt (energy) for a task consisting of 24 concurrently executing benchmarks, when compared to the default Linux scheduler and CPU frequency scaling governor.",
"title": ""
},
{
"docid": "879a843206f4492c894791e372ab789e",
"text": "Among all the renewable resources, wind and solar are the most popular resources due to its ease of availability and its ease conversion into electricity. Each renewable resource uses DC/DC boost converter separately with MPPT control to generate power. To increase the efficiency of photovoltaic (PV) system and wind energy system, the maximum power point tracking (MPPT) technology is employed. Perturb and Observe MPPT technique is used for PV system in which dc voltage is used as perturbation variable. While in wind energy system, perturbation variable as a dc current is used in modified perturb and observe MPPT algorithm. Modified perturb and observe algorithm is stable and tracks fast for sudden wind speed change conditions. Maximum Power Point Tracking (MPPT) technique used with boost converter extracts maximum power from the source when it is available. Simulation of both the renewable energy sources is carried out separately in PSIM 9.0 with different MPPT types of techniques.",
"title": ""
},
{
"docid": "a173d8dcdcdabed9c9a0ca25cc9065cf",
"text": "Context-awareness is an essential component of systems developed in areas like Intelligent Environments, Pervasive & Ubiquitous Computing and Ambient Intelligence. In these emerging fields, there is a need for computerized systems to have a higher understanding of the situations in which to provide services or functionalities, to adapt accordingly. The literature shows that researchers modify existing engineering methods in order to better fit the needs of context-aware computing. These efforts are typically disconnected from each other and generally focus on solving specific development issues. We encourage the creation of a more holistic and unified engineering process that is tailored for the demands of these systems. For this purpose, we study the state-of-the-art in the development of context-aware systems, focusing on: A) Methodologies for developing context-aware systems, analysing the reasons behind their lack of adoption and features that the community wish they can use; B) Context-aware system engineering challenges and techniques applied during the most common development stages; C) Context-aware systems conceptualization.",
"title": ""
},
{
"docid": "c55cab85bc7f1903e4355168e6e4e07b",
"text": "Objectives: Several quantitative studies have now examined the relationship between quality of life (QoL) and bipolar disorder (BD) and have generally indicated that QoL is markedly impaired in patients with BD. However, little qualitative research has been conducted to better describe patients’ own experiences of how BD impacts upon life quality. We report here on a series of in-depth qualitative interviews we conducted as part of the item generation phase for a disease-specific scale to assess QoL in BD. Methods: We conducted 52 interviews with people with BD (n=35), their caregivers (n=5) and healthcare professionals (n=12) identified by both convenience and purposive sampling. Clinical characteristics of the affected sample ranged widely between individuals who had been clinically stable for several years through to inpatients who were recovering from a severe episode of depression or mania. Interviews were tape recorded, transcribed verbatim and analyzed thematically. Results: Although several interwoven themes emerged from the data, we chose to focus on 6 for the purposes of this paper: routine, independence, stigma and disclosure, identity, social support and spirituality. When asked to prioritize the areas they thought were most important in determining QoL, the majority of participants ranked social support as most important, followed by mental health. Conclusions: Findings indicate that there is a complex, multifaceted relationship between BD and QoL. Most of the affected individuals we interviewed reported that BD had a profoundly negative effect upon their life quality, particularly in the areas of education, vocation, financial functioning, and social and intimate relationships. However, some people also reported that having BD opened up new doors of opportunity.",
"title": ""
},
{
"docid": "edd90663591924aadc86fae5a0e43744",
"text": "Many interesting real-life mining applications rely on modeling data as sequences of discrete multi-attribute records. Mining models for network intrusion detection view data as sequences of TCP/IP packets. Text information extraction systems model the input text as a sequence of words and delimiters. Customer data mining applications profile buying habits of customers as a sequence of items purchased. In computational biology, DNA, RNA and protein data are all best modeled as sequences. Classifying, clustering and characterizing such sequence data presents interesting issues in feature engineering, discretization and pattern discovery. In this seminar we will review techniques ranging from item set counting, MDL-based discretization and Markov modeling to perform various supervised and unsupervised pattern discovery tasks on sequences. We will present case studies from network intrusion detection and DNA sequence mining to illustrate these techniques. Sunita Sarawagi researches in the fields of databases, data mining, machine learning and data warehousing. She is a member of the faculty at IIT Bombay. Prior to that she was a research staff member at IBM Almaden Research Center. She got her PhD in databases from the University of California at Berkeley. Proceedings of the 19th International Conference on Data Engineering (ICDE’03) 1063-6382/03 $ 17.00 © 2003 IEEE",
"title": ""
},
{
"docid": "a2a09c544172a3212ccc6d7a7ea7ac43",
"text": "Extending semantic parsing systems to new domains and languages is a highly expensive, time-consuming process, so making effective use of existing resources is critical. In this paper, we describe a transfer learning method using crosslingual word embeddings in a sequence-tosequence model. On the NLmaps corpus, our approach achieves state-of-the-art accuracy of 85.7% for English. Most importantly, we observed a consistent improvement for German compared with several baseline domain adaptation techniques. As a by-product of this approach, our models that are trained on a combination of English and German utterances perform reasonably well on codeswitching utterances which contain a mixture of English and German, even though the training data does not contain any code-switching. As far as we know, this is the first study of code-switching in semantic parsing. We manually constructed the set of code-switching test utterances for the NLmaps corpus and achieve 78.3% accuracy on this dataset.",
"title": ""
},
{
"docid": "67a461b000b8a952b3c32fe0ab2437cf",
"text": "Camallanus tridentatus is redescribed on the basis of the examination of specimens obtained from the stomach, caeca and intestine of the naturally infected arapaima Arapaima gigas (Schinz) from the Mexiana Island, Amazon River Delta, Brazil. Data on the surface morphology of adults inferred from confocal laser scanning and scanning electron microscopical observations are also provided. The study revealed some taxonomically important, previously unreported morphological features in this species, such as the presence of the poorly sclerotized left spicule and deirids. C. tridentatus distinctly differs from other congeneric species parasitizing freshwater fishes in South America mainly in the structure of the buccal capsule and the female caudal end. C. maculatus Martins, Garcia, Piazza and Ghiraldelli is considered a junior synonymm of Camallanus cotti Fujita.",
"title": ""
},
{
"docid": "f1e858974e84dfa8cb518e9f4f55d812",
"text": "To achieve peak performance of an algorithm (in particular for problems in AI), algorithm configuration is often necessary to determine a well-performing parameter configuration. So far, most studies in algorithm configuration focused on proposing better algorithm configuration procedures or on improving a particular algorithm’s performance. In contrast, we use all the collected empirical performance data gathered during algorithm configuration runs to generate extensive insights into an algorithm, given problem instances and the used configurator. To this end, we provide a tool, called CAVE , that automatically generates comprehensive reports and insightful figures from all available empirical data. CAVE aims to help algorithm and configurator developers to better understand their experimental setup in an automated fashion. We showcase its use by thoroughly analyzing the well studied SAT solver spear on a benchmark of software verification instances and by empirically verifying two long-standing assumptions in algorithm configuration and parameter importance: (i) Parameter importance changes depending on the instance set at hand and (ii) Local and global parameter importance analysis do not necessarily agree with each other.",
"title": ""
},
{
"docid": "09c5da2fbf8a160ba27221ff0c5417ac",
"text": " The burst fracture of the spine was first described by Holdsworth in 1963 and redefined by Denis in 1983 as being a fracture of the anterior and middle columns of the spine with or without an associated posterior column fracture. This injury has received much attention in the literature as regards its radiological diagnosis and also its clinical managment. The purpose of this article is to review the way that imaging has been used both to diagnose the injury and to guide management. Current concepts of the stability of this fracture are presented and our experience in the use of magnetic resonance imaging in deciding treatment options is discussed.",
"title": ""
},
{
"docid": "705dbe0e0564b1937da71f33d17164b8",
"text": "0191-8869/$ see front matter 2011 Elsevier Ltd. A doi:10.1016/j.paid.2011.11.011 ⇑ Tel.: +1 309 298 1622; fax: +1 309 298 2369. E-mail address: cj-carpenter2@wiu.edu A survey (N = 292) was conducted that measured self-promoting Facebook behaviors (e.g. posting status updates and photos of oneself, updating profile information) and several anti-social behaviors (e.g. seeking social support more than one provides it, getting angry when people do not comment on one’s status updates, retaliating against negative comments). The grandiose exhibitionism subscale of the narcissistic personality inventory was hypothesized to predict the self-promoting behaviors. The entitlement/exploitativeness subscale was hypothesized to predict the anti-social behaviors. Results were largely consistent with the hypothesis for the self-promoting behaviors but mixed concerning the anti-social behaviors. Trait self-esteem was also related in the opposite manner as the Narcissism scales to some Facebook behaviors. 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "914c985dc02edd09f0ee27b75ecee6a4",
"text": "Whether the development of face recognition abilities truly reflects changes in how faces, specifically, are perceived, or rather can be attributed to more general perceptual or cognitive development, is debated. Event-related potential (ERP) recordings on the scalp offer promise for this issue because they allow brain responses to complex visual stimuli to be relatively well isolated from other sensory, cognitive and motor processes. ERP studies in 5- to 16-year-old children report large age-related changes in amplitude, latency (decreases) and topographical distribution of the early visual components, the P1 and the occipito-temporal N170. To test the face specificity of these effects, we recorded high-density ERPs to pictures of faces, cars, and their phase-scrambled versions from 72 children between the ages of 4 and 17, and a group of adults. We found that none of the previously reported age-dependent changes in amplitude, latency or topography of the P1 or N170 were specific to faces. Most importantly, when we controlled for age-related variations of the P1, the N170 appeared remarkably similar in amplitude and topography across development, with much smaller age-related decreases in latencies than previously reported. At all ages the N170 showed equivalent face-sensitivity: it had the same topography and right hemisphere dominance, it was absent for meaningless (scrambled) stimuli, and larger and earlier for faces than cars. The data also illustrate the large amount of inter-individual and inter-trial variance in young children's data, which causes the N170 to merge with a later component, the N250, in grand-averaged data. Based on our observations, we suggest that the previously reported \"bi-fid\" N170 of young children is in fact the N250. Overall, our data indicate that the electrophysiological markers of face-sensitive perceptual processes are present from 4 years of age and do not appear to change throughout development.",
"title": ""
},
{
"docid": "d780db3ec609d74827a88c0fa0d25f56",
"text": "Highly automated test vehicles are rare today, and (independent) researchers have often limited access to them. Also, developing fully functioning system prototypes is time and effort consuming. In this paper, we present three adaptions of the Wizard of Oz technique as a means of gathering data about interactions with highly automated vehicles in early development phases. Two of them address interactions between drivers and highly automated vehicles, while the third one is adapted to address interactions between pedestrians and highly automated vehicles. The focus is on the experimental methodology adaptations and our lessons learned.",
"title": ""
},
{
"docid": "89ca22c24d3b6fc397e8098e62d8d4a7",
"text": "This paper introduces the design and development of a novel pressure-sensitive foot insole for real-time monitoring of plantar pressure distribution during walking. The device consists of a flexible insole with 64 pressure-sensitive elements and an integrated electronic board for high-frequency data acquisition, pre-filtering, and wireless transmission to a remote data computing/storing unit. The pressure-sensitive technology is based on an optoelectronic technology developed at Scuola Superiore Sant'Anna. The insole is a low-cost and low-power battery-powered device. The design and development of the device is presented along with its experimental characterization and validation with healthy subjects performing a task of walking at different speeds, and benchmarked against an instrumented force platform.",
"title": ""
},
{
"docid": "8fd830d62cceb6780d0baf7eda399fdf",
"text": "Little work from the Natural Language Processing community has targeted the role of quantities in Natural Language Understanding. This paper takes some key steps towards facilitating reasoning about quantities expressed in natural language. We investigate two different tasks of numerical reasoning. First, we consider Quantity Entailment, a new task formulated to understand the role of quantities in general textual inference tasks. Second, we consider the problem of automatically understanding and solving elementary school math word problems. In order to address these quantitative reasoning problems we first develop a computational approach which we show to successfully recognize and normalize textual expressions of quantities. We then use these capabilities to further develop algorithms to assist reasoning in the context of the aforementioned tasks.",
"title": ""
},
{
"docid": "8e16b62676e5ef36324c738ffd5f737d",
"text": "Virtualization technology has shown immense popularity within embedded systems due to its direct relationship with cost reduction, better resource utilization, and higher performance measures. Efficient hypervisors are required to achieve such high performance measures in virtualized environments, while taking into consideration the low memory footprints as well as the stringent timing constraints of embedded systems. Although there are a number of open-source hypervisors available such as Xen, Linux KVM and OKL4 Micro visor, this is the first paper to present the open-source embedded hypervisor Extensible Versatile hyper Visor (Xvisor) and compare it against two of the commonly used hypervisors KVM and Xen in-terms of comparison factors that affect the whole system performance. Experimental results on ARM architecture prove Xvisor's lower CPU overhead, higher memory bandwidth, lower lock synchronization latency and lower virtual timer interrupt overhead and thus overall enhanced virtualized embedded system performance.",
"title": ""
},
{
"docid": "2923652ff988572a40d682e2a459707a",
"text": "Clustering analysis is a descriptive task that seeks to identify homogeneous groups of objects based on the values of their attributes. This paper proposes a new algorithm for K-medoids clustering which runs like the K-means algorithm and tests several methods for selecting initial medoids. The proposed algorithm calculates the distance matrix once and uses it for finding new medoids at every iterative step. We evaluate the proposed algorithm using real and artificial data and compare with the results of other algorithms. The proposed algorithm takes the reduced time in computation with comparable performance as compared to the Partitioning Around Medoids.",
"title": ""
},
{
"docid": "b7de7a1c14e3bc54cc7551ecba66e8ca",
"text": "We present a new approach to capture video at high spatial and spectral resolutions using a hybrid camera system. Composed of an RGB video camera, a grayscale video camera and several optical elements, the hybrid camera system simultaneously records two video streams: an RGB video with high spatial resolution, and a multispectral video with low spatial resolution. After registration of the two video streams, our system propagates the multispectral information into the RGB video to produce a video with both high spectral and spatial resolution. This propagation between videos is guided by color similarity of pixels in the spectral domain, proximity in the spatial domain, and the consistent color of each scene point in the temporal domain. The propagation algorithm is designed for rapid computation to allow real-time video generation at the original frame rate, and can thus facilitate real-time video analysis tasks such as tracking and surveillance. Hardware implementation details and design tradeoffs are discussed. We evaluate the proposed system using both simulations with ground truth data and on real-world scenes. The utility of this high resolution multispectral video data is demonstrated in dynamic white balance adjustment and tracking.",
"title": ""
},
{
"docid": "9868528306d429cce0453d5450806edd",
"text": "In this paper, we present an approach to solve a physicsbased reinforcement learning challenge “Learning to Run” with objective to train physiologically-based human model to navigate a complex obstacle course as quickly as possible. The environment is computationally expensive, has a high-dimensional continuous action space and is stochastic. We benchmark state of the art policy-gradient methods and test several improvements, such as layer normalization, parameter noise, action and state reflecting, to stabilize training and improve its sample-efficiency. We found that the Deep Deterministic Policy Gradient method is the most efficient method for this environment and the improvements we have introduced help to stabilize training. Learned models are able to generalize to new physical scenarios, e.g. different obstacle courses.",
"title": ""
},
{
"docid": "6ce2529ff446db2d647337f30773cdc9",
"text": "The physical demands in soccer have been studied intensively, and the aim of the present review is to provide an overview of metabolic changes during a game and their relation to the development of fatigue. Heart-rate and body-temperature measurements suggest that for elite soccer players the average oxygen uptake during a match is around 70% of maximum oxygen uptake (VO2max). A top-class player has 150 to 250 brief intense actions during a game, indicating that the rates of creatine-phosphate (CP) utilization and glycolysis are frequently high during a game, which is supported by findings of reduced muscle CP levels and severalfold increases in blood and muscle lactate concentrations. Likewise, muscle pH is lowered and muscle inosine monophosphate (IMP) elevated during a soccer game. Fatigue appears to occur temporarily during a game, but it is not likely to be caused by elevated muscle lactate, lowered muscle pH, or change in muscle-energy status. It is unclear what causes the transient reduced ability of players to perform maximally. Muscle glycogen is reduced by 40% to 90% during a game and is probably the most important substrate for energy production, and fatigue toward the end of a game might be related to depletion of glycogen in some muscle fibers. Blood glucose and catecholamines are elevated and insulin lowered during a game. The blood free-fatty-acid levels increase progressively during a game, probably reflecting an increasing fat oxidation compensating for the lowering of muscle glycogen. Thus, elite soccer players have high aerobic requirements throughout a game and extensive anaerobic demands during periods of a match leading to major metabolic changes, which might contribute to the observed development of fatigue during and toward the end of a game.",
"title": ""
}
] | scidocsrr |
28c9db7b55ce28cedb2ff95d8e39303e | A Hybrid Cloud Approach for Secure Authorized Deduplication | [
{
"docid": "5d827a27d9fb1fe4041e21dde3b8ce44",
"text": "Cloud storage systems are becoming increasingly popular. A promising technology that keeps their cost down is deduplication, which stores only a single copy of repeating data. Client-side deduplication attempts to identify deduplication opportunities already at the client and save the bandwidth of uploading copies of existing files to the server. In this work we identify attacks that exploit client-side deduplication, allowing an attacker to gain access to arbitrary-size files of other users based on a very small hash signatures of these files. More specifically, an attacker who knows the hash signature of a file can convince the storage service that it owns that file, hence the server lets the attacker download the entire file. (In parallel to our work, a subset of these attacks were recently introduced in the wild with respect to the Dropbox file synchronization service.) To overcome such attacks, we introduce the notion of proofs-of-ownership (PoWs), which lets a client efficiently prove to a server that that the client holds a file, rather than just some short information about it. We formalize the concept of proof-of-ownership, under rigorous security definitions, and rigorous efficiency requirements of Petabyte scale storage systems. We then present solutions based on Merkle trees and specific encodings, and analyze their security. We implemented one variant of the scheme. Our performance measurements indicate that the scheme incurs only a small overhead compared to naive client-side deduplication.",
"title": ""
},
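The passage above mentions proof-of-ownership constructions based on Merkle trees. The sketch below is a toy Merkle-tree challenge/response in Python, assuming fixed-size blocks and a single challenged leaf; the paper's actual constructions add specific encodings/erasure coding and challenge many leaves, which are omitted here.

import hashlib, os

def h(b):
    return hashlib.sha256(b).digest()

def build_tree(blocks):
    """Levels of a Merkle tree, leaves first; odd levels are padded by duplicating the last node."""
    level = [h(b) for b in blocks]
    levels = []
    while True:
        if len(level) > 1 and len(level) % 2:
            level = level + [level[-1]]
        levels.append(level)
        if len(level) == 1:
            break
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return levels

def prove(levels, idx):
    """Authentication path for leaf idx: (is_right_child, sibling_hash) pairs."""
    path = []
    for level in levels[:-1]:
        path.append((idx % 2, level[idx ^ 1]))
        idx //= 2
    return path

def verify(root, leaf_hash, path):
    node = leaf_hash
    for is_right, sibling in path:
        node = h(sibling + node) if is_right else h(node + sibling)
    return node == root

# The server keeps only the root; the client must answer challenges from the real file blocks.
blocks = [os.urandom(4096) for _ in range(8)]
levels = build_tree(blocks)
root = levels[-1][0]
challenge = 5
assert verify(root, h(blocks[challenge]), prove(levels, challenge))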
{
"docid": "d9c244815775043d47b09cbb79a7b122",
"text": "Cloud storage is an emerging service model that enables individuals and enterprises to outsource the storage of data backups to remote cloud providers at a low cost. However, cloud clients must enforce security guarantees of their outsourced data backups. We present Fade Version, a secure cloud backup system that serves as a security layer on top of today's cloud storage services. Fade Version follows the standard version-controlled backup design, which eliminates the storage of redundant data across different versions of backups. On top of this, Fade Version applies cryptographic protection to data backups. Specifically, it enables fine-grained assured deletion, that is, cloud clients can assuredly delete particular backup versions or files on the cloud and make them permanently inaccessible to anyone, while other versions that share the common data of the deleted versions or files will remain unaffected. We implement a proof-of-concept prototype of Fade Version and conduct empirical evaluation atop Amazon S3. We show that Fade Version only adds minimal performance overhead over a traditional cloud backup service that does not support assured deletion.",
"title": ""
},
{
"docid": "c89de16110a66d65f8ae7e3476fe90ef",
"text": "In this paper, a new notion which we call private data deduplication protocol, a deduplication technique for private data storage is introduced and formalized. Intuitively, a private data deduplication protocol allows a client who holds a private data proves to a server who holds a summary string of the data that he/she is the owner of that data without revealing further information to the server. Our notion can be viewed as a complement of the state-of-the-art public data deduplication protocols of Halevi et al [7]. The security of private data deduplication protocols is formalized in the simulation-based framework in the context of two-party computations. A construction of private deduplication protocols based on the standard cryptographic assumptions is then presented and analyzed. We show that the proposed private data deduplication protocol is provably secure assuming that the underlying hash function is collision-resilient, the discrete logarithm is hard and the erasure coding algorithm can erasure up to α-fraction of the bits in the presence of malicious adversaries in the presence of malicious adversaries. To the best our knowledge this is the first deduplication protocol for private data storage.",
"title": ""
},
{
"docid": "528b17b55172cbf22e77a14db4334ba6",
"text": "Recently, Halevi et al. (CCS '11) proposed a cryptographic primitive called proofs of ownership (PoW) to enhance security of client-side deduplication in cloud storage. In a proof of ownership scheme, any owner of the same file F can prove to the cloud storage that he/she owns file F in a robust and efficient way, in the bounded leakage setting where a certain amount of efficiently-extractable information about file F is leaked. Following this work, we propose a secure client-side deduplication scheme, with the following advantages: our scheme protects data confidentiality (and some partial information) against both outside adversaries and honest-but-curious cloud storage server, while Halevi et al. trusts cloud storage server in data confidentiality; our scheme is proved secure w.r.t. any distribution with sufficient min-entropy, while Halevi et al. (the last and the most practical construction) is particular to a specific type of distribution (a generalization of \"block-fixing\" distribution) of input files.\n The cost of our improvements is that we adopt a weaker leakage setting: We allow a bounded amount one-time leakage of a target file before our scheme starts to execute, while Halevi et al. allows a bounded amount multi-time leakage of the target file before and after their scheme starts to execute. To the best of our knowledge, previous works on client-side deduplication prior Halevi et al. do not consider any leakage setting.",
"title": ""
}
] | [
{
"docid": "01556a0912c170951d0c59c9efd74d1b",
"text": "Wearable sensors have garnered considerable recent interest owing to their tremendous promise for a plethora of applications. Yet the absence of reliable non-invasive chemical sensors has greatly hindered progress in the area of on-body sensing. Electrochemical sensors offer considerable promise as wearable chemical sensors that are suitable for diverse applications owing to their high performance, inherent miniaturization, and low cost. A wide range of wearable electrochemical sensors and biosensors has been developed for real-time non-invasive monitoring of electrolytes and metabolites in sweat, tears, or saliva as indicators of a wearer's health status. With continued innovation and attention to key challenges, such non-invasive electrochemical sensors and biosensors are expected to open up new exciting avenues in the field of wearable wireless sensing devices and body-sensor networks, and thus find considerable use in a wide range of personal health-care monitoring applications, as well as in sport and military applications.",
"title": ""
},
{
"docid": "2232c9f24e6a87257172c140477cadec",
"text": "The freshwater angel fish (Pterophyllum scalare Schultze, 1823) is South American cichlid become very popular among aquarists. There is little information on their culture and aquarium husbandry. In this study growth performance and survival rate of angelfish subjected to different feeding frequencies were evaluated. Four groups of angel fish juveniles (0.87 ± 0.01 g; 3.98 ± 0.08 mm) were fed either four meals per day (F1), two meals per day (F2), one meal per day (F3) and every other day (F4) for 90 days. Final live weight and specific growth rate (SGR) values of group F1 and F2 were significantly higher than those of the other groups (P < 0.05). There was no significant difference (P > 0.05) in survival rate among the treatments. The best feed conversion ration (FCR) was obtained from four daily feeding (F1) (P < 0.05). Condition factor (CF) did not show a significant difference (P > 0.05) among experimental groups. In conclusion, the best results in growth performance were obtained by feeding four meals per day (F1) and two meals per day (F2), so they were recommended for angel fish feeding.",
"title": ""
},
{
"docid": "179e5b887f15b4ecf4ba92031a828316",
"text": "High efficiency power supply solutions for data centers are gaining more attention, in order to minimize the fast growing power demands of such loads, the 48V Voltage Regulator Module (VRM) for powering CPU is a promising solution replacing the legacy 12V VRM by which the bus distribution loss, cost and size can be dramatically minimized. In this paper, a two-stage 48V/12V/1.8V–250W VRM is proposed, the first stage is a high efficiency, high power density isolated — unregulated DC/DC converter (DCX) based on LLC resonant converter stepping the input voltage from 48V to 12V. The Matrix transformer concept was utilized for designing the high frequency transformer of the first stage, an enhanced termination loop for the synchronous rectifiers and a non-uniform winding structure is proposed resulting in significant increase in both power density and efficiency of the first stage converter. The second stage is a 4-phases buck converter stepping the voltage from 12V to 1.8V to the CPU. Since the CPU runs in the sleep mode most of the time a light load efficiency improvement method by changing the bus voltage from 12V to 6 V during light load operation is proposed showing more than 8% light load efficiency enhancement than fixed bus voltage. Experimental results demonstrate the high efficiency of the proposed solution reaching peak of 91% with a significant light load efficiency improvement.",
"title": ""
},
{
"docid": "2575dbf042cf926da3aa2cb27d1d5a24",
"text": "Because of the spread of the Internet, social platforms become big data pools. From there we can learn about the trends, culture and hot topics. This project focuses on analyzing the data from Instagram. It shows the relationship of Instagram filter data with location and number of likes to give users filter suggestion on achieving more likes based on their location. It also analyzes the popular hashtags in different locations to show visual culture differences between different cities. ACM Classification",
"title": ""
},
{
"docid": "b042f6478ef34f4be8ee9b806ddf6011",
"text": "By using an extensive framework for e-learning enablers and disablers (including 37 factors) this paper sets out to identify which of these challenges are most salient for an e-learning course in Sri Lanka. The study includes 1887 informants and data has been collected from year 2004 to 2007, covering opinions of students and staff. A quantitative approach is taken to identify the most important factors followed by a qualitative analysis to explain why and how they are important. The study identified seven major challenges in the following areas: Student support, Flexibility, Teaching and Learning Activities, Access, Academic confidence, Localization and Attitudes. In this paper these challenges will be discussed and solutions suggested.",
"title": ""
},
{
"docid": "0a12dad57eff5457d126289955c60c79",
"text": "Spreadsheets are perhaps the most ubiquitous form of end-user programming software. This paper describes a corpus, called Fuse, containing 2,127,284 URLs that return spreadsheets (and their HTTP server responses), and 249,376 unique spreadsheets, contained within a public web archive of over 26.83 billion pages. Obtained using nearly 60,000 hours of computation, the resulting corpus exhibits several useful properties over prior spreadsheet corpora, including reproducibility and extendability. Our corpus is unencumbered by any license agreements, available to all, and intended for wide usage by end-user software engineering researchers. In this paper, we detail the data and the spreadsheet extraction process, describe the data schema, and discuss the trade-offs of Fuse with other corpora.",
"title": ""
},
{
"docid": "1169d70de6d0c67f52ecac4d942d2224",
"text": "All drivers have habits behind the wheel. Different drivers vary in how they hit the gas and brake pedals, how they turn the steering wheel, and how much following distance they keep to follow a vehicle safely and comfortably. In this paper, we model such driving behaviors as car-following and pedal operation patterns. The relationship between following distance and velocity mapped into a two-dimensional space is modeled for each driver with an optimal velocity model approximated by a nonlinear function or with a statistical method of a Gaussian mixture model (GMM). Pedal operation patterns are also modeled with GMMs that represent the distributions of raw pedal operation signals or spectral features extracted through spectral analysis of the raw pedal operation signals. The driver models are evaluated in driver identification experiments using driving signals collected in a driving simulator and in a real vehicle. Experimental results show that the driver model based on the spectral features of pedal operation signals efficiently models driver individual differences and achieves an identification rate of 76.8% for a field test with 276 drivers, resulting in a relative error reduction of 55% over driver models that use raw pedal operation signals without spectral analysis",
"title": ""
},
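To make the GMM-over-spectral-features idea in the passage above concrete, here is a hedged Python sketch: the frame length, bin count and diagonal-covariance GMM are arbitrary illustrative choices, not the paper's settings, and the feature extraction is a simplified stand-in for its spectral analysis.

import numpy as np
from sklearn.mixture import GaussianMixture

def spectral_features(pedal_signal, frame=256, hop=128, n_bins=16):
    """Log-magnitude spectra of short frames of a pedal-operation signal (illustrative)."""
    frames = [pedal_signal[i:i + frame] for i in range(0, len(pedal_signal) - frame, hop)]
    feats = []
    for f in frames:
        spec = np.abs(np.fft.rfft(f * np.hanning(frame)))[:n_bins]
        feats.append(np.log(spec + 1e-8))
    return np.array(feats)

def fit_driver_models(signals_by_driver, n_components=8):
    """One GMM per driver, trained on that driver's spectral feature frames."""
    models = {}
    for driver, signal in signals_by_driver.items():
        gmm = GaussianMixture(n_components=n_components, covariance_type="diag", random_state=0)
        gmm.fit(spectral_features(signal))
        models[driver] = gmm
    return models

def identify(models, test_signal):
    """Pick the driver whose model gives the test signal the highest average log-likelihood."""
    feats = spectral_features(test_signal)
    return max(models, key=lambda d: models[d].score(feats))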
{
"docid": "9c832a2f70b4ff39c9572b73e739a409",
"text": "We investigate the behavior of convolutional neural networks (CNN) in the presence of label noise. We show empirically that CNN prediction for a given test sample depends on the labels of the training samples in its local neighborhood. This is similar to the way that the K-nearest neighbors (K-NN) classifier works. With this understanding, we derive an analytical expression for the expected accuracy of a KNN, and hence a CNN, classifier for any level of noise. In particular, we show that K-NN, and CNN, are resistant to label noise that is randomly spread across the training set, but are very sensitive to label noise that is concentrated. Experiments on real datasets validate our analytical expression by showing that they match the empirical results for varying degrees of label noise.",
"title": ""
},
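The passage above derives an expected-accuracy expression for a K-NN (and hence CNN-like) classifier under label noise. The toy calculation below is not that expression; it is a simplified binary, symmetric-noise estimate that assumes all k nearest neighbors share the test point's true class, just to show the kind of quantity involved.

from math import comb

def knn_expected_accuracy(k, noise_rate):
    """P(majority of k neighbor labels stay correct) when each training label is flipped
    independently with probability noise_rate (binary labels, symmetric noise, and the
    simplifying assumption that all k nearest neighbors share the test point's true class)."""
    return sum(comb(k, m) * (1 - noise_rate) ** m * noise_rate ** (k - m)
               for m in range(k // 2 + 1, k + 1))

# Randomly spread noise barely hurts a reasonably large neighborhood:
for p in (0.1, 0.2, 0.3, 0.4):
    print(p, round(knn_expected_accuracy(51, p), 3))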
{
"docid": "ddcbf324196c384a3eb2d5ee214e038e",
"text": "Where you can find the insider attack and cyber security beyond the hacker easily? Is it in the book store? On-line book store? are you sure? Keep in mind that you will find the book in this site. This book is very referred for you because it gives not only the experience but also lesson. The lessons are very valuable to serve for you, that's not about who are reading this insider attack and cyber security beyond the hacker book. It is about this book that will give wellness for all people from many societies.",
"title": ""
},
{
"docid": "cb4966a838bbefccbb1b74e5f541ce76",
"text": "Theories of human behavior are an important but largely untapped resource for software engineering research. They facilitate understanding of human developers’ needs and activities, and thus can serve as a valuable resource to researchers designing software engineering tools. Furthermore, theories abstract beyond specific methods and tools to fundamental principles that can be applied to new situations. Toward filling this gap, we investigate the applicability and utility of Information Foraging Theory (IFT) for understanding information-intensive software engineering tasks, drawing upon literature in three areas: debugging, refactoring, and reuse. In particular, we focus on software engineering tools that aim to support information-intensive activities, that is, activities in which developers spend time seeking information. Regarding applicability, we consider whether and how the mathematical equations within IFT can be used to explain why certain existing tools have proven empirically successful at helping software engineers. Regarding utility, we applied an IFT perspective to identify recurring design patterns in these successful tools, and consider what opportunities for future research are revealed by our IFT perspective.",
"title": ""
},
{
"docid": "13950622dd901145f566359cc5c00703",
"text": "The Indian buffet process is a stochastic process defining a probability distribution over equivalence classes of sparse binary matrices with a finite number of rows and an unbounded number of columns. This distribution is suitable for use as a prior in probabilistic models that represent objects using a potentially infinite array of features, or that involve bipartite graphs in which the size of at least one class of nodes is unknown. We give a detailed derivation of this distribution, and illustrate its use as a prior in an infinite latent feature model. We then review recent applications of the Indian buffet process in machine learning, discuss its extensions, and summarize its connections to other stochastic processes.",
"title": ""
},
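A small generative sampler for the Indian buffet process described above; alpha is the usual concentration parameter, and the matrix is returned in order-of-appearance form (the equivalence-class bookkeeping of the full derivation is glossed over).

import numpy as np

def sample_ibp(n_customers, alpha, seed=0):
    """Draw a binary feature-assignment matrix Z from the Indian buffet process."""
    rng = np.random.default_rng(seed)
    counts = []                                   # how many customers have tried each dish so far
    rows = []
    for i in range(1, n_customers + 1):
        row = [1 if rng.random() < m / i else 0 for m in counts]   # existing dishes: prob m_k / i
        new = rng.poisson(alpha / i)                               # new dishes: Poisson(alpha / i)
        row += [1] * new
        counts = [m + z for m, z in zip(counts, row)] + [1] * new
        rows = [r + [0] * new for r in rows] + [row]               # pad earlier rows with zeros
    return np.array(rows, dtype=int)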
{
"docid": "4f059822d0da0ada039b11c1d65c7c32",
"text": "Lead time reduction is a key concern of many industrial buyers of capital facilities given current economic conditions. Supply chain initiatives in manufacturing settings have led owners to expect that dramatic reductions in lead time are possible in all phases of their business, including the delivery of capital materials. Further, narrowing product delivery windows and increasing pressure to be first-tomarket create significant external pressure to reduce lead time. In this paper, a case study is presented in which an owner entered the construction supply chain to procure and position key long-lead materials. The materials were held at a position in the supply chain selected to allow some flexibility for continued customization, but dramatic reduction in the time-to-site. Simulation was used as a tool to consider time-to-site tradeoffs for multiple inventory locations so as to better match the needs of the construction effort.",
"title": ""
},
{
"docid": "13d1b0637c12d617702b4f80fd7874ef",
"text": "Linear-time algorithms for testing the planarity of a graph are well known for over 35 years. However, these algorithms are quite involved and recent publications still try to give simpler linear-time tests. We give a conceptually simple reduction from planarity testing to the problem of computing a certain construction of a 3-connected graph. This implies a linear-time planarity test. Our approach is radically different from all previous linear-time planarity tests; as key concept, we maintain a planar embedding that is 3-connected at each point in time. The algorithm computes a planar embedding if the input graph is planar and a Kuratowski-subdivision otherwise.",
"title": ""
},
{
"docid": "e43ede0fe674fe92fbfa2f76165cf034",
"text": "In this communication, a compact circularly polarized (CP) substrate integrated waveguide (SIW) horn antenna is proposed and investigated. Through etching a sloping slot on the common broad wall of two SIWs, mode coupling is generated between the top and down SIWs, and thus, a new field component as TE01 mode is produced. During the coupling process along the sloping slot, the difference in guide wavelengths of the two orthogonal modes also brings a phase shift between the two modes, which provides a possibility for radiating the CP wave. Moreover, the two different ports will generate the electric field components of TE01 mode with the opposite direction, which indicates the compact SIW horn antenna with a dual CP property can be realized as well. Measured results indicate that the proposed antenna operates with a wide 3-dB axial ratio bandwidth of 11.8% ranging from 17.6 to 19.8 GHz. The measured results are in good accordance with the simulated ones.",
"title": ""
},
{
"docid": "4b1ae8c52831341727b62687f26f300f",
"text": "A novel region-based active contour model (ACM) is proposed in this paper. It is implemented with a special processing named Selective Binary and Gaussian Filtering Regularized Level Set (SBGFRLS) method, which first selectively penalizes the level set function to be binary, and then uses a Gaussian smoothing kernel to regularize it. The advantages of our method are as follows. First, a new region-based signed pressure force (SPF) function is proposed, which can efficiently stop the contours at weak or blurred edges. Second, the exterior and interior boundaries can be automatically detected with the initial contour being anywhere in the image. Third, the proposed ACM with SBGFRLS has the property of selective local or global segmentation. It can segment not only the desired object but also the other objects. Fourth, the level set function can be easily initialized with a binary function, which is more efficient to construct than the widely used signed distance function (SDF). The computational cost for traditional re-initialization can also be reduced. Finally, the proposed algorithm can be efficiently implemented by the simple finite difference scheme. Experiments on synthetic and real images demonstrate the advantages of the proposed method over geodesic active contours (GAC) and Chan–Vese (C–V) active contours in terms of both efficiency and accuracy. 2009 Elsevier B.V. All rights reserved.",
"title": ""
},
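A schematic Python sketch of the SBGFRLS loop described above (signed pressure force, binary re-penalisation, Gaussian regularisation). The parameter values, the sign convention of the force and the fixed iteration count are assumptions for illustration and would need tuning; this is not the authors' reference implementation.

import numpy as np
from scipy.ndimage import gaussian_filter

def sbgfrls(image, init_mask, iters=200, alpha=20.0, dt=1.0, sigma=1.5):
    """Schematic SBGFRLS evolution; returns a boolean segmentation mask."""
    I = image.astype(float)
    phi = np.where(init_mask, 1.0, -1.0)            # binary initial level set
    for _ in range(iters):
        inside, outside = phi > 0, phi <= 0
        if not inside.any() or not outside.any():
            break
        c1, c2 = I[inside].mean(), I[outside].mean()
        spf = I - (c1 + c2) / 2.0                   # region-based signed pressure force
        spf /= np.abs(spf).max() + 1e-12
        gy, gx = np.gradient(phi)
        phi += dt * alpha * spf * np.sqrt(gx ** 2 + gy ** 2)
        phi = np.where(phi > 0, 1.0, -1.0)          # selective binary step
        phi = gaussian_filter(phi, sigma)           # Gaussian smoothing instead of re-initialization
    return phi > 0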
{
"docid": "3d6014310599589f82c34e4e72b7b423",
"text": "The discovery of anomalies and, more in general, of events of interest at sea is one of the main challenges of Maritime Situational Awareness. This paper introduces an event-based methodology for knowledge discovery without querying directly a large volume of raw data. The proposed architecture analyses the maritime traffic data to detect maritime traffic patterns and events and aggregate them in an Event Map, namely a georeferenced grid. The Event Map offers visualisation capabilities and, more importantly, is used as access interface to the maritime traffic knowledge database. The proposed methodology offers real-time access to the extracted maritime knowledge, and the possibility to perform more structured queries with respect to traditional basic queries (e. g. vessel proximity within a certain distance). The proposed approach is demonstrated and assessed using real-world Automatic Identification System (AIS) data, revealing computational improvements and enriched monitoring capabilities.",
"title": ""
},
{
"docid": "fc9b4cb8c37ffefde9d4a7fa819b9417",
"text": "Automatic neural architecture design has shown its potential in discovering powerful neural network architectures. Existing methods, no matter based on reinforcement learning or evolutionary algorithms (EA), conduct architecture search in a discrete space, which is highly inefficient. In this paper, we propose a simple and efficient method to automatic neural architecture design based on continuous optimization. We call this new approach neural architecture optimization (NAO). There are three key components in our proposed approach: (1) An encoder embeds/maps neural network architectures into a continuous space. (2) A predictor takes the continuous representation of a network as input and predicts its accuracy. (3) A decoder maps a continuous representation of a network back to its architecture. The performance predictor and the encoder enable us to perform gradient based optimization in the continuous space to find the embedding of a new architecture with potentially better accuracy. Such a better embedding is then decoded to a network by the decoder. Experiments show that the architecture discovered by our method is very competitive for image classification task on CIFAR-10 and language modeling task on PTB, outperforming or on par with the best results of previous architecture search methods with a significantly reduction of computational resources. Specifically we obtain 2.11% test set error rate for CIFAR-10 image classification task and 56.0 test set perplexity of PTB language modeling task. The best discovered architectures on both tasks are successfully transferred to other tasks such as CIFAR-100 and WikiText-2. Furthermore, combined with the recent proposed weight sharing mechanism, we discover powerful architecture on CIFAR-10 (with error rate 3.53%) and on PTB (with test set perplexity 56.6), with very limited computational resources (less than 10 GPU hours) for both tasks.",
"title": ""
},
{
"docid": "794c597a786486ac4d91d861d89eb242",
"text": "Human learners appear to have inherent ways to transfer knowledge between tasks. That is, we recognize and apply relevant knowledge from previous learning experiences when we encounter new tasks. The more related a new task is to our previous experience, the more easily we can master it. Common machine learning algorithms, in contrast, traditionally address isolated tasks. Transfer learning attempts to improve on traditional machine learning by transferring knowledge learned in one or more source tasks and using it to improve learning in a related target task (see Figure 1). Techniques that enable knowledge transfer represent progress towards making machine learning as efficient as human learning. This chapter provides an introduction to the goals, settings, and challenges of transfer learning. It surveys current research in this area, giving an overview of the state of the art and outlining the open problems. ABStrAct",
"title": ""
},
{
"docid": "ae9bc4e21d6e2524f09e5f5fbb9e4251",
"text": "Arvaniti, Ladd and Mennen (1998) reported a phenomenon of ‘segmental anchoring’: the beginning and end of a linguistically significant pitch movement are anchored to specific locations in segmental structure, which means that the slope and duration of the pitch movement vary according to the segmental material with which it is associated. This finding has since been replicated and extended in several languages. One possible analysis is that autosegmental tones corresponding to the beginning and end of the pitch movement show secondary association with points in structure; however, problems with this analysis have led some authors to cast doubt on the ‘hypothesis’ of segmental anchoring. I argue here that segmental anchoring is not a hypothesis expressed in terms of autosegmental phonology, but rather an empirical phonetic finding. The difficulty of describing segmental anchoring as secondary association does not disprove the ‘hypothesis’, but shows the error of using a symbolic phonological device (secondary association) to represent gradient differences of phonetic detail that should be expressed quantitatively. I propose that treating pitch movements as gestures (in the sense of Articulatory Phonology) goes some way to resolving some of the theoretical questions raised by segmental anchoring, but suggest that pitch gestures have a variety of ‘domains’ which are in need of empirical study before we can successfully integrate segmental anchoring into our understanding of speech production.",
"title": ""
},
{
"docid": "578696bf921cc5d4e831786c67845346",
"text": "Identifying and monitoring multiple disease biomarkers and other clinically important factors affecting the course of a disease, behavior or health status is of great clinical relevance. Yet conventional statistical practice generally falls far short of taking full advantage of the information available in multivariate longitudinal data for tracking the course of the outcome of interest. We demonstrate a method called multi-trajectory modeling that is designed to overcome this limitation. The method is a generalization of group-based trajectory modeling. Group-based trajectory modeling is designed to identify clusters of individuals who are following similar trajectories of a single indicator of interest such as post-operative fever or body mass index. Multi-trajectory modeling identifies latent clusters of individuals following similar trajectories across multiple indicators of an outcome of interest (e.g., the health status of chronic kidney disease patients as measured by their eGFR, hemoglobin, blood CO2 levels). Multi-trajectory modeling is an application of finite mixture modeling. We lay out the underlying likelihood function of the multi-trajectory model and demonstrate its use with two examples.",
"title": ""
}
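The passage above describes multi-trajectory modeling as a finite-mixture generalization of group-based trajectory modeling. The sketch below is only a rough stand-in: it summarizes each indicator's series with polynomial coefficients and clusters the concatenated summaries with a Gaussian mixture, rather than fitting the actual joint trajectory likelihood; the degree, group count and data layout are illustrative assumptions.

import numpy as np
from sklearn.mixture import GaussianMixture

def poly_summary(times, values, degree=2):
    """Summarise one indicator's longitudinal series by polynomial trajectory coefficients."""
    return np.polyfit(times, values, degree)

def multi_trajectory_groups(subjects, n_groups=3, degree=2):
    """subjects: list of dicts {indicator_name: (times, values)}, all sharing the same indicators."""
    names = sorted(subjects[0])
    X = np.array([np.concatenate([poly_summary(*s[name], degree) for name in names])
                  for s in subjects])
    gmm = GaussianMixture(n_components=n_groups, covariance_type="diag", random_state=0)
    labels = gmm.fit_predict(X)                   # latent cluster of each subject across indicators
    return labels, gmm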
] | scidocsrr |
c27b18e4d89aafe7e8f93c466a7b757e | Ex Machina: Personal Attacks Seen at Scale | [
{
"docid": "e6cae5bec5bb4b82794caca85d3412a2",
"text": "Detection of abusive language in user generated online content has become an issue of increasing importance in recent years. Most current commercial methods make use of blacklists and regular expressions, however these measures fall short when contending with more subtle, less ham-fisted examples of hate speech. In this work, we develop a machine learning based method to detect hate speech on online user comments from two domains which outperforms a state-ofthe-art deep learning approach. We also develop a corpus of user comments annotated for abusive language, the first of its kind. Finally, we use our detection tool to analyze abusive language over time and in different settings to further enhance our knowledge of this behavior.",
"title": ""
},
{
"docid": "f6df133663ab4342222d95a20cd09996",
"text": "Web 2.0 has led to the development and evolution of web-based communities and applications. These communities provide places for information sharing and collaboration. They also open the door for inappropriate online activities, such as harassment, in which some users post messages in a virtual community that are intentionally offensive to other members of the community. It is a new and challenging task to detect online harassment; currently few systems attempt to solve this problem. In this paper, we use a supervised learning approach for detecting harassment. Our technique employs content features, sentiment features, and contextual features of documents. The experimental results described herein show that our method achieves significant improvements over several baselines, including Term FrequencyInverse Document Frequency (TFIDF) approaches. Identification of online harassment is feasible when TFIDF is supplemented with sentiment and contextual feature attributes.",
"title": ""
},
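As a hedged illustration of the content + sentiment + contextual feature idea in the passage above, here is a small scikit-learn sketch; the lexicon, the hand-made features and the classifier choice are placeholders, not the authors' feature set.

import re
import numpy as np
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

PROFANITY = {"idiot", "stupid", "loser"}            # placeholder lexicon, not a real resource

def sentiment_context_features(posts):
    rows = []
    for p in posts:
        tokens = re.findall(r"[a-z']+", p.lower())
        rows.append([
            sum(t in PROFANITY for t in tokens),                # crude negative-sentiment cue
            sum(t in {"you", "your", "u"} for t in tokens),     # second-person targeting cue
            len(tokens),                                        # simple contextual feature
        ])
    return csr_matrix(np.array(rows, dtype=float))

def train_harassment_classifier(posts, labels):
    """TF-IDF content features stacked with hand-made sentiment/context features."""
    tfidf = TfidfVectorizer(min_df=2)
    X = hstack([tfidf.fit_transform(posts), sentiment_context_features(posts)])
    clf = LogisticRegression(max_iter=1000).fit(X, labels)
    return tfidf, clf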
{
"docid": "8bb74088e1920a3bbf65b8429575b913",
"text": "Deliberative, argumentative discourse is an important component of opinion formation, belief revision, and knowledge discovery; it is a cornerstone of modern civil society. Argumentation is productively studied in branches ranging from theoretical artificial intelligence to political rhetoric, but empirical analysis has suffered from a lack of freely available, unscripted argumentative dialogs. This paper presents the Internet Argument Corpus (IAC), a set of 390, 704 posts in 11, 800 discussions extracted from the online debate site 4forums.com. A 2866 thread/130, 206 post extract of the corpus has been manually sided for topic of discussion, and subsets of this topic-labeled extract have been annotated for several dialogic and argumentative markers: degrees of agreement with a previous post, cordiality, audiencedirection, combativeness, assertiveness, emotionality of argumentation, and sarcasm. As an application of this resource, the paper closes with a discussion of the relationship between discourse marker pragmatics, agreement, emotionality, and sarcasm in the IAC corpus.",
"title": ""
}
] | [
{
"docid": "4c9aa3eb2b84577cbe505668c2aec80f",
"text": "This paper extends existing word segmentation models to take non-linguistic context into account. It improves the token F-score of a top performing segmentation models by 2.5% on a 27k utterances dataset. We posit that word segmentation is easier in-context because the learner is not trying to access irrelevant lexical items. We use topics from a Latent Dirichlet Allocation model as a proxy for “activities” contexts, to label the Providence corpus. We present Adaptor Grammar models that use these context labels, and we study their performance with and without context annotations at test time.",
"title": ""
},
{
"docid": "eed9000c395f5a5fe327744c712e9b04",
"text": "A core challenge in Business Process Management is the continuous, bi-directional translation between (1) a business requirements view on the process space of an enterprise and (2) the actual process space of this enterprise, constituted by the multiplicity of IT systems, resources, and human labor. Semantic Business Process Management (SBPM) [HeLD'05] is a novel approach of increasing the level of automation in the translation between these two spheres, and is currently driven by major players from the ERP, BPM, and Semantic Web Services domain, namely SAP. One core paradigm of SPBM is to represent the two spheres and their parts using ontology languages and to employ machine reasoning for the automated or semi-automated translation. In this paper, we (1) outline the representational requirements of SBPM, (2) propose a set of ontologies and formalisms, and (3) define the scope of these ontologies by giving competency questions, which is a common technique in the ontology engineering process.",
"title": ""
},
{
"docid": "858f6840881ae7b284149402f279185e",
"text": "Voting in elections is the basis of democracy, but citizens may not be able or willing to go to polling stations to vote on election days. Remote e-voting via the Internet provides the convenience of voting on the voter's own computer or mobile device, but Internet voting systems are vulnerable to many common attacks, affecting the integrity of an election. Distributing the processing of votes over many web servers installed in tamper-resistant, secure environments can improve security: this is possible by using the Smart Card Web Server (SCWS) on a mobile phone Subscriber Identity Module (SIM). This paper proposes a generic model for a voting application installed in the SIM/SCWS, which uses standardised Mobile Network Operator (MNO) management procedures to communicate (via HTTPs) with a voting authority to vote. The generic SCWS voting model is then used with the e-voting system Prêt à Voter. A preliminary security analysis of the proposal is carried out, and further research areas are identified. As the SCWS voting application is used in a distributed processing architecture, e-voting security is enhanced because to compromise an election, an attacker must target many individual mobile devices rather than a centralised web server.",
"title": ""
},
{
"docid": "53371fac3b92afe5bc6c51dccd95fc4b",
"text": "Multi-frequency electrical impedance tomography (EIT) systems require stable voltage controlled current generators that will work over a wide frequency range and with a large variation in load impedance. In this paper we compare the performance of two commonly used designs: the first is a modified Howland circuit whilst the second is based on a current mirror. The output current and the output impedance of both circuits were determined through PSPICE simulation and through measurement. Both circuits were stable over the frequency ranges 1 kHz to 1 MHz. The maximum variation of output current with frequency for the modified Howland circuit was 2.0% and for the circuit based on a current mirror 1.6%. The output impedance for both circuits was greater than 100 kohms for frequencies up to 100 kHz. However, neither circuit achieved this output impedance at 1 MHz. Comparing the results from the two circuits suggests that there is little to choose between them in terms of a practical implementation.",
"title": ""
},
{
"docid": "9049805c56c9b7fc212fdb4c7f85dfe1",
"text": "Intentions (6) Do all the important errands",
"title": ""
},
{
"docid": "740c3b23904fb05384f0d58c680ea310",
"text": "Huge amount data on the internet are in unstructured texts can‟t simply be used for further processing by computer , therefore specific processing method and algorithm require to extract useful pattern. Text mining is process to extract information from the unstructured data. Text classification is task of automatically sorting set of document into categories from predefined set. A major difficulty of text classification is high dimensionality of feature space. Feature selection method used for dimension reduction. This paper describe about text classification process, compare various classifier and also discuss feature selection method for solving problem of high dimensional data and application of text classification.",
"title": ""
},
{
"docid": "b9720d1350bf89c8a94bb30276329ce2",
"text": "Generative concept representations have three major advantages over discriminative ones: they can represent uncertainty, they support integration of learning and reasoning, and they are good for unsupervised and semi-supervised learning. We discuss probabilistic and generative deep learning, which generative concept representations are based on, and the use of variational autoencoders and generative adversarial networks for learning generative concept representations, particularly for concepts whose data are sequences, structured data or graphs.",
"title": ""
},
{
"docid": "1e46143d47f5f221094d0bb09505be80",
"text": "Clinical Scenario: Patients who experience prolonged concussion symptoms can be diagnosed with postconcussion syndrome (PCS) when those symptoms persist longer than 4 weeks. Aerobic exercise protocols have been shown to be effective in improving physical and mental aspects of health. Emerging research suggests that aerobic exercise may be useful as a treatment for PCS, where exercise allows patients to feel less isolated and more active during the recovery process.\n\n\nCLINICAL QUESTION\nIs aerobic exercise more beneficial in reducing symptoms than current standard care in patients with prolonged symptoms or PCS lasting longer than 4 weeks? Summary of Key Findings: After a thorough literature search, 4 studies relevant to the clinical question were selected. Of the 4 studies, 1 study was a randomized control trial and 3 studies were case series. All 4 studies investigated aerobic exercise protocol as treatment for PCS. Three studies demonstrated a greater rate of symptom improvement from baseline assessment to follow-up after a controlled subsymptomatic aerobic exercise program. One study showed a decrease in symptoms in the aerobic exercise group compared with the full-body stretching group. Clinical Bottom Line: There is moderate evidence to support subsymptomatic aerobic exercise as a treatment of PCS; therefore, it should be considered as a clinical option for reducing PCS and prolonged concussion symptoms. A previously validated protocol, such as the Buffalo Concussion Treadmill test, Balke protocol, or rating of perceived exertion, as mentioned in this critically appraised topic, should be used to measure baseline values and treatment progression. Strength of Recommendation: Level C evidence exists that the aerobic exercise protocol is more effective than the current standard of care in treating PCS.",
"title": ""
},
{
"docid": "a8d7f6dcaf55ebd5ec580b2b4d104dd9",
"text": "In this paper we investigate social tags as a novel highvolume source of semantic metadata for music, using techniques from the fields of information retrieval and multivariate data analysis. We show that, despite the ad hoc and informal language of tagging, tags define a low-dimensional semantic space that is extremely well-behaved at the track level, in particular being highly organised by artist and musical genre. We introduce the use of Correspondence Analysis to visualise this semantic space, and show how it can be applied to create a browse-by-mood interface for a psychologically-motivated two-dimensional subspace rep resenting musical emotion.",
"title": ""
},
{
"docid": "f442354c5a99ece9571168648285f763",
"text": "A general closed-form subharmonic stability condition is derived for the buck converter with ripple-based constant on-time control and a feedback filter. The turn-on delay is included in the analysis. Three types of filters are considered: low-pass filter (LPF), phase-boost filter (PBF), and inductor current feedback (ICF) which changes the feedback loop frequency response like a filter. With the LPF, the stability region is reduced. With the PBF or ICF, the stability region is enlarged. Stability conditions are determined both for the case of a single output capacitor and for the case of two parallel-connected output capacitors having widely different time constants. The past research results related to the feedback filters become special cases. All theoretical predictions are verified by experiments.",
"title": ""
},
{
"docid": "0e3135a7846cee7f892b99dc4881b461",
"text": "OBJECTIVE: This study examined the relation among children's physical activity, sedentary behaviours, and body mass index (BMI), while controlling for sex, family structure, and socioeconomic status.DESIGN: Epidemiological study examining the relations among physical activity participation, sedentary behaviour (video game use and television (TV)/video watching), and BMI on a nationally representative sample of Canadian children.SUBJECTS: A representative sample of Canadian children aged 7–11 (N=7216) from the 1994 National Longitudinal Survey of Children and Youth was used in the analysis.MEASUREMENTS: Physical activity and sport participation, sedentary behaviour (video game use and TV/video watching), and BMI measured by parental report.RESULTS: Both organized and unorganized sport and physical activity are negatively associated with being overweight (10–24% reduced risk) or obese (23–43% reduced risk), while TV watching and video game use are risk factors for being overweight (17–44% increased risk) or obese (10–61% increased risk). Physical activity and sedentary behaviour partially account for the association of high socioeconomic status and two-parent family structure with the likelihood of being overweight or obese.CONCLUSION: This study provides evidence supporting the link between physical inactivity and obesity of Canadian children.",
"title": ""
},
{
"docid": "b10ad91ce374a772790666da5a79616c",
"text": "Photophobia is a common yet debilitating symptom seen in many ophthalmic and neurologic disorders. Despite its prevalence, it is poorly understood and difficult to treat. However, the past few years have seen significant advances in our understanding of this symptom. We review the clinical characteristics and disorders associated with photophobia, discuss the anatomy and physiology of this phenomenon, and conclude with a practical approach to diagnosis and treatment.",
"title": ""
},
{
"docid": "93a3895a03edcb50af74db901cb16b90",
"text": "OBJECT\nBecause lumbar magnetic resonance (MR) imaging fails to identify a treatable cause of chronic sciatica in nearly 1 million patients annually, the authors conducted MR neurography and interventional MR imaging in 239 consecutive patients with sciatica in whom standard diagnosis and treatment failed to effect improvement.\n\n\nMETHODS\nAfter performing MR neurography and interventional MR imaging, the final rediagnoses included the following: piriformis syndrome (67.8%), distal foraminal nerve root entrapment (6%), ischial tunnel syndrome (4.7%), discogenic pain with referred leg pain (3.4%), pudendal nerve entrapment with referred pain (3%), distal sciatic entrapment (2.1%), sciatic tumor (1.7%), lumbosacral plexus entrapment (1.3%), unappreciated lateral disc herniation (1.3%), nerve root injury due to spinal surgery (1.3%), inadequate spinal nerve root decompression (0.8%), lumbar stenosis (0.8%), sacroiliac joint inflammation (0.8%), lumbosacral plexus tumor (0.4%), sacral fracture (0.4%), and no diagnosis (4.2%). Open MR-guided Marcaine injection into the piriformis muscle produced the following results: no response (15.7%), relief of greater than 8 months (14.9%), relief lasting 2 to 4 months with continuing relief after second injection (7.5%), relief for 2 to 4 months with subsequent recurrence (36.6%), and relief for 1 to 14 days with full recurrence (25.4%). Piriformis surgery (62 operations; 3-cm incision, transgluteal approach, 55% outpatient; 40% with local or epidural anesthesia) resulted in excellent outcome in 58.5%, good outcome in 22.6%, limited benefit in 13.2%, no benefit in 3.8%, and worsened symptoms in 1.9%.\n\n\nCONCLUSIONS\nThis Class A quality evaluation of MR neurography's diagnostic efficacy revealed that piriformis muscle asymmetry and sciatic nerve hyperintensity at the sciatic notch exhibited a 93% specificity and 64% sensitivity in distinguishing patients with piriformis syndrome from those without who had similar symptoms (p < 0.01). Evaluation of the nerve beyond the proximal foramen provided eight additional diagnostic categories affecting 96% of these patients. More than 80% of the population good or excellent functional outcome was achieved.",
"title": ""
},
{
"docid": "b44f24b54e45974421f799527391a9db",
"text": "Dengue fever is a noncontagious infectious disease caused by dengue virus (DENV). DENV belongs to the family Flaviviridae, genus Flavivirus, and is classified into four antigenically distinct serotypes: DENV-1, DENV-2, DENV-3, and DENV-4. The number of nations and people affected has increased steadily and today is considered the most widely spread arbovirus (arthropod-borne viral disease) in the world. The absence of an appropriate animal model for studying the disease has hindered the understanding of dengue pathogenesis. In our study, we have found that immunocompetent C57BL/6 mice infected intraperitoneally with DENV-1 presented some signs of dengue disease such as thrombocytopenia, spleen hemorrhage, liver damage, and increase in production of IFNγ and TNFα cytokines. Moreover, the animals became viremic and the virus was detected in several organs by real-time RT-PCR. Thus, this animal model could be used to study mechanism of dengue virus infection, to test antiviral drugs, as well as to evaluate candidate vaccines.",
"title": ""
},
{
"docid": "63a0eda53c38e434002c561687cf5e10",
"text": "We propose a constructive control design for stabilization of non-periodic trajectories of underactuated robots. An important example of such a system is an underactuated “dynamic walking” biped robot traversing rough or uneven terrain. The stabilization problem is inherently challenging due to the nonlinearity, open-loop instability, hybrid (impact) dynamics, and target motions which are not known in advance. The proposed technique is to compute a transverse linearization about the desired motion: a linear impulsive system which locally represents “transversal” dynamics about a target trajectory. This system is then exponentially stabilized using a modified receding-horizon control design, providing exponential orbital stability of the target trajectory of the original nonlinear system. The proposed method is experimentally verified using a compass-gait walker: a two-degree-of-freedom biped with hip actuation but pointed stilt-like feet. The technique is, however, very general and can be applied to a wide variety of hybrid nonlinear systems.",
"title": ""
},
{
"docid": "ad1d572a7ee58c92df5d1547fefba1e8",
"text": "The primary source for the blood supply of the head of the femur is the deep branch of the medial femoral circumflex artery (MFCA). In posterior approaches to the hip and pelvis the short external rotators are often divided. This can damage the deep branch and interfere with perfusion of the head. We describe the anatomy of the MFCA and its branches based on dissections of 24 cadaver hips after injection of neoprene-latex into the femoral or internal iliac arteries. The course of the deep branch of the MFCA was constant in its extracapsular segment. In all cases there was a trochanteric branch at the proximal border of quadratus femoris spreading on to the lateral aspect of the greater trochanter. This branch marks the level of the tendon of obturator externus, which is crossed posteriorly by the deep branch of the MFCA. As the deep branch travels superiorly, it crosses anterior to the conjoint tendon of gemellus inferior, obturator internus and gemellus superior. It then perforates the joint capsule at the level of gemellus superior. In its intracapsular segment it runs along the posterosuperior aspect of the neck of the femur dividing into two to four subsynovial retinacular vessels. We demonstrated that obturator externus protected the deep branch of the MFCA from being disrupted or stretched during dislocation of the hip in any direction after serial release of all other soft-tissue attachments of the proximal femur, including a complete circumferential capsulotomy. Precise knowledge of the extracapsular anatomy of the MFCA and its surrounding structures will help to avoid iatrogenic avascular necrosis of the head of the femur in reconstructive surgery of the hip and fixation of acetabular fractures through the posterior approach.",
"title": ""
},
{
"docid": "8b3557219674c8441e63e9b0ab459c29",
"text": "his paper is focused on comparison of various decision tree classification algorithms using WEKA tool. Data mining tools such as classification, clustering, association and neural network solve large amount of problem. These are all open source tools, we directly communicate with each tool or by java code. In this paper we discuss on classification technique of data mining. In classification, various techniques are present such as bayes, functions, lazy, rules and tree etc. . Decision tree is one of the most frequently used classification algorithm. Decision tree classification with Waikato Environment for Knowledge Analysis (WEKA) is the simplest way to mining information from huge database. This work shows the process of WEKA analysis of file converts, step by step process of weka execution, selection of attributes to be mined and comparison with Knowledge Extraction of Evolutionary Learning . I took database [1] and execute in weka software. The conclusion of the paper shows the comparison among all type of decision tree algorithms by weka tool.",
"title": ""
},
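WEKA itself is a Java GUI/library, so as a loose analogue of the comparison described above, the scikit-learn snippet below cross-validates decision trees with two split criteria on a stand-in dataset; it is illustrative only and does not reproduce the paper's WEKA experiments or its specific database.

from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)                  # stand-in for the paper's database [1]
for criterion in ("gini", "entropy"):              # roughly CART-like vs. C4.5/J48-like splitting
    scores = cross_val_score(DecisionTreeClassifier(criterion=criterion, random_state=0),
                             X, y, cv=10)
    print(criterion, round(scores.mean(), 3))      # mean 10-fold accuracy per criterion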
{
"docid": "dbac70b623466c13d6033f6af5520910",
"text": "This paper first presents an improved trajectory-based algorithm for automatically detecting and tracking the ball in broadcast soccer video. Unlike the object-based algorithms, our algorithm does not evaluate whether a sole object is a ball. Instead, it evaluates whether a candidate trajectory, which is generated from the candidate feature image by a candidate verification procedure based on Kalman filter,, which is generated from the candidate feature image by a candidate verification procedure based on Kalman filter, is a ball trajectory. Secondly, a new approach for automatically analyzing broadcast soccer video is proposed, which is based on the ball trajectory. The algorithms in this approach not only improve play-break analysis and high-level semantic event detection, but also detect the basic actions and analyze team ball possession, which may not be analyzed based only on the low-level feature. Moreover, experimental results show that our ball detection and tracking algorithm can achieve above 96% accuracy for the video segments with the soccer field. Compared with the existing methods, a higher accuracy is achieved on goal detection and play-break segmentation. To the best of our knowledge, we present the first solution in detecting the basic actions such as touching and passing, and analyzing the team ball possession in broadcast soccer video.",
"title": ""
}
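To illustrate the trajectory-level verification idea in the passage above, here is a hedged constant-velocity Kalman-filter sketch that scores how consistently a sequence of ball-candidate positions stays inside a prediction gate; the gate size and noise covariances are arbitrary illustrative values, not the paper's parameters.

import numpy as np

def make_cv_kalman(dt=1.0, q=1.0, r=5.0):
    """Constant-velocity model: state [x, y, vx, vy], position measurements."""
    F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]], float)
    H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
    Q, R = q * np.eye(4), r * np.eye(2)
    return F, H, Q, R

def trajectory_support(candidates, gate=15.0):
    """Fraction of frames whose ball candidate stays inside the Kalman prediction gate."""
    F, H, Q, R = make_cv_kalman()
    x = np.array([*candidates[0], 0.0, 0.0])
    P = np.eye(4) * 100.0
    hits = 0
    for z in candidates[1:]:
        x, P = F @ x, F @ P @ F.T + Q                       # predict
        innov = np.asarray(z, float) - H @ x
        if np.linalg.norm(innov) < gate:
            hits += 1
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)                  # update only on verified detections
            x, P = x + K @ innov, (np.eye(4) - K @ H) @ P
    return hits / max(len(candidates) - 1, 1)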
] | scidocsrr |
6983c21d0a12808f443e462b3ce3de13 | Lucid dreaming treatment for nightmares: a pilot study. | [
{
"docid": "5bcccfe91c68d12b8bf78017a477c979",
"text": "SUMMARY\nThe occurrence of lucid dreaming (dreaming while being conscious that one is dreaming) has been verified for 5 selected subjects who signaled that they knew they were dreaming while continuing to dream during unequivocal REM sleep. The signals consisted of particular dream actions having observable concomitants and were performed in accordance with pre-sleep agreement. The ability of proficient lucid dreamers to signal in this manner makes possible a new approach to dream research--such subjects, while lucid, could carry out diverse dream experiments marking the exact time of particular dream events, allowing derivation of of precise psychophysiological correlations and methodical testing of hypotheses.",
"title": ""
}
] | [
{
"docid": "4a6d231ce704e4acf9320ac3bd5ade14",
"text": "Despite recent advances in discourse parsing and causality detection, the automatic recognition of argumentation structure of authentic texts is still a very challenging task. To approach this problem, we collected a small corpus of German microtexts in a text generation experiment, resulting in texts that are authentic but of controlled linguistic and rhetoric complexity. We show that trained annotators can determine the argumentation structure on these microtexts reliably. We experiment with different machine learning approaches for automatic argumentation structure recognition on various levels of granularity of the scheme. Given the complex nature of such a discourse understanding tasks, the first results presented here are promising, but invite for further investigation.",
"title": ""
},
{
"docid": "e4cefd3932ea07682e4eef336dda278b",
"text": "Rubinstein-Taybi syndrome (RSTS) is a developmental disorder characterized by a typical face and distal limbs abnormalities, intellectual disability, and a vast number of other features. Two genes are known to cause RSTS, CREBBP in 60% and EP300 in 8-10% of clinically diagnosed cases. Both paralogs act in chromatin remodeling and encode for transcriptional co-activators interacting with >400 proteins. Up to now 26 individuals with an EP300 mutation have been published. Here, we describe the phenotype and genotype of 42 unpublished RSTS patients carrying EP300 mutations and intragenic deletions and offer an update on another 10 patients. We compare the data to 308 individuals with CREBBP mutations. We demonstrate that EP300 mutations cause a phenotype that typically resembles the classical RSTS phenotype due to CREBBP mutations to a great extent, although most facial signs are less marked with the exception of a low-hanging columella. The limb anomalies are more similar to those in CREBBP mutated individuals except for angulation of thumbs and halluces which is very uncommon in EP300 mutated individuals. The intellectual disability is variable but typically less marked whereas the microcephaly is more common. All types of mutations occur but truncating mutations and small rearrangements are most common (86%). Missense mutations in the HAT domain are associated with a classical RSTS phenotype but otherwise no genotype-phenotype correlation is detected. Pre-eclampsia occurs in 12/52 mothers of EP300 mutated individuals versus in 2/59 mothers of CREBBP mutated individuals, making pregnancy with an EP300 mutated fetus the strongest known predictor for pre-eclampsia. © 2016 Wiley Periodicals, Inc.",
"title": ""
},
{
"docid": "d52c31b947ee6edf59a5ef416cbd0564",
"text": "Saliency detection for images has been studied for many years, for which a lot of methods have been designed. In saliency detection, background priors, which are often regarded as pseudo-background, are effective clues to find salient objects in images. Although image boundary is commonly used as background priors, it does not work well for images of complex scenes and videos. In this paper, we explore how to identify the background priors for a video and propose a saliency-based method to detect the visual objects by using the background priors. For a video, we integrate multiple pairs of scale-invariant feature transform flows from long-range frames, and a bidirectional consistency propagation is conducted to obtain the accurate and sufficient temporal background priors, which are combined with spatial background priors to generate spatiotemporal background priors. Next, a novel dual-graph-based structure using spatiotemporal background priors is put forward in the computation of saliency maps, fully taking advantage of appearance and motion information in videos. Experimental results on different challenging data sets show that the proposed method robustly and accurately detects the video objects in both simple and complex scenes and achieves better performance compared with other the state-of-the-art video saliency models.",
"title": ""
},
{
"docid": "c56daed0cc2320892fad3ac34ce90e09",
"text": "In this paper we describe the open source data analytics platform KNIME, focusing particularly on extensions and modules supporting fuzzy sets and fuzzy learning algorithms such as fuzzy clustering algorithms, rule induction methods, and interactive clustering tools. In addition we outline a number of experimental extensions, which are not yet part of the open source release and present two illustrative examples from real world applications to demonstrate the power of the KNIME extensions.",
"title": ""
},
{
"docid": "806ae85b278c98a9107adeb1f55b8808",
"text": "The present studies report the effects on neonatal rats of oral exposure to genistein during the period from birth to postnatal day (PND) 21 to generate data for use in assessing human risk following oral ingestion of genistein. Failure to demonstrate significant exposure of the newborn pups via the mothers milk led us to subcutaneously inject genistein into the pups over the period PND 1-7, followed by daily gavage dosing to PND 21. The targeted doses throughout were 4 mg/kg/day genistein (equivalent to the average exposure of infants to total isoflavones in soy milk) and a dose 10 times higher than this (40 mg/kg genistein). The dose used during the injection phase of the experiment was based on plasma determinations of genistein and its major metabolites. Diethylstilbestrol (DES) at 10 micro g/kg was used as a positive control agent for assessment of changes in the sexually dimorphic nucleus of the preoptic area (SDN-POA). Administration of 40 mg/kg genistein increased uterus weights at day 22, advanced the mean day of vaginal opening, and induced permanent estrus in the developing female pups. Progesterone concentrations were also decreased in the mature females. There were no effects in females dosed with 4 mg/kg genistein, the predicted exposure level for infants drinking soy-based infant formulas. There were no consistent effects on male offspring at either dose level of genistein. Although genistein is estrogenic at 40 mg/kg/day, as illustrated by the effects described above, this dose does not have the same repercussions as DES in terms of the organizational effects on the SDN-POA.",
"title": ""
},
{
"docid": "7df7377675ac0dfda5bcd22f2f5ba22b",
"text": "Background and Aim. Esthetic concerns in primary teeth have been studied mainly from the point of view of parents. The aim of this study was to study compare the opinions of children aged 5-8 years to have an opinion regarding the changes in appearance of their teeth due to dental caries and the materials used to restore those teeth. Methodology. A total of 107 children and both of their parents (n = 321), who were seeking dental treatment, were included in this study. A tool comprising a questionnaire and pictures of carious lesions and their treatment arranged in the form of a presentation was validated and tested on 20 children and their parents. The validated tool was then tested on all participants. Results. Children had acceptable validity statistics for the tool suggesting that they were able to make informed decisions regarding esthetic restorations. There was no difference between the responses of the children and their parents on most points. Zirconia crowns appeared to be the most acceptable full coverage restoration for primary anterior teeth among both children and their parents. Conclusion. Within the limitations of the study it can be concluded that children in their sixth year of life are capable of appreciating the esthetics of the restorations for their anterior teeth.",
"title": ""
},
{
"docid": "7926ab6b5cd5837a9b3f59f8a1b3f5ac",
"text": "Recently, very deep convolutional neural networks (CNNs) have been attracting considerable attention in image restoration. However, as the depth grows, the longterm dependency problem is rarely realized for these very deep models, which results in the prior states/layers having little influence on the subsequent ones. Motivated by the fact that human thoughts have persistency, we propose a very deep persistent memory network (MemNet) that introduces a memory block, consisting of a recursive unit and a gate unit, to explicitly mine persistent memory through an adaptive learning process. The recursive unit learns multi-level representations of the current state under different receptive fields. The representations and the outputs from the previous memory blocks are concatenated and sent to the gate unit, which adaptively controls how much of the previous states should be reserved, and decides how much of the current state should be stored. We apply MemNet to three image restoration tasks, i.e., image denosing, super-resolution and JPEG deblocking. Comprehensive experiments demonstrate the necessity of the MemNet and its unanimous superiority on all three tasks over the state of the arts. Code is available at https://github.com/tyshiwo/MemNet.",
"title": ""
},
{
"docid": "bd24772c4f75f90fe51841aeb9632e4f",
"text": "Fifty years have passed since the publication of the first regression tree algorithm. New techniques have added capabilities that far surpass those of the early methods. Modern classification trees can partition the data with linear splits on subsets of variables and fit nearest neighbor, kernel density, and other models in the partitions. Regression trees can fit almost every kind of traditional statistical model, including least-squares, quantile, logistic, Poisson, and proportional hazards models, as well as models for longitudinal and multiresponse data. Greater availability and affordability of software (much of which is free) have played a significant role in helping the techniques gain acceptance and popularity in the broader scientific community. This article surveys the developments and briefly reviews the key ideas behind some of the major algorithms.",
"title": ""
},
{
"docid": "17598d7543d81dcf7ceb4cb354fb7c81",
"text": "Bitcoin is the first decentralized crypto-currency that is currently by far the most popular one in use. The bitcoin transaction syntax is expressive enough to setup digital contracts whose fund transfer can be enforced automatically. In this paper, we design protocols for the bitcoin voting problem, in which there are n voters, each of which wishes to fund exactly one of two candidates A and B. The winning candidate is determined by majority voting, while the privacy of individual vote is preserved. Moreover, the decision is irrevocable in the sense that once the outcome is revealed, the winning candidate is guaranteed to have the funding from all n voters. As in previous works, each voter is incentivized to follow the protocol by being required to put a deposit in the system, which will be used as compensation if he deviates from the protocol. Our solution is similar to previous protocols used for lottery, but needs an additional phase to distribute secret random numbers via zero-knowledge-proofs. Moreover, we have resolved a security issue in previous protocols that could prevent compensation from being paid.",
"title": ""
},
{
"docid": "6897a459e95ac14772de264545970726",
"text": "There is a need for a system which provides real-time local environmental data in rural crop fields for the detection and management of fungal diseases. This paper presents the design of an Internet of Things (IoT) system consisting of a device capable of sending real-time environmental data to cloud storage and a machine learning algorithm to predict environmental conditions for fungal detection and prevention. The stored environmental data on conditions such as air temperature, relative air humidity, wind speed, and rain fall is accessed and processed by a remote computer for analysis and management purposes. A machine learning algorithm using Support Vector Machine regression (SVMr) was developed to process the raw data and predict short-term (day-to-day) air temperature, relative air humidity, and wind speed values to assist in predicting the presence and spread of harmful fungal diseases through the local crop field. Together, the environmental data and environmental predictions made easily accessible by this IoT system will ultimately assist crop field managers by facilitating better management and prevention of fungal disease spread.",
"title": ""
},
{
"docid": "704bd445fd9ff34a2d71e8e5b196760c",
"text": "Convolutional neural nets (CNNs) have demonstrated remarkable performance in recent history. Such approaches tend to work in a “unidirectional” bottom-up feed-forward fashion. However, biological evidence suggests that feedback plays a crucial role, particularly for detailed spatial understanding tasks. This work introduces “bidirectional” architectures that also reason with top-down feedback: neural units are influenced by both lower and higher-level units. We do so by treating units as latent variables in a global energy function. We call our models convolutional latentvariable models (CLVMs). From a theoretical perspective, CLVMs unify several approaches for recognition, including CNNs, generative deep models (e.g., Boltzmann machines), and discriminative latent-variable models (e.g., DPMs). From a practical perspective, CLVMs are particularly well-suited for multi-task learning. We describe a single architecture that simultaneously achieves state-of-the-art accuracy for tasks spanning both high-level recognition (part detection/localization) and low-level grouping (pixel segmentation). Bidirectional reasoning is particularly helpful for detailed low-level tasks, since they can take advantage of top-down feedback. Our architectures are quite efficient, capable of processing an image in milliseconds. We present results on benchmark datasets with both part/keypoint labels and segmentation masks (such as PASCAL and LFW) that demonstrate a significant improvement over prior art, in both speed and accuracy.",
"title": ""
},
{
"docid": "5745ed6c874867ad2de84b040e40d336",
"text": "The chemokine (C-X-C motif) ligand 1 (CXCL1) regulates tumor-stromal interactions and tumor invasion. However, the precise role of CXCL1 on gastric tumor growth and patient survival remains unclear. In the current study, protein expressions of CXCL1, vascular endothelial growth factor (VEGF) and phospho-signal transducer and activator of transcription 3 (p-STAT3) in primary tumor tissues from 98 gastric cancer patients were measured by immunohistochemistry (IHC). CXCL1 overexpressed cell lines were constructed using Lipofectamine 2000 reagent or lentiviral vectors. Effects of CXCL1 on VEGF expression and local tumor growth were evaluated in vitro and in vivo. CXCL1 was positively expressed in 41.4% of patients and correlated with VEGF and p-STAT3 expression. Higher CXCL1 expression was associated with advanced tumor stage and poorer prognosis. In vitro studies in AGS and SGC-7901 cells revealed that CXCL1 increased cell migration but had little effect on cell proliferation. CXCL1 activated VEGF signaling in gastric cancer (GC) cells, which was inhibited by STAT3 or chemokine (C-X-C motif) receptor 2 (CXCR2) blockade. CXCL1 also increased p-STAT3 expression in GC cells. In vivo, CXCL1 increased xenograft local tumor growth, phospho-Janus kinase 2 (p-JAK2), p-STAT3 levels, VEGF expression and microvessel density. These results suggested that CXCL1 increased local tumor growth through activation of VEGF signaling which may have mechanistic implications for the observed inferior GC survival. The CXCL1/CXCR2 pathway might be potent to improve anti-angiogenic therapy for gastric cancer.",
"title": ""
},
{
"docid": "4737fe7f718f79c74595de40f8778da2",
"text": "In this paper we describe a method of procedurally generating maps using Markov chains. This method learns statistical patterns from human-authored maps, which are assumed to be of high quality. Our method then uses those learned patterns to generate new maps. We present a collection of strategies both for training the Markov chains, and for generating maps from such Markov chains. We then validate our approach using the game Super Mario Bros., by evaluating the quality of the produced maps based on different configurations for training and generation.",
"title": ""
},
{
"docid": "7f711c94920e0bfa8917ad1b5875813c",
"text": "With the increasing acceptance of Network Function Virtualization (NFV) and Software Defined Networking (SDN) technologies, a radical transformation is currently occurring inside network providers infrastructures. The trend of Software-based networks foreseen with the 5th Generation of Mobile Network (5G) is drastically changing requirements in terms of how networks are deployed and managed. One of the major changes requires the transaction towards a distributed infrastructure, in which nodes are built with standard commodity hardware. This rapid deployment of datacenters is paving the way towards a different type of environment in which the computational resources are deployed up to the edge of the network, referred to as Multi-access Edge Computing (MEC) nodes. However, MEC nodes do not usually provide enough resources for executing standard virtualization technologies typically used in large datacenters. For this reason, software containerization represents a lightweight and viable virtualization alternative for such scenarios. This paper presents an architecture based on the Open Baton Management and Orchestration (MANO) framework combining different infrastructural technologies supporting the deployment of container-based network services even at the edge of the network.",
"title": ""
},
{
"docid": "ba39b85859548caa2d3f1d51a7763482",
"text": "A new antenna structure of internal LTE/WWAN laptop computer antenna formed by a coupled-fed loop antenna connected with two branch radiators is presented. The two branch radiators consist of one longer strip and one shorter strip, both contributing multi-resonant modes to enhance the bandwidth of the antenna. The antenna's lower band is formed by a dual-resonant mode mainly contributed by the longer branch strip, while the upper band is formed by three resonant modes contributed respectively by one higher-order resonant mode of the longer branch strip, one resonant mode of the coupled-fed loop antenna alone, and one resonant mode of the shorter branch strip. The antenna's lower and upper bands can therefore cover the desired 698~960 and 1710~2690 MHz bands, respectively. The proposed antenna is suitable to be mounted at the top shielding metal wall of the display ground of the laptop computer and occupies a small volume of 4 × 10 × 75 mm3 above the top shielding metal wall, which makes it promising to be embedded inside the casing of the laptop computer as an internal antenna.",
"title": ""
},
{
"docid": "93a39df6ee080e359f50af46d02cdb71",
"text": "Mobile edge computing (MEC) providing information technology and cloud-computing capabilities within the radio access network is an emerging technique in fifth-generation networks. MEC can extend the computational capacity of smart mobile devices (SMDs) and economize SMDs’ energy consumption by migrating the computation-intensive task to the MEC server. In this paper, we consider a multi-mobile-users MEC system, where multiple SMDs ask for computation offloading to a MEC server. In order to minimize the energy consumption on SMDs, we jointly optimize the offloading selection, radio resource allocation, and computational resource allocation coordinately. We formulate the energy consumption minimization problem as a mixed interger nonlinear programming (MINLP) problem, which is subject to specific application latency constraints. In order to solve the problem, we propose a reformulation-linearization-technique-based Branch-and-Bound (RLTBB) method, which can obtain the optimal result or a suboptimal result by setting the solving accuracy. Considering the complexity of RTLBB cannot be guaranteed, we further design a Gini coefficient-based greedy heuristic (GCGH) to solve the MINLP problem in polynomial complexity by degrading the MINLP problem into the convex problem. Many simulation results demonstrate the energy saving enhancements of RLTBB and GCGH.",
"title": ""
},
{
"docid": "a1fed0bcce198ad333b45bfc5e0efa12",
"text": "Contemporary games are making significant strides towards offering complex, immersive experiences for players. We can now explore sprawling 3D virtual environments populated by beautifully rendered characters and objects with autonomous behavior, engage in highly visceral action-oriented experiences offering a variety of missions with multiple solutions, and interact in ever-expanding online worlds teeming with physically customizable player avatars.",
"title": ""
},
{
"docid": "fa62c54cf22c7d0822c7a4171a3d8bcd",
"text": "Interaction with robot systems for specification of manufacturing tasks and motions needs to be simple, to enable wide-spread use of robots in SMEs. In the best case, existing practices from manual work could be used, to smoothly let current employees start using robot technology as a natural part of their work. Our aim is to simplify the robot programming task by allowing the user to simply make technical drawings on a sheet of paper. Craftsman use paper and raw sketches for several situations; to share ideas, to get a better imagination or to remember the customer situation. Currently these sketches have either to be interpreted by the worker when producing the final product by hand, or transferred into CAD file using an according tool. The former means that no automation is included, the latter means extra work and much experience in using the CAD tool. Our approach is to use the digital pen and paper from Anoto as input devices for SME robotic tasks, thereby creating simpler and more user friendly alternatives for programming, parameterization and commanding actions. To this end, the basic technology has been investigated and fully working prototypes have been developed to explore the possibilities and limitation in the context of typical SME applications. Based on the encouraging experimental results, we believe that drawings on digital paper will, among other means of human-robot interaction, play an important role in manufacturing SMEs in the future. Index Terms — CAD, Human machine interfaces, Industrial Robots, Robot programming.",
"title": ""
},
{
"docid": "6f679c5678f1cc5fed0af517005cb6f5",
"text": "In today's world of globalization, there is a serious need of incorporating semantics in Education Domain which is very significant with an ultimate goal of providing an efficient, adaptive and personalized learning environment. An attempt towards this goal has been made to develop an Education based Ontology with some capability to describe a semantic web based sharable knowledge. So as a contribution, this paper presents a revisit towards amalgamating Semantics in Education. In this direction, an effort has been made to construct an Education based Ontology using Protege 5.2.0, where a hierarchy of classes and subclasses have been defined along with their properties, relations, and instances. Finally, at the end of this paper an implementation is also presented involving query retrieval using DLquery illustrations.",
"title": ""
},
{
"docid": "f5ce4a13a8d081243151e0b3f0362713",
"text": "Despite the growing popularity of digital imaging devices, the problem of accurately estimating the spatial frequency response or optical transfer function (OTF) of these devices has been largely neglected. Traditional methods for estimating OTFs were designed for film cameras and other devices that form continuous images. These traditional techniques do not provide accurate OTF estimates for typical digital image acquisition devices because they do not account for the fixed sampling grids of digital devices . This paper describes a simple method for accurately estimating the OTF of a digital image acquisition device. The method extends the traditional knife-edge technique''3 to account for sampling. One of the principal motivations for digital imaging systems is the utility of digital image processing algorithms, many of which require an estimate of the OTF. Algorithms for enhancement, spatial registration, geometric transformations, and other purposes involve restoration—removing the effects of the image acquisition device. Nearly all restoration algorithms (e.g., the",
"title": ""
}
] | scidocsrr |
750486024ecb735d65d683309e9f7933 | Accurate Scene Text Recognition Based on Recurrent Neural Network | [
{
"docid": "26fc8289a213c51b43777fc909eaeb7e",
"text": "This paper tackles the problem of recognizing characters in images of natural scenes. In particular, we focus on recognizing characters in situations that would traditionally not be handled well by OCR techniques. We present an annotated database of images containing English and Kannada characters. The database comprises of images of street scenes taken in Bangalore, India using a standard camera. The problem is addressed in an object cateogorization framework based on a bag-of-visual-words representation. We assess the performance of various features based on nearest neighbour and SVM classification. It is demonstrated that the performance of the proposed method, using as few as 15 training images, can be far superior to that of commercial OCR systems. Furthermore, the method can benefit from synthetically generated training data obviating the need for expensive data collection and annotation.",
"title": ""
}
] | [
{
"docid": "2511ab81d229e87d14dfa7511f186cad",
"text": "THIS ARTICLE MAY BE REPRODUCED OR TRANSMITTED IN ANY FORM WITHOUT WRITTEN PERMISSION FROM THE PUBLISHER. 45 One of the most popular self-help books in the United States today is The Seven Habits of Highly Effective People by Stephen R. Covey.1 In this best seller, Covey presents a principle-centered approach for solving personal and professional problems. One of his admonitions is to “begin with the end in mind.” According to Covey, “To begin with the end in mind means to start with a clear understanding of your destination. It means to know where you’re going so that you better understand where you are now and so that the steps that you take are always in the right direction.” Covey’s approach to life in general can be directly applied to routine orthodontic treatment with fixed appliances, in that the clinician should have a clear understanding of the sequence of events that will lead to an excellent clinical result. A personal examination of 30 years of transfer cases seen in private practice, however, reveals that not all clinicians share the same vision as to the sequence of events that should occur, even in relatively routine treatments. Nor do all clinicians prepare a patient for fixed appliance therapy in the same manner, as is evidenced by the wide variation observed in band and bond positions. This variation in bracket position occurs so frequently that, when accepting a transfer patient, many clinicians simply remove the existing appliances and replace them not only with their own specific prescription, but also place the brackets in position according to their own preference. Radical changes in treatment plan often occur as well. Most times, the details of appliance manipulation are as important as the original diagnosis and treatment plan in achieving an excellent orthodontic result. This article discusses many of those details.",
"title": ""
},
{
"docid": "83d0dc6c2ad117cabbd7cd80463dbe43",
"text": "Internet addiction is a new and often unrecognized clinical disorder that can cause relational, occupational, and social problems. Pathological gambling is compared to problematic internet use because of overlapping diagnostic criteria. As computers are used with great frequency, detection and diagnosis of internet addiction is often difficult. Symptoms of a possible problem may be masked by legitimate use of the internet. Clinicians may overlook asking questions about computer use. To help clinicians identify internet addiction in practice, this paper provides an overview of the problem and the various subtypes that have been identified. The paper reviews conceptualizations of internet addiction, various forms that the disorder takes, and treatment considerations for working with this emergent client population.",
"title": ""
},
{
"docid": "79101132835328557d91b123d99e3526",
"text": "We present a transmit subaperturing (TS) approach for multiple-input multiple-output (MIMO) radars with co-located antennas. The proposed scheme divides the transmit array elements into multiple groups, each group forms a directional beam and modulates a distinct waveform, and all beams are steerable and point to the same direction. The resulting system is referred to as a TS-MIMO radar. A TS-MIMO radar is a tunable system that offers a continuum of operating modes from the phased-array radar, which achieves the maximum directional gain but the least interference rejection ability, to the omnidirectional transmission based MIMO radar, which can handle the largest number of interference sources but offers no directional gain. Tuning of the TS-MIMO system can be easily made by changing the configuration of the transmit subapertures, which provides a direct tradeoff between the directional gain and interference rejection power of the system. The performance of the TS-MIMO radar is examined in terms of the output signal-to-interference-plus-noise ratio (SINR) of an adaptive beamformer in an interference and training limited environment, where we show analytically how the output SINR is affected by several key design parameters, including the size/number of the subapertures and the number of training signals. Our results are verified by computer simulation and comparisons are made among various operating modes of the proposed TS-MIMO system.",
"title": ""
},
{
"docid": "387c2b51fcac3c4f822ae337cf2d3f8d",
"text": "This paper directly follows and extends, where a novel method for measurement of extreme impedances is described theoretically. In this paper experiments proving that the method can significantly improve stability of a measurement system are described. Using Agilent PNA E8364A vector network analyzer (VNA) the method is able to measure reflection coefficient with stability improved 36-times in magnitude and 354-times in phase compared to the classical method of reflection coefficient measurement. Further, validity of the error model and related equations stated in are verified by real measurement of SMD resistors (size 0603) in microwave test fixture. Values of the measured SMD resistors range from 12 kOmega up to 330 kOmega. A novel calibration technique using three different resistors as calibration standards is used. The measured values of impedances reasonably agree with assumed values.",
"title": ""
},
{
"docid": "2974d042acbf8b7cfa5772aa6c27c5da",
"text": "Physical Unclonable Functions (PUFs) are cryptographic primitives that can be used to generate volatile secret keys for cryptographic operations and enable low-cost authentication of integrated circuits. Existing PUF designs mainly exploit variation effects on silicon and hence are not readily applicable for the authentication of printed circuit boards (PCBs). To tackle the above problem, in this paper, we propose a novel PUF device that is able to generate unique and stable IDs for individual PCB, namely BoardPUF. To be specific, we embed a number of capacitors in the internal layer of PCBs and utilize their variations for key generation. Then, by integrating a cryptographic primitive (e.g. hash function) into BoardPUF, we can effectively perform PCB authentication in a challenge-response manner. Our experimental results on fabricated boards demonstrate the efficacy of BoardPUF.",
"title": ""
},
{
"docid": "bffc44d02edaa8a699c698185e143d22",
"text": "Photoplethysmography (PPG) technology has been used to develop small, wearable, pulse rate sensors. These devices, consisting of infrared light-emitting diodes (LEDs) and photodetectors, offer a simple, reliable, low-cost means of monitoring the pulse rate noninvasively. Recent advances in optical technology have facilitated the use of high-intensity green LEDs for PPG, increasing the adoption of this measurement technique. In this review, we briefly present the history of PPG and recent developments in wearable pulse rate sensors with green LEDs. The application of wearable pulse rate monitors is discussed.",
"title": ""
},
{
"docid": "2ec9ac2c283fa0458eb97d1e359ec358",
"text": "Multiple automakers have in development or in production automated driving systems (ADS) that offer freeway-pilot functions. This type of ADS is typically limited to restricted-access freeways only, that is, the transition from manual to automated modes takes place only after the ramp merging process is completed manually. One major challenge to extend the automation to ramp merging is that the automated vehicle needs to incorporate and optimize long-term objectives (e.g. successful and smooth merge) when near-term actions must be safely executed. Moreover, the merging process involves interactions with other vehicles whose behaviors are sometimes hard to predict but may influence the merging vehicle's optimal actions. To tackle such a complicated control problem, we propose to apply Deep Reinforcement Learning (DRL) techniques for finding an optimal driving policy by maximizing the long-term reward in an interactive environment. Specifically, we apply a Long Short-Term Memory (LSTM) architecture to model the interactive environment, from which an internal state containing historical driving information is conveyed to a Deep Q-Network (DQN). The DQN is used to approximate the Q-function, which takes the internal state as input and generates Q-values as output for action selection. With this DRL architecture, the historical impact of interactive environment on the long-term reward can be captured and taken into account for deciding the optimal control policy. The proposed architecture has the potential to be extended and applied to other autonomous driving scenarios such as driving through a complex intersection or changing lanes under varying traffic flow conditions.",
"title": ""
},
{
"docid": "97dd8b1630b574797ca2847e6b3dfc0c",
"text": "We propose a novel discipline for programming stream functions and for the semantic description of stream manipulation languages based on the observation that both general and causal stream functions can be characterized as coKleisli arrows of comonads. This seems to be a promising application for the old, but very little exploited idea that if monads abstract notions of computation of a value, comonads ought to be useable as an abstraction of notions of value in a context. We also show that causal partial-stream functions can be described in terms of a combination of a comonad and a monad.",
"title": ""
},
{
"docid": "5a69b2301b95976ee29138092fc3bb1a",
"text": "We present a new open source, extensible and flexible software platform for Bayesian evolutionary analysis called BEAST 2. This software platform is a re-design of the popular BEAST 1 platform to correct structural deficiencies that became evident as the BEAST 1 software evolved. Key among those deficiencies was the lack of post-deployment extensibility. BEAST 2 now has a fully developed package management system that allows third party developers to write additional functionality that can be directly installed to the BEAST 2 analysis platform via a package manager without requiring a new software release of the platform. This package architecture is showcased with a number of recently published new models encompassing birth-death-sampling tree priors, phylodynamics and model averaging for substitution models and site partitioning. A second major improvement is the ability to read/write the entire state of the MCMC chain to/from disk allowing it to be easily shared between multiple instances of the BEAST software. This facilitates checkpointing and better support for multi-processor and high-end computing extensions. Finally, the functionality in new packages can be easily added to the user interface (BEAUti 2) by a simple XML template-based mechanism because BEAST 2 has been re-designed to provide greater integration between the analysis engine and the user interface so that, for example BEAST and BEAUti use exactly the same XML file format.",
"title": ""
},
{
"docid": "b114ebfd30146d8fcb175db42b5e898e",
"text": "Smartphones are becoming more preferred companions to users than desktops or notebooks. Knowing that smartphones are most popular with users at the age around 26, using smartphones to speed up the process of taking attendance by university instructors would save lecturing time and hence enhance the educational process. This paper proposes a system that is based on a QR code, which is being displayed for students during or at the beginning of each lecture. The students will need to scan the code in order to confirm their attendance. The paper explains the high level implementation details of the proposed system. It also discusses how the system verifies student identity to eliminate false registrations. Keywords—Mobile Computing; Attendance System; Educational System; GPS",
"title": ""
},
{
"docid": "3db831270f7b40b73fa98e0f404d735b",
"text": "In this paper, we show that personalized levels can be automatically generated for platform games. We build on previous work, where models were derived that predicted player experience based on features of level design and on playing styles. These models are constructed using preference learning, based on questionnaires administered to players after playing different levels. The contributions of the current paper are (1) more accurate models based on a much larger data set; (2) a mechanism for adapting level design parameters to given players and playing style; (3) evaluation of this adaptation mechanism using both algorithmic and human players. The results indicate that the adaptation mechanism effectively optimizes level design parameters for particular players.",
"title": ""
},
{
"docid": "7b5eacf2e826e4b7a68395d9c7421463",
"text": "How does gesturing help children learn? Gesturing might encourage children to extract meaning implicit in their hand movements. If so, children should be sensitive to the particular movements they produce and learn accordingly. Alternatively, all that may matter is that children move their hands. If so, they should learn regardless of which movements they produce. To investigate these alternatives, we manipulated gesturing during a math lesson. We found that children required to produce correct gestures learned more than children required to produce partially correct gestures, who learned more than children required to produce no gestures. This effect was mediated by whether children took information conveyed solely in their gestures and added it to their speech. The findings suggest that body movements are involved not only in processing old ideas, but also in creating new ones. We may be able to lay foundations for new knowledge simply by telling learners how to move their hands.",
"title": ""
},
{
"docid": "eb8d1663cf6117d76a6b61de38b55797",
"text": "Many security experts would agree that, had it not been for mobile configurations, the synthesis of online algorithms might never have occurred. In fact, few computational biologists would disagree with the evaluation of von Neumann machines. We construct a peer-to-peer tool for harnessing Smalltalk, which we call TalmaAment.",
"title": ""
},
{
"docid": "3a55674e92d3b8dd38eaa5058aed3425",
"text": "OBJECTIVE\nThe objective of the present systematic review was to analyze the potential effect of incorporation of cantilever extensions on the survival rate of implant-supported fixed partial dental prostheses (FPDPs) and the incidence of technical and biological complications, as reported in longitudinal studies with at least 5 years of follow-up.\n\n\nMETHODS\nA MEDLINE search was conducted up to and including November 2008 for longitudinal studies with a mean follow-up period of at least 5 years. Two reviewers performed screening and data abstraction independently. Prosthesis-based data on survival/failure rate, technical complications (prosthesis-related problems, implant loss) and biological complications (marginal bone loss) were analyzed.\n\n\nRESULTS\nThe search provided 103 titles with abstract. Full-text analysis was performed of 12 articles, out of which three were finally included. Two of the studies had a prospective or retrospective case-control design, whereas the third was a prospective cohort study. The 5-year survival rate of cantilever FPDPs varied between 89.9% and 92.7% (weighted mean 91.9%), with implant fracture as the main cause for failures. The corresponding survival rate for FPDPs without cantilever extensions was 96.3-96.2% (weighted mean 95.8%). Technical complications related to the supra-constructions in the three included studies were reported to occur at a frequency of 13-26% (weighted mean 20.3%) for cantilever FPDPs compared with 0-12% (9.7%) for non-cantilever FPDPs. The most common complications were minor porcelain fractures and bridge-screw loosening. For cantilever FPDPs, the 5-year event-free survival rate varied between 66.7% and 79.2% (weighted mean 71.7%) and between 83.1% and 96.3% (weighted mean 85.9%) for non-cantilever FPDPs. No statistically significant differences were reported with regard to peri-implant bone-level change between the two prosthetic groups, either at the prosthesis or at the implant level.\n\n\nCONCLUSION\nData on implant-supported FPDPs with cantilever extensions are limited and therefore survival and complication rates should be interpreted with caution. The incorporation of cantilevers into implant-borne prostheses may be associated with a higher incidence of minor technical complications.",
"title": ""
},
{
"docid": "d1eb1b18105d79c44dc1b6b3b2c06ee2",
"text": "An implementation of high speed AES algorithm based on FPGA is presented in this paper in order to improve the safety of data in transmission. The mathematic principle, encryption process and logic structure of AES algorithm are introduced. So as to reach the porpose of improving the system computing speed, the pipelining and papallel processing methods were used. The simulation results show that the high-speed AES encryption algorithm implemented correctly. Using the method of AES encryption the data could be protected effectively.",
"title": ""
},
{
"docid": "759bf80a33903899cb7f684aa277eddd",
"text": "Effective patient similarity assessment is important for clinical decision support. It enables the capture of past experience as manifested in the collective longitudinal medical records of patients to help clinicians assess the likely outcomes resulting from their decisions and actions. However, it is challenging to devise a patient similarity metric that is clinically relevant and semantically sound. Patient similarity is highly context sensitive: it depends on factors such as the disease, the particular stage of the disease, and co-morbidities. One way to discern the semantics in a particular context is to take advantage of physicians’ expert knowledge as reflected in labels assigned to some patients. In this paper we present a method that leverages localized supervised metric learning to effectively incorporate such expert knowledge to arrive at semantically sound patient similarity measures. Experiments using data obtained from the MIMIC II database demonstrate the effectiveness of this approach.",
"title": ""
},
{
"docid": "17797efad4f13f961ed300316eb16b6b",
"text": "Cellular senescence, which has been linked to age-related diseases, occurs during normal aging or as a result of pathological cell stress. Due to their incapacity to proliferate, senescent cells cannot contribute to normal tissue maintenance and tissue repair. Instead, senescent cells disturb the microenvironment by secreting a plethora of bioactive factors that may lead to inflammation, regenerative dysfunction and tumor progression. Recent understanding of stimuli and pathways that induce and maintain cellular senescence offers the possibility to selectively eliminate senescent cells. This novel strategy, which so far has not been tested in humans, has been coined senotherapy or senolysis. In mice, senotherapy proofed to be effective in models of accelerated aging and also during normal chronological aging. Senotherapy prolonged lifespan, rejuvenated the function of bone marrow, muscle and skin progenitor cells, improved vasomotor function and slowed down atherosclerosis progression. While initial studies used genetic approaches for the killing of senescent cells, recent approaches showed similar effects with senolytic drugs. These observations open up exciting possibilities with a great potential for clinical development. However, before the integration of senotherapy into patient care can be considered, we need further research to improve our insight into the safety and efficacy of this strategy during short- and long-term use.",
"title": ""
},
{
"docid": "724d2443f884aa50abe3837704d22799",
"text": "Multi-layer models with multiple attention heads per layer provide superior translation quality compared to simpler and shallower models, but determining what source context is most relevant to each target word is more challenging as a result. Therefore, deriving high-accuracy word alignments from the activations of a state-of-the-art neural machine translation model is an open challenge. We propose a simple model extension to the Transformer architecture that makes use of its hidden representations and is restricted to attend solely on encoder information to predict the next word. It can be trained on bilingual data without word-alignment information. We further introduce a novel alignment inference procedure which applies stochastic gradient descent to directly optimize the attention activations towards a given target word. The resulting alignments dramatically outperform the naı̈ve approach to interpreting Transformer attention activations, and are comparable to Giza++ on two publicly available data sets.",
"title": ""
},
{
"docid": "7d860b431f44d42572fc0787bf452575",
"text": "Time-of-flight (TOF) measurement capability promises to improve PET image quality. We characterized the physical and clinical PET performance of the first Biograph mCT TOF PET/CT scanner (Siemens Medical Solutions USA, Inc.) in comparison with its predecessor, the Biograph TruePoint TrueV. In particular, we defined the improvements with TOF. The physical performance was evaluated according to the National Electrical Manufacturers Association (NEMA) NU 2-2007 standard with additional measurements to specifically address the TOF capability. Patient data were analyzed to obtain the clinical performance of the scanner. As expected for the same size crystal detectors, a similar spatial resolution was measured on the mCT as on the TruePoint TrueV. The mCT demonstrated modestly higher sensitivity (increase by 19.7 ± 2.8%) and peak noise equivalent count rate (NECR) (increase by 15.5 ± 5.7%) with similar scatter fractions. The energy, time and spatial resolutions for a varying single count rate of up to 55 Mcps resulted in 11.5 ± 0.2% (FWHM), 527.5 ± 4.9 ps (FWHM) and 4.1 ± 0.0 mm (FWHM), respectively. With the addition of TOF, the mCT also produced substantially higher image contrast recovery and signal-to-noise ratios in a clinically-relevant phantom geometry. The benefits of TOF were clearly demonstrated in representative patient images.",
"title": ""
},
{
"docid": "4635cba50b2ebdacf9787b90ec76b06f",
"text": "The Berkeley FrameNet Project1 has been engaged since 1997 in discovering and describing the semantic and distributional properties of words in the general vocabulary of English.2 Notions from FRAME SEMANTICS (see Fillmore and Baker 2009 and references therein) provide the basis of the semantic description of the lexical units in the database, and sentences extracted from the FrameNet (FN) text corpora3 serve as material for analysis and annotation. The goal is to describe the combinatorial properties of each word, both semantically and syntactically, as these properties are revealed in the corpora. The present chapter reviews some of the principles and displays some of the results of FrameNet’s lexical research but focuses on an appended project for recognizing and cataloguing GRAMMATICAL CONSTRUCTIONS in English. The registry of English constructions to be created by this secondary project—the Constructicon—will describe the grammatical characteristics and semantic import of each construction, and will link to each a collection of sample sentences that have been annotated to exhibit these characteristics, using tools developed for the earlier lexical work. While building a Constructicon has different goals from those of designing a construction-based grammar of the language, the intention is that each construction will be represented in a way compatible with the development of full grammar of the language of the sort presented elsewhere in this volume (see especially Sag this volume). In some cases we offer precise proposals for the treatment of a construction as it would appear in the grammar; in other cases the descriptions we present should be seen at least as organized observations about individual constructions, observations that need to be accounted for in a future complete grammar. In all cases we expect that the constructicon will contain useful information for advanced language pedagogy and that it will suggest new levels of expectation for parsing and other NLP activities. The main body of the finished Constructicon will display the properties of constructional phenomena in an abbreviated format, alongside of a representative sample of English sentences annotated to display the properties claimed for each construction. The annotation procedure follows a method of identifying and labeling phrases originally developed for FN’s lexicographic activities but adapted to indicate (1) the stretches of language that count as instances of given constructions (e.g. the phrase bracketed in { } in They",
"title": ""
}
] | scidocsrr |
9711dfa77aaad4d6223d8ab145ad4f7f | Antenna-in-Package and Transmit–Receive Switch for Single-Chip Radio Transceivers of Differential Architecture | [
{
"docid": "6d70ac4457983c7df8896a9d31728015",
"text": "This brief presents a differential transmit-receive (T/R) switch integrated in a 0.18-mum standard CMOS technology for wireless applications up to 6 GHz. This switch design employs fully differential architecture to accommodate the design challenge of differential transceivers and improve the linearity performance. It exhibits less than 2-dB insertion loss, higher than 15-dB isolation, in a 60 mumtimes40 mum area. 15-dBm power at 1-dB compression point (P1dB) is achieved without using additional techniques to enhance the linearity. This switch is suitable for differential transceiver front-ends with a moderate power level. To the best of the authors' knowledge, this is the first reported differential T/R switch in CMOS for multistandard and wideband wireless applications",
"title": ""
},
{
"docid": "277919545c003c0c2a266ace0d70de03",
"text": "Two single-pole, double-throw transmit/receive switches were designed and fabricated with different substrate resistances using a 0.18-/spl mu/m p/sup $/substrate CMOS process. The switch with low substrate resistances exhibits 0.8-dB insertion loss and 17-dBm P/sub 1dB/ at 5.825 GHz, whereas the switch with high substrate resistances has 1-dB insertion loss and 18-dBm P/sub 1dB/. These results suggest that the optimal insertion loss can be achieved with low substrate resistances and 5.8-GHz T/R switches with excellent insertion loss and reasonable power handling capability can be implemented in a 0.18-/spl mu/m CMOS process.",
"title": ""
}
] | [
{
"docid": "1fd0f4fd2d63ef3a71f8c56ce6a25fb5",
"text": "A new ‘growing’ maximum likelihood classification algorithm for small reservoir delineation has been developed and is tested with Radarsat-2 data for reservoirs in the semi-arid Upper East Region, Ghana. The delineation algorithm is able to find the land-water boundary from SAR imagery for different weather and environmental conditions. As such, the algorithm allows for remote sensed operational monitoring of small reservoirs.",
"title": ""
},
{
"docid": "005c1b8d6ca23a4ba2d315d2e541dba7",
"text": "This paper proposes a satellite receiver filter design using FIR digital filtering technique. We present various design methods like windowing, least squares and equiripple for satellite burst demodulator application and compare their performance. Various designs of FIR filter are compared from the view point of hardware complexity, frequency response characteristics and implementation strategies. The filter is designed for band pass of the frequency range of 100 MHz to 500 MHz suitable for the entire bandwidth of satellite transponder. The burst mode detector requires narrow passband to increase SNR for preamble portion. When acquisition phase is complete, the bandpass is increased to full bandwidth of the signal.",
"title": ""
},
{
"docid": "5318baa10a6db98a0f31c6c30fdf6104",
"text": "In image analysis, the images are often represented by multiple visual features (also known as multiview features), that aim to better interpret them for achieving remarkable performance of the learning. Since the processes of feature extraction on each view are separated, the multiple visual features of images may include overlap, noise, and redundancy. Thus, learning with all the derived views of the data could decrease the effectiveness. To address this, this paper simultaneously conducts a hierarchical feature selection and a multiview multilabel (MVML) learning for multiview image classification, via embedding a proposed a new block-row regularizer into the MVML framework. The block-row regularizer concatenating a Frobenius norm (F-norm) regularizer and an l2,1-norm regularizer is designed to conduct a hierarchical feature selection, in which the F-norm regularizer is used to conduct a high-level feature selection for selecting the informative views (i.e., discarding the uninformative views) and the 12,1-norm regularizer is then used to conduct a low-level feature selection on the informative views. The rationale of the use of a block-row regularizer is to avoid the issue of the over-fitting (via the block-row regularizer), to remove redundant views and to preserve the natural group structures of data (via the F-norm regularizer), and to remove noisy features (the 12,1-norm regularizer), respectively. We further devise a computationally efficient algorithm to optimize the derived objective function and also theoretically prove the convergence of the proposed optimization method. Finally, the results on real image datasets show that the proposed method outperforms two baseline algorithms and three state-of-the-art algorithms in terms of classification performance.",
"title": ""
},
{
"docid": "9c97262605b3505bbc33c64ff64cfcd5",
"text": "This essay focuses on possible nonhuman applications of CRISPR/Cas9 that are likely to be widely overlooked because they are unexpected and, in some cases, perhaps even \"frivolous.\" We look at five uses for \"CRISPR Critters\": wild de-extinction, domestic de-extinction, personal whim, art, and novel forms of disease prevention. We then discuss the current regulatory framework and its possible limitations in those contexts. We end with questions about some deeper issues raised by the increased human control over life on earth offered by genome editing.",
"title": ""
},
{
"docid": "c79be5b8b375a9bced1bfe5c3f9024ce",
"text": "Recent technological advances have enabled DNA methylation to be assayed at single-cell resolution. However, current protocols are limited by incomplete CpG coverage and hence methods to predict missing methylation states are critical to enable genome-wide analyses. We report DeepCpG, a computational approach based on deep neural networks to predict methylation states in single cells. We evaluate DeepCpG on single-cell methylation data from five cell types generated using alternative sequencing protocols. DeepCpG yields substantially more accurate predictions than previous methods. Additionally, we show that the model parameters can be interpreted, thereby providing insights into how sequence composition affects methylation variability.",
"title": ""
},
{
"docid": "1d72e3bbc8106a8f360c05bd0a638f0d",
"text": "Advancements in computer vision, natural language processing and deep learning techniques have resulted in the creation of intelligent systems that have achieved impressive results in the visually grounded tasks such as image captioning and visual question answering (VQA). VQA is a task that can be used to evaluate a system's capacity to understand an image. It requires an intelligent agent to answer a natural language question about an image. The agent must ground the question into the image and return a natural language answer. One of the latest techniques proposed to tackle this task is the attention mechanism. It allows the agent to focus on specific parts of the input in order to answer the question. In this paper we propose a novel long short-term memory (LSTM) architecture that uses dual attention to focus on specific question words and parts of the input image in order to generate the answer. We evaluate our solution on the recently proposed Visual 7W dataset and show that it performs better than state of the art. Additionally, we propose two new question types for this dataset in order to improve model evaluation. We also make a qualitative analysis of the results and show the strength and weakness of our agent.",
"title": ""
},
{
"docid": "38a4b3c515ee4285aa88418b30937c62",
"text": "Docker containers have recently become a popular approach to provision multiple applications over shared physical hosts in a more lightweight fashion than traditional virtual machines. This popularity has led to the creation of the Docker Hub registry, which distributes a large number of official and community images. In this paper, we study the state of security vulnerabilities in Docker Hub images. We create a scalable Docker image vulnerability analysis (DIVA) framework that automatically discovers, downloads, and analyzes both official and community images on Docker Hub. Using our framework, we have studied 356,218 images and made the following findings: (1) both official and community images contain more than 180 vulnerabilities on average when considering all versions; (2) many images have not been updated for hundreds of days; and (3) vulnerabilities commonly propagate from parent images to child images. These findings demonstrate a strong need for more automated and systematic methods of applying security updates to Docker images and our current Docker image analysis framework provides a good foundation for such automatic security update.",
"title": ""
},
{
"docid": "7c1fafba892be56bb81a59df996bd95f",
"text": "Cowper's gland syringocele is an uncommon, underdiagnosed cystic dilatation of Cowper's gland ducts showing various radiological patterns. Herein we report a rare case of giant Cowper's gland syringocele in an adult male patient, with description of MRI findings and management outcome.",
"title": ""
},
{
"docid": "109c5caa55d785f9f186958f58746882",
"text": "Apriori and Eclat are the best-known basic algorithms for mining frequent item sets in a set of transactions. In this paper I describe implementations of these two algorithms that use several optimizations to achieve maximum performance, w.r.t. both execution time and memory usage. The Apriori implementation is based on a prefix tree representation of the needed counters and uses a doubly recursive scheme to count the transactions. The Eclat implementation uses (sparse) bit matrices to represent transactions lists and to filter closed and maximal item sets.",
"title": ""
},
{
"docid": "5f330c46df15da0b0d932590a1a773a9",
"text": "During the past decade, the alexithymia construct has undergone theoretical refinement and empirical testing and has evolved into a potential new paradigm for understanding the influence of emotions and personality on physical illness and health. Like the traditional psychosomatic medicine paradigm, the alexithymia construct links susceptibility to disease with prolonged states of emotional arousal. But whereas the traditional paradigm emphasizes intrapsychic conflicts that are presumed to generate such emotional states, the alexithymia construct focuses attention on deficits in the cognitive processing of emotions, which remain undifferentiated and poorly regulated. This paper reviews the development and validation of the construct and discusses its clinical implications for psychosomatic medicine.",
"title": ""
},
{
"docid": "5c9ba6384b6983a26212e8161e502484",
"text": "The field of medical diagnostics contains a wealth of challenges which closely resemble classical machine learning problems; practical constraints, however, complicate the translation of these endpoints naively into classical architectures. Many tasks in radiology, for example, are largely problems of multi-label classification wherein medical images are interpreted to indicate multiple present or suspected pathologies. Clinical settings drive the necessity for high accuracy simultaneously across a multitude of pathological outcomes and greatly limit the utility of tools which consider only a subset. This issue is exacerbated by a general scarcity of training data and maximizes the need to extract clinically relevant features from available samples – ideally without the use of pre-trained models which may carry forward undesirable biases from tangentially related tasks. We present and evaluate a partial solution to these constraints in using LSTMs to leverage interdependencies among target labels in predicting 14 pathologic patterns from chest x-rays and establish state of the art results on the largest publicly available chest x-ray dataset from the NIH without pre-training. Furthermore, we propose and discuss alternative evaluation metrics and their relevance in clinical practice.",
"title": ""
},
{
"docid": "1685d4c4a61bcb7f9f61d1e0d9fd1241",
"text": "A review of recent research addressed two questions: how common are problems of substance abuse in traumatic brain injury (TBI), and to what extent does alcohol and other drug use mediate outcome? Studies showed alcohol intoxication present in one third to one half of hospitalizations; data for other drug intoxication were not available. Nearly two thirds of rehabilitation patients may have a history of substance abuse that preceded their injuries. Intoxication was related to acute complications, longer hospital stays, and poorer discharge status; however, these relationships may have been caused by colinearity with history. History of substance abuse showed the same morbidity, and was further associated with higher mortality rates, poorer neuropsychological outcome, and greater likelihood of repeat injuries and late deterioration. The effect of history may be caused by subgroups with more severe substance abuse problems. Implications for rehabilitation are discussed, including the potential negative impact of untreated substance abuse on the ability to document efficacy of rehabilitation efforts.",
"title": ""
},
{
"docid": "7398b6a56fa55098e7cf36ca3f14db48",
"text": "The World Health Organization projects that the number of people living in cities will nearly double over the next few decades, so urban centers need to provide more sustainable solutions for smart living. New technologies— such as materials, sensors, wireless communications, and controls—will be necessary to manage living environments that proactively sense behavioral and health risks and provide situationaware responses t o emergencies or disasters. In addition, utility and transportation networks must adapt to dynamic usage, tra c conditions, and user behavior with a minimal carbon footprint; a clean and renewable energy grid must actuate localized energy and power control; and pervasive security is needed to detect and prevent potential threats. This vision is bold but critical to enabling smart living. Cloud-only models face serious challenges in latency, network bandwidth, geographic focus, reliability, and security. Fog computing reduces these challenges by providing a systemlevel horizontal architecture to distribute computing, storage, control, and networking resources and services from the cloud to connected devices (“things”). Think of fog computing as the cloud on the ground: it enables latency-sensitive computing to be performed in close proximity to the things it controls. Over time, fog and cloud computing will converge into uni ed end-to-end platforms o ering integrated services and applications along the continuum from the cloud to things. Applications developed and deployed for the cloud will be able to run in fog and vice versa.",
"title": ""
},
{
"docid": "d62c50e109195f483119ebe36350ff54",
"text": "We address the problem of inferring users’ interests from microblogging sites such as Twitter, based on their utterances and interactions in the social network. Inferring user interests is important for systems such as search and recommendation engines to provide information that is more attuned to the likes of its users. In this paper, we propose a probabilistic generative model of user utterances that encapsulates both user and network information. This model captures the complex interactions between varied interests of the users, his level of activeness in the network, and the information propagation from the neighbors. As exact probabilistic inference in this model is intractable, we propose an online variational inference algorithm that also takes into account evolving social graph, user and his neighbors? interests. We prove the optimality of the online inference with respect to an equivalent batch update. We present experimental results performed on the actual Twitter users, validating our approach. We also present extensive results showing inadequacy of using Mechanical Turk platform for large scale validation.",
"title": ""
},
{
"docid": "e3863f0dc86fd194342c050df45f6e95",
"text": "This paper opened the new area the information theory. Before this paper, most people believed that the only way to make the error probability of transmission as small as desired is to reduce the data rate (such as a long repetition scheme). However, surprisingly this paper revealed that it does not need to reduce the data rate for achieving that much of small errors. It proved that we can get some positive data rate that has the same small error probability and also there is an upper bound of the data rate, which means we cannot achieve the data rate with any encoding scheme that has small enough error probability over the upper bound.",
"title": ""
},
{
"docid": "c553ea1a03550bdc684dbacbb9bef385",
"text": "NeuCoin is a decentralized peer-to-peer cryptocurrency derived from Sunny King’s Peercoin, which itself was derived from Satoshi Nakamoto’s Bitcoin. As with Peercoin, proof-of-stake replaces proof-of-work as NeuCoin’s security model, effectively replacing the operating costs of Bitcoin miners (electricity, computers) with the capital costs of holding the currency. Proof-of-stake also avoids proof-of-work’s inherent tendency towards centralization resulting from competition for coinbase rewards among miners based on lowest cost electricity and hash power. NeuCoin increases security relative to Peercoin and other existing proof-of-stake currencies in numerous ways, including: (1) incentivizing nodes to continuously stake coins over time through substantially higher mining rewards and lower minimum stake age; (2) abandoning the use of coin age in the mining formula; (3) causing the stake modifier parameter to change over time for each stake; and (4) utilizing a client that punishes nodes that attempt to mine on multiple branches with duplicate stakes. This paper demonstrates how NeuCoin’s proof-of-stake implementation addresses all commonly raised “nothing at stake” objections to generic proof-of-stake systems. It also reviews many of the flaws of proof-of-work designs to highlight the potential for an alternate cryptocurrency that solves these flaws.",
"title": ""
},
{
"docid": "d97669811124f3c6f4cef5b2a144a46c",
"text": "Relational databases are queried using database query languages such as SQL. Natural language interfaces to databases (NLIDB) are systems that translate a natural language sentence into a database query. In this modern techno-crazy world, as more and more laymen access various systems and applications through their smart phones and tablets, the need for Natural Language Interfaces (NLIs) has increased manifold. The challenges in Natural language Query processing are interpreting the sentence correctly, removal of various ambiguity and mapping to the appropriate context. Natural language access problem is actually composed of two stages Linguistic processing and Database processing. NLIDB techniques encompass a wide variety of approaches. The approaches include traditional methods such as Pattern Matching, Syntactic Parsing and Semantic Grammar to modern systems such as Intermediate Query Generation, Machine Learning and Ontologies. In this report, various approaches to build NLIDB systems have been analyzed and compared along with their advantages, disadvantages and application areas. Also, a natural language interface to a flight reservation system has been implemented comprising of flight and booking inquiry systems.",
"title": ""
},
{
"docid": "1987ba476be524db448cce1835460a33",
"text": "We report on the main features of the IJCAI’07 program, including its theme, and its schedule and organization. In particular, we discuss an effective and novel presentation format at IJCAI in which oral and poster papers were presented in the same sessions categorized by topic area.",
"title": ""
},
{
"docid": "b33e896a23f27a81f04aaeaff2f2350c",
"text": "Nowadays it has become increasingly common for family members to be distributed in different time zones. These time differences pose specific challenges for communication within the family and result in different communication practices to cope with them. To gain an understanding of current challenges and practices, we interviewed people who regularly communicate with immediate family members living in other time zones. We report primary findings from the interviews, and identify design opportunities for improving the experience of cross time zone family communication.",
"title": ""
},
{
"docid": "61ae981007d9ad7ba5499c434b17c371",
"text": "of a dissertation at the University of Miami. Dissertation supervised by Professor Mei-Ling Shyu. No. of pages in text. (160) With the proliferation of digital photo-capture devices and the development of web technologies, the era of big data has arrived, which poses challenges to process and retrieve vast amounts of data with heterogeneous and diverse dimensionality. In the field of multimedia information retrieval, traditional keyword-based approaches perform well on text data, but it can hardly adapt to image and video due to the fact that a large proportion of this data nowadays is unorganized. This means the textual descriptions of images or videos, also known as metadata, could be unavailable, incomplete or even incorrect. Therefore, Content-Based Multimedia Information Retrieval (CBMIR) has emerged, which retrieves relevant images or videos by analyzing their visual content. Various data mining techniques such as feature selection, classification, clustering and filtering, have been utilized in CBMIR to solve issues involving data imbalance, data quality and size, limited ground truth, user subjectivity, etc. However, as an intrinsic problem of CBMIR, the semantic gap between low-level visual features and high-level semantics is still difficult to conquer. Now, with the rapid popularization of social media repositories, which allows users to upload images and videos, and assign tags to describe them, it has brought new directions as well as new challenges to the area of multimedia information retrieval. As suggested by the name, multimedia is a combination of different content forms that include text, audio, images, videos, etc. A series of research studies have been conducted to take advantage of one modality to compensate the other for",
"title": ""
}
] | scidocsrr |
0c5b9acb058ce6a0f3a8c55bed479885 | Hypergraph Models and Algorithms for Data-Pattern-Based Clustering | [
{
"docid": "b7a4eec912eb32b3b50f1b19822c44a1",
"text": "Mining numerical data is a relatively difficult problem in data mining. Clustering is one of the techniques. We consider a database with numerical attributes, in which each transaction is viewed as a multi-dimensional vector. By studying the clusters formed by these vectors, we can discover certain behaviors hidden in the data. Traditional clustering algorithms find clusters in the full space of the data sets. This results in high dimensional clusters, which are poorly comprehensible to human. One important task in this setting is the ability to discover clusters embedded in the subspaces of a high-dimensional data set. This problem is known as subspace clustering. We follow the basic assumptions of previous work CLIQUE. It is found that the number of subspaces with clustering is very large, and a criterion called the coverage is proposed in CLIQUE for the pruning. In addition to coverage, we identify new useful criteria for this problem and propose an entropybased algorithm called ENCLUS to handle the criteria. Our major contributions are: (1) identify new meaningful criteria of high density and correlation of dimensions for goodness of clustering in subspaces, (2) introduce the use of entropy and provide evidence to support its use, (3) make use of two closure properties based on entropy to prune away uninteresting subspaces efficiently, (4) propose a mechanism to mine non-minimally correlated subspaces which are of interest because of strong clustering, (5) experiments are carried out to show the effectiveness of the proposed method.",
"title": ""
},
{
"docid": "1c5f53fe8d663047a3a8240742ba47e4",
"text": "Spatial data mining is the discovery of interesting relationships and characteristics that may exist implicitly in spatial databases. In this paper, we explore whether clustering methods have a role to play in spatial data mining. To this end, we develop a new clustering method called CLAHANS which is based on randomized search. We also develop two spatial data mining algorithms that use CLAHANS. Our analysis and experiments show that with the assistance of CLAHANS, these two algorithms are very effective and can lead to discoveries that are difficult to find with current spatial data mining algorithms. Furthermore, experiments conducted to compare the performance of CLAHANS with that of existing clustering methods show that CLAHANS is the most efficient.",
"title": ""
}
] | [
{
"docid": "4bbb2191088155c823bc152fce0dec89",
"text": "Image Segmentation is an important and challenging factor in the field of medical sciences. It is widely used for the detection of tumours. This paper deals with detection of brain tumour from MR images of the brain. The brain is the anterior most part of the nervous system. Tumour is a rapid uncontrolled growth of cells. Magnetic Resonance Imaging (MRI) is the device required to diagnose brain tumour. The normal MR images are not that suitable for fine analysis, so segmentation is an important process required for efficiently analyzing the tumour images. Clustering is suitable for biomedical image segmentation as it uses unsupervised learning. This paper work uses K-Means clustering where the detected tumour shows some abnormality which is then rectified by the use of morphological operators along with basic image processing techniques to meet the goal of separating the tumour cells from the normal cells.",
"title": ""
},
{
"docid": "a7db9f3f1bb5883f6a5a873dd661867b",
"text": "Psychologists and sociologists usually interpret happiness scores as cardinal and comparable across respondents, and thus run OLS regressions on happiness and changes in happiness. Economists usually assume only ordinality and have mainly used ordered latent response models, thereby not taking satisfactory account of fixed individual traits. We address this problem by developing a conditional estimator for the fixed-effect ordered logit model. We find that assuming ordinality or cardinality of happiness scores makes little difference, whilst allowing for fixed-effects does change results substantially. We call for more research into the determinants of the personality traits making up these fixed-effects.",
"title": ""
},
{
"docid": "643d75042a38c24b0e4130cb246fc543",
"text": "Silicon carbide (SiC) switching power devices (MOSFETs, JFETs) of 1200 V rating are now commercially available, and in conjunction with SiC diodes, they offer substantially reduced switching losses relative to silicon (Si) insulated gate bipolar transistors (IGBTs) paired with fast-recovery diodes. Low-voltage industrial variable-speed drives are a key application for 1200 V devices, and there is great interest in the replacement of the Si IGBTs and diodes that presently dominate in this application with SiC-based devices. However, much of the performance benefit of SiC-based devices is due to their increased switching speeds ( di/dt, dv/ dt), which raises the issues of increased electromagnetic interference (EMI) generation and detrimental effects on the reliability of inverter-fed electrical machines. In this paper, the tradeoff between switching losses and the high-frequency spectral amplitude of the device switching waveforms is quantified experimentally for all-Si, Si-SiC, and all-SiC device combinations. While exploiting the full switching-speed capability of SiC-based devices results in significantly increased EMI generation, the all-SiC combination provides a 70% reduction in switching losses relative to all-Si when operated at comparable dv/dt. It is also shown that the loss-EMI tradeoff obtained with the Si-SiC device combination can be significantly improved by driving the IGBT with a modified gate voltage profile.",
"title": ""
},
{
"docid": "d3b24655e01cbb4f5d64006222825361",
"text": "A number of leading cognitive architectures that are inspired by the human brain, at various levels of granularity, are reviewed and compared, with special attention paid to the way their internal structures and dynamics map onto neural processes. Four categories of Biologically Inspired Cognitive Architectures (BICAs) are considered, with multiple examples of each category briefly reviewed, and selected examples discussed in more depth: primarily symbolic architectures (e.g. ACT-R), emergentist architectures (e.g. DeSTIN), developmental robotics architectures (e.g. IM-CLEVER), and our central focus, hybrid architectures (e.g. LIDA, CLARION, 4D/RCS, DUAL, MicroPsi, and OpenCog). Given the state of the art in BICA, it is not yet possible to tell whether emulating the brain on the architectural level is going to be enough to allow rough emulation of brain function; and given the state of the art in neuroscience, it is not yet possible to connect BICAs with large-scale brain simulations in a thoroughgoing way. However, it is nonetheless possible to draw reasonably close function connections between various components of various BICAs and various brain regions and dynamics, and as both BICAs and brain simulations mature, these connections should become richer and may extend further into the domain of internal dynamics as well as overall behavior. & 2010 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "545998c2badee9554045c04983b1d11b",
"text": "This paper presents a new control approach for nonlinear network-induced time delay systems by combining online reset control, neural networks, and dynamic Bayesian networks. We use feedback linearization to construct a nominal control for the system then use reset control and a neural network to compensate for errors due to the time delay. Finally, we obtain a stochastic model of the Networked Control System (NCS) using a Dynamic Bayesian Network (DBN) and use it to design a predictive control. We apply our control methodology to a nonlinear inverted pendulum and evaluate its performance through numerical simulations. We also test our approach with real-time experiments on a dc motor-load NCS with wireless communication implemented using a Ubiquitous Sensor Network (USN). Both the simulation and experimental results demonstrate the efficacy of our control methodology.",
"title": ""
},
{
"docid": "b0a37782d653fa03843ecdc118a56034",
"text": "Non-frontal lip views contain useful information which can be used to enhance the performance of frontal view lipreading. However, the vast majority of recent lipreading works, including the deep learning approaches which significantly outperform traditional approaches, have focused on frontal mouth images. As a consequence, research on joint learning of visual features and speech classification from multiple views is limited. In this work, we present an end-to-end multi-view lipreading system based on Bidirectional Long-Short Memory (BLSTM) networks. To the best of our knowledge, this is the first model which simultaneously learns to extract features directly from the pixels and performs visual speech classification from multiple views and also achieves state-of-the-art performance. The model consists of multiple identical streams, one for each view, which extract features directly from different poses of mouth images. The temporal dynamics in each stream/view are modelled by a BLSTM and the fusion of multiple streams/views takes place via another BLSTM. An absolute average improvement of 3% and 3.8% over the frontal view performance is reported on the OuluVS2 database when the best two (frontal and profile) and three views (frontal, profile, 45◦) are combined, respectively. The best three-view model results in a 10.5% absolute improvement over the current multi-view state-of-the-art performance on OuluVS2, without using external databases for training, achieving a maximum classification accuracy of 96.9%.",
"title": ""
},
{
"docid": "3cb0e239ecfc9949afe89fe80a92cfd5",
"text": "The purpose of this study is to measure the impact of product perceive quality on purchase intention with level of satisfaction, for meeting this purpose the data was collected by individually through 122 questionnaires by adopting the convenience techniques. Using statistical software hypothesis shows that these variables have positive significant relationship. Practical contribution shows that this study can be used as a guideline to management and marketers to improve the product quality.",
"title": ""
},
{
"docid": "2dde173faac8d5cbb63aed8d379308fa",
"text": "Delineating infarcted tissue in ischemic stroke lesions is crucial to determine the extend of damage and optimal treatment for this life-threatening condition. However, this problem remains challenging due to high variability of ischemic strokes’ location and shape. Recently, fully-convolutional neural networks (CNN), in particular those based on U-Net [27], have led to improved performances for this task [7]. In this work, we propose a novel architecture that improves standard U-Net based methods in three important ways. First, instead of combining the available image modalities at the input, each of them is processed in a different path to better exploit their unique information. Moreover, the network is densely-connected (i.e., each layer is connected to all following layers), both within each path and across different paths, similar to HyperDenseNet [11]. This gives our model the freedom to learn the scale at which modalities should be processed and combined. Finally, inspired by the Inception architecture [32], we improve standard U-Net modules by extending inception modules with two convolutional blocks with dilated convolutions of different scale. This helps handling the variability in lesion sizes. We split the 93 stroke datasets into training and validation sets containing 83 and 9 examples respectively. Our network was trained on a NVidia TITAN XP GPU with 16 GBs RAM, using ADAM as optimizer and a learning rate of 1×10−5 during 200 epochs. Training took around 5 hours and segmentation of a whole volume took between 0.2 and 2 seconds, as average. The performance on the test set obtained by our method is compared to several baselines, to demonstrate the effectiveness of our architecture, and to a state-of-art architecture that employs factorized dilated convolutions, i.e., ERFNet [26].",
"title": ""
},
{
"docid": "a3914095f36b87d74b4c737a06eaa2a8",
"text": "In this study, the swing-up of a double inverted pendulum is controlled by nonlinear model predictive control (NMPC). The fast computation algorithm called C/GMRES (continuation/generalized minimal residual) is applied to solve a nonlinear two-point boundary value problem over a receding horizon in real time. The goal is to swing-up and stabilize two pendulums from the downward to upright position. To make the tuning process of the performance index simpler, the terminal cost in the performance index is given by a solution of the algebraic Riccati equation. The simulation results show that C/GMRES can solve the NMPC problem in real time and swingup the double inverted pendulum with a significant reduction in the computational cost compared with Newton’s method.",
"title": ""
},
{
"docid": "d5d160d536b72bd8f40d42bc609640f5",
"text": "Weight pruning has been introduced as an efficient model compression technique. Even though pruning removes significant amount of weights in a network, memory requirement reduction was limited since conventional sparse matrix formats require significant amount of memory to store index-related information. Moreover, computations associated with such sparse matrix formats are slow because sequential sparse matrix decoding process does not utilize highly parallel computing systems efficiently. As an attempt to compress index information while keeping the decoding process parallelizable, Viterbi-based pruning was suggested. Decoding non-zero weights, however, is still sequential in Viterbi-based pruning. In this paper, we propose a new sparse matrix format in order to enable a highly parallel decoding process of the entire sparse matrix. The proposed sparse matrix is constructed by combining pruning and weight quantization. For the latest RNN models on PTB and WikiText-2 corpus, LSTM parameter storage requirement is compressed 19× using the proposed sparse matrix format compared to the baseline model. Compressed weight and indices can be reconstructed into a dense matrix fast using Viterbi encoders. Simulation results show that the proposed scheme can feed parameters to processing elements 20 % to 106 % faster than the case where the dense matrix values directly come from DRAM.",
"title": ""
},
{
"docid": "3b167c48f2e658b1001ddbfab02d2729",
"text": "We consider the recognition of activities from passive entities by analysing radio-frequency (RF)-channel fluctuation. In particular, we focus on the recognition of activities by active Software-defined-radio (SDR)-based Device-free Activity Recognition (DFAR) systems and investigate the localisation of activities performed, the generalisation of features for alternative environments and the distinction between walking speeds. Furthermore, we conduct case studies for Received Signal Strength (RSS)-based active and continuous signal-based passive systems to exploit the accuracy decrease in these related cases. All systems are compared to an accelerometer-based recognition system.",
"title": ""
},
{
"docid": "4852971924b06e1314b8946078e15b44",
"text": "In this work we introduce a graph theoretical method to compare MEPs, which is independent of molecular alignment. It is based on the edit distance of weighted rooted trees, which encode the geometrical and topological information of Negative Molecular Isopotential Surfaces. A meaningful chemical classification of a set of 46 molecules with different functional groups was achieved. Structure--activity relationships for the corticosteroid binding affinity (CBG) of 31 steroids by means of hierarchical clustering resulted in a clear partitioning in high, intermediate, and low activity groups, whereas the results from quantitative structure--activity relationships, obtained from a partial least-squares analysis, showed comparable or better cross-validated correlation coefficients than the ones reported for previous methods based solely in the MEP.",
"title": ""
},
{
"docid": "320bde052bb8d325c90df45cb21ac5de",
"text": "The power generated by solar photovoltaic (PV) module depends on surrounding irradiance, temperature and shading conditions. Under partial shading conditions (PSC) the power from the PV module can be dramatically reduced and maximum power point tracking (MPPT) control will be affected. This paper presents a hybrid simulation model of PV cell/module and system using Matlab®/Simulink® and Pspice®. The hybrid simulation model includes the solar PV cells and the converter power stage and can be expanded to add MPPT control and other functions. The model is able to simulate both the I-V characteristics curves and the P-V characteristics curves of PV modules under uniform shading conditions (USC) and PSC. The model is used to study different parameters variations effects on the PV array. The developed model is suitable to simulate several homogeneous or/and heterogeneous PV cells or PV panels connected in series and/or in parallel.",
"title": ""
},
{
"docid": "6a4cd21704bfbdf6fb3707db10f221a8",
"text": "Learning long term dependencies in recurrent networks is difficult due to vanishing and exploding gradients. To overcome this difficulty, researchers have developed sophisticated optimization techniques and network architectures. In this paper, we propose a simpler solution that use recurrent neural networks composed of rectified linear units. Key to our solution is the use of the identity matrix or its scaled version to initialize the recurrent weight matrix. We find that our solution is comparable to a standard implementation of LSTMs on our four benchmarks: two toy problems involving long-range temporal structures, a large language modeling problem and a benchmark speech recognition problem.",
"title": ""
},
{
"docid": "f267f73e9770184fbe617446ee4782c0",
"text": "Juvenile dermatomyositis (JDM) is a rare, potentially life-threatening systemic autoimmune disease primarily affecting muscle and skin. Recent advances in the recognition, standardised assessment and treatment of JDM have been greatly facilitated by large collaborative research networks. Through these networks, a number of immunogenetic risk factors have now been defined, as well as a number of potential pathways identified in the aetio-pathogenesis of JDM. Myositis-associated and myositis-specific autoantibodies are helping to sub-phenotype JDM, defined by clinical features, outcomes and immunogenetic risk factors. Partially validated tools to assess disease activity and damage have assisted in standardising outcomes. Aggressive treatment approaches, including multiple initial therapies, as well as new drugs and biological therapies for refractory disease, offer promise of improved outcomes and less corticosteroid-related toxicity.",
"title": ""
},
{
"docid": "70e88fe5fc43e0815a1efa05e17f7277",
"text": "Smoke detection is a crucial task in many video surveillance applications and could have a great impact to raise the level of safety of urban areas. Many commercial smoke detection sensors exist but most of them cannot be applied in open space or outdoor scenarios. With this aim, the paper presents a smoke detection system that uses a common CCD camera sensor to detect smoke in images and trigger alarms. First, a proper background model is proposed to reliably extract smoke regions and avoid over-segmentation and false positives in outdoor scenarios where many distractors are present, such as moving trees or light reflexes. A novel Bayesian approach is adopted to detect smoke regions in the scene analyzing image energy by means of the Wavelet Transform coefficients and Color Information. A statistical model of image energy is built, using a temporal Gaussian Mixture, to analyze the energy decay that typically occurs when smoke covers the scene then the detection is strengthen evaluating the color blending between a reference smoke color and the input frame. The proposed system is capable of detecting rapidly smoke events both in night and in day conditions with a reduced number of false alarms hence is particularly suitable for monitoring large outdoor scenarios where common sensors would fail. An extensive experimental campaign both on recorded videos and live cameras evaluates the efficacy and efficiency of the system in many real world scenarios, such as outdoor storages and forests.",
"title": ""
},
{
"docid": "5bb36646f4db3d2efad8e0ee828b3022",
"text": "PURPOSE\nWhile modern clinical CT scanners under normal circumstances produce high quality images, severe artifacts degrade the image quality and the diagnostic value if metal prostheses or other metal objects are present in the field of measurement. Standard methods for metal artifact reduction (MAR) replace those parts of the projection data that are affected by metal (the so-called metal trace or metal shadow) by interpolation. However, while sinogram interpolation methods efficiently remove metal artifacts, new artifacts are often introduced, as interpolation cannot completely recover the information from the metal trace. The purpose of this work is to introduce a generalized normalization technique for MAR, allowing for efficient reduction of metal artifacts while adding almost no new ones. The method presented is compared to a standard MAR method, as well as MAR using simple length normalization.\n\n\nMETHODS\nIn the first step, metal is segmented in the image domain by thresholding. A 3D forward projection identifies the metal trace in the original projections. Before interpolation, the projections are normalized based on a 3D forward projection of a prior image. This prior image is obtained, for example, by a multithreshold segmentation of the initial image. The original rawdata are divided by the projection data of the prior image and, after interpolation, denormalized again. Simulations and measurements are performed to compare normalized metal artifact reduction (NMAR) to standard MAR with linear interpolation and MAR based on simple length normalization.\n\n\nRESULTS\nPromising results for clinical spiral cone-beam data are presented in this work. Included are patients with hip prostheses, dental fillings, and spine fixation, which were scanned at pitch values ranging from 0.9 to 3.2. Image quality is improved considerably, particularly for metal implants within bone structures or in their proximity. The improvements are evaluated by comparing profiles through images and sinograms for the different methods and by inspecting ROIs. NMAR outperforms both other methods in all cases. It reduces metal artifacts to a minimum, even close to metal regions. Even for patients with dental fillings, which cause most severe artifacts, satisfactory results are obtained with NMAR. In contrast to other methods, NMAR prevents the usual blurring of structures close to metal implants if the metal artifacts are moderate.\n\n\nCONCLUSIONS\nNMAR clearly outperforms the other methods for both moderate and severe artifacts. The proposed method reliably reduces metal artifacts from simulated as well as from clinical CT data. Computationally efficient and inexpensive compared to iterative methods, NMAR can be used as an additional step in any conventional sinogram inpainting-based MAR method.",
"title": ""
},
{
"docid": "1a41bd991241ed1751beda2362465a0d",
"text": "Over the last decade, Convolutional Neural Networks (CNN) saw a tremendous surge in performance. However, understanding what a network has learned still proves to be a challenging task. To remedy this unsatisfactory situation, a number of groups have recently proposed different methods to visualize the learned models. In this work we suggest a general taxonomy to classify and compare these methods, subdividing the literature into three main categories and providing researchers with a terminology to base their works on. Furthermore, we introduce the FeatureVis library for MatConvNet: an extendable, easy to use open source library for visualizing CNNs. It contains implementations from each of the three main classes of visualization methods and serves as a useful tool for an enhanced understanding of the features learned by intermediate layers, as well as for the analysis of why a network might fail for certain examples.",
"title": ""
},
{
"docid": "47f1d6df5ec3ff30d747fb1fcbc271a7",
"text": "a r t i c l e i n f o Experimental studies routinely show that participants who play a violent game are more aggressive immediately following game play than participants who play a nonviolent game. The underlying assumption is that nonviolent games have no effect on aggression, whereas violent games increase it. The current studies demonstrate that, although violent game exposure increases aggression, nonviolent video game exposure decreases aggressive thoughts and feelings (Exp 1) and aggressive behavior (Exp 2). When participants assessed after a delay were compared to those measured immediately following game play, violent game players showed decreased aggressive thoughts, feelings and behavior, whereas nonviolent game players showed increases in these outcomes. Experiment 3 extended these findings by showing that exposure to nonviolent puzzle-solving games with no expressly prosocial content increases prosocial thoughts, relative to both violent game exposure and, on some measures, a no-game control condition. Implications of these findings for models of media effects are discussed. A major development in mass media over the last 25 years has been the advent and rapid growth of the video game industry. From the earliest arcade-based console games, video games have been immediately and immensely popular, particularly among young people and their subsequent introduction to the home market only served to further elevate their prevalence (Gentile, 2009). Given their popularity, social scientists have been concerned with the potential effects of video games on those who play them, focusing particularly on games with violent content. While a large percentage of games have always involved the destruction of enemies, recent advances in technology have enabled games to become steadily more realistic. Coupled with an increase in the number of adult players, these advances have enabled the development of games involving more and more graphic violence. Over the past several years, the majority of best-selling games have involved frequent and explicit acts of violence as a central gameplay theme (Smith, Lachlan, & Tamborini, 2003). A video game is essentially a simulated experience. Virtually every major theory of human aggression, including social learning theory, predicts that repeated simulation of antisocial behavior will produce an increase in antisocial behavior (e.g., aggression) and a decrease in prosocial behavior (e.g., helping) outside the simulated environment (i.e., in \" real life \"). In addition, an increase in the perceived realism of the simulation is posited to increase the strength of negative effects (Gentile & Anderson, 2003). Meta-analyses …",
"title": ""
},
{
"docid": "041772bbad50a5bf537c0097e1331bdd",
"text": "As students read expository text, comprehension is improved by pausing to answer questions that reinforce the material. We describe an automatic question generator that uses semantic pattern recognition to create questions of varying depth and type for self-study or tutoring. Throughout, we explore how linguistic considerations inform system design. In the described system, semantic role labels of source sentences are used in a domain-independent manner to generate both questions and answers related to the source sentence. Evaluation results show a 44% reduction in the error rate relative to the best prior systems, averaging over all metrics, and up to 61% reduction in the error rate on grammaticality judgments.",
"title": ""
}
] | scidocsrr |
896c0e37abe2764b4c8a1e6e268cd9ee | Dataset for the First Evaluation on Chinese Machine Reading Comprehension | [
{
"docid": "da2f99dd979a1c4092c22ed03537bbe8",
"text": "Several large cloze-style context-questionanswer datasets have been introduced recently: the CNN and Daily Mail news data and the Children’s Book Test. Thanks to the size of these datasets, the associated text comprehension task is well suited for deep-learning techniques that currently seem to outperform all alternative approaches. We present a new, simple model that uses attention to directly pick the answer from the context as opposed to computing the answer using a blended representation of words in the document as is usual in similar models. This makes the model particularly suitable for question-answering problems where the answer is a single word from the document. Our model outperforms models previously proposed for these tasks by a large margin.",
"title": ""
},
{
"docid": "79a2cc561cd449d8abb51c162eb8933d",
"text": "We introduce a new test of how well language models capture meaning in children’s books. Unlike standard language modelling benchmarks, it distinguishes the task of predicting syntactic function words from that of predicting lowerfrequency words, which carry greater semantic content. We compare a range of state-of-the-art models, each with a different way of encoding what has been previously read. We show that models which store explicit representations of long-term contexts outperform state-of-the-art neural language models at predicting semantic content words, although this advantage is not observed for syntactic function words. Interestingly, we find that the amount of text encoded in a single memory representation is highly influential to the performance: there is a sweet-spot, not too big and not too small, between single words and full sentences that allows the most meaningful information in a text to be effectively retained and recalled. Further, the attention over such window-based memories can be trained effectively through self-supervision. We then assess the generality of this principle by applying it to the CNN QA benchmark, which involves identifying named entities in paraphrased summaries of news articles, and achieve state-of-the-art performance.",
"title": ""
},
{
"docid": "a0e4080652269445c6e36b76d5c8cd09",
"text": "Enabling a computer to understand a document so that it can answer comprehension questions is a central, yet unsolved goal of NLP. A key factor impeding its solution by machine learned systems is the limited availability of human-annotated data. Hermann et al. (2015) seek to solve this problem by creating over a million training examples by pairing CNN and Daily Mail news articles with their summarized bullet points, and show that a neural network can then be trained to give good performance on this task. In this paper, we conduct a thorough examination of this new reading comprehension task. Our primary aim is to understand what depth of language understanding is required to do well on this task. We approach this from one side by doing a careful hand-analysis of a small subset of the problems and from the other by showing that simple, carefully designed systems can obtain accuracies of 72.4% and 75.8% on these two datasets, exceeding current state-of-the-art results by over 5% and approaching what we believe is the ceiling for performance on this task.1",
"title": ""
}
] | [
{
"docid": "4becb2f976472e288bcb791f29612475",
"text": "In this paper we integrate at the tactical level two decision problems arising in container terminals: the berth allocation problem, which consists of assigning and scheduling incoming ships to berthing positions, and the quay crane assignment problem, which assigns to incoming ships a certain QC profile (i.e. number of quay cranes per working shift). We present two formulations: a mixed integer quadratic program and a linearization which reduces to a mixed integer linear program. The objective function aims, on the one hand, to maximize the total value of chosen QC profiles and, on the other hand, to minimize the housekeeping costs generated by transshipment flows between ships. To solve the problem we developed a heuristic algorithm which combines tabu search methods and mathematical programming techniques. Computational results on instances based on real data are presented and compared to those obtained through a commercial solver.",
"title": ""
},
{
"docid": "af47d1cc068467eaee7b6129682c9ee3",
"text": "Diffusion kurtosis imaging (DKI) is gaining rapid adoption in the medical imaging community due to its ability to measure the non-Gaussian property of water diffusion in biological tissues. Compared to traditional diffusion tensor imaging (DTI), DKI can provide additional details about the underlying microstructural characteristics of the neural tissues. It has shown promising results in studies on changes in gray matter and mild traumatic brain injury where DTI is often found to be inadequate. The DKI dataset, which has high-fidelity spatio-angular fields, is difficult to visualize. Glyph-based visualization techniques are commonly used for visualization of DTI datasets; however, due to the rapid changes in orientation, lighting, and occlusion, visually analyzing the much more higher fidelity DKI data is a challenge. In this paper, we provide a systematic way to manage, analyze, and visualize high-fidelity spatio-angular fields from DKI datasets, by using spherical harmonics lighting functions to facilitate insights into the brain microstructure.",
"title": ""
},
{
"docid": "7a231a0286e0f921c92974b773bc494f",
"text": "We consider the sequential (i.e., online) detection of false data injection attacks in smart grid, which aims to manipulate the state estimation procedure by injecting malicious data to the monitoring meters. The unknown parameters in the system, namely the state vector, injected malicious data and the set of attacked meters pose a significant challenge for designing a robust, computationally efficient, and high-performance detector. We propose a sequential detector based on the generalized likelihood ratio to address this challenge. Specifically, the proposed detector is designed to be robust to a variety of attacking strategies, and load situations in the power system, and its computational complexity linearly scales with the number of meters. Moreover, it considerably outperforms the existing first-order cumulative sum detector in terms of the average detection delay and robustness to various attacking strategies. For wide-area monitoring in smart grid, we further develop a distributed sequential detector using an adaptive sampling technique called level-triggered sampling. The resulting distributed detector features single bit per sample in terms of the communication overhead, while preserving the high performance of the proposed centralized detector.",
"title": ""
},
{
"docid": "ee08d4723ebf030bb79c3c1a18d27ee3",
"text": "In this work we present a new method for the modeling and simulation study of a photovoltaic grid connected system and its experimental validation. This method has been applied in the simulation of a grid connected PV system with a rated power of 3.2 Kwp, composed by a photovoltaic generator and a single phase grid connected inverter. First, a PV module, forming part of the whole PV array is modeled by a single diode lumped circuit and main parameters of the PV module are evaluated. Results obtained for the PV module characteristics have been validated experimentally by carrying out outdoor I–V characteristic measurements. To take into account the power conversion efficiency, the measured AC output power against DC input power is fitted to a second order efficiency model to derive its specific parameters. The simulation results have been performed through Matlab/Simulink environment. Results has shown good agreement with experimental data, whether for the I–V characteristics or for the whole operating system. The significant error indicators are reported in order to show the effectiveness of the simulation model to predict energy generation for such PV system. 2011 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "44368062de68f6faed57d43b8e691e35",
"text": "In this paper we explore one of the key aspects in building an emotion recognition system: generating suitable feature representations. We generate feature representations from both acoustic and lexical levels. At the acoustic level, we first extract low-level features such as intensity, F0, jitter, shimmer and spectral contours etc. We then generate different acoustic feature representations based on these low-level features, including statistics over these features, a new representation derived from a set of low-level acoustic codewords, and a new representation from Gaussian Supervectors. At the lexical level, we propose a new feature representation named emotion vector (eVector). We also use the traditional Bag-of-Words (BoW) feature. We apply these feature representations for emotion recognition and compare their performance on the USC-IEMOCAP database. We also combine these different feature representations via early fusion and late fusion. Our experimental results show that late fusion of both acoustic and lexical features achieves four-class emotion recognition accuracy of 69.2%.",
"title": ""
},
{
"docid": "1f809c009955737df324c6e466a86d70",
"text": "Digital banking as an essential service can be hard to access in remote, rural regions where the network connectivity is unavailable or intermittent. The payment operators like Visa and Mastercard often face difficulties reaching these remote, rural areas. Although micro-banking has been made possible by short message service or unstructured supplementary service data messages in some places, their security flaws and session-based nature prevent them from wider adoption. Global-level cryptocurrencies enable low-cost, secure, and pervasive money transferring among distributed peers, but are still limited in their ability to reach people in remote communities. We propose a blockchain-based digital payment scheme that can deliver reliable services on top of unreliable networks in remote regions. We focus on a scenario where a community-run base station provides reliable local network connectivity while intermittently connects to the broader Internet. We take advantage of the distributed verification guarantees of the Blockchain technology for financial transaction verification and leverage smart contracts for secure service management. In the proposed system, payment operators deploy multiple proxy nodes that are intermittently connected to the remote communities where the local blockchain networks, such as Ethereum are composed of miners, vendors, and regular users. Through probabilistic modeling, we devise design parameters for the blockchain network to realize robust operation over the top of the unreliable network. Furthermore, we show that the transaction processing time will not be significantly impacted due to the network unreliability through extensive emulations on a private Ethereum network. Finally, we demonstrate the practical feasibility of the proposed system by developing Near Field Communication (NFC)-enabled payment gateways on Raspberry-Pis, a mobile wallet application and mining nodes on off-the-shelf computers.",
"title": ""
},
{
"docid": "c1ad9ac457bbb34ce00ba76fcbb8185d",
"text": "Securing the Internet of Things (IoT) is a necessary milestone toward expediting the deployment of its applications and services. In particular, the functionality of the IoT devices is extremely dependent on the reliability of their message transmission. Cyber attacks such as data injection, eavesdropping, and man-in-the-middle threats present major security challenges. Securing IoT devices against such attacks requires accounting for their stringent computational power and need for low-latency operations. In this paper, a novel deep learning method is proposed to detect cyber attacks via dynamic watermarking of IoT signals. The proposed learning framework, based on a long short-term memory (LSTM) structure, enables the IoT devices to extract a set of stochastic features from their generated signal and dynamically watermark these features into the signal. This method enables the IoT's cloud center, which collects signals from the IoT devices, to effectively authenticate the reliability of the signals. Furthermore, the proposed method prevents complicated attack scenarios such as eavesdropping in which the cyber attacker collects the data from the IoT devices and aims to break the watermarking algorithm. Simulation results show that, with an attack detection delay of under 1 second, the messages can be transmitted from IoT devices with an almost 100% reliability.",
"title": ""
},
{
"docid": "2e7ee3674bdd58967380a59d638b2b17",
"text": "Media applications are characterized by large amounts of available parallelism, little data reuse, and a high computation to memory access ratio. While these characteristics are poorly matched to conventional microprocessor architectures, they are a good fit for modern VLSI technology with its high arithmetic capacity but limited global bandwidth. The stream programming model, in which an application is coded as streams of data records passing through computation kernels, exposes both parallelism and locality in media applications that can be exploited by VLSI architectures. The Imagine architecture supports the stream programming model by providing a bandwidth hierarchy tailored to the demands of media applications. Compared to a conventional scalar processor, Imagine reduces the global register and memory bandwidth required by typical applications by factors of 13 and 21 respectively. This bandwidth efficiency enables a single chip Imagine processor to achieve a peak performance of 16.2GFLOPS (single-precision floating point) and sustained performance of up to 8.5GFLOPS on media processing kernels.",
"title": ""
},
{
"docid": "f88b686c82ed883b5b271900a809f6c1",
"text": "I believe that four advancements are necessary to achieve that aim. Methods for integrating diverse algorithms seamlessly into big-data architectures need to be found. Software development and archiving should be brought together under one roof. Data reading must become automated among formats. Ultimately, the interpretation of vast streams of scientific data will require a new breed of researcher equally familiar with science and advanced computing.",
"title": ""
},
{
"docid": "8f0975954a3767eab03f68884ecb54fa",
"text": "Digital Image processing has become popular and rapidly growing area of application under Computer Science. A basic study of image processing and its application areas are carried out in this paper. Each of these applications may be unique from others. To illustrate the basic concepts of image processing various reviews had done in this paper. The main two applications of digital image processing are discussed below. Firstly pictorial information can be improved for human perception, secondly for autonomous machine perception and efficient storage processing using image data. Digital image can be represented using set of digital values called pixels. Pixel value represent opacities, colors, gray levels, heights etc. Digitization causes a digital image to become an approximation of a real scene. To process an image some operations are applied on image. This paper discusses about the basic aspects of image processing .Image Acquisition means sensing an image .Image Enhancement means improvement in appearance of image .Image Restoration to restore an image .Image Compression to reduce the amount of data of an image to reduce size .This class of technique also include extraction/selection procedures .The importance applications of image processing include Artistic effects ,Bio-medical ,Industrial Inspection ,Geographic Information system ,Law Enforcement, Human Computer interface such as Face Recognition and Gesture recognition.",
"title": ""
},
{
"docid": "746de4365c6fd923ad88aceb3680e30b",
"text": "The requirements for minimising microbial contamination in pharmaceutical cleanrooms are outlined in regulatory documents published by authorities that include the European Commission1 and the Food and Drug Administration in the USA2. These authorities also suggest the use of risk management and assessment techniques to identify and control sources of microbial contamination3,4. Risk assessment and management methods have been investigated by the authors of this article5–9 and other approaches are discussed by Mollah et al10. Risk assessment methods are used to calculate the degree of risk to the product from microbial sources in a cleanroom. Factors that influence risk are determined and assigned descriptors of risk, which are of the ‘high’, ‘medium’, and ‘low’ type that act as surrogates for actual numerical values. Numerical scores are assigned to these descriptors and the scores combined, usually by multiplication, to obtain a risk assessment for each source of contamination. However, a risk assessment carried out in this manner may not be accurate, for the following reasons.",
"title": ""
},
{
"docid": "9d08b5e74b62a66c8521f2c6dc254920",
"text": "A recognition with a large-scale network is simulated on a PDP-11/34 minicomputer and is shown to have a great capability for visual pattern recognition. The model consists of nine layers of cells. The authors demonstrate that the model can be trained to recognize handwritten Arabic numerals even with considerable deformations in shape. A learning-with-a-teacher process is used for the reinforcement of the modifiable synapses in the new large-scale model, instead of the learning-without-a-teacher process applied to a previous model. The authors focus on the mechanism for pattern recognition rather than that for self-organization.",
"title": ""
},
{
"docid": "d01a22301de1274220a16351d14d4d83",
"text": "In this paper, we propose a solution to the problems and the features encountered in the geometric modeling of the 6 DOF manipulator arm, the Fanuc. Among these, the singularity of the Jacobian matrix obtained by the kinematic model and which has a great influence on the boundaries and accessibility of the workspace of manipulator robot and it reduce the number of solutions found. We can decompose it into several sub-matrices of smaller dimensions, for ultimately a non-linear equation with two unknowns. We validate our work by conducting a simulation software platform that allows us to verify the results of manipulation in a virtual reality environment based on VRML and Matlab software, integration with the CAD model.",
"title": ""
},
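As a toy illustration of extracting joint angles from a geometric model, the sketch below solves the inverse kinematics of a planar two-link arm in closed form. This is far simpler than the 6-DOF Fanuc model discussed above (no Jacobian decomposition is involved) and is included only to show the flavor of such geometric solutions; link lengths and targets are arbitrary.

```python
# Closed-form inverse kinematics for a planar 2-link arm, via the law of cosines.
import math

def two_link_ik(x, y, l1, l2, elbow_up=True):
    r2 = x * x + y * y
    c2 = (r2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    if abs(c2) > 1.0:
        raise ValueError("target out of reach")        # outside the workspace
    s2 = math.sqrt(1.0 - c2 * c2) * (1.0 if elbow_up else -1.0)
    theta2 = math.atan2(s2, c2)
    theta1 = math.atan2(y, x) - math.atan2(l2 * s2, l1 + l2 * c2)
    return theta1, theta2

t1, t2 = two_link_ik(1.0, 1.0, 1.0, 1.0)
# Forward-kinematics check: should reproduce the target (1.0, 1.0).
fx = math.cos(t1) + math.cos(t1 + t2)
fy = math.sin(t1) + math.sin(t1 + t2)
print(round(fx, 6), round(fy, 6))
```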
{
"docid": "0fcd4fcc743010415db27cc8201f8416",
"text": " A model is presented that allows prediction of the probability for the formation of appositions between the axons and dendrites of any two neurons based only on their morphological statistics and relative separation. Statistics of axonal and dendritic morphologies of single neurons are obtained from 3D reconstructions of biocytin-filled cells, and a statistical representation of the same cell type is obtained by averaging across neurons according to the model. A simple mathematical formulation is applied to the axonal and dendritic statistical representations to yield the probability for close appositions. The model is validated by a mathematical proof and by comparison of predicted appositions made by layer 5 pyramidal neurons in the rat somatosensory cortex with real anatomical data. The model could be useful for studying microcircuit connectivity and for designing artificial neural networks.",
"title": ""
},
{
"docid": "66ca4bacfbae3ff32b105565dace5194",
"text": "In this paper, we analyze and systematize the state-ofthe-art graph data privacy and utility techniques. Specifically, we propose and develop SecGraph (available at [1]), a uniform and open-source Secure Graph data sharing/publishing system. In SecGraph, we systematically study, implement, and evaluate 11 graph data anonymization algorithms, 19 data utility metrics, and 15 modern Structure-based De-Anonymization (SDA) attacks. To the best of our knowledge, SecGraph is the first such system that enables data owners to anonymize data by state-of-the-art anonymization techniques, measure the data’s utility, and evaluate the data’s vulnerability against modern De-Anonymization (DA) attacks. In addition, SecGraph enables researchers to conduct fair analysis and evaluation of existing and newly developed anonymization/DA techniques. Leveraging SecGraph, we conduct extensive experiments to systematically evaluate the existing graph data anonymization and DA techniques. The results demonstrate that (i) most anonymization schemes can partially or conditionally preserve most graph utilities while losing some application utility; (ii) no DA attack is optimum in all scenarios. The DA performance depends on several factors, e.g., similarity between anonymized and auxiliary data, graph density, and DA heuristics; and (iii) all the state-of-the-art anonymization schemes are vulnerable to several or all of the modern SDA attacks. The degree of vulnerability of each anonymization scheme depends on how much and which data utility it preserves.",
"title": ""
},
{
"docid": "aa7d94bebbd988af48bc7cb9f5e35a39",
"text": "Over the recent years, embedding methods have attracted increasing focus as a means for knowledge graph completion. Similarly, rule-based systems have been studied for this task in the past. What is missing so far is a common evaluation that includes more than one type of method. We close this gap by comparing representatives of both types of systems in a frequently used evaluation protocol. Leveraging the explanatory qualities of rule-based systems, we present a fine-grained evaluation that gives insight into characteristics of the most popular datasets and points out the different strengths and shortcomings of the examined approaches. Our results show that models such as TransE, RESCAL or HolE have problems in solving certain types of completion tasks that can be solved by a rulebased approach with high precision. At the same time, there are other completion tasks that are difficult for rule-based systems. Motivated by these insights, we combine both families of approaches via ensemble learning. The results support our assumption that the two methods complement each other in a beneficial way.",
"title": ""
},
{
"docid": "3370a138771566427fde6208dac759b7",
"text": "Communication protocols determine how network components interact with each other. Therefore, the ability to derive a speci cation of a protocol can be useful in various contexts, such as to support deeper black-box testing or e ective defense mechanisms. Unfortunately, it is often hard to obtain the speci cation because systems implement closed (i.e., undocumented) protocols, or because a time consuming translation has to be performed, from the textual description of the protocol to a format readable by the tools. To address these issues, we propose a new methodology to automatically infer a speci cation of a protocol from network traces, which generates automata for the protocol language and state machine. Since our solution only resorts to interaction samples of the protocol, it is well-suited to uncover the message formats and protocol states of closed protocols and also to automate most of the process of specifying open protocols. The approach was implemented in ReverX and experimentally evaluated with publicly available FTP traces. Our results show that the inferred speci cation is a good approximation of the reference speci cation, exhibiting a high level of precision and recall.",
"title": ""
},
{
"docid": "9245316ec7a2d1cb98d9385e54e0874d",
"text": "A novel partial order is defined on the space of digraphs or hypergraphs, based on assessing the cost of producing a graph via a sequence of elementary transformations. Leveraging work by Knuth and Skilling on the foundations of inference, and the structure of Heyting algebras on graph space, this partial order is used to construct an intuitionistic probability measure that applies to either digraphs or hypergraphs. As logical inference steps can be represented as transformations on hypergraphs representing logical statements, this also yields an intuitionistic probability measure on spaces of theorems. The central result is also extended to yield intuitionistic probabilities based on more general weighted rule systems defined over bicartesian closed categories.",
"title": ""
},
{
"docid": "754108343e8a57852d4a54abf45f5c43",
"text": "Precision measurement of dc high current is usually realized by second harmonic fluxgate current transducers, but the complicated modulation and demodulation circuits with high cost have been limiting their applications. This paper presents a low-cost transducer that can substitute the traditional ones for precision measurement of high current. The new transducer, based on the principle of zero-flux, is the combination of an improved self-oscillating fluxgate sensor with a magnetic integrator in a common feedback loop. The transfer function of the zero-flux control strategy of the transducer is established to verify the validity of the qualitative analysis on operating principle. Origins and major influence factors of the modulation ripple, respectively, caused by the useful signal extraction circuit and the transformer effect are studied, and related suppression methods are proposed, which can be considered as one of the major technical modifications for performance improvement. As verification, a prototype is realized, and several key specifications, including the linearity, small-signal bandwidth, modulation ripple, ratio stability under full load, power-on repeatability, magnetic error, and temperature coefficient, are characterized. Measurement results show that the new transducer with the maximum output ripple 0.3 μA can measure dc current up to ±600 A with a relative accuracy 1.3 ppm in the full scale, and it also can measure ac current and has a -3 dB bandwidth greater than 100 kHz.",
"title": ""
}
] | scidocsrr |
7c944862dfcc3f89cd284ac16b50f486 | Grouping Synonymous Sentences from a Parallel Corpus | [
{
"docid": "4361b4d2d77d22f46b9cd5920a4822c8",
"text": "While paraphrasing is critical both for interpretation and generation of natural language, current systems use manual or semi-automatic methods to collect paraphrases. We present an unsupervised learning algorithm for identification of paraphrases from a corpus of multiple English translations of the same source text. Our approach yields phrasal and single word lexical paraphrases as well as syntactic paraphrases.",
"title": ""
}
] | [
{
"docid": "ee5eb52575cf01b825b244d9391c6f5c",
"text": "We present a data-driven framework called generative adversarial privacy (GAP). Inspired by recent advancements in generative adversarial networks (GANs), GAP allows the data holder to learn the privatization mechanism directly from the data. Under GAP, finding the optimal privacy mechanism is formulated as a constrained minimax game between a privatizer and an adversary. We show that for appropriately chosen adversarial loss functions, GAP provides privacy guarantees against strong information-theoretic adversaries. We also evaluate the performance of GAP on multi-dimensional Gaussian mixture models and the GENKI face database. KeywordsData Privacy, Differential Privacy, Adversarial Learning, Generative Adversarial Networks, Minimax Games, Information Theory",
"title": ""
},
{
"docid": "30ba59e335d9b448b29d2528b5e08a5c",
"text": "Classification of alcoholic electroencephalogram (EEG) signals is a challenging job in biomedical research for diagnosis and treatment of brain diseases of alcoholic people. The aim of this study was to introduce a robust method that can automatically identify alcoholic EEG signals based on time–frequency (T–F) image information as they convey key characteristics of EEG signals. In this paper, we propose a new hybrid method to classify automatically the alcoholic and control EEG signals. The proposed scheme is based on time–frequency images, texture image feature extraction and nonnegative least squares classifier (NNLS). In T–F analysis, the spectrogram of the short-time Fourier transform is considered. The obtained T–F images are then converted into 8-bit grayscale images. Co-occurrence of the histograms of oriented gradients (CoHOG) and Eig(Hess)-CoHOG features are extracted from T–F images. Finally, obtained features are fed into NNLS classifier as input for classify alcoholic and control EEG signals. To verify the effectiveness of the proposed approach, we replace the NNLS classifier by artificial neural networks, k-nearest neighbor, linear discriminant analysis and support vector machine classifier separately, with the same features. Experimental outcomes along with comparative evaluations with the state-of-the-art algorithms manifest that the proposed method outperforms competing algorithms. The experimental outcomes are promising, and it can be anticipated that upon its implementation in clinical practice, the proposed scheme will alleviate the onus of the physicians and expedite neurological diseases diagnosis and research.",
"title": ""
},
{
"docid": "37c005b87b3ccdfad86c760ecba7b8de",
"text": "Intelligent processing of complex signals such as images is often performed by a hierarchy of nonlinear processing layers, such as a deep net or an object recognition cascade. Joint estimation of the parameters of all the layers is a difficult nonconvex optimization. We describe a general strategy to learn the parameters and, to some extent, the architecture of nested systems, which we call themethod of auxiliary coordinates (MAC) . This replaces the original problem involving a deeply nested function with a constrained problem involving a different function in an augmented space without nesting. The constrained problem may be solved with penalty-based methods using alternating optimization over the parameters and the auxiliary coordinates. MAC has provable convergence, is easy to implement reusing existing algorithms for single layers, can be parallelized trivially and massively, applies even when parameter derivatives are not available or not desirable, can perform some model selection on the fly, and is competitive with stateof-the-art nonlinear optimizers even in the serial computation setting, often providing reasonable models within a few iterations. The continued increase in recent years in data availability and processing power has enabled the development and practical applicability of ever more powerful models in sta tistical machine learning, for example to recognize faces o r speech, or to translate natural language. However, physical limitations in serial computation suggest that scalabl e processing will require algorithms that can be massively parallelized, so they can profit from the thousands of inexpensive processors available in cloud computing. We focus on hierarchical, or nested, processing architectures. As a particular but important example, consider deep neuAppearing in Proceedings of the 17 International Conference on Artificial Intelligence and Statistics (AISTATS) 2014, Reykjavik, Iceland. JMLR: W&CP volume 33. Copyright 2014 by the authors. ral nets (fig. 1), which were originally inspired by biological systems such as the visual and auditory cortex in the mammalian brain (Serre et al., 2007), and which have been proven very successful at learning sophisticated task s, such as recognizing faces or speech, when trained on data.",
"title": ""
},
{
"docid": "82535c102f41dc9d47aa65bd71ca23be",
"text": "We report on an experiment that examined the influence of anthropomorphism and perceived agency on presence, copresence, and social presence in a virtual environment. The experiment varied the level of anthropomorphism of the image of interactants: high anthropomorphism, low anthropomorphism, or no image. Perceived agency was manipulated by telling the participants that the image was either an avatar controlled by a human, or an agent controlled by a computer. The results support the prediction that people respond socially to both human and computer-controlled entities, and that the existence of a virtual image increases tele-presence. Participants interacting with the less-anthropomorphic image reported more copresence and social presence than those interacting with partners represented by either no image at all or by a highly anthropomorphic image of the other, indicating that the more anthropomorphic images set up higher expectations that lead to reduced presence when these expectations were not met.",
"title": ""
},
{
"docid": "10d8bbea398444a3fb6e09c4def01172",
"text": "INTRODUCTION\nRecent years have witnessed a growing interest in improving bus safety operations worldwide. While in the United States buses are considered relatively safe, the number of bus accidents is far from being negligible, triggering the introduction of the Motor-coach Enhanced Safety Act of 2011.\n\n\nMETHOD\nThe current study investigates the underlying risk factors of bus accident severity in the United States by estimating a generalized ordered logit model. Data for the analysis are retrieved from the General Estimates System (GES) database for the years 2005-2009.\n\n\nRESULTS\nResults show that accident severity increases: (i) for young bus drivers under the age of 25; (ii) for drivers beyond the age of 55, and most prominently for drivers over 65 years old; (iii) for female drivers; (iv) for very high (over 65 mph) and very low (under 20 mph) speed limits; (v) at intersections; (vi) because of inattentive and risky driving.",
"title": ""
},
{
"docid": "0798ed2ff387823bcd7572a9ddf6a5e1",
"text": "We present a novel algorithm for point cloud segmentation using group convolutions. Our approach uses a radial basis function (RBF) based variational autoencoder (VAE) network. We transform unstructured point clouds into regular voxel grids and use subvoxels within each voxel to encode the local geometry using a VAE architecture. In order to handle sparse distribution of points within each voxel, we use RBF to compute a local, continuous representation within each subvoxel. We extend group equivariant convolutions to 3D point cloud processing and increase the expressive capacity of the neural network. The combination of RBF and VAE results in a good volumetric representation that can handle noisy point cloud datasets and is more robust for learning. We highlight the performance on standard benchmarks and compare with prior methods. In practice, our approach outperforms state-of-the-art segmentation algorithms on the ShapeNet and S3DIS datasets.",
"title": ""
},
{
"docid": "3f9e5be7bfe8c28291758b0670afc61c",
"text": "Grayscale error di usion introduces nonlinear distortion (directional artifacts and false textures), linear distortion (sharpening), and additive noise. In color error di usion what color to render is a major concern in addition to nding optimal dot patterns. This article presents a survey of key methods for artifact reduction in grayscale and color error di usion. The linear gain model by Kite et al. replaces the thresholding quantizer with a scalar gain plus additive noise. They show that the sharpening is proportional to the scalar gain. Kite et al. derive the sharpness control parameter value in threshold modulation (Eschbach and Knox, 1991) to compensate linear distortion. False textures at mid-gray (Fan and Eschbach, 1994) are due to limit cycles, which can be broken up by using a deterministic bit ipping quantizer (Damera-Venkata and Evans, 2001). Several other variations on grayscale error di usion have been proposed to reduce false textures in shadow and highlight regions, including green noise halftoning (Levien, 1993) and tone-dependent error di usion (Li and Allebach, 2002). Color error di usion ideally requires the quantization error to be di used to frequencies and colors, to which the HVS is least sensitive. We review the following approaches: color plane separable (Kolpatzik and Bouman 1992) design; perceptual quantization (Shaked et al. 1996, Haneishi et al. 1996) ; green noise extensions (Lau et al. 2000); and matrix-valued error lters (Damera-Venkata and Evans, 2001).",
"title": ""
},
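A minimal sketch of the grayscale error-diffusion loop that the survey's linear gain model and threshold-modulation variants build on: a threshold quantizer followed by an error filter. The Floyd-Steinberg weights used here are one standard choice, not necessarily the configuration analyzed in the cited works.

```python
import numpy as np

def error_diffuse(img):
    # img: 2-D float array with values in [0, 1]; returns a binary halftone.
    f = img.astype(float).copy()
    h, w = f.shape
    out = np.zeros_like(f)
    for y in range(h):
        for x in range(w):
            old = f[y, x]
            new = 1.0 if old >= 0.5 else 0.0       # threshold quantizer
            out[y, x] = new
            err = old - new                        # quantization error
            if x + 1 < w:
                f[y, x + 1] += err * 7 / 16        # Floyd-Steinberg error filter
            if y + 1 < h and x > 0:
                f[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:
                f[y + 1, x] += err * 5 / 16
            if y + 1 < h and x + 1 < w:
                f[y + 1, x + 1] += err * 1 / 16
    return out

gray = np.tile(np.linspace(0, 1, 64), (16, 1))     # horizontal ramp test image
halftone = error_diffuse(gray)
print(halftone.mean())   # dot density roughly tracks the mean gray level (~0.5)
```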
{
"docid": "ebaf73ec27127016f3327e6a0b88abff",
"text": "A hospital is a health care organization providing patient treatment by expert physicians, surgeons and equipments. A report from a health care accreditation group says that miscommunication between patients and health care providers is the reason for the gap in providing emergency medical care to people in need. In developing countries, illiteracy is the major key root for deaths resulting from uncertain diseases constituting a serious public health problem. Mentally affected, differently abled and unconscious patients can’t communicate about their medical history to the medical practitioners. Also, Medical practitioners can’t edit or view DICOM images instantly. Our aim is to provide palm vein pattern recognition based medical record retrieval system, using cloud computing for the above mentioned people. Distributed computing technology is coming in the new forms as Grid computing and Cloud computing. These new forms are assured to bring Information Technology (IT) as a service. In this paper, we have described how these new forms of distributed computing will be helpful for modern health care industries. Cloud Computing is germinating its benefit to industrial sectors especially in medical scenarios. In Cloud Computing, IT-related capabilities and resources are provided as services, via the distributed computing on-demand. This paper is concerned with sprouting software as a service (SaaS) by means of Cloud computing with an aim to bring emergency health care sector in an umbrella with physical secured patient records. In framing the emergency healthcare treatment, the crucial thing considered necessary to decide about patients is their previous health conduct records. Thus a ubiquitous access to appropriate records is essential. Palm vein pattern recognition promises a secured patient record access. Likewise our paper reveals an efficient means to view, edit or transfer the DICOM images instantly which was a challenging task for medical practitioners in the past years. We have developed two services for health care. 1. Cloud based Palm vein recognition system 2. Distributed Medical image processing tools for medical practitioners.",
"title": ""
},
{
"docid": "4eb1e28d62af4a47a2e8dc795b89cc09",
"text": "This paper describes a new computational finance approach. This approach combines pattern recognition techniques with an evolutionary computation kernel applied to financial markets time series in order to optimize trading strategies. Moreover, for pattern matching a template-based approach is used in order to describe the desired trading patterns. The parameters for the pattern templates, as well as, for the decision making rules are optimized using a genetic algorithm kernel. The approach was tested considering actual data series and presents a robust profitable trading strategy which clearly beats the market, S&P 500 index, reducing the investment risk significantly.",
"title": ""
},
{
"docid": "764eba2c2763db6dce6c87170e06d0f8",
"text": "Kansei Engineering was developed as a consumer-oriented technology for new product development. It is defined as \"translating technology of a consumer's feeling and image for a product into design elements\". Kansei Engineering (KE) technology is classified into three types, KE Type I, II, and III. KE Type I is a category classification on the new product toward the design elements. Type II utilizes the current computer technologies such as Expert System, Neural Network Model and Genetic Algorithm. Type III is a model using a mathematical structure. Kansei Engineering has permeated Japanese industries, including automotive, electrical appliance, construction, clothing and so forth. The successful companies using Kansei Engineering benefited from good sales regarding the new consumer-oriented products. Relevance to industry Kansei Engineering is utilized in the automotive, electrical appliance, construction, clothing and other industries. This paper provides help to potential product designers in these industries.",
"title": ""
},
{
"docid": "132880bc2af0e8ce5e0dc04b0ff397f6",
"text": "The need to have equitable access to quality healthcare is enshrined in the United Nations (UN) Sustainable Development Goals (SDGs), which defines the developmental agenda of the UN for the next 15 years. In particular, the third SDG focuses on the need to “ensure healthy lives and promote well-being for all at all ages”. In this paper, we build the case that 5G wireless technology, along with concomitant emerging technologies (such as IoT, big data, artificial intelligence and machine learning), will transform global healthcare systems in the near future. Our optimism around 5G-enabled healthcare stems from a confluence of significant technical pushes that are already at play: apart from the availability of high-throughput low-latency wireless connectivity, other significant factors include the democratization of computing through cloud computing; the democratization of Artificial Intelligence (AI) and cognitive computing (e.g., IBM Watson); and the commoditization of data through crowdsourcing and digital exhaust. These technologies together can finally crack a dysfunctional healthcare system that has largely been impervious to technological innovations. We highlight the persistent deficiencies of the current healthcare system and then demonstrate how the 5G-enabled healthcare revolution can fix these deficiencies. We also highlight open technical research challenges, and potential pitfalls, that may hinder the development of such a 5G-enabled health revolution.",
"title": ""
},
{
"docid": "066eef8e511fac1f842c699f8efccd6b",
"text": "In this paper, we propose a new model that is capable of recognizing overlapping mentions. We introduce a novel notion of mention separators that can be effectively used to capture how mentions overlap with one another. On top of a novel multigraph representation that we introduce, we show that efficient and exact inference can still be performed. We present some theoretical analysis on the differences between our model and a recently proposed model for recognizing overlapping mentions, and discuss the possible implications of the differences. Through extensive empirical analysis on standard datasets, we demonstrate the effectiveness of our approach.",
"title": ""
},
{
"docid": "cd8cad6445b081e020d90eb488838833",
"text": "Heavy metal pollution has become one of the most serious environmental problems today. The treatment of heavy metals is of special concern due to their recalcitrance and persistence in the environment. In recent years, various methods for heavy metal removal from wastewater have been extensively studied. This paper reviews the current methods that have been used to treat heavy metal wastewater and evaluates these techniques. These technologies include chemical precipitation, ion-exchange, adsorption, membrane filtration, coagulation-flocculation, flotation and electrochemical methods. About 185 published studies (1988-2010) are reviewed in this paper. It is evident from the literature survey articles that ion-exchange, adsorption and membrane filtration are the most frequently studied for the treatment of heavy metal wastewater.",
"title": ""
},
{
"docid": "062149cd37d1e9f04f32bd6b713f10ab",
"text": "Generative adversarial networks (GANs) learn a deep generative model that is able to synthesize novel, high-dimensional data samples. New data samples are synthesized by passing latent samples, drawn from a chosen prior distribution, through the generative model. Once trained, the latent space exhibits interesting properties that may be useful for downstream tasks such as classification or retrieval. Unfortunately, GANs do not offer an ``inverse model,'' a mapping from data space back to latent space, making it difficult to infer a latent representation for a given data sample. In this paper, we introduce a technique, inversion, to project data samples, specifically images, to the latent space using a pretrained GAN. Using our proposed inversion technique, we are able to identify which attributes of a data set a trained GAN is able to model and quantify GAN performance, based on a reconstruction loss. We demonstrate how our proposed inversion technique may be used to quantitatively compare the performance of various GAN models trained on three image data sets. We provide codes for all of our experiments in the website (https://github.com/ToniCreswell/InvertingGAN).",
"title": ""
},
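A minimal sketch of one way to realize the inversion idea described above: hold a pretrained generator fixed and optimize a latent vector to minimize a reconstruction loss against the target images. The tiny stand-in generator below exists only so the snippet runs end to end; the actual paper's procedure and loss may differ.

```python
import torch

def invert(generator, target, latent_dim=100, steps=500, lr=0.05):
    # Freeze the generator and optimize only the latent codes.
    generator.eval()
    for p in generator.parameters():
        p.requires_grad_(False)
    z = torch.randn(target.shape[0], latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(generator(z), target)  # reconstruction loss
        loss.backward()
        opt.step()
    return z.detach(), loss.item()

# Tiny stand-in generator so the sketch runs end to end.
toy_gen = torch.nn.Sequential(torch.nn.Linear(100, 64), torch.nn.Tanh())
with torch.no_grad():
    target = toy_gen(torch.randn(4, 100))   # "images" that lie on the generator's manifold
z_hat, final_loss = invert(toy_gen, target)
print(final_loss)   # should be close to zero for on-manifold targets
```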
{
"docid": "8bdd02547be77f4c825c9aed8016ddf8",
"text": "Global terrestrial ecosystems absorbed carbon at a rate of 1–4 Pg yr-1 during the 1980s and 1990s, offsetting 10–60 per cent of the fossil-fuel emissions. The regional patterns and causes of terrestrial carbon sources and sinks, however, remain uncertain. With increasing scientific and political interest in regional aspects of the global carbon cycle, there is a strong impetus to better understand the carbon balance of China. This is not only because China is the world’s most populous country and the largest emitter of fossil-fuel CO2 into the atmosphere, but also because it has experienced regionally distinct land-use histories and climate trends, which together control the carbon budget of its ecosystems. Here we analyse the current terrestrial carbon balance of China and its driving mechanisms during the 1980s and 1990s using three different methods: biomass and soil carbon inventories extrapolated by satellite greenness measurements, ecosystem models and atmospheric inversions. The three methods produce similar estimates of a net carbon sink in the range of 0.19–0.26 Pg carbon (PgC) per year, which is smaller than that in the conterminous United States but comparable to that in geographic Europe. We find that northeast China is a net source of CO2 to the atmosphere owing to overharvesting and degradation of forests. By contrast, southern China accounts for more than 65 per cent of the carbon sink, which can be attributed to regional climate change, large-scale plantation programmes active since the 1980s and shrub recovery. Shrub recovery is identified as the most uncertain factor contributing to the carbon sink. Our data and model results together indicate that China’s terrestrial ecosystems absorbed 28–37 per cent of its cumulated fossil carbon emissions during the 1980s and 1990s.",
"title": ""
},
{
"docid": "79ff4bd891538a0d1b5a002d531257f2",
"text": "Reverse conducting IGBTs are fabricated in a large productive volume for soft switching applications, such as inductive heaters, microwave ovens or lamp ballast, since several years. To satisfy the requirements of hard switching applications, such as inverters in refrigerators, air conditioners or general purpose drives, the reverse recovery behavior of the integrated diode has to be optimized. Two promising concepts for such an optimization are based on a reduction of the charge- carrier lifetime or the anti-latch p+ implantation dose. It is shown that a combination of both concepts will lead to a device with a good reverse recovery behavior, low forward and reverse voltage drop and excellent over current turn- off capability of a trench field-stop IGBT.",
"title": ""
},
{
"docid": "c3eaaa0812eb9ab7e5402339733daa28",
"text": "BACKGROUND\nHypovitaminosis D and a low calcium intake contribute to increased parathyroid function in elderly persons. Calcium and vitamin D supplements reduce this secondary hyperparathyroidism, but whether such supplements reduce the risk of hip fractures among elderly people is not known.\n\n\nMETHODS\nWe studied the effects of supplementation with vitamin D3 (cholecalciferol) and calcium on the frequency of hip fractures and other nonvertebral fractures, identified radiologically, in 3270 healthy ambulatory women (mean [+/- SD] age, 84 +/- 6 years). Each day for 18 months, 1634 women received tricalcium phosphate (containing 1.2 g of elemental calcium) and 20 micrograms (800 IU) of vitamin D3, and 1636 women received a double placebo. We measured serial serum parathyroid hormone and 25-hydroxyvitamin D (25(OH)D) concentrations in 142 women and determined the femoral bone mineral density at base line and after 18 months in 56 women.\n\n\nRESULTS\nAmong the women who completed the 18-month study, the number of hip fractures was 43 percent lower (P = 0.043) and the total number of nonvertebral fractures was 32 percent lower (P = 0.015) among the women treated with vitamin D3 and calcium than among those who received placebo. The results of analyses according to active treatment and according to intention to treat were similar. In the vitamin D3-calcium group, the mean serum parathyroid hormone concentration had decreased by 44 percent from the base-line value at 18 months (P < 0.001) and the serum 25(OH)D concentration had increased by 162 percent over the base-line value (P < 0.001). The bone density of the proximal femur increased 2.7 percent in the vitamin D3-calcium group and decreased 4.6 percent in the placebo group (P < 0.001).\n\n\nCONCLUSIONS\nSupplementation with vitamin D3 and calcium reduces the risk of hip fractures and other nonvertebral fractures among elderly women.",
"title": ""
},
{
"docid": "0ff3e49a700a776c1a8f748d78bc4b73",
"text": "Nightlight surveys are commonly used to evaluate status and trends of crocodilian populations, but imperfect detection caused by survey- and location-specific factors makes it difficult to draw population inferences accurately from uncorrected data. We used a two-stage hierarchical model comprising population abundance and detection probability to examine recent abundance trends of American alligators (Alligator mississippiensis) in subareas of Everglades wetlands in Florida using nightlight survey data. During 2001–2008, there were declining trends in abundance of small and/or medium sized animals in a majority of subareas, whereas abundance of large sized animals had either demonstrated an increased or unclear trend. For small and large sized class animals, estimated detection probability declined as water depth increased. Detection probability of small animals was much lower than for larger size classes. The declining trend of smaller alligators may reflect a natural population response to the fluctuating environment of Everglades wetlands under modified hydrology. It may have negative implications for the future of alligator populations in this region, particularly if habitat conditions do not favor recruitment of offspring in the near term. Our study provides a foundation to improve inferences made from nightlight surveys of other crocodilian populations.",
"title": ""
},
{
"docid": "895f912a24f00984922c586880f77dee",
"text": "Massive multiple-input multiple-output technology has been considered a breakthrough in wireless communication systems. It consists of equipping a base station with a large number of antennas to serve many active users in the same time-frequency block. Among its underlying advantages is the possibility to focus transmitted signal energy into very short-range areas, which will provide huge improvements in terms of system capacity. However, while this new concept renders many interesting benefits, it brings up new challenges that have called the attention of both industry and academia: channel state information acquisition, channel feedback, instantaneous reciprocity, statistical reciprocity, architectures, and hardware impairments, just to mention a few. This paper presents an overview of the basic concepts of massive multiple-input multiple-output, with a focus on the challenges and opportunities, based on contemporary research.",
"title": ""
},
{
"docid": "122e31e413efd0f96860661d461ce780",
"text": "Recent years have seen a dramatic increase in research and development of scientific workflow systems. These systems promise to make scientists more productive by automating data-driven and computeintensive analyses. Despite many early achievements, the long-term success of scientific workflow technology critically depends on making these systems useable by ‘‘mere mortals’’, i.e., scientists who have a very good idea of the analysis methods they wish to assemble, but who are neither software developers nor scripting-language experts. With these users in mind, we identify a set of desiderata for scientific workflow systems crucial for enabling scientists to model and design the workflows they wish to automate themselves. As a first step towards meeting these requirements, we also show how the collection-oriented modeling and design (comad) approach for scientific workflows, implemented within the Kepler system, can help provide these critical, design-oriented capabilities to scientists. © 2008 Elsevier B.V. All rights reserved.",
"title": ""
}
] | scidocsrr |
6c0a4d1ad7c8f4d369cb866fda7e4777 | EduRank: A Collaborative Filtering Approach to Personalization in E-learning | [
{
"docid": "45d57f01218522609d6ef93de61ea491",
"text": "We consider the problem of finding a ranking of a set of elements that is “closest to” a given set of input rankings of the elements; more precisely, we want to find a permutation that minimizes the Kendall-tau distance to the input rankings, where the Kendall-tau distance is defined as the sum over all input rankings of the number of pairs of elements that are in a different order in the input ranking than in the output ranking. If the input rankings are permutations, this problem is known as the Kemeny rank aggregation problem. This problem arises for example in building meta-search engines for Web search, aggregating viewers’ rankings of movies, or giving recommendations to a user based on several different criteria, where we can think of having one ranking of the alternatives for each criterion. Many of the approximation algorithms and heuristics that have been proposed in the literature are either positional, comparison sort or local search algorithms. The rank aggregation problem is a special case of the (weighted) feedback arc set problem, but in the feedback arc set problem we use only information about the preferred relative ordering of pairs of elements to find a ranking of the elements, whereas in the case of the rank aggregation problem, we have additional information in the form of the complete input rankings. The positional methods are the only algorithms that use this additional information. Since the rank aggregation problem is NP-hard, none of these algorithms is guaranteed to find the optimal solution, and different algorithms will provide different solutions. We give theoretical and practical evidence that a combination of these different approaches gives algorithms that are superior to the individual algorithms. Theoretically, we give lower bounds on the performance for many of the “pure” methods. Practically, we perform an extensive evaluation of the “pure” algorithms and ∗Institute for Theoretical Computer Science, Tsinghua University, Beijing, China. frans@mail.tsinghua.edu.cn. Research performed in part while the author was at Nature Source Genetics, Ithaca, NY. †Institute for Theoretical Computer Science, Tsinghua University, Beijing, China. anke@mail.tsinghua.edu.cn. Research partly supported by NSF grant CCF-0514628 and performed in part while the author was at the School of Operations Research and Information Engineering at Cornell University, Ithaca, NY. combinations of different approaches. We give three recommendations for which (combination of) methods to use based on whether a user wants to have a very fast, fast or reasonably fast algorithm.",
"title": ""
}
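To make the objective above concrete, the sketch below computes the Kendall-tau distance between two rankings (the number of pairs ordered differently) and uses a Borda-style positional heuristic as one simple aggregation baseline; it is not the combined algorithm the passage evaluates.

```python
from itertools import combinations

def kendall_tau(r1, r2):
    # Number of element pairs ordered differently by the two rankings.
    pos1 = {e: i for i, e in enumerate(r1)}
    pos2 = {e: i for i, e in enumerate(r2)}
    return sum(
        1
        for a, b in combinations(r1, 2)
        if (pos1[a] - pos1[b]) * (pos2[a] - pos2[b]) < 0
    )

def borda(rankings):
    # Positional heuristic: rank elements by their summed positions.
    elems = rankings[0]
    score = {e: sum(r.index(e) for r in rankings) for e in elems}
    return sorted(elems, key=lambda e: score[e])

votes = [["a", "b", "c", "d"], ["a", "c", "b", "d"], ["b", "a", "c", "d"]]
agg = borda(votes)
# Kemeny aggregation would pick the permutation minimizing this summed distance.
print(agg, sum(kendall_tau(agg, v) for v in votes))
```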
] | [
{
"docid": "e73c560679c9d856390c672ebc66d571",
"text": "{ This paper describes a complete coverage path planning and guidance methodology for a mobile robot, having the automatic oor cleaning of large industrial areas as a target application. The proposed algorithms rely on the a priori knowledge of a 2D map of the environment and cope with unexpected obstacles not represented on the map. A template based approach is used to control the path execution, thus incorporating, in a natural way, the kinematic and the geometric model of the mobile robot on the path planning procedure. The novelty of the proposed approach is the capability of the path planner to deal with a priori mapped or unexpected obstacles in the middle of the working space. If unmapped obstacles permanently block the planned trajectory, the path tracking control avoids these obstacles. The paper presents experimental results with a LABMATE mobile robot, connrming the feasibility of the total coverage path and the robustness of the path tracking behaviour based control.",
"title": ""
},
{
"docid": "47790125ba78325a4455fcdbae96058a",
"text": "Today solar energy became an important resource of energy generation. But the efficiency of solar system is very low. To increase its efficiency MPPT techniques are used. The main disadvantage of solar system is its variable voltage. And to obtained a stable voltage from solar panels DC-DC converters are used . DC-DC converters are of mainly three types buck, boost and cuk. This paper presents use of cuk converter with MPPT technique. Generally buck and boost converters used. But by using cuk converter we can step up or step down the voltage level according to the load requirement. The circuit has been simulated by MATLAB and Simulink softwares.",
"title": ""
},
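The passage does not say which MPPT technique is paired with the cuk converter; perturb-and-observe is one common choice, sketched below against a made-up power-voltage curve purely to show the hill-climbing idea.

```python
def pv_power(v):
    # Toy power-voltage curve with a single maximum of 60 W near 17 V.
    return max(0.0, 60.0 - 0.5 * (v - 17.0) ** 2)

def perturb_and_observe(v0=12.0, step=0.2, iters=100):
    v, p = v0, pv_power(v0)
    direction = 1.0
    for _ in range(iters):
        v_new = v + direction * step
        p_new = pv_power(v_new)
        if p_new < p:
            direction = -direction      # power dropped: reverse the perturbation
        v, p = v_new, p_new
    return v, p

v_mpp, p_mpp = perturb_and_observe()
print(round(v_mpp, 2), round(p_mpp, 2))   # settles into oscillation around the ~17 V maximum
```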
{
"docid": "fc3af1e7ebc13605938d8f8238d9c8bd",
"text": "Detecting objects becomes difficult when we need to deal with large shape deformation, occlusion and low resolution. We propose a novel approach to i) handle large deformations and partial occlusions in animals (as examples of highly deformable objects), ii) describe them in terms of body parts, and iii) detect them when their body parts are hard to detect (e.g., animals depicted at low resolution). We represent the holistic object and body parts separately and use a fully connected model to arrange templates for the holistic object and body parts. Our model automatically decouples the holistic object or body parts from the model when they are hard to detect. This enables us to represent a large number of holistic object and body part combinations to better deal with different \"detectability\" patterns caused by deformations, occlusion and/or low resolution. We apply our method to the six animal categories in the PASCAL VOC dataset and show that our method significantly improves state-of-the-art (by 4.1% AP) and provides a richer representation for objects. During training we use annotations for body parts (e.g., head, torso, etc.), making use of a new dataset of fully annotated object parts for PASCAL VOC 2010, which provides a mask for each part.",
"title": ""
},
{
"docid": "0d23abee044cf8c793a285146f0669a5",
"text": "Water cycle algorithm (WCA) is a new population-based meta-heuristic technique. It is originally inspired by idealized hydrological cycle observed in natural environment. The conventional WCA is capable to demonstrate a superior performance compared to other well-established techniques in solving constrained and also unconstrained problems. Similar to other meta-heuristics, premature convergence to local optima may still be happened in dealing with some specific optimization tasks. Similar to chaos in real water cycle behavior, this article incorporates chaotic patterns into stochastic processes of WCA to improve the performance of conventional algorithm and to mitigate its premature convergence problem. First, different chaotic signal functions along with various chaotic-enhanced WCA strategies (totally 39 meta-heuristics) are implemented, and the best signal is preferred as the most appropriate chaotic technique for modification of WCA. Second, the chaotic algorithm is employed to tackle various benchmark problems published in the specialized literature and also training of neural networks. The comparative statistical results of new technique vividly demonstrate that premature convergence problem is relieved significantly. Chaotic WCA with sinusoidal map and chaotic-enhanced operators not only can exploit high-quality solutions efficiently but can outperform WCA optimizer and other investigated algorithms.",
"title": ""
},
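A small sketch of the ingredient the passage singles out: a sinusoidal chaotic map generating a deterministic sequence in (0, 1) that can replace uniform random draws inside a metaheuristic such as WCA. The parameter a = 2.3 and seed 0.7 are common choices in the chaos-enhanced metaheuristic literature, not values taken from this paper.

```python
import math

def sinusoidal_map(x0=0.7, n=10, a=2.3):
    # x_{k+1} = a * x_k^2 * sin(pi * x_k); stays within (0, 1) for these settings.
    xs, x = [], x0
    for _ in range(n):
        x = a * x * x * math.sin(math.pi * x)
        xs.append(x)
    return xs

# These values could be consumed wherever the algorithm would otherwise call a uniform RNG.
print([round(v, 4) for v in sinusoidal_map()])
```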
{
"docid": "5a2c6049e23473a5845b17da4101ab41",
"text": "This paper discusses the design of a battery-less wirelessly-powered UWB system-on-a-chip (SoC) tag for area-constrained localization applications. An antenna-rectifier co-design methodology optimizes sensitivity and increases range under tag area constraints. A low-voltage (0.8-V) UWB TX enables high rectifier sensitivity by reducing required rectifier output voltage. The 2.4GHz rectifier, power-management unit and 8GHz UWB TX are integrated in 65nm CMOS and the rectifier demonstrates state-of-the-art -30.7dBm sensitivity for 1V output with only 1.3cm2 antenna area, representing a 2.3× improvement in sensitivity over previously published work, at 2.6× higher frequency with 9× smaller antenna area. Measurements in an office corridor demonstrate 20m range with 36dBm TX EIRP. The 0.8-V 8GHz UWB TX consumes 64pJ/pulse at 28MHz pulse repetition rate and achieves 2.4GHz -10dB bandwidth. Wireless measurements demonstrate sub-10cm range resolution at range > 10m.",
"title": ""
},
{
"docid": "d3dde75d07ad4ed79ff1da2c3a601e1d",
"text": "In open trials, 1-Hz repetitive transcranial magnetic stimulation (rTMS) to the supplementary motor area (SMA) improved symptoms and normalized cortical hyper-excitability of patients with obsessive-compulsive disorder (OCD). Here we present the results of a randomized sham-controlled double-blind study. Medication-resistant OCD patients (n=21) were assigned 4 wk either active or sham rTMS to the SMA bilaterally. rTMS parameters consisted of 1200 pulses/d, at 1 Hz and 100% of motor threshold (MT). Eighteen patients completed the study. Response to treatment was defined as a > or = 25% decrease on the Yale-Brown Obsessive Compulsive Scale (YBOCS). Non-responders to sham and responders to active or sham rTMS were offered four additional weeks of open active rTMS. After 4 wk, the response rate in the completer sample was 67% (6/9) with active and 22% (2/9) with sham rTMS. At 4 wk, patients receiving active rTMS showed on average a 25% reduction in the YBOCS compared to a 12% reduction in those receiving sham. In those who received 8-wk active rTMS, OCD symptoms improved from 28.2+/-5.8 to 14.5+/-3.6. In patients randomized to active rTMS, MT measures on the right hemisphere increased significantly over time. At the end of 4-wk rTMS the abnormal hemispheric laterality found in the group randomized to active rTMS normalized. The results of the first randomized sham-controlled trial of SMA stimulation in the treatment of resistant OCD support further investigation into the potential therapeutic applications of rTMS in this disabling condition.",
"title": ""
},
{
"docid": "b150e9aef47001e1b643556f64c5741d",
"text": "BACKGROUND\nMany adolescents have poor mental health literacy, stigmatising attitudes towards people with mental illness, and lack skills in providing optimal Mental Health First Aid to peers. These could be improved with training to facilitate better social support and increase appropriate help-seeking among adolescents with emerging mental health problems. teen Mental Health First Aid (teen MHFA), a new initiative of Mental Health First Aid International, is a 3 × 75 min classroom based training program for students aged 15-18 years.\n\n\nMETHODS\nAn uncontrolled pilot of the teen MHFA course was undertaken to examine the feasibility of providing the program in Australian secondary schools, to test relevant measures of student knowledge, attitudes and behaviours, and to provide initial evidence of program effects.\n\n\nRESULTS\nAcross four schools, 988 students received the teen MHFA program. 520 students with a mean age of 16 years completed the baseline questionnaire, 345 completed the post-test and 241 completed the three-month follow-up. Statistically significant improvements were found in mental health literacy, confidence in providing Mental Health First Aid to a peer, help-seeking intentions and student mental health, while stigmatising attitudes significantly reduced.\n\n\nCONCLUSIONS\nteen MHFA appears to be an effective and feasible program for training high school students in Mental Health First Aid techniques. Further research is required with a randomized controlled design to elucidate the causal role of the program in the changes observed.",
"title": ""
},
{
"docid": "2de4de4a7b612fd8d87a40780acdd591",
"text": "In the past decade, advances in speed of commodity CPUs have far out-paced advances in memory latency. Main-memory access is therefore increasingly a performance bottleneck for many computer applications, including database systems. In this article, we use a simple scan test to show the severe impact of this bottleneck. The insights gained are translated into guidelines for database architecture; in terms of both data structures and algorithms. We discuss how vertically fragmented data structures optimize cache performance on sequential data access. We then focus on equi-join, typically a random-access operation, and introduce radix algorithms for partitioned hash-join. The performance of these algorithms is quantified using a detailed analytical model that incorporates memory access cost. Experiments that validate this model were performed on the Monet database system. We obtained exact statistics on events like TLB misses, L1 and L2 cache misses, by using hardware performance counters found in modern CPUs. Using our cost model, we show how the carefully tuned memory access pattern of our radix algorithms make them perform well, which is confirmed by experimental results. *This work was carried out when the author was at the University of Amsterdam, supported by SION grant 612-23-431 Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the VLDB copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Very Large Data Base Endowment. To copy otherwise, or to republish, requires a fee and/or special permission from the Endowment. Proceedings of the 25th VLDB Conference, Edinburgh, Scotland, 1999.",
"title": ""
},
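A simplified single-pass sketch of the radix-partitioned hash join idea mentioned above: both inputs are split on the low-order bits of the join key so each partition's hash table stays small (cache-resident in the real algorithm), then the partitions are joined pairwise. The multi-pass partitioning and the memory-access cost model from the passage are omitted.

```python
from collections import defaultdict

def radix_partition(rows, key, bits=4):
    mask = (1 << bits) - 1
    parts = defaultdict(list)
    for row in rows:
        parts[key(row) & mask].append(row)   # partition on the low-order key bits
    return parts

def partitioned_hash_join(left, right, key_l, key_r, bits=4):
    pl = radix_partition(left, key_l, bits)
    pr = radix_partition(right, key_r, bits)
    out = []
    for p, lrows in pl.items():
        table = defaultdict(list)
        for row in lrows:                    # build a per-partition hash table
            table[key_l(row)].append(row)
        for row in pr.get(p, []):            # probe with the matching partition only
            out.extend((l, row) for l in table.get(key_r(row), []))
    return out

left = [(i, f"L{i}") for i in range(8)]
right = [(i % 4, f"R{i}") for i in range(6)]
print(partitioned_hash_join(left, right, key_l=lambda r: r[0], key_r=lambda r: r[0]))
```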
{
"docid": "df106178a7c928318cf116b608ca31b3",
"text": "Toothpaste is a paste or gel to be used with a toothbrush to maintain and improve oral health and aesthetics. Since their introduction several thousand years ago, toothpaste formulations have evolved considerably - from suspensions of crushed egg shells or ashes to complex formulations with often more than 20 ingredients. Among these can be compounds to combat dental caries, gum disease, malodor, calculus, erosion and dentin hypersensitivity. Furthermore, toothpastes contain abrasives to clean and whiten teeth, flavors for the purpose of breath freshening and dyes for better visual appeal. Effective toothpastes are those that are formulated for maximum bioavailability of their actives. This, however, can be challenging as compromises will have to be made when several different actives are formulated in one phase. Toothpaste development is by no means complete as many challenges and especially the poor oral substantivity of most active ingredients are yet to overcome.",
"title": ""
},
{
"docid": "7c1691fd1140b3975b61f8e2ce3dcd9b",
"text": "In this paper, we consider the evolution of structure within large online social networks. We present a series of measurements of two such networks, together comprising in excess of five million people and ten million friendship links, annotated with metadata capturing the time of every event in the life of the network. Our measurements expose a surprising segmentation of these networks into three regions: singletons who do not participate in the network; isolated communities which overwhelmingly display star structure; and a giant component anchored by a well-connected core region which persists even in the absence of stars.We present a simple model of network growth which captures these aspects of component structure. The model follows our experimental results, characterizing users as either passive members of the network; inviters who encourage offline friends and acquaintances to migrate online; and linkers who fully participate in the social evolution of the network.",
"title": ""
},
{
"docid": "3ba87a9a84f317ef3fd97c79f86340c1",
"text": "Programmers often need to reason about how a program evolved between two or more program versions. Reasoning about program changes is challenging as there is a significant gap between how programmers think about changes and how existing program differencing tools represent such changes. For example, even though modification of a locking protocol is conceptually simple and systematic at a code level, diff extracts scattered text additions and deletions per file. To enable programmers to reason about program differences at a high level, this paper proposes a rule-based program differencing approach that automatically discovers and represents systematic changes as logic rules. To demonstrate the viability of this approach, we instantiated this approach at two different abstraction levels in Java: first at the level of application programming interface (API) names and signatures, and second at the level of code elements (e.g., types, methods, and fields) and structural dependences (e.g., method-calls, field-accesses, and subtyping relationships). The benefit of this approach is demonstrated through its application to several open source projects as well as a focus group study with professional software engineers from a large e-commerce company.",
"title": ""
},
{
"docid": "b231f2c6b19d5c38b8aa99ec1b1e43da",
"text": "Many models of social network formation implicitly assume that network properties are static in steady-state. In contrast, actual social networks are highly dynamic: allegiances and collaborations expire and may or may not be renewed at a later date. Moreover, empirical studies show that human social networks are dynamic at the individual level but static at the global level: individuals’ degree rankings change considerably over time, whereas network-level metrics such as network diameter and clustering coefficient are relatively stable. There have been some attempts to explain these properties of empirical social networks using agent-based models in which agents play social dilemma games with their immediate neighbours, but can also manipulate their network connections to strategic advantage. However, such models cannot straightforwardly account for reciprocal behaviour based on reputation scores (“indirect reciprocity”), which is known to play an important role in many economic interactions. In order to account for indirect reciprocity, we model the network in a bottom-up fashion: the network emerges from the low-level interactions between agents. By so doing we are able to simultaneously account for the effect of both direct reciprocity (e.g. “tit-for-tat”) as well as indirect reciprocity (helping strangers in order to increase one’s reputation). This leads to a strategic equilibrium in the frequencies with which strategies are adopted in the population as a whole, but intermittent cycling over different strategies at the level of individual agents, which in turn gives rise to social networks which are dynamic at the individual level but stable at the network level.",
"title": ""
},
{
"docid": "e088ad55f29634e036f291a6131ac669",
"text": "In this paper, we present a novel anomaly detection framework which integrates motion and appearance cues to detect abnormal objects and behaviors in video. For motion anomaly detection, we employ statistical histograms to model the normal motion distributions and propose a notion of “cut-bin” in histograms to distinguish unusual motions. For appearance anomaly detection, we develop a novel scheme based on Support Vector Data Description (SVDD), which obtains a spherically shaped boundary around the normal objects to exclude abnormal objects. The two complementary cues are finally combined to achieve more comprehensive detection results. Experimental results show that the proposed approach can effectively locate abnormal objects in multiple public video scenarios, achieving comparable performance to other state-of-the-art anomaly detection techniques. & 2015 Elsevier Ltd. All rights reserved.",
"title": ""
},
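A minimal sketch of the appearance-side idea in the passage above, using scikit-learn's one-class SVM as a stand-in for SVDD (the two are closely related under an RBF kernel). The feature dimensionality, kernel settings, and rejection rate `nu` are illustrative assumptions, not the authors' configuration.

```python
import numpy as np
from sklearn.svm import OneClassSVM

# Toy appearance features for "normal" objects (e.g. colour histograms or HOG).
rng = np.random.default_rng(0)
normal_feats = rng.normal(loc=0.0, scale=1.0, size=(500, 16))

# A one-class SVM with an RBF kernel behaves like SVDD: it learns a closed
# boundary around the normal data; nu bounds the fraction of rejected samples.
model = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(normal_feats)

# Score new observations: +1 = inside the boundary (normal), -1 = anomalous.
test_feats = np.vstack([rng.normal(0, 1, (5, 16)),   # normal-looking samples
                        rng.normal(6, 1, (5, 16))])  # far from the learned model
labels = model.predict(test_feats)
scores = model.decision_function(test_feats)         # signed distance to boundary
for lbl, sc in zip(labels, scores):
    print("anomalous" if lbl == -1 else "normal", round(float(sc), 3))
```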
{
"docid": "be722a19b56ef604d6fe24012470e61f",
"text": "In this paper, we derive optimality results for greedy Bayesian-network search algo rithms that perform single-edge modifica tions at each step and use asymptotically consistent scoring criteria. Our results ex tend those of Meek (1997) and Chickering (2002), who demonstrate that in the limit of large datasets, if the generative distribu tion is perfect with respect to a DAG defined over the observable variables, such search al gorithms will identify this optimal (i.e. gen erative) DAG model. We relax their assump tion about the generative distribution, and assume only that this distribution satisfies the composition property over the observable variables, which is a more realistic assump tion for real domains. Under this assump tion, we guarantee that the search algorithms identify an inclusion-optimal model; that is, a model that (1) contains the generative dis tribution and (2) has no sub-model that con tains this distribution. In addition, we show that the composition property is guaranteed to hold whenever the dependence relation ships in the generative distribution can be characterized by paths between singleton el ements in some generative graphical model (e.g. a DAG, a chain graph, or a Markov network) even when the generative model in cludes unobserved variables, and even when the observed data is subject to selection bias.",
"title": ""
},
{
"docid": "f16ab00d323e4169117eecb72bcb330e",
"text": "Despite the availability of various substance abuse treatments, alcohol and drug misuse and related negative consequences remain prevalent. Vipassana meditation (VM), a Buddhist mindfulness-based practice, provides an alternative for individuals who do not wish to attend or have not succeeded with traditional addiction treatments. In this study, the authors evaluated the effectiveness of a VM course on substance use and psychosocial outcomes in an incarcerated population. Results indicate that after release from jail, participants in the VM course, as compared with those in a treatment-as-usual control condition, showed significant reductions in alcohol, marijuana, and crack cocaine use. VM participants showed decreases in alcohol-related problems and psychiatric symptoms as well as increases in positive psychosocial outcomes. The utility of mindfulness-based treatments for substance use is discussed.",
"title": ""
},
{
"docid": "0c6afb06f8d230943c6855dcb4dd4392",
"text": "The home computer user is often said to be the weakest link in computer security. They do not always follow security advice, and they take actions, as in phishing, that compromise themselves. In general, we do not understand why users do not always behave safely, which would seem to be in their best interest. This paper reviews the literature of surveys and studies of factors that influence security decisions for home computer users. We organize the review in four sections: understanding of threats, perceptions of risky behavior, efforts to avoid security breaches and attitudes to security interventions. We find that these studies reveal a lot of reasons why current security measures may not match the needs or abilities of home computer users and suggest future work needed to inform how security is delivered to this user group.",
"title": ""
},
{
"docid": "9b85f81f50cf94a3a076b202ba94ab82",
"text": "Growing accuracy and robustness of Deep Neural Networks (DNN) models are accompanied by growing model capacity (going deeper or wider). However, high memory requirements of those models make it difficult to execute the training process in one GPU. To address it, we first identify the memory usage characteristics for deep and wide convolutional networks, and demonstrate the opportunities of memory reuse on both intra-layer and inter-layer levels. We then present Layrub, a runtime data placement strategy that orchestrates the execution of training process. It achieves layer-centric reuse to reduce memory consumption for extreme-scale deep learning that cannot be run on one single GPU.",
"title": ""
},
{
"docid": "05e8879a48e3a9808ec74b5bf225c562",
"text": "Although peribronchial lymphatic drainage of the lung has been well characterized, lymphatic drainage in the visceral pleura is less well understood. The objective of the present study was to evaluate the lymphatic drainage of lung segments in the visceral pleura. Adult, European cadavers were examined. Cadavers with a history of pleural or pulmonary disease were excluded. The cadavers had been refrigerated but not embalmed. The lungs were surgically removed and re-warmed. Blue dye was injected into the subpleural area and into the first draining visceral pleural lymphatic vessel of each lung segment. Twenty-one cadavers (7 males and 14 females; mean age 80.9 years) were dissected an average of 9.8 day postmortem. A total of 380 dye injections (in 95 lobes) were performed. Lymphatic drainage of the visceral pleura followed a segmental pathway in 44.2% of the injections (n = 168) and an intersegmental pathway in 55.8% (n = 212). Drainage was found to be both intersegmental and interlobar in 2.6% of the injections (n = 10). Lymphatic drainage in the visceral pleura followed an intersegmental pathway in 22.8% (n = 13) of right upper lobe injections, 57.9% (n = 22) of right middle lobe injections, 83.3% (n = 75) of right lower lobe injections, 21% (n = 21) of left upper lobe injections, and 85.3% (n = 81) of left lower lobe injections. In the lung, lymphatic drainage in the visceral pleura appears to be more intersegmental than the peribronchial pathway is—especially in the lower lobes. The involvement of intersegmental lymphatic drainage in the visceral pleura should now be evaluated during pulmonary resections (and especially sub-lobar resections) for lung cancer.",
"title": ""
},
{
"docid": "ac808ecd75ccee74fff89d03e3396f26",
"text": "This paper presents an image analysis algorithm to detect and count yellow tomato flowers in a greenhouse with uneven illumination conditions, complex growth conditions and different flower sizes. The algorithm is designed to be employed on a drone that flies in greenhouses to accomplish several tasks such as pollination and yield estimation. Detecting the flowers can provide useful information for the farmer, such as the number of flowers in a row, and the number of flowers that were pollinated since the last visit to the row. The developed algorithm is designed to handle the real world difficulties in a greenhouse which include varying lighting conditions, shadowing, and occlusion, while considering the computational limitations of the simple processor in the drone. The algorithm identifies flowers using an adaptive global threshold, segmentation over the HSV color space, and morphological cues. The adaptive threshold divides the images into darker and lighter images. Then, segmentation on the hue, saturation and volume is performed accordingly, and classification is done according to size and location of the flowers. 1069 images of greenhouse tomato flowers were acquired in a commercial greenhouse in Israel, using two different RGB Cameras – an LG G4 smartphone and a Canon PowerShot A590. The images were acquired from multiple angles and distances and were sampled manually at various periods along the day to obtain varying lighting conditions. Ground truth was created by manually tagging approximately 25,000 individual flowers in the images. Sensitivity analyses on the acquisition angle of the images, periods throughout the day, different cameras and thresholding types were performed. Precision, recall and their derived F1 score were calculated. Results indicate better performance for the view angle facing the flowers than any other angle. Acquiring images in the afternoon resulted with the best precision and recall results. Applying a global adaptive threshold improved the median F1 score by 3%. Results showed no difference between the two cameras used. Using hue values of 0.12-0.18 in the segmentation process provided the best results in precision and recall, and the best F1 score. The precision and recall average for all the images when using these values was 74% and 75% respectively with an F1 score of 0.73. Further analysis showed a 5% increase in precision and recall when analyzing images acquired in the afternoon and from the front viewpoint. Keywords—Agricultural engineering, computer vision, image processing, flower detection.",
"title": ""
},
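The hue-band segmentation described in the tomato-flower passage can be sketched roughly as follows. The hue range 0.12–0.18 is rescaled to OpenCV's 0–179 hue axis (about 21–32); the saturation/value bounds, kernel size, area threshold, and image path are placeholder assumptions rather than the paper's calibration.

```python
import cv2
import numpy as np

img = cv2.imread("greenhouse_row.jpg")            # placeholder path
assert img is not None, "replace with a real image path"
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Hue 0.12-0.18 on a 0-1 scale maps to roughly 21-32 on OpenCV's 0-179 scale.
# Saturation/value bounds are illustrative guesses, not the paper's settings.
lower = np.array([21, 80, 80], dtype=np.uint8)
upper = np.array([32, 255, 255], dtype=np.uint8)
mask = cv2.inRange(hsv, lower, upper)

# Morphological opening removes speckle; tiny blobs are then discarded by area.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
flowers = [i for i in range(1, n) if stats[i, cv2.CC_STAT_AREA] > 150]
print(f"detected {len(flowers)} candidate flowers")
```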
{
"docid": "6e77a99b6b0ddf18560580fed1ca5bbe",
"text": "Theoretical analysis of the connection between taxation and risktaking has mainly been concerned with the effect of taxes on portfolio decisions of consumers, Mossin (1968b) and Stiglitz (1969). However, there are some problems which are not naturally classified under this heading and which, although of considerable practical interest, have been left out of the theoretical discussions. One such problem is tax evasion. This takes many forms, and one can hardly hope to give a completely general analysis of all these. Our objective in this paper is therefore the more limited one of analyzing the individual taxpayer’s decision on whether and to what extent to avoid taxes by deliberate underreporting. On the one hand our approach is related to the studies of economics of criminal activity, as e.g. in the papers by Becker ( 1968) and by Tulkens and Jacquemin (197 1). On the other hand it is related to the analysis of optimal portfolio and insurance policies in the economics of uncertainty, as in the work by Arrow ( 1970), Mossin ( 1968a) and several others. We shall start by considering a simple static model where this decision is the only one with which the individual is concerned, so that we ignore the interrelationships that probably exist with other types of economic choices. After a detailed study of this simple case (sections",
"title": ""
}
] | scidocsrr |
3489d1d49350cc9ce296c29ba1c5d1cf | Economics of Internet of Things (IoT): An Information Market Approach | [
{
"docid": "24a164e7d6392b052f8a36e20e9c4f69",
"text": "The initial vision of the Internet of Things was of a world in which all physical objects are tagged and uniquely identified by RFID transponders. However, the concept has grown into multiple dimensions, encompassing sensor networks able to provide real-world intelligence and goal-oriented collaboration of distributed smart objects via local networks or global interconnections such as the Internet. Despite significant technological advances, difficulties associated with the evaluation of IoT solutions under realistic conditions in real-world experimental deployments still hamper their maturation and significant rollout. In this article we identify requirements for the next generation of IoT experimental facilities. While providing a taxonomy, we also survey currently available research testbeds, identify existing gaps, and suggest new directions based on experience from recent efforts in this field.",
"title": ""
}
] | [
{
"docid": "1885ee33c09d943736b03895f41cea06",
"text": "Since the late 1990s, there has been a burst of research on robotic devices for poststroke rehabilitation. Robot-mediated therapy produced improvements on recovery of motor capacity; however, so far, the use of robots has not shown qualitative benefit over classical therapist-led training sessions, performed on the same quantity of movements. Multidegree-of-freedom robots, like the modern upper-limb exoskeletons, enable a distributed interaction on the whole assisted limb and can exploit a large amount of sensory feedback data, potentially providing new capabilities within standard rehabilitation sessions. Surprisingly, most publications in the field of exoskeletons focused only on mechatronic design of the devices, while little details were given to the control aspects. On the contrary, we believe a paramount aspect for robots potentiality lies on the control side. Therefore, the aim of this review is to provide a taxonomy of currently available control strategies for exoskeletons for neurorehabilitation, in order to formulate appropriate questions toward the development of innovative and improved control strategies.",
"title": ""
},
{
"docid": "2683c65d587e8febe45296f1c124e04d",
"text": "We present a new autoencoder-type architecture, that is trainable in an unsupervised mode, sustains both generation and inference, and has the quality of conditional and unconditional samples boosted by adversarial learning. Unlike previous hybrids of autoencoders and adversarial networks, the adversarial game in our approach is set up directly between the encoder and the generator, and no external mappings are trained in the process of learning. The game objective compares the divergences of each of the real and the generated data distributions with the canonical distribution in the latent space. We show that direct generator-vs-encoder game leads to a tight coupling of the two components, resulting in samples and reconstructions of a comparable quality to some recently-proposed more complex architectures.",
"title": ""
},
{
"docid": "48096a9a7948a3842afc082fa6e223a6",
"text": "We present a method for using previously-trained ‘teacher’ agents to kickstart the training of a new ‘student’ agent. To this end, we leverage ideas from policy distillation (Rusu et al., 2015; Parisotto et al., 2015) and population based training (Jaderberg et al., 2017). Our method places no constraints on the architecture of the teacher or student agents, and it regulates itself to allow the students to surpass their teachers in performance. We show that, on a challenging and computationally-intensive multi-task benchmark (Beattie et al., 2016), kickstarted training improves the data efficiency of new agents, making it significantly easier to iterate on their design. We also show that the same kickstarting pipeline can allow a single student agent to leverage multiple ‘expert’ teachers which specialise on individual tasks. In this setting kickstarting yields surprisingly large gains, with the kickstarted agent matching the performance of an agent trained from scratch in almost 10× fewer steps, and surpassing its final performance by 42%. Kickstarting is conceptually simple and can easily be incorporated into reinforcement learning experiments.",
"title": ""
},
{
"docid": "9694bc859dd5295c40d36230cf6fd1b9",
"text": "In the past two decades, the synthetic style and fashion drug \"crystal meth\" (\"crystal\", \"meth\"), chemically representing the crystalline form of the methamphetamine hydrochloride, has become more and more popular in the United States, in Eastern Europe, and just recently in Central and Western Europe. \"Meth\" is cheap, easy to synthesize and to market, and has an extremely high potential for abuse and dependence. As a strong sympathomimetic, \"meth\" has the potency to switch off hunger, fatigue and, pain while simultaneously increasing physical and mental performance. The most relevant side effects are heart and circulatory complaints, severe psychotic attacks, personality changes, and progressive neurodegeneration. Another effect is \"meth mouth\", defined as serious tooth and oral health damage after long-standing \"meth\" abuse; this condition may become increasingly relevant in dentistry and oral- and maxillofacial surgery. There might be an association between general methamphetamine abuse and the development of osteonecrosis, similar to the medication-related osteonecrosis of the jaws (MRONJ). Several case reports concerning \"meth\" patients after tooth extractions or oral surgery have presented clinical pictures similar to MRONJ. This overview summarizes the most relevant aspect concerning \"crystal meth\" abuse and \"meth mouth\".",
"title": ""
},
{
"docid": "c0dbd6356ead3a9542c9ec20dd781cc7",
"text": "This paper aims to address the importance of supportive teacher–student interactions within the learning environment. This will be explored through the three elements of the NSW Quality Teaching Model; Intellectual Quality, Quality Learning Environment and Significance. The paper will further observe the influences of gender on the teacher–student relationship, as well as the impact that this relationship has on student academic outcomes and behaviour. Teacher–student relationships have been found to have immeasurable effects on students’ learning and their schooling experience. This paper examines the ways in which educators should plan to improve their interactions with students, in order to allow for quality learning. This journal article is available in Journal of Student Engagement: Education Matters: http://ro.uow.edu.au/jseem/vol2/iss1/2 Journal of Student Engagement: Education matters 2012, 2 (1), 2–9 Lauren Liberante 2 The importance of teacher–student relationships, as explored through the lens of the NSW Quality Teaching Model",
"title": ""
},
{
"docid": "bb482edabdb07f412ca13a728b7fd25c",
"text": "This paper addresses the problem of category-level 3D object detection. Given a monocular image, our aim is to localize the objects in 3D by enclosing them with tight oriented 3D bounding boxes. We propose a novel approach that extends the deformable part-based model [1] to reason in 3D. Our model represents an object class as a deformable 3D cuboid composed of faces and parts, which are both allowed to deform with respect to their anchors on the 3D box. We model the appearance of each face in fronto-parallel coordinates, thus effectively factoring out the appearance variation induced by viewpoint. We train the cuboid model jointly and discriminatively. In inference we slide and rotate the box in 3D to score the object hypotheses. We evaluate our approach in indoor and outdoor scenarios, and show that our approach outperforms the state-of-the-art in both 2D [1] and 3D object detection [3].",
"title": ""
},
{
"docid": "dd51cc2138760f1dcdce6e150cabda19",
"text": "Breast cancer is the most common cancer in women worldwide. The most common screening technology is mammography. To reduce the cost and workload of radiologists, we propose a computer aided detection approach for classifying and localizing calcifications and masses in mammogram images. To improve on conventional approaches, we apply deep convolutional neural networks (CNN) for automatic feature learning and classifier building. In computer-aided mammography, deep CNN classifiers cannot be trained directly on full mammogram images because of the loss of image details from resizing at input layers. Instead, our classifiers are trained on labelled image patches and then adapted to work on full mammogram images for localizing the abnormalities. State-of-the-art deep convolutional neural networks are compared on their performance of classifying the abnormalities. Experimental results indicate that VGGNet receives the best overall accuracy at 92.53% in classifications. For localizing abnormalities, ResNet is selected for computing class activation maps because it is ready to be deployed without structural change or further training. Our approach demonstrates that deep convolutional neural network classifiers have remarkable localization capabilities despite no supervision on the location of abnormalities is provided.",
"title": ""
},
{
"docid": "a839016be99c3cb93d30fa48403086d8",
"text": "At synapses of the mammalian central nervous system, release of neurotransmitter occurs at rates transiently as high as 100 Hz, putting extreme demands on nerve terminals with only tens of functional vesicles at their disposal. Thus, the presynaptic vesicle cycle is particularly critical to maintain neurotransmission. To understand vesicle cycling at the most fundamental level, we studied single vesicles undergoing exo/endocytosis and tracked the fate of newly retrieved vesicles. This was accomplished by minimally stimulating boutons in the presence of the membrane-fluorescent styryl dye FM1-43, then selecting for terminals that contained only one dye-filled vesicle. We then observed the kinetics of dye release during single action potential stimulation. We found that most vesicles lost only a portion of their total dye during a single fusion event, but were able to fuse again soon thereafter. We interpret this as direct evidence of \"kiss-and-run\" followed by rapid reuse. Other interpretations such as \"partial loading\" and \"endosomal splitting\" were largely excluded on the basis of multiple lines of evidence. Our data placed an upper bound of <1.4 s on the lifetime of the kiss-and-run fusion event, based on the assumption that aqueous departitioning is rate limiting. The repeated use of individual vesicles held over a range of stimulus frequencies up to 30 Hz and was associated with neurotransmitter release. A small percentage of fusion events did release a whole vesicle's worth of dye in one action potential, consistent with a classical picture of exocytosis as fusion followed by complete collapse or at least very slow retrieval.",
"title": ""
},
{
"docid": "342bcd2509b632480c4f4e8059cfa6a1",
"text": "This paper introduces the design and development of a novel axial-flux permanent magnet generator (PMG) using a printed circuit board (PCB) stator winding. This design has the mechanical rigidity, high efficiency and zero cogging torque required for a low speed water current turbine. The PCB stator has simplified the design and construction and avoids any slip rings. The flexible PCB winding represents an ultra thin electromagnetic exciting source where coils are wound in a wedge shape. The proposed multi-poles generator can be used for various low speed applications especially in small marine current energy conversion systems.",
"title": ""
},
{
"docid": "abbafaaf6a93e2a49a692690d4107c9a",
"text": "Virtual teams have become a ubiquitous form of organizing, but the impact of social structures within and between teams on group performance remains understudied. This paper uses the case study of a massively multiplayer online game and server log data from over 10,000 players to examine the connection between group social capital (operationalized through guild network structure measures) and team effectiveness, given a variety of in-game social networks. Three different networks, social, task, and exchange networks, are compared and contrasted while controlling for group size, group age, and player experience. Team effectiveness is maximized at a roughly moderate level of closure across the networks, suggesting that this is the optimal level of the groupâs network density. Guilds with high brokerage, meaning they have diverse connections with other groups, were more effective in achievement-oriented networks. In addition, guilds with central leaders were more effective when they teamed up with other guild leaders.",
"title": ""
},
{
"docid": "3d7eb095e68a9500674493ee58418789",
"text": "Hundreds of scholarly studies have investigated various aspects of the immensely popular Wikipedia. Although a number of literature reviews have provided overviews of this vast body of research, none of them has specifically focused on the readers of Wikipedia and issues concerning its readership. In this systematic literature review, we review 99 studies to synthesize current knowledge regarding the readership of Wikipedia and also provide an analysis of research methods employed. The scholarly research has found that Wikipedia is popular not only for lighter topics such as entertainment, but also for more serious topics such as health information and legal background. Scholars, librarians and students are common users of Wikipedia, and it provides a unique opportunity for educating students in digital",
"title": ""
},
{
"docid": "763983ae894e3b98932233ef0b465164",
"text": "In the rapidly developing world of information technology, computers have been used in various settings for clinical medicine application. Studies have focused on computerized physician order entry (CPOE) system interface design and functional development to achieve a successful technology adoption process. Therefore, the purpose of this study was to evaluate physician satisfaction with the CPOE system. This survey included user attitude toward interface design, operation functions/usage effectiveness, interface usability, and user satisfaction. We used questionnaires for data collection from June to August 2008, and 225 valid questionnaires were returned with a response rate of 84.5 %. Canonical correlation was applied to explore the relationship of personal attributes and usability with user satisfaction. The results of the data analysis revealed that certain demographic groups showed higher acceptance and satisfaction levels, especially residents, those with less pressure when using computers or those with less experience with the CPOE systems. Additionally, computer use pressure and usability were the best predictors of user satisfaction. Based on the study results, it is suggested that future CPOE development should focus on interface design and content links, as well as providing educational training programs for the new users; since a learning curve period should be considered as an indespensible factor for CPOE adoption.",
"title": ""
},
{
"docid": "f94ff39136c71cf2a36253381a042195",
"text": "We present Autonomous Rssi based RElative poSitioning and Tracking (ARREST), a new robotic sensing system for tracking and following a moving, RF-emitting object, which we refer to as the Leader, solely based on signal strength information. Our proposed tracking agent, which we refer to as the TrackBot, uses a single rotating, off-the-shelf, directional antenna, novel angle and relative speed estimation algorithms, and Kalman filtering to continually estimate the relative position of the Leader with decimeter level accuracy (which is comparable to a state-of-the-art multiple access point based RF-localization system) and the relative speed of the Leader with accuracy on the order of 1 m/s. The TrackBot feeds the relative position and speed estimates into a Linear Quadratic Gaussian (LQG) controller to generate a set of control outputs to control the orientation and the movement of the TrackBot. We perform an extensive set of real world experiments with a full-fledged prototype to demonstrate that the TrackBot is able to stay within 5m of the Leader with: (1) more than 99% probability in line of sight scenarios, and (2) more than 75% probability in no line of sight scenarios, when it moves 1.8X faster than the Leader.",
"title": ""
},
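A generic sketch of the Kalman-filtering step used for relative position and speed in the ARREST passage, assuming a 1D constant-velocity model fed with already-converted range estimates. The timestep and noise covariances are made-up values, and the RSSI-to-range conversion and LQG control stages are not modelled.

```python
import numpy as np

dt = 0.1                                   # assumed update period (s)
F = np.array([[1.0, dt], [0.0, 1.0]])      # constant-velocity motion model
H = np.array([[1.0, 0.0]])                 # only the relative range is observed
Q = np.diag([0.01, 0.1])                   # assumed process noise
R = np.array([[0.25]])                     # assumed range-measurement noise (m^2)

x = np.array([[5.0], [0.0]])               # initial state: [range, range-rate]
P = np.eye(2)

def kalman_step(x, P, z):
    # Predict forward one timestep.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the new range measurement z.
    y = np.array([[z]]) - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

for z in [5.2, 5.1, 4.8, 4.6, 4.5, 4.1]:   # noisy range estimates (m)
    x, P = kalman_step(x, P, z)
print("relative range %.2f m, closing speed %.2f m/s" % (x[0, 0], -x[1, 0]))
```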
{
"docid": "e14b936ecee52765078d77088e76e643",
"text": "In this paper, a novel code division multiplexing (CDM) algorithm-based reversible data hiding (RDH) scheme is presented. The covert data are denoted by different orthogonal spreading sequences and embedded into the cover image. The original image can be completely recovered after the data have been extracted exactly. The Walsh Hadamard matrix is employed to generate orthogonal spreading sequences, by which the data can be overlappingly embedded without interfering each other, and multilevel data embedding can be utilized to enlarge the embedding capacity. Furthermore, most elements of different spreading sequences are mutually cancelled when they are overlappingly embedded, which maintains the image in good quality even with a high embedding payload. A location-map free method is presented in this paper to save more space for data embedding, and the overflow/underflow problem is solved by shrinking the distribution of the image histogram on both the ends. This would further improve the embedding performance. Experimental results have demonstrated that the CDM-based RDH scheme can achieve the best performance at the moderate-to-high embedding capacity compared with other state-of-the-art schemes.",
"title": ""
},
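A toy illustration of the orthogonal-spreading idea behind the CDM-based scheme above, using SciPy's Walsh–Hadamard matrix. It only shows why overlapped sequences can be separated by correlation and does not reproduce the histogram-based embedding itself.

```python
import numpy as np
from scipy.linalg import hadamard

# Rows of a Walsh-Hadamard matrix are mutually orthogonal +/-1 sequences.
H = hadamard(8)
assert np.array_equal(H @ H.T, 8 * np.eye(8, dtype=int))

# Spread three data bits (+1/-1) with three different rows and overlap them
# on one carrier; orthogonality lets each bit be recovered independently.
bits = np.array([+1, -1, +1])
carrier = sum(b * H[i + 1] for i, b in enumerate(bits))   # skip the all-ones row 0

recovered = [int(np.sign(carrier @ H[i + 1])) for i in range(len(bits))]
print(recovered)   # [1, -1, 1]
```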
{
"docid": "50d63f05e453468f8e5234910e3d86d1",
"text": "0167-8655/$ see front matter 2011 Published by doi:10.1016/j.patrec.2011.08.019 ⇑ Corresponding author. Tel.: +44 (0) 2075940990; E-mail addresses: gordon.ross03@ic.ac.uk, gr203@i ic.ac.uk (N.M. Adams), d.tasoulis@ic.ac.uk (D.K. Tas Hand). Classifying streaming data requires the development of methods which are computationally efficient and able to cope with changes in the underlying distribution of the stream, a phenomenon known in the literature as concept drift. We propose a new method for detecting concept drift which uses an exponentially weighted moving average (EWMA) chart to monitor the misclassification rate of an streaming classifier. Our approach is modular and can hence be run in parallel with any underlying classifier to provide an additional layer of concept drift detection. Moreover our method is computationally efficient with overhead O(1) and works in a fully online manner with no need to store data points in memory. Unlike many existing approaches to concept drift detection, our method allows the rate of false positive detections to be controlled and kept constant over time. 2011 Published by Elsevier B.V.",
"title": ""
},
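A simplified version of the EWMA drift monitor described above: the chart tracks a stream of 0/1 misclassification indicators and signals when the statistic leaves its control limit. The smoothing factor and limit width below are ad hoc choices, not the calibrated limits from the paper.

```python
class EwmaDriftDetector:
    """Monitor a stream of 0/1 classification errors with an EWMA chart."""

    def __init__(self, lam=0.2, L=3.0):
        self.lam, self.L = lam, L        # smoothing factor and control-limit width
        self.t = 0
        self.p = 0.0                     # running estimate of the error rate
        self.z = 0.0                     # EWMA of the error indicators

    def update(self, error):             # error is 0 (correct) or 1 (misclassified)
        self.t += 1
        self.p += (error - self.p) / self.t
        self.z = (1 - self.lam) * self.z + self.lam * error
        # Standard deviation of the EWMA statistic under a stable error rate p.
        var = self.p * (1 - self.p) * self.lam / (2 - self.lam) \
              * (1 - (1 - self.lam) ** (2 * self.t))
        return self.z > self.p + self.L * var ** 0.5   # True => drift signalled


det = EwmaDriftDetector()
stream = [0] * 200 + [1 if i % 2 else 0 for i in range(100)]   # error rate jumps
for i, e in enumerate(stream):
    if det.update(e):
        print("drift signalled at observation", i)
        break
```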
{
"docid": "bbad2fa7a85b7f90d9589adee78a08d7",
"text": "Haze has becoming a yearly occurrence in Malaysia. There exist three dimensionsto the problems associated with air pollution: public ignorance on quality of air, impact of air pollution towards health, and difficulty in obtaining information related to air pollution. This research aims to analyse and visually identify areas and associated level of air pollutant. This study applies the air pollutant index (API) data retrieved from Malaysia Department of Environment (DOE) and Geographic Information System (GIS) via Inverse Distance Weighted (IDW) interpolation methodin ArcGIS 10.1 software to enable haze monitoring visualisation. In this research, study area is narrowed to five major cities in Selangor, Malaysia.",
"title": ""
},
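The IDW step used in the haze-mapping passage can be written in a few lines of NumPy; the station coordinates and API readings below are invented placeholders for illustration.

```python
import numpy as np

def idw(known_xy, known_vals, query_xy, power=2.0, eps=1e-12):
    """Inverse Distance Weighted interpolation of point readings."""
    d = np.linalg.norm(query_xy[:, None, :] - known_xy[None, :, :], axis=2)
    w = 1.0 / (d ** power + eps)            # closer stations weigh more
    return (w * known_vals).sum(axis=1) / w.sum(axis=1)

# Placeholder monitoring stations (x, y in km) and their API readings.
stations = np.array([[0.0, 0.0], [10.0, 2.0], [4.0, 8.0], [7.0, 7.0], [2.0, 5.0]])
api = np.array([85.0, 152.0, 110.0, 134.0, 96.0])

# Interpolate the API surface on a coarse grid covering the study area.
grid = np.array([[x, y] for x in range(0, 11, 2) for y in range(0, 9, 2)], float)
surface = idw(stations, api, grid)
print(surface.round(1))
```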
{
"docid": "78ce9ddb8fbfeb801455a76a3a6b0af2",
"text": "Deeply embedded domain-specific languages (EDSLs) intrinsically compromise programmer experience for improved program performance. Shallow EDSLs complement them by trading program performance for good programmer experience. We present Yin-Yang, a framework for DSL embedding that uses Scala macros to reliably translate shallow EDSL programs to the corresponding deep EDSL programs. The translation allows program prototyping and development in the user friendly shallow embedding, while the corresponding deep embedding is used where performance is important. The reliability of the translation completely conceals the deep em- bedding from the user. For the DSL author, Yin-Yang automatically generates the deep DSL embeddings from their shallow counterparts by reusing the core translation. This obviates the need for code duplication and leads to reliability by construction.",
"title": ""
},
{
"docid": "72e6d897e8852fca481d39237cf04e36",
"text": "CONTEXT\nPrimary care physicians report high levels of distress, which is linked to burnout, attrition, and poorer quality of care. Programs to reduce burnout before it results in impairment are rare; data on these programs are scarce.\n\n\nOBJECTIVE\nTo determine whether an intensive educational program in mindfulness, communication, and self-awareness is associated with improvement in primary care physicians' well-being, psychological distress, burnout, and capacity for relating to patients.\n\n\nDESIGN, SETTING, AND PARTICIPANTS\nBefore-and-after study of 70 primary care physicians in Rochester, New York, in a continuing medical education (CME) course in 2007-2008. The course included mindfulness meditation, self-awareness exercises, narratives about meaningful clinical experiences, appreciative interviews, didactic material, and discussion. An 8-week intensive phase (2.5 h/wk, 7-hour retreat) was followed by a 10-month maintenance phase (2.5 h/mo).\n\n\nMAIN OUTCOME MEASURES\nMindfulness (2 subscales), burnout (3 subscales), empathy (3 subscales), psychosocial orientation, personality (5 factors), and mood (6 subscales) measured at baseline and at 2, 12, and 15 months.\n\n\nRESULTS\nOver the course of the program and follow-up, participants demonstrated improvements in mindfulness (raw score, 45.2 to 54.1; raw score change [Delta], 8.9; 95% confidence interval [CI], 7.0 to 10.8); burnout (emotional exhaustion, 26.8 to 20.0; Delta = -6.8; 95% CI, -4.8 to -8.8; depersonalization, 8.4 to 5.9; Delta = -2.5; 95% CI, -1.4 to -3.6; and personal accomplishment, 40.2 to 42.6; Delta = 2.4; 95% CI, 1.2 to 3.6); empathy (116.6 to 121.2; Delta = 4.6; 95% CI, 2.2 to 7.0); physician belief scale (76.7 to 72.6; Delta = -4.1; 95% CI, -1.8 to -6.4); total mood disturbance (33.2 to 16.1; Delta = -17.1; 95% CI, -11 to -23.2), and personality (conscientiousness, 6.5 to 6.8; Delta = 0.3; 95% CI, 0.1 to 5 and emotional stability, 6.1 to 6.6; Delta = 0.5; 95% CI, 0.3 to 0.7). Improvements in mindfulness were correlated with improvements in total mood disturbance (r = -0.39, P < .001), perspective taking subscale of physician empathy (r = 0.31, P < .001), burnout (emotional exhaustion and personal accomplishment subscales, r = -0.32 and 0.33, respectively; P < .001), and personality factors (conscientiousness and emotional stability, r = 0.29 and 0.25, respectively; P < .001).\n\n\nCONCLUSIONS\nParticipation in a mindful communication program was associated with short-term and sustained improvements in well-being and attitudes associated with patient-centered care. Because before-and-after designs limit inferences about intervention effects, these findings warrant randomized trials involving a variety of practicing physicians.",
"title": ""
},
{
"docid": "3d23e7b9d8c0e1a3b4916c069bf6f7d6",
"text": "In recent years, depth cameras have become a widely available sensor type that captures depth images at real-time frame rates. Even though recent approaches have shown that 3D pose estimation from monocular 2.5D depth images has become feasible, there are still challenging problems due to strong noise in the depth data and self-occlusions in the motions being captured. In this paper, we present an efficient and robust pose estimation framework for tracking full-body motions from a single depth image stream. Following a data-driven hybrid strategy that combines local optimization with global retrieval techniques, we contribute several technical improvements that lead to speed-ups of an order of magnitude compared to previous approaches. In particular, we introduce a variant of Dijkstra's algorithm to efficiently extract pose features from the depth data and describe a novel late-fusion scheme based on an efficiently computable sparse Hausdorff distance to combine local and global pose estimates. Our experiments show that the combination of these techniques facilitates real-time tracking with stable results even for fast and complex motions, making it applicable to a wide range of inter-active scenarios.",
"title": ""
},
{
"docid": "b18d03e17f05cb0a2bb7a852a53df8cc",
"text": "Moving from limited-domain natural language generation (NLG) to open domain is difficult because the number of semantic input combinations grows exponentially with the number of domains. Therefore, it is important to leverage existing resources and exploit similarities between domains to facilitate domain adaptation. In this paper, we propose a procedure to train multi-domain, Recurrent Neural Network-based (RNN) language generators via multiple adaptation steps. In this procedure, a model is first trained on counterfeited data synthesised from an out-of-domain dataset, and then fine tuned on a small set of in-domain utterances with a discriminative objective function. Corpus-based evaluation results show that the proposed procedure can achieve competitive performance in terms of BLEU score and slot error rate while significantly reducing the data needed to train generators in new, unseen domains. In subjective testing, human judges confirm that the procedure greatly improves generator performance when only a small amount of data is available in the domain.",
"title": ""
}
] | scidocsrr |
0c0dbdd3593239ff7941c8219d15c1bd | The topology of dark networks | [
{
"docid": "8afd1ab45198e9960e6a047091a2def8",
"text": "We study the response of complex networks subject to attacks on vertices and edges. Several existing complex network models as well as real-world networks of scientific collaborations and Internet traffic are numerically investigated, and the network performance is quantitatively measured by the average inverse geodesic length and the size of the largest connected subgraph. For each case of attacks on vertices and edges, four different attacking strategies are used: removals by the descending order of the degree and the betweenness centrality, calculated for either the initial network or the current network during the removal procedure. It is found that the removals by the recalculated degrees and betweenness centralities are often more harmful than the attack strategies based on the initial network, suggesting that the network structure changes as important vertices or edges are removed. Furthermore, the correlation between the betweenness centrality and the degree in complex networks is studied.",
"title": ""
}
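A small simulation in the spirit of the attack strategies described above: nodes are removed in recalculated-degree order from a synthetic scale-free graph (standing in for a real network), while the giant-component size and global efficiency (NetworkX's name for the average inverse geodesic length) are tracked.

```python
import networkx as nx

# A Barabasi-Albert graph stands in here for a real network under attack.
G = nx.barabasi_albert_graph(n=300, m=2, seed=1)
n0 = G.number_of_nodes()

history = []
for step in range(30):
    # Recalculated-degree attack: always remove the current highest-degree node.
    target = max(G.degree, key=lambda kv: kv[1])[0]
    G.remove_node(target)
    giant = max(nx.connected_components(G), key=len)
    history.append((step + 1,
                    len(giant) / n0,                 # relative giant-component size
                    nx.global_efficiency(G)))        # average inverse geodesic length

for step, s, eff in history[::5]:
    print(f"removed {step:2d} hubs: giant={s:.2f}, efficiency={eff:.3f}")
```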
] | [
{
"docid": "0bfad59874eb7a52c123bb6cd7bc1c16",
"text": "A 12-year-old patient sustained avulsions of both permanent maxillary central incisors. Subsequently, both teeth developed replacement resorption. The left incisor was extracted alio loco. The right incisor was treated by decoronation (removal of crown and pulp, but preservation of the root substance). Comparison of both sites demonstrated complete preservation of the height and width of the alveolar bone at the decoronation site, whereas the tooth extraction site showed considerable bone loss. In addition, some vertical bone apposition was found on top of the decoronated root. Decoronation is a simple and safe surgical procedure for preservation of alveolar bone prior to implant placement. It must be considered as a treatment option for teeth affected by replacement resorption if tooth transplantation is not feasible.",
"title": ""
},
{
"docid": "1ecf01e0c9aec4159312406368ceeff0",
"text": "Image phylogeny is the problem of reconstructing the structure that represents the history of generation of semantically similar images (e.g., near-duplicate images). Typical image phylogeny approaches break the problem into two steps: (1) estimating the dissimilarity between each pair of images and (2) reconstructing the phylogeny structure. Given that the dissimilarity calculation directly impacts the phylogeny reconstruction, in this paper, we propose new approaches to the standard formulation of the dissimilarity measure employed in image phylogeny, aiming at improving the reconstruction of the tree structure that represents the generational relationships between semantically similar images. These new formulations exploit a different method of color adjustment, local gradients to estimate pixel differences and mutual information as a similarity measure. The results obtained with the proposed formulation remarkably outperform the existing counterparts in the literature, allowing a much better analysis of the kinship relationships in a set of images, allowing for more accurate deployment of phylogeny solutions to tackle traitor tracing, copyright enforcement and digital forensics problems.",
"title": ""
},
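One ingredient mentioned in the phylogeny passage, mutual information between two images, can be estimated from a joint histogram as below; the bin count and synthetic images are assumptions, and the full dissimilarity (colour adjustment, local gradients) is not reproduced.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=64):
    """Mutual information of two equally-sized greyscale images."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                                  # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

# Synthetic example: a "descendant" is a brightened, noisy copy of its parent.
rng = np.random.default_rng(0)
parent = rng.integers(0, 256, size=(128, 128)).astype(float)
child = np.clip(parent * 1.1 + rng.normal(0, 5, parent.shape), 0, 255)
unrelated = rng.integers(0, 256, size=(128, 128)).astype(float)

print("parent vs child:    ", round(mutual_information(parent, child), 3))
print("parent vs unrelated:", round(mutual_information(parent, unrelated), 3))
```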
{
"docid": "22881dd1a1a17441b3a914117e134a28",
"text": "Remote sensing of the reflectance photoplethysmogram using a video camera typically positioned 1 m away from the patient's face is a promising method for monitoring the vital signs of patients without attaching any electrodes or sensors to them. Most of the papers in the literature on non-contact vital sign monitoring report results on human volunteers in controlled environments. We have been able to obtain estimates of heart rate and respiratory rate and preliminary results on changes in oxygen saturation from double-monitored patients undergoing haemodialysis in the Oxford Kidney Unit. To achieve this, we have devised a novel method of cancelling out aliased frequency components caused by artificial light flicker, using auto-regressive (AR) modelling and pole cancellation. Secondly, we have been able to construct accurate maps of the spatial distribution of heart rate and respiratory rate information from the coefficients of the AR model. In stable sections with minimal patient motion, the mean absolute error between the camera-derived estimate of heart rate and the reference value from a pulse oximeter is similar to the mean absolute error between two pulse oximeter measurements at different sites (finger and earlobe). The activities of daily living affect the respiratory rate, but the camera-derived estimates of this parameter are at least as accurate as those derived from a thoracic expansion sensor (chest belt). During a period of obstructive sleep apnoea, we tracked changes in oxygen saturation using the ratio of normalized reflectance changes in two colour channels (red and blue), but this required calibration against the reference data from a pulse oximeter.",
"title": ""
},
{
"docid": "da988486b0a3e82ce5f7fb8aa5467779",
"text": "The benefits of Domain Specific Modeling Languages (DSML), for modeling and design of cyber physical systems, have been acknowledged in previous years. In contrast to general purpose modeling languages, such as Unified Modeling Language, DSML facilitates the modeling of domain specific concepts. The objective of this work is to develop a simple graphical DSML for cyber physical systems, which allow the unified modeling of the structural and behavioral aspects of a system in a single model, and provide model transformation and design verification support in future. The proposed DSML was defined in terms of its abstract and concrete syntax. The applicability of the proposed DSML was demonstrated by its application in two case studies: Traffic Signal and Arbiter case studies. The results showed that the proposed DSML produce simple and unified models with possible model transformation and verification support.",
"title": ""
},
{
"docid": "6851e4355ab4825b0eb27ac76be2329f",
"text": "Segmentation of novel or dynamic objects in a scene, often referred to as “background subtraction” or “foreground segmentation”, is a critical early in step in most computer vision applications in domains such as surveillance and human-computer interaction. All previously described, real-time methods fail to handle properly one or more common phenomena, such as global illumination changes, shadows, inter-reflections, similarity of foreground color to background, and non-static backgrounds (e.g. active video displays or trees waving in the wind). The recent advent of hardware and software for real-time computation of depth imagery makes better approaches possible. We propose a method for modeling the background that uses per-pixel, time-adaptive, Gaussian mixtures in the combined input space of depth and luminance-invariant color. This combination in itself is novel, but we further improve it by introducing the ideas of 1) modulating the background model learning rate based on scene activity, and 2) making colorbased segmentation criteria dependent on depth observations. Our experiments show that the method possesses much greater robustness to problematic phenomena than the prior state-of-the-art, without sacrificing real-time performance, making it well-suited for a wide range of practical applications in video event detection and recognition.",
"title": ""
},
{
"docid": "a2223d57a866b0a0ef138e52fb515b84",
"text": "This paper is concerned with paraphrase detection, i.e., identifying sentences that are semantically identical. The ability to detect similar sentences written in natural language is crucial for several applications, such as text mining, text summarization, plagiarism detection, authorship authentication and question answering. Recognizing this importance, we study in particular how to address the challenges with detecting paraphrases in user generated short texts, such as Twitter, which often contain language irregularity and noise, and do not necessarily contain as much semantic information as longer clean texts. We propose a novel deep neural network-based approach that relies on coarse-grained sentence modelling using a convolutional neural network (CNN) and a recurrent neural network (RNN) model, combined with a specific fine-grained word-level similarity matching model. More specifically, we develop a new architecture, called DeepParaphrase, which enables to create an informative semantic representation of each sentence by (1) using CNN to extract the local region information in form of important n-grams from the sentence, and (2) applying RNN to capture the long-term dependency information. In addition, we perform a comparative study on stateof-the-art approaches within paraphrase detection. An important insight from this study is that existing paraphrase approaches perform well when applied on clean texts, but they do not necessarily deliver good performance against noisy texts, and vice versa. In contrast, our evaluation has shown that the proposed DeepParaphrase-based approach achieves good results in both types of texts, thus making it more robust and generic than the existing approaches.",
"title": ""
},
{
"docid": "6a541e92e92385c27ceec1e55a50b46e",
"text": "BACKGROUND\nWe retrospectively studied the outcome of Pavlik harness treatment in late-diagnosed hip dislocation in infants between 6 and 24 months of age (Graf type 3 and 4 or dislocated hips on radiographs) treated in our hospital between 1984 and 2004. The Pavlik harness was progressively applied to improve both flexion and abduction of the dislocated hip. In case of persistent adduction contracture, an abduction splint was added temporarily to improve the abduction.\n\n\nMETHODS\nWe included 24 patients (26 hips) between 6 and 24 months of age who presented with a dislocated hip and primarily treated by Pavlik harness in our hospital between 1984 and 2004. The mean age at diagnosis was 9 months (range 6 to 23 mo). The average follow-up was 6 years 6 months (2 to 12 y). Ultrasound images and radiographs were assessed at the time of diagnosis, one year after reposition and at last follow-up.\n\n\nRESULTS\nTwelve of the twenty-six hips (46%) were successfully reduced with Pavlik harness after an average treatment of 14 weeks (4 to 28 wk). One patient (9%) needed a secondary procedure 1 year 9 months after reposition because of residual dysplasia (Pelvis osteotomy). Seventeen of the 26 hips were primary diagnosed by Ultrasound according to the Graf classification. Ten had a Graf type 3 hip and 7 hips were classified as Graf type 4. The success rate was 60% for the type 3 hips and 0% for the type 4 hips. (P=0.035). None of the hips that were reduced with the Pavlik harness developed an avascular necrosis (AVN). Of the hips that failed the Pavlik harness treatment, three hips showed signs of AVN, 1 after closed reposition and 2 after open reposition.\n\n\nCONCLUSION\nThe use of a Pavlik harness in the late-diagnosed hip dislocation type Graf 3 can be a successful treatment option in the older infant. We have noticed few complications in these patients maybe due to progressive and gentle increase of abduction and flexion, with or without temporary use of an abduction splint. The treatment should be abandoned if the hips are not reduced after 6 weeks. None of the Graf 4 hips could be reduced successfully by Pavlik harness. This was significantly different from the success rate for the Graf type 3 hips.\n\n\nLEVEL OF EVIDENCE\nTherapeutic study, clinical case series: Level IV.",
"title": ""
},
{
"docid": "eb639439559f3e4e3540e3e98de7a741",
"text": "This paper presents a deformable model for automatically segmenting brain structures from volumetric magnetic resonance (MR) images and obtaining point correspondences, using geometric and statistical information in a hierarchical scheme. Geometric information is embedded into the model via a set of affine-invariant attribute vectors, each of which characterizes the geometric structure around a point of the model from a local to a global scale. The attribute vectors, in conjunction with the deformation mechanism of the model, warrant that the model not only deforms to nearby edges, as is customary in most deformable surface models, but also that it determines point correspondences based on geometric similarity at different scales. The proposed model is adaptive in that it initially focuses on the most reliable structures of interest, and gradually shifts focus to other structures as those become closer to their respective targets and, therefore, more reliable. The proposed techniques have been used to segment boundaries of the ventricles, the caudate nucleus, and the lenticular nucleus from volumetric MR images.",
"title": ""
},
{
"docid": "e4ade1f0baea7c50d0dff4470bbbfcd9",
"text": "Ad networks for mobile apps require inspection of the visual layout of their ads to detect certain types of placement frauds. Doing this manually is error prone, and does not scale to the sizes of today’s app stores. In this paper, we design a system called DECAF to automatically discover various placement frauds scalably and effectively. DECAF uses automated app navigation, together with optimizations to scan through a large number of visual elements within a limited time. It also includes a framework for efficiently detecting whether ads within an app violate an extensible set of rules that govern ad placement and display. We have implemented DECAF for Windows-based mobile platforms, and applied it to 1,150 tablet apps and 50,000 phone apps in order to characterize the prevalence of ad frauds. DECAF has been used by the ad fraud team in Microsoft and has helped find many instances of ad frauds.",
"title": ""
},
{
"docid": "1836f3cf9c6243b57fd23b8d84b859d1",
"text": "While most Reinforcement Learning work utilizes temporal discounting to evaluate performance, the reasons for this are unclear. Is it out of desire or necessity? We argue that it is not out of desire, and seek to dispel the notion that temporal discounting is necessary by proposing a framework for undiscounted optimization. We present a metric of undiscounted performance and an algorithm for finding action policies that maximize that measure. The technique, which we call Rlearning, is modelled after the popular Q-learning algorithm [17]. Initial experimental results are presented which attest to a great improvement over Q-learning in some simple cases.",
"title": ""
},
{
"docid": "c1d7990c2c94ffd3ed16cce5947e4e27",
"text": "The introduction of online social networks (OSN) has transformed the way people connect and interact with each other as well as share information. OSN have led to a tremendous explosion of network-centric data that could be harvested for better understanding of interesting phenomena such as sociological and behavioural aspects of individuals or groups. As a result, online social network service operators are compelled to publish the social network data for use by third party consumers such as researchers and advertisers. As social network data publication is vulnerable to a wide variety of reidentification and disclosure attacks, developing privacy preserving mechanisms are an active research area. This paper presents a comprehensive survey of the recent developments in social networks data publishing privacy risks, attacks, and privacy-preserving techniques. We survey and present various types of privacy attacks and information exploited by adversaries to perpetrate privacy attacks on anonymized social network data. We present an in-depth survey of the state-of-the-art privacy preserving techniques for social network data publishing, metrics for quantifying the anonymity level provided, and information loss as well as challenges and new research directions. The survey helps readers understand the threats, various privacy preserving mechanisms, and their vulnerabilities to privacy breach attacks in social network data publishing as well as observe common themes and future directions.",
"title": ""
},
{
"docid": "0d6d2413cbaaef5354cf2bcfc06115df",
"text": "Bibliometric and “tech mining” studies depend on a crucial foundation—the search strategy used to retrieve relevant research publication records. Database searches for emerging technologies can be problematic in many respects, for example the rapid evolution of terminology, the use of common phraseology, or the extent of “legacy technology” terminology. Searching on such legacy terms may or may not pick up R&D pertaining to the emerging technology of interest. A challenge is to assess the relevance of legacy terminology in building an effective search model. Common-usage phraseology additionally confounds certain domains in which broader managerial, public interest, or other considerations are prominent. In contrast, searching for highly technical topics is relatively straightforward. In setting forth to analyze “Big Data,” we confront all three challenges—emerging terminology, common usage phrasing, and intersecting legacy technologies. In response, we have devised a systematic methodology to help identify research relating to Big Data. This methodology uses complementary search approaches, starting with a Boolean search model and subsequently employs contingency term sets to further refine the selection. The four search approaches considered are: (1) core lexical query, (2) expanded lexical query, (3) specialized journal search, and (4) cited reference analysis. Of special note here is the use of a “Hit-Ratio” that helps distinguish Big Data elements from less relevant legacy technology terms. We believe that such a systematic search development positions us to do meaningful analyses of Big Data research patterns, connections, and trajectories. Moreover, we suggest that such a systematic search approach can help formulate more replicable searches with high recall and satisfactory precision for other emerging technology studies.",
"title": ""
},
{
"docid": "329343cec99c221e6f6ce8e3f1dbe83f",
"text": "Artificial Neural Networks (ANN) play a very vital role in making stock market predictions. As per the literature survey, various researchers have used various approaches to predict the prices of stock market. Some popular approaches used by researchers are Artificial Neural Networks, Genetic Algorithms, Fuzzy Logic, Auto Regressive Models and Support Vector Machines. This study presents ANN based computational approach for predicting the one day ahead closing prices of companies from the three different sectors:IT Sector (Wipro, TCS and Infosys), Automobile Sector (Maruti Suzuki Ltd.) and Banking Sector (ICICI Bank). Different types of artificial neural networks based models like Back Propagation Neural Network (BPNN), Radial Basis Function Neural Network (RBFNN), Generalized Regression Neural Network (GRNN) and Layer Recurrent Neural Network (LRNN) have been studied and used to forecast the short term and long term share prices of Wipro, TCS, Infosys, Maruti Suzuki and ICICI Bank. All the networks were trained with the 1100 days of trading data and predicted the prices up to next 6 months. Predicted output was generated through available historical data. Experimental results show that BPNN model gives minimum error (MSE) as compared to the RBFNN and GRNN models. GRNN model performs better as compared to RBFNN model. Forecasting performance of LRNN model is found to be much better than other three models. Keywordsartificial intelligence, back propagation, mean square error, artificial neural network.",
"title": ""
},
{
"docid": "b2e62194ce1eb63e0d13659a546db84b",
"text": "The rapid advance of mobile computing technology and wireless networking, there is a significant increase of mobile subscriptions. This drives a strong demand for mobile cloud applications and services for mobile device users. This brings out a great business and research opportunity in mobile cloud computing (MCC). This paper first discusses the market trend and related business driving forces and opportunities. Then it presents an overview of MCC in terms of its concepts, distinct features, research scope and motivations, as well as advantages and benefits. Moreover, it discusses its opportunities, issues and challenges. Furthermore, the paper highlights a research roadmap for MCC.",
"title": ""
},
{
"docid": "062f6ecc9d26310de82572f500cb5f05",
"text": "The processes underlying environmental, economic, and social unsustainability derive in part from the food system. Building sustainable food systems has become a predominating endeavor aiming to redirect our food systems and policies towards better-adjusted goals and improved societal welfare. Food systems are complex social-ecological systems involving multiple interactions between human and natural components. Policy needs to encourage public perception of humanity and nature as interdependent and interacting. The systemic nature of these interdependencies and interactions calls for systems approaches and integrated assessment tools. Identifying and modeling the intrinsic properties of the food system that will ensure its essential outcomes are maintained or enhanced over time and across generations, will help organizations and governmental institutions to track progress towards sustainability, and set policies that encourage positive transformations. This paper proposes a conceptual model that articulates crucial vulnerability and resilience factors to global environmental and socio-economic changes, postulating specific food and nutrition security issues as priority outcomes of food systems. By acknowledging the systemic nature of sustainability, this approach allows consideration of causal factor dynamics. In a stepwise approach, a logical application is schematized for three Mediterranean countries, namely Spain, France, and Italy.",
"title": ""
},
{
"docid": "3d8df2c8fcbdc994007104b8d21d7a06",
"text": "The purpose of this research was to analysis the efficiency of global strategies. This paper identified six key strategies necessary for firms to be successful when expanding globally. These strategies include differentiation, marketing, distribution, collaborative strategies, labor and management strategies, and diversification. Within this analysis, we chose to focus on the Coca-Cola Company because they have proven successful in their international operations and are one of the most recognized brands in the world. We performed an in-depth review of how effectively or ineffectively Coca-Cola has used each of the six strategies. The paper focused on Coca-Cola's operations in the United States, China, Belarus, Peru, and Morocco. The author used electronic journals from the various countries to determine how effective Coca-Cola was in these countries. The paper revealed that Coca-Cola was very successful in implementing strategies regardless of the country. However, the author learned that Coca-Cola did not effectively utilize all of the strategies in each country.",
"title": ""
},
{
"docid": "c7160083cc96253d305b127929e25107",
"text": "This paper considers the task of matching images and sentences. The challenge consists in discriminatively embedding the two modalities onto a shared visual-textual space. Existing work in this field largely uses Recurrent Neural Networks (RNN) for text feature learning and employs off-the-shelf Convolutional Neural Networks (CNN) for image feature extraction. Our system, in comparison, differs in two key aspects. Firstly, we build a convolutional network amenable for fine-tuning the visual and textual representations, where the entire network only contains four components, i.e., convolution layer, pooling layer, rectified linear unit function (ReLU), and batch normalisation. Endto-end learning allows the system to directly learn from the data and fully utilise the supervisions. Secondly, we propose instance loss according to viewing each multimodal data pair as a class. This works with a large margin objective to learn the inter-modal correspondence between images and their textual descriptions. Experiments on two generic retrieval datasets (Flickr30k and MSCOCO) demonstrate that our method yields competitive accuracy compared to state-of-the-art methods. Moreover, in language person retrieval, we improve the state of the art by a large margin. Code is available at https://github.com/layumi/ Image-Text-Embedding",
"title": ""
},
{
"docid": "e34b8fd3e1fba5306a88e4aac38c0632",
"text": "1 Jomo was an Assistant Secretary General in the United Nations system responsible for economic research during 2005-2015.; Chowdhury (Chief, Multi-Stakeholder Engagement & Outreach, Financing for Development Office, UN-DESA); Sharma (Senior Economic Affairs Officer, Financing for Development Office, UN-DESA); Platz (Economic Affairs Officer, Financing for Development Office, UN-DESA); corresponding author: Anis Chowdhury (chowdhury4@un.org; anis.z.chowdhury@gmail.com). Thanks to colleagues at the Financing for Development Office of UN-DESA and an anonymous referee for their helpful comments. Thanks also to Alexander Kucharski for his excellent support in gathering data and producing figure charts and to Jie Wei for drawing the flow charts. However, the usual caveats apply. ABSTRACT",
"title": ""
},
{
"docid": "5cbc93a9844fcd026a1705ee031c6530",
"text": "Accompanying the rapid urbanization, many developing countries are suffering from serious air pollution problem. The demand for predicting future air quality is becoming increasingly more important to government's policy-making and people's decision making. In this paper, we predict the air quality of next 48 hours for each monitoring station, considering air quality data, meteorology data, and weather forecast data. Based on the domain knowledge about air pollution, we propose a deep neural network (DNN)-based approach (entitled DeepAir), which consists of a spatial transformation component and a deep distributed fusion network. Considering air pollutants' spatial correlations, the former component converts the spatial sparse air quality data into a consistent input to simulate the pollutant sources. The latter network adopts a neural distributed architecture to fuse heterogeneous urban data for simultaneously capturing the factors affecting air quality, e.g. meteorological conditions. We deployed DeepAir in our AirPollutionPrediction system, providing fine-grained air quality forecasts for 300+ Chinese cities every hour. The experimental results on the data from three-year nine Chinese-city demonstrate the advantages of DeepAir beyond 10 baseline methods. Comparing with the previous online approach in AirPollutionPrediction system, we have 2.4%, 12.2%, 63.2% relative accuracy improvements on short-term, long-term and sudden changes prediction, respectively.",
"title": ""
},
{
"docid": "3512d0a45a764330c8a66afab325d03d",
"text": "Self-concept clarity (SCC) references a structural aspect oftbe self-concept: the extent to which selfbeliefs are clearly and confidently defined, internally consistent, and stable. This article reports the SCC Scale and examines (a) its correlations with self-esteem (SE), the Big Five dimensions, and self-focused attention (Study l ); (b) its criterion validity (Study 2); and (c) its cultural boundaries (Study 3 ). Low SCC was independently associated with high Neuroticism, low SE, low Conscientiousness, low Agreeableness, chronic self-analysis, low internal state awareness, and a ruminative form of self-focused attention. The SCC Scale predicted unique variance in 2 external criteria: the stability and consistency of self-descriptions. Consistent with theory on Eastern and Western selfconstruals, Japanese participants exhibited lower levels of SCC and lower correlations between SCC and SE than did Canadian participants.",
"title": ""
}
] | scidocsrr |
f83afd4bc31cef68fee3dd74e299d978 | Understanding consumer acceptance of mobile payment services: An empirical analysis | [
{
"docid": "57b945df75d8cd446caa82ae02074c3a",
"text": "A key issue facing information systems researchers and practitioners has been the difficulty in creating favorable user reactions to new technologies. Insufficient or ineffective training has been identified as one of the key factors underlying this disappointing reality. Among the various enhancements to training being examined in research, the role of intrinsic motivation as a lever to create favorable user perceptions has not been sufficiently exploited. In this research, two studies were conducted to compare a traditional training method with a training method that included a component aimed at enhancing intrinsic motivation. The results strongly favored the use of an intrinsic motivator during training. Key implications for theory and practice are discussed. 1Allen Lee was the accepting senior editor for this paper. Sometimes when I am at my computer, I say to my wife, \"1'11 be done in just a minute\" and the next thing I know she's standing over me saying, \"It's been an hour!\" (Collins 1989, p. 11). Investment in emerging information technology applications can lead to productivity gains only if they are accepted and used. Several theoretical perspectives have emphasized the importance of user perceptions of ease of use as a key factor affecting acceptance of information technology. Favorable ease of use perceptions are necessary for initial acceptance (Davis et al. 1989), which of course is essential for adoption and continued use. During the early stages of learning and use, ease of use perceptions are significantly affected by training (e.g., Venkatesh and Davis 1996). Investments in training by organizations have been very high and have continued to grow rapidly. Kelly (1982) reported a figure of $100B, which doubled in about a decade (McKenna 1990). In spite of such large investments in training , only 10% of training leads to a change in behavior On trainees' jobs (Georgenson 1982). Therefore, it is important to understand the most effective training methods (e.g., Facteau et al. 1995) and to improve existing training methods in order to foster favorable perceptions among users about the ease of use of a technology, which in turn should lead to acceptance and usage. Prior research in psychology (e.g., Deci 1975) suggests that intrinsic motivation during training leads to beneficial outcomes. However, traditional training methods in information systems research have tended to emphasize imparting knowledge to potential users (e.g., Nelson and Cheney 1987) while not paying Sufficient attention to intrinsic motivation during training. The two field …",
"title": ""
},
{
"docid": "49db1291f3f52a09037d6cfd305e8b5f",
"text": "This paper examines cognitive beliefs and affect influencing ones intention to continue using (continuance) information systems (IS). Expectationconfirmation theory is adapted from the consumer behavior literature and integrated with theoretical and empirical findings from prior IS usage research to theorize a model of IS continuance. Five research hypotheses derived from this model are empirically validated using a field survey of online banking users. The results suggest that users continuance intention is determined by their satisfaction with IS use and perceived usefulness of continued IS use. User satisfaction, in turn, is influenced by their confirmation of expectation from prior IS use and perceived usefulness. Postacceptance perceived usefulness is influenced by Ron Weber was the accepting senior editor for this paper. users confirmation level. This study draws attention to the substantive differences between acceptance and continuance behaviors, theorizes and validates one of the earliest theoretical models of IS continuance, integrates confirmation and user satisfaction constructs within our current understanding of IS use, conceptualizes and creates an initial scale for measuring IS continuance, and offers an initial explanation for the acceptancediscontinuance anomaly.",
"title": ""
}
] | [
{
"docid": "193042bd07d5e9672b04ede9160d406c",
"text": "We report on the flip chip packaging of Micro-Electro-Mechanical System (MEMS)-based digital silicon photonic switching device and the characterization results of 12 × 12 switching ports. The challenges in packaging N<sup> 2</sup> electrical and 2N optical interconnections are addressed with single-layer electrical redistribution lines of 25 <italic>μ</italic>m line width and space on aluminum nitride interposer and 13° polished 64-channel lidless fiber array (FA) with a pitch of 127 <italic>μ</italic>m. 50 <italic>μ</italic>m diameter solder spheres are laser-jetted onto the electrical bond pads surrounded by suspended MEMS actuators on the device before fluxless flip-chip bonding. A lidless FA is finally coupled near-vertically onto the device gratings using a 6-degree-of-freedom (6-DOF) alignment system. Fiber-to-grating coupler loss of 4.25 dB/facet, 10<sup>–11 </sup> bit error rate (BER) through the longest optical path, and 0.4 <italic>μ</italic>s switch reconfiguration time have been demonstrated using 10 Gb/s Ethernet data stream.",
"title": ""
},
{
"docid": "db3fc6ae924c0758bb58cd04f395520e",
"text": "Engineering from the University of Michigan, and a Ph.D. in Information Technologies from the MIT Sloan School of Management. His current research interests include IT adoption and diffusion, management of technology and innovation, software development tools and methods, and real options. He has published in Abstract The extent of organizational innovation with IT, an important construct in the IT innovation literature, has been measured in many different ways. Some measures are more narrowly focused while others aggregate innovative behaviors across a set of innovations or across stages in the assimilation lifecycle within organizations. There appear to be some significant tradeoffs involving aggregation. More aggregated measures can be more robust and generalizable and can promote stronger predictive validity, while less aggregated measures allow more context-specific investigations and can preserve clearer theoretical interpretations. This article begins with a conceptual analysis that identifies the circumstances when these tradeoffs are most likely to favor aggregated measures. It is found that aggregation should be favorable when: (1) the researcher's interest is in general innovation or a model that generalizes to a class of innovations, (2) antecedents have effects in the same direction in all assimilation stages, (3) characteristics of organizations can be treated as constant across the innovations in the study, (4) characteristics of innovations can not be treated as constant across organizations in the study, (5) the set of innovations being aggregated includes substitutes or moderate complements, and (6) sources of noise in the measurement of innovation may be present. The article then presents an empirical study using data on the adoption of software process technologies by 608 US based corporations. This study—which had circumstances quite favorable to aggregation—found that aggregating across three innovations within a technology class more than doubled the variance explained compared to single innovation models. Aggregating across assimilation stages had a slight positive effect on predictive validity. Taken together, these results provide initial confirmation of the conclusions from the conceptual analysis regarding the circumstances favoring aggregation.",
"title": ""
},
{
"docid": "ddc37e29f935bd494b54bd4d38abb3e6",
"text": "NAND flash memory-based storage devices are increasingly adopted as one of the main alternatives for magnetic disk drives. The flash translation layer (FTL) is a software/hardware interface inside NAND flash memory, which allows existing disk-based applications to use it without any significant modifications. Since FTL has a critical impact on the performance of NAND flash-based devices, a variety of FTL schemes have been proposed to improve their performance. However, existing FTLs perform well for either a read intensive workload or a write intensive workload, not for both of them due to their fixed and static address mapping schemes. To overcome this limitation, in this paper, we propose a novel FTL addressing scheme named as Convertible Flash Translation Layer (CFTL, for short). CFTL is adaptive to data access patterns so that it can dynamically switch the mapping of a data block to either read-optimized or write-optimized mapping scheme in order to fully exploit the benefits of both schemes. By judiciously taking advantage of both schemes, CFTL resolves the intrinsic problems of the existing FTLs. In addition to this convertible scheme, we propose an efficient caching strategy so as to considerably improve the CFTL performance further with only a simple hint. Consequently, both of the convertible feature and caching strategy empower CFTL to achieve good read performance as well as good write performance. Our experimental evaluation with a variety of realistic workloads demonstrates that the proposed CFTL scheme outperforms other FTL schemes.",
"title": ""
},
{
"docid": "775e0205ef85aa5d04af38748e63aded",
"text": "Monads are a de facto standard for the type-based analysis of impure aspects of programs, such as runtime cost [9, 5]. Recently, the logical dual of a monad, the comonad, has also been used for the cost analysis of programs, in conjunction with a linear type system [6, 8]. The logical duality of monads and comonads extends to cost analysis: In monadic type systems, costs are (side) effects, whereas in comonadic type systems, costs are coeffects. However, it is not clear whether these two methods of cost analysis are related and, if so, how. Are they equally expressive? Are they equally well-suited for cost analysis with all reduction strategies? Are there translations from type systems with effects to type systems with coeffects and viceversa? The goal of this work-in-progress paper is to explore some of these questions in a simple context — the simply typed lambda-calculus (STLC). As we show, even this simple context is already quite interesting technically and it suffices to bring out several key points.",
"title": ""
},
{
"docid": "0e068a4e7388ed456de4239326eb9b08",
"text": "The Web so far has been incredibly successful at delivering information to human users. So successful actually, that there is now an urgent need to go beyond a browsing human. Unfortunately, the Web is not yet a well organized repository of nicely structured documents but rather a conglomerate of volatile HTML pages. To address this problem, we present the World Wide Web Wrapper Factory (W4F), a toolkit for the generation of wrappers for Web sources, that offers: (1) an expressive language to specify the extraction of complex structures from HTML pages; (2) a declarative mapping to various data formats like XML; (3) some visual tools to make the engineering of wrappers faster and easier.",
"title": ""
},
{
"docid": "070a1c6b47a0a5c217e747cd7e0e0d0b",
"text": "In this paper we develop a computational model of visual adaptation for realistic image synthesis based on psychophysical experiments. The model captures the changes in threshold visibility, color appearance, visual acuity, and sensitivity over time that are caused by the visual system’s adaptation mechanisms. We use the model to display the results of global illumination simulations illuminated at intensities ranging from daylight down to starlight. The resulting images better capture the visual characteristics of scenes viewed over a wide range of illumination levels. Because the model is based on psychophysical data it can be used to predict the visibility and appearance of scene features. This allows the model to be used as the basis of perceptually-based error metrics for limiting the precision of global illumination computations. CR",
"title": ""
},
{
"docid": "0be5ab2533511ce002d87ff6a12f7b08",
"text": "This paper deals with the solar photovoltaic (SPV) array fed water-pumping system using a Luo converter as an intermediate DC-DC converter and a permanent magnet brushless DC (BLDC) motor to drive a centrifugal water pump. Among the different types of DC-DC converters, an elementary Luo converter is selected in order to extract the maximum power available from the SPV array and for safe starting of BLDC motor. The elementary Luo converter with reduced components and single semiconductor switch has inherent features of reducing the ripples in its output current and possessing a boundless region for maximum power point tracking (MPPT). The electronically commutated BLDC motor is used with a voltage source inverter (VSI) operated at fundamental frequency switching thus avoiding the high frequency switching losses resulting in a high efficiency of the system. The SPV array is designed such that the power at rated DC voltage is supplied to the BLDC motor-pump under standard test condition and maximum switch utilization of Luo converter is achieved which results in efficiency improvement of the converter. Performances at various operating conditions such as starting, dynamic and steady state behavior are analyzed and suitability of the proposed system is demonstrated using MATLAB/Simulink based simulation results.",
"title": ""
},
{
"docid": "5e42cdbe42b9fafb53b8bbd82ec96d5a",
"text": "Fifty years ago, the author published a paper in Operations Research with the title, “A proof for the queuing formula: L = W ” [Little, J. D. C. 1961. A proof for the queuing formula: L = W . Oper. Res. 9(3) 383–387]. Over the years, L = W has become widely known as “Little’s Law.” Basically, it is a theorem in queuing theory. It has become well known because of its theoretical and practical importance. We report key developments in both areas with the emphasis on practice. In the latter, we collect new material and search for insights on the use of Little’s Law within the fields of operations management and computer architecture.",
"title": ""
},
{
"docid": "b13c9597f8de229fb7fec3e23c0694d1",
"text": "Using capture-recapture analysis we estimate the effective size of the active Amazon Mechanical Turk (MTurk) population that a typical laboratory can access to be about 7,300 workers. We also estimate that the time taken for half of the workers to leave the MTurk pool and be replaced is about 7 months. Each laboratory has its own population pool which overlaps, often extensively, with the hundreds of other laboratories using MTurk. Our estimate is based on a sample of 114,460 completed sessions from 33,408 unique participants and 689 sessions across seven laboratories in the US, Europe, and Australia from January 2012 to March 2015.",
"title": ""
},
{
"docid": "7f6f26ac42f8f637415a45afc94daa0f",
"text": "We draw a formal connection between using synthetic training data to optimize neural network parameters and approximate, Bayesian, model-based reasoning. In particular, training a neural network using synthetic data can be viewed as learning a proposal distribution generator for approximate inference in the synthetic-data generative model. We demonstrate this connection in a recognition task where we develop a novel Captcha-breaking architecture and train it using synthetic data, demonstrating both state-of-the-art performance and a way of computing task-specific posterior uncertainty. Using a neural network trained this way, we also demonstrate successful breaking of real-world Captchas currently used by Facebook and Wikipedia. Reasoning from these empirical results and drawing connections with Bayesian modeling, we discuss the robustness of synthetic data results and suggest important considerations for ensuring good neural network generalization when training with synthetic data.",
"title": ""
},
{
"docid": "7a12529d179d9ca6b94dbac57c54059f",
"text": "A novel design of a hand functions task training robotic system was developed for the stroke rehabilitation. It detects the intention of hand opening or hand closing from the stroke person using the electromyography (EMG) signals measured from the hemiplegic side. This training system consists of an embedded controller and a robotic hand module. Each hand robot has 5 individual finger assemblies capable to drive 2 degrees of freedom (DOFs) of each finger at the same time. Powered by the linear actuator, the finger assembly achieves 55 degree range of motion (ROM) at the metacarpophalangeal (MCP) joint and 65 degree range of motion (ROM) at the proximal interphalangeal (PIP) joint. Each finger assembly can also be adjusted to fit for different finger length. With this task training system, stroke subject can open and close their impaired hand using their own intention to carry out some of the daily living tasks.",
"title": ""
},
{
"docid": "430609545d1ce22e341d3682c27629fb",
"text": "In order to meet the increasing environmental and economic requirements, commercial aircraft industries have been challenged to reduce fuel consumption, noise and emissions. As a result, more electrical aircraft (MEA), on which engine driven electrical power replaces other primary powers, is being investigated. However, with the increasing demands of electrical power capacity on MEA, the engines have to be redesigned to supply enough power and space for bigger generators. In order to avoid this problem, fuel cell systems could partially/entirely replace the engine driven generators to supply electric power on board. Compared to the traditional electrical power system which is driven by main engines/auxiliary power unit (APU) on MEA, fuel cell based systems would be more efficient and more environmental friendly. Also, fuel cells could work continuously during the entire flight envelope. This paper introduces fuel cell system concepts on MEA. Characters of solid oxide fuel cell (SOFC) and polymer electrolyte membrane fuel cell (PEMFC) are compared. An SOFC APU application on MEA is introduced. Finally, challenges of fell cells application on MEA are discussed.",
"title": ""
},
{
"docid": "4dc38ae50a2c806321020de4a140ed5f",
"text": "Transcranial direct current stimulation (tDCS) is a promising technology to enhance cognitive and physical performance. One of the major areas of interest is the enhancement of memory function in healthy individuals. The early arrival of tDCS on the market for lifestyle uses and cognitive enhancement purposes lead to the voicing of some important ethical concerns, especially because, to date, there are no official guidelines or evaluation procedures to tackle these issues. The aim of this article is to review ethical issues related to uses of tDCS for memory enhancement found in the ethics and neuroscience literature and to evaluate how realistic and scientifically well-founded these concerns are? In order to evaluate how plausible or speculative each issue is, we applied the methodological framework described by Racine et al. (2014) for \"informed and reflective\" speculation in bioethics. This framework could be succinctly presented as requiring: (1) the explicit acknowledgment of factual assumptions and identification of the value attributed to them; (2) the validation of these assumptions with interdisciplinary literature; and (3) the adoption of a broad perspective to support more comprehensive reflection on normative issues. We identified four major considerations associated with the development of tDCS for memory enhancement: safety, autonomy, justice and authenticity. In order to assess the seriousness and likelihood of harm related to each of these concerns, we analyzed the assumptions underlying the ethical issues, and the level of evidence for each of them. We identified seven distinct assumptions: prevalence, social acceptance, efficacy, ideological stance (bioconservative vs. libertarian), potential for misuse, long term side effects, and the delivery of complete and clear information. We conclude that ethical discussion about memory enhancement via tDCS sometimes involves undue speculation, and closer attention to scientific and social facts would bring a more nuanced analysis. At this time, the most realistic concerns are related to safety and violation of users' autonomy by a breach of informed consent, as potential immediate and long-term health risks to private users remain unknown or not well defined. Clear and complete information about these risks must be provided to research participants and consumers of tDCS products or related services. Broader public education initiatives and warnings would also be worthwhile to reach those who are constructing their own tDCS devices.",
"title": ""
},
{
"docid": "5fe45b44d4e113e1f9b1867ac7244074",
"text": "Wireless sensor networks (WSNs) will play a key role in the extension of the smart grid towards residential premises, and enable various demand and energy management applications. Efficient demand-supply balance and reducing electricity expenses and carbon emissions will be the immediate benefits of these applications. In this paper, we evaluate the performance of an in-home energy management (iHEM) application. The performance of iHEM is compared with an optimization-based residential energy management (OREM) scheme whose objective is to minimize the energy expenses of the consumers. We show that iHEM decreases energy expenses, reduces the contribution of the consumers to the peak load, reduces the carbon emissions of the household, and its savings are close to OREM. On the other hand, iHEM application is more flexible as it allows communication between the controller and the consumer utilizing the wireless sensor home area network (WSHAN). We evaluate the performance of iHEM under the presence of local energy generation capability, prioritized appliances, and for real-time pricing. We show that iHEM reduces the expenses of the consumers for each case. Furthermore, we show that packet delivery ratio, delay, and jitter of the WSHAN improve as the packet size of the monitoring applications, that also utilize the WSHAN, decreases.",
"title": ""
},
{
"docid": "192663cdecdcfda1f86605adbc3c6a56",
"text": "With the introduction of IT to conduct business we accepted the loss of a human control step. For this reason, the introduction of new IT systems was accompanied by the development of the authorization concept. But since, in reality, there is no such thing as 100 per cent security; auditors are commissioned to examine all transactions for misconduct. Since the data exists in digital form already, it makes sense to use computer-based processes to analyse it. Such processes allow the auditor to carry out extensive checks within an acceptable timeframe and with reasonable effort. Once the algorithm has been defined, it only takes sufficient computing power to evaluate larger quantities of data. This contribution presents the state of the art for IT-based data analysis processes that can be used to identify fraudulent activities.",
"title": ""
},
{
"docid": "80de9b0ba596c19bfc8a99fd46201a99",
"text": "We integrate the recently proposed spatial transformer network (SPN) ( Jaderberg & Simonyan , 2015) into a recurrent neural network (RNN) to form an RNN-SPN model. We use the RNNSPN to classify digits in cluttered MNIST sequences. The proposed model achieves a single digit error of 1.5% compared to 2.9% for a convolutional networks and 2.0% for convolutional networks with SPN layers. The SPN outputs a zoomed, rotated and skewed version of the input image. We investigate different down-sampling factors (ratio of pixel in input and output) for the SPN and show that the RNN-SPN model is able to down-sample the input images without deteriorating performance. The down-sampling in RNN-SPN can be thought of as adaptive downsampling that minimizes the information loss in the regions of interest. We attribute the superior performance of the RNN-SPN to the fact that it can attend to a sequence of regions of interest.",
"title": ""
},
{
"docid": "0a7db914781aacb79a7139f3da41efbb",
"text": "This work studies the reliability behaviour of gate oxides grown by in situ steam generation technology. A comparison with standard steam oxides is performed, investigating interface and bulk properties. A reduced conduction at low fields and an improved reliability is found for ISSG oxide. The initial lower bulk trapping, but with similar degradation rate with respect to standard oxides, explains the improved reliability results. 2004 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "f4bf4be69ea3f3afceca056e2b5b8102",
"text": "In this paper we present a conversational dialogue system, Ch2R (Chinese Chatter Robot) for online shopping guide, which allows users to inquire about information of mobile phone in Chinese. The purpose of this paper is to describe our development effort in terms of the underlying human language technologies (HLTs) as well as other system issues. We focus on a mixed-initiative conversation mechanism for interactive shopping guide combining initiative guiding and question understanding. We also present some evaluation on the system in mobile phone shopping guide domain. Evaluation results demonstrate the efficiency of our approach.",
"title": ""
},
{
"docid": "9b702c679d7bbbba2ac29b3a0c2f6d3b",
"text": "Mobile-edge computing (MEC) has recently emerged as a prominent technology to liberate mobile devices from computationally intensive workloads, by offloading them to the proximate MEC server. To make offloading effective, the radio and computational resources need to be dynamically managed, to cope with the time-varying computation demands and wireless fading channels. In this paper, we develop an online joint radio and computational resource management algorithm for multi-user MEC systems, with the objective of minimizing the long-term average weighted sum power consumption of the mobile devices and the MEC server, subject to a task buffer stability constraint. Specifically, at each time slot, the optimal CPU-cycle frequencies of the mobile devices are obtained in closed forms, and the optimal transmit power and bandwidth allocation for computation offloading are determined with the Gauss-Seidel method; while for the MEC server, both the optimal frequencies of the CPU cores and the optimal MEC server scheduling decision are derived in closed forms. Besides, a delay-improved mechanism is proposed to reduce the execution delay. Rigorous performance analysis is conducted for the proposed algorithm and its delay-improved version, indicating that the weighted sum power consumption and execution delay obey an $\\left [{O\\left ({1 / V}\\right), O\\left ({V}\\right) }\\right ]$ tradeoff with $V$ as a control parameter. Simulation results are provided to validate the theoretical analysis and demonstrate the impacts of various parameters.",
"title": ""
},
{
"docid": "c83d034e052926520677d0c5880f8800",
"text": "Sperm vitality is a reflection of the proportion of live, membrane-intact spermatozoa determined by either dye exclusion or osmoregulatory capacity under hypo-osmotic conditions. In this chapter we address the two most common methods of sperm vitality assessment: eosin-nigrosin staining and the hypo-osmotic swelling test, both utilized in clinical Andrology laboratories.",
"title": ""
}
] | scidocsrr |
1248a2ef2907eae2afb2a8d073912018 | Simultaneous localization and mapping with infinite planes | [
{
"docid": "3b9ad8509b9b59e4673d1f6e375ab722",
"text": "This paper describes a system for performing multisession visual mapping in large-scale environments. Multi-session mapping considers the problem of combining the results of multiple Simultaneous Localisation and Mapping (SLAM) missions performed repeatedly over time in the same environment. The goal is to robustly combine multiple maps in a common metrical coordinate system, with consistent estimates of uncertainty. Our work employs incremental Smoothing and Mapping (iSAM) as the underlying SLAM state estimator and uses an improved appearance-based method for detecting loop closures within single mapping sessions and across multiple sessions. To stitch together pose graph maps from multiple visual mapping sessions, we employ spatial separator variables, called anchor nodes, to link together multiple relative pose graphs. We provide experimental results for multi-session visual mapping in the MIT Stata Center, demonstrating key capabilities that will serve as a foundation for future work in large-scale persistent visual mapping.",
"title": ""
}
] | [
{
"docid": "be18a6729dc170fc03b61436c99c843d",
"text": "Hepatitis C virus (HCV) is a major cause of liver disease worldwide and a potential cause of substantial morbidity and mortality in the future. The complexity and uncertainty related to the geographic distribution of HCV infection and chronic hepatitis C, determination of its associated risk factors, and evaluation of cofactors that accelerate its progression, underscore the difficulties in global prevention and control of HCV. Because there is no vaccine and no post-exposure prophylaxis for HCV, the focus of primary prevention efforts should be safer blood supply in the developing world, safe injection practices in health care and other settings, and decreasing the number of people who initiate injection drug use.",
"title": ""
},
{
"docid": "5c11d9004e57395641a63cd50f8baefa",
"text": "Current digital painting tools are primarily targeted at professionals and are often overwhelmingly complex for use by novices. At the same time, simpler tools may not invoke the user creatively, or are limited to plain styles that lack visual sophistication. There are many people who are not art professionals, yet would like to partake in digital creative expression. Challenges and rewards for novices differ greatly from those for professionals. In this paper, we leverage existing works in Creativity and Creativity Support Tools (CST) to formulate design goals specifically for digital art creation tools for novices. We implemented these goals within a digital painting system, called Painting with Bob. We evaluate the efficacy of the design and our prototype with a user study, and we find that users are highly satisfied with the user experience, as well as the paintings created with our system.",
"title": ""
},
{
"docid": "9b10757ca3ca84784033c20f064078b7",
"text": "Snafu, or Snake Functions, is a modular system to host, execute and manage language-level functions offered as stateless (micro-)services to diverse external triggers. The system interfaces resemble those of commercial FaaS providers but its implementation provides distinct features which make it overall useful to research on FaaS and prototyping of FaaSbased applications. This paper argues about the system motivation in the presence of already existing alternatives, its design and architecture, the open source implementation and collected metrics which characterise the system.",
"title": ""
},
{
"docid": "4859363a5f64977336d107794251a203",
"text": "The paper treats a modular program in which transfers of control between modules follow a semi-Markov process. Each module is failure-prone, and the different failure processes are assumed to be Poisson. The transfers of control between modules (interfaces) are themselves subject to failure. The overall failure process of the program is described, and an asymptotic Poisson process approximation is given for the case when the individual modules and interfaces are very reliable. A simple formula gives the failure rate of the overall program (and hence mean time between failures) under this limiting condition. The remainder of the paper treats the consequences of failures. Each failure results in a cost, represented by a random variable with a distribution typical of the type of failure. The quantity of interest is the total cost of running the program for a time t, and a simple approximating distribution is given for large t. The parameters of this limiting distribution are functions only of the means and variances of the underlying distributions, and are thus readily estimable. A calculation of program availability is given as an example of the cost process. There follows a brief discussion of methods of estimating the parameters of the model, with suggestions of areas in which it might be used.",
"title": ""
},
{
"docid": "995376c324ff12a0be273e34f44056df",
"text": "Conventional Gabor representation and its extracted features often yield a fairly poor performance in retrieving the rotated and scaled versions of the texture image under query. To address this issue, existing methods exploit multiple stages of transformations for making rotation and/or scaling being invariant at the expense of high computational complexity and degraded retrieval performance. The latter is mainly due to the lost of image details after multiple transformations. In this paper, a rotation-invariant and a scale-invariant Gabor representations are proposed, where each representation only requires few summations on the conventional Gabor filter impulse responses. The optimum setting of the orientation parameter and scale parameter is experimentally determined over the Brodatz and MPEG-7 texture databases. Features are then extracted from these new representations for conducting rotation-invariant or scale-invariant texture image retrieval. Since the dimension of the new feature space is much reduced, this leads to a much smaller metadata storage space and faster on-line computation on the similarity measurement. Simulation results clearly show that our proposed invariant Gabor representations and their extracted invariant features significantly outperform the conventional Gabor representation approach for rotation-invariant and scale-invariant texture image retrieval. 2007 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "39fcc45d79680c7e231643d6c75aee18",
"text": "This paper presents a Kernel Entity Salience Model (KESM) that improves text understanding and retrieval by better estimating entity salience (importance) in documents. KESM represents entities by knowledge enriched distributed representations, models the interactions between entities and words by kernels, and combines the kernel scores to estimate entity salience. The whole model is learned end-to-end using entity salience labels. The salience model also improves ad hoc search accuracy, providing effective ranking features by modeling the salience of query entities in candidate documents. Our experiments on two entity salience corpora and two TREC ad hoc search datasets demonstrate the effectiveness of KESM over frequency-based and feature-based methods. We also provide examples showing how KESM conveys its text understanding ability learned from entity salience to search.",
"title": ""
},
{
"docid": "6e4d4e1fa86a0566c24cb045616fd4b7",
"text": "Hardcore, jungle, and drum and bass (HJDB) are fastpaced electronic dance music genres that often employ resequenced breakbeats or drum samples from jazz and funk percussionist solos. We present a style-specific method for downbeat detection specifically designed for HJDB. The presented method combines three forms of metrical information in the prediction of downbeats: lowlevel onset event information; periodicity information from beat tracking; and high-level information from a regression model trained with classic breakbeats. In an evaluation using 206 HJDB pieces, we demonstrate superior accuracy of our style specific method over four general downbeat detection algorithms. We present this result to motivate the need for style-specific knowledge and techniques for improved downbeat detection.",
"title": ""
},
{
"docid": "38570075c31812866646d47d25667a49",
"text": "Mercator is a program that uses hop-limited probes—the same primitive used in traceroute—to infer an Internet map. It uses informed random address probing to carefully exploring the IP address space when determining router adjacencies, uses source-route ca p ble routers wherever possible to enhance the fidelity of the resulting ma p, and employs novel mechanisms for resolvingaliases(interfaces belonging to the same router). This paper describes the design of these heuri stics and our experiences with Mercator, and presents some preliminary a nalysis of the resulting Internet map.",
"title": ""
},
{
"docid": "70fd543752f17237386b3f8e99954230",
"text": "Using Markov logic to integrate logical and distributional information in natural-language semantics results in complex inference problems involving long, complicated formulae. Current inference methods for Markov logic are ineffective on such problems. To address this problem, we propose a new inference algorithm based on SampleSearch that computes probabilities of complete formulae rather than ground atoms. We also introduce a modified closed-world assumption that significantly reduces the size of the ground network, thereby making inference feasible. Our approach is evaluated on the recognizing textual entailment task, and experiments demonstrate its dramatic impact on the efficiency",
"title": ""
},
{
"docid": "0ad68f20acf338f4051a93ba5e273187",
"text": "FlatCam is a thin form-factor lensless camera that consists of a coded mask placed on top of a bare, conventional sensor array. Unlike a traditional, lens-based camera where an image of the scene is directly recorded on the sensor pixels, each pixel in FlatCam records a linear combination of light from multiple scene elements. A computational algorithm is then used to demultiplex the recorded measurements and reconstruct an image of the scene. FlatCam is an instance of a coded aperture imaging system; however, unlike the vast majority of related work, we place the coded mask extremely close to the image sensor that can enable a thin system. We employ a separable mask to ensure that both calibration and image reconstruction are scalable in terms of memory requirements and computational complexity. We demonstrate the potential of the FlatCam design using two prototypes: one at visible wavelengths and one at infrared wavelengths.",
"title": ""
},
{
"docid": "0c79db142f913564654f53b6519f2927",
"text": "For software process improvement -SPIthere are few small organizations using models that guide the management and deployment of their improvement initiatives. This is largely because a lot of these models do not consider the special characteristics of small businesses, nor the appropriate strategies for deploying an SPI initiative in this type of organization. It should also be noted that the models which direct improvement implementation for small settings do not present an explicit process with which to organize and guide the internal work of the employees involved in the implementation of the improvement opportunities. In this paper we propose a lightweight process, which takes into account appropriate strategies for this type of organization. Our proposal, known as a “Lightweight process to incorporate improvements” uses the philosophy of the Scrum agile",
"title": ""
},
{
"docid": "6d26e03468a9d9c5b9952a5c07743db3",
"text": "Graphs are a powerful tool to model structured objects, but it is nontrivial to measure the similarity between two graphs. In this paper, we construct a two-graph model to represent human actions by recording the spatial and temporal relationships among local features. We also propose a novel family of context-dependent graph kernels (CGKs) to measure similarity between graphs. First, local features are used as the vertices of the two-graph model and the relationships among local features in the intra-frames and inter-frames are characterized by the edges. Then, the proposed CGKs are applied to measure the similarity between actions represented by the two-graph model. Graphs can be decomposed into numbers of primary walk groups with different walk lengths and our CGKs are based on the context-dependent primary walk group matching. Taking advantage of the context information makes the correctly matched primary walk groups dominate in the CGKs and improves the performance of similarity measurement between graphs. Finally, a generalized multiple kernel learning algorithm with a proposed l12-norm regularization is applied to combine these CGKs optimally together and simultaneously train a set of action classifiers. We conduct a series of experiments on several public action datasets. Our approach achieves a comparable performance to the state-of-the-art approaches, which demonstrates the effectiveness of the two-graph model and the CGKs in recognizing human actions.",
"title": ""
},
{
"docid": "e9402a771cc761e7e6484c2be6bc2cce",
"text": "In this work, we present the Text Conditioned Auxiliary Classifier Generative Adversarial Network, (TAC-GAN) a text to image Generative Adversarial Network (GAN) for synthesizing images from their text descriptions. Former approaches have tried to condition the generative process on the textual data; but allying it to the usage of class information, known to diversify the generated samples and improve their structural coherence, has not been explored. We trained the presented TAC-GAN model on the Oxford102 dataset of flowers, and evaluated the discriminability of the generated images with Inception-Score, as well as their diversity using the Multi-Scale Structural Similarity Index (MS-SSIM). Our approach outperforms the stateof-the-art models, i.e., its inception score is 3.45, corresponding to a relative increase of 7.8% compared to the recently introduced StackGan. A comparison of the mean MS-SSIM scores of the training and generated samples per class shows that our approach is able to generate highly diverse images with an average MS-SSIM of 0.14 over all generated classes.",
"title": ""
},
{
"docid": "1fa056e87c10811b38277d161c81c2ac",
"text": "In this study, six kinds of the drivetrain systems of electric motor drives for EVs are discussed. Furthermore, the requirements of EVs on electric motor drives are presented. The comparative investigation on the efficiency, weight, cost, cooling, maximum speed, and fault-tolerance, safety, and reliability is carried out for switched reluctance motor, induction motor, permanent magnet blushless DC motor, and brushed DC motor drives, in order to find most appropriate electric motor drives for electric vehicle applications. The study shows that switched reluctance motor drives are the prior choice for electric vehicles.",
"title": ""
},
{
"docid": "c422ef1c225f2dc2483c5a4093333b57",
"text": "The rapid advancement in the electronic commerce technology makes electronic transaction an indispensable part of our daily life. While, this way of transaction has always been facing security problems. Researchers persevere in looking for fraud transaction detection methodologies. A promising paradigm is to devise dedicated detectors for the typical patterns of fraudulent transactions. Unfortunately, this paradigm is really constrained by the lack of real electronic transaction data, especially real fraudulent samples. In this paper, by analyzing real B2C electronic transaction data provided by an Asian bank, from the perspective of transaction sequence, we discover a typical pattern of fraud transactions: Most of the fraud transactions are fast and repeated transactions between the same customer and the same vendor, and all the transaction amounts are nearly the same. We name this pattern Replay Attack. We prove the prominent existence of Replay Attack by comprehensive statistics, and we propose a novel fraud transaction detector, Replay Attack Killer (RAK). By experiment, we show that RAK can catch up to 92% fraud transactions in real time but only disturb less than 0.06% normal transactions.",
"title": ""
},
{
"docid": "c5147ed058b546048bd72dde768976dd",
"text": "51 Abstract— Cryptarithmetic puzzles are quite old and their inventor is not known. An example in The American Agriculturist of 1864 disproves the popular notion that it was invented by Sam Loyd. The name cryptarithmetic was coined by puzzlist Minos (pseudonym of Maurice Vatriquant) in the May 1931 issue of Sphinx, a Belgian magazine of recreational mathematics. In the 1955, J. A. H. Hunter introduced the word \"alphabetic\" to designate cryptarithms, such as Dudeney's, whose letters from meaningful words or phrases. Solving a cryptarithm by hand usually involves a mix of deductions and exhaustive tests of possibilities. Cryptarithmetic is a puzzle consisting of an arithmetic problem in which the digits have been replaced by letters of the alphabet. The goal is to decipher the letters (i.e. Map them back onto the digits) using the constraints provided by arithmetic and the additional constraint that no two letters can have the same numerical value. Cryptarithmetic is a class of constraint satisfaction problems which includes making mathematical relations between meaningful words using simple arithmetic operators like 'plus' in a way that the result is conceptually true, and assigning digits to the letters of these words and generating numbers in order to make correct arithmetic operations as well",
"title": ""
},
{
"docid": "b174bbcb91d35184674532b6ab22dcdf",
"text": "Many studies have confirmed the benefit of gamification on learners’ motivation. However, gamification may also demotivate some learners, or learners may focus on the gamification elements instead of the learning content. Some researchers have recommended building learner models that can be used to adapt gamification elements based on learners’ personalities. Building such a model requires a strong understanding of the relationship between gamification and personality. Existing empirical work has focused on measuring knowledge gain and learner preference. These findings may not be reliable because the analyses are based on learners who complete the study and because they rely on self-report from learners. This preliminary study explores a different approach by allowing learners to drop out at any time and then uses the number of students left as a proxy for motivation and engagement. Survival analysis is used to analyse the data. The results confirm the benefits of gamification and provide some pointers to how this varies with personality.",
"title": ""
},
{
"docid": "f9d333d7d8aa3f7fb834b202a3b10a3b",
"text": "Human skin is the largest organ in our body which provides protection against heat, light, infections and injury. It also stores water, fat, and vitamin. Cancer is the leading cause of death in economically developed countries and the second leading cause of death in developing countries. Skin cancer is the most commonly diagnosed type of cancer among men and women. Exposure to UV rays, modernize diets, smoking, alcohol and nicotine are the main cause. Cancer is increasingly recognized as a critical public health problem in Ethiopia. There are three type of skin cancer and they are recognized based on their own properties. In view of this, a digital image processing technique is proposed to recognize and predict the different types of skin cancers using digital image processing techniques. Sample skin cancer image were taken from American cancer society research center and DERMOFIT which are popular and widely focuses on skin cancer research. The classification system was supervised corresponding to the predefined classes of the type of skin cancer. Combining Self organizing map (SOM) and radial basis function (RBF) for recognition and diagnosis of skin cancer is by far better than KNN, Naïve Bayes and ANN classifier. It was also showed that the discrimination power of morphology and color features was better than texture features but when morphology, texture and color features were used together the classification accuracy was increased. The best classification accuracy (88%, 96.15% and 95.45% for Basal cell carcinoma, Melanoma and Squamous cell carcinoma respectively) were obtained using combining SOM and RBF. The overall classification accuracy was 93.15%.",
"title": ""
},
{
"docid": "1c46fbf6a21aa1c80cec9382bb3d45fa",
"text": "BACKGROUND\nNusinersen is an antisense oligonucleotide drug that modulates pre-messenger RNA splicing of the survival motor neuron 2 ( SMN2) gene. It has been developed for the treatment of spinal muscular atrophy (SMA).\n\n\nMETHODS\nWe conducted a multicenter, double-blind, sham-controlled, phase 3 trial of nusinersen in 126 children with SMA who had symptom onset after 6 months of age. The children were randomly assigned, in a 2:1 ratio, to undergo intrathecal administration of nusinersen at a dose of 12 mg (nusinersen group) or a sham procedure (control group) on days 1, 29, 85, and 274. The primary end point was the least-squares mean change from baseline in the Hammersmith Functional Motor Scale-Expanded (HFMSE) score at 15 months of treatment; HFMSE scores range from 0 to 66, with higher scores indicating better motor function. Secondary end points included the percentage of children with a clinically meaningful increase from baseline in the HFMSE score (≥3 points), an outcome that indicates improvement in at least two motor skills.\n\n\nRESULTS\nIn the prespecified interim analysis, there was a least-squares mean increase from baseline to month 15 in the HFMSE score in the nusinersen group (by 4.0 points) and a least-squares mean decrease in the control group (by -1.9 points), with a significant between-group difference favoring nusinersen (least-squares mean difference in change, 5.9 points; 95% confidence interval, 3.7 to 8.1; P<0.001). This result prompted early termination of the trial. Results of the final analysis were consistent with results of the interim analysis. In the final analysis, 57% of the children in the nusinersen group as compared with 26% in the control group had an increase from baseline to month 15 in the HFMSE score of at least 3 points (P<0.001), and the overall incidence of adverse events was similar in the nusinersen group and the control group (93% and 100%, respectively).\n\n\nCONCLUSIONS\nAmong children with later-onset SMA, those who received nusinersen had significant and clinically meaningful improvement in motor function as compared with those in the control group. (Funded by Biogen and Ionis Pharmaceuticals; CHERISH ClinicalTrials.gov number, NCT02292537 .).",
"title": ""
},
{
"docid": "d0c8a1faccfa3f0469e6590cc26097c8",
"text": "This paper introduces an automatic method for editing a portrait photo so that the subject appears to be wearing makeup in the style of another person in a reference photo. Our unsupervised learning approach relies on a new framework of cycle-consistent generative adversarial networks. Different from the image domain transfer problem, our style transfer problem involves two asymmetric functions: a forward function encodes example-based style transfer, whereas a backward function removes the style. We construct two coupled networks to implement these functions - one that transfers makeup style and a second that can remove makeup - such that the output of their successive application to an input photo will match the input. The learned style network can then quickly apply an arbitrary makeup style to an arbitrary photo. We demonstrate the effectiveness on a broad range of portraits and styles.",
"title": ""
}
] | scidocsrr |
a85c0f790bf8313452e9ea38d4c94096 | A mobile health application for falls detection and biofeedback monitoring | [
{
"docid": "f8cc65321723e9bd54b5aea4052542fc",
"text": "Falls in elderly is a major health problem and a cost burden to social services. Thus automatic fall detectors are needed to support the independence and security of the elderly. The goal of this research is to develop a real-time portable wireless fall detection system, which is capable of automatically discriminating between falls and Activities of Daily Life (ADL). The fall detection system contains a portable fall-detection terminal and a monitoring centre, both of which communicate with ZigBee protocol. To extract the features of falls, falls data and ADL data obtained from young subjects are analyzed. Based on the characteristics of falls, an effective fall detection algorithm using tri-axis accelerometers is introduced, and the results show that falls can be distinguished from ADL with a sensitivity over 95% and a specificity of 100%, for a total set of 270 movements.",
"title": ""
}
] | [
{
"docid": "4aec21b8c4bf0cd71130f6dccd251376",
"text": "Access to capital in the form of credit through money lending requires that the lender to be able to measure the risk of repayment for a given return. In ancient times money lending needed to occur between known parties or required collateral to secure the loan. In the modern era of banking institutions provide loans to individuals who meet a qualification test. Grameen Bank in Bangladesh has demonstrated that small poor communities benefited from the \"microcredit\" financial innovation, which allowed a priori non-bankable entrepreneurs to engage in self-employment projects. Online P2P (Peer to Peer) lending is considered an evolution of the microcredit concept, and reflects the application of its principles into internet communities. Internet ventures like Prosper.com, Zopa or Lendingclub.com, provide the means for lenders and borrowers to meet, interact and define relationships as part of social groups. This paper measures the influence of social interactions in the risk evaluation of a money request; with special focus on the impact of one-to-one and one-to-many relationships. The results showed that fostering social features increases the chances of getting a loan fully funded, when financial features are not enough to construct a differentiating successful credit request. For this task, a model-based clustering method was applied on actual P2P Lending data provided by Prosper.com.",
"title": ""
},
{
"docid": "ab01efad4c65bbed9e4a499844683326",
"text": "To achieve good generalization in supervised learning, the training and testing examples are usually required to be drawn from the same source distribution. In this paper we propose a method to relax this requirement in the context of logistic regression. Assuming <i>D<sup>p</sup></i> and <i>D<sup>a</sup></i> are two sets of examples drawn from two mismatched distributions, where <i>D<sup>a</sup></i> are fully labeled and <i>D<sup>p</sup></i> partially labeled, our objective is to complete the labels of <i>D<sup>p</sup>.</i> We introduce an auxiliary variable μ for each example in <i>D<sup>a</sup></i> to reflect its mismatch with <i>D<sup>p</sup>.</i> Under an appropriate constraint the μ's are estimated as a byproduct, along with the classifier. We also present an active learning approach for selecting the labeled examples in <i>D<sup>p</sup>.</i> The proposed algorithm, called \"Migratory-Logit\" or M-Logit, is demonstrated successfully on simulated as well as real data sets.",
"title": ""
},
{
"docid": "bfd946e8b668377295a1672a7bb915a3",
"text": "Code-Mixing is a frequently observed phenomenon in social media content generated by multi-lingual users. The processing of such data for linguistic analysis as well as computational modelling is challenging due to the linguistic complexity resulting from the nature of the mixing as well as the presence of non-standard variations in spellings and grammar, and transliteration. Our analysis shows the extent of Code-Mixing in English-Hindi data. The classification of Code-Mixed words based on frequency and linguistic typology underline the fact that while there are easily identifiable cases of borrowing and mixing at the two ends, a large majority of the words form a continuum in the middle, emphasizing the need to handle these at different levels for automatic processing of the data.",
"title": ""
},
{
"docid": "f04eb852a050249ba5e6d38ee4a7d54c",
"text": "The project Legal Semantic WebA Recommendation System makes use of the Semantic Web and it is used for the proactive legal decision making. With the help of web semantics, a lawyer handling a new case can filter out similar cases from the court case repository implemented using RDF (Resource Description Framework), and from here he can extract the judgments done on those similar cases. In this way he can better prepare himself with similar judgments in his hands which will guide him to an improved argumentation. The role of web semantics here is that it introduces intelligent matching of the court case details. The search is not only thorough but also accurate and precise to the maximum level of attainment with the use of ontology designed exclusively for this purpose.",
"title": ""
},
{
"docid": "b8d41b4b440641d769f58189db8eaf91",
"text": "Differential diagnosis of trichotillomania is often difficult in clinical practice. Trichoscopy (hair and scalp dermoscopy) effectively supports differential diagnosis of various hair and scalp diseases. The aim of this study was to assess the usefulness of trichoscopy in diagnosing trichotillomania. The study included 370 patients (44 with trichotillomania, 314 with alopecia areata and 12 with tinea capitis). Statistical analysis revealed that the main and most characteristic trichoscopic findings of trichotillomania are: irregularly broken hairs (44/44; 100% of patients), v-sign (24/44; 57%), flame hairs (11/44; 25%), hair powder (7/44; 16%) and coiled hairs (17/44; 39%). Flame hairs, v-sign, tulip hairs, and hair powder were newly identified in this study. In conclusion, we describe here specific trichoscopy features, which may be applied in quick, non-invasive, in-office differential diagnosis of trichotillomania.",
"title": ""
},
{
"docid": "82708e65107a0877a052ce81294f535c",
"text": "Abstract—Cyber exercises used to assess the preparedness of a community against cyber crises, technology failures and Critical Information Infrastructure (CII) incidents. The cyber exercises also called cyber crisis exercise or cyber drill, involved partnerships or collaboration of public and private agencies from several sectors. This study investigates Organisation Cyber Resilience (OCR) of participation sectors in cyber exercise called X Maya in Malaysia. This study used a principal based cyber resilience survey called CSuite Executive checklist developed by World Economic Forum in 2012. To ensure suitability of the survey to investigate the OCR, the reliability test was conducted on C-Suite Executive checklist items. The research further investigates the differences of OCR in ten Critical National Infrastructure Information (CNII) sectors participated in the cyber exercise. The One Way ANOVA test result showed a statistically significant difference of OCR among ten CNII sectors participated in the cyber exercise.",
"title": ""
},
{
"docid": "9d9665a21e5126ba98add5a832521cd1",
"text": "Recently several different deep learning architectures have been proposed that take a string of characters as the raw input signal and automatically derive features for text classification. Few studies are available that compare the effectiveness of these approaches for character based text classification with each other. In this paper we perform such an empirical comparison for the important cybersecurity problem of DGA detection: classifying domain names as either benign vs. produced by malware (i.e., by a Domain Generation Algorithm). Training and evaluating on a dataset with 2M domain names shows that there is surprisingly little difference between various convolutional neural network (CNN) and recurrent neural network (RNN) based architectures in terms of accuracy, prompting a preference for the simpler architectures, since they are faster to train and to score, and less prone to overfitting.",
"title": ""
},
{
"docid": "3323feaddbdf0937cef4ecf7dcedc263",
"text": "Cloud storage services have become increasingly popular. Because of the importance of privacy, many cloud storage encryption schemes have been proposed to protect data from those who do not have access. All such schemes assumed that cloud storage providers are safe and cannot be hacked; however, in practice, some authorities (i.e., coercers) may force cloud storage providers to reveal user secrets or confidential data on the cloud, thus altogether circumventing storage encryption schemes. In this paper, we present our design for a new cloud storage encryption scheme that enables cloud storage providers to create convincing fake user secrets to protect user privacy. Since coercers cannot tell if obtained secrets are true or not, the cloud storage providers ensure that user privacy is still securely protected.",
"title": ""
},
{
"docid": "6d44c4244064634deda30a5059acd87e",
"text": "Currently, gene sequence genealogies of the Oligotrichea Bütschli, 1889 comprise only few species. Therefore, a cladistic approach, especially to the Oligotrichida, was made, applying Hennig's method and computer programs. Twenty-three characters were selected and discussed, i.e., the morphology of the oral apparatus (five characters), the somatic ciliature (eight characters), special organelles (four characters), and ontogenetic particulars (six characters). Nine of these characters developed convergently twice. Although several new features were included into the analyses, the cladograms match other morphological trees in the monophyly of the Oligotrichea, Halteriia, Oligotrichia, Oligotrichida, and Choreotrichida. The main synapomorphies of the Oligotrichea are the enantiotropic division mode and the de novo-origin of the undulating membranes. Although the sister group relationship of the Halteriia and the Oligotrichia contradicts results obtained by gene sequence analyses, no morphologic, ontogenetic or ultrastructural features were found, which support a branching of Halteria grandinella within the Stichotrichida. The cladistic approaches suggest paraphyly of the family Strombidiidae probably due to the scarce knowledge. A revised classification of the Oligotrichea is suggested, including all sufficiently known families and genera.",
"title": ""
},
{
"docid": "ab989f39a5dd2ba3c98c0ffddd5c85cb",
"text": "This paper proposes a revision of the multichannel concept as it has been applied in previous studies on multichannel commerce. Digitalization and technological innovations have blurred the line between physical and electronic channels. A structured literature review on multichannel consumer and firm behaviour is conducted to reveal the established view on multichannel phenomena. By providing empirical evidence on market offerings and consumer perceptions, we expose a significant mismatch between the dominant conceptualization of multichannel commerce applied in research and today’s market realities. This tension highlights the necessity for a changed view on multichannel commerce to study and understand phenomena in converging sales channels. Therefore, an extended conceptualization of multichannel commerce, named the multichannel continuum, is proposed. This is the first study that considers the broad complexity of integrated multichannel decisions. It aims at contributing to the literature on information systems and channel choice by developing a reference frame for studies on how technological advancements that allow the integration of different channels shape consumer and firm decision making in multichannel commerce. Accordingly, a brief research agenda contrasts established findings with unanswered questions, challenges and opportunities that arise in this more complex multichannel market environment.",
"title": ""
},
{
"docid": "e267d6bd0aa5f260095993525b790018",
"text": "Strassen's matrix multiplication (MM) has benefits with respect to any (highly tuned) implementations of MM because Strassen's reduces the total number of operations. Strassen achieved this operation reduction by replacing computationally expensive MMs with matrix additions (MAs). For architectures with simple memory hierarchies, having fewer operations directly translates into an efficient utilization of the CPU and, thus, faster execution. However, for modern architectures with complex memory hierarchies, the operations introduced by the MAs have a limited in-cache data reuse and thus poor memory-hierarchy utilization, thereby overshadowing the (improved) CPU utilization, and making Strassen's algorithm (largely) useless on its own.\n In this paper, we investigate the interaction between Strassen's effective performance and the memory-hierarchy organization. We show how to exploit Strassen's full potential across different architectures. We present an easy-to-use adaptive algorithm that combines a novel implementation of Strassen's idea with the MM from automatically tuned linear algebra software (ATLAS) or GotoBLAS. An additional advantage of our algorithm is that it applies to any size and shape matrices and works equally well with row or column major layout. Our implementation consists of introducing a final step in the ATLAS/GotoBLAS-installation process that estimates whether or not we can achieve any additional speedup using our Strassen's adaptation algorithm. Then we install our codes, validate our estimates, and determine the specific performance.\n We show that, by the right combination of Strassen's with ATLAS/GotoBLAS, our approach achieves up to 30%/22% speed-up versus ATLAS/GotoBLAS alone on modern high-performance single processors. We consider and present the complexity and the numerical analysis of our algorithm, and, finally, we show performance for 17 (uniprocessor) systems.",
"title": ""
},
{
"docid": "9c8204510362de8a5362400fc4d26e24",
"text": "We focus on predicting sleep stages from radio measurements without any attached sensors on subjects. We introduce a new predictive model that combines convolutional and recurrent neural networks to extract sleep-specific subjectinvariant features from RF signals and capture the temporal progression of sleep. A key innovation underlying our approach is a modified adversarial training regime that discards extraneous information specific to individuals or measurement conditions, while retaining all information relevant to the predictive task. We analyze our game theoretic setup and empirically demonstrate that our model achieves significant improvements over state-of-the-art solutions.",
"title": ""
},
{
"docid": "750abc9e51aed62305187d7103e3f267",
"text": "This design paper presents new guidance for creating map legends in a dynamic environment. Our contribution is a set ofguidelines for legend design in a visualization context and a series of illustrative themes through which they may be expressed. Theseare demonstrated in an applications context through interactive software prototypes. The guidelines are derived from cartographicliterature and in liaison with EDINA who provide digital mapping services for UK tertiary education. They enhance approaches tolegend design that have evolved for static media with visualization by considering: selection, layout, symbols, position, dynamismand design and process. Broad visualization legend themes include: The Ground Truth Legend, The Legend as Statistical Graphicand The Map is the Legend. Together, these concepts enable us to augment legends with dynamic properties that address specificneeds, rethink their nature and role and contribute to a wider re-evaluation of maps as artifacts of usage rather than statements offact. EDINA has acquired funding to enhance their clients with visualization legends that use these concepts as a consequence ofthis work. The guidance applies to the design of a wide range of legends and keys used in cartography and information visualization.",
"title": ""
},
{
"docid": "e16d89d3a6b3d38b5823fae977087156",
"text": "The payoff of abarrier option depends on whether or not a specified asset price, index, or rate reaches a specified level during the life of the option. Most models for pricing barrier options assume continuous monitoring of the barrier; under this assumption, the option can often be priced in closed form. Many (if not most) real contracts with barrier provisions specify discrete monitoring instants; there are essentially no formulas for pricing these options, and even numerical pricing is difficult. We show, however, that discrete barrier options can be priced with remarkable accuracy using continuous barrier formulas by applying a simple continuity correction to the barrier. The correction shifts the barrier away from the underlying by a factor of exp (βσ √ 1t), whereβ ≈ 0.5826,σ is the underlying volatility, and1t is the time between monitoring instants. The correction is justified both theoretically and experimentally.",
"title": ""
},
{
"docid": "e4b6dbd8238160457f14aacb8f9717ff",
"text": "Abs t r ac t . The PKZIP program is one of the more widely used archive/ compression programs on personM, computers. It also has many compatible variants on other computers~ and is used by most BBS's and ftp sites to compress their archives. PKZIP provides a stream cipher which allows users to scramble files with variable length keys (passwords). In this paper we describe a known pla.intext attack on this cipher, which can find the internal representation of the key within a few hours on a personal computer using a few hundred bytes of known plaintext. In many cases, the actual user keys can also be found from the internal representation. We conclude that the PKZIP cipher is weak, and should not be used to protect valuable data.",
"title": ""
},
{
"docid": "f7b911eca27efc3b0535f8b48222f993",
"text": "Numerous entity linking systems are addressing the entity recognition problem by using off-the-shelf NER systems. It is, however, a difficult task to select which specific model to use for these systems, since it requires to judge the level of similarity between the datasets which have been used to train models and the dataset at hand to be processed in which we aim to properly recognize entities. In this paper, we present the newest version of ADEL, our adaptive entity recognition and linking framework, where we experiment with an hybrid approach mixing a model combination method to improve the recognition level and to increase the efficiency of the linking step by applying a filter over the types. We obtain promising results when performing a 4-fold cross validation experiment on the OKE 2016 challenge training dataset. We also demonstrate that we achieve better results that in our previous participation on the OKE 2015 test set. We finally report the results of ADEL on the OKE 2016 test set and we present an error analysis highlighting the main difficulties of this challenge.",
"title": ""
},
{
"docid": "6cbce08be2401cac8a2d04159222aa3a",
"text": "Optimal treatment of symptomatic accessory navicular bones, generally asymptomatic ‘extra’ ossicles in the front interior ankle, remains debated. Incidence and type of accessory navicular bones in Chinese patients were examined as a basis for improving diagnostic and treatment standards. Accessory navicular bones were retrospectively examined in 1,625 (790 men and 835 women) patients with trauma-induced or progressive symptomatic ankle pain grouped by gender and age from August 2011 to May 2012. Anterior–posterior/oblique X-ray images; presence; type; affected side; modified Coughlin’s classification types 1, 2A, 2B, and 3; and subgroups a–c were recorded. Accessory navicular bones were found in 329 (20.2 %) patients (143 men and 186 women; mean age, 47.24 ± 18.34, ranging 14–96 years). Patients aged 51–60 exhibited most accessory navicular bones (29.7 %), with risk slightly higher in women and generally increasing from minimal 10.9 % at ages 11–20 to age 51 and thereafter declining to 0.4 % by age 90. The incidence was 41.6 % for Type 1 (Type 1a: 9.1 %, Type 1b: 15.5 %, and Type 1c: 19.4 %), 36.8 % for Type 2 (Type 2Aa: 2.1 %, Type 2Ab: 13.7 %, Type 2Ac: 5.1 %, Type 2Ba: 2.1 %, 2Bb: 2.1 %, and 2Bc: 11.6 %), and 21.6 % for Type 3 (Type 3a: 4.5 %, Type 3b: 14 %, and Type 3c: 3.0 %). Approximately one-fifth (20.3 %) of ankle pain patients exhibited accessory navicular bones, with Type 2 most common and middle-aged patients most commonly affected. Thus, accessory navicular bones may be less rare than previously thought, underlying treatable symptomatic conditions of foot pain and deformity.",
"title": ""
},
{
"docid": "5285b2b579c8a0f0915e76e41d66330c",
"text": "Not all bugs lead to program crashes, and not always is there a formal specification to check the correctness of a software test's outcome. A common scenario in software testing is therefore that test data are generated, and a tester manually adds test oracles. As this is a difficult task, it is important to produce small yet representative test sets, and this representativeness is typically measured using code coverage. There is, however, a fundamental problem with the common approach of targeting one coverage goal at a time: Coverage goals are not independent, not equally difficult, and sometimes infeasible-the result of test generation is therefore dependent on the order of coverage goals and how many of them are feasible. To overcome this problem, we propose a novel paradigm in which whole test suites are evolved with the aim of covering all coverage goals at the same time while keeping the total size as small as possible. This approach has several advantages, as for example, its effectiveness is not affected by the number of infeasible targets in the code. We have implemented this novel approach in the EvoSuite tool, and compared it to the common approach of addressing one goal at a time. Evaluated on open source libraries and an industrial case study for a total of 1,741 classes, we show that EvoSuite achieved up to 188 times the branch coverage of a traditional approach targeting single branches, with up to 62 percent smaller test suites.",
"title": ""
},
{
"docid": "2a63710d79eab2e4bd59a610f874e4ab",
"text": "To a client, one of the simplest services provided by a distributed system is a time service. A client simply requests the time from any set of servers, and uses any reply. The simplicity in this interaction, however, misrepresents the complexity of implementing such a service. An algorithm is needed that will keep a set of clocks synchronized, reasonably correct and accurate with rcspcct to a standard, and able to withstand errors such as communication failures and inaccurate clocks. This paper presents a partial solution to the problem by describing two algorithms which will keep clocks both correct and synchronized.",
"title": ""
},
{
"docid": "9cb2f99aa1c745346999179132df3854",
"text": "As a complementary and alternative medicine in medical field, traditional Chinese medicine (TCM) has drawn great attention in the domestic field and overseas. In practice, TCM provides a quite distinct methodology to patient diagnosis and treatment compared to western medicine (WM). Syndrome (ZHENG or pattern) is differentiated by a set of symptoms and signs examined from an individual by four main diagnostic methods: inspection, auscultation and olfaction, interrogation, and palpation which reflects the pathological and physiological changes of disease occurrence and development. Patient classification is to divide patients into several classes based on different criteria. In this paper, from the machine learning perspective, a survey on patient classification issue will be summarized on three major aspects of TCM: sign classification, syndrome differentiation, and disease classification. With the consideration of different diagnostic data analyzed by different computational methods, we present the overview for four subfields of TCM diagnosis, respectively. For each subfield, we design a rectangular reference list with applications in the horizontal direction and machine learning algorithms in the longitudinal direction. According to the current development of objective TCM diagnosis for patient classification, a discussion of the research issues around machine learning techniques with applications to TCM diagnosis is given to facilitate the further research for TCM patient classification.",
"title": ""
}
] | scidocsrr |
4aa62373dedcfacbf87e08d983f0c72b | Global Normalization of Convolutional Neural Networks for Joint Entity and Relation Classification | [
{
"docid": "d18181640e98086732e5f32682e12127",
"text": "This paper proposes a novel context-aware joint entity and word-level relation extraction approach through semantic composition of words, introducing a Table Filling Multi-Task Recurrent Neural Network (TF-MTRNN) model that reduces the entity recognition and relation classification tasks to a table-filling problem and models their interdependencies. The proposed neural network architecture is capable of modeling multiple relation instances without knowing the corresponding relation arguments in a sentence. The experimental results show that a simple approach of piggybacking candidate entities to model the label dependencies from relations to entities improves performance. We present state-of-the-art results with improvements of 2.0% and 2.7% for entity recognition and relation classification, respectively on CoNLL04 dataset.",
"title": ""
}
] | [
{
"docid": "a8ca07bf7784d7ac1d09f84ac76be339",
"text": "AbstructEstimation of 3-D information from 2-D image coordinates is a fundamental problem both in machine vision and computer vision. Circular features are the most common quadratic-curved features that have been addressed for 3-D location estimation. In this paper, a closed-form analytical solution to the problem of 3-D location estimation of circular features is presented. Two different cases are considered: 1) 3-D orientation and 3-D position estimation of a circular feature when its radius is known, and 2) 3-D orientation and 3-D position estimation of a circular feature when its radius is not known. As well, extension of the developed method to 3-D quadratic features is addressed. Specifically, a closed-form analytical solution is derived for 3-D position estimation of spherical features. For experimentation purposes, simulated as well as real setups were employed. Simulated experimental results obtained for all three cases mentioned above verified the analytical method developed in this paper. In the case of real experiments, a set of circles located on a calibration plate, whose locations were known with respect to a reference frame, were used for camera calibration as well as for the application of the developed method. Since various distortion factors had to be compensated in order to obtain accurate estimates of the parameters of the imaged circle-an ellipse-with respect to the camera's image frame, a sequential compensation procedure was applied to the input grey-level image. The experimental results obtained once more showed the validity of the total process involved in the 3-D location estimation of circular features in general and the applicability of the analytical method developed in this paper in particular.",
"title": ""
},
{
"docid": "d698f181eb7682d9bf98b3bc103abaac",
"text": "Current database research identified the use of computational power of GPUs as a way to increase the performance of database systems. As GPU algorithms are not necessarily faster than their CPU counterparts, it is important to use the GPU only if it will be beneficial for query processing. In a general database context, only few research projects address hybrid query processing, i.e., using a mix of CPUand GPU-based processing to achieve optimal performance. In this paper, we extend our CPU/GPU scheduling framework to support hybrid query processing in database systems. We point out fundamental problems and propose an algorithm to create a hybrid query plan for a query using our scheduling framework. Additionally, we provide cost metrics, which consider the possible overlapping of data transfers and computation on the GPU. Furthermore, we present algorithms to create hybrid query plans for query sequences and query trees.",
"title": ""
},
{
"docid": "de48faf1dc4d276460b8369c9d8f36a8",
"text": "Momentum is primarily driven by firms’ performance 12 to seven months prior to portfolio formation, not by a tendency of rising and falling stocks to keep rising and falling. Strategies based on recent past performance generate positive returns but are less profitable than those based on intermediate horizon past performance, especially among the largest, most liquid stocks. These facts are not particular to the momentum observed in the cross section of US equities. Similar results hold for momentum strategies trading international equity indices, commodities, and currencies.",
"title": ""
},
{
"docid": "aa9b9c05bf09e3c6cceeb664e218a753",
"text": "Software development is an inherently team-based activity, and many software-engineering courses are structured around team projects, in order to provide students with an authentic learning experience. The collaborative-development tools through which student developers define, share and manage their tasks generate a detailed record in the process. Albeit not designed for this purpose, this record can provide the instructor with insights into the students' work, the team's progress over time, and the individual team-member's contributions. In this paper, we describe an analysis and visualization toolkit that enables instructors to interactively explore the trace of the team's collaborative work, to better understand the team dynamics, and the tasks of the individual team developers. We also discuss our grounded-theory analysis of one team's work, based on their email exchanges, questionnaires and interviews. Our analyses suggest that the inferences supported by our toolkit are congruent with the developers' feedback, while there are some discrepancies with the reflections of the team as a whole.",
"title": ""
},
{
"docid": "a7373d69f5ff9d894a630cc240350818",
"text": "The Capability Maturity Model for Software (CMM), developed by the Software Engineering Institute, and the ISO 9000 series of standards, developed by the International Standards Organization, share a common concern with quality and process management. The two are driven by similar concerns and intuitively correlated. The purpose of this report is to contrast the CMM and ISO 9001, showing both their differences and their similarities. The results of the analysis indicate that, although an ISO 9001-compliant organization would not necessarily satisfy all of the level 2 key process areas, it would satisfy most of the level 2 goals and many of the level 3 goals. Because there are practices in the CMM that are not addressed in ISO 9000, it is possible for a level 1 organization to receive ISO 9001 registration; similarly, there are areas addressed by ISO 9001 that are not addressed in the CMM. A level 3 organization would have little difficulty in obtaining ISO 9001 certification, and a level 2 organization would have significant advantages in obtaining certification.",
"title": ""
},
{
"docid": "f2d27b79f1ac3809f7ea605203136760",
"text": "The Internet of Things (IoT) is a fast-growing movement turning devices into always-connected smart devices through the use of communication technologies. This facilitates the creation of smart strategies allowing monitoring and optimization as well as many other new use cases for various sectors. Low Power Wide Area Networks (LPWANs) have enormous potential as they are suited for various IoT applications and each LPWAN technology has certain features, capabilities and limitations. One of these technologies, namely LoRa/LoRaWAN has several promising features and private and public LoRaWANs are increasing worldwide. Similarly, researchers are also starting to study the potential of LoRa and LoRaWANs. This paper examines the work that has already been done and identifies flaws and strengths by performing a comparison of created testbeds. Limitations of LoRaWANs are also identified.",
"title": ""
},
{
"docid": "50eaa44f8e89870750e279118a219d7a",
"text": "Fitbit fitness trackers record sensitive personal information, including daily step counts, heart rate profiles, and locations visited. By design, these devices gather and upload activity data to a cloud service, which provides aggregate statistics to mobile app users. The same principles govern numerous other Internet-of-Things (IoT) services that target different applications. As a market leader, Fitbit has developed perhaps the most secure wearables architecture that guards communication with end-to-end encryption. In this article, we analyze the complete Fitbit ecosystem and, despite the brand's continuous efforts to harden its products, we demonstrate a series of vulnerabilities with potentially severe implications to user privacy and device security. We employ a range of techniques, such as protocol analysis, software decompiling, and both static and dynamic embedded code analysis, to reverse engineer previously undocumented communication semantics, the official smartphone app, and the tracker firmware. Through this interplay and in-depth analysis, we reveal how attackers can exploit the Fitbit protocol to extract private information from victims without leaving a trace, and wirelessly flash malware without user consent. We demonstrate that users can tamper with both the app and firmware to selfishly manipulate records or circumvent Fitbit's walled garden business model, making the case for an independent, user-controlled, and more secure ecosystem. Finally, based on the insights gained, we make specific design recommendations that can not only mitigate the identified vulnerabilities, but are also broadly applicable to securing future wearable system architectures.",
"title": ""
},
{
"docid": "ae9bdb80a60dd6820c1c9d9557a73ffc",
"text": "We propose a novel method for predicting image labels by fusing image content descriptors with the social media context of each image. An image uploaded to a social media site such as Flickr often has meaningful, associated information, such as comments and other images the user has uploaded, that is complementary to pixel content and helpful in predicting labels. Prediction challenges such as ImageNet [6]and MSCOCO [19] use only pixels, while other methods make predictions purely from social media context [21]. Our method is based on a novel fully connected Conditional Random Field (CRF) framework, where each node is an image, and consists of two deep Convolutional Neural Networks (CNN) and one Recurrent Neural Network (RNN) that model both textual and visual node/image information. The edge weights of the CRF graph represent textual similarity and link-based metadata such as user sets and image groups. We model the CRF as an RNN for both learning and inference, and incorporate the weighted ranking loss and cross entropy loss into the CRF parameter optimization to handle the training data imbalance issue. Our proposed approach is evaluated on the MIR-9K dataset and experimentally outperforms current state-of-the-art approaches.",
"title": ""
},
{
"docid": "244dbf0d36d3d221e12b1844d440ecb2",
"text": "A typical scene contains many different objects that compete for neural representation due to the limited processing capacity of the visual system. At the neural level, competition among multiple stimuli is evidenced by the mutual suppression of their visually evoked responses and occurs most strongly at the level of the receptive field. The competition among multiple objects can be biased by both bottom-up sensory-driven mechanisms and top-down influences, such as selective attention. Functional brain imaging studies reveal that biasing signals due to selective attention can modulate neural activity in visual cortex not only in the presence but also in the absence of visual stimulation. Although the competition among stimuli for representation is ultimately resolved within visual cortex, the source of top-down biasing signals likely derives from a distributed network of areas in frontal and parietal cortex. Competition suggests that once attentional resources are depleted, no further processing is possible. Yet, existing data suggest that emotional stimuli activate brain regions \"automatically,\" largely immune from attentional control. We tested the alternative possibility, namely, that the neural processing of stimuli with emotional content is not automatic and instead requires some degree of attention. Our results revealed that, contrary to the prevailing view, all brain regions responding differentially to emotional faces, including the amygdala, did so only when sufficient attentional resources were available to process the faces. Thus, similar to the processing of other stimulus categories, the processing of facial expression is under top-down control.",
"title": ""
},
{
"docid": "511d631ab0d28039e2b8eeca87b825ac",
"text": "Compressive sensing (CS) is a new technique for the efficient acquisition of signals, images and other data that have a sparse representation in some basis, frame, or dictionary. By sparse we mean that the N-dimensional basis representation has just K <;<; N significant coefficients; in this case, the CS theory maintains that just M = O( K log N) random linear signal measurements will both preserve all of the signal information and enable robust signal reconstruction in polynomial time. In this paper, we extend the CS theory to pulse stream data, which correspond to S -sparse signals/images that are convolved with an unknown F-sparse pulse shape. Ignoring their convolutional structure, a pulse stream signal is K = SF sparse. Such signals figure prominently in a number of applications, from neuroscience to astronomy. Our specific contributions are threefold. First, we propose a pulse stream signal model and show that it is equivalent to an infinite union of subspaces. Second, we derive a lower bound on the number of measurements M required to preserve the essential information present in pulse streams. The bound is linear in the total number of degrees of freedom S + F, which is significantly smaller than the naïve bound based on the total signal sparsity K = SF. Third, we develop an efficient signal recovery algorithm that infers both the shape of the impulse response as well as the locations and amplitudes of the pulses. The algorithm alternatively estimates the pulse locations and the pulse shape in a manner reminiscent of classical deconvolution algorithms. Numerical experiments on synthetic and real data demonstrate the advantages of our approach over standard CS.",
"title": ""
},
{
"docid": "c2d0e11e37c8f0252ce77445bf583173",
"text": "This paper describes a method to obtain accurate 3D body models and texture of arbitrary people from a single, monocular video in which a person is moving. Based on a parametric body model, we present a robust processing pipeline to infer 3D model shapes including clothed people with 4.5mm reconstruction accuracy. At the core of our approach is the transformation of dynamic body pose into a canonical frame of reference. Our main contribution is a method to transform the silhouette cones corresponding to dynamic human silhouettes to obtain a visual hull in a common reference frame. This enables efficient estimation of a consensus 3D shape, texture and implanted animation skeleton based on a large number of frames. Results on 4 different datasets demonstrate the effectiveness of our approach to produce accurate 3D models. Requiring only an RGB camera, our method enables everyone to create their own fully animatable digital double, e.g., for social VR applications or virtual try-on for online fashion shopping.",
"title": ""
},
{
"docid": "b2a43491283732082c65f88c9b03016f",
"text": "BACKGROUND\nExpressing breast milk has become increasingly prevalent, particularly in some developed countries. Concurrently, breast pumps have evolved to be more sophisticated and aesthetically appealing, adapted for domestic use, and have become more readily available. In the past, expressed breast milk feeding was predominantly for those infants who were premature, small or unwell; however it has become increasingly common for healthy term infants. The aim of this paper is to systematically explore the literature related to breast milk expressing by women who have healthy term infants, including the prevalence of breast milk expressing, reported reasons for, methods of, and outcomes related to, expressing.\n\n\nMETHODS\nDatabases (Medline, CINAHL, JSTOR, ProQuest Central, PsycINFO, PubMed and the Cochrane library) were searched using the keywords milk expression, breast milk expression, breast milk pumping, prevalence, outcomes, statistics and data, with no limit on year of publication. Reference lists of identified papers were also examined. A hand-search was conducted at the Australian Breastfeeding Association Lactation Resource Centre. Only English language papers were included. All papers about expressing breast milk for healthy term infants were considered for inclusion, with a focus on the prevalence, methods, reasons for and outcomes of breast milk expression.\n\n\nRESULTS\nA total of twenty two papers were relevant to breast milk expression, but only seven papers reported the prevalence and/or outcomes of expressing amongst mothers of well term infants; all of the identified papers were published between 1999 and 2012. Many were descriptive rather than analytical and some were commentaries which included calls for more research, more dialogue and clearer definitions of breastfeeding. While some studies found an association between expressing and the success and duration of breastfeeding, others found the opposite. In some cases these inconsistencies were compounded by imprecise definitions of breastfeeding and breast milk feeding.\n\n\nCONCLUSIONS\nThere is limited evidence about the prevalence and outcomes of expressing breast milk amongst mothers of healthy term infants. The practice of expressing breast milk has increased along with the commercial availability of a range of infant feeding equipment. The reasons for expressing have become more complex while the outcomes, when they have been examined, are contradictory.",
"title": ""
},
{
"docid": "5b5e69bd93f6b809c29596a54c1565fc",
"text": "Variety and veracity are two distinct characteristics of large-scale and heterogeneous data. It has been a great challenge to efficiently represent and process big data with a unified scheme. In this paper, a unified tensor model is proposed to represent the unstructured, semistructured, and structured data. With tensor extension operator, various types of data are represented as subtensors and then are merged to a unified tensor. In order to extract the core tensor which is small but contains valuable information, an incremental high order singular value decomposition (IHOSVD) method is presented. By recursively applying the incremental matrix decomposition algorithm, IHOSVD is able to update the orthogonal bases and compute the new core tensor. Analyzes in terms of time complexity, memory usage, and approximation accuracy of the proposed method are provided in this paper. A case study illustrates that approximate data reconstructed from the core set containing 18% elements can guarantee 93% accuracy in general. Theoretical analyzes and experimental results demonstrate that the proposed unified tensor model and IHOSVD method are efficient for big data representation and dimensionality reduction.",
"title": ""
},
{
"docid": "6cb43a0f16b69cad9a7e5c5a528e23f5",
"text": "New substation technology, such as nonconventional instrument transformers, and a need to reduce design and construction costs are driving the adoption of Ethernet-based digital process bus networks for high-voltage substations. Protection and control applications can share a process bus, making more efficient use of the network infrastructure. This paper classifies and defines performance requirements for the protocols used in a process bus on the basis of application. These include Generic Object Oriented Substation Event, Simple Network Management Protocol, and Sampled Values (SVs). A method, based on the Multiple Spanning Tree Protocol (MSTP) and virtual local area networks, is presented that separates management and monitoring traffic from the rest of the process bus. A quantitative investigation of the interaction between various protocols used in a process bus is described. These tests also validate the effectiveness of the MSTP-based traffic segregation method. While this paper focuses on a substation automation network, the results are applicable to other real-time industrial networks that implement multiple protocols. High-volume SV data and time-critical circuit breaker tripping commands do not interact on a full-duplex switched Ethernet network, even under very high network load conditions. This enables an efficient digital network to replace a large number of conventional analog connections between control rooms and high-voltage switchyards.",
"title": ""
},
{
"docid": "296f18277958621763646519a7224193",
"text": "This chapter examines health promotion and disease prevention from the perspective of social cognitive theory. This theory posits a multifaceted causal structure in which self-efficacy beliefs operate in concert with cognized goals, outcome expectations, and perceived environmental impediments and facilitators in the regulation of human motivation, action, and well-being. Perceived self-efficacy is a key factor in the causal structure because it operates on motivation and action both directly and through its impact on the other determinants. The areas of overlap of sociocognitive determinants with some of the most widely applied psychosocial models of health are identified. Social cognitive theory addresses the sociostructural determinants of health as well as the personal determinants. A comprehensive approach to health promotion requires changing the practices of social systems that have widespread detrimental effects on health rather than solely changing the habits of individuals. Further progress in this field requires building new structures for health promotion, new systems for risk reduction and greater emphasis on health policy initiatives. People's beliefs in their collective efficacy to accomplish social change, therefore, play a key role in the policy and public health perspective to health promotion and disease prevention. Bandura, A. (1998). Health promotion from the perspective of social cognitive theory. Psychology and Health, 13, 623-649.",
"title": ""
},
{
"docid": "1aba7883ca8a1651d951ef55d8f4bbc5",
"text": "This paper presents an improvement of the J-linkage algorithm for fitting multiple instances of a model to noisy data corrupted by outliers. The binary preference analysis implemented by J-linkage is replaced by a continuous (soft, or fuzzy) generalization that proves to perform better than J-linkage on simulated data, and compares favorably with state of the art methods on public domain real datasets.",
"title": ""
},
{
"docid": "813e41234aad749022a4d655af987ad6",
"text": "Three- and four-element eyepiece designs are presented each with a different type of radial gradient-index distribution. Both quadratic and modified quadratic index profiles are shown to provide effective control of the field aberrations. In particular, the three-element design with a quadratic index profile demonstrates that the inhomogeneous power contribution can make significant contributions to the overall system performance, especially the astigmatism correction. Using gradient-index components has allowed for increased eye relief and field of view making these designs comparable with five- and six-element ones.",
"title": ""
},
{
"docid": "da2bc0813d4108606efef507e50100e3",
"text": "Prediction is one of the most attractive aspects in data mining. Link prediction has recently attracted the attention of many researchers as an effective technique to be used in graph based models in general and in particular for social network analysis due to the recent popularity of the field. Link prediction helps to understand associations between nodes in social communities. Existing link prediction-related approaches described in the literature are limited to predict links that are anticipated to exist in the future. To the best of our knowledge, none of the previous works in this area has explored the prediction of links that could disappear in the future. We argue that the latter set of links are important to know about; they are at least equally important as and do complement the positive link prediction process in order to plan better for the future. In this paper, we propose a link prediction model which is capable of predicting both links that might exist and links that may disappear in the future. The model has been successfully applied in two different though very related domains, namely health care and gene expression networks. The former application concentrates on physicians and their interactions while the second application covers genes and their interactions. We have tested our model using different classifiers and the reported results are encouraging. Finally, we compare our approach with the internal links approach and we reached the conclusion that our approach performs very well in both bipartite and non-bipartite graphs.",
"title": ""
},
{
"docid": "d7582552589626891258f52b0d750915",
"text": "Social Live Stream Services (SLSS) exploit a new level of social interaction. One of the main challenges in these services is how to detect and prevent deviant behaviors that violate community guidelines. In this work, we focus on adult content production and consumption in two widely used SLSS, namely Live.me and Loops Live, which have millions of users producing massive amounts of video content on a daily basis. We use a pre-trained deep learning model to identify broadcasters of adult content. Our results indicate that moderation systems in place are highly ineffective in suspending the accounts of such users. We create two large datasets by crawling the social graphs of these platforms, which we analyze to identify characterizing traits of adult content producers and consumers, and discover interesting patterns of relationships among them, evident in both networks.",
"title": ""
},
{
"docid": "3675229608c949f883b7e400a19b66bb",
"text": "SQL injection is one of the most prominent vulnerabilities for web-based applications. Exploitation of SQL injection vulnerabilities (SQLIV) through successful attacks might result in severe consequences such as authentication bypassing, leaking of private information etc. Therefore, testing an application for SQLIV is an important step for ensuring its quality. However, it is challenging as the sources of SQLIV vary widely, which include the lack of effective input filters in applications, insecure coding by programmers, inappropriate usage of APIs for manipulating databases etc. Moreover, existing testing approaches do not address the issue of generating adequate test data sets that can detect SQLIV. In this work, we present a mutation-based testing approach for SQLIV testing. We propose nine mutation operators that inject SQLIV in application source code. The operators result in mutants, which can be killed only with test data containing SQL injection attacks. By this approach, we force the generation of an adequate test data set containing effective test cases capable of revealing SQLIV. We implement a MUtation-based SQL Injection vulnerabilities Checking (testing) tool (MUSIC) that automatically generates mutants for the applications written in Java Server Pages (JSP) and performs mutation analysis. We validate the proposed operators with five open source web-based applications written in JSP. We show that the proposed operators are effective for testing SQLIV.",
"title": ""
}
] | scidocsrr |
df7c0407671ad437eaf331cf30b7f958 | KNN-CF Approach: Incorporating Certainty Factor to kNN Classification | [
{
"docid": "be369e7935f5a56b0c5ac671c7ec315b",
"text": "Memory-based classification algorithms such as radial basis functions or K-nearest neighbors typically rely on simple distances (Euclidean, dot product ... ), which are not particularly meaningful on pattern vectors. More complex, better suited distance measures are often expensive and rather ad-hoc (elastic matching, deformable templates). We propose a new distance measure which (a) can be made locally invariant to any set of transformations of the input and (b) can be computed efficiently. We tested the method on large handwritten character databases provided by the Post Office and the NIST. Using invariances with respect to translation, rotation, scaling, shearing and line thickness, the method consistently outperformed all other systems tested on the same databases.",
"title": ""
}
] | [
{
"docid": "b1383088b26636e6ac13331a2419f794",
"text": "This paper investigates the problem of blurring caused by motion during image capture of text documents. Motion blurring prevents proper optical character recognition of the document text contents. One area of such applications is to deblur name card images obtained from handheld cameras. In this paper, a complete motion deblurring procedure for document images has been proposed. The method handles both uniform linear motion blur and uniform acceleration motion blur. Experiments on synthetic and real-life blurred images prove the feasibility and reliability of this algorithm provided that the motion is not too irregular. The restoration procedure consumes only small amount of computation time.",
"title": ""
},
{
"docid": "6080612b8858d633c3f63a3d019aef58",
"text": "Color images provide large information for human visual perception compared to grayscale images. Color image enhancement methods enhance the visual data to increase the clarity of the color image. It increases human perception of information. Different color image contrast enhancement methods are used to increase the contrast of the color images. The Retinex algorithms enhance the color images similar to the scene perceived by the human eye. Multiscale retinex with color restoration (MSRCR) is a type of retinex algorithm. The MSRCR algorithm results in graying out and halo artifacts at the edges of the images. So here the focus is on improving the MSRCR algorithm by combining it with contrast limited adaptive histogram equalization (CLAHE) using image.",
"title": ""
},
{
"docid": "356361bf2ca0e821250e4a32d299d498",
"text": "DRAM has been a de facto standard for main memory, and advances in process technology have led to a rapid increase in its capacity and bandwidth. In contrast, its random access latency has remained relatively stagnant, as it is still around 100 CPU clock cycles. Modern computer systems rely on caches or other latency tolerance techniques to lower the average access latency. However, not all applications have ample parallelism or locality that would help hide or reduce the latency. Moreover, applications' demands for memory space continue to grow, while the capacity gap between last-level caches and main memory is unlikely to shrink. Consequently, reducing the main-memory latency is important for application performance. Unfortunately, previous proposals have not adequately addressed this problem, as they have focused only on improving the bandwidth and capacity or reduced the latency at the cost of significant area overhead.\n We propose asymmetric DRAM bank organizations to reduce the average main-memory access latency. We first analyze the access and cycle times of a modern DRAM device to identify key delay components for latency reduction. Then we reorganize a subset of DRAM banks to reduce their access and cycle times by half with low area overhead. By synergistically combining these reorganized DRAM banks with support for non-uniform bank accesses, we introduce a novel DRAM bank organization with center high-aspect-ratio mats called CHARM. Experiments on a simulated chip-multiprocessor system show that CHARM improves both the instructions per cycle and system-wide energy-delay product up to 21% and 32%, respectively, with only a 3% increase in die area.",
"title": ""
},
{
"docid": "06f562ff86d8a2834616726a1d4b6e15",
"text": "This paper reports about interest operators, region detectors and region descriptors for photogrammetric applications. Features are the primary input for many applications like registration, 3D reconstruction, motion tracking, robot navigation, etc. Nowadays many detectors and descriptors algorithms are available, providing corners, edges and regions of interest together with n-dimensional vectors useful in matching procedures. The main algorithms are here described and analyzed, together with their proprieties. Experiments concerning the repeatability, localization accuracy and quantitative analysis are performed and reported. Details on how improve to location accuracy of region detectors are also reported.",
"title": ""
},
{
"docid": "af4055df4a60a241f43d453f34189d86",
"text": "We propose an adaptive learning procedure to learn patch-based image priors for image denoising. The new algorithm, called the expectation-maximization (EM) adaptation, takes a generic prior learned from a generic external database and adapts it to the noisy image to generate a specific prior. Different from existing methods that combine internal and external statistics in ad hoc ways, the proposed algorithm is rigorously derived from a Bayesian hyper-prior perspective. There are two contributions of this paper. First, we provide full derivation of the EM adaptation algorithm and demonstrate methods to improve the computational complexity. Second, in the absence of the latent clean image, we show how EM adaptation can be modified based on pre-filtering. The experimental results show that the proposed adaptation algorithm yields consistently better denoising results than the one without adaptation and is superior to several state-of-the-art algorithms.",
"title": ""
},
{
"docid": "4930fa19f6374774a5f4575b56159e50",
"text": "We present a study of the correlation between the extent to which the cluster hypothesis holds, as measured by various tests, and the relative effectiveness of cluster-based retrieval with respect to document-based retrieval. We show that the correlation can be affected by several factors, such as the size of the result list of the most highly ranked documents that is analyzed. We further show that some cluster hypothesis tests are often negatively correlated with one another. Moreover, in several settings, some of the tests are also negatively correlated with the relative effectiveness of cluster-based retrieval.",
"title": ""
},
{
"docid": "852578afdb63985d93b1d2d0ee8fc3e8",
"text": "This paper builds on the recent ASPIC formalism, to develop a general framework for argumentation with preferences. We motivate a revised definition of conflict free sets of arguments, adapt ASPIC to accommodate a broader range of instantiating logics, and show that under some assumptions, the resulting framework satisfies key properties and rationality postulates. We then show that the generalised framework accommodates Tarskian logic instantiations extended with preferences, and then study instantiations of the framework by classical logic approaches to argumentation. We conclude by arguing that ASPIC’s modelling of defeasible inference rules further testifies to the generality of the framework, and then examine and counter recent critiques of Dung’s framework and its extensions to accommodate preferences.",
"title": ""
},
{
"docid": "97aab319e3d38d755860b141c5a4fa38",
"text": "Automatically generating a natural language description of an image has attracted interests recently both because of its importance in practical applications and because it connects two major artificial intelligence fields: computer vision and natural language processing. Existing approaches are either top-down, which start from a gist of an image and convert it into words, or bottom-up, which come up with words describing various aspects of an image and then combine them. In this paper, we propose a new algorithm that combines both approaches through a model of semantic attention. Our algorithm learns to selectively attend to semantic concept proposals and fuse them into hidden states and outputs of recurrent neural networks. The selection and fusion form a feedback connecting the top-down and bottom-up computation. We evaluate our algorithm on two public benchmarks: Microsoft COCO and Flickr30K. Experimental results show that our algorithm significantly outperforms the state-of-the-art approaches consistently across different evaluation metrics.",
"title": ""
},
{
"docid": "8de4182b607888e6c7cbe6d6ae8ee122",
"text": "In this article, we focus on isolated gesture recognition and explore different modalities by involving RGB stream, depth stream, and saliency stream for inspection. Our goal is to push the boundary of this realm even further by proposing a unified framework that exploits the advantages of multi-modality fusion. Specifically, a spatial-temporal network architecture based on consensus-voting has been proposed to explicitly model the long-term structure of the video sequence and to reduce estimation variance when confronted with comprehensive inter-class variations. In addition, a three-dimensional depth-saliency convolutional network is aggregated in parallel to capture subtle motion characteristics. Extensive experiments are done to analyze the performance of each component and our proposed approach achieves the best results on two public benchmarks, ChaLearn IsoGD and RGBD-HuDaAct, outperforming the closest competitor by a margin of over 10% and 15%, respectively. Our project and codes will be released at https://davidsonic.github.io/index/acm_tomm_2017.html.",
"title": ""
},
{
"docid": "b6c62936aef87ab2cce565f6142424bf",
"text": "Concerns have been raised about the performance of PC-based virtual routers as they do packet processing in software. Furthermore, it becomes challenging to maintain isolation among virtual routers due to resource contention in a shared environment. Hardware vendors recognize this issue and PC hardware with virtualization support (SR-IOV and Intel-VTd) has been introduced in recent years. In this paper, we investigate how such hardware features can be integrated with two different virtualization technologies (LXC and KVM) to enhance performance and isolation of virtual routers on shared environments. We compare LXC and KVM and our results indicate that KVM in combination with hardware support can provide better trade-offs between performance and isolation. We notice that KVM has slightly lower throughput, but has superior isolation properties by providing more explicit control of CPU resources. We demonstrate that KVM allows defining a CPU share for a virtual router, something that is difficult to achieve in LXC, where packet forwarding is done in a kernel shared by all virtual routers.",
"title": ""
},
{
"docid": "7c1b301e45da5af0f5248f04dbf33f75",
"text": "[1] We invert 115 differential interferograms derived from 47 synthetic aperture radar (SAR) scenes for a time-dependent deformation signal in the Santa Clara valley, California. The time-dependent deformation is calculated by performing a linear inversion that solves for the incremental range change between SAR scene acquisitions. A nonlinear range change signal is extracted from the ERS InSAR data without imposing a model of the expected deformation. In the Santa Clara valley, cumulative land uplift is observed during the period from 1992 to 2000 with a maximum uplift of 41 ± 18 mm centered north of Sunnyvale. Uplift is also observed east of San Jose. Seasonal uplift and subsidence dominate west of the Silver Creek fault near San Jose with a maximum peak-to-trough amplitude of 35 mm. The pattern of seasonal versus long-term uplift provides constraints on the spatial and temporal characteristics of water-bearing units within the aquifer. The Silver Creek fault partitions the uplift behavior of the basin, suggesting that it acts as a hydrologic barrier to groundwater flow. While no tectonic creep is observed along the fault, the development of a low-permeability barrier that bisects the alluvium suggests that the fault has been active since the deposition of Quaternary units.",
"title": ""
},
{
"docid": "08aa9d795464d444095bbb73c067c2a9",
"text": "Next-generation sequencing (NGS) is a rapidly evolving set of technologies that can be used to determine the sequence of an individual's genome 1 by calling genetic variants present in an individual using billions of short, errorful sequence reads 2 . Despite more than a decade of effort and thousands of dedicated researchers, the hand-crafted and parameterized statistical models used for variant calling still produce thousands of errors and missed variants in each genome 3,4 . Here we show that a deep convolutional neural network 5 can call genetic variation in aligned next-generation sequencing read data by learning statistical relationships (likelihoods) between images of read pileups around putative variant sites and ground-truth genotype calls. This approach, called DeepVariant, outperforms existing tools, even winning the \"highest performance\" award for SNPs in a FDA-administered variant calling challenge. The learned model generalizes across genome builds and even to other species, allowing non-human sequencing projects to benefit from the wealth of human ground truth data. We further show that, unlike existing tools which perform well on only a specific technology, DeepVariant can learn to call variants in a variety of sequencing technologies and experimental designs, from deep whole genomes from 10X Genomics to Ion Ampliseq exomes. DeepVariant represents a significant step from expert-driven statistical modeling towards more automatic deep learning approaches for developing software to interpret biological instrumentation data. Main Text Calling genetic variants from NGS data has proven challenging because NGS reads are not only errorful (with rates from ~0.1-10%) but result from a complex error process that depends on properties of the instrument, preceding data processing tools, and the genome sequence itself. State-of-the-art variant callers use a variety of statistical techniques to model these error processes and thereby accurately identify differences between the reads and the reference genome caused by real genetic variants and those arising from errors in the reads. For example, the widely-used GATK uses logistic regression to model base errors, hidden Markov models to compute read likelihoods, and naive Bayes classification to identify variants, which are then filtered to remove likely false positives using a Gaussian mixture model peer-reviewed) is the author/funder. All rights reserved. No reuse allowed without permission. The copyright holder for this preprint (which was not . http://dx.doi.org/10.1101/092890 doi: bioRxiv preprint first posted online Dec. 14, 2016; Poplin et al. Creating a universal SNP and small indel variant caller with deep neural networks. with hand-crafted features capturing common error modes 6 . These techniques allow the GATK to achieve high but still imperfect accuracy on the Illumina sequencing platform . Generalizing these models to other sequencing technologies has proven difficult due to the need for manual retuning or extending these statistical models (see e.g. Ion Torrent 8,9 ), a major problem in an area with such rapid technological progress 1 . Here we describe a variant caller for NGS data that replaces the assortment of statistical modeling components with a single, deep learning model. Deep learning is a revolutionary machine learning technique applicable to a variety of domains, including image classification 10 , translation , gaming , and the life sciences 14–17 . 
This toolchain, which we call DeepVariant, (Figure 1) begins by finding candidate SNPs and indels in reads aligned to the reference genome with high-sensitivity but low specificity. The deep learning model, using the Inception-v2 architecture , emits probabilities for each of the three diploid genotypes at a locus using a pileup image of the reference and read data around each candidate variant (Figure 1). The model is trained using labeled true genotypes, after which it is frozen and can then be applied to novel sites or samples. Throughout the following experiments, DeepVariant was trained on an independent set of samples or variants to those being evaluated. This deep learning model has no specialized knowledge about genomics or next-generation sequencing, and yet can learn to call genetic variants more accurately than state-of-the-art methods. When applied to the Platinum Genomes Project NA12878 data 18 , DeepVariant produces a callset with better performance than the GATK when evaluated on the held-out chromosomes of the Genome in a Bottle ground truth set (Figure 2A). For further validation, we sequenced 35 replicates of NA12878 using a standard whole-genome sequencing protocol and called variants on 27 replicates using a GATK best-practices pipeline and DeepVariant using a model trained on the other eight replicates (see methods). Not only does DeepVariant produce more accurate results but it does so with greater consistency across a variety of quality metrics (Figure 2B). To further confirm the performance of DeepVariant, we submitted variant calls for a blinded sample, NA24385, to the Food and Drug Administration-sponsored variant calling Truth Challenge in May 2016 and won the \"highest performance\" award for SNPs by an independent team using a different evaluation methodology. Like many variant calling algorithms, GATK relies on a model that assumes read errors are independent . Though long-recognized as an invalid assumption 2 , the true likelihood function that models multiple reads simultaneously is unknown 6,19,20 . Because DeepVariant presents an image of all of the reads relevant for a putative variant together, the convolutional neural network (CNN) is able to account for the complex dependence among the reads by virtue of being a universal approximator 21 . This manifests itself as a tight concordance between the estimated probability of error from the likelihood function and the observed error rate, as seen in Figure 2C where DeepVariant's CNN is well calibrated, strikingly more so than the GATK. That the CNN has approximated this true, but unknown, inter-dependent likelihood function is the essential technical advance enabling us to replace the hand-crafted statistical models in other approaches with a single deep learning model and still achieve such high performance in variant calling. We further explored how well DeepVariant’s CNN generalizes beyond its training data. 
First, a model trained with read data aligned to human genome build GRCh37 and applied to reads aligned to GRCh38 has similar performance (overall F1 = 99.45%) to one trained on GRCh38 and then applied to GRCh38 (overall F1 = 99.53%), thereby demonstrating that a model learned from one version of the human genome reference can be applied to other versions with effectively no loss in accuracy (Table S1). Second, models learned using human reads and ground truth data achieve high accuracy when applied to a mouse dataset 22 (F1 = 98.29%), out-performing training on the mouse data itself (F1 = 97.84%, Table S4). This last experiment is especially demanding as not only do the species differ but nearly all of the sequencing parameters do as well: 50x 2x148bp from an Illumina TruSeq prep sequenced on a HiSeq 2500 for the human sample and 27x 2x100bp reads from a custom sequencing preparation run on an Illumina Genome Analyzer II for mouse . Thus, DeepVariant is robust to changes in sequencing depth, preparation protocol, instrument type, genome build, and even species. The practical benefits of this capability is substantial, as DeepVariant enables resequencing projects in non-human species, which often have no ground truth data to guide their efforts , to leverage the large and growing ground truth data in humans. To further assess its capabilities, we trained DeepVariant to call variants in eight datasets from Genome in a Bottle 24 that span a variety of sequencing instruments and protocols, including whole genome and exome sequencing technologies, with read lengths from fifty to many thousands of basepairs (Table 1 and S6). We used the already processed BAM files to introduce additional variability as these BAMs differ in their alignment and cleaning steps. The results of this experiment all exhibit a characteristic pattern: the candidate variants have the highest sensitivity but a low PPV (mean 57.6%), which varies significantly by dataset. After retraining, all of the callsets achieve high PPVs (mean of 99.3%) while largely preserving the candidate callset sensitivity (mean loss of 2.3%). The high PPVs and low loss of sensitivity indicate that DeepVariant can learn a model that captures the technology-specific error processes in sufficient detail to separate real variation from false positives with high fidelity for many different sequencing technologies. As we already shown above that DeepVariant performs well on Illumina WGS data, we analyze here the behavior of DeepVariant on two non-Illumina WGS datasets and two exome datasets from Illumina and Ion Torrent. The SOLID and Pacific Biosciences (PacBio) WGS datasets have high error rates in the candidate callsets. SOLID (13.9% PPV for SNPs, 96.2% for indels, and 14.3% overall) has many SNP artifacts from the mapping short, color-space reads. The PacBio dataset is the opposite, with many false indels (79.8% PPV for SNPs, 1.4% for indels, and 22.1% overall) due to this technology's high indel error rate. Training DeepVariant to call variants in an exome is likely to be particularly challenging. Exomes have far fewer variants (~20-30k) than found in a whole-genome (~4-5M) 26 . T",
"title": ""
},
{
"docid": "df3ef3feeaf787315188db2689dc6fb9",
"text": "Multi-class weather classification from single images is a fundamental operation in many outdoor computer vision applications. However, it remains difficult and the limited work is carried out for addressing the difficulty. Moreover, existing method is based on the fixed scene. In this paper we present a method for any scenario multi-class weather classification based on multiple weather features and multiple kernel learning. Our approach extracts multiple weather features and takes properly processing. By combining these features into high dimensional vectors, we utilize multiple kernel learning to learn an adaptive classifier. We collect an outdoor image set that contains 20K images called MWI (Multi-class Weather Image) set. Experimental results show that the proposed method can efficiently recognize weather on MWI dataset.",
"title": ""
},
{
"docid": "b8dbc4c33e51350109bf1fec5ef852ce",
"text": "Stack Overflow is one of the most popular question-and-answer sites for programmers. However, there are a great number of duplicate questions that are expected to be detected automatically in a short time. In this paper, we introduce two approaches to improve the detection accuracy: splitting body into different types of data and using word-embedding to treat word ambiguities that are not contained in the general corpuses. The evaluation shows that these approaches improve the accuracy compared with the traditional method.",
"title": ""
},
{
"docid": "8b5d7965ac154da1266874027f0b10a0",
"text": "Matching pedestrians across disjoint camera views, known as person re-identification (re-id), is a challenging problem that is of importance to visual recognition and surveillance. Most existing methods exploit local regions within spatial manipulation to perform matching in local correspondence. However, they essentially extract fixed representations from pre-divided regions for each image and perform matching based on the extracted representation subsequently. For models in this pipeline, local finer patterns that are crucial to distinguish positive pairs from negative ones cannot be captured, and thus making them underperformed. In this paper, we propose a novel deep multiplicative integration gating function, which answers the question of what-and-where to match for effective person re-id. To address what to match, our deep network emphasizes common local patterns by learning joint representations in a multiplicative way. The network comprises two Convolutional Neural Networks (CNNs) to extract convolutional activations, and generates relevant descriptors for pedestrian matching. This thus, leads to flexible representations for pair-wise images. To address where to match, we combat the spatial misalignment by performing spatially recurrent pooling via a four-directional recurrent neural network to impose spatial depenEmail addresses: lin.wu@uq.edu.au (Lin Wu ), wangy@cse.unsw.edu.au (Yang Wang), xueli@itee.uq.edu.au (Xue Li), junbin.gao@sydney.edu.au (Junbin Gao) Preprint submitted to Elsevier 25·7·2017 ar X iv :1 70 7. 07 07 4v 1 [ cs .C V ] 2 1 Ju l 2 01 7 dency over all positions with respect to the entire image. The proposed network is designed to be end-to-end trainable to characterize local pairwise feature interactions in a spatially aligned manner. To demonstrate the superiority of our method, extensive experiments are conducted over three benchmark data sets: VIPeR, CUHK03 and Market-1501.",
"title": ""
},
{
"docid": "79bfb0820e43af3d7012b61f677ed206",
"text": "We derive generalizations of AdaBoost and related gradient-based coordinate descent methods that incorporate sparsity-promoting penalties for the norm of the predictor that is being learned. The end result is a family of coordinate descent algorithms that integrate forward feature induction and back-pruning through regularization and give an automatic stopping criterion for feature induction. We study penalties based on the l1, l2, and l∞ norms of the predictor and introduce mixed-norm penalties that build upon the initial penalties. The mixed-norm regularizers facilitate structural sparsity in parameter space, which is a useful property in multiclass prediction and other related tasks. We report empirical results that demonstrate the power of our approach in building accurate and structurally sparse models.",
"title": ""
},
{
"docid": "7a9a7b888b9e3c2b82e6c089d05e2803",
"text": "Background:\nBullous pemphigoid (BP) is a chronic, autoimmune blistering skin disease that affects patients' daily life and psychosocial well-being.\n\n\nObjective:\nThe aim of the study was to evaluate the quality of life, anxiety, depression and loneliness in BP patients.\n\n\nMethods:\nFifty-seven BP patients and fifty-seven healthy controls were recruited for the study. The quality of life of each patient was assessed using the Dermatology Life Quality Index (DLQI) scale. Moreover, they were evaluated for anxiety and depression according to the Hospital Anxiety Depression Scale (HADS-scale), while loneliness was measured through the Loneliness Scale-Version 3 (UCLA) scale.\n\n\nResults:\nThe mean DLQI score was 9.45±3.34. Statistically significant differences on the HADS total scale and in HADS-depression subscale (p=0.015 and p=0.002, respectively) were documented. No statistically significant difference was found between the two groups on the HADS-anxiety subscale. Furthermore, significantly higher scores were recorded on the UCLA Scale compared with healthy volunteers (p=0.003).\n\n\nConclusion:\nBP had a significant impact on quality of life and the psychological status of patients, probably due to the appearance of unattractive lesions on the skin, functional problems and disease chronicity.",
"title": ""
},
{
"docid": "43db0f06e3de405657996b46047fa369",
"text": "Given two or more objects of general topology, intermediate objects are constructed by a distance field metamorphosis. In the presented method the interpolation of the distance field is guided by a warp function controlled by a set of corresponding anchor points. Some rules for defining a smooth least-distorting warp function are given. To reduce the distortion of the intermediate shapes, the warp function is decomposed into a rigid rotational part and an elastic part. The distance field interpolation method is modified so that the interpolation is done in correlation with the warp function. The method provides the animator with a technique that can be used to create a set of models forming a smooth transition between pairs of a given sequence of keyframe models. The advantage of the new approach is that it is capable of morphing between objects having a different topological genus where no correspondence between the geometric primitives of the models needs to be established. The desired correspondence is defined by an animator in terms of a relatively small number of anchor points",
"title": ""
},
{
"docid": "07e713880604e82559ccfeece0149228",
"text": "The modern research has found a variety of applications and systems with vastly varying requirements and characteristics in Wireless Sensor Networks (WSNs). The research has led to materialization of many application specific routing protocols which must be energy-efficient. As a consequence, it is becoming increasingly difficult to discuss the design issues requirements regarding hardware and software support. Implementation of efficient system in a multidisciplinary research such as WSNs is becoming very difficult. In this paper we discuss the design issues in routing protocols for WSNs by considering its various dimensions and metrics such as QoS requirement, path redundancy etc. The paper concludes by presenting",
"title": ""
},
{
"docid": "7a202dfa59cb8c50a6999fe8a50895a9",
"text": "The process for transferring knowledge of multiple reinforcement learning policies into a single multi-task policy via distillation technique is known as policy distillation. When policy distillation is under a deep reinforcement learning setting, due to the giant parameter size and the huge state space for each task domain, it requires extensive computational efforts to train the multi-task policy network. In this paper, we propose a new policy distillation architecture for deep reinforcement learning, where we assume that each task uses its taskspecific high-level convolutional features as the inputs to the multi-task policy network. Furthermore, we propose a new sampling framework termed hierarchical prioritized experience replay to selectively choose experiences from the replay memories of each task domain to perform learning on the network. With the above two attempts, we aim to accelerate the learning of the multi-task policy network while guaranteeing a good performance. We use Atari 2600 games as testing environment to demonstrate the efficiency and effectiveness of our proposed solution for policy distillation.",
"title": ""
}
] | scidocsrr |
00a92d6f3afd28c97c9b0a6b70372fe3 | ML-KNN: A lazy learning approach to multi-label learning | [
{
"docid": "8b498cfaa07f0b2858e417e0e0d5adb4",
"text": "In classic pattern recognition problems, classes are mutually exclusive by de\"nition. Classi\"cation errors occur when the classes overlap in the feature space. We examine a di5erent situation, occurring when the classes are, by de\"nition, not mutually exclusive. Such problems arise in semantic scene and document classi\"cation and in medical diagnosis. We present a framework to handle such problems and apply it to the problem of semantic scene classi\"cation, where a natural scene may contain multiple objects such that the scene can be described by multiple class labels (e.g., a \"eld scene with a mountain in the background). Such a problem poses challenges to the classic pattern recognition paradigm and demands a di5erent treatment. We discuss approaches for training and testing in this scenario and introduce new metrics for evaluating individual examples, class recall and precision, and overall accuracy. Experiments show that our methods are suitable for scene classi\"cation; furthermore, our work appears to generalize to other classi\"cation problems of the same nature. ? 2004 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved.",
"title": ""
}
] | [
{
"docid": "5a03bfd124df29ed5607a13fe546e661",
"text": "Employees and/or functional managers increasingly adopt and use IT systems and services that the IS management of the organization does neither provide nor approve. To effectively counteract such shadow IT in organizations, the understanding of employees’ motivations and drivers is necessary. However, the scant literature on this topic primarily focused on various governance approaches at firm level. With the objective to open the black box of shadow IT usage at the individual unit of analysis, we develop a research model and propose a laboratory experiment to examine users’ justifications for violating implicit and explicit IT usage restrictions based on neutralization theory. To be precise, in this research-in-progress, we posit positive associations between shadow IT usage and human tendencies to downplay such kind of rule-breaking behaviors due to necessity, no injury, and injustice. We expect a lower impact of these neutralization effects in the presence of behavioral IT guidelines that explicitly prohibit users to employ exactly those shadow IT systems.",
"title": ""
},
{
"docid": "57d8f78ac76925f17b28b78992b7a7b9",
"text": "The effects of long-term aerobic exercise on endothelial function in patients with essential hypertension remain unclear. To determine whether endothelial function relating to forearm hemodynamics in these patients differs from normotensive subjects and whether endothelial function can be modified by continued physical exercise, we randomized patients with essential hypertension into a group that engaged in 30 minutes of brisk walking 5 to 7 times weekly for 12 weeks (n=20) or a group that underwent no activity modifications (control group, n=7). Forearm blood flow was measured using strain-gauge plethysmography during reactive hyperemia to test for endothelium-dependent vasodilation and after sublingual nitroglycerin administration to test endothelium-independent vasodilation. Forearm blood flow in hypertensive patients during reactive hyperemia was significantly less than that in normotensive subjects (n=17). Increases in forearm blood flow after nitroglycerin were similar between hypertensive and normotensive subjects. Exercise lowered mean blood pressure from 115.7+/-5.3 to 110.2+/-5.1 mm Hg (P<0.01) and forearm vascular resistance from 25.6+/-3.2 to 23. 2+/-2.8 mm Hg/mL per minute per 100 mL tissue (P<0.01); no change occurred in controls. Basal forearm blood flow, body weight, and heart rate did not differ with exercise. After 12 weeks of exercise, maximal forearm blood flow response during reactive hyperemia increased significantly, from 38.4+/-4.6 to 47.1+/-4.9 mL/min per 100 mL tissue (P<0.05); this increase was not seen in controls. Changes in forearm blood flow after sublingual nitroglycerin administration were similar before and after 12 weeks of exercise. Intra-arterial infusion of the nitric oxide synthase inhibitor NG-monomethyl-L-arginine abolished the enhancement of reactive hyperemia induced by 12 weeks of exercise. These findings suggest that through increased release of nitric oxide, continued physical exercise alleviates impairment of reactive hyperemia in patients with essential hypertension.",
"title": ""
},
{
"docid": "4d6c21ed39ef5d9d7e9b616338cc2dfa",
"text": "Due to the increasing threat from malicious software (malware), monitoring of vulnerable systems is becoming increasingly important. The need to log and analyze activity encompasses networks, individual computers, as well as mobile devices. While there are various automatic approaches and techniques available to detect, identify, or capture malware, the actual analysis of the ever-increasing number of suspicious samples is a time-consuming process for malware analysts. The use of visualization and highly interactive visual analytics systems can help to support this analysis process with respect to investigation, comparison, and summarization of malware samples. Currently, there is no survey available that reviews available visualization systems supporting this important and emerging field. We provide a systematic overview and categorization of malware visualization systems from the perspective of visual analytics. Additionally, we identify and evaluate data providers and commercial tools that produce meaningful input data for the reviewed malware visualization systems. This helps to reveal data types that are currently underrepresented, enabling new research opportunities in the visualization community.",
"title": ""
},
{
"docid": "1d8765a407f2b9f8728982f54ddb6ae1",
"text": "Objective: To transform heterogeneous clinical data from electronic health records into clinically meaningful constructed features using data driven method that rely, in part, on temporal relations among data. Materials and Methods: The clinically meaningful representations of medical concepts and patients are the key for health analytic applications. Most of existing approaches directly construct features mapped to raw data (e.g., ICD or CPT codes), or utilize some ontology mapping such as SNOMED codes. However, none of the existing approaches leverage EHR data directly for learning such concept representation. We propose a new way to represent heterogeneous medical concepts (e.g., diagnoses, medications and procedures) based on co-occurrence patterns in longitudinal electronic health records. The intuition behind the method is to map medical concepts that are co-occuring closely in time to similar concept vectors so that their distance will be small. We also derive a simple method to construct patient vectors from the related medical concept vectors. Results: We evaluate similar medical concepts across diagnosis, medication and procedure. The results show xx% relevancy between similar pairs of medical concepts. Our proposed representation significantly improves the predictive modeling performance for onset of heart failure (HF), where classification methods (e.g. logistic regression, neural network, support vector machine and K-nearest neighbors) achieve up to 23% improvement in area under the ROC curve (AUC) using this proposed representation. Conclusion: We proposed an effective method for patient and medical concept representation learning. The resulting representation can map relevant concepts together and also improves predictive modeling performance.",
"title": ""
},
{
"docid": "bdd9760446a6412195e0742b5f1c7035",
"text": "Cyanobacteria are found globally due to their adaptation to various environments. The occurrence of cyanobacterial blooms is not a new phenomenon. The bloom-forming and toxin-producing species have been a persistent nuisance all over the world over the last decades. Evidence suggests that this trend might be attributed to a complex interplay of direct and indirect anthropogenic influences. To control cyanobacterial blooms, various strategies, including physical, chemical, and biological methods have been proposed. Nevertheless, the use of those strategies is usually not effective. The isolation of natural compounds from many aquatic and terrestrial plants and seaweeds has become an alternative approach for controlling harmful algae in aquatic systems. Seaweeds have received attention from scientists because of their bioactive compounds with antibacterial, antifungal, anti-microalgae, and antioxidant properties. The undesirable effects of cyanobacteria proliferations and potential control methods are here reviewed, focusing on the use of potent bioactive compounds, isolated from seaweeds, against microalgae and cyanobacteria growth.",
"title": ""
},
{
"docid": "2088be2c5623d7491c5692b6ebd4f698",
"text": "Machine learning (ML) is now widespread. Traditional software engineering can be applied to the development ML applications. However, we have to consider specific problems with ML applications in therms of their quality. In this paper, we present a survey of software quality for ML applications to consider the quality of ML applications as an emerging discussion. From this survey, we raised problems with ML applications and discovered software engineering approaches and software testing research areas to solve these problems. We classified survey targets into Academic Conferences, Magazines, and Communities. We targeted 16 academic conferences on artificial intelligence and software engineering, including 78 papers. We targeted 5 Magazines, including 22 papers. The results indicated key areas, such as deep learning, fault localization, and prediction, to be researched with software engineering and testing.",
"title": ""
},
{
"docid": "8057b33aa53c8017fd4050b9db401c2f",
"text": "Recent work in computer vision has yielded impressive results in automatically describing images with natural language. Most of these systems generate captions in a single language, requiring multiple language-specific models to build a multilingual captioning system. We propose a very simple technique to build a single unified model across languages, using artificial tokens to control the language, making the captioning system more compact. We evaluate our approach on generating English and Japanese captions, and show that a typical neural captioning architecture is capable of learning a single model that can switch between two different languages.",
"title": ""
},
{
"docid": "0851caf6599f97bbeaf68b57e49b4da5",
"text": "Improving the quality of end-of-life care for hospitalized patients is a priority for healthcare organizations. Studies have shown that physicians tend to over-estimate prognoses, which in combination with treatment inertia results in a mismatch between patients wishes and actual care at the end of life. We describe a method to address this problem using Deep Learning and Electronic Health Record (EHR) data, which is currently being piloted, with Institutional Review Board approval, at an academic medical center. The EHR data of admitted patients are automatically evaluated by an algorithm, which brings patients who are likely to benefit from palliative care services to the attention of the Palliative Care team. The algorithm is a Deep Neural Network trained on the EHR data from previous years, to predict all-cause 3–12 month mortality of patients as a proxy for patients that could benefit from palliative care. Our predictions enable the Palliative Care team to take a proactive approach in reaching out to such patients, rather than relying on referrals from treating physicians, or conduct time consuming chart reviews of all patients. We also present a novel interpretation technique which we use to provide explanations of the model's predictions.",
"title": ""
},
{
"docid": "102e1718e03b3a4e96ee8c2212738792",
"text": "This paper introduces a new method for the rapid development of complex rule bases involving cue phrases for the purpose of classifying text segments. The method is based on Ripple-Down Rules, a knowledge acquisition method that proved very successful in practice for building medical expert systems and does not require a knowledge engineer. We implemented our system KAFTAN and demonstrate the applicability of our method to the task of classifying scientific citations. Building cue phrase rules in KAFTAN is easy and efficient. We demonstrate the effectiveness of our approach by presenting experimental results where our resulting classifier clearly outperforms previously built classifiers in the recent literature.",
"title": ""
},
{
"docid": "2720f2aa50ddfc9150d6c2718f4433d3",
"text": "This paper describes InP/InGaAs double heterojunction bipolar transistor (HBT) technology that uses SiN/SiO2 sidewall spacers. This technology enables the formation of ledge passivation and narrow base metals by i-line lithography. With this process, HBTs with various emitter sizes and emitter-base (EB) spacings can be fabricated on the same wafer. The impact of the emitter size and EB spacing on the current gain and high-frequency characteristics is investigated. The reduction of the current gain is <;5% even though the emitter width decreases from 0.5 to 0.25 μm. A high current gain of over 40 is maintained even for a 0.25-μm emitter HBT. The HBTs with emitter widths ranging from 0.25 to 0.5 μm also provide peak ft of over 430 GHz. On the other hand, peak fmax greatly increases from 330 to 464 GHz with decreasing emitter width from 0.5 to 0.25 μm. These results indicate that the 0.25-μm emitter HBT with the ledge passivaiton exhibits balanced high-frequency performance (ft = 452 GHz and fmax = 464 GHz), while maintaining a current gain of over 40.",
"title": ""
},
{
"docid": "954d0ef5a1a648221ce8eb3f217f4071",
"text": "Deep learning has revolutionized many machine learning tasks in recent years, ranging from image classification and video processing to speech recognition and natural language understanding. The data in these tasks are typically represented in the Euclidean space. However, there is an increasing number of applications where data are generated from non-Euclidean domains and are represented as graphs with complex relationships and interdependency between objects. The complexity of graph data has imposed significant challenges on existing machine learning algorithms. Recently, many studies on extending deep learning approaches for graph data have emerged. In this survey, we provide a comprehensive overview of graph neural networks (GNNs) in data mining and machine learning fields. We propose a new taxonomy to divide the state-of-the-art graph neural networks into different categories. With a focus on graph convolutional networks, we review alternative architectures that have recently been developed; these learning paradigms include graph attention networks, graph autoencoders, graph generative networks, and graph spatial-temporal networks. We further discuss the applications of graph neural networks across various domains and summarize the open source codes and benchmarks of the existing algorithms on different learning tasks. Finally, we propose potential research directions in this",
"title": ""
},
{
"docid": "4cfedb5e516692b12a610c4211e6fdd4",
"text": "Supporters of market-based education reforms argue that school autonomy and between-school competition can raise student achievement. Yet U.S. reforms based in part on these ideas charter schools, school-based management, vouchers and school choice are limited in scope, complicating evaluations of their impact. In contrast, a series of remarkable reforms enacted by the Thatcher Government in Britain in the 1980s provide an ideal testing ground for examining the effects of school autonomy and between-school competition. In this paper I study one reform described by Chubb and Moe (1992) as ‘truly revolutionary’ that allowed public high schools to ‘opt out’ of the local school authority and become quasi-independent, funded directly by central Government. In order to opt out schools had to first win a majority vote of current parents, and I assess the impact of school autonomy via a regression discontinuity design, comparing student achievement levels at schools where the vote barely won to those where it barely lost. To assess the effects of competition I use this same idea to compare student achievement levels at neighbouring schools of barely winners to neighbouring schools of barely losers. My results suggest two conclusions. First, there were large gains to schools that won the vote and opted out, on the order of a onequarter standard deviation improvement on standardised national examinations. Since results improved for those students already enrolled in the school at the time of the vote, this outcome is not likely to be driven by changes in student-body composition (cream-skimming). Second, the gains enjoyed by the opted-out schools appear not to have spilled over to their neighbours I can never reject the hypothesis of no spillovers and can always reject effects bigger than one half of the ‘own-school’ impact. I interpret my results as supportive of education reforms that seek to hand power to schools, with the caveat that I do not know precisely what opted-out schools did to improve. With regards to competition, although I cannot rule out small but economically important competition effects, my results suggest caution as to the likely benefits.",
"title": ""
},
{
"docid": "686e9d38bbbec3b6e6150789e14575a0",
"text": "Automatic License Plate Recognition (ALPR) is an important task with many applications in Intelligent Transportation and Surveillance systems. As in other computer vision tasks, Deep Learning (DL) methods have been recently applied in the context of ALPR, focusing on country-specific plates, such as American or European, Chinese, Indian and Korean. However, either they are not a complete DL-ALPR pipeline, or they are commercial and utilize private datasets and lack detailed information. In this work, we proposed an end-to-end DL-ALPR system for Brazilian license plates based on state-of-the-art Convolutional Neural Network architectures. Using a publicly available dataset with Brazilian plates, the system was able to correctly detect and recognize all seven characters of a license plate in 63.18% of the test set, and 97.39% when considering at least five correct characters (partial match). Considering the segmentation and recognition of each character individually, we are able to segment 99% of the characters, and correctly recognize 93% of them.",
"title": ""
},
{
"docid": "6ae78c5e82030e76c87ef9759ba8a464",
"text": "The European innovation project PERFoRM (Production harmonizEd Reconfiguration of Flexible Robots and Machinery) is aiming for a harmonized integration of research results in the area of flexible and reconfigurable manufacturing systems. Based on the cyber-physical system (CPS) paradigm, existing technologies and concepts are researched and integrated in an architecture which is enabling the application of these new technologies in real industrial environments. To implement such a flexible cyber-physical system, one of the core requirements for each involved component is a harmonized communication, which enables the capability to collaborate with each other in an intelligent way. But especially when integrating multiple already existing production components into such a cyber-physical system, one of the major issues is to deal with the various communication protocols and data representations coming with each individual cyber-physical component. To tackle this issue, the solution foreseen within PERFoRM's architecture is to use an integration platform, the PERFoRM Industrial Manufacturing Middleware, to enable all connected components to interact with each other through the Middleware and without having to implement new interfaces for each. This paper describes the basic requirements of such a Middleware and how it fits into the PERFoRM architecture and gives an overview about the internal design and functionality.",
"title": ""
},
{
"docid": "48019a3106c6d74e4cfcc5ac596d4617",
"text": "Despite a variety of new communication technologies, loneliness is prevalent in Western countries. Boosting emotional communication through intimate connections has the potential to reduce loneliness. New technologies might exploit biosignals as intimate emotional cues because of their strong relationship to emotions. Through two studies, we investigate the possibilities of heartbeat communication as an intimate cue. In the first study (N = 32), we demonstrate, using self-report and behavioral tracking in an immersive virtual environment, that heartbeat perception influences social behavior in a similar manner as traditional intimate signals such as gaze and interpersonal distance. In the second study (N = 34), we demonstrate that a sound of the heartbeat is not sufficient to cause the effect; the stimulus must be attributed to the conversational partner in order to have influence. Together, these results show that heartbeat communication is a promising way to increase intimacy. Implications and possibilities for applications are discussed.",
"title": ""
},
{
"docid": "0b5431e668791d180239849c53faa7f7",
"text": "Crowdfunding is quickly emerging as an alternative to traditional methods of funding new products. In a crowdfunding campaign, a seller solicits financial contributions from a crowd, usually in the form of pre-buying an unrealized product, and commits to producing the product if the total amount pledged is above a certain threshold. We provide a model of crowdfunding in which consumers arrive sequentially and make decisions about whether to pledge or not. Pledging is not costless, and hence consumers would prefer not to pledge if they think the campaign will not succeed. This can lead to cascades where a campaign fails to raise the required amount even though there are enough consumers who want the product. The paper introduces a novel stochastic process --- anticipating random walks --- to analyze this problem. The analysis helps explain why some campaigns fail and some do not, and provides guidelines about how sellers should design their campaigns in order to maximize their chances of success. More broadly, Anticipating Random Walks can also find application in settings where agents make decisions sequentially and these decisions are not just affected by past actions of others, but also by how they will impact the decisions of future actors as well.",
"title": ""
},
{
"docid": "4e253e57dd1dba0ef804017d0ee9a2eb",
"text": "This paper presents an original probabilistic method for the numerical computations of Greeks (i.e. price sensitivities) in finance. Our approach is based on theintegration-by-partsformula, which lies at the core of the theory of variational stochastic calculus, as developed in the Malliavin calculus. The Greeks formulae, both with respect to initial conditions and for smooth perturbations of the local volatility, are provided for general discontinuous path-dependent payoff functionals of multidimensional diffusion processes. We illustrate the results by applying the formula to exotic European options in the framework of the Black and Scholes model. Our method is compared to the Monte Carlo finite difference approach and turns out to be very efficient in the case of discontinuous payoff functionals.",
"title": ""
},
{
"docid": "ac2eee03876d4260390972862ac12452",
"text": "Cross-validation (CV) is often used to select the regularization parameter in high dimensional problems. However, when applied to the sparse modeling method Lasso, CV leads to models that are unstable in high-dimensions, and consequently not suited for reliable interpretation. In this paper, we propose a model-free criterion ESCV based on a new estimation stability (ES) metric and CV . Our proposed ESCV finds a smaller and locally ES -optimal model smaller than the CV choice so that the it fits the data and also enjoys estimation stability property. We demonstrate that ESCV is an effective alternative to CV at a similar easily parallelizable computational cost. In particular, we compare the two approaches with respect to several performance measures when applied to the Lasso on both simulated and real data sets. For dependent predictors common in practice, our main finding is that, ESCV cuts down false positive rates often by a large margin, while sacrificing little of true positive rates. ESCV usually outperforms CV in terms of parameter estimation while giving similar performance as CV in terms of prediction. For the two real data sets from neuroscience and cell biology, the models found by ESCV are less than half of the model sizes by CV , but preserves CV’s predictive performance and corroborates with subject knowledge and independent work. We also discuss some regularization parameter alignment issues that come up in both approaches. Supplementary materials are available online.",
"title": ""
},
{
"docid": "4d8cc4d8a79f3d35ccc800c9f4f3dfdc",
"text": "Many common events in our daily life affect us in positive and negative ways. For example, going on vacation is typically an enjoyable event, while being rushed to the hospital is an undesirable event. In narrative stories and personal conversations, recognizing that some events have a strong affective polarity is essential to understand the discourse and the emotional states of the affected people. However, current NLP systems mainly depend on sentiment analysis tools, which fail to recognize many events that are implicitly affective based on human knowledge about the event itself and cultural norms. Our goal is to automatically acquire knowledge of stereotypically positive and negative events from personal blogs. Our research creates an event context graph from a large collection of blog posts and uses a sentiment classifier and semi-supervised label propagation algorithm to discover affective events. We explore several graph configurations that propagate affective polarity across edges using local context, discourse proximity, and event-event co-occurrence. We then harvest highly affective events from the graph and evaluate the agreement of the polarities with human judgements.",
"title": ""
},
{
"docid": "109838175d109002e022115d84cae0fa",
"text": "We present a probabilistic variant of the recently introduced maxout unit. The success of deep neural networks utilizing maxout can partly be attributed to favorable performance under dropout, when compared to rectified linear units. It however also depends on the fact that each maxout unit performs a pooling operation over a group of linear transformations and is thus partially invariant to changes in its input. Starting from this observation we ask the question: Can the desirable properties of maxout units be preserved while improving their invariance properties ? We argue that our probabilistic maxout (probout) units successfully achieve this balance. We quantitatively verify this claim and report classification performance matching or exceeding the current state of the art on three challenging image classification benchmarks (CIFAR-10, CIFAR-100 and SVHN).",
"title": ""
}
] | scidocsrr |
144dece26525a57f4c531eb4f1d3760b | Dynamic trees as search trees via Euler tours, applied to the network simplex algorithm | [
{
"docid": "5e5780bbd151ccf981fe69d5eb70b067",
"text": "We give efficient algorithms for maintaining a minimum spanning forest of a planar graph subject to on-line modifications. The modifications supported include changes in the edge weights, and insertion and deletion of edges and vertices. To implement the algorithms, we develop a data structure called an edge-or&reck dynumic tree, which is a variant of the dynamic tree data structure of Sleator and Tarjan. Using this data structure, our algorithms run in O(logn) time per operation and O(n) space. The algorithms can be used to maintain the connected components of a dynamic planar graph in O(logn) time per operation. *Computer Science Laboratory, Xerox PARC, 3333 Coyote Hill Rd., Palo Alto, CA 94304. This work was done while the author was at the Department of Computer Science, Columbia University, New York, NY 10027. **Department of Computer Science, Columbia University, New York, NY 10027 and Dipartmento di Informatica e Sistemistica, Universitb di Roma, Rome, Italy. ***Department of Computer Science, Brown University, Box 1910, Providence, RI 02912-1910. #Department of Computer Science, Princeton University, Princeton, NJ 08544, and AT&T Bell Laboratories, Murray Hill, New Jersey 07974. ##Department of Computer Science, Stanford University, Stanford, CA 94305. This work was done while the author was at Department of Computer Science, Princeton University, Princeton, NJ 08544. ###IBM Research Division, T. J. Watson Research Center, Yorktown Heights, NY 10598. + Research supported in part by NSF grant CCR-8X-14977, NSF grant DCR-86-05962, ONR Contract N00014-87-H-0467 and Esprit II Basic Research Actions Program of the European Communities Contract No. 3075.",
"title": ""
}
] | [
{
"docid": "1f1c4c69a4c366614f0cc9ecc24365ba",
"text": "BACKGROUND\nBurnout is a major issue among medical students. Its general characteristics are loss of interest in study and lack of motivation. A study of the phenomenon must extend beyond the university environment and personality factors to consider whether career choice has a role in the occurrence of burnout.\n\n\nMETHODS\nQuantitative, national survey (n = 733) among medical students, using a 12-item career motivation list compiled from published research results and a pilot study. We measured burnout by the validated Hungarian version of MBI-SS.\n\n\nRESULTS\nThe most significant career choice factor was altruistic motivation, followed by extrinsic motivations: gaining a degree, finding a job, accessing career opportunities. Lack of altruism was found to be a major risk factor, in addition to the traditional risk factors, for cynicism and reduced academic efficacy. Our study confirmed the influence of gender differences on both career choice motivations and burnout.\n\n\nCONCLUSION\nThe structure of career motivation is a major issue in the transformation of the medical profession. Since altruism is a prominent motivation for many women studying medicine, their entry into the profession in increasing numbers may reinforce its traditional character and act against the present trend of deprofessionalization.",
"title": ""
},
{
"docid": "07381e533ec04794a74abc0560d7c8af",
"text": "Many applications in several domains such as telecommunications, network security, large-scale sensor networks, require online processing of continuous data flows. They produce very high loads that requires aggregating the processing capacity of many nodes. Current Stream Processing Engines do not scale with the input load due to single-node bottlenecks. Additionally, they are based on static configurations that lead to either under or overprovisioning. In this paper, we present StreamCloud, a scalable and elastic stream processing engine for processing large data stream volumes. StreamCloud uses a novel parallelization technique that splits queries into subqueries that are allocated to independent sets of nodes in a way that minimizes the distribution overhead. Its elastic protocols exhibit low intrusiveness, enabling effective adjustment of resources to the incoming load. Elasticity is combined with dynamic load balancing to minimize the computational resources used. The paper presents the system design, implementation, and a thorough evaluation of the scalability and elasticity of the fully implemented system.",
"title": ""
},
{
"docid": "662ae9d792b3889dbd0450a65259253a",
"text": "We present a new parametrization for point features within monocular simultaneous localization and mapping (SLAM) that permits efficient and accurate representation of uncertainty during undelayed initialization and beyond, all within the standard extended Kalman filter (EKF). The key concept is direct parametrization of the inverse depth of features relative to the camera locations from which they were first viewed, which produces measurement equations with a high degree of linearity. Importantly, our parametrization can cope with features over a huge range of depths, even those that are so far from the camera that they present little parallax during motion---maintaining sufficient representative uncertainty that these points retain the opportunity to \"come in'' smoothly from infinity if the camera makes larger movements. Feature initialization is undelayed in the sense that even distant features are immediately used to improve camera motion estimates, acting initially as bearing references but not permanently labeled as such. The inverse depth parametrization remains well behaved for features at all stages of SLAM processing, but has the drawback in computational terms that each point is represented by a 6-D state vector as opposed to the standard three of a Euclidean XYZ representation. We show that once the depth estimate of a feature is sufficiently accurate, its representation can safely be converted to the Euclidean XYZ form, and propose a linearity index that allows automatic detection and conversion to maintain maximum efficiency---only low parallax features need be maintained in inverse depth form for long periods. We present a real-time implementation at 30 Hz, where the parametrization is validated in a fully automatic 3-D SLAM system featuring a handheld single camera with no additional sensing. Experiments show robust operation in challenging indoor and outdoor environments with a very large ranges of scene depth, varied motion, and also real time 360deg loop closing.",
"title": ""
},
{
"docid": "5f21a1348ad836ded2fd3d3264455139",
"text": "To date, brain imaging has largely relied on X-ray computed tomography and magnetic resonance angiography with limited spatial resolution and long scanning times. Fluorescence-based brain imaging in the visible and traditional near-infrared regions (400-900 nm) is an alternative but currently requires craniotomy, cranial windows and skull thinning techniques, and the penetration depth is limited to 1-2 mm due to light scattering. Here, we report through-scalp and through-skull fluorescence imaging of mouse cerebral vasculature without craniotomy utilizing the intrinsic photoluminescence of single-walled carbon nanotubes in the 1.3-1.4 micrometre near-infrared window. Reduced photon scattering in this spectral region allows fluorescence imaging reaching a depth of >2 mm in mouse brain with sub-10 micrometre resolution. An imaging rate of ~5.3 frames/s allows for dynamic recording of blood perfusion in the cerebral vessels with sufficient temporal resolution, providing real-time assessment of blood flow anomaly in a mouse middle cerebral artery occlusion stroke model.",
"title": ""
},
{
"docid": "88530d3d70df372b915556eab919a3fe",
"text": "The airway mucosa is lined by a continuous epithelium comprised of multiple cell phenotypes, several of which are secretory. Secretions produced by these cells mix with a variety of macromolecules, ions and water to form a respiratory tract fluid that protects the more distal airways and alveoli from injury and infection. The present article highlights the structure of the mucosa, particularly its secretory cells, gives a synopsis of the structure of mucus, and provides new information on the localization of mucin (MUC) genes that determine the peptide sequence of the protein backbone of the glycoproteins, which are a major component of mucus. Airway secretory cells comprise the mucous, serous, Clara and dense-core granulated cells of the surface epithelium, and the mucous and serous acinar cells of the submucosal glands. Several transitional phenotypes may be found, especially during irritation or disease. Respiratory tract mucins constitute a heterogeneous group of high molecular weight, polydisperse richly glycosylated molecules: both secreted and membrane-associated forms of mucin are found. Several mucin (MUC) genes encoding the protein core of mucin have been identified. We demonstrate the localization of MUC gene expression to a number of distinct cell types and their upregulation both in response to experimentally administered lipopolysaccharide and cystic fibrosis.",
"title": ""
},
{
"docid": "654b7a674977969237301cd874bda5d1",
"text": "This paper and its successor examine the gap between ecotourism theory as revealed in the literature and ecotourism practice as indicated by its on-site application. A framework is suggested which, if implemented through appropriate management, can help to achieve a balance between conservation and development through the promotion of synergistic relationships between natural areas, local populations and tourism. The framework can also be used to assess the status of ecotourism at particular sites. ( 1999 Published by Elsevier Science Ltd. All rights reserved.",
"title": ""
},
{
"docid": "c15618df21bce45cbad6766326de3dbd",
"text": "The birth of intersexed infants, babies born with genitals that are neither clearly male nor clearly female, has been documented throughout recorded time.' In the late twentieth century, medical technology has advanced to allow scientists to determine chromosomal and hormonal gender, which is typically taken to be the real, natural, biological gender, usually referred to as \"sex.\"2 Nevertheless, physicians who handle the cases of intersexed infants consider several factors beside biological ones in determining, assigning, and announcing the gender of a particular infant. Indeed, biological factors are often preempted in their deliberations by such cultural factors as the \"correct\" length of the penis and capacity of the vagina.",
"title": ""
},
{
"docid": "8433f58b63632abf9074eefdf5fa429f",
"text": "We are developing a monopivot centrifugal pump for circulatory assist for a period of more than 2 weeks. The impeller is supported by a pivot bearing at one end and by a passive magnetic bearing at the other. The pivot undergoes concentrated exposure to the phenomena of wear, hemolysis, and thrombus formation. The pivot durability, especially regarding the combination of male/female pivot radii, was examined through rotating wear tests and animal tests. As a result, combinations of similar radii for the male/female pivots were found to provide improved pump durability. In the extreme case, the no-gap combination would result in no thrombus formation.",
"title": ""
},
{
"docid": "74ea9bde4e265dba15cf9911fce51ece",
"text": "We consider a system aimed at improving the resolution of a conventional airborne radar, looking in the forward direction, by forming an end-fire synthetic array along the airplane line of flight. The system is designed to operate even in slant (non-horizontal) flight trajectories, and it allows imaging along the line of flight. By using the array theory, we analyze system geometry and ambiguity problems, and analytically evaluate the achievable resolution and the required pulse repetition frequency. Processing computational burden is also analyzed, and finally some simulation results are provided.",
"title": ""
},
{
"docid": "98889e4861485fdc04cff54640f4d3ab",
"text": "The design, prototype implementation, and demonstration of an ethical governor capable of restricting lethal action of an autonomous system in a manner consistent with the Laws of War and Rules of Engagement is presented.",
"title": ""
},
{
"docid": "c07f30465dc4ed355847d015fee1cadb",
"text": "0747-5632/$ see front matter 2008 Elsevier Ltd. A doi:10.1016/j.chb.2008.06.002 * Corresponding author. Tel.: +86 13735892489. E-mail addresses: luyb@mail.hust.edu.cn (Y. Lu), zh binwang@utpa.edu (B. Wang). 1 Tel.: +1 956 3813336. Instant messaging (IM) is a popular Internet application around the world. In China, the competition in the IM market is very intense and there are over 10 IM products available. We examine the intrinsic and extrinsic motivations that affect Chinese users’ acceptance of IM based on the theory of planned behavior (TPB), the technology acceptance model (TAM), and the flow theory. Results demonstrate that users’ perceived usefulness and perceived enjoyment significantly influence their attitude towards using IM, which in turn impacts their behavioral intention. Furthermore, perceived usefulness, users’ concentration, and two components of the theory of planned behavior (TPB): subjective norm and perceived behavioral control, also have significant impact on the behavioral intention. Users’ intention determines their actual usage behavior. 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "1f6637ecfc9415dd0f827ab6d3149af3",
"text": "Impaired renal function due to acute kidney injury (AKI) and/or chronic kidney diseases (CKD) is frequent in cirrhosis. Recurrent episodes of AKI may occur in end-stage cirrhosis. Differential diagnosis between functional (prerenal and hepatorenal syndrome) and acute tubular necrosis (ATN) is crucial. The concept that AKI and CKD represent a continuum rather than distinct entities, is now emerging. Not all patients with AKI have a potential for full recovery. Precise evaluation of kidney function and identification of kidney changes in patients with cirrhosis is central in predicting reversibility. This review examines current biomarkers for assessing renal function and identifying the cause and mechanisms of impaired renal function. When CKD is suspected, clearance of exogenous markers is the reference to assess glomerular filtration rate, as creatinine is inaccurate and cystatin C needs further evaluation. Recent biomarkers may help differentiate ATN from hepatorenal syndrome. Neutrophil gelatinase-associated lipocalin has been the most extensively studied biomarker yet, however, there are no clear-cut values that differentiate each of these conditions. Studies comparing ATN and hepatorenal syndrome in cirrhosis, do not include a gold standard. Combinations of innovative biomarkers are attractive to identify patients justifying simultaneous liver and kidney transplantation. Accurate biomarkers of underlying CKD are lacking and kidney biopsy is often contraindicated in this population. Urinary microRNAs are attractive although not definitely validated. Efforts should be made to develop biomarkers of kidney fibrosis, a common and irreversible feature of CKD, whatever the cause. Biomarkers of maladaptative repair leading to irreversible changes and CKD after AKI are also promising.",
"title": ""
},
{
"docid": "c6645086397ba0825f5f283ba5441cbf",
"text": "Anomalies have broad patterns corresponding to their causes. In industry, anomalies are typically observed as equipment failures. Anomaly detection aims to detect such failures as anomalies. Although this is usually a binary classification task, the potential existence of unseen (unknown) failures makes this task difficult. Conventional supervised approaches are suitable for detecting seen anomalies but not for unseen anomalies. Although, unsupervised neural networks for anomaly detection now detect unseen anomalies well, they cannot utilize anomalous data for detecting seen anomalies even if some data have been made available. Thus, providing an anomaly detector that finds both seen and unseen anomalies well is still a tough problem. In this paper, we introduce a novel probabilistic representation of anomalies to solve this problem. The proposed model defines the normal and anomaly distributions using the analogy between a set and the complementary set. We applied these distributions to an unsupervised variational autoencoder (VAE)-based method and turned it into a supervised VAE-based method. We tested the proposed method with well-known data and real industrial data to show that the proposed method detects seen anomalies better than the conventional unsupervised method without degrading the detection performance for unseen anomalies.",
"title": ""
},
{
"docid": "12cac87e781307224db2c3edf0d217b8",
"text": "Fetal ventriculomegaly (VM) refers to the enlargement of the cerebral ventricles in utero. It is associated with the postnatal diagnosis of hydrocephalus. VM is clinically diagnosed on ultrasound and is defined as an atrial diameter greater than 10 mm. Because of the anatomic detailed seen with advanced imaging, VM is often further characterized by fetal magnetic resonance imaging (MRI). Fetal VM is a heterogeneous condition with various etiologies and a wide range of neurodevelopmental outcomes. These outcomes are heavily dependent on the presence or absence of associated anomalies and the direct cause of the ventriculomegaly rather than on the absolute degree of VM. In this review article, we discuss diagnosis, work-up, counseling, and management strategies as they relate to fetal VM. We then describe imaging-based research efforts aimed at using prenatal data to predict postnatal outcome. Finally, we review the early experience with fetal therapy such as in utero shunting, as well as the advances in prenatal diagnosis and fetal surgery that may begin to address the limitations of previous therapeutic efforts.",
"title": ""
},
{
"docid": "7ec2bb00153e124e76fa7d6ab39c0b77",
"text": "Goal: Sensorimotor-based brain-computer interfaces (BCIs) have achieved successful control of real and virtual devices in up to three dimensions; however, the traditional sensor-based paradigm limits the intuitive use of these systems. Many control signals for state-of-the-art BCIs involve imagining the movement of body parts that have little to do with the output command, revealing a cognitive disconnection between the user's intent and the action of the end effector. Therefore, there is a need to develop techniques that can identify with high spatial resolution the self-modulated neural activity reflective of the actions of a helpful output device. Methods: We extend previous EEG source imaging (ESI) work to decoding natural hand/wrist manipulations by applying a novel technique to classifying four complex motor imaginations of the right hand: flexion, extension, supination, and pronation. Results: We report an increase of up to 18.6% for individual task classification and 12.7% for overall classification using the proposed ESI approach over the traditional sensor-based method. Conclusion: ESI is able to enhance BCI performance of decoding complex right-hand motor imagery tasks. Significance: This study may lead to the development of BCI systems with naturalistic and intuitive motor imaginations, thus facilitating broad use of noninvasive BCIs.",
"title": ""
},
{
"docid": "f2c846f200d9c59362bf285b2b68e2cd",
"text": "A Root Cause Failure Analysis (RCFA) for repeated impeller blade failures in a five stage centrifugal propane compressor is described. The initial failure occurred in June 2007 with a large crack found in one blade on the third impeller and two large pieces released from adjacent blades on the fourth impeller. An RCFA was performed to determine the cause of the failures. The failure mechanism was identified to be high cycle fatigue. Several potential causes related to the design, manufacture, and operation of the compressor were examined. The RCFA concluded that the design and manufacture were sound and there were no conclusive issues with respect to operation. A specific root cause was not identified. In June 2009, a second case of blade cracking occurred with a piece once again released from a single blade on the fourth impeller. Due to the commonality with the previous instance this was identified as a repeat failure. Specifically, both cases had occurred in the same compressor whereas, two compressors operating in identical service in adjacent Liquefied natural Gas (LNG) trains had not encountered the problem. A second RCFA was accordingly launched with the ultimate objective of preventing further repeated failures. Both RCFA teams were established comprising of engineers from the End User (RasGas), the OEM (Elliott Group) and an independent consultancy (Southwest Research Institute). The scope of the current investigation included a detailed metallurgical assessment, impeller modal frequency assessment, steady and unsteady computational fluid dynamics (CFD) assessment, finite element analyses (FEA), fluid structure interaction (FSI) assessment, operating history assessment and a comparison change analysis. By the process of elimination, the most probable causes were found to be associated with: • vane wake excitation of either the impeller blade leading edge modal frequency from severe mistuning and/or unusual response of the 1-diameter cover/blades modal frequency • mist carry over from third side load upstream scrubber • end of curve operation in the compressor rear section INTRODUCTION RasGas currently operates seven LNG trains at Ras Laffan Industrial City, Qatar. Train 3 was commissioned in 2004 with a nameplate LNG production of 4.7 Mtpa which corresponds to a wet sour gas feed of 790 MMscfd (22.37 MMscmd). Trains 4 and 5 were later commissioned in 2005 and 2006 respectively. They were also designed for a production 4.7 Mtpa LNG but have higher wet sour gas feed rates of 850 MMscfd (24.05 MMscmd). Despite these differences, the rated operation of the propane compressor is identical in each train. Figure 1. APCI C3-MR Refrigeration system for Trains 3, 4 and 5 The APCI C3-MR refrigeration cycle (Roberts, et al. 2002), depicted in Figure 1 is common for all three trains. Propane is circulated in a continuous loop between four compressor inlets and a single discharge. The compressed discharge gas is cooled and condensed in three sea water cooled heat exchangers before being routed to the LLP, LP, MP and HP evaporators. Here, the liquid propane is evaporated by the transfer of heat from the warmer feed and MR gas streams. It finally passes through one of the four suction scrubbers before re-entering the compressor as a gas. Although not shown, each section inlet has a dedicated anti-surge control loop from the de-superheater discharge to the suction scrubber inlet. A cross section of the propane compressor casing and rotor is illustrated in Figure 2. 
It is a straight through centrifugal unit with a horizontally split casing. Five impellers are mounted upon the 21.3 ft (6.5 m) long shaft. Three side loads add gas upstream of the suction at impellers 2, 3 & 4. The impellers are of two piece construction, with each piece fabricated from AISI 4340 forgings that were heat treated such that the material has sufficient strength and toughness for operation at temperatures down to -50F (-45.5C). The blades are milled to the hub piece and the cover piece was welded to the blades using a robotic metal inert gas (MIG) welding process. The impellers are mounted to the shaft with an interference fit. The thrust disc is mounted to the shaft with a line on line fit and antirotation key. The return channel and side load inlets are all vaned to align the downstream swirl angle. The impeller diffusers are all vaneless. A summary of the relevant compressor design parameters is given in Table 1. The complete compressor string is also depicted in Figure 1. The propane compressor is coupled directly to the HP MR compressor and driven by a GE Frame 7EA gas turbine and ABB 16086 HP (12 MW) helper motor at 3600 rpm rated shaft speed. Table 1. Propane Compressor design parameters Component Material No of",
"title": ""
},
{
"docid": "ccefef1618c7fa637de366e615333c4b",
"text": "Context: Systems development normally takes place in a specific organizational context, including organizational culture. Previous research has identified organizational culture as a factor that potentially affects the deployment systems development methods. Objective: The purpose is to analyze the relationship between organizational culture and the postadoption deployment of agile methods. Method: This study is a theory development exercise. Based on the Competing Values Model of organizational culture, the paper proposes a number of hypotheses about the relationship between organizational culture and the deployment of agile methods. Results: Inspired by the agile methods thirteen new hypotheses are introduced and discussed. They have interesting implications, when contrasted with ad hoc development and with traditional systems devel-",
"title": ""
},
{
"docid": "1b2991f84433c96c6f0d61378baebbea",
"text": "This article analyzes the topic of leadership from an evolutionary perspective and proposes three conclusions that are not part of mainstream theory. First, leading and following are strategies that evolved for solving social coordination problems in ancestral environments, including in particular the problems of group movement, intragroup peacekeeping, and intergroup competition. Second, the relationship between leaders and followers is inherently ambivalent because of the potential for exploitation of followers by leaders. Third, modern organizational structures are sometimes inconsistent with aspects of our evolved leadership psychology, which might explain the alienation and frustration of many citizens and employees. The authors draw several implications of this evolutionary analysis for leadership theory, research, and practice.",
"title": ""
},
{
"docid": "e3d1b0383d0f8b2382586be15961a765",
"text": "The critical study of political discourse has up until very recently rested solely within the domain of the social sciences. Working within a linguistics framework, Critical Discourse Analysis (CDA), in particular Fairclough (Fairclough 1989, 1995a, 1995b, 2001; Fairclough and Wodak 1997), has been heavily influenced by Foucault. 2 The linguistic theory that CDA and critical linguistics especially (which CDA subsumes) has traditionally drawn upon is Halliday‟s Systemic-Functional Grammar, which is largely concerned with the function of language in the social structure 3 (Fowler et al. 1979; Fowler 1991; Kress and Hodge 1979).",
"title": ""
},
{
"docid": "832c48916e04744188ed71bf3ab1f784",
"text": "Internet is commonly accepted as an important aspect in successful tourism promotion as well as destination marketing in this era. The main aim of this study is to explore how online promotion and its influence on destination awareness and loyalty in the tourism industry. This study proposes a structural model of the relationships among online promotion (OP), destination awareness (DA), tourist satisfaction (TS) and destination loyalty (DL). Randomly-selected respondents from the population of international tourists departing from Vietnamese international airports were selected as the questionnaire samples in the study. Initially, the exploratory factor analysis (EFA) was performed to test the validity of constructs, and the confirmatory factor analysis (CFA), using AMOS, was used to test the significance of the proposed hypothesizes model. The results show that the relationships among OP, DA, TS and DL appear significant in this study. The result also indicates that online promotion could improve the destination loyalty. Finally, the academic contribution, implications of the findings for tourism marketers and limitation are also discussed in this study. JEL classification numbers: L11",
"title": ""
}
] | scidocsrr |
bf5b515ee871395f23464714e30d64e3 | Digital Didactical Designs in iPad-Classrooms | [
{
"docid": "f1af321a5d7c2e738c181373d5dbfc9a",
"text": "This research examined how motivation (perceived control, intrinsic motivation, and extrinsic motivation), cognitive learning strategies (deep and surface strategies), and intelligence jointly predict long-term growth in students' mathematics achievement over 5 years. Using longitudinal data from six annual waves (Grades 5 through 10; Mage = 11.7 years at baseline; N = 3,530), latent growth curve modeling was employed to analyze growth in achievement. Results showed that the initial level of achievement was strongly related to intelligence, with motivation and cognitive strategies explaining additional variance. In contrast, intelligence had no relation with the growth of achievement over years, whereas motivation and learning strategies were predictors of growth. These findings highlight the importance of motivation and learning strategies in facilitating adolescents' development of mathematical competencies.",
"title": ""
}
] | [
{
"docid": "e6be28ac4a4c74ca2f8967b6a661b9cf",
"text": "This paper describes the design and simulation of a MEMS-based oscillator using a synchronous amplitude limiter. The proposed solution does not require external control signals to keep the resonator drive amplitude within the desired range. In a MEMS oscillator the oscillation amplitude needs to be limited to avoid over-driving the resonator which could cause unwanted nonlinear behavior [1] or component failure. The interface electronics has been implemented and simulated in 0.35μm HV CMOS process. The resonator was fabricated using a custom rapid-prototyping process involving Focused Ion Beam masking and Cryogenic Deep Reactive Ion Etching.",
"title": ""
},
{
"docid": "cb396e80b143c76a5be5aa4cff169ac2",
"text": "This article describes a quantitative model, which suggests what the underlying mechanisms of cognitive control in a particular task-switching paradigm are, with relevance to task-switching performance in general. It is suggested that participants dynamically control response accuracy by selective attention, in the particular paradigm being used, by controlling stimulus representation. They are less efficient in dynamically controlling response representation. The model fits reasonably well the pattern of reaction time results concerning task switching, congruency, cue-target interval and response-repetition in a mixed task condition, as well as the differences between mixed task and pure task conditions.",
"title": ""
},
{
"docid": "3d9c02413c80913cb32b5094dcf61843",
"text": "There is an explosion of youth subscriptions to original content-media-sharing Web sites such as YouTube. These Web sites combine media production and distribution with social networking features, making them an ideal place to create, connect, collaborate, and circulate. By encouraging youth to become media creators and social networkers, new media platforms such as YouTube offer a participatory culture in which youth can develop, interact, and learn. As youth development researchers, we must be cognizant of this context and critically examine what this platform offers that might be unique to (or redundant of) typical adolescent experiences in other developmental contexts.",
"title": ""
},
{
"docid": "1d4f89bb3e289ed138f45af0f1e3fc39",
"text": "The “covariance” of complex random variables and processes, when defined consistently with the corresponding notion for real random variables, is shown to be determined by the usual (complex) covariance together with a quantity called the pseudo-covariance. A characterization of uncorrelatedness and wide-sense stationarity in terms of covariance and pseudocovariance is given. Complex random variables and processes with a vanishing pseudo-covariance are called proper. It is shown that properness is preserved under affine transformations and that the complex-multivariate Gaussian density assumes a natural form only for proper random variables. The maximum-entropy theorem is generalized to the complex-multivariate case. The differential entropy of a complex random vector with a fixed correlation matrix is shown to be maximum, if and only if the random vector is proper, Gaussian and zero-mean. The notion of circular stutionarity is introduced. For the class of proper complex random processes, a discrete Fourier transform correspondence is derived relating circular stationarity in the time domain to uncorrelatedness in the frequency domain. As an application of the theory, the capacity of a discrete-time channel with complex inputs, proper complex additive white Gaussian noise, and a finite complex unit-sample response is determined. This derivation is considerably simpler than an earlier derivation for the real discrete-time Gaussian channel with intersymbol interference, whose capacity is obtained as a by-product of the results for the complex channel. Znder Terms-Proper complex random processes, circular stationarity, intersymbol interference, capacity.",
"title": ""
},
{
"docid": "f5311de600d7e50d5c9ecff5c49f7167",
"text": "Most work in machine reading focuses on question answering problems where the answer is directly expressed in the text to read. However, many real-world question answering problems require the reading of text not because it contains the literal answer, but because it contains a recipe to derive an answer together with the reader’s background knowledge. One example is the task of interpreting regulations to answer “Can I...?” or “Do I have to...?” questions such as “I am working in Canada. Do I have to carry on paying UK National Insurance?” after reading a UK government website about this topic. This task requires both the interpretation of rules and the application of background knowledge. It is further complicated due to the fact that, in practice, most questions are underspecified, and a human assistant will regularly have to ask clarification questions such as “How long have you been working abroad?” when the answer cannot be directly derived from the question and text. In this paper, we formalise this task and develop a crowd-sourcing strategy to collect 32k task instances based on real-world rules and crowd-generated questions and scenarios. We analyse the challenges of this task and assess its difficulty by evaluating the performance of rule-based and machine-learning baselines. We observe promising results when no background knowledge is necessary, and substantial room for improvement whenever background knowledge is needed.",
"title": ""
},
{
"docid": "4dc05debbbe6c8103d772d634f91c86c",
"text": "In this paper we shows the experimental results using a microcontroller and hardware integration with the EMC2 software, using the Fuzzy Gain Scheduling PI Controller in a mechatronic prototype. The structure of the fuzzy 157 Research in Computing Science 116 (2016) pp. 157–169; rec. 2016-03-23; acc. 2016-05-11 controller is composed by two-inputs and two-outputs, is a TITO system. The error control feedback and their derivative are the inputs, while the proportional and integral gains are the fuzzy controller outputs. Was defined five Gaussian membership functions for the fuzzy sets by each input, the product fuzzy logic operator (AND connective) and the centroid defuzzifier was used to infer the gains outputs. The structure of fuzzy rule base are type Sugeno, zero-order. The experimental result in closed-loop shows the viability end effectiveness of the position fuzzy controller strategy. To verify the robustness of this controller structure, two different experiments was making: undisturbed and disturbance both in closed-loop. This work presents comparative experimental results, using the Classical tune rule of Ziegler-Nichols and the Fuzzy Gain Scheduling PI Controller, for a mechatronic system widely used in various industries applications.",
"title": ""
},
{
"docid": "6bd9fc02c8e26e64cecb13dab1a93352",
"text": "Kohlberg, who was born in 1927, grew up in Bronxville, New York, and attended the Andover Academy in Massachusetts, a private high school for bright and usually wealthy students. He did not go immediately to college, but instead went to help the Israeli cause, in which he was made the Second Engineer on an old freighter carrying refugees from parts of Europe to Israel. After this, in 1948, he enrolled at the University of Chicago, where he scored so high on admission tests that he had to take only a few courses to earn his bachelor's degree. This he did in one year. He stayed on at Chicago for graduate work in psychology, at first thinking he would become a clinical psychologist. However, he soon became interested in Piaget and began interviewing children and adolescents on moral issues. The result was his doctoral dissertation (1958a), the first rendition of his new stage theory.",
"title": ""
},
{
"docid": "0696f518544589e4f7dbee4b50886685",
"text": "This research was designed to theoretically address and empirically examine research issues related to customer’s satisfaction with social commerce. To investigate these research issues, data were collected using a written survey as part of a free simulation experiment. In this experiment, 136 participants were asked to evaluate two social commerce websites using an instrument designed to measure relationships between s-commerce website quality, customer psychological empowerment and customer satisfaction. A total of 278 usable s-commerce site evaluations were collected and analyzed. The results showed that customer satisfaction with social commerce is correlated with social commerce sites quality and customer psychological empowerment.",
"title": ""
},
{
"docid": "7ce9f8cbba0bf56e68443f1ed759b6d3",
"text": "We present a Connected Learning Analytics (CLA) toolkit, which enables data to be extracted from social media and imported into a Learning Record Store (LRS), as defined by the new xAPI standard. A number of implementation issues are discussed, and a mapping that will enable the consistent storage and then analysis of xAPI verb/object/activity statements across different social media and online environments is introduced. A set of example learning activities are proposed, each facilitated by the Learning Analytics beyond the LMS that the toolkit enables.",
"title": ""
},
{
"docid": "8689f4b13343fc9a09135fca1f259976",
"text": "In this work, we propose a novel framework named Coconditional Autoencoding Adversarial Networks (CocoAAN) for Chinese font learning, which jointly learns a generation network and two encoding networks of different feature domains using an adversarial process. The encoding networks map the glyph images into style and content features respectively via the pairwise substitution optimization strategy, and the generation network maps these two kinds of features to glyph samples. Together with a discriminative network conditioned on the extracted features, our framework succeeds in producing realistic-looking Chinese glyph images flexibly. Unlike previous models relying on the complex segmentation of Chinese components or strokes, our model can “parse” structures in an unsupervised way, through which the content feature representation of each character is captured. Experiments demonstrate our framework has a powerful generalization capacity to other unseen fonts and characters.",
"title": ""
},
{
"docid": "c8d2e69a0f58204a648dd4b18447e11a",
"text": "Today, the common vision of smart components is usually based on the concept of the Internet of Things (IoT). Intelligent infrastructures combine sensor networks, network connectivity and software to oversee and analyze complex systems to identify inefficiencies and inform operational decision-making. Wireless Sensor nodes collect operational information over time and provide real-time data on current conditions such as (volcano activities, disaster parameters in general). The security of wireless sensor networks in the world of the Internet of Things is a big challenge, since there are several types of attacks against different layers of OSI model, in their goal is falsified the values of the detected parameters.",
"title": ""
},
{
"docid": "9b8317646ce6cad433e47e42198be488",
"text": "OBJECTIVE\nDigital mental wellbeing interventions are increasingly being used by the general public as well as within clinical treatment. Among these, mindfulness and meditation programs delivered through mobile device applications are gaining popularity. However, little is known about how people use and experience such applications and what are the enabling factors and barriers to effective use. To address this gap, the study reported here sought to understand how users adopt and experience a popular mobile-based mindfulness intervention.\n\n\nMETHODS\nA qualitative semi-structured interview study was carried out with 16 participants aged 25-38 (M=32.5) using the commercially popular mindfulness application Headspace for 30-40days. All participants were employed and living in a large UK city. The study design and interview schedule were informed by an autoethnography carried out by the first author for thirty days before the main study began. Results were interpreted in terms of the Reasoned Action Approach to understand behaviour change.\n\n\nRESULTS\nThe core concern of users was fitting the application into their busy lives. Use was also influenced by patterns in daily routines, on-going reflections about the consequences of using the app, perceived self-efficacy, emotion and mood states, personal relationships and social norms. Enabling factors for use included positive attitudes towards mindfulness and use of the app, realistic expectations and positive social influences. Barriers to use were found to be busy lifestyles, lack of routine, strong negative emotions and negative perceptions of mindfulness.\n\n\nCONCLUSIONS\nMobile wellbeing interventions should be designed with consideration of people's beliefs, affective states and lifestyles, and should be flexible to meet the needs of different users. Designers should incorporate features in the design of applications that manage expectations about use and that support users to fit app use into a busy lifestyle. The Reasoned Action Approach was found to be a useful theory to inform future research and design of persuasive mental wellbeing technologies.",
"title": ""
},
{
"docid": "f28170dcc3c4949c27ee609604c53bc2",
"text": "Debates over Cannabis sativa L. and C. indica Lam. center on their taxonomic circumscription and rank. This perennial puzzle has been compounded by the viral spread of a vernacular nomenclature, “Sativa” and “Indica,” which does not correlate with C. sativa and C. indica. Ambiguities also envelop the epithets of wild-type Cannabis: the spontanea versus ruderalis debate (i.e., vernacular “Ruderalis”), as well as another pair of Cannabis epithets, afghanica and kafirstanica. To trace the rise of vernacular nomenclature, we begin with the protologues (original descriptions, synonymies, type specimens) of C. sativa and C. indica. Biogeographical evidence (obtained from the literature and herbarium specimens) suggests 18th–19th century botanists were biased in their assignment of these taxa to field specimens. This skewed the perception of Cannabis biodiversity and distribution. The development of vernacular “Sativa,” “Indica,” and “Ruderalis” was abetted by twentieth century botanists, who ignored original protologues and harbored their own cultural biases. Predominant taxonomic models by Vavilov, Small, Schultes, de Meijer, and Hillig are compared and critiqued. Small’s model adheres closest to protologue data (with C. indica treated as a subspecies). “Sativa” and “Indica” are subpopulations of C. sativa subsp. indica; “Ruderalis” represents a protean assortment of plants, including C. sativa subsp. sativa and recent hybrids.",
"title": ""
},
{
"docid": "784d75662234e45f78426c690356d872",
"text": "Chinese-English parallel corpora are key resources for Chinese-English cross-language information processing, Chinese-English bilingual lexicography, Chinese-English language research and teaching. But so far large-scale Chinese-English corpus is still unavailable yet, given the difficulties and the intensive labours required. In this paper, our work towards building a large-scale Chinese-English parallel corpus is presented. We elaborate on the collection, annotation and mark-up of the parallel Chinese-English texts and the workflow that we used to construct the corpus. In addition, we also present our work toward building tools for constructing and using the corpus easily for different purposes. Among these tools, a parallel concordance tool developed by us is examined in detail. Several applications of the corpus being conducted are also introduced briefly in the paper.",
"title": ""
},
{
"docid": "9533193407869250854157e89d2815eb",
"text": "Life events are often described as major forces that are going to shape tomorrow's consumer need, behavior and mood. Thus, the prediction of life events is highly relevant in marketing and sociology. In this paper, we propose a data-driven, real-time method to predict individual life events, using readily available data from smartphones. Our large-scale user study with more than 2000 users shows that our method is able to predict life events with 64.5% higher accuracy, 183.1% better precision and 88.0% higher specificity than a random model on average.",
"title": ""
},
{
"docid": "2e11a8170ec8b2547548091443d46cc6",
"text": "This chapter presents the theory of the Core Elements of the Gaming Experience (CEGE). The CEGE are the necessary but not sufficient conditions to provide a positive experience while playing video-games. This theory, formulated using qualitative methods, is presented with the aim of studying the gaming experience objectively. The theory is abstracted using a model and implemented in questionnaire. This chapter discusses the formulation of the theory, introduces the model, and shows the use of the questionnaire in an experiment to differentiate between two different experiences. In loving memory of Samson Cairns 4.1 The Experience of Playing Video-games The experience of playing video-games is usually understood as the subjective relation between the user and the video-game beyond the actual implementation of the game. The implementation is bound by the speed of the microprocessors of the gaming console, the ergonomics of the controllers, and the usability of the interface. Experience is more than that, it is also considered as a personal relationship. Understanding this relationship as personal is problematic under a scientific scope. Personal and subjective knowledge does not allow a theory to be generalised or falsified (Popper 1994). In this chapter, we propose a theory for understanding the experience of playing video-games, or gaming experience, that can be used to assess and compare different experiences. This section introduces the approach taken towards understanding the gaming experience under the aforementioned perspective. It begins by presenting an E.H. Calvillo-Gámez (B) División de Nuevas Tecnologías de la Información, Universidad Politécnica de San Luis Potosí, San Luis Potosí, México e-mail: e.calvillo@upslp.edu.mx 47 R. Bernhaupt (ed.), Evaluating User Experience in Games, Human-Computer Interaction Series, DOI 10.1007/978-1-84882-963-3_4, C © Springer-Verlag London Limited 2010 48 E.H. Calvillo-Gámez et al. overview of video-games and user experience in order to familiarise the reader with such concepts. Last, the objective and overview of the whole chapter are presented. 4.1.",
"title": ""
},
{
"docid": "0adf96e7c34bfb374b81c579d952a839",
"text": "Metric learning has attracted wide attention in face and kinship verification, and a number of such algorithms have been presented over the past few years. However, most existing metric learning methods learn only one Mahalanobis distance metric from a single feature representation for each face image and cannot make use of multiple feature representations directly. In many face-related tasks, we can easily extract multiple features for a face image to extract more complementary information, and it is desirable to learn distance metrics from these multiple features, so that more discriminative information can be exploited than those learned from individual features. To achieve this, we present a large-margin multi-metric learning (LM3L) method for face and kinship verification, which jointly learns multiple global distance metrics under which the correlations of different feature representations of each sample are maximized, and the distance of each positive pair is less than a low threshold and that of each negative pair is greater than a high threshold. To better exploit the local structures of face images, we also propose a local metric learning and local LM3Lmethods to learn a set of local metrics. Experimental results on three face data sets show that the proposed methods achieve very competitive results compared with the state-of-the-art methods.",
"title": ""
},
{
"docid": "097879c593aa68602564c176b806a74b",
"text": "We study the recognition of surfaces made from different materials such as concrete, rug, marble, or leather on the basis of their textural appearance. Such natural textures arise from spatial variation of two surface attributes: (1) reflectance and (2) surface normal. In this paper, we provide a unified model to address both these aspects of natural texture. The main idea is to construct a vocabulary of prototype tiny surface patches with associated local geometric and photometric properties. We call these 3D textons. Examples might be ridges, grooves, spots or stripes or combinations thereof. Associated with each texton is an appearance vector, which characterizes the local irradiance distribution, represented as a set of linear Gaussian derivative filter outputs, under different lighting and viewing conditions. Given a large collection of images of different materials, a clustering approach is used to acquire a small (on the order of 100) 3D texton vocabulary. Given a few (1 to 4) images of any material, it can be characterized using these textons. We demonstrate the application of this representation for recognition of the material viewed under novel lighting and viewing conditions. We also illustrate how the 3D texton model can be used to predict the appearance of materials under novel conditions.",
"title": ""
},
{
"docid": "16488fc65794a318e06777189edc3e4b",
"text": "This work details Sighthoundś fully automated license plate detection and recognition system. The core technology of the system is built using a sequence of deep Convolutional Neural Networks (CNNs) interlaced with accurate and efficient algorithms. The CNNs are trained and fine-tuned so that they are robust under different conditions (e.g. variations in pose, lighting, occlusion, etc.) and can work across a variety of license plate templates (e.g. sizes, backgrounds, fonts, etc). For quantitative analysis, we show that our system outperforms the leading license plate detection and recognition technology i.e. ALPR on several benchmarks. Our system is available to developers through the Sighthound Cloud API at https://www.sighthound.com/products/cloud",
"title": ""
},
{
"docid": "b3068a1b1acb0782d2c2b1dac65042cf",
"text": "Measurement of N (nitrogen), P (phosphorus) and K ( potassium) contents of soil is necessary to decide how much extra contents of these nutrients are to b e added in the soil to increase crop fertility. Thi s improves the quality of the soil which in turn yields a good qua lity crop. In the present work fiber optic based c olor sensor has been developed to determine N, P, and K values in t he soil sample. Here colorimetric measurement of aq ueous solution of soil has been carried out. The color se nsor is based on the principle of absorption of col or by solution. It helps in determining the N, P, K amounts as high, m edium, low, or none. The sensor probe along with p roper signal conditioning circuits is built to detect the defici ent component of the soil. It is useful in dispensi ng only required amount of fertilizers in the soil.",
"title": ""
}
] | scidocsrr |
a90986c95d2e4c08094b461909151d99 | Web-Service Clustering with a Hybrid of Ontology Learning and Information-Retrieval-Based Term Similarity | [
{
"docid": "639bbe7b640c514ab405601c7c3cfa01",
"text": "Measuring the semantic similarity between words is an important component in various tasks on the web such as relation extraction, community mining, document clustering, and automatic metadata extraction. Despite the usefulness of semantic similarity measures in these applications, accurately measuring semantic similarity between two words (or entities) remains a challenging task. We propose an empirical method to estimate semantic similarity using page counts and text snippets retrieved from a web search engine for two words. Specifically, we define various word co-occurrence measures using page counts and integrate those with lexical patterns extracted from text snippets. To identify the numerous semantic relations that exist between two given words, we propose a novel pattern extraction algorithm and a pattern clustering algorithm. The optimal combination of page counts-based co-occurrence measures and lexical pattern clusters is learned using support vector machines. The proposed method outperforms various baselines and previously proposed web-based semantic similarity measures on three benchmark data sets showing a high correlation with human ratings. Moreover, the proposed method significantly improves the accuracy in a community mining task.",
"title": ""
}
] | [
{
"docid": "27488ded8276967b9fd71ec40eec28d8",
"text": "This paper discusses the use of modern 2D spectral estimation algorithms for synthetic aperture radar (SAR) imaging. The motivation for applying power spectrum estimation methods to SAR imaging is to improve resolution, remove sidelobe artifacts, and reduce speckle compared to what is possible with conventional Fourier transform SAR imaging techniques. This paper makes two principal contributions to the field of adaptive SAR imaging. First, it is a comprehensive comparison of 2D spectral estimation methods for SAR imaging. It provides a synopsis of the algorithms available, discusses their relative merits for SAR imaging, and illustrates their performance on simulated and collected SAR imagery. Some of the algorithms presented or their derivations are new, as are some of the insights into or analyses of the algorithms. Second, this work develops multichannel variants of four related algorithms, minimum variance method (MVM), reduced-rank MVM (RRMVM), adaptive sidelobe reduction (ASR) and space variant apodization (SVA) to estimate both reflectivity intensity and interferometric height from polarimetric displaced-aperture interferometric data. All of these interferometric variants are new. In the interferometric contest, adaptive spectral estimation can improve the height estimates through a combination of adaptive nulling and averaging. Examples illustrate that MVM, ASR, and SVA offer significant advantages over Fourier methods for estimating both scattering intensity and interferometric height, and allow empirical comparison of the accuracies of Fourier, MVM, ASR, and SVA interferometric height estimates.",
"title": ""
},
{
"docid": "7c8f38386322d9095b6950c4f31515a0",
"text": "Due to the limited amount of training samples, finetuning pre-trained deep models online is prone to overfitting. In this paper, we propose a sequential training method for convolutional neural networks (CNNs) to effectively transfer pre-trained deep features for online applications. We regard a CNN as an ensemble with each channel of the output feature map as an individual base learner. Each base learner is trained using different loss criterions to reduce correlation and avoid over-training. To achieve the best ensemble online, all the base learners are sequentially sampled into the ensemble via important sampling. To further improve the robustness of each base learner, we propose to train the convolutional layers with random binary masks, which serves as a regularization to enforce each base learner to focus on different input features. The proposed online training method is applied to visual tracking problem by transferring deep features trained on massive annotated visual data and is shown to significantly improve tracking performance. Extensive experiments are conducted on two challenging benchmark data set and demonstrate that our tracking algorithm can outperform state-of-the-art methods with a considerable margin.",
"title": ""
},
{
"docid": "23ef781d3230124360f24cc6e38fb15f",
"text": "Exploration of ANNs for the economic purposes is described and empirically examined with the foreign exchange market data. For the experiments, panel data of the exchange rates (USD/EUR, JPN/USD, USD/ GBP) are examined and optimized to be used for time-series predictions with neural networks. In this stage the input selection, in which the processing steps to prepare the raw data to a suitable input for the models are investigated. The best neural network is found with the best forecasting abilities, based on a certain performance measure. A visual graphs on the experiments data set is presented after processing steps, to illustrate that particular results. The out-of-sample results are compared with training ones. & 2015 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "301fb951bb2720ebc71202ee7be37be2",
"text": "This work incorporates concepts from the behavioral confirmation tradition, self tradition, and interdependence tradition to identify an interpersonal process termed the Michelangelo phenomenon. The Michelangelo phenomenon describes the means by which the self is shaped by a close partner's perceptions and behavior. Specifically, self movement toward the ideal self is described as a product of partner affirmation, or the degree to which a partner's perceptions of the self and behavior toward the self are congruent with the self's ideal. The results of 4 studies revealed strong associations between perceived partner affirmation and self movement toward the ideal self, using a variety of participant populations and measurement methods. In addition, perceived partner affirmation--particularly perceived partner behavioral affirmation--was strongly associated with quality of couple functioning and stability in ongoing relationships.",
"title": ""
},
{
"docid": "b4ac5df370c0df5fdb3150afffd9158b",
"text": "The aggregation of many independent estimates can outperform the most accurate individual judgement 1–3 . This centenarian finding 1,2 , popularly known as the 'wisdom of crowds' 3 , has been applied to problems ranging from the diagnosis of cancer 4 to financial forecasting 5 . It is widely believed that social influence undermines collective wisdom by reducing the diversity of opinions within the crowd. Here, we show that if a large crowd is structured in small independent groups, deliberation and social influence within groups improve the crowd’s collective accuracy. We asked a live crowd (N = 5,180) to respond to general-knowledge questions (for example, \"What is the height of the Eiffel Tower?\"). Participants first answered individually, then deliberated and made consensus decisions in groups of five, and finally provided revised individual estimates. We found that averaging consensus decisions was substantially more accurate than aggregating the initial independent opinions. Remarkably, combining as few as four consensus choices outperformed the wisdom of thousands of individuals. The collective wisdom of crowds often provides better answers to problems than individual judgements. Here, a large experiment that split a crowd into many small deliberative groups produced better estimates than the average of all answers in the crowd.",
"title": ""
},
{
"docid": "39e38d7825ff7a74e6bbf9975826ddea",
"text": "Online display advertising has becomes a billion-dollar industry, and it keeps growing. Advertisers attempt to send marketing messages to attract potential customers via graphic banner ads on publishers’ webpages. Advertisers are charged for each view of a page that delivers their display ads. However, recent studies have discovered that more than half of the ads are never shown on users’ screens due to insufficient scrolling. Thus, advertisers waste a great amount of money on these ads that do not bring any return on investment. Given this situation, the Interactive Advertising Bureau calls for a shift toward charging by viewable impression, i.e., charge for ads that are viewed by users. With this new pricing model, it is helpful to predict the viewability of an ad. This paper proposes two probabilistic latent class models (PLC) that predict the viewability of any given scroll depth for a user-page pair. Using a real-life dataset from a large publisher, the experiments demonstrate that our models outperform comparison systems.",
"title": ""
},
{
"docid": "8ed6c9e82c777aa092a78959391a37b2",
"text": "The trie data structure has many properties which make it especially attractive for representing large files of data. These properties include fast retrieval time, quick unsuccessful search determination, and finding the longest match to a given identifier. The main drawback is the space requirement. In this paper the concept of trie compaction is formalized. An exact algorithm for optimal trie compaction and three algorithms for approximate trie compaction are given, and an analysis of the three algorithms is done. The analysis indicate that for actual tries, reductions of around 70 percent in the space required by the uncompacted trie can be expected. The quality of the compaction is shown to be insensitive to the number of nodes, while a more relevant parameter is the alphabet size of the key.",
"title": ""
},
{
"docid": "4e2b0b82a6f7e342f10d1a66795e57f6",
"text": "A fully electrical startup boost converter is presented in this paper. With a three-stage stepping-up architecture, the proposed circuit is capable of performing thermoelectric energy harvesting at an input voltage as low as 50 mV. Due to the zero-current-switching (ZCS) operation of the boost converter and automatic shutdown of the low-voltage starter and the auxiliary converter, conversion efficiency up to 73% is demonstrated. The boost converter does not require bulky transformers or mechanical switches for kick-start, making it very attractive for body area sensor network applications.",
"title": ""
},
{
"docid": "51fbebff61232e46381b243023c35dc5",
"text": "In this paper, mechanical design of a novel spherical wheel shape for a omni-directional mobile robot is presented. The wheel is used in a omnidirectional mobile robot realizing high step-climbing capability with its hemispherical wheel. Conventional Omniwheels can realize omnidirectional motion, however they have a poor step overcoming ability due to the sub-wheel small size. The proposed design solves this drawback by means of a 4 wheeled design. \"Omni-Ball\" is formed by two passive rotational hemispherical wheels and one active rotational axis. An actual prototype model has been developed to illustrate the concept and to perform preliminary motion experiments, through which the basic performance of the Omnidirectional vehicle with this proposed Omni-Ball mechanism was confirmed. An prototype has been developed to illustrate the concept. Motion experiments, with a test vehicle are also presented.",
"title": ""
},
{
"docid": "c1c177ee96a0da0a4bbc6749364a14e5",
"text": "Knowledge graphs are used to represent relational information in terms of triples. To enable learning about domains, embedding models, such as tensor factorization models, can be used to make predictions of new triples. Often there is background taxonomic information (in terms of subclasses and subproperties) that should also be taken into account. We show that existing fully expressive (a.k.a. universal) models cannot provably respect subclass and subproperty information. We show that minimal modifications to an existing knowledge graph completion method enables injection of taxonomic information. Moreover, we prove that our model is fully expressive, assuming a lower-bound on the size of the embeddings. Experimental results on public knowledge graphs show that despite its simplicity our approach is surprisingly effective. The AI community has long noticed the importance of structure in data. While traditional machine learning techniques have been mostly focused on feature-based representations, the primary form of data in the subfield of Statistical Relational AI (STARAI) (Getoor and Taskar, 2007; Raedt et al., 2016) is in the form of entities and relationships among them. Such entity-relationships are often in the form of (head, relationship, tail) triples, which can also be expressed in the form of a graph, with nodes as entities and labeled directed edges as relationships among entities. Predicting the existence, identity, and attributes of entities and their relationships are among the main goals of StaRAI. Knowledge Graphs (KGs) are graph structured knowledge bases that store facts about the world. A large number of KGs have been created such as NELL (Carlson et al., 2010), FREEBASE (Bollacker et al., 2008), and Google Knowledge Vault (Dong et al., 2014). These KGs have applications in several fields including natural language processing, search, automatic question answering and recommendation systems. Since accessing and storing all the facts in the world is difficult, KGs are incomplete. The goal of link prediction for KGs – a.k.a. KG completion – is to predict the unknown links or relationships in a KG based on the existing ones. This often amounts to infer (the probability of) new triples from the existing triples. Copyright © 2019, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. A common approach to apply machine learning to symbolic data, such as text, graph and entity-relationships, is through embeddings. Word, sentence and paragraph embeddings (Mikolov et al., 2013; Pennington, Socher, and Manning, 2014), which vectorize words, sentences and paragraphs using context information, are widely used in a variety of natural language processing tasks from syntactic parsing to sentiment analysis. Graph embeddings (Hoff, Raftery, and Handcock, 2002; Grover and Leskovec, 2016; Perozzi, Al-Rfou, and Skiena, 2014) are used in social network analysis for link prediction and community detection. In relational learning, embeddings for entities and relationships are used to generalize from existing data. These embeddings are often formulated in terms of tensor factorization (Nickel, Tresp, and Kriegel, 2012; Bordes et al., 2013; Trouillon et al., 2016; Kazemi and Poole, 2018c). Here, the embeddings are learned such that their interaction through (tensor-)products best predicts the (probability of the) existence of the observed triples; see (Nguyen, 2017; Wang et al., 2017) for details and discussion. 
Tensor factorization methods have been very successful, yet they rely on a large number of annotated triples to learn useful representations. There is often other information in ontologies which specifies the meaning of the symbols used in a knowledge base. One type of ontological information is represented in a hierarchical structure called a taxonomy. For example, a knowledge base might contain information that DJTrump, whose name is “Donald Trump”, is a president, but may not contain information that he is a person, a mammal and an animal, because these are implied by taxonomic knowledge. Being told that mammals are chordates lets us conclude that DJTrump is also a chordate, without needing to have triples specifying this about multiple mammals. We could also have information about subproperties, such as that being president is a subproperty of “managing”, which in turn is a subproperty of “interacts with”. This paper is about combining taxonomic information in the form of subclass and subproperty (e.g., managing implies interaction) into relational embedding models. We show that existing factorization models that are fully expressive cannot reflect such constraints for all legal entity embeddings. We propose a model that is provably fully expressive and can represent such taxonomic information, and evaluate its performance on real-world datasets. Factorization and Embedding: Let E represent the set of entities and R represent the set of relations. Let W be a set of triples (h, r, t) that are true in the world, where h, t ∈ E are head and tail, and r ∈ R is the relation in the triple. We use W̄ to represent the triples that are false – i.e., W̄ ≐ {(h, r, t) ∈ E × R × E ∣ (h, r, t) ∉ W}. An example of a triple in W can be (Paris, CapitalCityOfCountry, France) and an example of a triple in W̄ can be (Paris, CapitalCityOfCountry, Germany). A KG K ⊆ W is a subset of all the facts. The problem of KG completion is to infer W from its subset KG. There exists a variety of methods for KG completion. Here, we consider embedding methods and in particular using tensor factorization. For a broader review of existing KG completion methods that can use background information, see Related Work. Embeddings: An embedding is a function from an entity or a relation to a vector (or sometimes higher-order tensors) over a field. We use bold lower-case for vectors – that is, s ∈ R^k is an embedding of an entity and r ∈ R^k is an embedding of a relation. Taxonomies: It is common to have structure over the symbols used in the triples, see (e.g., Shoham, 2016). The Ontology Web Language (OWL) (Hitzler et al., 2012) defines (among many other meta-relations) subproperties and subclasses, where p1 is a subproperty of p2 if ∀x, y : (x, p1, y) → (x, p2, y), that is, whenever p1 is true, p2 is also true. Classes can be defined either as a set with a class assertion (often called “type”) between an entity and a class, e.g., saying x is in class C using (x, type, C), or in terms of the characteristic function of the class, a function that is true of elements of the class. If c is the characteristic function of class C, then x is in class c is written (x, c, true). For representations that treat entities and properties symmetrically, the two ways to define classes are essentially the same. C1 is a subclass of C2 if every entity in class C1 is in class C2, that is, ∀x : (x, type, C1) → (x, type, C2) or ∀x : (x, c1, true) → (x, c2, true). 
If we treat true as an entity, then subclass can be seen as a special case of subproperty. For the rest of the paper we will refer to subsumption in terms of subproperty (and so also of subclass). A non-trivial subsumption is one which is not symmetric; p1 is a subproperty of p2 and there is some relations that is true of p1 that is not true of p2. We want the subsumption to be over all possible entities; those entities that have a legal embedding according to the representation used, not just those we know exist. Let E∗ be the set of all possible entities with a legal embedding according to the representation used. Tensor factorization: For KG completion a tensor factorization defines a function μ ∶ R ×R ×R → [0,1] that takes the embeddings h, r and t of a triple (h, r, t) as input, and generates a prediction, e.g., a probability, of the triple being true (h, r, t) ∈ W . In particular, μ is often a nonlinearity applied to a multi-linear function of h, r, t. The family of methods that we study uses the following multilinear form: Let x, y, and z be vectors of length k. Define ⟨x,y,z⟩ to be the sum of their element-wise product, namely",
"title": ""
},
{
"docid": "7b6c93b9e787ab0ba512cc8aaff185af",
"text": "INTRODUCTION The field of second (or foreign) language teaching has undergone many fluctuations and dramatic shifts over the years. As opposed to physics or chemistry, where progress is more or less steady until a major discovery causes a radical theoretical revision (Kuhn, 1970), language teaching is a field where fads and heroes have come and gone in a manner fairly consistent with the kinds of changes that occur in youth culture. I believe that one reason for the frequent changes that have been taking place until recently is the fact that very few language teachers have even the vaguest sense of history about their profession and are unclear concerning the historical bases of the many methodological options they currently have at their disposal. It is hoped that this brief and necessarily oversimplified survey will encourage many language teachers to learn more about the origins of their profession. Such knowledge will give some healthy perspective in evaluating the socalled innovations or new approaches to methodology that will continue to emerge over time.",
"title": ""
},
{
"docid": "78c477aeb6a27cf5b4de028c0ecd7b43",
"text": "This paper addresses the problem of speaker clustering in telephone conversations. Recently, a new clustering algorithm named affinity propagation (AP) is proposed. It exhibits fast execution speed and finds clusters with low error. However, AP is an unsupervised approach which may make the resulting number of clusters different from the actual one. This deteriorates the speaker purity dramatically. This paper proposes a modified method named supervised affinity propagation (SAP), which automatically reruns the AP procedure to make the final number of clusters converge to the specified number. Experiments are carried out to compare SAP with traditional k-means and agglomerative hierarchical clustering on 4-hour summed channel conversations in the NIST 2004 Speaker Recognition Evaluation. Experiment results show that the SAP method leads to a noticeable speaker purity improvement with slight cluster purity decrease compared with AP.",
"title": ""
},
{
"docid": "c84a0f630b4fb2e547451d904e1c63a5",
"text": "Deep neural network training spends most of the computation on examples that are properly handled, and could be ignored. We propose to mitigate this phenomenon with a principled importance sampling scheme that focuses computation on “informative” examples, and reduces the variance of the stochastic gradients during training. Our contribution is twofold: first, we derive a tractable upper bound to the persample gradient norm, and second we derive an estimator of the variance reduction achieved with importance sampling, which enables us to switch it on when it will result in an actual speedup. The resulting scheme can be used by changing a few lines of code in a standard SGD procedure, and we demonstrate experimentally, on image classification, CNN fine-tuning, and RNN training, that for a fixed wall-clock time budget, it provides a reduction of the train losses of up to an order of magnitude and a relative improvement of test errors between 5% and 17%.",
"title": ""
},
{
"docid": "8b2d6ce5158c94f2e21ff4ebd54af2b5",
"text": "Chambers and Jurafsky (2009) demonstrated that event schemas can be automatically induced from text corpora. However, our analysis of their schemas identifies several weaknesses, e.g., some schemas lack a common topic and distinct roles are incorrectly mixed into a single actor. It is due in part to their pair-wise representation that treats subjectverb independently from verb-object. This often leads to subject-verb-object triples that are not meaningful in the real-world. We present a novel approach to inducing open-domain event schemas that overcomes these limitations. Our approach uses cooccurrence statistics of semantically typed relational triples, which we call Rel-grams (relational n-grams). In a human evaluation, our schemas outperform Chambers’s schemas by wide margins on several evaluation criteria. Both Rel-grams and event schemas are freely available to the research community.",
"title": ""
},
{
"docid": "864adf6f82a0d1af98339f92035b15fc",
"text": "Typically in neuroimaging we are looking to extract some pertinent information from imperfect, noisy images of the brain. This might be the inference of percent changes in blood flow in perfusion FMRI data, segmentation of subcortical structures from structural MRI, or inference of the probability of an anatomical connection between an area of cortex and a subthalamic nucleus using diffusion MRI. In this article we will describe how Bayesian techniques have made a significant impact in tackling problems such as these, particularly in regards to the analysis tools in the FMRIB Software Library (FSL). We shall see how Bayes provides a framework within which we can attempt to infer on models of neuroimaging data, while allowing us to incorporate our prior belief about the brain and the neuroimaging equipment in the form of biophysically informed or regularising priors. It allows us to extract probabilistic information from the data, and to probabilistically combine information from multiple modalities. Bayes can also be used to not only compare and select between models of different complexity, but also to infer on data using committees of models. Finally, we mention some analysis scenarios where Bayesian methods are impractical, and briefly discuss some practical approaches that we have taken in these cases.",
"title": ""
},
{
"docid": "205a44a35cc1af14f2b40424cc2654bc",
"text": "This paper focuses on human-pose estimation using a stationary depth sensor. The main challenge concerns reducing the feature ambiguity and modeling human poses in high-dimensional human-pose space because of the curse of dimensionality. We propose a 3-D-point-cloud system that captures the geometric properties (orientation and shape) of the 3-D point cloud of a human to reduce the feature ambiguity, and use the result from action classification to discover low-dimensional manifolds in human-pose space in estimating the underlying probability distribution of human poses. In the proposed system, a 3-D-point-cloud feature called viewpoint and shape feature histogram (VISH) is proposed to extract the 3-D points from a human and arrange them into a tree structure that preserves the global and local properties of the 3-D points. A nonparametric action-mixture model (AMM) is then proposed to model human poses using low-dimensional manifolds based on the concept of distributed representation. Since human poses estimated using the proposed AMM are in discrete space, a kinematic model is added in the last stage of the proposed system to model the spatial relationship of body parts in continuous space to reduce the quantization error in the AMM. The proposed system has been trained and evaluated on a benchmark dataset. Computer-simulation results showed that the overall error and standard deviation of the proposed 3-D-point-cloud system were reduced compared with some existing approaches without action classification.",
"title": ""
},
{
"docid": "63d19f75bc0baee93404488a1d307a32",
"text": "Mitochondria can unfold importing precursor proteins by unraveling them from their N-termini. However, how this unraveling is induced is not known. Two candidates for the unfolding activity are the electrical potential across the inner mitochondrial membrane and mitochondrial Hsp70 in the matrix. Here, we propose that many precursors are unfolded by the electrical potential acting directly on positively charged amino acid side chains in the targeting sequences. Only precursor proteins with targeting sequences that are long enough to reach the matrix at the initial interaction with the import machinery are unfolded by mitochondrial Hsp70, and this unfolding occurs even in the absence of a membrane potential.",
"title": ""
},
{
"docid": "cd55fc3fafe2618f743a845d89c3a796",
"text": "According to the notation proposed by the International Federation for the Theory of Mechanisms and Machines IFToMM (Ionescu, 2003); a parallel manipulator is a mechanism where the motion of the end-effector, namely the moving or movable platform, is controlled by means of at least two kinematic chains. If each kinematic chain, also known popularly as limb or leg, has a single active joint, then the mechanism is called a fully-parallel mechanism, in which clearly the nominal degree of freedom equates the number of limbs. Tire-testing machines (Gough & Whitehall, 1962) and flight simulators (Stewart, 1965), appear to be the first transcendental applications of these complex mechanisms. Parallel manipulators, and in general mechanisms with parallel kinematic architectures, due to benefits --over their serial counterparts-such as higher stiffness and accuracy, have found interesting applications such as walking machines, pointing devices, multi-axis machine tools, micro manipulators, and so on. The pioneering contributions of Gough and Stewart, mainly the theoretical paper of Stewart (1965), influenced strongly the development of parallel manipulators giving birth to an intensive research field. In that way, recently several parallel mechanisms for industrial purposes have been constructed using the, now, classical hexapod as a base mechanism: Octahedral Hexapod HOH-600 (Ingersoll), HEXAPODE CMW 300 (CMW), Cosmo Center PM-600 (Okuma), F-200i (FANUC) and so on. On the other hand one cannot ignore that this kind of parallel kinematic structures have a limited and complex-shaped workspace. Furthermore, their rotation and position capabilities are highly coupled and therefore the control and calibration of them are rather complicated. It is well known that many industrial applications do not require the six degrees of freedom of a parallel manipulator. Thus in order to simplify the kinematics, mechanical assembly and control of parallel manipulators, an interesting trend is the development of the so called defective parallel manipulators, in other words, spatial parallel manipulators with fewer than six degrees of freedom. Special mention deserves the Delta robot, invented by Clavel (1991); which proved that parallel robotic manipulators are an excellent option for industrial applications where the accuracy and stiffness are fundamental characteristics. Consider for instance that the Adept Quattro robot, an application of the Delta robot, developed by Francois Pierrot in collaboration with Fatronik (Int. patent appl. WO/2006/087399), has a",
"title": ""
},
{
"docid": "8ddb7c62f032fb07116e7847e69b51d1",
"text": "Software requirements are the foundations from which quality is measured. Measurement enables to improve the software process; assist in planning, tracking and controlling the software project and assess the quality of the software thus produced. Quality issues such as accuracy, security and performance are often crucial to the success of a software system. Quality should be maintained from starting phase of software development. Requirements management, play an important role in maintaining quality of software. A project can deliver the right solution on time and within budget with proper requirements management. Software quality can be maintained by checking quality attributes in requirements document. Requirements metrics such as volatility, traceability, size and completeness are used to measure requirements engineering phase of software development lifecycle. Manual measurement is expensive, time consuming and prone to error therefore automated tools should be used. Automated requirements tools are helpful in measuring requirements metrics. The aim of this paper is to study, analyze requirements metrics and automated requirements tools, which will help in choosing right metrics to measure software development based on the evaluation of Automated Requirements Tools",
"title": ""
},
{
"docid": "32b96d4d23a03b1828f71496e017193e",
"text": "Camera-based lane detection algorithms are one of the key enablers for many semi-autonomous and fullyautonomous systems, ranging from lane keep assist to level-5 automated vehicles. Positioning a vehicle between lane boundaries is the core navigational aspect of a self-driving car. Even though this should be trivial, given the clarity of lane markings on most standard roadway systems, the process is typically mired with tedious pre-processing and computational effort. We present an approach to estimate lane positions directly using a deep neural network that operates on images from laterally-mounted down-facing cameras. To create a diverse training set, we present a method to generate semi-artificial images. Besides the ability to distinguish whether there is a lane-marker present or not, the network is able to estimate the position of a lane marker with sub-centimeter accuracy at an average of 100 frames/s on an embedded automotive platform, requiring no pre-or post-processing. This system can be used not only to estimate lane position for navigation, but also provide an efficient way to validate the robustness of driver-assist features which depend on lane information.",
"title": ""
}
] | scidocsrr |
c159c06516b5e75bd8ea00789a521c43 | A new posterolateral approach without fibula osteotomy for the treatment of tibial plateau fractures. | [
{
"docid": "f91007844639e431b2f332f6f32df33b",
"text": "Moore type II Entire Condyle fractures of the tibia plateau represent a rare and highly unstable fracture pattern that usually results from high impact traumas. Specific recommendations regarding the surgical treatment of these fractures are sparse. We present a series of Moore type II fractures treated by open reduction and internal fixation through a direct dorsal approach. Five patients (3 females, 2 males) with Entire Condyle fractures were retrospectively analyzed after a mean follow-up period of 39 months (range 12–61 months). Patient mean age at the time of operation was 36 years (range 26–43 years). Follow-up included clinical and radiological examination. Furthermore, all patient finished a SF36 and Lysholm knee score questionnaire. Average range of motion was 127/0/1° with all patients reaching full extension at the time of last follow up. Patients reached a mean Lysholm score of 81.2 points (range 61–100 points) and an average SF36 of 82.36 points (range 53.75–98.88 points). One patient sustained deep wound infection after elective implant removal 1 year after the initial surgery. Overall all patients were highly satisfied with the postoperative result. The direct dorsal approach to the tibial plateau represents an adequate method to enable direct fracture exposure, open reduction, and internal fixation in posterior shearing medial Entire Condyle fractures and is especially valuable when also the dorso-lateral plateau is depressed.",
"title": ""
}
] | [
{
"docid": "ea5a07b07631248a2f5cbee80420924d",
"text": "Coordinating fleets of autonomous, non-holonomic vehicles is paramount to many industrial applications. While there exists solutions to efficiently calculate trajectories for individual vehicles, an effective methodology to coordinate their motions and to avoid deadlocks is still missing. Decoupled approaches, where motions are calculated independently for each vehicle and then centrally coordinated for execution, have the means to identify deadlocks, but not to solve all of them. We present a novel approach that overcomes this limitation and that can be used to complement the deficiencies of decoupled solutions with centralized coordination. Here, we formally define an extension of the framework of lattice-based motion planning to multi-robot systems and we validate it experimentally. Our approach can jointly plan for multiple vehicles and it generates kinematically feasible and deadlock-free motions.",
"title": ""
},
{
"docid": "34667babdde26a81244c7e1c929e7653",
"text": "Noise level estimation is crucial in many image processing applications, such as blind image denoising. In this paper, we propose a novel noise level estimation approach for natural images by jointly exploiting the piecewise stationarity and a regular property of the kurtosis in bandpass domains. We design a $K$ -means-based algorithm to adaptively partition an image into a series of non-overlapping regions, each of whose clean versions is assumed to be associated with a constant, but unknown kurtosis throughout scales. The noise level estimation is then cast into a problem to optimally fit this new kurtosis model. In addition, we develop a rectification scheme to further reduce the estimation bias through noise injection mechanism. Extensive experimental results show that our method can reliably estimate the noise level for a variety of noise types, and outperforms some state-of-the-art techniques, especially for non-Gaussian noises.",
"title": ""
},
{
"docid": "260c12152d9bd38bd0fde005e0394e17",
"text": "On the initiative of the World Health Organization, two meetings on the Standardization of Reporting Results of Cancer Treatment have been held with representatives and members of several organizations. Recommendations have been developed for standardized approaches to the recording of baseline data relating to the patient, the tumor, laboratory and radiologic data, the reporting of treatment, grading of acute and subacute toxicity, reporting of response, recurrence and disease-free interval, and reporting results of therapy. These recommendations, already endorsed by a number of organizations, are proposed for international acceptance and use to make it possible for investigators to compare validly their results with those of others.",
"title": ""
},
{
"docid": "c8d690eb4dd2831f28106c3cfca4552c",
"text": "While ASCII art is a worldwide popular art form, automatic generating structure-based ASCII art from natural photographs remains challenging. The major challenge lies on extracting the perception-sensitive structure from the natural photographs so that a more concise ASCII art reproduction can be produced based on the structure. However, due to excessive amount of texture in natural photos, extracting perception-sensitive structure is not easy, especially when the structure may be weak and within the texture region. Besides, to fit different target text resolutions, the amount of the extracted structure should also be controllable. To tackle these challenges, we introduce a visual perception mechanism of non-classical receptive field modulation (non-CRF modulation) from physiological findings to this ASCII art application, and propose a new model of non-CRF modulation which can better separate the weak structure from the crowded texture, and also better control the scale of texture suppression. Thanks to our non-CRF model, more sensible ASCII art reproduction can be obtained. In addition, to produce more visually appealing ASCII arts, we propose a novel optimization scheme to obtain the optimal placement of proportional-font characters. We apply our method on a rich variety of images, and visually appealing ASCII art can be obtained in all cases.",
"title": ""
},
{
"docid": "eec886c9c758e90acc4b97df85057b61",
"text": "A full-term male foal born in a farm holidays in Maremma (Tuscany, Italy) was euthanized shortly after birth due to the presence of several malformations. The rostral maxilla and the nasal septum were deviated to the right (wry nose), and a severe cervico-thoracic scoliosis and anus atresia were evident. Necropsy revealed ileum atresia and agenesis of the right kidney. The brain showed an incomplete separation of the hemispheres of the rostral third of the forebrain and the olfactory bulbs and tracts were absent (olfactory aplasia). A diagnosis of semilobar holoprosencephaly (HPE) was achieved. This is the first case of semilobar HPE associated with other organ anomalies in horses.",
"title": ""
},
{
"docid": "83709dc50533c28221d89490bcb3a5aa",
"text": "Hyperspectral image classification has attracted extensive research efforts in the recent decades. The main difficulty lies in the few labeled samples versus high dimensional features. The spectral-spatial classification method using Markov random field (MRF) has been shown to perform well in improving the classification performance. Moreover, active learning (AL), which iteratively selects the most informative unlabeled samples and enlarges the training set, has been widely studied and proven useful in remotely sensed data. In this paper, we focus on the combination of MRF and AL in the classification of hyperspectral images, and a new MRF model-based AL (MRF-AL) framework is proposed. In the proposed framework, the unlabeled samples whose predicted results vary before and after the MRF processing step is considered as uncertain. In this way, subset is firstly extracted from the entire unlabeled set, and AL process is then performed on the samples in the subset. Moreover, hybrid AL methods which combine the MRF-AL framework with either the passive random selection method or the existing AL methods are investigated. To evaluate and compare the proposed AL approaches with other state-of-the-art techniques, experiments were conducted on two hyperspectral data sets. Results demonstrated the effectiveness of the hybrid AL methods, as well as the advantage of the proposed MRF-AL framework.",
"title": ""
},
{
"docid": "a436bdc20d63dcf4f0647005bb3314a7",
"text": "The purpose of this study is to evaluate the feasibility of the integration of concept maps and tablet PCs in anti-phishing education for enhancing students’ learning motivation and achievement. The subjects were 155 students from grades 8 and 9. They were divided into an experimental group (77 students) and a control group (78 students). To begin with, the two groups received identical anti-phishing training: the teacher explained the concept of anti-phishing and asked the students questions; the students then used tablet PCs for polling and answering the teachers’ questions. Afterwards, the two groups performed different group activities: the experimental group was divided into smaller groups, which used tablet PCs to draw concept maps; the control group was also divided into groups which completed worksheets. The study found that the use of concept maps on tablet PCs during the anti-phishing education significantly enhanced the students’ learning motivation when their initial motivation was already high. For learners with low initial motivation or prior knowledge, the use of worksheets could increase their posttest achievement and motivation. This study therefore proposes that motivation and achievement in teaching the anti-phishing concept can be effectively enhanced if the curriculum is designed based on the students’ learning preferences or prior knowledge, in conjunction with the integration of mature and accessible technological media into the learning activities. The findings can also serve as a reference for anti-phishing educators and researchers.",
"title": ""
},
{
"docid": "cc3f821bd9617d31a8b303c4982e605f",
"text": "Body composition in older adults can be assessed using simple, convenient but less precise anthropometric methods to assess (regional) body fat and skeletal muscle, or more elaborate, precise and costly methods such as computed tomography and magnetic resonance imaging. Body weight and body fat percentage generally increase with aging due to an accumulation of body fat and a decline in skeletal muscle mass. Body weight and fatness plateau at age 75–80 years, followed by a gradual decline. However, individual weight patterns may differ and the periods of weight loss and weight (re)gain common in old age may affect body composition. Body fat redistributes with aging, with decreasing subcutaneous and appendicular fat and increasing visceral and ectopic fat. Skeletal muscle mass declines with aging, a process called sarcopenia. Obesity in old age is associated with a higher risk of mobility limitations, disability and mortality. A higher waist circumference and more visceral fat increase these risks, independent of overall body fatness, as do involuntary weight loss and weight cycling. The role of low skeletal muscle mass in the development of mobility limitations and disability remains controversial, but it is much smaller than the role of high body fat. Low muscle mass does not seem to increase mortality risk in older adults.",
"title": ""
},
{
"docid": "b134cf07e01f1568d127880777492770",
"text": "This paper addresses the problem of recovering 3D nonrigid shape models from image sequences. For example, given a video recording of a talking person, we would like to estimate a 3D model of the lips and the full face and its internal modes of variation. Many solutions that recover 3D shape from 2D image sequences have been proposed; these so-called structure-from-motion techniques usually assume that the 3D object is rigid. For example, Tomasi and Kanades’ factorization technique is based on a rigid shape matrix, which produces a tracking matrix of rank 3 under orthographic projection. We propose a novel technique based on a non-rigid model, where the 3D shape in each frame is a linear combination of a set of basis shapes. Under this model, the tracking matrix is of higher rank, and can be factored in a three-step process to yield pose, configuration and shape. To the best of our knowledge, this is the first model free approach that can recover from single-view video sequences nonrigid shape models. We demonstrate this new algorithm on several video sequences. We were able to recover 3D non-rigid human face and animal models with high accuracy.",
"title": ""
},
{
"docid": "87eb54a981fca96475b73b3dfa99b224",
"text": "Cost-Sensitive Learning is a type of learning in data mining that takes the misclassification costs (and possibly other types of cost) into consideration. The goal of this type of learning is to minimize the total cost. The key difference between cost-sensitive learning and cost-insensitive learning is that cost-sensitive learning treats the different misclassifications differently. Costinsensitive learning does not take the misclassification costs into consideration. The goal of this type of learning is to pursue a high accuracy of classifying examples into a set of known classes.",
"title": ""
},
{
"docid": "7f7e7f7ddcbb4d98270c0ba50a3f7a25",
"text": "Workflow management systems are traditionally centralized, creating a single point of failure and a scalability bottleneck. In collaboration with Cybermation, Inc., we have developed a content-based publish/subscribe platform, called PADRES, which is a distributed middleware platform with features inspired by the requirements of workflow management and business process execution. These features constitute original additions to publish/subscribe systems and include an expressive subscription language, composite subscription processing, a rulebased matching and routing mechanism, historc, query-based data access, and the support for the decentralized execution of business process specified in XML. PADRES constitutes the basis for the next generation of enterprise management systems developed by Cybermation, Inc., including business process automation, monitoring, and execution applications.",
"title": ""
},
{
"docid": "914b38c4a5911a481bf9088f75adef30",
"text": "This paper presents a mixed-integer LP approach to the solution of the long-term transmission expansion planning problem. In general, this problem is large-scale, mixed-integer, nonlinear, and nonconvex. We derive a mixed-integer linear formulation that considers losses and guarantees convergence to optimality using existing optimization software. The proposed model is applied to Garver’s 6-bus system, the IEEE Reliability Test System, and a realistic Brazilian system. Simulation results show the accuracy as well as the efficiency of the proposed solution technique.",
"title": ""
},
{
"docid": "dae2ef494ca779e701288414e1cbf0ef",
"text": "API example code search is an important applicationin software engineering. Traditional approaches to API codesearch are based on information retrieval. Recent advance inWord2Vec has been applied to support the retrieval of APIexamples. In this work, we perform a preliminary study thatcombining traditional IR with Word2Vec achieves better retrievalaccuracy. More experiments need to be done to study differenttypes of combination among two lines of approaches.",
"title": ""
},
{
"docid": "a2253bf241f7e5f60e889258e4c0f40c",
"text": "BACKGROUND-Software Process Improvement (SPI) is a systematic approach to increase the efficiency and effectiveness of a software development organization and to enhance software products. OBJECTIVE-This paper aims to identify and characterize evaluation strategies and measurements used to assess the impact of different SPI initiatives. METHOD-The systematic literature review includes 148 papers published between 1991 and 2008. The selected papers were classified according to SPI initiative, applied evaluation strategies, and measurement perspectives. Potential confounding factors interfering with the evaluation of the improvement effort were assessed. RESULTS-Seven distinct evaluation strategies were identified, wherein the most common one, “Pre-Post Comparison,” was applied in 49 percent of the inspected papers. Quality was the most measured attribute (62 percent), followed by Cost (41 percent), and Schedule (18 percent). Looking at measurement perspectives, “Project” represents the majority with 66 percent. CONCLUSION-The evaluation validity of SPI initiatives is challenged by the scarce consideration of potential confounding factors, particularly given that “Pre-Post Comparison” was identified as the most common evaluation strategy, and the inaccurate descriptions of the evaluation context. Measurements to assess the short and mid-term impact of SPI initiatives prevail, whereas long-term measurements in terms of customer satisfaction and return on investment tend to be less used.",
"title": ""
},
{
"docid": "e584549afba4c444c32dfe67ee178a84",
"text": "Bayesian networks (BNs) provide a means for representing, displaying, and making available in a usable form the knowledge of experts in a given Weld. In this paper, we look at the performance of an expert constructed BN compared with other machine learning (ML) techniques for predicting the outcome (win, lose, or draw) of matches played by Tottenham Hotspur Football Club. The period under study was 1995–1997 – the expert BN was constructed at the start of that period, based almost exclusively on subjective judgement. Our objective was to determine retrospectively the comparative accuracy of the expert BN compared to some alternative ML models that were built using data from the two-year period. The additional ML techniques considered were: MC4, a decision tree learner; Naive Bayesian learner; Data Driven Bayesian (a BN whose structure and node probability tables are learnt entirely from data); and a K-nearest neighbour learner. The results show that the expert BN is generally superior to the other techniques for this domain in predictive accuracy. The results are even more impressive for BNs given that, in a number of key respects, the study assumptions place them at a disadvantage. For example, we have assumed that the BN prediction is ‘incorrect’ if a BN predicts more than one outcome as equally most likely (whereas, in fact, such a prediction would prove valuable to somebody who could place an ‘each way’ bet on the outcome). Although the expert BN has now long been irrelevant (since it contains variables relating to key players who have retired or left the club) the results here tend to conWrm the excellent potential of BNs when they are built by a reliable domain expert. The ability to provide accurate predictions without requiring much learning data are an obvious bonus in any domain where data are scarce. Moreover, the BN was relatively simple for the expert to build and its structure could be used again in this and similar types of problems. © 2006 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "d8fc5a8bc075343b2e70a9b441ecf6e5",
"text": "With the explosive increase in mobile apps, more and more threats migrate from traditional PC client to mobile device. Compared with traditional Win+Intel alliance in PC, Android+ARM alliance dominates in Mobile Internet, the apps replace the PC client software as the major target of malicious usage. In this paper, to improve the security status of current mobile apps, we propose a methodology to evaluate mobile apps based on cloud computing platform and data mining. We also present a prototype system named MobSafe to identify the mobile app’s virulence or benignancy. Compared with traditional method, such as permission pattern based method, MobSafe combines the dynamic and static analysis methods to comprehensively evaluate an Android app. In the implementation, we adopt Android Security Evaluation Framework (ASEF) and Static Android Analysis Framework (SAAF), the two representative dynamic and static analysis methods, to evaluate the Android apps and estimate the total time needed to evaluate all the apps stored in one mobile app market. Based on the real trace from a commercial mobile app market called AppChina, we can collect the statistics of the number of active Android apps, the average number apps installed in one Android device, and the expanding ratio of mobile apps. As mobile app market serves as the main line of defence against mobile malwares, our evaluation results show that it is practical to use cloud computing platform and data mining to verify all stored apps routinely to filter out malware apps from mobile app markets. As the future work, MobSafe can extensively use machine learning to conduct automotive forensic analysis of mobile apps based on the generated multifaceted data in this stage.",
"title": ""
},
{
"docid": "0056d305c7689d45e7cd9f4b87cac79e",
"text": "A method is presented that uses a vectorial multiscale feature image for wave front propagation between two or more user defined points to retrieve the central axis of tubular objects in digital images. Its implicit scale selection mechanism makes the method more robust to overlap and to the presence of adjacent structures than conventional techniques that propagate a wave front over a scalar image representing the maximum of a range of filters. The method is shown to retain its potential to cope with severe stenoses or imaging artifacts and objects with varying widths in simulated and actual two-dimensional angiographic images.",
"title": ""
},
{
"docid": "844dcf80b2feba89fced99a0f8cbe9bf",
"text": "Communication could potentially be an effective way for multi-agent cooperation. However, information sharing among all agents or in predefined communication architectures that existing methods adopt can be problematic. When there is a large number of agents, agents cannot differentiate valuable information that helps cooperative decision making from globally shared information. Therefore, communication barely helps, and could even impair the learning of multi-agent cooperation. Predefined communication architectures, on the other hand, restrict communication among agents and thus restrain potential cooperation. To tackle these difficulties, in this paper, we propose an attentional communication model that learns when communication is needed and how to integrate shared information for cooperative decision making. Our model leads to efficient and effective communication for large-scale multi-agent cooperation. Empirically, we show the strength of our model in a variety of cooperative scenarios, where agents are able to develop more coordinated and sophisticated strategies than existing methods.",
"title": ""
},
{
"docid": "f4fb632268bbbf76878472183c511b05",
"text": "Mid-way through the 2007 DARPA Urban Challenge, MIT’s autonomous Land Rover LR3 ‘Talos’ and Team Cornell’s autonomous Chevrolet Tahoe ‘Skynet’ collided in a low-speed accident, one of the first well-documented collisions between two full-size autonomous vehicles. This collaborative study between MIT and Cornell examines the root causes of the collision, which are identified in both teams’ system designs. Systems-level descriptions of both autonomous vehicles are given, and additional detail is provided on sub-systems and algorithms implicated in the collision. A brief summary of robot–robot interactions during the race is presented, followed by an in-depth analysis of both robots’ behaviors leading up to and during the Skynet–Talos collision. Data logs from the vehicles are used to show the gulf between autonomous and human-driven vehicle behavior at low speeds and close proximities. Contributing factors are shown to be: (1) difficulties in sensor data association leading to phantom obstacles and an inability to detect slow moving vehicles, (2) failure to anticipate vehicle intent, and (3) an over emphasis on lane constraints versus vehicle proximity in motion planning. Eye contact between human road users is a crucial communications channel for slow-moving close encounters between vehicles. Inter-vehicle communication may play a similar role for autonomous vehicles; however, there are availability and denial-of-service issues to be addressed.",
"title": ""
},
{
"docid": "6ed4d5ae29eef70f5aae76ebed76b8ca",
"text": "Web services that thrive on mining user interaction data such as search engines can currently track clicks and mouse cursor activity on their Web pages. Cursor interaction mining has been shown to assist in user modeling and search result relevance, and is becoming another source of rich information that data scientists and search engineers can tap into. Due to the growing popularity of touch-enabled mobile devices, search systems may turn to tracking touch interactions in place of cursor interactions. However, unlike cursor interactions, touch interactions are difficult to record reliably and their coordinates have not been shown to relate to regions of user interest. A better approach may be to track the viewport coordinates instead, which the user must manipulate to view the content on a mobile device. These recorded viewport coordinates can potentially reveal what regions of the page interest users and to what degree. Using this information, search system can then improve the design of their pages or use this information in click models or learning to rank systems. In this position paper, we discuss some of the challenges faced in mining interaction data for new modes of interaction, and future research directions in this field.",
"title": ""
}
] | scidocsrr |
2f40ac55162bde7a6b103798fdcdb1ac | Robust Top-k Multiclass SVM for Visual Category Recognition | [
{
"docid": "b5347e195b44d5ae6d4674c685398fa3",
"text": "The perceptual recognition of objects is conceptualized to be a process in which the image of the input is segmented at regions of deep concavity into an arrangement of simple geometric components, such as blocks, cylinders, wedges, and cones. The fundamental assumption of the proposed theory, recognition-by-components (RBC), is that a modest set of generalized-cone components, called geons (N £ 36), can be derived from contrasts of five readily detectable properties of edges in a two-dimensional image: curvature, collinearity, symmetry, parallelism, and cotermination. The detection of these properties is generally invariant over viewing position an$ image quality and consequently allows robust object perception when the image is projected from a novel viewpoint or is degraded. RBC thus provides a principled account of the heretofore undecided relation between the classic principles of perceptual organization and pattern recognition: The constraints toward regularization (Pragnanz) characterize not the complete object but the object's components. Representational power derives from an allowance of free combinations of the geons. A Principle of Componential Recovery can account for the major phenomena of object recognition: If an arrangement of two or three geons can be recovered from the input, objects can be quickly recognized even when they are occluded, novel, rotated in depth, or extensively degraded. The results from experiments on the perception of briefly presented pictures by human observers provide empirical support for the theory.",
"title": ""
}
] | [
{
"docid": "28d8ef2f63b0b4f55c60ae06484365d1",
"text": "Social network systems, like last.fm, play a significant role in Web 2.0, containing large amounts of multimedia-enriched data that are enhanced both by explicit user-provided annotations and implicit aggregated feedback describing the personal preferences of each user. It is also a common tendency for these systems to encourage the creation of virtual networks among their users by allowing them to establish bonds of friendship and thus provide a novel and direct medium for the exchange of data.\n We investigate the role of these additional relationships in developing a track recommendation system. Taking into account both the social annotation and friendships inherent in the social graph established among users, items and tags, we created a collaborative recommendation system that effectively adapts to the personal information needs of each user. We adopt the generic framework of Random Walk with Restarts in order to provide with a more natural and efficient way to represent social networks.\n In this work we collected a representative enough portion of the music social network last.fm, capturing explicitly expressed bonds of friendship of the user as well as social tags. We performed a series of comparison experiments between the Random Walk with Restarts model and a user-based collaborative filtering method using the Pearson Correlation similarity. The results show that the graph model system benefits from the additional information embedded in social knowledge. In addition, the graph model outperforms the standard collaborative filtering method.",
"title": ""
},
{
"docid": "eab052e8172c62fec9b532400fe5eeb6",
"text": "An overview on state of the art automotive radar usage is presented and the changing requirements from detection and ranging towards radar based environmental understanding for highly automated and autonomous driving deduced. The traditional segmentation in driving, manoeuvering and parking tasks vanishes at the driver less stage. Situation assessment and trajectory/manoeuver planning need to operate in a more thorough way. Hence, fast situational up-date, motion prediction of all kind of dynamic objects, object dimension, ego-motion estimation, (self)-localisation and more semantic/classification information, which allows to put static and dynamic world into correlation/context with each other is mandatory. All these are new areas for radar signal processing and needs revolutionary new solutions. The article outlines the benefits that make radar essential for autonomous driving and presents recent approaches in radar based environmental perception.",
"title": ""
},
{
"docid": "244df843f56a59f20a2fc1d2293a7b53",
"text": "We propose a new time-release protocol based on the bitcoin protocol and witness encryption. We derive a “public key” from the bitcoin block chain for encryption. The decryption key are the unpredictable information in the future blocks (e.g., transactions, nonces) that will be computed by the bitcoin network. We build this protocol by witness encryption and encrypt with the bitcoin proof-of-work constraints. The novelty of our protocol is that the decryption key will be automatically and publicly available in the bitcoin block chain when the time is due. Witness encryption was originally proposed by Garg, Gentry, Sahai and Waters. It provides a means to encrypt to an instance, x, of an NP language and to decrypt by a witness w that x is in the language. Encoding CNF-SAT in the existing witness encryption schemes generate poly(n · k) group elements in the ciphertext where n is the number of variables and k is the number of clauses of the CNF formula. We design a new witness encryption for CNF-SAT which achieves ciphertext size of 2n + 2k group elements. Our witness encryption is based on an intuitive reduction from SAT to Subset-Sum problem. Our scheme uses the framework of multilinear maps, but it is independent of the implementation details of multilinear maps.",
"title": ""
},
{
"docid": "62d23e00d13903246cc7128fe45adf12",
"text": "The uncomputable parts of thinking (if there are any) can be studied in much the same spirit that Turing (1950) suggested for the study of its computable parts. We can develop precise accounts of cognitive processes that, although they involve more than computing, can still be modelled on the machines we call ‘computers’. In this paper, I want to suggest some ways that this might be done, using ideas from the mathematical theory of uncomputability (or Recursion Theory). And I want to suggest some uses to which the resulting models might be put. (The reader more interested in the models and their uses than the mathematics and its theorems, might want to skim or skip the mathematical parts.)",
"title": ""
},
{
"docid": "5e6990d8f1f81799e2e7fdfe29d14e4d",
"text": "Underwater wireless communications refer to data transmission in unguided water environment through wireless carriers, i.e., radio-frequency (RF) wave, acoustic wave, and optical wave. In comparison to RF and acoustic counterparts, underwater optical wireless communication (UOWC) can provide a much higher transmission bandwidth and much higher data rate. Therefore, we focus, in this paper, on the UOWC that employs optical wave as the transmission carrier. In recent years, many potential applications of UOWC systems have been proposed for environmental monitoring, offshore exploration, disaster precaution, and military operations. However, UOWC systems also suffer from severe absorption and scattering introduced by underwater channels. In order to overcome these technical barriers, several new system design approaches, which are different from the conventional terrestrial free-space optical communication, have been explored in recent years. We provide a comprehensive and exhaustive survey of the state-of-the-art UOWC research in three aspects: 1) channel characterization; 2) modulation; and 3) coding techniques, together with the practical implementations of UOWC.",
"title": ""
},
{
"docid": "f61ea212d71eebf43fd677016ce9770a",
"text": "Learning to drive faithfully in highly stochastic urban settings remains an open problem. To that end, we propose a Multi-task Learning from Demonstration (MTLfD) framework which uses supervised auxiliary task prediction to guide the main task of predicting the driving commands. Our framework involves an end-to-end trainable network for imitating the expert demonstrator’s driving commands. The network intermediately predicts visual affordances and action primitives through direct supervision which provide the aforementioned auxiliary supervised guidance. We demonstrate that such joint learning and supervised guidance facilitates hierarchical task decomposition, assisting the agent to learn faster, achieve better driving performance and increases transparency of the otherwise black-box end-to-end network. We run our experiments to validate the MT-LfD framework in CARLA, an open-source urban driving simulator. We introduce multiple non-player agents in CARLA and induce temporal noise in them for realistic stochasticity.",
"title": ""
},
{
"docid": "1ac0b1971ee476d3343c8746c5f3dc1f",
"text": "OBJECTIVE\nThis work describes the experimental validation of a cardiac simulator for three heart rates (60, 80 and 100 beats per minute), under physiological conditions, as a suitable environment for prosthetic heart valves testing in the mitral or aortic position.\n\n\nMETHODS\nIn the experiment, an aortic bileaflet mechanical valve and a mitral bioprosthesis were employed in the left ventricular model. A test fluid of 47.6% by volume of glycerin solution in water at 36.5ºC was used as blood analogue fluid. A supervisory control and data acquisition system implemented previously in LabVIEW was applied to induce the ventricular operation and to acquire the ventricular signals. The parameters of the left ventricular model operation were based on in vivo and in vitro data. The waves of ventricular and systemic pressures, aortic flow, stroke volume, among others, were acquired while manual adjustments in the arterial impedance model were also established.\n\n\nRESULTS\nThe acquired waves showed good results concerning some in vivo data and requirements from the ISO 5840 standard.\n\n\nCONCLUSION\nThe experimental validation was performed, allowing, in future studies, characterizing the hydrodynamic performance of prosthetic heart valves.",
"title": ""
},
{
"docid": "097414fbbbf19f7b244d4726d5d27f96",
"text": "Touch is both the first sense to develop and a critical means of information acquisition and environmental manipulation. Physical touch experiences may create an ontological scaffold for the development of intrapersonal and interpersonal conceptual and metaphorical knowledge, as well as a springboard for the application of this knowledge. In six experiments, holding heavy or light clipboards, solving rough or smooth puzzles, and touching hard or soft objects nonconsciously influenced impressions and decisions formed about unrelated people and situations. Among other effects, heavy objects made job candidates appear more important, rough objects made social interactions appear more difficult, and hard objects increased rigidity in negotiations. Basic tactile sensations are thus shown to influence higher social cognitive processing in dimension-specific and metaphor-specific ways.",
"title": ""
},
{
"docid": "bd3cc8370fd8669768f62d465f2c5531",
"text": "Cognitive radio technology has been proposed to improve spectrum efficiency by having the cognitive radios act as secondary users to opportunistically access under-utilized frequency bands. Spectrum sensing, as a key enabling functionality in cognitive radio networks, needs to reliably detect signals from licensed primary radios to avoid harmful interference. However, due to the effects of channel fading/shadowing, individual cognitive radios may not be able to reliably detect the existence of a primary radio. In this paper, we propose an optimal linear cooperation framework for spectrum sensing in order to accurately detect the weak primary signal. Within this framework, spectrum sensing is based on the linear combination of local statistics from individual cognitive radios. Our objective is to minimize the interference to the primary radio while meeting the requirement of opportunistic spectrum utilization. We formulate the sensing problem as a nonlinear optimization problem. By exploiting the inherent structures in the problem formulation, we develop efficient algorithms to solve for the optimal solutions. To further reduce the computational complexity and obtain solutions for more general cases, we finally propose a heuristic approach, where we instead optimize a modified deflection coefficient that characterizes the probability distribution function of the global test statistics at the fusion center. Simulation results illustrate significant cooperative gain achieved by the proposed strategies. The insights obtained in this paper are useful for the design of optimal spectrum sensing in cognitive radio networks.",
"title": ""
},
{
"docid": "655a95191700e24c6dcd49b827de4165",
"text": "With the increasing demand for express delivery, a courier needs to deliver many tasks in one day and it's necessary to deliver punctually as the customers expect. At the same time, they want to schedule the delivery tasks to minimize the total time of a courier's one-day delivery, considering the total travel time. However, most of scheduling researches on express delivery focus on inter-city transportation, and they are not suitable for the express delivery to customers in the “last mile”. To solve the issue above, this paper proposes a personalized service for scheduling express delivery, which not only satisfies all the customers' appointment time but also makes the total time minimized. In this service, personalized and accurate travel time estimation is important to guarantee delivery punctuality when delivering shipments. Therefore, the personalized scheduling service is designed to consist of two basic services: (1) personalized travel time estimation service for any path in express delivery using courier trajectories, (2) an express delivery scheduling service considering multiple factors, including customers' appointments, one-day delivery costs, etc., which is based on the accurate travel time estimation provided by the first service. We evaluate our proposed service based on extensive experiments, using GPS trajectories generated by more than 1000 couriers over a period of two months in Beijing. The results demonstrate the effectiveness and efficiency of our method.",
"title": ""
},
{
"docid": "fb812ad6355e10dafff43c3d4487f6a7",
"text": "Image priors are of great importance in image restoration tasks. These problems can be addressed by decomposing the degraded image into overlapping patches, treating the patches individually and averaging them back together. Recently, the Expected Patch Log Likelihood (EPLL) method has been introduced, arguing that the chosen model should be enforced on the final reconstructed image patches. In the context of a Gaussian Mixture Model (GMM), this idea has been shown to lead to state-of-the-art results in image denoising and debluring. In this paper we combine the EPLL with a sparse-representation prior. Our derivation leads to a close yet extended variant of the popular K-SVD image denoising algorithm, where in order to effectively maximize the EPLL the denoising process should be iterated. This concept lies at the core of the K-SVD formulation, but has not been addressed before due the need to set different denoising thresholds in the successive sparse coding stages. We present a method that intrinsically determines these thresholds in order to improve the image estimate. Our results show a notable improvement over K-SVD in image denoising and inpainting, achieving comparable performance to that of EPLL with GMM in denoising.",
"title": ""
},
{
"docid": "e91ace8f6eaf2fc2101bd715c7a43f1d",
"text": "We demonstrated the in vivo feasibility of using focused ultrasound (FUS) to transiently modulate (through either stimulation or suppression) the function of regional brain tissue in rabbits. FUS was delivered in a train of pulses at low acoustic energy, far below the cavitation threshold, to the animal's somatomotor and visual areas, as guided by anatomical and functional information from magnetic resonance imaging (MRI). The temporary alterations in the brain function affected by the sonication were characterized by both electrophysiological recordings and functional brain mapping achieved through the use of functional MRI (fMRI). The modulatory effects were bimodal, whereby the brain activity could either be stimulated or selectively suppressed. Histological analysis of the excised brain tissue after the sonication demonstrated that the FUS did not elicit any tissue damages. Unlike transcranial magnetic stimulation, FUS can be applied to deep structures in the brain with greater spatial precision. Transient modulation of brain function using image-guided and anatomically-targeted FUS would enable the investigation of functional connectivity between brain regions and will eventually lead to a better understanding of localized brain functions. It is anticipated that the use of this technology will have an impact on brain research and may offer novel therapeutic interventions in various neurological conditions and psychiatric disorders.",
"title": ""
},
{
"docid": "ee240969f586cb9f8ef51a192daa0526",
"text": "The location of mobile terminals has received considerable attention in the recent years. The performance of mobile location systems is limited by errors primarily caused by nonline-of-sight (NLOS) propagation conditions. We investigate the NLOS error identification and correction techniques for mobile user location in wireless cellular systems. Based on how much a priori knowledge of the NLOS error is available, two NLOS mitigation algorithms are proposed. Simulation results demonstrate that with the prior information database, the location estimate can be obtained with good accuracy even in severe NLOS propagation conditions.",
"title": ""
},
{
"docid": "62353069a6c29c4f8bccce46b257e19e",
"text": "Abstract -This paper presents the overall concept of Road Power Generator (RPG) that deals with the mechanism to generate electricity from the wasted kinetic energy of vehicles. It contains a flip-plate, gear mechanism, flywheel, and finally a generator is coupled at the end so that the rotational motion of the flywheel is used to rotate the shaft of the generator, thus producing electricity. RPG does not require any piezoelectric material. It is novel concept based on flip-plate mechanism. The project can be installed at highways where a huge number of vehicles pass daily, thus resulting in more amount of electricity generated. This generated electricity can be utilized for different types of applications and mainly for street lighting, on road battery charging units and many domestic applications like air conditioning, lighting, heating, etc.",
"title": ""
},
{
"docid": "422d13161686a051be201fb17bece304",
"text": "Due to the growing demand on electricity, how to improve the efficiency of equipment in a thermal power plant has become one of the critical issues. Reports indicate that efficiency and availability are heavily dependant upon high reliability and maintainability. Recently, the concept of e-maintenance has been introduced to reduce the cost of maintenance. In e-maintenance systems, the intelligent fault detection system plays a crucial role for identifying failures. Data mining techniques are at the core of such intelligent systems and can greatly influence their performance. Applying these techniques to fault detection makes it possible to shorten shutdown maintenance and thus increase the capacity utilization rates of equipment. Therefore, this work proposes a support vector machines (SVM) based model which integrates a dimension reduction scheme to analyze the failures of turbines in thermal power facilities. Finally, a real case from a thermal power plant is provided to evaluate the effectiveness of the proposed SVM based model. Experimental results show that SVM outperforms linear discriminant analysis (LDA) and back-propagation neural networks (BPN) in classification performance. 2010 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "95c634481e8c4468483ef447676098b6",
"text": "The success of cancer immunotherapy has generated tremendous interest in identifying new immunotherapeutic targets. To date, the majority of therapies have focussed on stimulating the adaptive immune system to attack cancer, including agents targeting CTLA-4 and the PD-1/PD-L1 axis. However, macrophages and other myeloid immune cells offer much promise as effectors of cancer immunotherapy. The CD47/signal regulatory protein alpha (SIRPα) axis is a critical regulator of myeloid cell activation and serves a broader role as a myeloid-specific immune checkpoint. CD47 is highly expressed on many different types of cancer, and it transduces inhibitory signals through SIRPα on macrophages and other myeloid cells. In a diverse range of preclinical models, therapies that block the CD47/SIRPα axis stimulate phagocytosis of cancer cells in vitro and anti-tumour immune responses in vivo. A number of therapeutics that target the CD47/SIRPα axis are under preclinical and clinical investigation. These include anti-CD47 antibodies, engineered receptor decoys, anti-SIRPα antibodies and bispecific agents. These therapeutics differ in their pharmacodynamic, pharmacokinetic and toxicological properties. Clinical trials are underway for both solid and haematologic malignancies using anti-CD47 antibodies and recombinant SIRPα proteins. Since the CD47/SIRPα axis also limits the efficacy of tumour-opsonising antibodies, additional trials will examine their potential synergy with agents such as rituximab, cetuximab and trastuzumab. Phagocytosis in response to CD47/SIRPα-blocking agents results in antigen uptake and presentation, thereby linking the innate and adaptive immune systems. CD47/SIRPα blocking therapies may therefore synergise with immune checkpoint inhibitors that target the adaptive immune system. As a critical regulator of macrophage phagocytosis and activation, the potential applications of CD47/SIRPα blocking therapies extend beyond human cancer. They may be useful for the treatment of infectious disease, conditioning for stem cell transplant, and many other clinical indications.",
"title": ""
},
{
"docid": "dc48b68a202974f62ae63d1d14002adf",
"text": "In the speed sensorless vector control system, the amended method of estimating the rotor speed about model reference adaptive system (MRAS) based on radial basis function neural network (RBFN) for PMSM sensorless vector control system was presented. Based on the PI regulator, the radial basis function neural network which is more prominent learning efficiency and performance is combined with MRAS. The reference model and the adjust model are the PMSM itself and the PMSM current, respectively. The proposed scheme only needs the error signal between q axis estimated current and q axis actual current. Then estimated speed is gained by using RBFN regulator which adjusted error signal. Comparing study of simulation and experimental results between this novel sensorless scheme and the scheme in reference literature, the results show that this novel method is capable of precise estimating the rotor position and speed under the condition of high or low speed. It also possesses good performance of static and dynamic.",
"title": ""
},
{
"docid": "fe116849575dd91759a6c1ef7ed239f3",
"text": "We have recently seen many successful applications of recurrent neural networks (RNNs) on electronic medical records (EMRs), which contain histories of patients' diagnoses, medications, and other various events, in order to predict the current and future states of patients. Despite the strong performance of RNNs, it is often challenging for users to understand why the model makes a particular prediction. Such black-box nature of RNNs can impede its wide adoption in clinical practice. Furthermore, we have no established methods to interactively leverage users' domain expertise and prior knowledge as inputs for steering the model. Therefore, our design study aims to provide a visual analytics solution to increase interpretability and interactivity of RNNs via a joint effort of medical experts, artificial intelligence scientists, and visual analytics researchers. Following the iterative design process between the experts, we design, implement, and evaluate a visual analytics tool called RetainVis, which couples a newly improved, interpretable, and interactive RNN-based model called RetainEX and visualizations for users' exploration of EMR data in the context of prediction tasks. Our study shows the effective use of RetainVis for gaining insights into how individual medical codes contribute to making risk predictions, using EMRs of patients with heart failure and cataract symptoms. Our study also demonstrates how we made substantial changes to the state-of-the-art RNN model called RETAIN in order to make use of temporal information and increase interactivity. This study will provide a useful guideline for researchers that aim to design an interpretable and interactive visual analytics tool for RNNs.",
"title": ""
},
{
"docid": "243391e804c06f8a53af906b31d4b99a",
"text": "As key decisions are often made based on information contained in a database, it is important for the database to be as complete and correct as possible. For this reason, many data cleaning tools have been developed to automatically resolve inconsistencies in databases. However, data cleaning tools provide only best-effort results and usually cannot eradicate all errors that may exist in a database. Even more importantly, existing data cleaning tools do not typically address the problem of determining what information is missing from a database.\n To overcome the limitations of existing data cleaning techniques, we present QOCO, a novel query-oriented system for cleaning data with oracles. Under this framework, incorrect (resp. missing) tuples are removed from (added to) the result of a query through edits that are applied to the underlying database, where the edits are derived by interacting with domain experts which we model as oracle crowds. We show that the problem of determining minimal interactions with oracle crowds to derive database edits for removing (adding) incorrect (missing) tuples to the result of a query is NP-hard in general and present heuristic algorithms that interact with oracle crowds. Finally, we implement our algorithms in our prototype system QOCO and show that it is effective and efficient through a comprehensive suite of experiments.",
"title": ""
},
{
"docid": "da088acea8b1d2dc68b238e671649f4f",
"text": "Water is a naturally circulating resource that is constantly recharged. Therefore, even though the stocks of water in natural and artificial reservoirs are helpful to increase the available water resources for human society, the flow of water should be the main focus in water resources assessments. The climate system puts an upper limit on the circulation rate of available renewable freshwater resources (RFWR). Although current global withdrawals are well below the upper limit, more than two billion people live in highly water-stressed areas because of the uneven distribution of RFWR in time and space. Climate change is expected to accelerate water cycles and thereby increase the available RFWR. This would slow down the increase of people living under water stress; however, changes in seasonal patterns and increasing probability of extreme events may offset this effect. Reducing current vulnerability will be the first step to prepare for such anticipated changes.",
"title": ""
}
] | scidocsrr |
9207aeff4ea2fda134fa9d19d9cf821c | A Switchable Iris Bandpass Filter Using RF MEMS Switchable Planar Resonators | [
{
"docid": "9c3aed8548b61b70ae35be98050fb4bf",
"text": "In the present work, a widely tunable high-Q air filled evanescent cavity bandpass filter is created in an LTCC substrate. A low loss Rogers Duroidreg flexible substrate forms the top of the filter, acting as a membrane for a tunable parasitic capacitor that allows variable frequency loading. A commercially available piezoelectric actuator is mounted on the Duroidreg substrate for precise electrical tuning of the filter center frequency. The filter is tuned from 2.71 to 4.03 GHz, with insertion losses ranging from 1.3 to 2.4 dB across the range for a 2.5% bandwidth filter. Secondarily, an exceptionally narrow band filter is fabricated to show the potential for using the actuators to fine tune the response to compensate for fabrication tolerances. While most traditional machining techniques would not allow for such narrow band filtering, the high-Q and the sensitive tuning combine to allow for near channel selection for a front-end receiver. For further analysis, a widely tunable resonator is also created with a 100% tunable frequency range, from 2.3 to 4.6 GHz. The resonator analysis gives unloaded quality factors ranging from 360 to 700 with a maximum frequency loading of 89%. This technique shows a lot of promise for tunable RF filtering applications.",
"title": ""
}
] | [
{
"docid": "2a68a1bcdd4b764f7981c76199f96cc9",
"text": "In this paper we present a method for logo detection in image collections and streams. The proposed method is based on features, extracted from reference logo images and test images. Extracted features are combined with respect to their similarity in their descriptors' space and afterwards with respect to their geometric consistency on the image plane. The contribution of this paper is a novel method for fast geometric consistency test. Using state of the art fast matching methods, it produces pairs of similar features between the test image and the reference logo image and then examines which pairs are forming a consistent geometry on both the test and the reference logo image. It is noteworthy that the proposed method is scale, rotation and translation invariant. The key advantage of the proposed method is that it exhibits a much lower computational complexity and better performance than the state of the art methods. Experimental results on large scale datasets are presented to support these statements.",
"title": ""
},
{
"docid": "2706e8ed981478ad4cb2db060b3d9844",
"text": "We develop a technique for transfer learning in machine comprehension (MC) using a novel two-stage synthesis network (SynNet). Given a high-performing MC model in one domain, our technique aims to answer questions about documents in another domain, where we use no labeled data of question-answer pairs. Using the proposed SynNet with a pretrained model on the SQuAD dataset, we achieve an F1 measure of 46.6% on the challenging NewsQA dataset, approaching performance of in-domain models (F1 measure of 50.0%) and outperforming the out-ofdomain baseline by 7.6%, without use of provided annotations.1",
"title": ""
},
{
"docid": "3377b2b7712a6e2b99008f150d30e54c",
"text": "This chapter studies how and to what extent it is possible to design for well-being. Well-being is rarely considered in the design literature, and is rarely linked to technology and design in philosophy and the social sciences. A few approaches to design for well-being have recently materialized, however, including Emotional Design, capability approaches, positive psychology approaches, and Life-Based Design. In this chapter, the notion of well-being will first be clarified and contemporary theories of and approaches to wellbeing will be reviewed. Next, theoretical and methodological issues in design for well-being will be discussed that must be accounted for in any successful approach. This will be followed by a review of the abovementioned four approaches to design for well-being. The chapter will conclude by considering open issues and future work in the development of design approaches for well-being.",
"title": ""
},
{
"docid": "efaec92cf49a0bf2a48f9c50742d0199",
"text": "This paper presents an integrated inverted stripline-like tunable transmission line structure where the propagation velocity can be modified as the characteristic impedance remains constant. As one application of this structure, a mm-wave phase shifter for massive hybrid MIMO applications is implemented in a 45 nm CMOS SOI process. Measurement results at 45 GHz of this phase shifter demonstrate a 79° phase shift tuning range, worst-case insertion loss of 3.3 dB, and effective area of 0.072 mm2. Compared to an on-chip reference phase shifter implemented based on a previously-reported tunable transmission line structure, this work achieves 35% less area occupied and 1.0 dB less insertion loss, while maintaining approximately the same phase shift tuning range.",
"title": ""
},
{
"docid": "c21ec977b0b90738c6ea8c403b6e39d5",
"text": "Brain correlates comparing pleasant and unpleasant states induced by three dissimilar masterpiece excerpts were obtained. Related emotional reactions to the music were studied using Principal Component Analysis of validated reports, fMRI, and EEG coherent activity. A piano selection by Bach and a symphonic passage from Mahler widely differing in musical features were used as pleasing pieces. A segment by Prodromidès was used as an unpleasing stimulus. Ten consecutive 30 s segments of each piece alternating with random static noise were played to 19 non-musician volunteers for a total of 30 min of auditory stimulation. Both brain approaches identified a left cortical network involved with pleasant feelings (Bach and Mahler vs. Prodromidès) including the left primary auditory area, posterior temporal, inferior parietal and prefrontal regions. While the primary auditory zone may provide an early affective quality, left cognitive areas may contribute to pleasant feelings when melodic sequences follow expected rules. In contrast, unpleasant emotions (Prodromidès vs. Bach and Mahler) involved the activation of the right frontopolar and paralimbic areas. Left activation with pleasant and right with unpleasant musical feelings is consistent with right supremacy in novel situations and left in predictable processes. When all musical excerpts were jointly compared to noise, in addition to bilateral auditory activation, the left temporal pole, inferior frontal gyrus, and frontopolar area were activated suggesting that cognitive and language processes were recruited in general responses to music. Sensory and cognitive integration seems required for musical emotion.",
"title": ""
},
{
"docid": "cf248f6d767072a4569e31e49918dea1",
"text": "We describe resources aimed at increasing the usability of the semantic representations utilized within the DELPH-IN (Deep Linguistic Processing with HPSG) consortium. We concentrate in particular on the Dependency Minimal Recursion Semantics (DMRS) formalism, a graph-based representation designed for compositional semantic representation with deep grammars. Our main focus is on English, and specifically English Resource Semantics (ERS) as used in the English Resource Grammar. We first give an introduction to ERS and DMRS and a brief overview of some existing resources and then describe in detail a new repository which has been developed to simplify the use of ERS/DMRS. We explain a number of operations on DMRS graphs which our repository supports, with sketches of the algorithms, and illustrate how these operations can be exploited in application building. We believe that this work will aid researchers to exploit the rich and effective but complex DELPH-IN resources.",
"title": ""
},
{
"docid": "24b2cedc9512566e44f9fd7e1acf8a85",
"text": "This paper presents an alternative visual authentication scheme with two secure layers for desktops or laptops. The first layer is a recognition-based scheme that addresses human factors for protection against bots by recognizing a Captcha and images with specific patterns. The second layer uses a clicked based Cued-Recall graphical password scheme for authentication, it also exploits emotions perceived by humans and use them as decision factor. The proposed authentication system is effective against brute-force, online guessing and relay attacks. We believe that the perception of security is enhaced using human emotions as main decision factor. The proposed scheme usability was tested using the Computer System Usability Questionnaires, results showed that it is highly usable and could improve the security level on ATM machines.",
"title": ""
},
{
"docid": "a8da4af316b4012ee5c735eb064911d3",
"text": "Social networking (SNW) services such as Facebook and MySpace are growing exponentially. While many people are spending an increasing amount of their time on the services, others use them minimally or discontinue use after a short period of time. This research is asking the question: What are the salient factors influencing individuals to continue using and extending the range of SNW services after their initial acceptance? This research recognizes that long-term viability and the eventual success of these services depends on continued usage rather than initial acceptance, and usage continuance of SNW services at the individual level is fundamental to the survival of many social technology-empowered businesses and organizations. We look to the Expectation-Confirmation Model of information systems (IS) continuance and a series of social theories as the underlying theoretical foundations. We develop the Usage Continuance Model of SNW Services to investigate continued usage behavior and enduring impacts of SNW services. The model proposes that usage continuance behavior of SNW services is a joint function of individuals’ perceptions of (1) intrinsic flow experience of SNW services, expected instrumentality of SNW services in managing and improving informational and relational values, and social influence as the outgrowth of social capital, and (2) costs in informational risks and participative efforts of time and social exchanges. The joint function is moderated by individuals’ use history of SNW features. The proposed model and hypotheses offer a comprehensive framework for empirically extending the IS continuance research to the ever pervasive SNW context.",
"title": ""
},
{
"docid": "d3a97a5015e27e0b2a043dc03d20228b",
"text": "The exponential growth of cyber-physical systems (CPS), especially in safety-critical applications, has imposed several security threats (like manipulation of communication channels, hardware components, and associated software) due to complex cybernetics and the interaction among (independent) CPS domains. These security threats have led to the development of different static as well as adaptive detection and protection techniques on different layers of the CPS stack, e.g., cross-layer and intra-layer connectivity. This paper first presents a brief overview of various security threats at different CPS layers, their respective threat models and associated research challenges to develop robust security measures. Moreover, this paper provides a brief yet comprehensive survey of the state-of-the-art static and adaptive techniques for detection and prevention, and their inherent limitations, i.e., incapability to capture the dormant or uncertainty-based runtime security attacks. To address these challenges, this paper also discusses the intelligent security measures (using machine learning-based techniques) against several characterized attacks on different layers of the CPS stack. Furthermore, we identify the associated challenges and open research problems in developing intelligent security measures for CPS. Towards the end, we provide an overview of our project on security for smart CPS along with important analyses.",
"title": ""
},
{
"docid": "a27cd7eb3fa8d7cb4bdba6aed5beb592",
"text": "The goal of this chapter is to give a brief introduction to the modern view of information security as a prerequisite to organizing an open, free, and democratic information society. It introduces design principles and, on a high level of abstraction, the technical terminology needed to discuss economic incentives around the provision of information security. It is targeted to people with a background in economics or social sciences. Readers trained in computer science or engineering disciplines may recognize most topics. They are invited to simply skip this chapter, though many computer scientists may find the presentation and emphasis different from what they have previously encountered.",
"title": ""
},
{
"docid": "fa7645dd9623879d7442c944ca3fac3c",
"text": "Human communication involves conveying messages both through verbal and non-verbal channels (facial expression, gestures, prosody, etc.). Nonetheless, the task of learning these patterns for a computer by combining cues from multiple modalities is challenging because it requires effective representation of the signals and also taking into consideration the complex interactions between them. From the machine learning perspective this presents a two-fold challenge: a) Modeling the intermodal variations and dependencies; b) Representing the data using an apt number of features, such that the necessary patterns are captured but at the same time allaying concerns such as over-fitting. In this work we attempt to address these aspects of multimodal recognition, in the context of recognizing two essential speaker traits, namely passion and credibility of online movie reviewers. We propose a novel ensemble classification approach that combines two different perspectives on classifying multimodal data. Each of these perspectives attempts to independently address the two-fold challenge. In the first, we combine the features from multiple modalities but assume inter-modality conditional independence. In the other one, we explicitly capture the correlation between the modalities but in a space of few dimensions and explore a novel clustering based kernel similarity approach for recognition. Additionally, this work investigates a recent technique for encoding text data that captures semantic similarity of verbal content and preserves word-ordering. The experimental results on a recent public dataset shows significant improvement of our approach over multiple baselines. Finally, we also analyze the most discriminative elements of a speaker's non-verbal behavior that contribute to his/her perceived credibility/passionateness.",
"title": ""
},
{
"docid": "cd6d2ec9d24752e7cc3b716bd8cded1a",
"text": "Metric learning has attracted increasing attention due to its critical role in image analysis and classification. Conventional metric learning always assumes that the training and test data are sampled from the same or similar distribution. However, to build an effective distance metric, we need abundant supervised knowledge (i.e., side/label information), which is generally inaccessible in practice, because of the expensive labeling cost. In this paper, we develop a robust transfer metric learning (RTML) framework to effectively assist the unlabeled target learning by transferring the knowledge from the well-labeled source domain. Specifically, RTML exploits knowledge transfer to mitigate the domain shift in two directions, i.e., sample space and feature space. In the sample space, domain-wise and class-wise adaption schemes are adopted to bridge the gap of marginal and conditional distribution disparities across two domains. In the feature space, our metric is built in a marginalized denoising fashion and low-rank constraint, which make it more robust to tackle noisy data in reality. Furthermore, we design an explicit rank constraint regularizer to replace the rank minimization NP-hard problem to guide the low-rank metric learning. Experimental results on several standard benchmarks demonstrate the effectiveness of our proposed RTML by comparing it with the state-of-the-art transfer learning and metric learning algorithms.",
"title": ""
},
{
"docid": "1fa6abe84be2f1240b4c5c077bbbb171",
"text": "The largest eigenvalue of the adjacency matrix of a network (referred to as the spectral radius) is an important metric in its own right. Further, for several models of epidemic spread on networks (e.g., the ‘flu-like’ SIS model), it has been shown that an epidemic dies out quickly if the spectral radius of the graph is below a certain threshold that depends on the model parameters. This motivates a strategy to control epidemic spread by reducing the spectral radius of the underlying network. In this paper, we develop a suite of provable approximation algorithms for reducing the spectral radius by removing the minimum cost set of edges (modeling quarantining) or nodes (modeling vaccinations), with different time and quality tradeoffs. Our main algorithm, GreedyWalk, is based on the idea of hitting closed walks of a given length, and gives an O(log n)-approximation, where n denotes the number of nodes; it also performs much better in practice compared to all prior heuristics proposed for this problem. We further present a novel sparsification method to improve its running time. In addition, we give a new primal-dual based algorithm with an even better approximation guarantee (O(log n)), albeit with slower running time. We also give lower bounds on the worst-case performance of some of the popular heuristics. Finally we demonstrate the applicability of our algorithms and the properties of our solutions via extensive experiments on multiple synthetic and real networks.",
"title": ""
},
{
"docid": "babdf14e560236f5fcc8a827357514e5",
"text": "Email: zettammanal@gmail.com Abstract: The NP-hard (complete) team orienteering problem is a particular vehicle routing problem with the aim of maximizing the profits gained from visiting control points without exceeding a travel cost limit. The team orienteering problem has a number of applications in several fields such as athlete recruiting, technician routing and tourist trip. Therefore, solving optimally the team orienteering problem would play a major role in logistic management. In this study, a novel randomized population constructive heuristic is introduced. This heuristic constructs a diversified initial population for population-based metaheuristics. The heuristics proved its efficiency. Indeed, experiments conducted on the well-known benchmarks of the team orienteering problem show that the initial population constructed by the presented heuristic wraps the best-known solution for 131 benchmarks and good solutions for a great number of benchmarks.",
"title": ""
},
{
"docid": "f8f8b43d78b361caee537d043e385b16",
"text": "We demonstrate the value of collecting semantic parse labels for knowledge base question answering. In particular, (1) unlike previous studies on small-scale datasets, we show that learning from labeled semantic parses significantly improves overall performance, resulting in absolute 5 point gain compared to learning from answers, (2) we show that with an appropriate user interface, one can obtain semantic parses with high accuracy and at a cost comparable or lower than obtaining just answers, and (3) we have created and shared the largest semantic-parse labeled dataset to date in order to advance research in question answering.",
"title": ""
},
{
"docid": "f114d71135c251db044e2560cb2a2204",
"text": "Crowd density estimation is important for intelligent video surveillance. Many methods based on texture features have been proposed to solve this problem. Most of the existing algorithms only estimate crowd density on the whole image while ignore crowd density in local region. In this paper, we propose a novel texture descriptor based on Local Binary Pattern (LBP) Co-occurrence Matrix (LBPCM) for crowd density estimation. LBPCM is constructed from several overlapping cells in an image block, which is going to be classified into different crowd density levels. LBPCM describes both the statistical properties and the spatial information of LBP and thus makes full use of LBP for local texture features. Additionally, we both extract LBPCM on gray and gradient images to improve the performance of crowd density estimation. Finally, the sliding window technique is used to detect the potential crowded area. The experimental results show the proposed method has better performance than other texture based crowd density estimation methods.",
"title": ""
},
{
"docid": "e5f363097c310d08b34015790aa5111e",
"text": "A substrate integrated magneto-electric (ME) dipole antenna with metasurface is proposed for the 5G/WiMAX/WLAN/X-band MIMO applications. In order to provide a low profile, the radiated electric dipoles integrated with shorted wall are used in the multi-layer substrates at different heights. Owing to the coordination of the metasurface and the ME dipole, dual wideband and high gain have been obtained. As a result of the 3-D hexagonal structure, good envelope correlation coefficient and mean effective gain performance can be achieved by the MIMO antenna system. The antenna element can provide an impedance bandwidth of 66.7% (3.1–6.2 GHz) with a stable gain of 7.6±1.5 dBi and an impedance bandwidth of 20.3% (7.1–8.7 GHz) with a gain of 7.4±1.8 dBi for the lower and upper bands, respectively. The overall size of the element is <inline-formula> <tex-math notation=\"LaTeX\">$60\\times 60\\times 7.92$ </tex-math></inline-formula> mm<sup>3</sup>. Hence, it is well-suited for the future 5G/WiMAX/WLAN/X-band MIMO communications.",
"title": ""
},
{
"docid": "951213cd4412570709fb34f437a05c72",
"text": "In this paper, we present directional skip-gram (DSG), a simple but effective enhancement of the skip-gram model by explicitly distinguishing left and right context in word prediction. In doing so, a direction vector is introduced for each word, whose embedding is thus learned by not only word co-occurrence patterns in its context, but also the directions of its contextual words. Theoretical and empirical studies on complexity illustrate that our model can be trained as efficient as the original skip-gram model, when compared to other extensions of the skip-gram model. Experimental results show that our model outperforms others on different datasets in semantic (word similarity measurement) and syntactic (partof-speech tagging) evaluations, respectively.",
"title": ""
},
{
"docid": "efeffb457003012eb8db209fe025294c",
"text": "TV white space refers to TV channels that are not used by any licensed services at a particular location and at a particular time. To exploit this unused TVWS spectrum for improved spectrum efficiency, regulatory agencies have begun developing regulations to permit its use this TVWS by unlicensed wireless devices as long as they do not interfere with any licensed services. In the future many heterogeneous, and independently operated, wireless networks may utilize the TVWS. Coexistence between these networks is essential in order to provide a high level of QoS to end users. Consequently, the IEEE 802 LAN/MAN standards committee has approved the P802.19.1 standardization project to specify radio-technology-independent methods for coexistence among dissimilar or independently operated wireless devices and networks. In this article we provide a detailed overview of the regulatory status of TVWS in the United States and Europe, analyze the coexistence problem in TVWS, and summarize existing coexisting mechanisms to improve coexistence in TVWS. The main focus of the article is the IEEE P802.19.1 standardization project, including its requirements and system design, and the major technical challenges ahead.",
"title": ""
},
{
"docid": "db3b14f6298771b44506a17da57c21ae",
"text": "Virtuosos are human beings who exhibit exceptional performance in their field of activity. In particular, virtuosos are interesting for creativity studies because they are exceptional problem solvers. However, virtuosity is an under-studied field of human behaviour. Little is known about the processes involved to become a virtuoso, and in how they distinguish themselves from normal performers. Virtuosos exist in virtually all domains of human activities, and we focus in this chapter on the specific case of virtuosity in jazz improvisation. We first introduce some facts about virtuosos coming from physiology, and then focus on the case of jazz. Automatic generation of improvisation has long been a subject of study for computer science, and many techniques have been proposed to generate music improvisation in various genres. The jazz style in particular abounds with programs that create improvisations of a reasonable level. However, no approach so far exhibits virtuosolevel performance. We describe an architecture for the generation of virtuoso bebop phrases which integrates novel music generation mechanisms in a principled way. We argue that modelling such outstanding phenomena can contribute substantially to the understanding of creativity in humans and machines. 5.1 Virtuosos as Exceptional Humans 5.1.1 Virtuosity in Art There is no precise definition of virtuosity, but only a commonly accepted view that virtuosos are human beings that excel in their practice to the point of exhibiting exceptional performance. Virtuosity exists in virtually all forms of human activity. In painting, several artists use virtuosity as a means to attract the attention of their audience. Felice Varini paints on urban spaces in such a way that there is a unique viewpoint from which a spectator sees the painting as a perfect geometrical figure. The F. Pachet ( ) Sony CSL-Paris, 6, rue Amyot, 75005 Paris, France e-mail: pachet@csl.sony.fr J. McCormack, M. d’Inverno (eds.), Computers and Creativity, DOI 10.1007/978-3-642-31727-9_5, © Springer-Verlag Berlin Heidelberg 2012 115",
"title": ""
}
] | scidocsrr |
55e2dc25b7119ad55fec5cb1fee9e87f | Co-analysis of RAS Log and Job Log on Blue Gene/P | [
{
"docid": "f910996af5983cf121b7912080c927d6",
"text": "In large-scale networked computing systems, component failures become norms instead of exceptions. Failure prediction is a crucial technique for self-managing resource burdens. Failure events in coalition systems exhibit strong correlations in time and space domain. In this paper, we develop a spherical covariance model with an adjustable timescale parameter to quantify the temporal correlation and a stochastic model to describe spatial correlation. We further utilize the information of application allocation to discover more correlations among failure instances. We cluster failure events based on their correlations and predict their future occurrences. We implemented a failure prediction framework, called PREdictor of Failure Events Correlated Temporal-Spatially (hPREFECTs), which explores correlations among failures and forecasts the time-between-failure of future instances. We evaluate the performance of hPREFECTs in both offline prediction of failure by using the Los Alamos HPC traces and online prediction in an institute-wide clusters coalition environment. Experimental results show the system achieves more than 76% accuracy in offline prediction and more than 70% accuracy in online prediction during the time from May 2006 to April 2007.",
"title": ""
}
] | [
{
"docid": "4ce6063786afa258d8ae982c7f17a8b1",
"text": "This paper proposes a hybrid phase-shift-controlled three-level (TL) and LLC dc-dc converter. The TL dc-dc converter and LLC dc-dc converter have their own transformers. Compared with conventional half-bridge TL dc-dc converters, the proposed one has no additional switch at the primary side of the transformer, where the TL converter shares the lagging switches with the LLC converter. At the secondary side of the transformers, the TL and LLC converters are connected by an active switch. With the aid of the LLC converter, the zero voltage switching (ZVS) of the lagging switches can be achieved easily even under light load conditions. Wide ZVS range for all the switches can be ensured. Both the circulating current at the primary side and the output filter inductance are reduced. Furthermore, the efficiency of the converter is improved dramatically. The features of the proposed converter are analyzed, and the design guidelines are given in the paper. Finally, the performance of the converter is verified by a 1-kW experimental prototype.",
"title": ""
},
{
"docid": "2274f3d3dc25bec4b86988615d421f10",
"text": "Sepsis is a dangerous condition that is a leading cause of patient mortality. Treating sepsis is highly challenging, because individual patients respond very differently to medical interventions and there is no universally agreed-upon treatment for sepsis. In this work, we explore the use of continuous state-space model-based reinforcement learning (RL) to discover high-quality treatment policies for sepsis patients. Our quantitative evaluation reveals that by blending the treatment strategy discovered with RL with what clinicians follow, we can obtain improved policies, potentially allowing for better medical treatment for sepsis.",
"title": ""
},
{
"docid": "688bacdee25152e1de6bcc5005b75d9a",
"text": "Data Mining provides powerful techniques for various fields including education. The research in the educational field is rapidly increasing due to the massive amount of students’ data which can be used to discover valuable pattern pertaining students’ learning behaviour. This paper proposes a framework for predicting students’ academic performance of first year bachelor students in Computer Science course. The data were collected from 8 year period intakes from July 2006/2007 until July 2013/2014 that contains the students’ demographics, previous academic records, and family background information. Decision Tree, Naïve Bayes, and Rule Based classification techniques are applied to the students’ data in order to produce the best students’ academic performance prediction model. The experiment result shows the Rule Based is a best model among the other techniques by receiving the highest accuracy value of 71.3%. The extracted knowledge from prediction model will be used to identify and profile the student to determine the students’ level of success in the first semester.",
"title": ""
},
{
"docid": "8c0f20061bd09b328748d256d5ece7cc",
"text": "Recognition is graduating from labs to real-world applications. While it is encouraging to see its potential being tapped, it brings forth a fundamental challenge to the vision researcher: scalability. How can we learn a model for any concept that exhaustively covers all its appearance variations, while requiring minimal or no human supervision for compiling the vocabulary of visual variance, gathering the training images and annotations, and learning the models? In this paper, we introduce a fully-automated approach for learning extensive models for a wide range of variations (e.g. actions, interactions, attributes and beyond) within any concept. Our approach leverages vast resources of online books to discover the vocabulary of variance, and intertwines the data collection and modeling steps to alleviate the need for explicit human supervision in training the models. Our approach organizes the visual knowledge about a concept in a convenient and useful way, enabling a variety of applications across vision and NLP. Our online system has been queried by users to learn models for several interesting concepts including breakfast, Gandhi, beautiful, etc. To date, our system has models available for over 50, 000 variations within 150 concepts, and has annotated more than 10 million images with bounding boxes.",
"title": ""
},
{
"docid": "d7d66f89e5f5f2d6507e0939933b3a17",
"text": "The discarded clam shell waste, fossil and edible oil as biolubricant feedstocks create environmental impacts and food chain dilemma, thus this work aims to circumvent these issues by using activated saltwater clam shell waste (SCSW) as solid catalyst for conversion of Jatropha curcas oil as non-edible sources to ester biolubricant. The characterization of solid catalyst was done by Differential Thermal Analysis-Thermo Gravimetric Analysis (DTATGA), X-Ray Fluorescence (XRF), X-Ray Diffraction (XRD), Brunauer-Emmett-Teller (BET), Field Emission Scanning Electron Microscopy (FESEM) and Fourier Transformed Infrared Spectroscopy (FTIR) analysis. The calcined catalyst was used in the transesterification of Jatropha oil to methyl ester as the first step, and the second stage was involved the reaction of Jatropha methyl ester (JME) with trimethylolpropane (TMP) based on the various process parameters. The formated biolubricant was analyzed using the capillary column (DB-5HT) equipped Gas Chromatography (GC). The conversion results of Jatropha oil to ester biolubricant can be found nearly 96.66%, and the maximum distribution composition mainly contains 72.3% of triester (TE). Keywords—Conversion, ester biolubricant, Jatropha curcas oil, solid catalyst.",
"title": ""
},
{
"docid": "00e5c92435378e4fdcee5f9fa58271b5",
"text": "Because the position transducers commonly used (optical encoders and electromagnetic resolvers) do not inherently produce a true, instantaneous velocity measurement, some signal processing techniques are generally used to estimate the velocity at each sample instant. This estimated signal is then used as the velocity feedback signal for the velocity loop control. An analysis is presented of the limitations of such approaches, and a technique which optimally estimates the velocity at each sample instant is presented. The method is shown to offer a significant improvement in command-driven systems and to reduce the effect of quantized angular resolution which limits the ultimate performance of all digital servo drives. The noise reduction is especially relevant for AC servo drives due to the high current loop bandwidths required for their correct operation. The method demonstrates improved measurement performance over a classical DC tachometer.<<ETX>>",
"title": ""
},
{
"docid": "b61c9f69a2fffcf2c3753e51a3bbfa14",
"text": "..............................................................................................................ix 1 Interoperability .............................................................................................1 1.",
"title": ""
},
{
"docid": "0660dc780eda869aabc1f856ec3f193f",
"text": "This paper provides a study of the smart grid projects realised in Europe and presents their technological solutions with a focus on smart metering Low Voltage (LV) applications. Special attention is given to the telecommunications technologies used. For this purpose, we present the telecommunication technologies chosen by several European utilities for the accomplishment of their smart meter national roll-outs. Further on, a study is performed based on the European Smart Grid Projects, highlighting their technological options. The range of the projects analysed covers the ones including smart metering implementation as well as those in which smart metering applications play a significant role in the overall project success. The survey reveals that various topics are directly or indirectly linked to smart metering applications, like smart home/building, energy management, grid monitoring and integration of Renewable Energy Sources (RES). Therefore, the technological options that lie behind such projects are pointed out. For reasons of completeness, we also present the main characteristics of the telecommunication technologies that are found to be used in practice for the LV grid.",
"title": ""
},
{
"docid": "e6a97c3365e16d77642a84f0a80863e2",
"text": "The current statuses and future promises of the Internet of Things (IoT), Internet of Everything (IoE) and Internet of Nano-Things (IoNT) are extensively reviewed and a summarized survey is presented. The analysis clearly distinguishes between IoT and IoE, which are wrongly considered to be the same by many commentators. After evaluating the current trends of advancement in the fields of IoT, IoE and IoNT, this paper identifies the 21 most significant current and future challenges as well as scenarios for the possible future expansion of their applications. Despite possible negative aspects of these developments, there are grounds for general optimism about the coming technologies. Certainly, many tedious tasks can be taken over by IoT devices. However, the dangers of criminal and other nefarious activities, plus those of hardware and software errors, pose major challenges that are a priority for further research. Major specific priority issues for research are identified.",
"title": ""
},
{
"docid": "4a3f7e89874c76f62aa97ef6a114d574",
"text": "A robust approach to solving linear optimization problems with uncertain data was proposed in the early 1970s and has recently been extensively studied and extended. Under this approach, we are willing to accept a suboptimal solution for the nominal values of the data in order to ensure that the solution remains feasible and near optimal when the data changes. A concern with such an approach is that it might be too conservative. In this paper, we propose an approach that attempts to make this trade-off more attractive; that is, we investigate ways to decrease what we call the price of robustness. In particular, we flexibly adjust the level of conservatism of the robust solutions in terms of probabilistic bounds of constraint violations. An attractive aspect of our method is that the new robust formulation is also a linear optimization problem. Thus we naturally extend our methods to discrete optimization problems in a tractable way. We report numerical results for a portfolio optimization problem, a knapsack problem, and a problem from the Net Lib library.",
"title": ""
},
{
"docid": "30a6a3df784c2a8cc69a1bd75ad1998b",
"text": "Traditional stock market prediction approaches commonly utilize the historical price-related data of the stocks to forecast their future trends. As the Web information grows, recently some works try to explore financial news to improve the prediction. Effective indicators, e.g., the events related to the stocks and the people’s sentiments towards the market and stocks, have been proved to play important roles in the stocks’ volatility, and are extracted to feed into the prediction models for improving the prediction accuracy. However, a major limitation of previous methods is that the indicators are obtained from only a single source whose reliability might be low, or from several data sources but their interactions and correlations among the multi-sourced data are largely ignored. In this work, we extract the events from Web news and the users’ sentiments from social media, and investigate their joint impacts on the stock price movements via a coupled matrix and tensor factorization framework. Specifically, a tensor is firstly constructed to fuse heterogeneous data and capture the intrinsic ∗Corresponding author Email addresses: zhangx@bupt.edu.cn (Xi Zhang), 2011213120@bupt.edu.cn (Yunjia Zhang), szwang@nuaa.edu.cn (Senzhang Wang), yaoyuntao@bupt.edu.cn (Yuntao Yao), fangbx@bupt.edu.cn (Binxing Fang), psyu@uic.edu (Philip S. Yu) Preprint submitted to Journal of LTEX Templates September 2, 2018 ar X iv :1 80 1. 00 58 8v 1 [ cs .S I] 2 J an 2 01 8 relations among the events and the investors’ sentiments. Due to the sparsity of the tensor, two auxiliary matrices, the stock quantitative feature matrix and the stock correlation matrix, are constructed and incorporated to assist the tensor decomposition. The intuition behind is that stocks that are highly correlated with each other tend to be affected by the same event. Thus, instead of conducting each stock prediction task separately and independently, we predict multiple correlated stocks simultaneously through their commonalities, which are enabled via sharing the collaboratively factorized low rank matrices between matrices and the tensor. Evaluations on the China A-share stock data and the HK stock data in the year 2015 demonstrate the effectiveness of the proposed model.",
"title": ""
},
{
"docid": "741078742178d09f911ef9633befeb9b",
"text": "We introduce a novel kernel for comparing two text documents. The kernel is an inner product in the feature space consisting of all subsequences of length k. A subsequence is any ordered sequence of k characters occurring in the text though not necessarily contiguously. The subsequences are weighted by an exponentially decaying factor of their full length in the text, hence emphasising those occurrences which are close to contiguous. A direct computation of this feature vector would involve a prohibitive amount of computation even for modest values of k, since the dimension of the feature space grows exponentially with k. The paper describes how despite this fact the inner product can be efficiently evaluated by a dynamic programming technique. A preliminary experimental comparison of the performance of the kernel compared with a standard word feature space kernel [4] is made showing encouraging results.",
"title": ""
},
{
"docid": "caf866341ad9f74b1ac1dc8572f6e95c",
"text": "One important but often overlooked aspect of human contexts of ubiquitous computing environment is human’s emotional status. And, there are no realistic and robust humancentric contents services so far, because there are few considers about combining context awareness computing with wearable computing for improving suitability of contents to each user’s needs. In this paper, we discuss combining context awareness computing with wearable computing to develop more effective personalized services. And we propose new algorithms to develop efficiently personalized emotion based content service system.",
"title": ""
},
{
"docid": "553a86035f5013595ef61c4c19997d7c",
"text": "This paper proposes a novel self-oscillating, boost-derived (SOBD) dc-dc converter with load regulation. This proposed topology utilizes saturable cores (SCs) to offer self-oscillating and output regulation capabilities. Conventionally, the self-oscillating dc transformer (SODT) type of scheme can be implemented in a very cost-effective manner. The ideal dc transformer provides both input and output currents as pure, ripple-free dc quantities. However, the structure of an SODT-type converter will not provide regulation, and its oscillating frequency will change in accordance with the load. The proposed converter with SCs will allow output-voltage regulation to be accomplished by varying only the control current between the transformers, as occurs in a pulse-width modulation (PWM) converter. A control network that combines PWM schemes with a regenerative function is used for this converter. The optimum duty cycle is implemented to achieve low levels of input- and output-current ripples, which are characteristic of an ideal dc transformer. The oscillating frequency will spontaneously be kept near-constant, regardless of the load, without adding any auxiliary or compensation circuits. The typical voltage waveforms of the transistors are found to be close to quasisquare. The switching surges are well suppressed, and the voltage stress of the component is well clamped. The turn-on/turn-off of the switch is zero-voltage switching (ZVS), and its resonant transition can occur over a wide range of load current levels. A prototype circuit of an SOBD converter shows 86% efficiency at 48-V input, with 12-V, 100-W output, and presents an operating frequency of 100 kHz.",
"title": ""
},
{
"docid": "563183ff51d1a218bf54db6400e25365",
"text": "In this paper wireless communication using white, high brightness LEDs (light emitting diodes) is considered. In particular, the use of OFDM (orthogonal frequency division multiplexing) for intensity modulation is investigated. The high peak-to-average ratio (PAR) in OFDM is usually considered a disadvantage in radio frequency transmission systems due to non-linearities of the power amplifier. It is demonstrated theoretically and by means of an experimental system that the high PAR in OFDM can be exploited constructively in visible light communication to intensity modulate LEDs. It is shown that the theoretical and the experimental results match very closely, and that it is possible to cover a distance of up to one meter using a single LED",
"title": ""
},
{
"docid": "c3bfe9b5231c5f9b4499ad38b6e8eac6",
"text": "As the World Wide Web has increasingly become a necessity in daily life, the acute need to safeguard user privacy and security has become manifestly apparent. After users realized that browser cookies could allow websites to track their actions without permission or notification, many have chosen to reject cookies in order to protect their privacy. However, more recently, methods of fingerprinting a web browser have become an increasingly common practice. In this paper, we classify web browser fingerprinting into four main categories: (1) Browser Specific, (2) Canvas, (3) JavaScript Engine, and (4) Cross-browser. We then summarize the privacy and security implications, discuss commercial fingerprinting techniques, and finally present some detection and prevention methods.",
"title": ""
},
{
"docid": "ff6b4840787027df75873f38fbb311b4",
"text": "Electronic healthcare (eHealth) systems have replaced paper-based medical systems due to the attractive features such as universal accessibility, high accuracy, and low cost. As a major component of eHealth systems, mobile healthcare (mHealth) applies mobile devices, such as smartphones and tablets, to enable patient-to-physician and patient-to-patient communications for better healthcare and quality of life (QoL). Unfortunately, patients' concerns on potential leakage of personal health records (PHRs) is the biggest stumbling block. In current eHealth/mHealth networks, patients' medical records are usually associated with a set of attributes like existing symptoms and undergoing treatments based on the information collected from portable devices. To guarantee the authenticity of those attributes, PHRs should be verifiable. However, due to the linkability between identities and PHRs, existing mHealth systems fail to preserve patient identity privacy while providing medical services. To solve this problem, we propose a decentralized system that leverages users' verifiable attributes to authenticate each other while preserving attribute and identity privacy. Moreover, we design authentication strategies with progressive privacy requirements in different interactions among participating entities. Finally, we have thoroughly evaluated the security and computational overheads for our proposed schemes via extensive simulations and experiments.",
"title": ""
},
{
"docid": "8cbe0ff905a58e575f2d84e4e663a857",
"text": "Mixed reality (MR) technology development is now gaining momentum due to advances in computer vision, sensor fusion, and realistic display technologies. With most of the research and development focused on delivering the promise of MR, there is only barely a few working on the privacy and security implications of this technology. is survey paper aims to put in to light these risks, and to look into the latest security and privacy work on MR. Specically, we list and review the dierent protection approaches that have been proposed to ensure user and data security and privacy in MR. We extend the scope to include work on related technologies such as augmented reality (AR), virtual reality (VR), and human-computer interaction (HCI) as crucial components, if not the origins, of MR, as well as numerous related work from the larger area of mobile devices, wearables, and Internet-of-ings (IoT). We highlight the lack of investigation, implementation, and evaluation of data protection approaches in MR. Further challenges and directions on MR security and privacy are also discussed.",
"title": ""
}
] | scidocsrr |
98be0f0ae00334fb78ef7b1be3995455 | 3DSRnet: Video Super-resolution using 3D Convolutional Neural Networks | [
{
"docid": "8b581e9ae50ed1f1aa1077f741fa4504",
"text": "Recently, several models based on deep neural networks have achieved great success in terms of both reconstruction accuracy and computational performance for single image super-resolution. In these methods, the low resolution (LR) input image is upscaled to the high resolution (HR) space using a single filter, commonly bicubic interpolation, before reconstruction. This means that the super-resolution (SR) operation is performed in HR space. We demonstrate that this is sub-optimal and adds computational complexity. In this paper, we present the first convolutional neural network (CNN) capable of real-time SR of 1080p videos on a single K2 GPU. To achieve this, we propose a novel CNN architecture where the feature maps are extracted in the LR space. In addition, we introduce an efficient sub-pixel convolution layer which learns an array of upscaling filters to upscale the final LR feature maps into the HR output. By doing so, we effectively replace the handcrafted bicubic filter in the SR pipeline with more complex upscaling filters specifically trained for each feature map, whilst also reducing the computational complexity of the overall SR operation. We evaluate the proposed approach using images and videos from publicly available datasets and show that it performs significantly better (+0.15dB on Images and +0.39dB on Videos) and is an order of magnitude faster than previous CNN-based methods.",
"title": ""
}
] | [
{
"docid": "2f20bca0134eb1bd9d65c4791f94ddcc",
"text": "We present an attention-based model for recognizing multiple objects in images. The proposed model is a deep recurrent neural network trained with reinforcement learning to attend to the most relevant regions of the input image. We show that the model learns to both localize and recognize multiple objects despite being given only class labels during training. We evaluate the model on the challenging task of transcribing house number sequences from Google Street View images and show that it is both more accurate than the state-of-the-art convolutional networks and uses fewer parameters and less computation.",
"title": ""
},
{
"docid": "f4166e4121dbd6f6ab209e6d99aac63f",
"text": "In this paper, we propose several novel deep learning methods for object saliency detection based on the powerful convolutional neural networks. In our approach, we use a gradient descent method to iteratively modify an input image based on the pixel-wise gradients to reduce a cost function measuring the class-specific objectness of the image. The pixel-wise gradients can be efficiently computed using the back-propagation algorithm. The discrepancy between the modified image and the original one may be used as a saliency map for the image. Moreover, we have further proposed several new training methods to learn saliency-specific convolutional nets for object saliency detection, in order to leverage the available pixel-wise segmentation information. Our methods are extremely computationally efficient (processing 20-40 images per second in one GPU). In this work, we use the computed saliency maps for image segmentation. Experimental results on two benchmark tasks, namely Microsoft COCO and Pascal VOC 2012, have shown that our proposed methods can generate high-quality salience maps, clearly outperforming many existing methods. In particular, our approaches excel in handling many difficult images, which contain complex background, highly-variable salient objects, multiple objects, and/or very small salient objects.",
"title": ""
},
{
"docid": "e60a6bc2ecbc2abacd9d393e83fbd912",
"text": "A key task in conducting research integration studies is determining what features to account for in the research reports eligible for inclusion. In the course of a methodological project, the authors found a remarkable uniformity in the way findings were produced and presented, no matter what the stated or implied frame of reference or method. They describe a typology of findings, which they developed to bypass the discrepancy between method claims and the actual use of methods, and efforts to ascertain its utility and reliability. The authors propose that the findings in journal reports of qualitative studies in the health domain can be classified on a continuum of data transformation as no finding, topical survey, thematic survey, conceptual/thematic description, or interpretive explanation.",
"title": ""
},
{
"docid": "a5763d4909edb39b421272be2a546e82",
"text": "We summarize all available amphibian and reptile species distribution data from the northeast Mindanao faunal region, including small islands associated with this subcenter of endemic vertebrate biodiversity. Together with all publicly available historical information from biodiversity repositories, we present new data from several major herpetological surveys, including recently conducted inventories on four major mountains of northeast Mindanao, and adjacent islands of Camiguin Sur, Dinagat, and Siargao. We present species accounts for all taxa, comment on unresolved taxonomic problems, and provide revisions to outdated IUCN conservation status assessments in cases where our new data significantly alter earlier classification status summaries. Together, our comprehensive analysis of this fauna suggests that the greater Mindanao faunal region possesses distinct subcenters of amphibian and reptile species diversity, and that until this area is revisited and its fauna and actually studied, with on-the-ground field work including targeted surveys of species distributions coupled to the study their natural history, our understanding of the diversity and conservation status of southern Philippine herpetological fauna will remain incomplete. Nevertheless, the northeast Mindanao geographical area (Caraga Region) appears to have the highest herpetological species diversity (at least 126 species) of any comparably-sized Philippine faunal subregion.",
"title": ""
},
{
"docid": "100152f120c93e5845bd11eb66d3d46b",
"text": "Mobile Augmented Reality (AR) applications allow the user to interact with virtual objects positioned within the real world via a smart phone, tablet or smart glasses. As the popularity of these applications grows, recent researchers have identified several security and privacy issues pertaining to the collection and storage of sensitive data from device sensors. Location-based AR applications typically not only collect user location data, but transmit it to a remote server in order to download nearby virtual content. In this paper we show that the pattern of network traffic generated by this process alone can be used to infer the user’s location. We demonstrate a side-channel attack against a widely available Mobile AR application inspired by Website Fingerprinting methods. Through the strategic placement of virtual content and prerecording of the network traffic produced by interacting with this content, we are able to identify the location of a user within the target area with an accuracy of 94%.This finding reveals a previously unexplored vulnerability in the implementation of Mobile AR applications and we offer several recommendations to mitigate this threat.",
"title": ""
},
{
"docid": "d912931af094b91634e2c194e5372c1e",
"text": "Threats from social engineering can cause organisations severe damage if they are not considered and managed. In order to understand how to manage those threats, it is important to examine reasons why organisational employees fall victim to social engineering. In this paper, the objective is to understand security behaviours in practice by investigating factors that may cause an individual to comply with a request posed by a perpetrator. In order to attain this objective, we collect data through a scenario-based survey and conduct phishing experiments in three organisations. The results from the experiment reveal that the degree of target information in an attack increases the likelihood that an organisational employee fall victim to an actual attack. Further, an individual’s trust and risk behaviour significantly affects the actual behaviour during the phishing experiment. Computer experience at work, helpfulness and gender (females tend to be less susceptible to a generic attack than men), has a significant correlation with behaviour reported by respondents in the scenario-based survey. No correlation between the performance in the scenario-based survey and experiment was found. We argue that the result does not imply that one or the other method should be ruled out as they have both advantages and disadvantages which should be considered in the context of collecting data in the critical domain of information security. Discussions of the findings, implications and recommendations for future research are further provided.",
"title": ""
},
{
"docid": "5ca8d0ad48ff44e0659f916af41a7efc",
"text": "Automatic retinal vessel segmentation is a fundamental step in the diagnosis of eye-related diseases, in which both thick vessels and thin vessels are important features for symptom detection. All existing deep learning models attempt to segment both types of vessels simultaneously by using a unified pixelwise loss which treats all vessel pixels with equal importance. Due to the highly imbalanced ratio between thick vessels and thin vessels (namely the majority of vessel pixels belong to thick vessels), the pixel-wise loss would be dominantly guided by thick vessels and relatively little influence comes from thin vessels, often leading to low segmentation accuracy for thin vessels. To address the imbalance problem, in this paper, we explore to segment thick vessels and thin vessels separately by proposing a three-stage deep learning model. The vessel segmentation task is divided into three stages, namely thick vessel segmentation, thin vessel segmentation and vessel fusion. As better discriminative features could be learned for separate segmentation of thick vessels and thin vessels, this process minimizes the negative influence caused by their highly imbalanced ratio. The final vessel fusion stage refines the results by further identifying non-vessel pixels and improving the overall vessel thickness consistency. The experiments on public datasets DRIVE, STARE and CHASE DB1 clearly demonstrate that the proposed threestage deep learning model outperforms the current state-of-theart vessel segmentation methods.",
"title": ""
},
{
"docid": "618496f6e0b1da51e1e2c81d72c4a6f1",
"text": "Paid employment within clinical setting, such as externships for undergraduate student, are used locally and globally to better prepare and retain new graduates for actual practice and facilitate their transition into becoming registered nurses. However, the influence of paid employment on the post-registration experience of such nurses remains unclear. Through the use of narrative inquiry, this study explores how the experience of pre-registration paid employment shapes the post-registration experience of newly graduated registered nurses. Repeated individual interviews were conducted with 18 new graduates, and focus group interviews were conducted with 11 preceptors and 10 stakeholders recruited from 8 public hospitals in Hong Kong. The data were subjected to narrative and paradigmatic analyses. Taken-for-granted assumptions about the knowledge and performance of graduates who worked in the same unit for their undergraduate paid work experience were uncovered. These assumptions affected the quantity and quality of support and time that other senior nurses provided to these graduates for their further development into competent nurses and patient advocates, which could have implications for patient safety. It is our hope that this narrative inquiry will heighten awareness of taken-for-granted assumptions, so as to help graduates transition to their new role and provide quality patient care.",
"title": ""
},
{
"docid": "4d3ed5dd5d4f08c9ddd6c9b8032a77fd",
"text": "The purpose of this study was to clarify the efficacy of stress radiography (stress X-P), ultrasonography (US), and magnetic resonance (MR) imaging in the detection of the anterior talofibular ligament (ATFL) injury. Thirty-four patients with ankle sprain were involved. In all patients, Stress X-P, US, MR imaging, and arthroscopy were performed. The arthroscopic results were considered to be the gold standard. The imaging results were compared with the arthroscopic results, and the accuracy calculated. Arthroscopic findings showed ATFL injury in 30 out of 34 cases. The diagnosis of ATFL injury with stress X-P, US, MR imaging were made with an accuracy of 67, 91 and 97%. US and MR imaging demonstrated the same location of the injury as arthroscopy in 63 and 93%. We have clarified the diagnostic value of stress X-P, US, and MR imaging in diagnosis of ATFL injury. We obtained satisfactory results with US and MR imaging.",
"title": ""
},
{
"docid": "a74a063dfe2be9fbf0769277785c7e53",
"text": "There has been considerable interest in improving the capability to identify communities within large collections of social networking data. However, many of the existing algorithms will compartment an actor (node) into a single group, ignoring the fact that in real-world situations people tend to belong concurrently to multiple groups. Our work focuses on the ability to find overlapping communities by aggregating the community perspectives of friendship groups, derived from egonets. We will demonstrate that our algorithm not only finds overlapping communities, but additionally helps identify key members, which bind communities together. Additionally, we will highlight the parallel feature of the algorithm as a means of improving runtime performance.",
"title": ""
},
{
"docid": "8015f5668df95f83e353550d54eac4da",
"text": "Counterfeit currency is a burning question throughout the world. The counterfeiters are becoming harder to track down because of their rapid adoption of and adaptation with highly advanced technology. One of the most effective methods to stop counterfeiting can be the widespread use of counterfeit detection tools/software that are easily available and are efficient in terms of cost, reliability and accuracy. This paper presents a core software system to build a robust automated counterfeit currency detection tool for Bangladeshi bank notes. The software detects fake currency by extracting existing features of banknotes such as micro-printing, optically variable ink (OVI), water-mark, iridescent ink, security thread and ultraviolet lines using OCR (Optical Character recognition), Contour Analysis, Face Recognition, Speeded UP Robust Features (SURF) and Canny Edge & Hough transformation algorithm of OpenCV. The success rate of this software can be measured in terms of accuracy and speed. This paper also focuses on the pros and cons of implementation details that may degrade the performance of image processing based paper currency authentication systems.",
"title": ""
},
{
"docid": "2a8f2e8e4897f03c89d9e8a6bf8270f3",
"text": "BACKGROUND\nThe aging of the population is an inexorable change that challenges governments and societies in every developed country. Based on clinical and empirical data, social isolation is found to be prevalent among elderly people, and it has negative consequences on the elderly's psychological and physical health. Targeting social isolation has become a focus area for policy and practice. Evidence indicates that contemporary information and communication technologies (ICT) have the potential to prevent or reduce the social isolation of elderly people via various mechanisms.\n\n\nOBJECTIVE\nThis systematic review explored the effects of ICT interventions on reducing social isolation of the elderly.\n\n\nMETHODS\nRelevant electronic databases (PsycINFO, PubMed, MEDLINE, EBSCO, SSCI, Communication Studies: a SAGE Full-Text Collection, Communication & Mass Media Complete, Association for Computing Machinery (ACM) Digital Library, and IEEE Xplore) were systematically searched using a unified strategy to identify quantitative and qualitative studies on the effectiveness of ICT-mediated social isolation interventions for elderly people published in English between 2002 and 2015. Narrative synthesis was performed to interpret the results of the identified studies, and their quality was also appraised.\n\n\nRESULTS\nTwenty-five publications were included in the review. Four of them were evaluated as rigorous research. Most studies measured the effectiveness of ICT by measuring specific dimensions rather than social isolation in general. ICT use was consistently found to affect social support, social connectedness, and social isolation in general positively. The results for loneliness were inconclusive. Even though most were positive, some studies found a nonsignificant or negative impact. More importantly, the positive effect of ICT use on social connectedness and social support seemed to be short-term and did not last for more than six months after the intervention. The results for self-esteem and control over one's life were consistent but generally nonsignificant. ICT was found to alleviate the elderly's social isolation through four mechanisms: connecting to the outside world, gaining social support, engaging in activities of interests, and boosting self-confidence.\n\n\nCONCLUSIONS\nMore well-designed studies that contain a minimum risk of research bias are needed to draw conclusions on the effectiveness of ICT interventions for elderly people in reducing their perceived social isolation as a multidimensional concept. The results of this review suggest that ICT could be an effective tool to tackle social isolation among the elderly. However, it is not suitable for every senior alike. Future research should identify who among elderly people can most benefit from ICT use in reducing social isolation. Research on other types of ICT (eg, mobile phone-based instant messaging apps) should be conducted to promote understanding and practice of ICT-based social-isolation interventions for elderly people.",
"title": ""
},
{
"docid": "5fa9efcdb3b414b38784bd146f71fa3e",
"text": "Successful fine-grained image classification methods learn subtle details between visually similar (sub-)classes, but the problem becomes significantly more challenging if the details are missing due to low resolution. Encouraged by the recent success of Convolutional Neural Network (CNN) architectures in image classification, we propose a novel resolution-aware deep model which combines convolutional image super-resolution and convolutional fine-grained classification into a single model in an end-to-end manner. Extensive experiments on multiple benchmarks demonstrate that the proposed model consistently performs better than conventional convolutional networks on classifying fine-grained object classes in low-resolution images.",
"title": ""
},
{
"docid": "c41038d0e3cf34e8a1dcba07a86cce9a",
"text": "Alzheimer's disease (AD) is a major neurodegenerative disease and is one of the most common cause of dementia in older adults. Among several factors, neuroinflammation is known to play a critical role in the pathogenesis of chronic neurodegenerative diseases. In particular, studies of brains affected by AD show a clear involvement of several inflammatory pathways. Furthermore, depending on the brain regions affected by the disease, the nature and the effect of inflammation can vary. Here, in order to shed more light on distinct and common features of inflammation in different brain regions affected by AD, we employed a computational approach to analyze gene expression data of six site-specific neuronal populations from AD patients. Our network based computational approach is driven by the concept that a sustained inflammatory environment could result in neurotoxicity leading to the disease. Thus, our method aims to infer intracellular signaling pathways/networks that are likely to be constantly activated or inhibited due to persistent inflammatory conditions. The computational analysis identified several inflammatory mediators, such as tumor necrosis factor alpha (TNF-a)-associated pathway, as key upstream receptors/ligands that are likely to transmit sustained inflammatory signals. Further, the analysis revealed that several inflammatory mediators were mainly region specific with few commonalities across different brain regions. Taken together, our results show that our integrative approach aids identification of inflammation-related signaling pathways that could be responsible for the onset or the progression of AD and can be applied to study other neurodegenerative diseases. Furthermore, such computational approaches can enable the translation of clinical omics data toward the development of novel therapeutic strategies for neurodegenerative diseases.",
"title": ""
},
{
"docid": "98b4703412d1c8ccce22ea6fb05d73bf",
"text": "Clinical evaluation of scapular dyskinesis (SD) aims to identify abnormal scapulothoracic movement, underlying causal factors, and the potential relationship with shoulder symptoms. The literature proposes different methods of dynamic clinical evaluation of SD, but improved reliability and agreement values are needed. The present study aimed to evaluate the intrarater and interrater agreement and reliability of three SD classifications: 1) 4-type classification, 2) Yes/No classification, and 3) scapular dyskinesis test (SDT). Seventy-five young athletes, including 45 men and 30 women, were evaluated. Raters evaluated the SD based on the three methods during one series of 8-10 cycles (at least eight and maximum of ten) of forward flexion and abduction with an external load under the observation of two raters trained to diagnose SD. The evaluation protocol was repeated after 3 h for intrarater analysis. The agreement percentage was calculated by dividing the observed agreement by the total number of observations. Reliability was calculated using Cohen Kappa coefficient, with a 95% confidence interval (CI), defined by Kappa coefficient ±1.96 multiplied by the measurement standard error. The interrater analyses showed an agreement percentage between 80% and 95.9% and an almost perfect reliability (k>0.81) for the three classification methods in all the test conditions, except the 4-type and SDT classification methods, which had substantial reliability (k<0.80) in shoulder abduction. Intrarater analyses showed agreement percentages between 80.7% and 89.3% and substantial reliability (0.67 to 0.81) for both raters in the three classifications. CIs ranged from moderate to almost perfect categories. This indicates that the three SD classification methods investigated in this study showed high reliability values for both intrarater and interrater evaluation throughout a protocol that provided SD evaluation training of raters and included several repetitions of arm movements with external load during a live assessment.",
"title": ""
},
{
"docid": "3b4e704d6685bf7a15974638f3ae3ca9",
"text": "This study presents a design of two-dimensional (2D) discrete cosine transform (DCT) hardware architecture dedicated for High Efficiency Video Coding (HEVC) in field programmable gate array (FPGA) platforms. The proposed methodology efficiently proceeds 2D-DCT computation to fit internal components and characteristics of FPGA resources. A four-stage circuit architecture is developed to implement the proposed methodology. This architecture supports variable size of DCT computation, including 4×4, 8×8, 16×16, and 32×32. The proposed architecture has been implemented in System Verilog and synthesized in various FPGA platforms. Compared with existing related works in literature, this proposed architecture demonstrates significant advantages in hardware cost and performance improvement. The proposed architecture is able to sustain 4K@30fps ultra high definition (UHD) TV real-time encoding applications with a reduction of 31-64% in hardware cost. Keywords—H.265/HEVC, two-dimensional discrete cosine transform (2D-DCT), FPGA platform, hardware architecture",
"title": ""
},
{
"docid": "269e3e965031e92550b9b78d06ed8664",
"text": "While Internet and network technology have been growing rapidly, cyber attack incidents also increase accordingly. The increasing occurrence of network attacks is an important problem to network services. In this paper, we present a network based Intrusion Detection and Prevention System DPS), which can efficiently detect many well-known attack types and can immediately prevent the network system from network attacks. Our approach is simple and efficient and can be used with several machine learning algorithms. We actually implement the IDPS using different machine learning algorithms and test in an online network environment. The experimental results show that our IDPS can distinguish normal network activities from main attack types (Probe and Denial of Service) with high accuracy of detection rate in a few seconds and automatically prevent the victim's computer network from the attacks. In addition, we apply a well-known machine learning technique called C4.5 Decision Tree in our approach to consider unknown or new network attack types. Surprisingly, the supervised Decision Tree technique can work very well, when experiencing with untrained or unknown network attack types.",
"title": ""
},
{
"docid": "7813dc93e6bcda97768d87e80f8efb2b",
"text": "The inclusion of transaction costs is an essential element of any realistic portfolio optimization. In this paper, we consider an extension of the standard portfolio problem in which transaction costs are incurred to rebalance an investment portfolio. The Markowitz framework of mean-variance efficiency is used with costs modelled as a percentage of the value transacted. Each security in the portfolio is represented by a pair of continuous decision variables corresponding to the amounts bought and sold. In order to properly represent the variance of the resulting portfolio, it is necessary to rescale by the funds available after paying the transaction costs. We show that the resulting fractional quadratic programming problem can be solved as a quadratic programming problem of size comparable to the model without transaction costs. Computational results for two empirical datasets are presented.",
"title": ""
},
{
"docid": "bb94ac9ac0c1e1f1155fc56b13bc103e",
"text": "In contrast to the Android application layer, Android’s application framework’s internals and their influence on the platform security and user privacy are still largely a black box for us. In this paper, we establish a static runtime model of the application framework in order to study its internals and provide the first high-level classification of the framework’s protected resources. We thereby uncover design patterns that differ highly from the runtime model at the application layer. We demonstrate the benefits of our insights for security-focused analysis of the framework by re-visiting the important use-case of mapping Android permissions to framework/SDK API methods. We, in particular, present a novel mapping based on our findings that significantly improves on prior results in this area that were established based on insufficient knowledge about the framework’s internals. Moreover, we introduce the concept of permission locality to show that although framework services follow the principle of separation of duty, the accompanying permission checks to guard sensitive operations violate it.",
"title": ""
},
{
"docid": "22445127362a9a2b16521a4a48f24686",
"text": "This work introduces the engineering design of a device capable to detect serum turbidity. We hypothesized that an electronic, portable, and low cost device that can provide objective, quantitative measurements of serum turbidity might have the potential to improve the early detection of neonatal sepsis. The design features, testing methodologies, and the obtained results are described. The final electronic device was evaluated in two experiments. The first one consisted in recording the turbidity value measured by the device for different solutions with known concentrations and different degrees of turbidity. The second analysis demonstrates a positive correlation between visual turbidity estimation and electronic turbidity measurement. Furthermore, our device demonstrated high turbidity in serum from two neonates with sepsis (one with a confirmed positive blood culture; the other one with a clinical diagnosis). We conclude that our electronic device may effectively measure serum turbidity at the bedside. Future studies will widen the possibility of additional clinical implications.",
"title": ""
}
] | scidocsrr |
8a354fce87f03ef6245c20ddc225cc15 | Multi-domain Neural Network Language Generation for Spoken Dialogue Systems | [
{
"docid": "c12c9fa98f672ec1bfde404d5bf54a35",
"text": "Speech recognition has become an important feature in smartphones in recent years. Different from traditional automatic speech recognition, the speech recognition on smartphones can take advantage of personalized language models to model the linguistic patterns and wording habits of a particular smartphone owner better. Owing to the popularity of social networks in recent years, personal texts and messages are no longer inaccessible. However, data sparseness is still an unsolved problem. In this paper, we propose a three-step adaptation approach to personalize recurrent neural network language models (RNNLMs). We believe that its capability to model word histories as distributed representations of arbitrary length can help mitigate the data sparseness problem. Furthermore, we also propose additional user-oriented features to empower the RNNLMs with stronger capabilities for personalization. The experiments on a Facebook dataset showed that the proposed method not only drastically reduced the model perplexity in preliminary experiments, but also moderately reduced the word error rate in n-best rescoring tests.",
"title": ""
}
] | [
{
"docid": "503c6039a8b875666a7c984098dfa710",
"text": "This paper is a survey on the application of neural networks in forecasting stock market prices. With their ability to discover patterns in nonlinear and chaotic systems, neural networks offer the ability to predict market directions more accurately than current techniques. Common market analysis techniques such as technical analysis, fundamental analysis, and regression are discussed and compared with neural network performance. Also, the Efficient Market Hypothesis (EMH) is presented and contrasted with chaos theory and neural networks. This paper refutes the EMH based on previous neural network work. Finally, future directions for applying neural networks to the financial markets are discussed.",
"title": ""
},
{
"docid": "2a3a1c67e118784aff9191e259ed32fd",
"text": "1474-0346/$ see front matter 2010 Elsevier Ltd. A doi:10.1016/j.aei.2010.06.006 * Corresponding author. Tel.: +1 4048949881. E-mail address: brilakis@gatech.edu (I. Brilakis). Only very few constructed facilities today have a complete record of as-built information. Despite the growing use of Building Information Modelling and the improvement in as-built records, several more years will be required before guidelines that require as-built data modelling will be implemented for the majority of constructed facilities, and this will still not address the stock of existing buildings. A technical solution for scanning buildings and compiling Building Information Models is needed. However, this is a multidisciplinary problem, requiring expertise in scanning, computer vision and videogrammetry, machine learning, and parametric object modelling. This paper outlines the technical approach proposed by a consortium of researchers that has gathered to tackle the ambitious goal of automating as-built modelling as far as possible. The top level framework of the proposed solution is presented, and each process, input and output is explained, along with the steps needed to validate them. Preliminary experiments on the earlier stages (i.e. processes) of the framework proposed are conducted and results are shown; the work toward implementation of the remainder is ongoing. 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "d8f7b138124e7b1a251e8bd92e47f35c",
"text": "Autonomous delivery of goods using a Micro Air Vehicle (MAV) is a difficult problem, as it poses high demand on the MAV's control, perception and manipulation capabilities. This problem is especially challenging if the exact shape, location and configuration of the objects are unknown. In this paper, we report our findings during the development and evaluation of a fully integrated system that is energy efficient and enables MAVs to pick up and deliver objects with partly ferrous surface of varying shapes and weights. This is achieved by using a novel combination of an electro-permanent magnetic gripper with a passively compliant structure and integration with detection, control and servo positioning algorithms. The system's ability to grasp stationary and moving objects was tested, as well as its ability to cope with different shapes of the object and external disturbances. We show that such a system can be successfully deployed in scenarios where an object with partly ferrous parts needs to be gripped and placed in a predetermined location.",
"title": ""
},
{
"docid": "d1c33990b7642ea51a8a568fa348d286",
"text": "Connectionist temporal classification CTC has recently shown improved performance and efficiency in automatic speech recognition. One popular decoding implementation is to use a CTC model to predict the phone posteriors at each frame and then perform Viterbi beam search on a modified WFST network. This is still within the traditional frame synchronous decoding framework. In this paper, the peaky posterior property of CTC is carefully investigated and it is found that ignoring blank frames will not introduce additional search errors. Based on this phenomenon, a novel phone synchronous decoding framework is proposed by removing tremendous search redundancy due to blank frames, which results in significant search speed up. The framework naturally leads to an extremely compact phone-level acoustic space representation: CTC lattice. With CTC lattice, efficient and effective modular speech recognition approaches, second pass rescoring for large vocabulary continuous speech recognition LVCSR, and phone-based keyword spotting KWS, are also proposed in this paper. Experiments showed that phone synchronous decoding can achieve 3-4 times search speed up without performance degradation compared to frame synchronous decoding. Modular LVCSR with CTC lattice can achieve further WER improvement. KWS with CTC lattice not only achieved significant equal error rate improvement, but also greatly reduced the KWS model size and increased the search speed.",
"title": ""
},
{
"docid": "8b1b0ee79538a1f445636b0798a0c7ca",
"text": "Much of the current activity in the area of intelligent vehicle-highway systems (IVHS) focuses on one simple objective: to collect more data. Clearly, improvements in sensor technology and communication systems will allow transportation agencies to more closely monitor the condition of the surface transportation system. However, monitoring alone cannot improve the safety or efficiency of the system. It is imperative that surveillance data be used to manage the system in a proactive rather than a reactive manner. 'Proactive traffic management will require the ability to predict traffic conditions. Previous predictive modeling approaches can be grouped into three categories: (a) historical, data-based algorithms; (b) time-series models; and (c) simulations. A relatively new mathematical model, the neural network, offers an attractive alternative because neural networks can model undefined, complex nonlinear surfaces. In a comparison of a backpropagation neural network model with the more traditional approaches of an historical, data-based algorithm and a time-series model, the backpropagation model· was clearly superior, although all three models did an adequate job of predicting future traffic volumes. The backpropagation model was more responsive to dynamic conditions than the historical, data-based algorithm, and it did not experience the lag and overprediction characteristics of the time-series model. Given these advantages and the backpropagation model's ability to run in a parallel computing environment, it appears that such neural network prediction models hold considerable potential for use in real-time IVHS applications.",
"title": ""
},
{
"docid": "78f7af509d4982a0dfd64f2272866ede",
"text": "It is now well understood that (1) it is possible to reconstruct sparse signals exactly from what appear to be highly incomplete sets of linear measurements and (2) that this can be done by constrained !1 minimization. In this paper, we study a novel method for sparse signal recovery that in many situations outperforms !1 minimization in the sense that substantially fewer measurements are needed for exact recovery. The algorithm consists of solving a sequence of weighted !1-minimization problems where the weights used for the next iteration are computed from the value of the current solution. We present a series of experiments demonstrating the remarkable performance and broad applicability of this algorithm in the areas of sparse signal recovery, statistical estimation, error correction and image processing. Interestingly, superior gains are also achieved when our method is applied to recover signals with assumed near-sparsity in overcomplete representations—not by reweighting the !1 norm of the coefficient sequence as is common, but by reweighting the !1 norm of the transformed object. An immediate consequence is the possibility of highly efficient data acquisition protocols by improving on a technique known as Compressive Sensing.",
"title": ""
},
{
"docid": "71ec2c62f6371c810b35aeef4172a392",
"text": "This survey, aimed mainly at mathematicians rather than practitioners, covers recent developments in homomorphic encryption (computing on encrypted data) and program obfuscation (generating encrypted but functional programs). Current schemes for encrypted computation all use essentially the same “noisy” approach: they encrypt via a noisy encoding of the message, they decrypt using an “approximate” ring homomorphism, and in between they employ techniques to carefully control the noise as computations are performed. This noisy approach uses a delicate balance between structure and randomness: structure that allows correct computation despite the randomness of the encryption, and randomness that maintains privacy against the adversary despite the structure. While the noisy approach “works”, we need new techniques and insights, both to improve efficiency and to better understand encrypted computation conceptually. Mathematics Subject Classification (2010). Primary 68Qxx; Secondary 68P25.",
"title": ""
},
{
"docid": "561c41873eeb64ec13a5745f235d4fb4",
"text": "There are many studies that look into the relationship between public debt and economic growth. It is hard to find, however, research addressing the role of corruption between these two variables. Noticing this vacancy in current literature, we strive to investigate the effect of corruption on the relationship between public debt and economic growth. For this purpose, the pooled ordinary least squares (OLS), fixed effects models and the dynamic panel generalized method of moments (GMM) models (Arellano-Bond, 1991) are estimated with data of 77 countries from 1990 to 2014. The empirical results show that the interaction term between public debt and corruption is statistically significant. This confirms the hypothesis that the effect of public debt on economic growth is a function of corruption. The sign of the marginal effect is negative in corrupt countries, but public debt enhances economic growth within countries that are not corrupt, i.e., highly transparent.",
"title": ""
},
{
"docid": "fe5377214840549fbbb6ad520592191d",
"text": "The ability to exert an appropriate amount of force on brain tissue during surgery is an important component of instrument handling. It allows surgeons to achieve the surgical objective effectively while maintaining a safe level of force in tool-tissue interaction. At the present time, this knowledge, and hence skill, is acquired through experience and is qualitatively conveyed from an expert surgeon to trainees. These forces can be assessed quantitatively by retrofitting surgical tools with sensors, thus providing a mechanism for improved performance and safety of surgery, and enhanced surgical training. This paper presents the development of a force-sensing bipolar forceps, with installation of a sensory system, that is able to measure and record interaction forces between the forceps tips and brain tissue in real time. This research is an extension of a previous research where a bipolar forceps was instrumented to measure dissection and coagulation forces applied in a single direction. Here, a planar forceps with two sets of strain gauges in two orthogonal directions was developed to enable measuring the forces with a higher accuracy. Implementation of two strain gauges allowed compensation of strain values due to deformations of the forceps in other directions (axial stiffening) and provided more accurate forces during microsurgery. An experienced neurosurgeon performed five neurosurgical tasks using the axial setup and repeated the same tasks using the planar device. The experiments were performed on cadaveric brains. Both setups were shown to be capable of measuring real-time interaction forces. Comparing the two setups, under the same experimental condition, indicated that the peak and mean forces quantified by planar forceps were at least 7% and 10% less than those of axial tool, respectively; therefore, utilizing readings of all strain gauges in planar forceps provides more accurate values of both peak and mean forces than axial forceps. Cross-correlation analysis between the two force signals obtained, one from each cadaveric practice, showed a high similarity between the two force signals.",
"title": ""
},
{
"docid": "f5360ff8d8cc5d0a852cebeb09a29a98",
"text": "In this paper, we propose a collaborative deep reinforcement learning (C-DRL) method for multi-object tracking. Most existing multiobject tracking methods employ the tracking-by-detection strategy which first detects objects in each frame and then associates them across different frames. However, the performance of these methods rely heavily on the detection results, which are usually unsatisfied in many real applications, especially in crowded scenes. To address this, we develop a deep prediction-decision network in our C-DRL, which simultaneously detects and predicts objects under a unified network via deep reinforcement learning. Specifically, we consider each object as an agent and track it via the prediction network, and seek the optimal tracked results by exploiting the collaborative interactions of different agents and environments via the decision network.Experimental results on the challenging MOT15 and MOT16 benchmarks are presented to show the effectiveness of our approach.",
"title": ""
},
{
"docid": "970a1c802a4c731c3fcb03855d5cfb8c",
"text": "Visual prior from generic real-world images can be learned and transferred for representing objects in a scene. Motivated by this, we propose an algorithm that transfers visual prior learned offline for online object tracking. From a collection of real-world images, we learn an overcomplete dictionary to represent visual prior. The prior knowledge of objects is generic, and the training image set does not necessarily contain any observation of the target object. During the tracking process, the learned visual prior is transferred to construct an object representation by sparse coding and multiscale max pooling. With this representation, a linear classifier is learned online to distinguish the target from the background and to account for the target and background appearance variations over time. Tracking is then carried out within a Bayesian inference framework, in which the learned classifier is used to construct the observation model and a particle filter is used to estimate the tracking result sequentially. Experiments on a variety of challenging sequences with comparisons to several state-of-the-art methods demonstrate that more robust object tracking can be achieved by transferring visual prior.",
"title": ""
},
{
"docid": "7f1ad50ce66c855776aaacd0d53279aa",
"text": "A method to synchronize and control a system of parallel single-phase inverters without communication is presented. Inspired by the phenomenon of synchronization in networks of coupled oscillators, we propose that each inverter be controlled to emulate the dynamics of a nonlinear dead-zone oscillator. As a consequence of the electrical coupling between inverters, they synchronize and share the load in proportion to their ratings. We outline a sufficient condition for global asymptotic synchronization and formulate a methodology for controller design such that the inverter terminal voltages oscillate at the desired frequency, and the load voltage is maintained within prescribed bounds. We also introduce a technique to facilitate the seamless addition of inverters controlled with the proposed approach into an energized system. Experimental results for a system of three inverters demonstrate power sharing in proportion to power ratings for both linear and nonlinear loads.",
"title": ""
},
{
"docid": "e1f5ad4dbb4f029ea7815b1f47c29a9b",
"text": "This paper proposes a method based on epoch parameters for detection of creaky voice in speech signal. The epoch parameters characterizing the source of excitation considered in this work are number of epochs in a frame, strength of excitation of epochs and epoch intervals. Analysis of epoch parameters estimated from zero-frequency filtering method with different window sizes is carried out. Distinct variations in the epoch parameters are observed for modal and creaky voiced regions. Variances of epoch parameters are used as input features to train a neural network classifier for identifying creaky regions. Performance evaluation results indicate that the proposed method performs significantly better than the existing creaky detection methods on different speech databases.",
"title": ""
},
{
"docid": "24411f7fe027e5eb617cf48c3e36ce05",
"text": "Reliability assessment of distribution system, based on historical data and probabilistic methods, leads to an unreliable estimation of reliability indices since the data for the distribution components are usually inaccurate or unavailable. Fuzzy logic is an efficient method to deal with the uncertainty in reliability inputs. In this paper, the ENS index along with other commonly used indices in reliability assessment are evaluated for the distribution system using fuzzy logic. Accordingly, the influential variables on the failure rate and outage duration time of the distribution components, which are natural or human-made, are explained using proposed fuzzy membership functions. The reliability indices are calculated and compared for different cases of the system operations by simulation on the IEEE RBTS Bus 2. The results of simulation show how utilities can significantly improve the reliability of their distribution system by considering the risk of the influential variables.",
"title": ""
},
{
"docid": "2802c89f5b943ea0bee357b36d072ada",
"text": "Motivation: Alzheimer’s disease (AD) is an incurable neurological condition which causes progressive mental deterioration, especially in the elderly. The focus of our work is to improve our understanding about the progression of AD. By finding brain regions which degenerate together in AD we can understand how the disease progresses during the lifespan of an Alzheimer’s patient. Our aim is to work towards not only achieving diagnostic performance but also generate useful clinical information. Objective: The main objective of this study is to find important sub regions of the brain which undergo neuronal degeneration together during AD using deep learning algorithms and other machine learning techniques. Methodology: We extract 3D brain region patches from 100 subject MRI images using a predefined anatomical atlas. We have devised an ensemble of pair predictors which use 3D convolutional neural networks to extract salient features for AD from a pair of regions in the brain. We then train them in a supervised manner and use a boosting algorithm to find the weightage of each pair predictor towards the final classification. We use this weightage as the strength of correlation and saliency between the two input sub regions of the pair predictor. Result: We were able to retrieve sub regional association measures for 100 sub region pairs using the proposed method. Our approach was able to automatically learn sub regional association structure in AD directly from images. Our approach also provides an insight into computational methods for demarcating effects of AD from effects of ageing (and other neurological diseases) on our neuroanatomy. Our meta classifier gave a final accuracy of 81.79% for AD classification relative to healthy subjects using a single imaging modality dataset.",
"title": ""
},
{
"docid": "07f24637cb88ac855a3e63676317ac34",
"text": "The location of the author of a social media message is not invariably the same as the location that the author writes about in the message. In applications that mine these messages for information such as tracking news, political events or responding to disasters, it is the geographic content of the message rather than the location of the author that is important. To this end, we present a method to geo-parse the short, informal messages known as microtext. Our preliminary investigation has shown that many microtext messages contain place references that are abbreviated, misspelled, or highly localized. These references are missed by standard geo-parsers. Our geo-parser is built to find such references. It uses Natural Language Processing methods to identify references to streets and addresses, buildings and urban spaces, and toponyms, and place acronyms and abbreviations. It combines heuristics, open-source Named Entity Recognition software, and machine learning techniques. Our primary data consisted of Twitter messages sent immediately following the February 2011 earthquake in Christchurch, New Zealand. The algorithm identified location in the data sample, Twitter messages, giving an F statistic of 0.85 for streets, 0.86 for buildings, 0.96 for toponyms, and 0.88 for place abbreviations, with a combined average F of 0.90 for identifying places. The same data run through a geo-parsing standard, Yahoo! Placemaker, yielded an F statistic of zero for streets and buildings (because Placemaker is designed to find neither streets nor buildings), and an F of 0.67 for toponyms.",
"title": ""
},
{
"docid": "b3932eaf894f4fb6b47f2fec24bb88fd",
"text": "There has been much prescriptive work in project management, exemplified in various \"Bodies of Knowledge\". However, experience shows some projects overspending considerably. Recently, systemic modeling research into the behavior of large projects explains project oversponds by \"systemic\" effects and the (sometimes counterintuitive) effect of management actions. However, while this work is becoming more widely known, embedding the lessons in project-management practice is not straightforward. The current prescriptive dominant discourse of project management contains implicit underlying assumptions with which the systemic modeling work clashes, indeed showing how conventional methods can exacerbate rather than alleviate project problems. Exploration of this modeling suggests that for projects that are complex, uncertain, and time-limited, conventional methods might be inappropriate, and aspects of newer methodologies in which the project \"emerges\" rather than being fully preplanned might be more appropriate. Some of the current literature on project-classification schemes also suggests similar parameters, without the rationale that the systemic modeling provides, thus providing useful backup to this analysis. The eventual aim of this line of work is to enable project managers to choose effective ways to manage projects based on understanding and model-based theory.",
"title": ""
},
{
"docid": "c90ab409ea2a9726f6ddded45e0fdea9",
"text": "About a decade ago, the Adult Attachment Interview (AAI; C. George, N. Kaplan, & M. Main, 1985) was developed to explore parents' mental representations of attachment as manifested in language during discourse of childhood experiences. The AAI was intended to predict the quality of the infant-parent attachment relationship, as observed in the Ainsworth Strange Situation, and to predict parents' responsiveness to their infants' attachment signals. The current meta-analysis examined the available evidence with respect to these predictive validity issues. In regard to the 1st issue, the 18 available samples (N = 854) showed a combined effect size of 1.06 in the expected direction for the secure vs. insecure split. For a portion of the studies, the percentage of correspondence between parents' mental representation of attachment and infants' attachment security could be computed (the resulting percentage was 75%; kappa = .49, n = 661). Concerning the 2nd issue, the 10 samples (N = 389) that were retrieved showed a combined effect size of .72 in the expected direction. According to conventional criteria, the effect sizes are large. It was concluded that although the predictive validity of the AAI is a replicated fact, there is only partial knowledge of how attachment representations are transmitted (the transmission gap).",
"title": ""
},
{
"docid": "db1c084ddbe345fe3c8e400e295830c8",
"text": "This article is a single-source introduction to the emerging concept of smart cities. It can be used for familiarizing researchers with the vast scope of research possible in this application domain. The smart city is primarily a concept, and there is still not a clear and consistent definition among practitioners and academia. As a simplistic explanation, a smart city is a place where traditional networks and services are made more flexible, efficient, and sustainable with the use of information, digital, and telecommunication technologies to improve the city's operations for the benefit of its inhabitants. Smart cities are greener, safer, faster, and friendlier. The different components of a smart city include smart infrastructure, smart transportation, smart energy, smart health care, and smart technology. These components are what make the cities smart and efficient. Information and communication technology (ICT) are enabling keys for transforming traditional cities into smart cities. Two closely related emerging technology frameworks, the Internet of Things (IoT) and big data (BD), make smart cities efficient and responsive. The technology has matured enough to allow smart cities to emerge. However, there is much needed in terms of physical infrastructure, a smart city, the digital technologies translate into better public services for inhabitants and better use of resources while reducing environmental impacts. One of the formal definitions of the smart city is the following: a city \"connecting the physical infrastructure, the information-technology infrastructure, the social infrastructure, and the business infrastructure to leverage the collective intelligence of the city\". Another formal and comprehensive definition is \"a smart sustainable city is an innovative city that uses information and communication technologies (ICTs) and other means to improve quality of life, efficiency of urban operations and services, and competitiveness, while ensuring that it meets the needs of present and future generations with respect to economic, social and environmental aspects\". Any combination of various smart components can make cities smart. A city need not have all the components to be labeled as smart. The number of smart components depends on the cost and available technology.",
"title": ""
}
] | scidocsrr |
c8855abd771a62b93c7112efeece4ecd | Extracting sclera features for cancelable identity verification | [
{
"docid": "9fc7f8ef20cf9c15f9d2d2ce5661c865",
"text": "This paper presents a new iris database that contains images with noise. This is in contrast with the existing databases, that are noise free. UBIRIS is a tool for the development of robust iris recognition algorithms for biometric proposes. We present a detailed description of the many characteristics of UBIRIS and a comparison of several image segmentation approaches used in the current iris segmentation methods where it is evident their small tolerance to noisy images.",
"title": ""
}
] | [
{
"docid": "befc5dbf4da526963f8aa180e1fda522",
"text": "Charities publicize the donations they receive, generally according to dollar categories rather than the exact amount. Donors in turn tend to give the minimum amount necessary to get into a category. These facts suggest that donors have a taste for having their donations made public. This paper models the effects of such a taste for ‘‘prestige’’ on the behavior of donors and charities. I show how a taste for prestige means that charities can increase donations by using categories. The paper also discusses the effect of a taste for prestige on competition between charities. 1998 Elsevier Science S.A.",
"title": ""
},
{
"docid": "8ad20ab4523e4cc617142a2de299dd4a",
"text": "OBJECTIVE\nTo determine the reliability and internal validity of the Hypospadias Objective Penile Evaluation (HOPE)-score, a newly developed scoring system assessing the cosmetic outcome in hypospadias.\n\n\nPATIENTS AND METHODS\nThe HOPE scoring system incorporates all surgically-correctable items: position of meatus, shape of meatus, shape of glans, shape of penile skin and penile axis. Objectivity was established with standardized photographs, anonymously coded patients, independent assessment by a panel, standards for a \"normal\" penile appearance, reference pictures and assessment of the degree of abnormality. A panel of 13 pediatric urologists completed 2 questionnaires, each consisting of 45 series of photographs, at an interval of at least 1 week. The inter-observer reliability, intra-observer reliability and internal validity were analyzed.\n\n\nRESULTS\nThe correlation coefficients for the HOPE-score were as follows: intra-observer reliability 0.817, inter-observer reliability 0.790, \"non-parametric\" internal validity 0.849 and \"parametric\" internal validity 0.842. These values reflect good reproducibility, sufficient agreement among observers and a valid measurement of differences and similarities in cosmetic appearance.\n\n\nCONCLUSIONS\nThe HOPE-score is the first scoring system that fulfills the criteria of a valid measurement tool: objectivity, reliability and validity. These favorable properties support its use as an objective outcome measure of the cosmetic result after hypospadias surgery.",
"title": ""
},
{
"docid": "ea982e20cc739fc88ed6724feba3d896",
"text": "We report new evidence on the emotional, demographic, and situational correlates of boredom from a rich experience sample capturing 1.1 million emotional and time-use reports from 3,867 U.S. adults. Subjects report boredom in 2.8% of the 30-min sampling periods, and 63% of participants report experiencing boredom at least once across the 10-day sampling period. We find that boredom is more likely to co-occur with negative, rather than positive, emotions, and is particularly predictive of loneliness, anger, sadness, and worry. Boredom is more prevalent among men, youths, the unmarried, and those of lower income. We find that differences in how such demographic groups spend their time account for up to one third of the observed differences in overall boredom. The importance of situations in predicting boredom is additionally underscored by the high prevalence of boredom in specific situations involving monotonous or difficult tasks (e.g., working, studying) or contexts where one's autonomy might be constrained (e.g., time with coworkers, afternoons, at school). Overall, our findings are consistent with cognitive accounts that cast boredom as emerging from situations in which engagement is difficult, and are less consistent with accounts that exclusively associate boredom with low arousal or with situations lacking in meaning. (PsycINFO Database Record",
"title": ""
},
{
"docid": "85221954ced857c449acab8ee5cf801e",
"text": "IMSI Catchers are used in mobile networks to identify and eavesdrop on phones. When, the number of vendors increased and prices dropped, the device became available to much larger audiences. Self-made devices based on open source software are available for about US$ 1,500.\n In this paper, we identify and describe multiple methods of detecting artifacts in the mobile network produced by such devices. We present two independent novel implementations of an IMSI Catcher Catcher (ICC) to detect this threat against everyone's privacy. The first one employs a network of stationary (sICC) measurement units installed in a geographical area and constantly scanning all frequency bands for cell announcements and fingerprinting the cell network parameters. These rooftop-mounted devices can cover large areas. The second implementation is an app for standard consumer grade mobile phones (mICC), without the need to root or jailbreak them. Its core principle is based upon geographical network topology correlation, facilitating the ubiquitous built-in GPS receiver in today's phones and a network cell capabilities fingerprinting technique. The latter works for the vicinity of the phone by first learning the cell landscape and than matching it against the learned data. We implemented and evaluated both solutions for digital self-defense and deployed several of the stationary units for a long term field-test. Finally, we describe how to detect recently published denial of service attacks.",
"title": ""
},
{
"docid": "d3e35963e85ade6e3e517ace58cb3911",
"text": "In this paper, we present the design and evaluation of PeerDB, a peer-to-peer (P2P) distributed data sharing system. PeerDB distinguishes itself from existing P2P systems in several ways. First, it is a full-fledge data management system that supports fine-grain content-based searching. Second, it facilitates sharing of data without shared schema. Third, it combines the power of mobile agents into P2P systems to perform operations at peers’ sites. Fourth, PeerDB network is self-configurable, i.e., a node can dynamically optimize the set of peers that it can communicate directly with based on some optimization criterion. By keeping peers that provide most information or services in close proximity (i.e, direct communication), the network bandwidth can be better utilized and system performance can be optimized. We implemented and evaluated PeerDB on a cluster of 32 Pentium II PCs. Our experimental results show that PeerDB can effectively exploit P2P technologies for distributed data sharing.",
"title": ""
},
{
"docid": "619af7dc39e21690c1d164772711d7ed",
"text": "The prevalence of smart mobile devices has promoted the popularity of mobile applications (a.k.a. apps). Supporting mobility has become a promising trend in software engineering research. This article presents an empirical study of behavioral service profiles collected from millions of users whose devices are deployed with Wandoujia, a leading Android app-store service in China. The dataset of Wandoujia service profiles consists of two kinds of user behavioral data from using 0.28 million free Android apps, including (1) app management activities (i.e., downloading, updating, and uninstalling apps) from over 17 million unique users and (2) app network usage from over 6 million unique users. We explore multiple aspects of such behavioral data and present patterns of app usage. Based on the findings as well as derived knowledge, we also suggest some new open opportunities and challenges that can be explored by the research community, including app development, deployment, delivery, revenue, etc.",
"title": ""
},
{
"docid": "7f47434e413230faf04849cf43a845fa",
"text": "Although surgical resection remains the gold standard for treatment of liver cancer, there is a growing need for alternative therapies. Microwave ablation (MWA) is an experimental procedure that has shown great promise for the treatment of unresectable tumors and exhibits many advantages over other alternatives to resection, such as radiofrequency ablation and cryoablation. However, the antennas used to deliver microwave power largely govern the effectiveness of MWA. Research has focused on coaxial-based interstitial antennas that can be classified as one of three types (dipole, slot, or monopole). Choked versions of these antennas have also been developed, which can produce localized power deposition in tissue and are ideal for the treatment of deepseated hepatic tumors.",
"title": ""
},
{
"docid": "8e8905e6ae4c4d6cd07afa157b253da9",
"text": "Blockchain technology enables the execution of collaborative business processes involving untrusted parties without requiring a central authority. Specifically, a process model comprising tasks performed by multiple parties can be coordinated via smart contracts operating on the blockchain. The consensus mechanism governing the blockchain thereby guarantees that the process model is followed by each party. However, the cost required for blockchain use is highly dependent on the volume of data recorded and the frequency of data updates by smart contracts. This paper proposes an optimized method for executing business processes on top of commodity blockchain technology. The paper presents a method for compiling a process model into a smart contract that encodes the preconditions for executing each task in the process using a space-optimized data structure. The method is empirically compared to a previously proposed baseline by replaying execution logs, including one from a real-life business process, and measuring resource consumption.",
"title": ""
},
{
"docid": "41820b51dbea5801281e6ca86defed2e",
"text": "This paper is an exploration in the semantics and pragmatics of linguistic feedback, i.e., linguistic mechanisms which enable the participants in spoken interaction to exchange information about basic communicative functions, such as contact, perception, understanding, and attitudinal reactions to the communicated content. Special attention is given to the type of reaction conveyed by feedback utterances, the communicative status of the information conveyed (i. e., the level of awareness and intentionality of the communicating sender), and the context sensitivity of feedback expressions. With regard to context sensitivity, which is one of the most characteristic features of feedback expressions, the discussion focuses on the way in which the type of speech act (mood), the factual polarity and the information status of the preceding utterance influence the interpretation of feedback utterances. The different content dimensions are exemplified by data from recorded dialogues and by data given through linguistic intuition. Finally, two different ways of formalizing the analysis are examined, one using attribute-value matrices and one based on the theory of situation semantics. ___________________________________________________________________ Authors' address: Jens Allwood, Joakim Nivre, and Elisabeth Ahlsén Department of Lingustics University of Göteborg Box 200 S-405 30 Göteborg Sweden",
"title": ""
},
{
"docid": "099dbf8d4c0b401cd3389583eb4495f3",
"text": "This paper introduces a video dataset of spatio-temporally localized Atomic Visual Actions (AVA). The AVA dataset densely annotates 80 atomic visual actions in 437 15-minute video clips, where actions are localized in space and time, resulting in 1.59M action labels with multiple labels per person occurring frequently. The key characteristics of our dataset are: (1) the definition of atomic visual actions, rather than composite actions; (2) precise spatio-temporal annotations with possibly multiple annotations for each person; (3) exhaustive annotation of these atomic actions over 15-minute video clips; (4) people temporally linked across consecutive segments; and (5) using movies to gather a varied set of action representations. This departs from existing datasets for spatio-temporal action recognition, which typically provide sparse annotations for composite actions in short video clips. AVA, with its realistic scene and action complexity, exposes the intrinsic difficulty of action recognition. To benchmark this, we present a novel approach for action localization that builds upon the current state-of-the-art methods, and demonstrates better performance on JHMDB and UCF101-24 categories. While setting a new state of the art on existing datasets, the overall results on AVA are low at 15.8% mAP, underscoring the need for developing new approaches for video understanding.",
"title": ""
},
{
"docid": "777243cb514414dd225a9d5f41dc49b7",
"text": "We have built and tested a decision tool which will help organisations properly select one business process maturity model (BPMM) over another. This prototype consists of a novel questionnaire with decision criteria for BPMM selection, linked to a unique data set of 69 BPMMs. Fourteen criteria (questions) were elicited from an international Delphi study, and weighed by the analytical hierarchy process. Case studies have shown (non-)profit and academic applications. Our purpose was to describe criteria that enable an informed BPMM choice (conform to decision-making theories, rather than ad hoc). Moreover, we propose a design process for building BPMM decision tools. 2013 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "90d0d75ca8413dad8ffe42b6d064905b",
"text": "BACKGROUND\nDebate continues about the consequences of adolescent cannabis use. Existing data are limited in statistical power to examine rarer outcomes and less common, heavier patterns of cannabis use than those already investigated; furthermore, evidence has a piecemeal approach to reporting of young adult sequelae. We aimed to provide a broad picture of the psychosocial sequelae of adolescent cannabis use.\n\n\nMETHODS\nWe integrated participant-level data from three large, long-running longitudinal studies from Australia and New Zealand: the Australian Temperament Project, the Christchurch Health and Development Study, and the Victorian Adolescent Health Cohort Study. We investigated the association between the maximum frequency of cannabis use before age 17 years (never, less than monthly, monthly or more, weekly or more, or daily) and seven developmental outcomes assessed up to age 30 years (high-school completion, attainment of university degree, cannabis dependence, use of other illicit drugs, suicide attempt, depression, and welfare dependence). The number of participants varied by outcome (N=2537 to N=3765).\n\n\nFINDINGS\nWe recorded clear and consistent associations and dose-response relations between the frequency of adolescent cannabis use and all adverse young adult outcomes. After covariate adjustment, compared with individuals who had never used cannabis, those who were daily users before age 17 years had clear reductions in the odds of high-school completion (adjusted odds ratio 0·37, 95% CI 0·20-0·66) and degree attainment (0·38, 0·22-0·66), and substantially increased odds of later cannabis dependence (17·95, 9·44-34·12), use of other illicit drugs (7·80, 4·46-13·63), and suicide attempt (6·83, 2·04-22·90).\n\n\nINTERPRETATION\nAdverse sequelae of adolescent cannabis use are wide ranging and extend into young adulthood. Prevention or delay of cannabis use in adolescence is likely to have broad health and social benefits. Efforts to reform cannabis legislation should be carefully assessed to ensure they reduce adolescent cannabis use and prevent potentially adverse developmental effects.\n\n\nFUNDING\nAustralian Government National Health and Medical Research Council.",
"title": ""
},
{
"docid": "066d3a381ffdb2492230bee14be56710",
"text": "The third generation partnership project released its first 5G security specifications in March 2018. This paper reviews the proposed security architecture and its main requirements and procedures and evaluates them in the context of known and new protocol exploits. Although security has been improved from previous generations, our analysis identifies potentially unrealistic 5G system assumptions and protocol edge cases that can render 5G communication systems vulnerable to adversarial attacks. For example, null encryption and null authentication are still supported and can be used in valid system configurations. With no clear proposal to tackle pre-authentication message-based exploits, mobile devices continue to implicitly trust any serving network, which may or may not enforce a number of optional security features, or which may not be legitimate. Moreover, several critical security and key management functions are considered beyond the scope of the specifications. The comparison with known 4G long-term evolution protocol exploits reveals that the 5G security specifications, as of Release 15, Version 1.0.0, do not fully address the user privacy and network availability challenges.",
"title": ""
},
{
"docid": "d64b3b68f094ade7881f2bb0f2572990",
"text": "Large-scale transactional systems still suffer from not viable trust management strategies. Given its intrinsic characteristics, blockchain technology appears as interesting from this perspective. A semantic layer built upon a basic blockchain infrastructure would join the benefits of flexible resource/service discovery and validation by consensus. This paper proposes a novel Service-oriented Architecture (SOA) based on a semantic blockchain. Registration, discovery, selection and payment operations are implemented as smart contracts, allowing decentralized execution and trust. Potential applications include material and immaterial resource marketplaces and trustless collaboration among autonomous entities, spanning many areas of interest for smart cities and communities.",
"title": ""
},
{
"docid": "5038df440c0db19e1588cc69b10cc3c4",
"text": "Electronic document management (EDM) technology has the potential to enhance the information management in construction projects considerably, without radical changes to current practice. Over the past fifteen years this topic has been overshadowed by building product modelling in the construction IT research world, but at present EDM is quickly being introduced in practice, in particular in bigger projects. Often this is done in the form of third party services available over the World Wide Web. In the paper, a typology of research questions and methods is presented, which can be used to position the individual research efforts which are surveyed in the paper. Questions dealt with include: What features should EMD systems have? How much are they used? Are there benefits from use and how should these be measured? What are the barriers to wide-spread adoption? Which technical questions need to be solved? Is there scope for standardisation? How will the market for such systems evolve?",
"title": ""
},
{
"docid": "14d5c8ed0b48d5625287fecaf5f72691",
"text": "In this paper we attempt to demonstrate the strengths of Hierarchical Hidden Markov Models (HHMMs) in the representation and modelling of musical structures. We show how relatively simple HHMMs, containing a minimum of expert knowledge, use their advantage of having multiple layers to perform well on tasks where flat Hidden Markov Models (HMMs) struggle. The examples in this paper show a HHMM’s performance at extracting higherlevel musical properties through the construction of simple pitch sequences, correctly representing the data set on which it was trained.",
"title": ""
},
{
"docid": "5320d7790348cc0e48dcf76428811d7b",
"text": "central and, in some ways, most familiar concepts in AI, the most fundamental question about it—What is it?—has rarely been answered directly. Numerous papers have lobbied for one or another variety of representation, other papers have argued for various properties a representation should have, and still others have focused on properties that are important to the notion of representation in general. In this article, we go back to basics to address the question directly. We believe that the answer can best be understood in terms of five important and distinctly different roles that a representation plays, each of which places different and, at times, conflicting demands on the properties a representation should have. We argue that keeping in mind all five of these roles provides a usefully broad perspective that sheds light on some long-standing disputes and can invigorate both research and practice in the field.",
"title": ""
},
{
"docid": "185f209e92314fdf15bbbe3238f1c616",
"text": "This paper studies the opportunistic routing (OR) in unmanned aerial vehicle (UAV) assisted wireless sensor networks (WSNs). We consider the scenario where a UAV collects data from randomly deployed mobile sensors that are moving with different velocities along a predefined route. Due to the dynamic topology, mobile sensors have different opportunities to communicate with the UAV. This paper proposes the All Neighbors Opportunistic Routing (ANOR) and Highest Velocity Opportunistic Routing (HVOR) protocols. In essence, ANOR forwards packets to all neighbors and HVOR forwards them to one neighbor with highest velocity. HVOR is a new OR protocol which dynamically selects route on a pre-transmission basis in multi-hop network. HVOR helps the sensor which has little opportunity to communicate with the UAV to determine which sensor, among all the sensors that are within its range, is the forwarder. The selected node forwards the packet. As a result, in each hop, the packet moves to the sensor that has higher opportunity to communicate with the UAV. In addition, we focus on various performance metrics, including Packets Delivery Ratio (PDR), Routing Overhead Ratio (ROR), Average Latency (AL) and Average Hop Count (AHC), to evaluate the proposed algorithms and compare them with a Direct Communication (DC) protocol. Through extensive simulations, we have shown that both HVOR and ANOR algorithms work better than DC. Moreover, the HVOR algorithm outperforms the other two algorithms in terms of the average overhead.",
"title": ""
},
{
"docid": "3dcb6a88aafb7a9c917ccdd306768f51",
"text": "Protein quality describes characteristics of a protein in relation to its ability to achieve defined metabolic actions. Traditionally, this has been discussed solely in the context of a protein's ability to provide specific patterns of amino acids to satisfy the demands for synthesis of protein as measured by animal growth or, in humans, nitrogen balance. As understanding of protein's actions expands beyond its role in maintaining body protein mass, the concept of protein quality must expand to incorporate these newly emerging actions of protein into the protein quality concept. New research reveals increasingly complex roles for protein and amino acids in regulation of body composition and bone health, gastrointestinal function and bacterial flora, glucose homeostasis, cell signaling, and satiety. The evidence available to date suggests that quality is important not only at the minimum Recommended Dietary Allowance level but also at higher intakes. Currently accepted methods for measuring protein quality do not consider the diverse roles of indispensable amino acids beyond the first limiting amino acid for growth or nitrogen balance. As research continues to evolve in assessing protein's role in optimal health at higher intakes, there is also need to continue to explore implications for protein quality assessment.",
"title": ""
},
{
"docid": "20a2390dede15514cd6a70e9b56f5432",
"text": "The ability to record and replay program executions with low overhead enables many applications, such as reverse-execution debugging, debugging of hard-toreproduce test failures, and “black box” forensic analysis of failures in deployed systems. Existing record-andreplay approaches limit deployability by recording an entire virtual machine (heavyweight), modifying the OS kernel (adding deployment and maintenance costs), requiring pervasive code instrumentation (imposing significant performance and complexity overhead), or modifying compilers and runtime systems (limiting generality). We investigated whether it is possible to build a practical record-and-replay system avoiding all these issues. The answer turns out to be yes — if the CPU and operating system meet certain non-obvious constraints. Fortunately modern Intel CPUs, Linux kernels and user-space frameworks do meet these constraints, although this has only become true recently. With some novel optimizations, our system RR records and replays real-world lowparallelism workloads with low overhead, with an entirely user-space implementation, using stock hardware, compilers, runtimes and operating systems. RR forms the basis of an open-source reverse-execution debugger seeing significant use in practice. We present the design and implementation of RR, describe its performance on a variety of workloads, and identify constraints on hardware and operating system design required to support our approach.",
"title": ""
}
] | scidocsrr |
d1b743428f24a649b697cff3b7c15ca3 | Towards Accurate Distant Supervision for Relational Facts Extraction | [
{
"docid": "904db9e8b0deb5027d67bffbd345b05f",
"text": "Entity Recognition (ER) is a key component of relation extraction systems and many other natural-language processing applications. Unfortunately, most ER systems are restricted to produce labels from to a small set of entity classes, e.g., person, organization, location or miscellaneous. In order to intelligently understand text and extract a wide range of information, it is useful to more precisely determine the semantic classes of entities mentioned in unstructured text. This paper defines a fine-grained set of 112 tags, formulates the tagging problem as multi-class, multi-label classification, describes an unsupervised method for collecting training data, and presents the FIGER implementation. Experiments show that the system accurately predicts the tags for entities. Moreover, it provides useful information for a relation extraction system, increasing the F1 score by 93%. We make FIGER and its data available as a resource for future work.",
"title": ""
},
{
"docid": "3f2312e385fc1c9aafc6f9f08e2e2d4f",
"text": "Entity relation detection is a form of information extraction that finds predefined relations between pairs of entities in text. This paper describes a relation detection approach that combines clues from different levels of syntactic processing using kernel methods. Information from three different levels of processing is considered: tokenization, sentence parsing and deep dependency analysis. Each source of information is represented by kernel functions. Then composite kernels are developed to integrate and extend individual kernels so that processing errors occurring at one level can be overcome by information from other levels. We present an evaluation of these methods on the 2004 ACE relation detection task, using Support Vector Machines, and show that each level of syntactic processing contributes useful information for this task. When evaluated on the official test data, our approach produced very competitive ACE value scores. We also compare the SVM with KNN on different kernels.",
"title": ""
},
{
"docid": "9c44aba7a9802f1fe95fbeb712c23759",
"text": "In relation extraction, distant supervision seeks to extract relations between entities from text by using a knowledge base, such as Freebase, as a source of supervision. When a sentence and a knowledge base refer to the same entity pair, this approach heuristically labels the sentence with the corresponding relation in the knowledge base. However, this heuristic can fail with the result that some sentences are labeled wrongly. This noisy labeled data causes poor extraction performance. In this paper, we propose a method to reduce the number of wrong labels. We present a novel generative model that directly models the heuristic labeling process of distant supervision. The model predicts whether assigned labels are correct or wrong via its hidden variables. Our experimental results show that this model detected wrong labels with higher performance than baseline methods. In the experiment, we also found that our wrong label reduction boosted the performance of relation extraction.",
"title": ""
}
] | [
{
"docid": "7974d8e70775f1b7ef4d8c9aefae870e",
"text": "Low-rank decomposition plays a central role in accelerating convolutional neural network (CNN), and the rank of decomposed kernel-tensor is a key parameter that determines the complexity and accuracy of a neural network. In this paper, we define rank selection as a combinatorial optimization problem and propose a methodology to minimize network complexity while maintaining the desired accuracy. Combinatorial optimization is not feasible due to search space limitations. To restrict the search space and obtain the optimal rank, we define the space constraint parameters with a boundary condition. We also propose a linearly-approximated accuracy function to predict the fine-tuned accuracy of the optimized CNN model during the cost reduction. Experimental results on AlexNet and VGG-16 show that the proposed rank selection algorithm satisfies the accuracy constraint. Our method combined with truncated-SVD outperforms state-of-the-art methods in terms of inference and training time at almost the same accuracy.",
"title": ""
},
{
"docid": "15b8b0f3682e2eb7c1b1a62be65d6327",
"text": "Data augmentation is widely used to train deep neural networks for image classification tasks. Simply flipping images can help learning by increasing the number of training images by a factor of two. However, data augmentation in natural language processing is much less studied. Here, we describe two methods for data augmentation for Visual Question Answering (VQA). The first uses existing semantic annotations to generate new questions. The second method is a generative approach using recurrent neural networks. Experiments show the proposed schemes improve performance of baseline and state-of-the-art VQA algorithms.",
"title": ""
},
{
"docid": "500e8ab316398313c90a0ea374f28ee8",
"text": "Advances in the science and observation of climate change are providing a clearer understanding of the inherent variability of Earth’s climate system and its likely response to human and natural influences. The implications of climate change for the environment and society will depend not only on the response of the Earth system to changes in radiative forcings, but also on how humankind responds through changes in technology, economies, lifestyle and policy. Extensive uncertainties exist in future forcings of and responses to climate change, necessitating the use of scenarios of the future to explore the potential consequences of different response options. To date, such scenarios have not adequately examined crucial possibilities, such as climate change mitigation and adaptation, and have relied on research processes that slowed the exchange of information among physical, biological and social scientists. Here we describe a new process for creating plausible scenarios to investigate some of the most challenging and important questions about climate change confronting the global community.",
"title": ""
},
{
"docid": "6c018b35bf2172f239b2620abab8fd2f",
"text": "Cloud computing is quickly becoming the platform of choice for many web services. Virtualization is the key underlying technology enabling cloud providers to host services for a large number of customers. Unfortunately, virtualization software is large, complex, and has a considerable attack surface. As such, it is prone to bugs and vulnerabilities that a malicious virtual machine (VM) can exploit to attack or obstruct other VMs -- a major concern for organizations wishing to move to the cloud. In contrast to previous work on hardening or minimizing the virtualization software, we eliminate the hypervisor attack surface by enabling the guest VMs to run natively on the underlying hardware while maintaining the ability to run multiple VMs concurrently. Our NoHype system embodies four key ideas: (i) pre-allocation of processor cores and memory resources, (ii) use of virtualized I/O devices, (iii) minor modifications to the guest OS to perform all system discovery during bootup, and (iv) avoiding indirection by bringing the guest virtual machine in more direct contact with the underlying hardware. Hence, no hypervisor is needed to allocate resources dynamically, emulate I/O devices, support system discovery after bootup, or map interrupts and other identifiers. NoHype capitalizes on the unique use model in cloud computing, where customers specify resource requirements ahead of time and providers offer a suite of guest OS kernels. Our system supports multiple tenants and capabilities commonly found in hosted cloud infrastructures. Our prototype utilizes Xen 4.0 to prepare the environment for guest VMs, and a slightly modified version of Linux 2.6 for the guest OS. Our evaluation with both SPEC and Apache benchmarks shows a roughly 1% performance gain when running applications on NoHype compared to running them on top of Xen 4.0. Our security analysis shows that, while there are some minor limitations with cur- rent commodity hardware, NoHype is a significant advance in the security of cloud computing.",
"title": ""
},
{
"docid": "5e858796f025a9e2b91109835d827c68",
"text": "Several divergent application protocols have been proposed for Internet of Things (IoT) solutions including CoAP, REST, XMPP, AMQP, MQTT, DDS, and others. Each protocol focuses on a specific aspect of IoT communications. The lack of a protocol that can handle the vertical market requirements of IoT applications including machine-to-machine, machine-to-server, and server-to-server communications has resulted in a fragmented market between many protocols. In turn, this fragmentation is a main hindrance in the development of new services that require the integration of multiple IoT services to unlock new capabilities and provide horizontal integration among services. In this work, after articulating the major shortcomings of the current IoT protocols, we outline a rule-based intelligent gateway that bridges the gap between existing IoT protocols to enable the efficient integration of horizontal IoT services. While this intelligent gateway enhances the gloomy picture of protocol fragmentation in the context of IoT, it does not address the root cause of this fragmentation, which lies in the inability of the current protocols to offer a wide range of QoS guarantees. To offer a solution that stems the root cause of this protocol fragmentation issue, we propose a generic IoT protocol that is flexible enough to address the IoT vertical market requirements. In this regard, we enhance the baseline MQTT protocol by allowing it to support rich QoS features by exploiting a mix of IP multicasting, intelligent broker queuing management, and traffic analytics techniques. Our initial evaluation of the lightweight enhanced MQTT protocol reveals significant improvement over the baseline protocol in terms of the delay performance.",
"title": ""
},
{
"docid": "9888a7723089d2f1218e6e1a186a5e91",
"text": "This classic text offers you the key to understanding short circuits, open conductors and other problems relating to electric power systems that are subject to unbalanced conditions. Using the method of symmetrical components, acknowledged expert Paul M. Anderson provides comprehensive guidance for both finding solutions for faulted power systems and maintaining protective system applications. You'll learn to solve advanced problems, while gaining a thorough background in elementary configurations. Features you'll put to immediate use: Numerous examples and problems Clear, concise notation Analytical simplifications Matrix methods applicable to digital computer technology Extensive appendices",
"title": ""
},
{
"docid": "0ba8b4a1dc59e9f1fe68fbb1e491aa2b",
"text": "Capparis spinosa contained many biologically active chemical groups including, alkaloids, glycosides, tannins, phenolics, flavonoids, triterpenoids steroids, carbohydrates, saponins and a wide range of minerals and trace elements. It exerted many pharmacological effects including antimicrobial, cytotoxic, antidiabetic, anti-inflammatory, antioxidant, cardiovascular, bronchorelaxant and many other effects. The present review will designed to highlight the chemical constituents and the pharmacological effects of Capparis spinosa.",
"title": ""
},
{
"docid": "8abcf3e56e272c06da26a40d66afcfb0",
"text": "As internet use becomes increasingly integral to modern life, the hazards of excessive use are also becoming apparent. Prior research suggests that socially anxious individuals are particularly susceptible to problematic internet use. This vulnerability may relate to the perception of online communication as a safer means of interacting, due to greater control over self-presentation, decreased risk of negative evaluation, and improved relationship quality. To investigate these hypotheses, a general sample of 338 completed an online survey. Social anxiety was confirmed as a significant predictor of problematic internet use when controlling for depression and general anxiety. Social anxiety was associated with perceptions of greater control and decreased risk of negative evaluation when communicating online, however perceived relationship quality did not differ. Negative expectations during face-to-face interactions partially accounted for the relationship between social anxiety and problematic internet use. There was also preliminary evidence that preference for online communication exacerbates face-to-face avoidance.",
"title": ""
},
{
"docid": "46d5ecaeb529341dedcd724cfb3696bb",
"text": "Big Data stellt heute ein zentrales Thema der Informatik dar: Insbesondere durch die zunehmende Datafizierung unserer Umwelt entstehen neue und umfangreiche Datenquellen, während sich gleichzeitig die Verarbeitungsgeschwindigkeit von Daten wesentlich erhöht und diese Quellen somit immer häufiger in nahezu Echtzeit analysiert werden können. Neben der Bedeutung in der Informatik nimmt jedoch auch die Relevanz von Daten im täglichen Leben zu: Immer mehr Informationen sind das Ergebnis von Datenanalysen und immer häufiger werden Entscheidungen basierend auf Analyseergebnissen getroffen. Trotz der Relevanz von Daten und Datenverarbeitung im Alltag werden moderne Formen der Datenanalyse im Informatikunterricht bisher jedoch allenfalls am Rand betrachtet, sodass die Schülerinnen und Schüler weder die Möglichkeiten noch die Gefahren dieser Methoden erfahren können. In diesem Beitrag stellen wir daher ein prototypisches Unterrichtskonzept zum Thema Datenanalyse im Kontext von Big Data vor, in dem die Schülerinnen und Schüler wesentliche Grundlagen von Datenanalysen kennenlernen und nachvollziehen können. Um diese komplexen Systeme für den Informatikunterricht möglichst einfach zugänglich zu machen und mit realen Daten arbeiten zu können, wird dabei ein selbst implementiertes Datenstromsystem zur Verarbeitung des Datenstroms von Twitter eingesetzt.",
"title": ""
},
{
"docid": "4c2a41869b0ae000473a8623bd51b4c8",
"text": "This paper presents a novel voltage-angle-based field-weakening control scheme appropriate for the operation of permanent-magnet synchronous machines over a wide range of speed. At high rotational speed, the stator voltage is limited by the inverter dc bus voltage. To control the machine torque above the base speed, the proposed method controls the angle of the limited stator voltage by the integration of gain-scheduled q-axis current error. The stability of the drive is increased by a feedback loop, which compensates dynamic disturbances and smoothes the transition into field weakening. The proposed method can fully utilize the available dc bus voltage. Due to its simplicity, it is robust to the variation of machine parameters. Excellent performance of the proposed method is demonstrated through the experiments performed with and without speed and position sensors.",
"title": ""
},
{
"docid": "049c6062613d0829cf39cbfe4aedca7a",
"text": "Deep neural networks (DNN) are widely used in many applications. However, their deployment on edge devices has been difficult because they are resource hungry. Binary neural networks (BNN) help to alleviate the prohibitive resource requirements of DNN, where both activations and weights are limited to 1-bit. We propose an improved binary training method (BNN+), by introducing a regularization function that encourages training weights around binary values. In addition to this, to enhance model performance we add trainable scaling factors to our regularization functions. Furthermore, we use an improved approximation of the derivative of the sign activation function in the backward computation. These additions are based on linear operations that are easily implementable into the binary training framework. We show experimental results on CIFAR-10 obtaining an accuracy of 86.5%, on AlexNet and 91.3% with VGG network. On ImageNet, our method also outperforms the traditional BNN method and XNOR-net, using AlexNet by a margin of 4% and 2% top-1 accuracy respectively.",
"title": ""
},
{
"docid": "dd130195f82c005d1168608a0388e42d",
"text": "CONTEXT\nThe educational environment makes an important contribution to student learning. The DREEM questionnaire is a validated tool assessing the environment.\n\n\nOBJECTIVES\nTo translate and validate the DREEM into Greek.\n\n\nMETHODS\nForward translations from English were produced by three independent Greek translators and then back translations by five independent bilingual translators. The Greek DREEM.v0 that was produced was administered to 831 undergraduate students from six Greek medical schools. Cronbach's alpha and test-retest correlation were used to evaluate reliability and factor analysis was used to assess validity. Questions that increased alpha if deleted and/or sorted unexpectedly in factor analysis were further checked through two focus groups.\n\n\nFINDINGS\nQuestionnaires were returned by 487 respondents (59%), who were representative of all surveyed students by gender but not by year of study or medical school. The instrument's overall alpha was 0.90, and for the learning, teachers, academic, atmosphere and social subscales the alphas were 0.79 (expected 0.69), 0.78 (0.67), 0.69 (0.60), 0.68 (0.69), 0.48 (0.57), respectively. In a subset of the whole sample, test and retest alphas were both 0.90, and mean item scores highly correlated (p<0.001). Factor analysis produced meaningful subscales but not always matching the original ones. Focus group evaluation revealed possible misunderstanding for questions 17, 25, 29 and 38, which were revised in the DREEM.Gr.v1. The group mean overall scale score was 107.7 (SD 20.2), with significant differences across medical schools (p<0.001).\n\n\nCONCLUSION\nAlphas and test-retest correlation suggest the Greek translated and validated DREEM scale is a reliable tool for assessing the medical education environment and for informing policy. Factor analysis and focus group input suggest it is a valid tool. Reasonable school differences suggest the instrument's sensitivity.",
"title": ""
},
{
"docid": "54dd5e40748b13dafc672e143d20c3bc",
"text": "Reinforcement learning is a promising new approach for automatically developing effective policies for real-time self-management. RL can achieve superior performance to traditional methods, while requiring less built-in domain knowledge. Several case studies from real and simulated systems management applications demonstrate RL's promises and challenges. These studies show that standard online RL can learn effective policies in feasible training times. Moreover, a Hybrid RL approach can profit from any knowledge contained in an existing policy by training on the policy's observable behavior, without needing to interface directly to such knowledge",
"title": ""
},
{
"docid": "c1f095252c6c64af9ceeb33e78318b82",
"text": "Augmented reality (AR) is a technology in which a user's view of the real world is enhanced or augmented with additional information generated from a computer model. To have a working AR system, the see-through display system must be calibrated so that the graphics are properly rendered. The optical see-through systems present an additional challenge because, unlike the video see-through systems, we do not have direct access to the image data to be used in various calibration procedures. This paper reports on a calibration method we developed for optical see-through headmounted displays. We first introduce a method for calibrating monocular optical seethrough displays (that is, a display for one eye only) and then extend it to stereo optical see-through displays in which the displays for both eyes are calibrated in a single procedure. The method integrates the measurements for the camera and a six-degrees-offreedom tracker that is attached to the camera to do the calibration. We have used both an off-the-shelf magnetic tracker as well as a vision-based infrared tracker we have built. In the monocular case, the calibration is based on the alignment of image points with a single 3D point in the world coordinate system from various viewpoints. In this method, the user interaction to perform the calibration is extremely easy compared to prior methods, and there is no requirement for keeping the head immobile while performing the calibration. In the stereo calibration case, the user aligns a stereoscopically fused 2D marker, which is perceived in depth, with a single target point in the world whose coordinates are known. As in the monocular case, there is no requirement that the user keep his or her head fixed.",
"title": ""
},
{
"docid": "003d004f57d613ff78bf39a35e788bf9",
"text": "Breast cancer is one of the most common cancer in women worldwide. It is typically diagnosed via histopathological microscopy imaging, for which image analysis can aid physicians for more effective diagnosis. Given a large variability in tissue appearance, to better capture discriminative traits, images can be acquired at different optical magnifications. In this paper, we propose an approach which utilizes joint colour-texture features and a classifier ensemble for classifying breast histopathology images. While we demonstrate the effectiveness of the proposed framework, an important objective of this work is to study the image classification across different optical magnification levels. We provide interesting experimental results and related discussions, demonstrating a visible classification invariance with cross-magnification training-testing. Along with magnification-specific model, we also evaluate the magnification independent model, and compare the two to gain some insights.",
"title": ""
},
{
"docid": "715fda02bad1633be9097cc0a0e68c8d",
"text": "Data uncertainty is common in real-world applications due to various causes, including imprecise measurement, network latency, outdated sources and sampling errors. These kinds of uncertainty have to be handled cautiously, or else the mining results could be unreliable or even wrong. In this paper, we propose a new rule-based classification and prediction algorithm called uRule for classifying uncertain data. This algorithm introduces new measures for generating, pruning and optimizing rules. These new measures are computed considering uncertain data interval and probability distribution function. Based on the new measures, the optimal splitting attribute and splitting value can be identified and used for classification and prediction. The proposed uRule algorithm can process uncertainty in both numerical and categorical data. Our experimental results show that uRule has excellent performance even when data is highly uncertain.",
"title": ""
},
{
"docid": "a42163e2a6625006d04a9b9f6dddf9ce",
"text": "This paper concludes the theme issue on structural health monitoring (SHM) by discussing the concept of damage prognosis (DP). DP attempts to forecast system performance by assessing the current damage state of the system (i.e. SHM), estimating the future loading environments for that system, and predicting through simulation and past experience the remaining useful life of the system. The successful development of a DP capability will require the further development and integration of many technology areas including both measurement/processing/telemetry hardware and a variety of deterministic and probabilistic predictive modelling capabilities, as well as the ability to quantify the uncertainty in these predictions. The multidisciplinary and challenging nature of the DP problem, its current embryonic state of development, and its tremendous potential for life-safety and economic benefits qualify DP as a 'grand challenge' problem for engineers in the twenty-first century.",
"title": ""
},
{
"docid": "1a0ed30b64fa7f8d39a12acfcadfd763",
"text": "This letter presents a smart shelf configuration for radio frequency identification (RFID) application. The proposed shelf has an embedded leaking microstrip transmission line with extended ground plane. This structure, when connected to an RFID reader, allows detecting tagged objects in close proximity with proper field confinement to avoid undesired reading of neighboring shelves. The working frequency band covers simultaneously the three world assigned RFID subbands at ultrahigh frequency (UHF). The concept is explored by full-wave simulations and it is validated with thorough experimental tests.",
"title": ""
},
{
"docid": "f233f816c84407a4acd694f540bb18a9",
"text": "Link prediction is a key technique in many applications such as recommender systems, where potential links between users and items need to be predicted. A challenge in link prediction is the data sparsity problem. In this paper, we address this problem by jointly considering multiple heterogeneous link prediction tasks such as predicting links between users and different types of items including books, movies and songs, which we refer to as the collective link prediction (CLP) problem. We propose a nonparametric Bayesian framework for solving the CLP problem, which allows knowledge to be adaptively transferred across heterogeneous tasks while taking into account the similarities between tasks. We learn the inter-task similarity automatically. We also introduce link functions for different tasks to correct their biases and skewness of distributions in their link data. We conduct experiments on several real world datasets and demonstrate significant improvements over several existing state-of-the-art methods.",
"title": ""
},
{
"docid": "86fdb9b60508f87c0210623879185c8c",
"text": "This paper proposes a novel Hierarchical Parsing Net (HPN) for semantic scene parsing. Unlike previous methods, which separately classify each object, HPN leverages global scene semantic information and the context among multiple objects to enhance scene parsing. On the one hand, HPN uses the global scene category to constrain the semantic consistency between the scene and each object. On the other hand, the context among all objects is also modeled to avoid incompatible object predictions. Specifically, HPN consists of four steps. In the first step, we extract scene and local appearance features. Based on these appearance features, the second step is to encode a contextual feature for each object, which models both the scene-object context (the context between the scene and each object) and the interobject context (the context among different objects). In the third step, we classify the global scene and then use the scene classification loss and a backpropagation algorithm to constrain the scene feature encoding. In the fourth step, a label map for scene parsing is generated from the local appearance and contextual features. Our model outperforms many state-of-the-art deep scene parsing networks on five scene parsing databases.",
"title": ""
}
] | scidocsrr |
0080127afb31502bf9ce634f93bd4a63 | Augmenting End-to-End Dialogue Systems With Commonsense Knowledge | [
{
"docid": "d4a1acf0fedca674145599b4aa546de0",
"text": "Neural network models are capable of generating extremely natural sounding conversational interactions. However, these models have been mostly applied to casual scenarios (e.g., as “chatbots”) and have yet to demonstrate they can serve in more useful conversational applications. This paper presents a novel, fully data-driven, and knowledge-grounded neural conversation model aimed at producing more contentful responses. We generalize the widely-used Sequence-toSequence (SEQ2SEQ) approach by conditioning responses on both conversation history and external “facts”, allowing the model to be versatile and applicable in an open-domain setting. Our approach yields significant improvements over a competitive SEQ2SEQ baseline. Human judges found that our outputs are significantly more informative.",
"title": ""
},
{
"docid": "4410d7ed0d64e49e83111e6126cbc533",
"text": "We consider incorporating topic information as prior knowledge into the sequence to sequence (Seq2Seq) network structure with attention mechanism for response generation in chatbots. To this end, we propose a topic augmented joint attention based Seq2Seq (TAJASeq2Seq) model. In TAJA-Seq2Seq, information from input posts and information from topics related to the posts are simultaneously embedded into vector spaces by a content encoder and a topic encoder respectively. The two kinds of information interact with each other and help calibrate weights of each other in the joint attention mechanism in TAJA2Seq2Seq, and jointly determine the generation of responses in decoding. The model simulates how people behave in conversation and can generate well-focused and informative responses with the help of topic information. Empirical study on large scale human judged generation results show that our model outperforms Seq2Seq with attention on both response quality and diversity.",
"title": ""
}
] | [
{
"docid": "7fd33ebd4fec434dba53b15d741fdee4",
"text": "We present a data-efficient representation learning approach to learn video representation with small amount of labeled data. We propose a multitask learning model ActionFlowNet to train a single stream network directly from raw pixels to jointly estimate optical flow while recognizing actions with convolutional neural networks, capturing both appearance and motion in a single model. Our model effectively learns video representation from motion information on unlabeled videos. Our model significantly improves action recognition accuracy by a large margin (23.6%) compared to state-of-the-art CNN-based unsupervised representation learning methods trained without external large scale data and additional optical flow input. Without pretraining on large external labeled datasets, our model, by well exploiting the motion information, achieves competitive recognition accuracy to the models trained with large labeled datasets such as ImageNet and Sport-1M.",
"title": ""
},
{
"docid": "d574b43be735b5f560881a58c17f2acf",
"text": "People seek out situations that \"fit,\" but the concept of fit is not well understood. We introduce State Authenticity as Fit to the Environment (SAFE), a conceptual framework for understanding how social identities motivate the situations that people approach or avoid. Drawing from but expanding the authenticity literature, we first outline three types of person-environment fit: self-concept fit, goal fit, and social fit. Each type of fit, we argue, facilitates cognitive fluency, motivational fluency, and social fluency that promote state authenticity and drive approach or avoidance behaviors. Using this model, we assert that contexts subtly signal social identities in ways that implicate each type of fit, eliciting state authenticity for advantaged groups but state inauthenticity for disadvantaged groups. Given that people strive to be authentic, these processes cascade down to self-segregation among social groups, reinforcing social inequalities. We conclude by mapping out directions for research on relevant mechanisms and boundary conditions.",
"title": ""
},
{
"docid": "722e838f25efde8592c5eb7d8209ef45",
"text": "Machine learning algorithms are generally developed in computer science or adjacent disciplines and find their way into chemical modeling by a process of diffusion. Though particular machine learning methods are popular in chemoinformatics and quantitative structure-activity relationships (QSAR), many others exist in the technical literature. This discussion is methods-based and focused on some algorithms that chemoinformatics researchers frequently use. It makes no claim to be exhaustive. We concentrate on methods for supervised learning, predicting the unknown property values of a test set of instances, usually molecules, based on the known values for a training set. Particularly relevant approaches include Artificial Neural Networks, Random Forest, Support Vector Machine, k-Nearest Neighbors and naïve Bayes classifiers.",
"title": ""
},
{
"docid": "516153ca56874e4836497be9b7631834",
"text": "Shunt active power filter (SAPF) is the preeminent solution against nonlinear loads, current harmonics, and power quality problems. APF topologies for harmonic compensation use numerous high-power rating components and are therefore disadvantageous. Hybrid topologies combining low-power rating APF with passive filters are used to reduce the power rating of voltage source inverter (VSI). Hybrid APF topologies for high-power rating systems use a transformer with large numbers of passive components. In this paper, a novel four-switch two-leg VSI topology for a three-phase SAPF is proposed for reducing the system cost and size. The proposed topology comprises a two-arm bridge structure, four switches, coupling inductors, and sets of LC PFs. The third leg of the three-phase VSI is removed by eliminating the set of power switching devices, thereby directly connecting the phase with the negative terminals of the dc-link capacitor. The proposed topology enhances the harmonic compensation capability and provides complete reactive power compensation compared with conventional APF topologies. The new experimental prototype is tested in the laboratory to verify the results in terms of total harmonic distortion, balanced supply current, and harmonic compensation, following the IEEE-519 standard.",
"title": ""
},
{
"docid": "5d6cb50477423bf9fc1ea6c27ad0f1b9",
"text": "We propose a framework for general probabilistic multi-step time series regression. Specifically, we exploit the expressiveness and temporal nature of Sequence-to-Sequence Neural Networks (e.g. recurrent and convolutional structures), the nonparametric nature of Quantile Regression and the efficiency of Direct Multi-Horizon Forecasting. A new training scheme, forking-sequences, is designed for sequential nets to boost stability and performance. We show that the approach accommodates both temporal and static covariates, learning across multiple related series, shifting seasonality, future planned event spikes and coldstarts in real life large-scale forecasting. The performance of the framework is demonstrated in an application to predict the future demand of items sold on Amazon.com, and in a public probabilistic forecasting competition to predict electricity price and load.",
"title": ""
},
{
"docid": "1151348144ad2915f63f6b437e777452",
"text": "Smartphones, smartwatches, fitness trackers, and ad-hoc wearable devices are being increasingly used to monitor human activities. Data acquired by the hosted sensors are usually processed by machine-learning-based algorithms to classify human activities. The success of those algorithms mostly depends on the availability of training (labeled) data that, if made publicly available, would allow researchers to make objective comparisons between techniques. Nowadays, publicly available data sets are few, often contain samples from subjects with too similar characteristics, and very often lack of specific information so that is not possible to select subsets of samples according to specific criteria. In this article, we present a new smartphone accelerometer dataset designed for activity recognition. The dataset includes 11,771 activities performed by 30 subjects of ages ranging from 18 to 60 years. Activities are divided in 17 fine grained classes grouped in two coarse grained classes: 9 types of activities of daily living (ADL) and 8 types of falls. The dataset has been stored to include all the information useful to select samples according to different criteria, such as the type of ADL performed, the age, the gender, and so on. Finally, the dataset has been benchmarked with two different classifiers and with different configurations. The best results are achieved with k-NN classifying ADLs only, considering personalization, and with both windows of 51 and 151 samples.",
"title": ""
},
{
"docid": "0022623017e81ee0a102da0524c83932",
"text": "Calcite is a new Eclipse plugin that helps address the difficulty of understanding and correctly using an API. Calcite finds the most popular ways to instantiate a given class or interface by using code examples. To allow the users to easily add these object instantiations to their code, Calcite adds items to the popup completion menu that will insert the appropriate code into the user’s program. Calcite also uses crowd sourcing to add to the menu instructions in the form of comments that help the user perform functions that people have identified as missing from the API. In a user study, Calcite improved users’ success rate by 40%.",
"title": ""
},
{
"docid": "744d409ba86a8a60fafb5c5602f6d0f0",
"text": "In this paper, we apply a context-sensitive technique for multimodal emotion recognition based on feature-level fusion of acoustic and visual cues. We use bidirectional Long ShortTerm Memory (BLSTM) networks which, unlike most other emotion recognition approaches, exploit long-range contextual information for modeling the evolution of emotion within a conversation. We focus on recognizing dimensional emotional labels, which enables us to classify both prototypical and nonprototypical emotional expressions contained in a large audiovisual database. Subject-independent experiments on various classification tasks reveal that the BLSTM network approach generally prevails over standard classification techniques such as Hidden Markov Models or Support Vector Machines, and achieves F1-measures of the order of 72 %, 65 %, and 55 % for the discrimination of three clusters in emotional space and the distinction between three levels of valence and activation, respectively.",
"title": ""
},
{
"docid": "f4bb9f769659436c79b67765145744ac",
"text": "Sparse Principal Component Analysis (S-PCA) is a novel framework for learning a linear, orthonormal basis representation for structure intrinsic to an ensemble of images. S-PCA is based on the discovery that natural images exhibit structure in a low-dimensional subspace in a sparse, scale-dependent form. The S-PCA basis optimizes an objective function which trades off correlations among output coefficients for sparsity in the description of basis vector elements. This objective function is minimized by a simple, robust and highly scalable adaptation algorithm, consisting of successive planar rotations of pairs of basis vectors. The formulation of S-PCA is novel in that multi-scale representations emerge for a variety of ensembles including face images, images from outdoor scenes and a database of optical flow vectors representing a motion class.",
"title": ""
},
{
"docid": "696069ce14bb37713421a01686555a92",
"text": "We propose a Bayesian trajectory prediction and criticality assessment system that allows to reason about imminent collisions of a vehicle several seconds in advance. We first infer a distribution of high-level, abstract driving maneuvers such as lane changes, turns, road followings, etc. of all vehicles within the driving scene by modeling the domain in a Bayesian network with both causal and diagnostic evidences. This is followed by maneuver-based, long-term trajectory predictions, which themselves contain random components due to the immanent uncertainty of how drivers execute specific maneuvers. Taking all uncertain predictions of all maneuvers of every vehicle into account, the probability of the ego vehicle colliding at least once within a time span is evaluated via Monte-Carlo simulations and given as a function of the prediction horizon. This serves as the basis for calculating a novel criticality measure, the Time-To-Critical-Collision-Probability (TTCCP) - a generalization of the common Time-To-Collision (TTC) in arbitrary, uncertain, multi-object driving environments and valid for longer prediction horizons. The system is applicable from highly-structured to completely non-structured environments and additionally allows the prediction of vehicles not behaving according to a specific maneuver class.",
"title": ""
},
{
"docid": "e83873daee4f8dae40c210987d9158e8",
"text": "Domain ontologies are important information sources for knowledge-based systems. Yet, building domain ontologies from scratch is known to be a very labor-intensive process. In this study, we present our semi-automatic approach to building an ontology for the domain of wind energy which is an important type of renewable energy with a growing share in electricity generation all over the world. Related Wikipedia articles are first processed in an automated manner to determine the basic concepts of the domain together with their properties and next the concepts, properties, and relationships are organized to arrive at the ultimate ontology. We also provide pointers to other engineering ontologies which could be utilized together with the proposed wind energy ontology in addition to its prospective application areas. The current study is significant as, to the best of our knowledge, it proposes the first considerably wide-coverage ontology for the wind energy domain and the ontology is built through a semi-automatic process which makes use of the related Web resources, thereby reducing the overall cost of the ontology building process.",
"title": ""
},
{
"docid": "5d557ecb67df253662e37d6ec030d055",
"text": "Low-rank matrix approximation methods provide one of the simplest and most effective approaches to collaborative filtering. Such models are usually fitted to data by finding a MAP estimate of the model parameters, a procedure that can be performed efficiently even on very large datasets. However, unless the regularization parameters are tuned carefully, this approach is prone to overfitting because it finds a single point estimate of the parameters. In this paper we present a fully Bayesian treatment of the Probabilistic Matrix Factorization (PMF) model in which model capacity is controlled automatically by integrating over all model parameters and hyperparameters. We show that Bayesian PMF models can be efficiently trained using Markov chain Monte Carlo methods by applying them to the Netflix dataset, which consists of over 100 million movie ratings. The resulting models achieve significantly higher prediction accuracy than PMF models trained using MAP estimation.",
"title": ""
},
{
"docid": "1b3b2b8872d3b846120502a7a40e03d0",
"text": "A viable fully on-line adaptive brain computer interface (BCI) is introduced. On-line experiments with nine naive and able-bodied subjects were carried out using a continuously adaptive BCI system. The data were analyzed and the viability of the system was studied. The BCI was based on motor imagery, the feature extraction was performed with an adaptive autoregressive model and the classifier used was an adaptive quadratic discriminant analysis. The classifier was on-line updated by an adaptive estimation of the information matrix (ADIM). The system was also able to provide continuous feedback to the subject. The success of the feedback was studied analyzing the error rate and mutual information of each session and this analysis showed a clear improvement of the subject's control of the BCI from session to session.",
"title": ""
},
{
"docid": "ed8fef21796713aba1a6375a840c8ba3",
"text": "PURPOSE\nThe novel self-paced maximal-oxygen-uptake (VO2max) test (SPV) may be a more suitable alternative to traditional maximal tests for elite athletes due to the ability to self-regulate pace. This study aimed to examine whether the SPV can be administered on a motorized treadmill.\n\n\nMETHODS\nFourteen highly trained male distance runners performed a standard graded exercise test (GXT), an incline-based SPV (SPVincline), and a speed-based SPV (SPVspeed). The GXT included a plateau-verification stage. Both SPV protocols included 5×2-min stages (and a plateau-verification stage) and allowed for self-pacing based on fixed increments of rating of perceived exertion: 11, 13, 15, 17, and 20. The participants varied their speed and incline on the treadmill by moving between different marked zones in which the tester would then adjust the intensity.\n\n\nRESULTS\nThere was no significant difference (P=.319, ES=0.21) in the VO2max achieved in the SPVspeed (67.6±3.6 mL·kg(-1)·min(-1), 95%CI=65.6-69.7 mL·kg(-1)·min(-1)) compared with that achieved in the GXT (68.6±6.0 mL·kg(-1)·min(-1), 95%CI=65.1-72.1 mL·kg(-1)·min(-1)). Participants achieved a significantly higher VO2max in the SPVincline (70.6±4.3 mL·kg(-1)·min(-1), 95%CI=68.1-73.0 mL·kg(-1)·min(-1)) than in either the GXT (P=.027, ES=0.39) or SPVspeed (P=.001, ES=0.76).\n\n\nCONCLUSIONS\nThe SPVspeed protocol produces VO2max values similar to those obtained in the GXT and may represent a more appropriate and athlete-friendly test that is more oriented toward the variable speed found in competitive sport.",
"title": ""
},
{
"docid": "06d42f15aa724120bd99f3ab3bed6053",
"text": "With today's unprecedented proliferation in smart-devices, the Internet of Things Vision has become more of a reality than ever. With the extreme diversity of applications running on these heterogeneous devices, numerous middle-ware solutions have consequently emerged to address IoT-related challenges. These solutions however, heavily rely on the cloud for better data management, integration, and processing. This might potentially compromise privacy, add latency, and place unbearable traffic load. In this paper, we propose The Hive, an edge-based middleware architecture and protocol, that enables heterogeneous edge devices to dynamically share data and resources for enhanced application performance and privacy. We implement a prototype of the Hive, test it for basic robustness, show its modularity, and evaluate its performance with a real world smart emotion recognition application running on edge devices.",
"title": ""
},
{
"docid": "8b0a09cbac4b1cbf027579ece3dea9ef",
"text": "Knowing the sequence specificities of DNA- and RNA-binding proteins is essential for developing models of the regulatory processes in biological systems and for identifying causal disease variants. Here we show that sequence specificities can be ascertained from experimental data with 'deep learning' techniques, which offer a scalable, flexible and unified computational approach for pattern discovery. Using a diverse array of experimental data and evaluation metrics, we find that deep learning outperforms other state-of-the-art methods, even when training on in vitro data and testing on in vivo data. We call this approach DeepBind and have built a stand-alone software tool that is fully automatic and handles millions of sequences per experiment. Specificities determined by DeepBind are readily visualized as a weighted ensemble of position weight matrices or as a 'mutation map' that indicates how variations affect binding within a specific sequence.",
"title": ""
},
{
"docid": "9646160d55bf5fe6d883ac62075c7560",
"text": "The authors provide a systematic security analysis on the sharing methods of three major cloud storage and synchronization services: Dropbox, Google Drive, and Microsoft SkyDrive. They show that all three services have security weaknesses that may result in data leakage without users' awareness.",
"title": ""
},
{
"docid": "b17fdc300edc22ab855d4c29588731b2",
"text": "Describing clothing appearance with semantic attributes is an appealing technique for many important applications. In this paper, we propose a fully automated system that is capable of generating a list of nameable attributes for clothes on human body in unconstrained images. We extract low-level features in a pose-adaptive manner, and combine complementary features for learning attribute classifiers. Mutual dependencies between the attributes are then explored by a Conditional Random Field to further improve the predictions from independent classifiers. We validate the performance of our system on a challenging clothing attribute dataset, and introduce a novel application of dressing style analysis that utilizes the semantic attributes produced by our system.",
"title": ""
},
{
"docid": "a6f11cf1bf479fe72dcb8dabb53176ee",
"text": "This paper focuses on WPA and IEEE 802.11i protocols that represent two important solutions in the wireless environment. Scenarios where it is possible to produce a DoS attack and DoS flooding attacks are outlined. The last phase of the authentication process, represented by the 4-way handshake procedure, is shown to be unsafe from DoS attack. This can produce the undesired effect of memory exhaustion if a flooding DoS attack is conducted. In order to avoid DoS attack without increasing the complexity of wireless mobile devices too much and without changing through some further control fields of the frame structure of wireless security protocols, a solution is found and an extension of WPA and IEEE 802.11 is proposed. A protocol extension with three “static” variants and with a resource-aware dynamic approach is considered. The three enhancements to the standard protocols are achieved through some simple changes on the client side and they are robust against DoS and DoS flooding attack. Advantages introduced by the proposal are validated by simulation campaigns and simulation parameters such as attempted attacks, successful attacks, and CPU load, while the algorithm execution time is evaluated. Simulation results show how the three static solutions avoid memory exhaustion and present a good performance in terms of CPU load and execution time in comparison with the standard WPA and IEEE 802.11i protocols. However, if the mobile device presents different resource availability in terms of CPU and memory or if resource availability significantly changes in time, a dynamic approach that is able to switch among three different modalities could be more suitable.",
"title": ""
}
] | scidocsrr |
7370e36cddefd67a8bb8250286d22c20 | The RowHammer problem and other issues we may face as memory becomes denser | [
{
"docid": "c97fe8ccd39a1ad35b5f09377f45aaa2",
"text": "With continued scaling of NAND flash memory process technology and multiple bits programmed per cell, NAND flash reliability and endurance are degrading. In our research, we experimentally measure, characterize, analyze, and model error patterns in nanoscale flash memories. Based on the understanding developed using real flash memory chips, we design techniques for more efficient and effective error management than traditionally used costly error correction codes.",
"title": ""
},
{
"docid": "73284fdf9bc025672d3b97ca5651084a",
"text": "With continued scaling of NAND flash memory process technology and multiple bits programmed per cell, NAND flash reliability and endurance are degrading. Understanding, characterizing, and modeling the distribution of the threshold voltages across different cells in a modern multi-level cell (MLC) flash memory can enable the design of more effective and efficient error correction mechanisms to combat this degradation. We show the first published experimental measurement-based characterization of the threshold voltage distribution of flash memory. To accomplish this, we develop a testing infrastructure that uses the read retry feature present in some 2Y-nm (i.e., 20-24nm) flash chips. We devise a model of the threshold voltage distributions taking into account program/erase (P/E) cycle effects, analyze the noise in the distributions, and evaluate the accuracy of our model. A key result is that the threshold voltage distribution can be modeled, with more than 95% accuracy, as a Gaussian distribution with additive white noise, which shifts to the right and widens as P/E cycles increase. The novel characterization and models provided in this paper can enable the design of more effective error tolerance mechanisms for future flash memories.",
"title": ""
},
{
"docid": "3763da6b72ee0a010f3803a901c9eeb2",
"text": "As NAND flash memory manufacturers scale down to smaller process technology nodes and store more bits per cell, reliability and endurance of flash memory reduce. Wear-leveling and error correction coding can improve both reliability and endurance, but finding effective algorithms requires a strong understanding of flash memory error patterns. To enable such understanding, we have designed and implemented a framework for fast and accurate characterization of flash memory throughout its lifetime. This paper examines the complex flash errors that occur at 30-40nm flash technologies. We demonstrate distinct error patterns, such as cycle-dependency, location-dependency and value-dependency, for various types of flash operations. We analyze the discovered error patterns and explain why they exist from a circuit and device standpoint. Our hope is that the understanding developed from this characterization serves as a building block for new error tolerance algorithms for flash memory.",
"title": ""
}
] | [
{
"docid": "dc76a4d28841e703b961a1126bd28a39",
"text": "In this work, we study the problem of anomaly detection of the trajectories of objects in a visual scene. For this purpose, we propose a novel representation for trajectories utilizing covariance features. Representing trajectories via co-variance features enables us to calculate the distance between the trajectories of different lengths. After setting this proposed representation and calculation of distances, anomaly detection is achieved by sparse representations on nearest neighbours. Conducted experiments on both synthetic and real datasets show that the proposed method yields results which are outperforming or comparable with state of the art.",
"title": ""
},
{
"docid": "9b45bb1734e9afc34b14fa4bc47d8fba",
"text": "To achieve complex solutions in the rapidly changing world of e-commerce, it is impossible to go it alone. This explains the latest trend in IT outsourcing---global and partner-based alliances. But where do we go from here?",
"title": ""
},
{
"docid": "5772e4bfb9ced97ff65b5fdf279751f4",
"text": "Deep convolutional neural networks excel at sentiment polarity classification, but tend to require substantial amounts of training data, which moreover differs quite significantly between domains. In this work, we present an approach to feed generic cues into the training process of such networks, leading to better generalization abilities given limited training data. We propose to induce sentiment embeddings via supervision on extrinsic data, which are then fed into the model via a dedicated memorybased component. We observe significant gains in effectiveness on a range of different datasets in seven different languages.",
"title": ""
},
{
"docid": "fe89c8a17676b7767cfa40e7822b8d25",
"text": "Previous machine comprehension (MC) datasets are either too small to train endto-end deep learning models, or not difficult enough to evaluate the ability of current MC techniques. The newly released SQuAD dataset alleviates these limitations, and gives us a chance to develop more realistic MC models. Based on this dataset, we propose a Multi-Perspective Context Matching (MPCM) model, which is an end-to-end system that directly predicts the answer beginning and ending points in a passage. Our model first adjusts each word-embedding vector in the passage by multiplying a relevancy weight computed against the question. Then, we encode the question and weighted passage by using bi-directional LSTMs. For each point in the passage, our model matches the context of this point against the encoded question from multiple perspectives and produces a matching vector. Given those matched vectors, we employ another bi-directional LSTM to aggregate all the information and predict the beginning and ending points. Experimental result on the test set of SQuAD shows that our model achieves a competitive result on the leaderboard.",
"title": ""
},
{
"docid": "4bee6ec901c365f3780257ed62b7c020",
"text": "There is no explicitly known example of a triple (g, a, x), where g ≥ 3 is an integer, a a digit in {0, . . . , g − 1} and x a real algebraic irrational number, for which one can claim that the digit a occurs infinitely often in the g–ary expansion of x. In 1909 and later in 1950, É. Borel considered such questions and suggested that the g–ary expansion of any algebraic irrational number in any base g ≥ 2 satisfies some of the laws that are satisfied by almost all numbers. For instance, the frequency where a given finite sequence of digits occurs should depend only on the base and on the length of the sequence. Hence there is a huge gap between the established theory and the expected state of the art. However, some progress have been made recently, mainly thanks to clever use of the Schmidt’s subspace Theorem. We review some of these results.",
"title": ""
},
{
"docid": "efd3280939a90041f50c4938cf886deb",
"text": "A distributed double integrator discrete time consensus protocol is presented along with stability analysis. The protocol will achieve consensus when the communication topology contains at least a directed spanning tree. Average consensus is achieved when the communication topology is strongly connected and balanced, where average consensus for double integrator systems is discussed. For second order systems average consensus occurs when the information states tend toward the average of the current information states not their initial values. Lastly, perturbation to the consensus protocol is addressed. Using a designed perturbation input, an algorithm is presented that accurately tracks the center of a vehicle formation in a decentralized manner.",
"title": ""
},
{
"docid": "6421979368a138e4b21ab7d9602325ff",
"text": "In recent years, despite several risk management models proposed by different researchers, software projects still have a high degree of failures. Improper risk assessment during software development was the major reason behind these unsuccessful projects as risk analysis was done on overall projects. This work attempts in identifying key risk factors and risk types for each of the development phases of SDLC, which would help in identifying the risks at a much early stage of development.",
"title": ""
},
{
"docid": "0963b6b27b57575bd34ff8f5bd330536",
"text": "The human ocular surface spans from the conjunctiva to the cornea and plays a critical role in visual perception. Cornea, the anterior portion of the eye, is transparent and provides the eye with two-thirds of its focusing power and protection of ocular integrity. The cornea consists of five main layers, namely, corneal epithelium, Bowman’s layer, corneal stroma, Descemet’s membrane and corneal endothelium. The outermost layer of the cornea, which is exposed to the external environment, is the corneal epithelium. Corneal epithelial integrity and transparency are maintained by somatic stem cells (SC) that reside in the limbus. The limbus, an anatomical structure 1-2 mm wide, circumscribes the peripheral cornea and separates it from the conjunctiva (Cotsarelis et al., 1989, Davanger and Evensen, 1971) (Figure 1). Any damage to the ocular surface by burns, or various infections, can threaten vision. The most insidious of such damaging conditions is limbal stem cell deficiency (LSCD). Clinical signs of LSCD include corneal vascularization, chronic stromal inflammation, ingrowth of conjunctival epithelium onto the corneal surface and persistent epithelial defects (Lavker et al., 2004). Primary limbal stem cell deficiency is associated with aniridia and ectodermal dysplasia. Acquired limbal stem cell deficiency has been associated with inflammatory conditions (Stevens–Johnson syndrome (SJS), ocular cicatricial pemphigoid), ocular trauma (chemical and thermal burns), contact lens wear, corneal infection, neoplasia, peripheral ulcerative corneal disease and neurotrophic keratopathy (Dua et al., 2000, Jeng et al., 2011). Corneal stem cells and/or their niche are known to play important anti-angiogenic and anti-inflamatory roles in maintaining a normal corneal microenvironment, the destruction of which in LSCD, tips the balance toward pro-angiogenic conditions (Lim et al., 2009). For a long time, the primary treatment for LSCD has been transplantation of healthy keratolimbal tissue from autologous, allogenic, or cadaveric sources. In the late 1990s, cultured, autologous, limbal epithelial cell implants were used successfully to improve vision in two patients with chemical injury-induced LSCD (Pellegrini et al., 1997). Since then, transplantation of cultivated epithelial (stem) cells has become a treatment of choice for numerous LSCD patients worldwide. While the outcomes are promising, the variability of methodologies used to expand the cells, points to an underlying need for better standardization of ex vivo cultivation-based therapies and their outcome measures (Sangwan et al., 2005, Ti et al., 2004, Grueterich et al., 2002b, Kolli et al., 2010).",
"title": ""
},
{
"docid": "9a6ce56536585e54d3e15613b2fa1197",
"text": "This paper discusses the Urdu script characteristics, Urdu Nastaleeq and a simple but a novel and robust technique to recognize the printed Urdu script without a lexicon. Urdu being a family of Arabic script is cursive and complex script in its nature, the main complexity of Urdu compound/connected text is not its connections but the forms/shapes the characters change when it is placed at initial, middle or at the end of a word. The characters recognition technique presented here is using the inherited complexity of Urdu script to solve the problem. A word is scanned and analyzed for the level of its complexity, the point where the level of complexity changes is marked for a character, segmented and feeded to Neural Networks. A prototype of the system has been tested on Urdu text and currently achieves 93.4% accuracy on the average. Keywords— Cursive Script, OCR, Urdu.",
"title": ""
},
{
"docid": "8c4d4567cf772a76e99aa56032f7e99e",
"text": "This paper discusses current perspectives on play and leisure and proposes that if play and leisure are to be accepted as viable occupations, then (a) valid and reliable measures of play must be developed, (b) interventions must be examined for inclusion of the elements of play, and (c) the promotion of play and leisure must be an explicit goal of occupational therapy intervention. Existing tools used by occupational therapists to assess clients' play and leisure are evaluated for the aspects of play and leisure they address and the aspects they fail to address. An argument is presented for the need for an assessment of playfulness, rather than of play or leisure activities. A preliminary model for the development of such an assessment is proposed.",
"title": ""
},
{
"docid": "e0320fc4031a4d1d09c9255012c3d03c",
"text": "We develop a model of premium sharing for firms that offer multiple insurance plans. We assume that firms offer one low quality plan and one high quality plan. Under the assumption of wage rigidities we found that the employee's contribution to each plan is an increasing function of that plan's premium. The effect of the other plan's premium is ambiguous. We test our hypothesis using data from the Employer Health Benefit Survey. Restricting the analysis to firms that offer both HMO and PPO plans, we measure the amount of the premium passed on to employees in response to a change in both premiums. We find evidence of large and positive effects of the increase in the plan's premium on the amount of the premium passed on to employees. The effect of the alternative plan's premium is negative but statistically significant only for the PPO plans.",
"title": ""
},
{
"docid": "dcd116e601c9155d60364c19a1f0dfb7",
"text": "The DSM-5 Self-Rated Level 1 Cross-Cutting Symptom Measure was developed to aid clinicians with a dimensional assessment of psychopathology; however, this measure resembles a screening tool for several symptomatic domains. The objective of the current study was to examine the basic parameters of sensitivity, specificity, positive and negative predictive power of the measure as a screening tool. One hundred and fifty patients in a correctional community center filled out the measure prior to a psychiatric evaluation, including the Mini International Neuropsychiatric Interview screen. The above parameters were calculated for the domains of depression, mania, anxiety, and psychosis. The results showed that the sensitivity and positive predictive power of the studied domains was poor because of a high rate of false positive answers on the measure. However, when the lowest threshold on the Cross-Cutting Symptom Measure was used, the sensitivity of the anxiety and psychosis domains and the negative predictive values for mania, anxiety and psychosis were good. In conclusion, while it is foreseeable that some clinicians may use the DSM-5 Self-Rated Level 1 Cross-Cutting Symptom Measure as a screening tool, it should not be relied on to identify positive findings. It functioned well in the negative prediction of mania, anxiety and psychosis symptoms.",
"title": ""
},
{
"docid": "5da2747dd2c3fe5263d8bfba6e23de1f",
"text": "We propose to transfer the content of a text written in a certain style to an alternative text written in a different style, while maintaining as much as possible of the original meaning. Our work is inspired by recent progress of applying style transfer to images, as well as attempts to replicate the results to text. Our model is a deep neural network based on Generative Adversarial Networks (GAN). Our novelty is replacing the discrete next-word prediction with prediction in the embedding space, which provides two benefits (1) train the GAN without using gradient approximations and (2) provide semantically related results even for failure cases.",
"title": ""
},
{
"docid": "b059f6d2e9f10e20417f97c05d92c134",
"text": "We present a hybrid analog/digital very large scale integration (VLSI) implementation of a spiking neural network with programmable synaptic weights. The synaptic weight values are stored in an asynchronous Static Random Access Memory (SRAM) module, which is interfaced to a fast current-mode event-driven DAC for producing synaptic currents with the appropriate amplitude values. These currents are further integrated by current-mode integrator synapses to produce biophysically realistic temporal dynamics. The synapse output currents are then integrated by compact and efficient integrate and fire silicon neuron circuits with spike-frequency adaptation and adjustable refractory period and spike-reset voltage settings. The fabricated chip comprises a total of 32 × 32 SRAM cells, 4 × 32 synapse circuits and 32 × 1 silicon neurons. It acts as a transceiver, receiving asynchronous events in input, performing neural computation with hybrid analog/digital circuits on the input spikes, and eventually producing digital asynchronous events in output. Input, output, and synaptic weight values are transmitted to/from the chip using a common communication protocol based on the Address Event Representation (AER). Using this representation it is possible to interface the device to a workstation or a micro-controller and explore the effect of different types of Spike-Timing Dependent Plasticity (STDP) learning algorithms for updating the synaptic weights values in the SRAM module. We present experimental results demonstrating the correct operation of all the circuits present on the chip.",
"title": ""
},
{
"docid": "d06c91afbfd79e40d0d6fe326e3be957",
"text": "This meta-analysis included 66 studies (N = 4,176) on parental antecedents of attachment security. The question addressed was whether maternal sensitivity is associated with infant attachment security, and what the strength of this relation is. It was hypothesized that studies more similar to Ainsworth's Baltimore study (Ainsworth, Blehar, Waters, & Wall, 1978) would show stronger associations than studies diverging from this pioneering study. To create conceptually homogeneous sets of studies, experts divided the studies into 9 groups with similar constructs and measures of parenting. For each domain, a meta-analysis was performed to describe the central tendency, variability, and relevant moderators. After correction for attenuation, the 21 studies (N = 1,099) in which the Strange Situation procedure in nonclinical samples was used, as well as preceding or concurrent observational sensitivity measures, showed a combined effect size of r(1,097) = .24. According to Cohen's (1988) conventional criteria, the association is moderately strong. It is concluded that in normal settings sensitivity is an important but not exclusive condition of attachment security. Several other dimensions of parenting are identified as playing an equally important role. In attachment theory, a move to the contextual level is required to interpret the complex transactions between context and sensitivity in less stable and more stressful settings, and to pay more attention to nonshared environmental influences.",
"title": ""
},
{
"docid": "b92484f67bf2d3f71d51aee9fb7abc86",
"text": "This research addresses the kinds of matching elements that determine analogical relatedness and literal similarity. Despite theoretical agreement on the importance of relational match, the empirical evidence is neither systematic nor definitive. In 3 studies, participants performed online evaluations of relatedness of sentence pairs that varied in either the object or relational match. Results show a consistent focus on relational matches as the main determinant of analogical acceptance. In addition, analogy does not require strict overall identity of relational concepts. Semantically overlapping but nonsynonymous relations were commonly accepted, but required more processing time. Finally, performance in a similarity rating task partly paralleled analogical acceptance; however, relatively more weight was given to object matches. Implications for psychological theories of analogy and similarity are addressed.",
"title": ""
},
{
"docid": "f1681e1c8eef93f15adb5a4d7313c94c",
"text": "The paper investigates techniques for extracting data from HTML sites through the use of automatically generated wrappers. To automate the wrapper generation and the data extraction process, the paper develops a novel technique to compare HTML pages and generate a wrapper based on their similarities and differences. Experimental results on real-life data-intensive Web sites confirm the feasibility of the approach.",
"title": ""
},
{
"docid": "139ecd9ff223facaec69ad6532f650db",
"text": "Student retention in open and distance learning (ODL) is comparatively poor to traditional education and, in some contexts, embarrassingly low. Literature on the subject of student retention in ODL indicates that even when interventions are designed and undertaken to improve student retention, they tend to fall short. Moreover, this area has not been well researched. The main aim of our research, therefore, is to better understand and measure students’ attitudes and perceptions towards the effectiveness of mobile learning. Our hope is to determine how this technology can be optimally used to improve student retention at Bachelor of Science programmes at Indira Gandhi National Open University (IGNOU) in India. For our research, we used a survey. Results of this survey clearly indicate that offering mobile learning could be one method improving retention of BSc students, by enhancing their teaching/ learning and improving the efficacy of IGNOU’s existing student support system. The biggest advantage of this technology is that it can be used anywhere, anytime. Moreover, as mobile phone usage in India explodes, it offers IGNOU easy access to a larger number of learners. This study is intended to help inform those who are seeking to adopt mobile learning systems with the aim of improving communication and enriching students’ learning experiences in their ODL institutions.",
"title": ""
},
{
"docid": "b5aad69e6a0f672cdaa1f81187a48d57",
"text": "In this paper, we propose novel methodologies for the automatic segmentation and recognition of multi-food images. The proposed methods implement the first modules of a carbohydrate counting and insulin advisory system for type 1 diabetic patients. Initially the plate is segmented using pyramidal mean-shift filtering and a region growing algorithm. Then each of the resulted segments is described by both color and texture features and classified by a support vector machine into one of six different major food classes. Finally, a modified version of the Huang and Dom evaluation index was proposed, addressing the particular needs of the food segmentation problem. The experimental results prove the effectiveness of the proposed method achieving a segmentation accuracy of 88.5% and recognition rate equal to 87%.",
"title": ""
}
] | scidocsrr |
47b0cae56e5e04ca4fa7e91be1b8c7d1 | Empathy and Its Modulation in a Virtual Human | [
{
"docid": "8efee8d7c3bf229fa5936209c43a7cff",
"text": "This research investigates the meaning of “human-computer relationship” and presents techniques for constructing, maintaining, and evaluating such relationships, based on research in social psychology, sociolinguistics, communication and other social sciences. Contexts in which relationships are particularly important are described, together with specific benefits (like trust) and task outcomes (like improved learning) known to be associated with relationship quality. We especially consider the problem of designing for long-term interaction, and define relational agents as computational artifacts designed to establish and maintain long-term social-emotional relationships with their users. We construct the first such agent, and evaluate it in a controlled experiment with 101 users who were asked to interact daily with an exercise adoption system for a month. Compared to an equivalent task-oriented agent without any deliberate social-emotional or relationship-building skills, the relational agent was respected more, liked more, and trusted more, even after four weeks of interaction. Additionally, users expressed a significantly greater desire to continue working with the relational agent after the termination of the study. We conclude by discussing future directions for this research together with ethical and other ramifications of this work for HCI designers.",
"title": ""
}
] | [
{
"docid": "ec8ffeb175dbd392e877d7704705f44e",
"text": "Business Intelligence (BI) solutions commonly aim at assisting decision-making processes by providing a comprehensive view over a company’s core business data and suitable abstractions thereof. Decision-making based on BI solutions therefore builds on the assumption that providing users with targeted, problemspecific fact data enables them to make informed and, hence, better decisions in their everyday businesses. In order to really provide users with all the necessary details to make informed decisions, we however believe that – in addition to conventional reports – it is essential to also provide users with information about the quality, i.e. with quality metadata, regarding the data from which reports are generated. Identifying a lack of support for quality metadata management in conventional BI solutions, in this paper we propose the idea of quality-aware reports and a possible architecture for quality-aware BI, able to involve the users themselves into the quality metadata management process, by explicitly soliciting and exploiting user feedback.",
"title": ""
},
{
"docid": "2d86a717ef4f83ff0299f15ef1df5b1b",
"text": "Proactive interference (PI) refers to the finding that memory for recently studied (target) information can be vastly impaired by the previous study of other (nontarget) information. PI can be reduced in a number of ways, for instance, by directed forgetting of the prior nontarget information, the testing of the prior nontarget information, or an internal context change before study of the target information. Here we report the results of four experiments, in which we demonstrate that all three forms of release from PI are accompanied by a decrease in participants’ response latencies. Because response latency is a sensitive index of the size of participants’ mental search set, the results suggest that release from PI can reflect more focused memory search, with the previously studied nontarget items being largely eliminated from the search process. Our results thus provide direct evidence for a critical role of retrieval processes in PI release. 2012 Elsevier Inc. All rights reserved. Introduction buildup of PI is caused by a failure to distinguish items Proactive interference (PI) refers to the finding that memory for recently studied information can be vastly impaired by the previous study of further information (e.g., Underwood, 1957). In a typical PI experiment, participants study a (target) list of items and are later tested on it. In the PI condition, participants study further (nontarget) lists that precede encoding of the target information, whereas in the no-PI condition participants engage in an unrelated distractor task. Typically, recall of the target list is worse in the PI condition than the no-PI condition, which reflects the PI finding. PI has been extensively studied in the past century, has proven to be a very robust finding, and has been suggested to be one of the major causes of forgetting in everyday life (e.g., Underwood, 1957; for reviews, see Anderson & Neely, 1996; Crowder, 1976). Over the years, a number of theories have been put forward to account for PI, most of them suggesting a critical role of retrieval processes in this form of forgetting. For instance, temporal discrimination theory suggests that . All rights reserved. ie.uni-regensburg.de from the most recent target list from items that appeared on the earlier nontarget lists. Specifically, the theory assumes that at test participants are unable to restrict their memory search to the target list and instead search the entire set of items that have previously been exposed (Baddeley, 1990; Crowder, 1976; Wixted & Rohrer, 1993). Another retrieval account attributes PI to a generation failure. Here, reduced recall levels of the target items are thought to be due to the impaired ability to access the material’s correct memory representation (Dillon & Thomas, 1975). In contrast to these retrieval explanations of PI, some theories also suggested a role of encoding factors in PI, assuming that the prior study of other lists impairs subsequent encoding of the target list. For instance, attentional resources may deteriorate across item lists and cause the target material to be less well processed in the presence than the absence of the preceding lists (e.g., Crowder, 1976).",
"title": ""
},
{
"docid": "b085860a27df6604c6dc38cd9fbd0b75",
"text": "A number of factors are considered during the analysis of automobile transportation with respect to increasing safety. One of the vital factors for night-time travel is temporary blindness due to increase in the headlight intensity. While headlight intensity provides better visual acuity, it simultaneously affects oncoming traffic. This problem is encountered when both drivers are using a higher headlight intensity setting. Also, increased speed of the vehicles due to decreased traffic levels at night increases the severity of accidents. In order to reduce accidents due to temporary driver blindness, a wireless sensor network (WSN) based controller could be developed to transmit sensor data in a faster and an efficient way between cars. Low latency allows faster headlight intensity adjustment between the vehicles to drastically reduce the cause of temporary blindness. An attempt has been made to come up with a system which would sense the intensity of the headlight of the oncoming vehicle and depending on the threshold headlight intensity being set in the system it would automatically reduce the intensity of the headlight of the oncoming vehicle using wireless sensor network thus reducing the condition of temporary blindness caused due to excessive exposure to headlights.",
"title": ""
},
{
"docid": "c68397cdbe538fd22fe88c0ff4e47879",
"text": "With the higher demand of the three dimensional (3D) imaging, a high definition real-time 3D video system based on FPGA is proposed. The system is made up of CMOS image sensors, DDR2 SDRAM, High Definition Multimedia Interface (HDMI) transmitter and Field Programmable Gate Array (FPGA). CMOS image sensor produces digital video streaming. DDR2 SDRAM buffers large amount of video data. FPGA processes the video streaming and realizes 3D data format conversion. HDMI transmitter is utilized to transmit 3D format data. Using the active 3D display device and shutter glasses, the system can achieve the living effect of real-time 3D high definition imaging. The resolution of the system is 720p@60Hz in 3D mode.",
"title": ""
},
{
"docid": "bd4dde3f5b7ec9dcd711a538b973ef1e",
"text": "Evaluation of MT evaluation measures is limited by inconsistent human judgment data. Nonetheless, machine translation can be evaluated using the well-known measures precision, recall, and their average, the F-measure. The unigrambased F-measure has significantly higher correlation with human judgments than recently proposed alternatives. More importantly, this standard measure has an intuitive graphical interpretation, which can facilitate insight into how MT systems might be improved. The relevant software is publicly available from http://nlp.cs.nyu.edu/GTM/.",
"title": ""
},
{
"docid": "c9b278eea7f915222cf8e99276fb5af2",
"text": "Pseudorandom generators based on linear feedback shift registers (LFSR) are a traditional building block for cryptographic stream ciphers. In this report, we review the general idea for such generators, as well as the most important techniques of cryptanalysis.",
"title": ""
},
{
"docid": "738f60fbfe177eec52057c8e5ab43e55",
"text": "From social science to biology, numerous applications often rely on graphlets for intuitive and meaningful characterization of networks at both the global macro-level as well as the local micro-level. While graphlets have witnessed a tremendous success and impact in a variety of domains, there has yet to be a fast and efficient approach for computing the frequencies of these subgraph patterns. However, existing methods are not scalable to large networks with millions of nodes and edges, which impedes the application of graphlets to new problems that require large-scale network analysis. To address these problems, we propose a fast, efficient, and parallel algorithm for counting graphlets of size k={3,4}-nodes that take only a fraction of the time to compute when compared with the current methods used. The proposed graphlet counting algorithms leverages a number of proven combinatorial arguments for different graphlets. For each edge, we count a few graphlets, and with these counts along with the combinatorial arguments, we obtain the exact counts of others in constant time. On a large collection of 300+ networks from a variety of domains, our graphlet counting strategies are on average 460x faster than current methods. This brings new opportunities to investigate the use of graphlets on much larger networks and newer applications as we show in the experiments. To the best of our knowledge, this paper provides the largest graphlet computations to date as well as the largest systematic investigation on over 300+ networks from a variety of domains.",
"title": ""
},
{
"docid": "d353db098a7ca3bd9dc73b803e7369a2",
"text": "DevOps community advocates collaboration between development and operations staff during software deployment. However this collaboration may cause a conceptual deficit. This paper proposes a Unified DevOps Model (UDOM) in order to overcome the conceptual deficit. Firstly, the origin of conceptual deficit is discussed. Secondly, UDOM model is introduced that includes three sub-models: application and data model, workflow execution model and infrastructure model. UDOM model can help to scale down deployment time, mitigate risk, satisfy customer requirements, and improve productivity. Finally, this paper can be a roadmap for standardization DevOps terminologies, concepts, patterns, cultures, and tools.",
"title": ""
},
{
"docid": "1be6aecdc3200ed70ede2d5e96cb43be",
"text": "In this paper we are exploring different models and methods for improving the performance of text independent speaker identification system for mobile devices. The major issues in speaker recognition for mobile devices are (i) presence of varying background environment, (ii) effect of speech coding introduced by the mobile device, and (iii) impairments due to wireless channel. In this paper, we are proposing multi-SNR multi-environment speaker models and speech enhancement (preprocessing) methods for improving the performance of speaker recognition system in mobile environment. For this study, we have simulated five different background environments (Car, Factory, High frequency, pink noise and white Gaussian noise) using NOISEX data. Speaker recognition studies are carried out on TIMIT, cellular, and microphone speech databases. Autoassociative neural network models are explored for developing these multi-SNR multi-environment speaker models. The results indicate that the proposed multi-SNR multi-environment speaker models and speech enhancement preprocessing methods have enhanced the speaker recognition performance in the presence of different noisy environments.",
"title": ""
},
{
"docid": "63339fb80c01c38911994cd326e483a3",
"text": "Older adults are becoming a significant percentage of the world's population. A multitude of factors, from the normal aging process to the progression of chronic disease, influence the nutrition needs of this very diverse group of people. Appropriate micronutrient intake is of particular importance but is often suboptimal. Here we review the available data regarding micronutrient needs and the consequences of deficiencies in the ever growing aged population.",
"title": ""
},
{
"docid": "dfc7a31461a382f0574fadf36a8fd211",
"text": "---------------------------------------------------------------------***--------------------------------------------------------------------Abstract Road Traffic Accident is very serious matter of life. The World Health Organization (WHO) reports that about 1.24 million people of the world die annually on the roads. The Institute for Health Metrics and Evaluation (IHME) estimated about 907,900, 1.3 million and 1.4 million deaths from road traffic injuries in 1990, 2010 and 2013, respectively. Uttar Pradesh in particular one of the state of India, experiences the highest rate of such accidents. Thus, methods to reduce accident severity are of great interest to traffic agencies and the public at large. In this paper, we applied data mining technologies to link recorded road characteristics to accident severity and developed a set of rules that could be used by the Indian Traffic Agency to improve safety and could help to save precious life.",
"title": ""
},
{
"docid": "689f1a8a6e8a1267dd45db32f3b711f6",
"text": "Today, the digitalization strides tremendously on all the sides of the modern society. One of the enablers to keep this process secure is the authentication. It touches many different areas of the connected world including payments, communications, and access right management. This manuscript attempts to shed the light on the authentication systems' evolution towards Multi-factor Authentication (MFA) from Singlefactor Authentication (SFA) and through Two-factor Authentication (2FA). Particularly, MFA is expected to be utilized for the user and vehicle-to-everything (V2X) interaction which is selected as descriptive scenario. The manuscript is focused on already available and potentially integrated sensors (factor providers) to authenticate the occupant from inside the vehicle. The survey on existing vehicular systems suitable for MFA is given. Finally, the MFA system based on reversed Lagrange polynomial, utilized in Shamir's Secret Sharing (SSS), was proposed to enable flexible in-car authentication. The solution was further extended covering the cases of authenticating the user even if some of the factors are mismatched or absent. The framework allows to qualify the missing factor and authenticate the user without providing the sensitive biometric data to the verification entity. The proposed is finally compared to conventional SSS.",
"title": ""
},
{
"docid": "9d3778091b10c6352559fb51faace714",
"text": "Aims to provide an analysis of the introduction of Internet-based skills into small firms. Seeks to contribute to the wider debate on the content and style of training most appropriate for employees and managers of SMEs.",
"title": ""
},
{
"docid": "5867f20ff63506be7eccb6c209ca03cc",
"text": "When creating a virtual environment open to the public a number of challenges have to be addressed. The equipment has to be chosen carefully in order to be be able to withstand hard everyday usage, and the application has not only to be robust and easy to use, but has also to be appealing to the user, etc. The current paper presents findings gathered from the creation of a multi-thematic virtual museum environment to be offered to visitors of real world museums. A number of design and implementation aspects are described along with an experiment designed to evaluate alternative approaches for implementing the navigation in a virtual museum environment. The paper is concluded with insights gained from the development of the virtual museum and portrays future research plans.",
"title": ""
},
{
"docid": "64f4a275dce1963b281cd0143f5eacdc",
"text": "Camera shake during exposure time often results in spatially variant blur effect of the image. The non-uniform blur effect is not only caused by the camera motion, but also the depth variation of the scene. The objects close to the camera sensors are likely to appear more blurry than those at a distance in such cases. However, recent non-uniform deblurring methods do not explicitly consider the depth factor or assume fronto-parallel scenes with constant depth for simplicity. While single image non-uniform deblurring is a challenging problem, the blurry results in fact contain depth information which can be exploited. We propose to jointly estimate scene depth and remove non-uniform blur caused by camera motion by exploiting their underlying geometric relationships, with only single blurry image as input. To this end, we present a unified layer-based model for depth-involved deblurring. We provide a novel layer-based solution using matting to partition the layers and an expectation-maximization scheme to solve this problem. This approach largely reduces the number of unknowns and makes the problem tractable. Experiments on challenging examples demonstrate that both depth and camera shake removal can be well addressed within the unified framework.",
"title": ""
},
{
"docid": "a1b50cf02ef0e37aed3d941ea281b885",
"text": "Collaborative filtering and content-based methods are two main approaches for recommender systems, and hybrid models use advantages of both. In this paper, we made a comparison of a hybrid model, which uses Bayesian Staked Denoising Autoencoders for content learning, and a collaborative filtering method, Bayesian Nonnegative Matrix Factorisation. It is shown that the tightly coupled hybrid model, Collaborative Deep Learning, gave more successful results comparing to collaborative filtering methods.",
"title": ""
},
{
"docid": "d9c4bdd95507ef497db65fc80d3508c5",
"text": "3D content creation is referred to as one of the most fundamental tasks of computer graphics. And many 3D modeling algorithms from 2D images or curves have been developed over the past several decades. Designers are allowed to align some conceptual images or sketch some suggestive curves, from front, side, and top views, and then use them as references in constructing a 3D model automatically or manually. However, to the best of our knowledge, no studies have investigated on 3D human body reconstruction in a similar manner. In this paper, we propose a deep learning based reconstruction of 3D human body shape from 2D orthographic views. A novel CNN-based regression network, with two branches corresponding to frontal and lateral views respectively, is designed for estimating 3D human body shape from 2D mask images. We train our networks separately to decouple the feature descriptors which encode the body parameters from different views, and fuse them to estimate an accurate human body shape. In addition, to overcome the shortage of training data required for this purpose, we propose some significantly data augmentation schemes for 3D human body shapes, which can be used to promote further research on this topic. Extensive experimental results demonstrate that visually realistic and accurate reconstructions can be achieved effectively using our algorithm. Requiring only binary mask images, our method can help users create their own digital avatars quickly, and also make it easy to create digital human body for 3D game, virtual reality, online fashion shopping.",
"title": ""
},
{
"docid": "39ed08e9a08b7d71a4c177afe8f0056a",
"text": "This paper proposes an anticipation model of potential customers’ purchasing behavior. This model is inferred from past purchasing behavior of loyal customers and the web server log files of loyal and potential customers by means of clustering analysis and association rules analysis. Clustering analysis collects key characteristics of loyal customers’ personal information; these are used to locate other potential customers. Association rules analysis extracts knowledge of loyal customers’ purchasing behavior, which is used to detect potential customers’ near-future interest in a star product. Despite using offline analysis to filter out potential customers based on loyal customers’ personal information and generate rules of loyal customers’ click streams based on loyal customers’ web log data, an online analysis which observes potential customers’ web logs and compares it with loyal customers’ click stream rules can more readily target potential customers who may be interested in the star products in the near future. 2006 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "a40c00b1dc4a8d795072e0a8cec09d7a",
"text": "Summary form only given. Most of current job scheduling systems for supercomputers and clusters provide batch queuing support. With the development of metacomputing and grid computing, users require resources managed by multiple local job schedulers. Advance reservations are becoming essential for job scheduling systems to be utilized within a large-scale computing environment with geographically distributed resources. COSY is a lightweight implementation of such a local job scheduler with support for both queue scheduling and advance reservations. COSY queue scheduling utilizes the FCFS algorithm with backfilling mechanisms and priority management. Advance reservations with COSY can provide effective QoS support for exact start time and latest completion time. Scheduling polices are defined to reject reservations with too short notice time so that there is no start time advantage to making a reservation over submitting to a queue. Further experimental results show that as a larger percentage of reservation requests are involved, a longer mandatory shortest notice time for advance reservations must be applied in order not to sacrifice queue scheduling efficiency.",
"title": ""
},
{
"docid": "e65d522f6b08eeebb8a488b133439568",
"text": "We propose a bootstrap learning algorithm for salient object detection in which both weak and strong models are exploited. First, a weak saliency map is constructed based on image priors to generate training samples for a strong model. Second, a strong classifier based on samples directly from an input image is learned to detect salient pixels. Results from multiscale saliency maps are integrated to further improve the detection performance. Extensive experiments on six benchmark datasets demonstrate that the proposed bootstrap learning algorithm performs favorably against the state-of-the-art saliency detection methods. Furthermore, we show that the proposed bootstrap learning approach can be easily applied to other bottom-up saliency models for significant improvement.",
"title": ""
}
] | scidocsrr |
7f0fd3cae088ad01ca2e50d33b24ec11 | Insiders and Insider Threats - An Overview of Definitions and Mitigation Techniques | [
{
"docid": "27b9350b8ea1032e727867d34c87f1c3",
"text": "A field study and an experimental study examined relationships among organizational variables and various responses of victims to perceived wrongdoing. Both studies showed that procedural justice climate moderates the effect of organizational variables on the victim's revenge, forgiveness, reconciliation, or avoidance behaviors. In Study 1, a field study, absolute hierarchical status enhanced forgiveness and reconciliation, but only when perceptions of procedural justice climate were high; relative hierarchical status increased revenge, but only when perceptions of procedural justice climate were low. In Study 2, a laboratory experiment, victims were less likely to endorse vengeance or avoidance depending on the type of wrongdoing, but only when perceptions of procedural justice climate were high.",
"title": ""
}
] | [
{
"docid": "e510e80f71d24783414cb5db279b2ec3",
"text": "The purpose of this research is to investigate negativity bias in secondary electronic word-of-mouth (eWOM). Two experiments, one laboratory and one field, were conducted to study actual dissemination behavior. The results demonstrate a strong tendency toward the negative in the dissemination of secondary commercial information. In line with Dynamic Social Impact Theory, our findings show that consumers disseminate online negative content to more recipients, for a longer period of time and in more elaborated and assimilated manner than they do positive information. The research is important from both a theoretical and managerial perspective. In the former, it enriches existing literature on eWOM by providing insight into theoretical dimensions of the negativity theory not examined before (duration, role of valence, elaboration, and assimilation). Findings provide managerial insights into designing more effective WOM and publicity campaigns. 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "49ff096deb6621438286942b792d6af3",
"text": "Fast fashion is a business model that offers (the perception of) fashionable clothes at affordable prices. From an operations standpoint, fast fashion requires a highly responsive supply chain that can support a product assortment that is periodically changing. Though the underlying principles are simple, the successful execution of the fast-fashion business model poses numerous challenges. We present a careful examination of this business model and discuss its execution by analyzing the most prominent firms in the industry. We then survey the academic literature for research that is specifically relevant or directly related to fast fashion. Our goal is to expose the main components of fast fashion and to identify untapped research opportunities.",
"title": ""
},
{
"docid": "4d429f5f5d46dc1beb9b681c4578f34a",
"text": "Recently, many digital service providers started to gamify their services to promote continued service usage. Although gamification has drawn attention in both practice and research, it remains unclear how users experience gamified services and how these gameful experiences may increase service usage. This research adopts a user-centered perspective to reveal the underlying gameful experience dimensions during gamified service usage and how they drive continued service usage. Findings from Study 1 – a survey with 148 app-users – reveal four essential gameful experience dimensions (skill development, social comparison, social connectedness, and expressive freedom) and how they relate to game mechanics. Study 2, which is based on a survey among 821 app-users, shows that gameful experiences trigger continued service usage through two different types of motivation, namely autonomous and controlled motivation.",
"title": ""
},
{
"docid": "cc8e52fdb69a9c9f3111287905f02bfc",
"text": "We present a new methodology for exploring and analyzing navigation patterns on a web site. The patterns that can be analyzed consist of sequences of URL categories traversed by users. In our approach, we first partition site users into clusters such that users with similar navigation paths through the site are placed into the same cluster. Then, for each cluster, we display these paths for users within that cluster. The clustering approach we employ is model-based (as opposed to distance-based) and partitions users according to the order in which they request web pages. In particular, we cluster users by learning a mixture of first-order Markov models using the Expectation-Maximization algorithm. The runtime of our algorithm scales linearly with the number of clusters and with the size of the data; and our implementation easily handles hundreds of thousands of user sessions in memory. In the paper, we describe the details of our method and a visualization tool based on it called WebCANVAS. We illustrate the use of our approach on user-traffic data from msnbc.com.",
"title": ""
},
{
"docid": "bc3924d12ee9d07a752fce80a67bb438",
"text": "Unsupervised semantic segmentation in the time series domain is a much-studied problem due to its potential to detect unexpected regularities and regimes in poorly understood data. However, the current techniques have several shortcomings, which have limited the adoption of time series semantic segmentation beyond academic settings for three primary reasons. First, most methods require setting/learning many parameters and thus may have problems generalizing to novel situations. Second, most methods implicitly assume that all the data is segmentable, and have difficulty when that assumption is unwarranted. Finally, most research efforts have been confined to the batch case, but online segmentation is clearly more useful and actionable. To address these issues, we present an algorithm which is domain agnostic, has only one easily determined parameter, and can handle data streaming at a high rate. In this context, we test our algorithm on the largest and most diverse collection of time series datasets ever considered, and demonstrate our algorithm's superiority over current solutions. Furthermore, we are the first to show that semantic segmentation may be possible at superhuman performance levels.",
"title": ""
},
{
"docid": "d1cf6f36fe964ac9e48f54a1f35e94c3",
"text": "Recognising patterns that correlate multiple events over time becomes increasingly important in applications from urban transportation to surveillance monitoring. In many realworld scenarios, however, timestamps of events may be erroneously recorded and events may be dropped from a stream due to network failures or load shedding policies. In this work, we present SimpMatch, a novel simplex-based algorithm for probabilistic evaluation of event queries using constraints over event orderings in a stream. Our approach avoids learning probability distributions for time-points or occurrence intervals. Instead, we employ the abstraction of segmented intervals and compute the probability of a sequence of such segments using the principle of order statistics. The algorithm runs in linear time to the number of missed timestamps, and shows high accuracy, yielding exact results if event generation is based on a Poisson process and providing a good approximation otherwise. As we demonstrate empirically, SimpMatch enables efficient and effective reasoning over event streams, outperforming state-ofthe-art methods for probabilistic evaluation of event queries by up to two orders of magnitude.",
"title": ""
},
{
"docid": "7931fa9541efa9a006a030655c59c5f4",
"text": "Natural language generation (NLG) is a critical component in a spoken dialogue system. This paper presents a Recurrent Neural Network based Encoder-Decoder architecture, in which an LSTM-based decoder is introduced to select, aggregate semantic elements produced by an attention mechanism over the input elements, and to produce the required utterances. The proposed generator can be jointly trained both sentence planning and surface realization to produce natural language sentences. The proposed model was extensively evaluated on four different NLG datasets. The experimental results showed that the proposed generators not only consistently outperform the previous methods across all the NLG domains but also show an ability to generalize from a new, unseen domain and learn from multi-domain datasets.",
"title": ""
},
{
"docid": "b741698d7e4d15cb7f4e203f2ddbce1d",
"text": "This study examined the process of how socioeconomic status, specifically parents' education and income, indirectly relates to children's academic achievement through parents' beliefs and behaviors. Data from a national, cross-sectional study of children were used for this study. The subjects were 868 8-12-year-olds, divided approximately equally across gender (436 females, 433 males). This sample was 49% non-Hispanic European American and 47% African American. Using structural equation modeling techniques, the author found that the socioeconomic factors were related indirectly to children's academic achievement through parents' beliefs and behaviors but that the process of these relations was different by racial group. Parents' years of schooling also was found to be an important socioeconomic factor to take into consideration in both policy and research when looking at school-age children.",
"title": ""
},
{
"docid": "867516a6a54105e4759338e407bafa5a",
"text": "At the end of the criminal intelligence analysis process there are relatively well established and understood approaches to explicit externalisation and representation of thought that include theories of argumentation, narrative and hybrid approaches that include both of these. However the focus of this paper is on the little understood area of how to support users in the process of arriving at such representations from an initial starting point where little is given. The work is based on theoretical considerations and some initial studies with end users. In focusing on process we discuss the requirements of fluidity and rigor and how to gain traction in investigations, the processes of thinking involved including abductive, deductive and inductive reasoning, how users may use thematic sorting in early stages of investigation and how tactile reasoning may be used to externalize and facilitate reasoning in a productive way. In the conclusion section we discuss the issues raised in this work and directions for future work.",
"title": ""
},
{
"docid": "a6e84af8b1ba1d120e69c10f76eb7e2a",
"text": "Auto-encoding generative adversarial networks (GANs) combine the standard GAN algorithm, which discriminates between real and model-generated data, with a reconstruction loss given by an auto-encoder. Such models aim to prevent mode collapse in the learned generative model by ensuring that it is grounded in all the available training data. In this paper, we develop a principle upon which autoencoders can be combined with generative adversarial networks by exploiting the hierarchical structure of the generative model. The underlying principle shows that variational inference can be used a basic tool for learning, but with the intractable likelihood replaced by a synthetic likelihood, and the unknown posterior distribution replaced by an implicit distribution; both synthetic likelihoods and implicit posterior distributions can be learned using discriminators. This allows us to develop a natural fusion of variational auto-encoders and generative adversarial networks, combining the best of both these methods. We describe a unified objective for optimization, discuss the constraints needed to guide learning, connect to the wide range of existing work, and use a battery of tests to systematically and quantitatively assess the performance of our method.",
"title": ""
},
{
"docid": "4a5f05a7aea8a02cf70d6c644e06dda0",
"text": "Sales pipeline win-propensity prediction is fundamental to effective sales management. In contrast to using subjective human rating, we propose a modern machine learning paradigm to estimate the winpropensity of sales leads over time. A profile-specific two-dimensional Hawkes processes model is developed to capture the influence from seller’s activities on their leads to the win outcome, coupled with lead’s personalized profiles. It is motivated by two observations: i) sellers tend to frequently focus their selling activities and efforts on a few leads during a relatively short time. This is evidenced and reflected by their concentrated interactions with the pipeline, including login, browsing and updating the sales leads which are logged by the system; ii) the pending opportunity is prone to reach its win outcome shortly after such temporally concentrated interactions. Our model is deployed and in continual use to a large, global, B2B multinational technology enterprize (Fortune 500) with a case study. Due to the generality and flexibility of the model, it also enjoys the potential applicability to other real-world problems.",
"title": ""
},
{
"docid": "7d1470edd8d8c6bd589ea64a73189705",
"text": "Background modeling plays an important role for video surveillance, object tracking, and object counting. In this paper, we propose a novel deep background modeling approach utilizing fully convolutional network. In the network block constructing the deep background model, three atrous convolution branches with different dilate are used to extract spatial information from different neighborhoods of pixels, which breaks the limitation that extracting spatial information of the pixel from fixed pixel neighborhood. Furthermore, we sample multiple frames from original sequential images with increasing interval, in order to capture more temporal information and reduce the computation. Compared with classical background modeling approaches, our approach outperforms the state-of-art approaches both in indoor and outdoor scenes.",
"title": ""
},
{
"docid": "136fadcc21143fd356b48789de5fb2b0",
"text": "Cost-effective and scalable wireless backhaul solutions are essential for realizing the 5G vision of providing gigabits per second anywhere. Not only is wireless backhaul essential to support network densification based on small cell deployments, but also for supporting very low latency inter-BS communication to deal with intercell interference. Multiplexing backhaul and access on the same frequency band (in-band wireless backhaul) has obvious cost benefits from the hardware and frequency reuse perspective, but poses significant technology challenges. We consider an in-band solution to meet the backhaul and inter-BS coordination challenges that accompany network densification. Here, we present an analysis to persuade the readers of the feasibility of in-band wireless backhaul, discuss realistic deployment and system assumptions, and present a scheduling scheme for inter- BS communications that can be used as a baseline for further improvement. We show that an inband wireless backhaul for data backhauling and inter-BS coordination is feasible without significantly hurting the cell access capacities.",
"title": ""
},
{
"docid": "066eef8e511fac1f842c699f8efccd6b",
"text": "In this paper, we propose a new model that is capable of recognizing overlapping mentions. We introduce a novel notion of mention separators that can be effectively used to capture how mentions overlap with one another. On top of a novel multigraph representation that we introduce, we show that efficient and exact inference can still be performed. We present some theoretical analysis on the differences between our model and a recently proposed model for recognizing overlapping mentions, and discuss the possible implications of the differences. Through extensive empirical analysis on standard datasets, we demonstrate the effectiveness of our approach.",
"title": ""
},
{
"docid": "3c2b68ac95f1a9300585b73ca4b83122",
"text": "The success of various applications including robotics, digital content creation, and visualization demand a structured and abstract representation of the 3D world from limited sensor data. Inspired by the nature of human perception of 3D shapes as a collection of simple parts, we explore such an abstract shape representation based on primitives. Given a single depth image of an object, we present 3DPRNN, a generative recurrent neural network that synthesizes multiple plausible shapes composed of a set of primitives. Our generative model encodes symmetry characteristics of common man-made objects, preserves long-range structural coherence, and describes objects of varying complexity with a compact representation. We also propose a method based on Gaussian Fields to generate a large scale dataset of primitive-based shape representations to train our network. We evaluate our approach on a wide range of examples and show that it outperforms nearest-neighbor based shape retrieval methods and is on-par with voxelbased generative models while using a significantly reduced parameter space.",
"title": ""
},
{
"docid": "8a22f454a657768a3d5fd6e6ec743f5f",
"text": "In recent years, deep learning techniques have been developed to improve the performance of program synthesis from input-output examples. Albeit its significant progress, the programs that can be synthesized by state-of-the-art approaches are still simple in terms of their complexity. In this work, we move a significant step forward along this direction by proposing a new class of challenging tasks in the domain of program synthesis from input-output examples: learning a context-free parser from pairs of input programs and their parse trees. We show that this class of tasks are much more challenging than previously studied tasks, and the test accuracy of existing approaches is almost 0%. We tackle the challenges by developing three novel techniques inspired by three novel observations, which reveal the key ingredients of using deep learning to synthesize a complex program. First, the use of a non-differentiable machine is the key to effectively restrict the search space. Thus our proposed approach learns a neural program operating a domain-specific non-differentiable machine. Second, recursion is the key to achieve generalizability. Thus, we bake-in the notion of recursion in the design of our non-differentiable machine. Third, reinforcement learning is the key to learn how to operate the non-differentiable machine, but it is also hard to train the model effectively with existing reinforcement learning algorithms from a cold boot. We develop a novel two-phase reinforcement learningbased search algorithm to overcome this issue. In our evaluation, we show that using our novel approach, neural parsing programs can be learned to achieve 100% test accuracy on test inputs that are 500× longer than the training samples.",
"title": ""
},
{
"docid": "e9e2887e7aae5315a8661c9d7456aa2e",
"text": "It has been shown that learning distributed word representations is highly useful for Twitter sentiment classification. Most existing models rely on a single distributed representation for each word. This is problematic for sentiment classification because words are often polysemous and each word can contain different sentiment polarities under different topics. We address this issue by learning topic-enriched multi-prototype word embeddings (TMWE). In particular, we develop two neural networks which 1) learn word embeddings that better capture tweet context by incorporating topic information, and 2) learn topic-enriched multiple prototype embeddings for each word. Experiments on Twitter sentiment benchmark datasets in SemEval 2013 show that TMWE outperforms the top system with hand-crafted features, and the current best neural network model.",
"title": ""
},
{
"docid": "bb43c98d05f3844354862d39f6fa1d2d",
"text": "There are always frustrations for drivers in finding parking spaces and being protected from auto theft. In this paper, to minimize the drivers' hassle and inconvenience, we propose a new intelligent secure privacy-preserving parking scheme through vehicular communications. The proposed scheme is characterized by employing parking lot RSUs to surveil and manage the whole parking lot and is enabled by communication between vehicles and the RSUs. Once vehicles that are equipped with wireless communication devices, which are also known as onboard units, enter the parking lot, the RSUs communicate with them and provide the drivers with real-time parking navigation service, secure intelligent antitheft protection, and friendly parking information dissemination. In addition, the drivers' privacy is not violated. Performance analysis through extensive simulations demonstrates the efficiency and practicality of the proposed scheme.",
"title": ""
},
{
"docid": "cf6b553b54ed94b9a6b516c51a4ad571",
"text": "The relationship of food and eating with affective and other clinical disorders is complex and intriguing. Serotoninergic dysfunction in seasonal affective disorder, atypical depression, premenstrual syndrome, anorexia and bulimia nervosa, and binge eating disorder is reviewed. Patients exhibiting a relationship between food and behaviour are found in various diagnostic categories. This points to a need to shift from nosological to functional thinking in psychiatry. It also means application of psychopharmacological treatments across diagnostic boundaries. The use of phototherapy and psychotropic drugs (MAO inhibitors and selective serotonin reuptake inhibitors like fluoxetine) in these disorders is discussed.",
"title": ""
},
{
"docid": "dc445d234bafaf115495ce1838163463",
"text": "In this paper, a novel camera tamper detection algorithm is proposed to detect three types of tamper attacks: covered, moved and defocused. The edge disappearance rate is defined in order to measure the amount of edge pixels that disappear in the current frame from the background frame while excluding edges in the foreground. Tamper attacks are detected if the difference between the edge disappearance rate and its temporal average is larger than an adaptive threshold reflecting the environmental conditions of the cameras. The performance of the proposed algorithm is evaluated for short video sequences with three types of tamper attacks and for 24-h video sequences without tamper attacks; the algorithm is shown to achieve acceptable levels of detection and false alarm rates for all types of tamper attacks in real environments.",
"title": ""
}
] | scidocsrr |
7db887e32b328c1d584dcef17552323a | Lares: An Architecture for Secure Active Monitoring Using Virtualization | [
{
"docid": "d1c46994c5cfd59bdd8d52e7d4a6aa83",
"text": "Current software attacks often build on exploits that subvert machine-code execution. The enforcement of a basic safety property, Control-Flow Integrity (CFI), can prevent such attacks from arbitrarily controlling program behavior. CFI enforcement is simple, and its guarantees can be established formally even with respect to powerful adversaries. Moreover, CFI enforcement is practical: it is compatible with existing software and can be done efficiently using software rewriting in commodity systems. Finally, CFI provides a useful foundation for enforcing further security policies, as we demonstrate with efficient software implementations of a protected shadow call stack and of access control for memory regions.",
"title": ""
},
{
"docid": "14dd650afb3dae58ffb1a798e065825a",
"text": "Copilot is a coprocessor-based kernel integrity monitor for commodity systems. Copilot is designed to detect malicious modifications to a host’s kernel and has correctly detected the presence of 12 real-world rootkits, each within 30 seconds of their installation with less than a 1% penalty to the host’s performance. Copilot requires no modifications to the protected host’s software and can be expected to operate correctly even when the host kernel is thoroughly compromised – an advantage over traditional monitors designed to run on the host itself.",
"title": ""
}
] | [
{
"docid": "5bc2b92a3193c36bac5ae848da7974a3",
"text": "Robust real-time tracking of non-rigid objects is a challenging task. Particle filtering has proven very successful for non-linear and nonGaussian estimation problems. The article presents the integration of color distributions into particle filtering, which has typically been used in combination with edge-based image features. Color distributions are applied, as they are robust to partial occlusion, are rotation and scale invariant and computationally efficient. As the color of an object can vary over time dependent on the illumination, the visual angle and the camera parameters, the target model is adapted during temporally stable image observations. An initialization based on an appearance condition is introduced since tracked objects may disappear and reappear. Comparisons with the mean shift tracker and a combination between the mean shift tracker and Kalman filtering show the advantages and limitations of the new approach. q 2002 Published by Elsevier Science B.V.",
"title": ""
},
{
"docid": "133b2f033245dad2a2f35ff621741b2f",
"text": "In wireless sensor networks (WSNs), long lifetime requirement of different applications and limited energy storage capability of sensor nodes has led us to find out new horizons for reducing power consumption upon nodes. To increase sensor node's lifetime, circuit and protocols have to be energy efficient so that they can make a priori reactions by estimating and predicting energy consumption. The goal of this study is to present and discuss several strategies such as power-aware protocols, cross-layer optimization, and harvesting technologies used to alleviate power consumption constraint in WSNs.",
"title": ""
},
{
"docid": "99ba1fd6c96dad6d165c4149ac2ce27a",
"text": "In order to solve unsupervised domain adaptation problem, recent methods focus on the use of adversarial learning to learn the common representation among domains. Although many designs are proposed, they seem to ignore the negative influence of domain-specific characteristics in transferring process. Besides, they also tend to obliterate these characteristics when extracted, although they are useful for other tasks and somehow help preserve the data. Take into account these issues, in this paper, we want to design a novel domainadaptation architecture which disentangles learned features into multiple parts to answer the questions: what features to transfer across domains and what to preserve within domains for other tasks. Towards this, besides jointly matching domain distributions in both image-level and feature-level, we offer new idea on feature exchange across domains combining with a novel feed-back loss and a semantic consistency loss to not only enhance the transferability of learned common feature but also preserve data and semantic information during exchange process. By performing domain adaptation on two standard digit datasets – MNIST and USPS, we show that our architecture can solve not only the full transfer problem but also partial transfer problem efficiently. The translated image results also demonstrate the potential of our architecture in image style transfer application.",
"title": ""
},
{
"docid": "f321ba1ee0f68612d7c463a37708a1e7",
"text": "Non-orthogonal multiple access (NOMA) is a promising technique for the fifth generation mobile communication due to its high spectral efficiency. By applying superposition coding and successive interference cancellation techniques at the receiver, multiple users can be multiplexed on the same subchannel in NOMA systems. Previous works focus on subchannel assignment and power allocation to achieve the maximization of sum rate; however, the energy-efficient resource allocation problem has not been well studied for NOMA systems. In this paper, we aim to optimize subchannel assignment and power allocation to maximize the energy efficiency for the downlink NOMA network. Assuming perfect knowledge of the channel state information at base station, we propose a low-complexity suboptimal algorithm, which includes energy-efficient subchannel assignment and power proportional factors determination for subchannel multiplexed users. We also propose a novel power allocation across subchannels to further maximize energy efficiency. Since both optimization problems are non-convex, difference of convex programming is used to transform and approximate the original non-convex problems to convex optimization problems. Solutions to the resulting optimization problems can be obtained by solving the convex sub-problems iteratively. Simulation results show that the NOMA system equipped with the proposed algorithms yields much better sum rate and energy efficiency performance than the conventional orthogonal frequency division multiple access scheme.",
"title": ""
},
{
"docid": "e982aa23c644bad4870bafaf7344d15a",
"text": "In this work we introduce a structured prediction model that endows the Deep Gaussian Conditional Random Field (G-CRF) with a densely connected graph structure. We keep memory and computational complexity under control by expressing the pairwise interactions as inner products of low-dimensional, learnable embeddings. The G-CRF system matrix is therefore low-rank, allowing us to solve the resulting system in a few milliseconds on the GPU by using conjugate gradient. As in G-CRF, inference is exact, the unary and pairwise terms are jointly trained end-to-end by using analytic expressions for the gradients, while we also develop even faster, Potts-type variants of our embeddings. We show that the learned embeddings capture pixel-to-pixel affinities in a task-specific manner, while our approach achieves state of the art results on three challenging benchmarks, namely semantic segmentation, human part segmentation, and saliency estimation. Our implementation is fully GPU based, built on top of the Caffe library, and is available at https://github.com/siddharthachandra/gcrf-v2.0.",
"title": ""
},
{
"docid": "d6b3969a6004b5daf9781c67c2287449",
"text": "Lotilaner is a new oral ectoparasiticide from the isoxazoline class developed for the treatment of flea and tick infestations in dogs. It is formulated as pure S-enantiomer in flavoured chewable tablets (Credelio™). The pharmacokinetics of lotilaner were thoroughly determined after intravenous and oral administration and under different feeding regimens in dogs. Twenty-six adult beagle dogs were enrolled in a pharmacokinetic study evaluating either intravenous or oral administration of lotilaner. Following the oral administration of 20 mg/kg, under fed or fasted conditions, or intravenous administration of 3 mg/kg, blood samples were collected up to 35 days after treatment. The effects of timing of offering food and the amount of food consumed prior or after dosing on bioavailability were assessed in a separate study in 25 adult dogs. Lotilaner blood concentrations were measured using a validated liquid chromatography/tandem mass spectrometry (LC-MS/MS) method. Pharmacokinetic parameters were calculated by non-compartmental analysis. In addition, in vivo enantiomer stability was evaluated in an analytical study. Following oral administration in fed animals, lotilaner was readily absorbed and peak blood concentrations reached within 2 hours. The terminal half-life was 30.7 days. Food enhanced the absorption, providing an oral bioavailability above 80% and reduced the inter-individual variability. Moreover, the time of feeding with respect to dosing (fed 30 min prior, fed at dosing or fed 30 min post-dosing) or the reduction of the food ration to one-third of the normal daily ration did not impact bioavailability. Following intravenous administration, lotilaner had a low clearance of 0.18 l/kg/day, large volumes of distribution Vz and Vss of 6.35 and 6.45 l/kg, respectively and a terminal half-life of 24.6 days. In addition, there was no in vivo racemization of lotilaner. The pharmacokinetic properties of lotilaner administered orally as a flavoured chewable tablet (Credelio™) were studied in detail. With a Tmax of 2 h and a terminal half-life of 30.7 days under fed conditions, lotilaner provides a rapid onset of flea and tick killing activity with consistent and sustained efficacy for at least 1 month.",
"title": ""
},
{
"docid": "68693c88cb62ce28514344d15e9a6f09",
"text": "New types of document collections are being developed by various web services. The service providers keep track of non-textual features such as click counts. In this paper, we present a framework to use non-textual features to predict the quality of documents. We also show our quality measure can be successfully incorporated into the language modeling-based retrieval model. We test our approach on a collection of question and answer pairs gathered from a community based question answering service where people ask and answer questions. Experimental results using our quality measure show a significant improvement over our baseline.",
"title": ""
},
{
"docid": "22b2eda49d67e83a1aa526abf9074734",
"text": "A new member of polyhydroxyalkanoates (PHA) family, namely, a terpolyester abbreviated as PHBVHHx consisting of 3-hydroxybutyrate (HB), 3-hydroxyvalerate (HV) and 3-hydroxyhexanoate (HHx) that can be produced by recombinant microorganisms, was found to have proper thermo- and mechanical properties for possible skin tissue engineering, as demonstrated by its strong ability to support the growth of human keratinocyte cell line HaCaT. In this study, HaCaT cells showed the strongest viability and the highest growth activity on PHBVHHx film compared with PLA, PHB, PHBV, PHBHHx and P3HB4HB, even the tissue culture plates were grown with less HaCaT cells compared with that on PHBVHHx. To understand its superior biocompatibility, PHBVHHx nanoparticles ranging from 200 to 350nm were prepared. It was found that the nanoparticles could increase the cellular activities by stimulating a rapid increase of cytosolic calcium influx in HaCaT cells, leading to enhanced cell growth. At the same time, 3-hydroxybutyrate (HB), a degradation product and the main component of PHBVHHx, was also shown to promote HaCaT proliferation. Morphologically, under the same preparation conditions, PHBVHHx film showed the most obvious surface roughness under atomic force microscopy (AFM), accompanied by the lowest surface energy compared with all other well studied biopolymers tested above. These results explained the superior ability for PHBVHHx to grow skin HaCaT cells. Therefore, PHBVHHx possesses the suitability to be developed into a skin tissue-engineered material.",
"title": ""
},
{
"docid": "59a06c71efeb218e85955f17edc42bf1",
"text": "Toyota Hybrid System is the innovative powertrain used in the current best-selling hybrid vehicle on the market—the Prius. It uses a split-type hybrid configuration which contains both a parallel and a serial power path to achieve the benefits of both. The main purpose of this paper is to develop a dynamic model to investigate the unique design of THS, which will be used to analyze the control strategy, and explore the potential of further improvement. A Simulink model is developed and a control algorithm is derived. Simulations confirm our model captures the fundamental behavior of THS reasonably well.",
"title": ""
},
{
"docid": "0f4ac688367d3ea43643472b7d75ffc9",
"text": "Many non-photorealistic rendering techniques exist to produce artistic ef fe ts from given images. Inspired by various artists, interesting effects can be produced b y using a minimal rendering, where the minimum refers to the number of tones as well as the nu mber and complexity of the primitives used for rendering. Our method is based on va rious computer vision techniques, and uses a combination of refined lines and blocks (po tentially simplified), as well as a small number of tones, to produce abstracted artistic re ndering with sufficient elements from the original image. We also considered a variety of methods to produce different artistic styles, such as colour and two-tone drawing s, and use semantic information to improve renderings for faces. By changing some intuitive par ameters a wide range of visually pleasing results can be produced. Our method is fully automatic. We demonstrate the effectiveness of our method with extensive experiments and a user study.",
"title": ""
},
{
"docid": "dcf8c1a5445ad3c2e475b296cb72b18e",
"text": "No wonder you activities are, reading will be always needed. It is not only to fulfil the duties that you need to finish in deadline time. Reading will encourage your mind and thoughts. Of course, reading will greatly develop your experiences about everything. Reading markov decision processes discrete stochastic dynamic programming is also a way as one of the collective books that gives many advantages. The advantages are not only for you, but for the other peoples with those meaningful benefits.",
"title": ""
},
{
"docid": "39598533576bdd3fa94df5a6967b9b2d",
"text": "Genetic Algorithm (GA) and other Evolutionary Algorithms (EAs) have been successfully applied to solve constrained minimum spanning tree (MST) problems of the communication network design and also have been used extensively in a wide variety of communication network design problems. Choosing an appropriate representation of candidate solutions to the problem is the essential issue for applying GAs to solve real world network design problems, since the encoding and the interaction of the encoding with the crossover and mutation operators have strongly influence on the success of GAs. In this paper, we investigate a new encoding crossover and mutation operators on the performance of GAs to design of minimum spanning tree problem. Based on the performance analysis of these encoding methods in GAs, we improve predecessor-based encoding, in which initialization depends on an underlying random spanning-tree algorithm. The proposed crossover and mutation operators offer locality, heritability, and computational efficiency. We compare with the approach to others that encode candidate spanning trees via the Pr?fer number-based encoding, edge set-based encoding, and demonstrate better results on larger instances for the communication spanning tree design problems. key words: minimum spanning tree (MST), communication network design, genetic algorithm (GA), node-based encoding",
"title": ""
},
{
"docid": "1aa27b05e046927d75a8cabb60506a9e",
"text": "Accurate road detection and centerline extraction from very high resolution (VHR) remote sensing imagery are of central importance in a wide range of applications. Due to the complex backgrounds and occlusions of trees and cars, most road detection methods bring in the heterogeneous segments; besides for the centerline extraction task, most current approaches fail to extract a wonderful centerline network that appears smooth, complete, as well as single-pixel width. To address the above-mentioned complex issues, we propose a novel deep model, i.e., a cascaded end-to-end convolutional neural network (CasNet), to simultaneously cope with the road detection and centerline extraction tasks. Specifically, CasNet consists of two networks. One aims at the road detection task, whose strong representation ability is well able to tackle the complex backgrounds and occlusions of trees and cars. The other is cascaded to the former one, making full use of the feature maps produced formerly, to obtain the good centerline extraction. Finally, a thinning algorithm is proposed to obtain smooth, complete, and single-pixel width road centerline network. Extensive experiments demonstrate that CasNet outperforms the state-of-the-art methods greatly in learning quality and learning speed. That is, CasNet exceeds the comparing methods by a large margin in quantitative performance, and it is nearly 25 times faster than the comparing methods. Moreover, as another contribution, a large and challenging road centerline data set for the VHR remote sensing image will be publicly available for further studies.",
"title": ""
},
{
"docid": "5b7483a4dea12d8b07921c150ccc66ee",
"text": "OBJECTIVE\nWe reviewed the efficacy of occupational therapy-related interventions for adults with rheumatoid arthritis.\n\n\nMETHOD\nWe examined 51 Level I studies (19 physical activity, 32 psychoeducational) published 2000-2014 and identified from five databases. Interventions that focused solely on the upper or lower extremities were not included.\n\n\nRESULTS\nFindings related to key outcomes (activities of daily living, ability, pain, fatigue, depression, self-efficacy, disease symptoms) are presented. Strong evidence supports the use of aerobic exercise, resistive exercise, and aquatic therapy. Mixed to limited evidence supports dynamic exercise, Tai Chi, and yoga. Among the psychoeducation interventions, strong evidence supports the use of patient education, self-management, cognitive-behavioral approaches, multidisciplinary approaches, and joint protection, and limited or mixed evidence supports the use of assistive technology and emotional disclosure.\n\n\nCONCLUSION\nThe evidence supports interventions within the scope of occupational therapy practice for rheumatoid arthritis, but few interventions were occupation based.",
"title": ""
},
{
"docid": "719ca13e95b9b4a1fc68772746e436d9",
"text": "The increased chance of deception in computer-mediated communication and the potential risk of taking action based on deceptive information calls for automatic detection of deception. To achieve the ultimate goal of automatic prediction of deception, we selected four common classification methods and empirically compared their performance in predicting deception. The deception and truth data were collected during two experimental studies. The results suggest that all of the four methods were promising for predicting deception with cues to deception. Among them, neural networks exhibited consistent performance and were robust across test settings. The comparisons also highlighted the importance of selecting important input variables and removing noise in an attempt to enhance the performance of classification methods. The selected cues offer both methodological and theoretical contributions to the body of deception and information systems research.",
"title": ""
},
{
"docid": "f75ace78cc5c82e49ee5d5481f294dbf",
"text": "This paper presents the design and fabrication of Sierpinski gasket fractal antenna with defected ground structure (DGS) with center frequency at 5.8 GHz. A slot was used as a DGS. The antenna was designed and simulated using Computer Simulation Technology (CST) software and fabricated on FR-4 board with a substrate thickness of 1.6 mm and dielectric constant, er of 5.0 and dielectric loss tangent 0.025. Measurement of the parameters of the antenna was carried by using a Vector Network Analyzer. The result shows a good agreement between simulation and measurement and a compact size antenna was realized.",
"title": ""
},
{
"docid": "42cfbb2b2864e57d59a72ec91f4361ff",
"text": "Objective. This prospective open trial aimed to evaluate the efficacy and safety of isotretinoin (13-cis-retinoic acid) in patients with Cushing's disease (CD). Methods. Sixteen patients with CD and persistent or recurrent hypercortisolism after transsphenoidal surgery were given isotretinoin orally for 6-12 months. The drug was started on 20 mg daily and the dosage was increased up to 80 mg daily if needed and tolerated. Clinical, biochemical, and hormonal parameters were evaluated at baseline and monthly for 6-12 months. Results. Of the 16 subjects, 4% (25%) persisted with normal urinary free cortisol (UFC) levels at the end of the study. UFC reductions of up to 52.1% were found in the rest. Only patients with UFC levels below 2.5-fold of the upper limit of normal achieved sustained UFC normalization. Improvements of clinical and biochemical parameters were also noted mostly in responsive patients. Typical isotretinoin side-effects were experienced by 7 patients (43.7%), though they were mild and mostly transient. We also observed that the combination of isotretinoin with cabergoline, in relatively low doses, may occasionally be more effective than either drug alone. Conclusions. Isotretinoin may be an effective and safe therapy for some CD patients, particularly those with mild hypercortisolism.",
"title": ""
},
{
"docid": "dba5777004cf43d08a58ef3084c25bd3",
"text": "This paper investigates the problem of automatic humour recognition, and provides and in-depth analysis of two of the most frequently observ ed features of humorous text: human-centeredness and negative polarity. T hrough experiments performed on two collections of humorous texts, we show that th ese properties of verbal humour are consistent across different data s ets.",
"title": ""
},
{
"docid": "c3195ff8dc6ca8c130f5a96ebe763947",
"text": "The recent emergence of Cloud Computing has drastically altered everyone’s perception of infrastructure architectures, software delivery and development models. Projecting as an evolutionary step, following the transition from mainframe computers to client/server deployment models, cloud computing encompasses elements from grid computing, utility computing and autonomic computing, into an innovative deployment architecture. This rapid transition towards the clouds, has fuelled concerns on a critical issue for the success of information systems, communication and information security. From a security perspective, a number of unchartered risks and challenges have been introduced from this relocation to the clouds, deteriorating much of the effectiveness of traditional protection mechanisms. As a result the aim of this paper is twofold; firstly to evaluate cloud security by identifying unique security requirements and secondly to attempt to present a viable solution that eliminates these potential threats. This paper proposes introducing a Trusted Third Party, tasked with assuring specific security characteristics within a cloud environment. The proposed solution calls upon cryptography, specifically Public Key Infrastructure operating in concert with SSO and LDAP, to ensure the authentication, integrity and confidentiality of involved data and communications. The solution, presents a horizontal level of service, available to all implicated entities, that realizes a security mesh, within which essential trust is maintained.",
"title": ""
}
] | scidocsrr |
fa68821a1f52cc2104bad15f3a5cba67 | The Work Tasks Motivation Scale for Teachers ( WTMST ) | [
{
"docid": "691fcf418d6073f7681846b30a1753a8",
"text": "Cognitive evaluation theory, which explains the effects of extrinsic motivators on intrinsic motivation, received some initial attention in the organizational literature. However, the simple dichotomy between intrinsic and extrinsic motivation made the theory difficult to apply to work settings. Differentiating extrinsic motivation into types that differ in their degree of autonomy led to self-determination theory, which has received widespread attention in the education, health care, and sport domains. This article describes self-determination theory as a theory of work motivation and shows its relevance to theories of organizational behavior. Copyright # 2005 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "2b2cbdced71e24eb25e20a186ad0af58",
"text": "The job demands-resources (JD-R) model proposes that working conditions can be categorized into 2 broad categories, job demands and job resources. that are differentially related to specific outcomes. A series of LISREL analyses using self-reports as well as observer ratings of the working conditions provided strong evidence for the JD-R model: Job demands are primarily related to the exhaustion component of burnout, whereas (lack of) job resources are primarily related to disengagement. Highly similar patterns were observed in each of 3 occupational groups: human services, industry, and transport (total N = 374). In addition, results confirmed the 2-factor structure (exhaustion and disengagement) of a new burnout instrument--the Oldenburg Burnout Inventory--and suggested that this structure is essentially invariant across occupational groups.",
"title": ""
},
{
"docid": "feafd64c9f81b07f7f616d2e36e15e0c",
"text": "Burnout is a prolonged response to chronic emotional and interpersonal stressors on the job, and is defined by the three dimensions of exhaustion, cynicism, and inefficacy. The past 25 years of research has established the complexity of the construct, and places the individual stress experience within a larger organizational context of people's relation to their work. Recently, the work on burnout has expanded internationally and has led to new conceptual models. The focus on engagement, the positive antithesis of burnout, promises to yield new perspectives on interventions to alleviate burnout. The social focus of burnout, the solid research basis concerning the syndrome, and its specific ties to the work domain make a distinct and valuable contribution to people's health and well-being.",
"title": ""
}
] | [
{
"docid": "31f67a8751afec0442b8a91b9c8e9aa6",
"text": "Discovery of fundamental principles which govern and limit effective locomotion (self-propulsion) is of intellectual interest and practical importance. Human technology has created robotic moving systems that excel in movement on and within environments of societal interest: paved roads, open air and water. However, such devices cannot yet robustly and efficiently navigate (as animals do) the enormous diversity of natural environments which might be of future interest for autonomous robots; examples include vertical surfaces like trees and cliffs, heterogeneous ground like desert rubble and brush, turbulent flows found near seashores, and deformable/flowable substrates like sand, mud and soil. In this review we argue for the creation of a physics of moving systems-a 'locomotion robophysics'-which we define as the pursuit of principles of self-generated motion. Robophysics can provide an important intellectual complement to the discipline of robotics, largely the domain of researchers from engineering and computer science. The essential idea is that we must complement the study of complex robots in complex situations with systematic study of simplified robotic devices in controlled laboratory settings and in simplified theoretical models. We must thus use the methods of physics to examine both locomotor successes and failures using parameter space exploration, systematic control, and techniques from dynamical systems. Using examples from our and others' research, we will discuss how such robophysical studies have begun to aid engineers in the creation of devices that have begun to achieve life-like locomotor abilities on and within complex environments, have inspired interesting physics questions in low dimensional dynamical systems, geometric mechanics and soft matter physics, and have been useful to develop models for biological locomotion in complex terrain. The rapidly decreasing cost of constructing robot models with easy access to significant computational power bodes well for scientists and engineers to engage in a discipline which can readily integrate experiment, theory and computation.",
"title": ""
},
{
"docid": "39b7783e43526e5f825abe3bc8ebe01b",
"text": "The advent of smart meters and advanced communication infrastructures catalyzes numerous smart grid applications such as dynamic demand response, and paves the way to solve challenging research problems in sustainable energy consumption. The space of solution possibilities are restricted primarily by the huge amount of generated data requiring considerable computational resources and efficient algorithms. To overcome this Big Data challenge, data clustering techniques have been proposed. Current approaches however do not scale in the face of the \"increasing dimensionality\" problem, where a cluster point is represented by the entire customer consumption time series. To overcome this aspect we first rethink the way cluster points are created and designed, and then devise OPTIC, an efficient online time series clustering technique for demand response (DR), in order to analyze high volume, high dimensional energy consumption time series data at scale, and on the fly. OPTIC is randomized in nature, and provides optimal performance guarantees (Section 2.3.2) in a computationally efficient manner. Unlike prior work we (i) study the consumption properties of the whole population simultaneously rather than developing individual models for each customer separately, claiming it to be a 'killer' approach that breaks the \"of dimensionality\" in online time series clustering, and (ii) provide tight performance guarantees in theory to validate our approach. Our insights are driven by the field of sociology, where collective behavior often emerges as the result of individual patterns and lifestyles. We demonstrate the efficacy of OPTIC in practice using real-world data obtained from the fully operational USC microgrid.",
"title": ""
},
{
"docid": "72d47983c009c7892155fc3c491c9f52",
"text": "To improve the stability accuracy of stable platform of unmanned aerial vehicle (UAV), a line-of-sight stabilized control system is developed by using an inertial and optical-mechanical (fast steering mirror) combined method in a closed loop with visual feedback. The system is based on Peripheral Component Interconnect (PCI), included an image-deviation-obtained system and a combined controller using a PQ method. The method changes the series-wound structure to the shunt-wound structure of dual-input/single-output (DISO), and decouples the actuator range and frequency of inertial stabilization and fast steering mirror stabilization. Test results show the stability accuracy improves from 20μrad of inertial method to 5μrad of inertial and optical-mechanical combined method, and prove the effectiveness of the combined line-of-sight stabilization control system.",
"title": ""
},
{
"docid": "fca196c6900f43cf6fd711f8748c6768",
"text": "The fatigue fracture of structural details subjected to cyclic loads mostly occurs at a critical cross section with stress concentration. The welded joint is particularly dangerous location because of sinergetic harmful effects of stress concentration, tensile residual stresses, deffects, microstructural heterogeneity. Because of these reasons many methods for improving the fatigue resistance of welded joints are developed. Significant increase in fatigue strength and fatigue life was proved and could be attributed to improving weld toe profile, the material microstructure, removing deffects at the weld toe and modifying the original residual stress field. One of the most useful methods to improve fatigue behaviour of welded joints is TIG dressing. The magnitude of the improvement in fatigue performance depends on base material strength, type of welded joint and type of loading. Improvements of the fatigue behaviour of the welded joints in low-carbon structural steel treated by TIG dressing is considered in this paper.",
"title": ""
},
{
"docid": "9882c528dce5e9bb426d057ee20a520c",
"text": "The use of herbal medicinal products and supplements has increased tremendously over the past three decades with not less than 80% of people worldwide relying on them for some part of primary healthcare. Although therapies involving these agents have shown promising potential with the efficacy of a good number of herbal products clearly established, many of them remain untested and their use are either poorly monitored or not even monitored at all. The consequence of this is an inadequate knowledge of their mode of action, potential adverse reactions, contraindications, and interactions with existing orthodox pharmaceuticals and functional foods to promote both safe and rational use of these agents. Since safety continues to be a major issue with the use of herbal remedies, it becomes imperative, therefore, that relevant regulatory authorities put in place appropriate measures to protect public health by ensuring that all herbal medicines are safe and of suitable quality. This review discusses toxicity-related issues and major safety concerns arising from the use of herbal medicinal products and also highlights some important challenges associated with effective monitoring of their safety.",
"title": ""
},
{
"docid": "9eedeec21ab380c0466ed7edfe7c745d",
"text": "In this paper, we study the effect of using-grams (sequences of words of length n) for text categorization. We use an efficient algorithm for gener ating suchn-gram features in two benchmark domains, the 20 newsgroups data set and 21,578 REU TERS newswire articles. Our results with the rule learning algorithm R IPPER indicate that, after the removal of stop words, word sequences of length 2 or 3 are most useful. Using l o er sequences reduces classification performance.",
"title": ""
},
{
"docid": "1fc965670f71d9870a4eea93d129e285",
"text": "The present study investigates the impact of the experience of role playing a violent character in a video game on attitudes towards violent crimes and criminals. People who played the violent game were found to be more acceptable of crimes and criminals compared to people who did not play the violent game. More importantly, interaction effects were found such that people were more acceptable of crimes and criminals outside the game if the criminals were matched with the role they played in the game and the criminal actions were similar to the activities they perpetrated during the game. The results indicate that people’s virtual experience through role-playing games can influence their attitudes and judgments of similar real-life crimes, especially if the crimes are similar to what they conducted while playing games. Theoretical and practical implications are discussed. 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "bda0ae59319660987e9d2686d98e4b9a",
"text": "Due to the shift from software-as-a-product (SaaP) to software-as-a-service (SaaS), software components that were developed to run in a single address space must increasingly be accessed remotely across the network. Distribution middleware is frequently used to facilitate this transition. Yet a range of middleware platforms exist, and there are few existing guidelines to help the programmer choose an appropriate middleware platform to achieve desired goals for performance, expressiveness, and reliability. To address this limitation, in this paper we describe a case study of transitioning an Open Service Gateway Initiative (OSGi) service from local to remote access. Our case study compares five remote versions of this service, constructed using different distribution middleware platforms. These platforms are implemented by widely-used commercial technologies or have been proposed as improvements on the state of the art. In particular, we implemented a service-oriented version of our own Remote Batch Invocation abstraction. We compare and contrast these implementations in terms of their respective performance, expressiveness, and reliability. Our results can help remote service programmers make informed decisions when choosing middleware platforms for their applications.",
"title": ""
},
{
"docid": "b950d3b1bc2a30730b12e2f0016ecd9c",
"text": "Application distribution platforms - or app stores - such as Google Play or Apple AppStore allow users to submit feedback in form of ratings and reviews to downloaded applications. In the last few years, these platforms have become very popular to both application developers and users. However, their real potential for and impact on requirements engineering processes are not yet well understood. This paper reports on an exploratory study, which analyzes over one million reviews from the Apple AppStore. We investigated how and when users provide feedback, inspected the feedback content, and analyzed its impact on the user community. We found that most of the feedback is provided shortly after new releases, with a quickly decreasing frequency over time. Reviews typically contain multiple topics, such as user experience, bug reports, and feature requests. The quality and constructiveness vary widely, from helpful advices and innovative ideas to insulting offenses. Feedback content has an impact on download numbers: positive messages usually lead to better ratings and vice versa. Negative feedback such as shortcomings is typically destructive and misses context details and user experience. We discuss our findings and their impact on software and requirements engineering teams.",
"title": ""
},
{
"docid": "66467b6181882fade46d331d7a67da59",
"text": "This paper suggests an architectural approach of representing knowledge graph for complex question-answering. There are four kinds of entity relations added to our knowledge graph: syntactic dependencies, semantic role labels, named entities, and coreference links, which can be effectively applied to answer complex questions. As a proof of concept, we demonstrate how our knowledge graph can be used to solve complex questions such as arithmetics. Our experiment shows a promising result on solving arithmetic questions, achieving the 3folds cross-validation score of 71.75%.",
"title": ""
},
{
"docid": "c5a7c8457830fb2989e6087abf9fd252",
"text": "Paper prototyping highlights cost-effective usability testing techniques that produce fast results for improving an interface design. Practitioners and students interested in the design, development, and support of user interfaces will appreciate Snyder’s text for its focus on practical information and application. This book’s best features are the real life examples, anecdotes, and case studies that the author presents to demonstrate the uses of paper prototyping and its many benefits. While the author advocates paper prototyping, she also notes that paper prototyping techniques are one of many usability evaluation methods and that paper prototyping works best only in certain situations. Snyder reminds her readers that paper prototyping does not produce precise usability measurements, but rather it is a “blunt instrument” that rapidly uncovers qualitative information from actual users performing real tasks (p. 185). Hence, this book excludes in-depth theoretical discussions about methods and validity, but its pragmatic discussion on test design prepares the practitioner for dealing with several circumstances and making sound decisions based on testing method considerations.",
"title": ""
},
{
"docid": "918e7434798ebcfdf075fa93cbffba39",
"text": "Sequence-to-sequence models have shown promising improvements on the temporal task of video captioning, but they optimize word-level cross-entropy loss during training. First, using policy gradient and mixed-loss methods for reinforcement learning, we directly optimize sentence-level task-based metrics (as rewards), achieving significant improvements over the baseline, based on both automatic metrics and human evaluation on multiple datasets. Next, we propose a novel entailment-enhanced reward (CIDEnt) that corrects phrase-matching based metrics (such as CIDEr) to only allow for logically-implied partial matches and avoid contradictions, achieving further significant improvements over the CIDEr-reward model. Overall, our CIDEnt-reward model achieves the new state-of-the-art on the MSR-VTT dataset.",
"title": ""
},
{
"docid": "c3f942a915c149a7fc9929e0404c61f2",
"text": "Distributed model training suffers from communication overheads due to frequent gradient updates transmitted between compute nodes. To mitigate these overheads, several studies propose the use of sparsified stochastic gradients. We argue that these are facets of a general sparsification method that can operate on any possible atomic decomposition. Notable examples include elementwise, singular value, and Fourier decompositions. We present Atomo, a general framework for atomic sparsification of stochastic gradients. Given a gradient, an atomic decomposition, and a sparsity budget, Atomo gives a random unbiased sparsification of the atoms minimizing variance. We show that recent methods such as QSGD and TernGrad are special cases of Atomo and that sparsifiying the singular value decomposition of neural networks gradients, rather than their coordinates, can lead to significantly faster distributed training.",
"title": ""
},
{
"docid": "a1774a08ffefd28785fbf3a8f4fc8830",
"text": "Bounds are given for the empirical and expected Rademacher complexity of classes of linear transformations from a Hilbert space H to a
nite dimensional space. The results imply generalization guarantees for graph regularization and multi-task subspace learning. 1 Introduction Rademacher averages have been introduced to learning theory as an e¢ cient complexity measure for function classes, motivated by tight, sample or distribution dependent generalization bounds ([10], [2]). Both the de
nition of Rademacher complexity and the generalization bounds extend easily from realvalued function classes to function classes with values in R, as they are relevant to multi-task learning ([1], [12]). There has been an increasing interest in multi-task learning which has shown to be very e¤ective in experiments ([7], [1]), and there have been some general studies of its generalisation performance ([4], [5]). For a large collection of tasks there are usually more data available than for a single task and these data may be put to a coherent use by some constraint of relatedness. A practically interesting case is linear multi-task learning, extending linear large margin classi
ers to vector valued large-margin classi
ers. Di¤erent types of constraints have been proposed: Evgeniou et al ([8], [9]) propose graph regularization, where the vectors de
ning the classi
ers of related tasks have to be near each other. They also show that their scheme can be implemented in the framework of kernel machines. Ando and Zhang [1] on the other hand require the classi
ers to be members of a common low dimensional subspace. They also give generalization bounds using Rademacher complexity, but these bounds increase with the dimension of the input space. This paper gives dimension free bounds which apply to both approaches. 1.1 Multi-task generalization and Rademacher complexity Suppose we have m classi
cation tasks, represented by m independent random variables X ; Y l taking values in X f 1; 1g, where X l models the random",
"title": ""
},
{
"docid": "38f6aaf5844ddb6e4ed0665559b7f813",
"text": "A novel dual-broadband multiple-input-multiple-output (MIMO) antenna system is developed. The MIMO antenna system consists of two dual-broadband antenna elements, each of which comprises two opened loops: an outer loop and an inner loop. The opened outer loop acts as a half-wave dipole and is excited by electromagnetic coupling from the inner loop, leading to a broadband performance for the lower band. The opened inner loop serves as two monopoles. A combination of the two monopoles and the higher modes from the outer loop results in a broadband performance for the upper band. The bandwidths (return loss >;10 dB) achieved for the dual-broadband antenna element are 1.5-2.8 GHz (~ 60%) for the lower band and 4.7-8.5 GHz (~ 58\\%) for the upper band. Two U-shaped slots are introduced to reduce the coupling between the two dual-broadband antenna elements. The isolation achieved is higher than 15 dB in the lower band and 20 dB in the upper band, leading to an envelope correlation coefficient of less than 0.01. The dual-broadband MIMO antenna system has a compact volume of 50×17×0.8 mm3, suitable for GSM/UMTS/LTE and WLAN communication handsets.",
"title": ""
},
{
"docid": "226276adf10b40939e8cbb15addc6ba3",
"text": "The effects of EGb 761 on the CNS underlie one of its major therapeutic indications; i.e., individuals suffering from deteriorating cerebral mechanisms related to age-associated impairments of memory, attention and other cognitive functions. EGb 761 is currently used as symptomatic treatment for cerebral insufficiency that occurs during normal ageing or which may be due to degenerative dementia, vascular dementia or mixed forms of both, and for neurosensory disturbances. Depressive symptoms of patients with Alzheimer's disease (AD) and aged non-Alzheimer patients may also respond to treatment with EGb 761 since this extract has an \"anti-stress\" effect. Basic and clinical studies, conducted both in vitro and in vivo, support these beneficial neuroprotective effects of EGb 761. EGb 761 has several major actions; it enhances cognition, improves blood rheology and tissue metabolism, and opposes the detrimental effects of ischaemia. Several mechanisms of action are useful in explaining how EGb 761 benefits patients with AD and other age-related, neurodegenerative disorders. In animals, EGb 761 possesses antioxidant and free radical-scavenging activities, it reverses age-related losses in brain alpha 1-adrenergic, 5-HT1A and muscarinic receptors, protects against ischaemic neuronal death, preserves the function of the hippocampal mossy fiber system, increases hippocampal high-affinity choline uptake, inhibits the down-regulation of hippocampal glucocorticoid receptors, enhances neuronal plasticity, and counteracts the cognitive deficits that follow stress or traumatic brain injury. Identified chemical constituents of EGb 761 have been associated with certain actions. Both flavonoid and ginkgolide constituents are involved in the free radical-scavenging and antioxidant effects of EGb 761 which decrease tissue levels of reactive oxygen species (ROS) and inhibit membrane lipid peroxidation. Regarding EGb 761-induced regulation of cerebral glucose utilization, bilobalide increases the respiratory control ratio of mitochondria by protecting against uncoupling of oxidative phosphorylation, thereby increasing ATP levels, a result that is supported by the finding that bilobalide increases the expression of the mitochondrial DNA-encoded COX III subunit of cytochrome oxidase. With regard to its \"anti-stress\" effect, EGb 761 acts via its ginkgolide constituents to decrease the expression of the peripheral benzodiazepine receptor (PBR) of the adrenal cortex.",
"title": ""
},
{
"docid": "149c18850040c6073e84ad117b4e4eac",
"text": "Hemangiomas are the most common tumor of infantile period and usually involved sites are head and neck (%50), followed by trunk and extremities. Hemangioma is rarely described in genitals. We report a 17-months-old patient with a hemangioma of the preputium penis. The tumor was completely removed surgically and histological examination revealed an infantile hemangioma.",
"title": ""
},
{
"docid": "325b97e73ea0a50d2413757e95628163",
"text": "Due to the recent advancement in procedural generation techniques, games are presenting players with ever growing cities and terrains to explore. However most sandbox-style games situated in cities, do not allow players to wander into buildings. In past research, space planning techniques have already been utilized to generate suitable layouts for both building floor plans and room layouts. We introduce a novel rule-based layout solving approach, especially suited for use in conjunction with procedural generation methods. We show how this solving approach can be used for procedural generation by providing the solver with a userdefined plan. In this plan, users can specify objects to be placed as instances of classes, which in turn contain rules about how instances should be placed. This approach gives us the opportunity to use our generic solver in different procedural generation scenarios. In this paper, we will illustrate mainly with interior generation examples.",
"title": ""
},
{
"docid": "c36dac0c410570e84bf8634b32a0cac3",
"text": "The design of strategies for branching in Mixed Integer Programming (MIP) is guided by cycles of parameter tuning and offline experimentation on an extremely heterogeneous testbed, using the average performance. Once devised, these strategies (and their parameter settings) are essentially input-agnostic. To address these issues, we propose a machine learning (ML) framework for variable branching in MIP. Our method observes the decisions made by Strong Branching (SB), a time-consuming strategy that produces small search trees, collecting features that characterize the candidate branching variables at each node of the tree. Based on the collected data, we learn an easy-to-evaluate surrogate function that mimics the SB strategy, by means of solving a learning-to-rank problem, common in ML. The learned ranking function is then used for branching. The learning is instance-specific, and is performed on-the-fly while executing a branch-and-bound search to solve the instance. Experiments on benchmark instances indicate that our method produces significantly smaller search trees than existing heuristics, and is competitive with a state-of-the-art commercial solver.",
"title": ""
},
{
"docid": "29d98961d0ecde875bedcd4cfcb72026",
"text": "The claim that we have a moral obligation, where a choice can be made, to bring to birth the 'best' child possible, has been highly controversial for a number of decades. More recently Savulescu has labelled this claim the Principle of Procreative Beneficence. It has been argued that this Principle is problematic in both its reasoning and its implications, most notably in that it places lower moral value on the disabled. Relentless criticism of this proposed moral obligation, however, has been unable, thus far, to discredit this Principle convincingly and as a result its influence shows no sign of abating. I will argue that while criticisms of the implications and detail of the reasoning behind it are well founded, they are unlikely to produce an argument that will ultimately discredit the obligation that the Principle of Procreative Beneficence represents. I believe that what is needed finally and convincingly to reveal the fallacy of this Principle is a critique of its ultimate theoretical foundation, the notion of impersonal harm. In this paper I argue that while the notion of impersonal harm is intuitively very appealing, its plausibility is based entirely on this intuitive appeal and not on sound moral reasoning. I show that there is another plausible explanation for our intuitive response and I believe that this, in conjunction with the other theoretical criticisms that I and others have levelled at this Principle, shows that the Principle of Procreative Beneficence should be rejected.",
"title": ""
}
] | scidocsrr |
4067a8bb29b89d8861b311280b95fdf6 | Smart Wheelchairs - State of the Art in an Emerging Market | [
{
"docid": "f5913b9635302192149270b600a15fcd",
"text": "Many people who use wheelchairs are unable to control a powered wheelchair with the standard joystick interface. A robotic wheelchair can provide users with driving assistance, taking over low level navigation to allow its user to travel efficiently and with greater ease. Our robotic wheelchair system, Wheelesley, consists of a standard powered wheelchair with an on-board computer, sensors and a graphical user interface running on a notebook computer. This paper describes the indoor navigation system and a user interface that can be easily customized for",
"title": ""
}
] | [
{
"docid": "b94687da7db1a718a9a440a575a71a34",
"text": "SOS1 constraints require that at most one of a given set of variables is nonzero. In this article, we investigate a branch-and-cut algorithm to solve linear programs with SOS1 constraints. We focus on the case in which the SOS1 constraints overlap. The corresponding conflict graph can algorithmically be exploited, for instance, for improved branching rules, preprocessing, primal heuristics, and cutting planes. In an extensive computational study, we evaluate the components of our implementation on instances for three different applications. We also demonstrate the effectiveness of this approach by comparing it to the solution of a mixed-integer programming formulation, if the variables appearing in SOS1 constraints are bounded.",
"title": ""
},
{
"docid": "514d626cc44cf453706c0903cbc645fe",
"text": "Peer group analysis is a new tool for monitoring behavior over time in data mining situations. In particular, the tool detects individual objects that begin to behave in a way distinct from objects to which they had previously been similar. Each object is selected as a target object and is compared with all other objects in the database, using either external comparison criteria or internal criteria summarizing earlier behavior patterns of each object. Based on this comparison, a peer group of objects most similar to the target object is chosen. The behavior of the peer group is then summarized at each subsequent time point, and the behavior of the target object compared with the summary of its peer group. Those target objects exhibiting behavior most different from their peer group summary behavior are flagged as meriting closer investigation. The tool is intended to be part of the data mining process, involving cycling between the detection of objects that behave in anomalous ways and the detailed examination of those objects. Several aspects of peer group analysis can be tuned to the particular application, including the size of the peer group, the width of the moving behavior window being used, the way the peer group is summarized, and the measures of difference between the target object and its peer group summary. We apply the tool in various situations and illustrate its use on a set of credit card transaction data.",
"title": ""
},
{
"docid": "313dba70fea244739a45a9df37cdcf71",
"text": "We present KB-UNIFY, a novel approach for integrating the output of different Open Information Extraction systems into a single unified and fully disambiguated knowledge repository. KB-UNIFY consists of three main steps: (1) disambiguation of relation argument pairs via a sensebased vector representation and a large unified sense inventory; (2) ranking of semantic relations according to their degree of specificity; (3) cross-resource relation alignment and merging based on the semantic similarity of domains and ranges. We tested KB-UNIFY on a set of four heterogeneous knowledge bases, obtaining high-quality results. We discuss and provide evaluations at each stage, and release output and evaluation data for the use and scrutiny of the community1.",
"title": ""
},
{
"docid": "263e8b756862ab28d313578e3f6acbb1",
"text": "Goal posts detection is a critical robot soccer ability which is needed to be accurate, robust and efficient. A goal detection method using Hough transform to get the detailed goal features is presented in this paper. In the beginning, the image preprocessing and Hough transform implementation are described in detail. A new modification on the θ parameter range in Hough transform is explained and applied to speed up the detection process. Line processing algorithm is used to classify the line detected, and then the goal feature extraction method, including the line intersection calculation, is done. Finally, the goal distance from the robot body is estimated using triangle similarity. The experiment is performed on our university humanoid robot with the goal dimension of 225 cm in width and 110 cm in height, in yellow color. The result shows that the goal detection method, including the modification in Hough transform, is able to extract the goal features seen by the robot correctly, with the lowest speed of 5 frames per second. Additionally, the goal distance estimation is accomplished with maximum error of 20 centimeters.",
"title": ""
},
{
"docid": "4e97003a5609901f1f18be1ccbf9db46",
"text": "Fog computing is strongly emerging as a relevant and interest-attracting paradigm+technology for both the academic and industrial communities. However, architecture and methodological approaches are still prevalent in the literature, while few research activities have specifically targeted so far the issues of practical feasibility, cost-effectiveness, and efficiency of fog solutions over easily-deployable environments. In this perspective, this paper originally presents i) our fog-oriented framework for Internet-of-Things applications based on innovative scalability extensions of the open-source Kura gateway and ii) its Docker-based containerization over challenging and resource-limited fog nodes, i.e., RaspberryPi devices. Our practical experience and experimental work show the feasibility of using even extremely constrained nodes as fog gateways; the reported results demonstrate that good scalability and limited overhead can be coupled, via proper configuration tuning and implementation optimizations, with the significant advantages of containerization in terms of flexibility and easy deployment, also when working on top of existing, off-the-shelf, and limited-cost gateway nodes.",
"title": ""
},
{
"docid": "729840cdad8954ac58df1e8457a93796",
"text": "Prudent health care policies that encourage public-private participation in health care financing and provisioning have conferred on Singapore the advantage of flexible response as it faces the potentially conflicting challenges of becoming a regional medical hub attracting foreign patients and ensuring domestic access to affordable health care. Both the external and internal health care markets are two sides of the same coin, the competition to be decided on price and quality. For effective regulation, a tripartite model, involving not just the government and providers but empowered consumers, is needed. Government should distance itself from the provider role while providers should compete - and cooperate - to create higher-value health care systems than what others can offer. Health care policies should be better informed by health policy research.",
"title": ""
},
{
"docid": "72453a8b2b70c781e1a561b5cfb9eecb",
"text": "Pair Programming is an innovative collaborative software development methodology. Anecdotal and empirical evidence suggests that this agile development method produces better quality software in reduced time with higher levels of developer satisfaction. To date, little explanation has been offered as to why these improved performance outcomes occur. In this qualitative study, we focus on how individual differences, and specifically task conflict, impact results of the collaborative software development process and related outcomes. We illustrate that low to moderate levels of task conflict actually enhance performance, while high levels mitigate otherwise anticipated positive results.",
"title": ""
},
{
"docid": "4019beb9fa6ec59b4b19c790fe8ff832",
"text": "R. Cropanzano, D. E. Rupp, and Z. S. Byrne (2003) found that emotional exhaustion (i.e., 1 dimension of burnout) negatively affects organizational citizenship behavior (OCB). The authors extended this research by investigating relationships among 3 dimensions of burnout (emotional exhaustion, depersonalization, and diminished personal accomplishment) and OCB. They also affirmed the mediating effect of job involvement on these relationships. Data were collected from 296 paired samples of service employees and their supervisors from 12 hotels and restaurants in Taiwan. Findings demonstrated that emotional exhaustion and diminished personal accomplishment were related negatively to OCB, whereas depersonalization had no independent effect on OCB. Job involvement mediated the relationships among emotional exhaustion, diminished personal accomplishment, and OCB.",
"title": ""
},
{
"docid": "d10ab66c987495aefc34ce55eb89e110",
"text": "Bartter syndrome (BS) type 1, also referred to antenatal BS, is a genetic tubulopathy with hypokalemic metabolic alkalosis and prenatal onset of polyuria leading to polyhydramnios. It has been shown that BS type 1 is caused by mutations in the SLC12A1 gene encoding bumetanide-sensitive Na-K-2Cl– cotransporter (NKCC2). We had the opportunity to care for two unrelated Japanese patients of BS type 1 with typical manifestations including polyhydramnios, prematurity, hypokalemia, alkalosis, and infantile-onset nephrocalcinosis. Analysis of the SLC12A1 gene demonstrated four novel mutations: N117X, G257S, D792fs and N984fs. N117X mutation is expected to abolish most of the NKCC2 protein, whereas G257, which is evolutionary conserved, resides in the third transmemebrane domain. The latter two frameshift mutations reside in the intra-cytoplasmic C-terminal domain, which illustrates the importance of this domain for the NKCC2 function. In conclusion, we found four novel SLC12A1 mutations in two BS type 1 patients. Development of effective therapy for hypercalciuria is mandatory to prevent nephrocalcinosis and resultant renal failure.",
"title": ""
},
{
"docid": "674d347526e5ea2677eec2f2b816935b",
"text": "PATIENT\nMale, 70 • Male, 84.\n\n\nFINAL DIAGNOSIS\nAppendiceal mucocele and pseudomyxoma peritonei.\n\n\nSYMPTOMS\n-.\n\n\nMEDICATION\n-.\n\n\nCLINICAL PROCEDURE\n-.\n\n\nSPECIALTY\nSurgery.\n\n\nOBJECTIVE\nRare disease.\n\n\nBACKGROUND\nMucocele of the appendix is an uncommon cystic lesion characterized by distension of the appendiceal lumen with mucus. Most commonly, it is the result of epithelial proliferation, but it can also be caused by inflammation or obstruction of the appendix. When an underlying mucinous cystadenocarcinoma exists, spontaneous or iatrogenic rupture of the mucocele can lead to mucinous intraperitoneal ascites, a syndrome known as pseudomyxoma peritonei.\n\n\nCASE REPORT\nWe report 2 cases that represent the clinical extremities of this heterogeneous disease; an asymptomatic mucocele of the appendix in a 70-year-old female and a case of pseudomyxoma peritonei in an 84-year-old male. Subsequently, we review the current literature focusing to the optimal management of both conditions.\n\n\nCONCLUSIONS\nMucocele of the appendix is a rare disease, usually diagnosed on histopathologic examination of appendectomized specimens. Due to the existing potential for malignant transformation and pseudomyxoma peritonei caused by rupture of the mucocele, extensive preoperative evaluation and thorough intraoperative gastrointestinal and peritoneal examination is required.",
"title": ""
},
{
"docid": "800befb527094bc6169809c6765d5d15",
"text": "The problem of scheduling a weighted directed acyclic graph (DAG) to a set of homogeneous processors to minimize the completion time has been extensively studied. The NPcompleteness of the problem has instigated researchers to propose a myriad of heuristic algorithms. While these algorithms are individually reported to be efficient, it is not clear how effective they are and how well they compare against each other. A comprehensive performance evaluation and comparison of these algorithms entails addressing a number of difficult issues. One of the issues is that a large number of scheduling algorithms are based upon radically different assumptions, making their comparison on a unified basis a rather intricate task. Another issue is that there is no standard set of benchmarks that can be used to evaluate and compare these algorithms. Furthermore, most algorithms are evaluated using small problem sizes, and it is not clear how their performance scales with the problem size. In this paper, we first provide a taxonomy for classifying various algorithms into different categories according to their assumptions and functionalities. We then propose a set of benchmarks which are of diverse structures without being biased towards a particular scheduling technique and still allow variations in important parameters. We have evaluated 15 scheduling algorithms, and compared them using the proposed benchmarks. Based upon the design philosophies and principles behind these algorithms, we interpret the results and discuss why some algorithms perform better than the others.",
"title": ""
},
{
"docid": "5218f1ddf65b9bc1db335bb98d7e71b4",
"text": "The popular Biometric used to authenticate a person is Fingerprint which is unique and permanent throughout a person’s life. A minutia matching is widely used for fingerprint recognition and can be classified as ridge ending and ridge bifurcation. In this paper we projected Fingerprint Recognition using Minutia Score Matching method (FRMSM). For Fingerprint thinning, the Block Filter is used, which scans the image at the boundary to preserves the quality of the image and extract the minutiae from the thinned image. The false matching ratio is better compared to the existing algorithm. Key-words:-Fingerprint Recognition, Binarization, Block Filter Method, Matching score and Minutia.",
"title": ""
},
{
"docid": "1f7f0b82bf5822ee51313edfd1cb1593",
"text": "With the promise of meeting future capacity demands, 3-D massive-MIMO/full dimension multiple-input-multiple-output (FD-MIMO) systems have gained much interest in recent years. Apart from the huge spectral efficiency gain, 3-D massive-MIMO/FD-MIMO systems can also lead to significant reduction of latency, simplified multiple access layer, and robustness to interference. However, in order to completely extract the benefits of the system, accurate channel state information is critical. In this paper, a channel estimation method based on direction of arrival (DoA) estimation is presented for 3-D millimeter wave massive-MIMO orthogonal frequency division multiplexing (OFDM) systems. To be specific, the DoA is estimated using estimation of signal parameter via rotational invariance technique method, and the root mean square error of the DoA estimation is analytically characterized for the corresponding MIMO-OFDM system. An ergodic capacity analysis of the system in the presence of DoA estimation error is also conducted, and an optimum power allocation algorithm is derived. Furthermore, it is shown that the DoA-based channel estimation achieves a better performance than the traditional linear minimum mean squared error estimation in terms of ergodic throughput and minimum chordal distance between the subspaces of the downlink precoders obtained from the underlying channel and the estimated channel.",
"title": ""
},
{
"docid": "fbc41e1582d2d6d3896f89de1568de3c",
"text": "Vehicular ad-hoc NETworks (VANETs) have received considerable attention in recent years, due to its unique characteristics, which are different from mobile ad-hoc NETworks, such as rapid topology change, frequent link failure, and high vehicle mobility. The main drawback of VANETs network is the network instability, which yields to reduce the network efficiency. In this paper, we propose three algorithms: cluster-based life-time routing (CBLTR) protocol, Intersection dynamic VANET routing (IDVR) protocol, and control overhead reduction algorithm (CORA). The CBLTR protocol aims to increase the route stability and average throughput in a bidirectional segment scenario. The cluster heads (CHs) are selected based on maximum lifetime among all vehicles that are located within each cluster. The IDVR protocol aims to increase the route stability and average throughput, and to reduce end-to-end delay in a grid topology. The elected intersection CH receives a set of candidate shortest routes (SCSR) closed to the desired destination from the software defined network. The IDVR protocol selects the optimal route based on its current location, destination location, and the maximum of the minimum average throughput of SCSR. Finally, the CORA algorithm aims to reduce the control overhead messages in the clusters by developing a new mechanism to calculate the optimal numbers of the control overhead messages between the cluster members and the CH. We used SUMO traffic generator simulators and MATLAB to evaluate the performance of our proposed protocols. These protocols significantly outperform many protocols mentioned in the literature, in terms of many parameters.",
"title": ""
},
{
"docid": "b16d8dddf037e60ba9121f85e7d9b45a",
"text": "Bike sharing systems, aiming at providing the missing links in public transportation systems, are becoming popular in urban cities. A key to success for a bike sharing systems is the effectiveness of rebalancing operations, that is, the efforts of restoring the number of bikes in each station to its target value by routing vehicles through pick-up and drop-off operations. There are two major issues for this bike rebalancing problem: the determination of station inventory target level and the large scale multiple capacitated vehicle routing optimization with outlier stations. The key challenges include demand prediction accuracy for inventory target level determination, and an effective optimizer for vehicle routing with hundreds of stations. To this end, in this paper, we develop a Meteorology Similarity Weighted K-Nearest-Neighbor (MSWK) regressor to predict the station pick-up demand based on large-scale historic trip records. Based on further analysis on the station network constructed by station-station connections and the trip duration, we propose an inter station bike transition (ISBT) model to predict the station drop-off demand. Then, we provide a mixed integer nonlinear programming (MINLP) formulation of multiple capacitated bike routing problem with the objective of minimizing total travel distance. To solve it, we propose an Adaptive Capacity Constrained K-centers Clustering (AdaCCKC) algorithm to separate outlier stations (the demands of these stations are very large and make the optimization infeasible) and group the rest stations into clusters within which one vehicle is scheduled to redistribute bikes between stations. In this way, the large scale multiple vehicle routing problem is reduced to inner cluster one vehicle routing problem with guaranteed feasible solutions. Finally, the extensive experimental results on the NYC Citi Bike system show the advantages of our approach for bike demand prediction and large-scale bike rebalancing optimization.",
"title": ""
},
{
"docid": "8389e7702dbd2c54395d871758361b0e",
"text": "Recently, significant advances have been made in ROBOTICS, ARTIFICIAL INTELLIGENCE and other COGNITIVE related fields, allowing tomakemuch sophisticated biomimetic robotics systems. In addition, enormous number of robots have been designed and assembled, explicitly realize biological oriented behaviors. Towards much skill behaviors and adequate grasping abilities (i.e. ARTICULATION and DEXTEROUS MANIPULATION), a new phase of dexterous hands have been developed recently with biomimetically oriented and bio-inspired functionalities. In this respect, this manuscript brings a detailed survey of biomimetic based dexterous robotics multi-fingered hands. The aim of this survey, is to find out the state of the art on dexterous robotics end-effectors, known in literature as (ROBOTIC HANDS) or (DEXTEROUSMULTI-FINGERED) robot hands. Hence, this review finds such biomimetic approaches using a framework that permits for a common description of biological and technical based hand manipulation behavior. In particular, the manuscript focuses on a number of developments that have been taking place over the past two decades, and some recent developments related to this biomimetic field of research. In conclusions, the study found that, there are rich research efforts in terms of KINEMATICS, DYNAMICS, MODELING and CONTROL methodologies. The survey is also indicating that, the topic of biomimetic inspired robotics systems make significant contributions to robotics hand design, in four main directions for future research. First, they provide a genuine world test of models of biologically inspired hand designs and dexterous manipulation behaviors. Second, they provide novel manipulation articulations and mechanisms available for industrial and domestic uses, most notably in the field of human like hand design and real world applications. Third, this survey has also indicated that, there are quite large number of attempts to acquire biologically inspired hands. These attempts were almost successful, where they exposed more novel ideas for further developments. Such inspirations were directed towards a number of topics related (HAND MECHANICS AND DESIGN), (HAND TACTILE SENSING), (HAND FORCE SENSING), (HAND SOFT ACTUATION) and (HANDCONFIGURATIONAND TOPOLOGY). FOURTH, in terms of employing AI related sciences and cognitive thinking, it was also found that, rare and exceptional research attempts were directed towards the employment of biologically inspired thinking, i.e. (AI, BRAIN AND COGNITIVE SCIENCES) for hand upper control and towards much sophisticated dexterous movements. Throughout the study, it has been found there are number of efforts in terms of mechanics and hand designs, tactical sensing, however, for hand soft actuation, it seems this area of research is still far away from having a realistic muscular type fingers and hand movements. © 2013 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "242b854de904075d04e7044e680dc281",
"text": "Adopting a motivational perspective on adolescent development, these two companion studies examined the longitudinal relations between early adolescents' school motivation (competence beliefs and values), achievement, emotional functioning (depressive symptoms and anger), and middle school perceptions using both variable- and person-centered analytic techniques. Data were collected from 1041 adolescents and their parents at the beginning of seventh and the end of eight grade in middle school. Controlling for demographic factors, regression analyses in Study 1 showed reciprocal relations between school motivation and positive emotional functioning over time. Furthermore, adolescents' perceptions of the middle school learning environment (support for competence and autonomy, quality of relationships with teachers) predicted their eighth grade motivation, achievement, and emotional functioning after accounting for demographic and prior adjustment measures. Cluster analyses in Study 2 revealed several different patterns of school functioning and emotional functioning during seventh grade that were stable over 2 years and that were predictably related to adolescents' reports of their middle school environment. Discussion focuses on the developmental significance of schooling for multiple adjustment outcomes during adolescence.",
"title": ""
},
{
"docid": "54768e28b5980d735fed93096de20f5d",
"text": "................................................................................................... vii Chapter",
"title": ""
},
{
"docid": "9c951a9bf159c073471107bd3c1663ee",
"text": "Collision tumor means the coexistence of two adjacent, but histologically distinct tumors without histologic admixture in the same tissue or organ. Collision tumors involving ovaries are extremely rare. We present a case of 45-year-old parous woman with a left dermoid cyst, with unusual imaging findings, massive ascites and peritoneal carcinomatosis. The patient underwent cytoreductive surgery. The histopathology revealed a collision tumor consisting of an invasive serous cystadenocarcinoma and a dermoid cyst.",
"title": ""
}
] | scidocsrr |
b72287732bf3573bd69c5b8e44b71fed | Identifying Justifications in Written Dialogs by Classifying Text as Argumentative | [
{
"docid": "5cd48ee461748d989c40f8e0f0aa9581",
"text": "Being able to identify which rhetorical relations (e.g., contrast or explanation) hold between spans of text is important for many natural language processing applications. Using machine learning to obtain a classifier which can distinguish between different relations typically depends on the availability of manually labelled training data, which is very time-consuming to create. However, rhetorical relations are sometimes lexically marked, i.e., signalled by discourse markers (e.g., because, but, consequently etc.), and it has been suggested (Marcu and Echihabi, 2002) that the presence of these cues in some examples can be exploited to label them automatically with the corresponding relation. The discourse markers are then removed and the automatically labelled data are used to train a classifier to determine relations even when no discourse marker is present (based on other linguistic cues such as word co-occurrences). In this paper, we investigate empirically how feasible this approach is. In particular, we test whether automatically labelled, lexically marked examples are really suitable training material for classifiers that are then applied to unmarked examples. Our results suggest that training on this type of data may not be such a good strategy, as models trained in this way do not seem to generalise very well to unmarked data. Furthermore, we found some evidence that this behaviour is largely independent of the classifiers used and seems to lie in the data itself (e.g., marked and unmarked examples may be too dissimilar linguistically and removing unambiguous markers in the automatic labelling process may lead to a meaning shift in the examples).",
"title": ""
}
] | [
{
"docid": "a54bc0f529d047aa273d834c53c15bd3",
"text": "This paper presents an optimized methodology to folded cascode operational transconductance amplifier (OTA) design. The design is done in different regions of operation, weak inversion, strong inversion and moderate inversion using the gm/ID methodology in order to optimize MOS transistor sizing. Using 0.35μm CMOS process, the designed folded cascode OTA achieves a DC gain of 77.5dB and a unity-gain frequency of 430MHz in strong inversion mode. In moderate inversion mode, it has a 92dB DC gain and provides a gain bandwidth product of around 69MHz. The OTA circuit has a DC gain of 75.5dB and unity-gain frequency limited to 19.14MHZ in weak inversion region. Keywords—CMOS IC design, Folded Cascode OTA, gm/ID methodology, optimization.",
"title": ""
},
{
"docid": "c962837c549d0ef45384bb7a67805f63",
"text": "In this study, hypotheses of astrologers about the predominance of specific astrological factors in the birth charts of serial killers are tested. In particular, Mutable signs (Gemini, Virgo, Sagittarius and Pisces), the 12 principles (12 house, Pisces, Neptune) and specific Moon aspects are expected to be frequent among serial killers as compared to the normal population. A sample consisting of two datasets of male serial killers was analysed: one set consisting of birth data with a reliable birth time (N=77) and another set with missing birth times (12:00 AM was used, N=216). The set with known birth times was selected from AstroDatabank and an astrological publication. The set with unknown birth times was selected from three specialised sources on the Internet. Various control groups were obtained by shuffle methods, by time-shifting and by sampling birth data of 6,000 persons from AstroDatabank. Theoretically expected frequencies of astrological factors were derived from the control samples. Probability-density functions were obtained by bootstrap methods and were used to estimate significance levels. It is found that serial killers are frequently born when celestial factors are in Mutable signs (with birth time: p=0.005, effect size=0.31; without birth time: p=0.002, effect size=0.25). The frequency of planets in the 12 house is significantly high (p=0.005, effect size=0.31, for birth times only) and the frequency distribution of Moon aspects deviates from the theoretical distribution in the whole sample (p=0.0005) and in the dataset with known birth time (p=0.001). It is concluded that, based on the two datasets, some of the claims of astrologers cannot be rejected. Introduction This investigation is stimulated by astrological research articles about the birth charts of serial killers (Marks, 2002; Wickenburg, 1994). Unfortunately, the hypotheses by astrologer Liz Greene and others about the natal charts of psychopaths and serial killers (Greene & Sasportas, 1987a,b; Greene, 2003) are not tested in these research articles. I feel the challenge to do that in a more detailed study. Evidence for astrology is largely lacking, though some studies have reported small effect sizes (Ertel & Irving, 1996). It could be reasoned that if some of these astrological effects are genuine, higher effect sizes are to be expected in samples that are more homogeneous with respect to certain behavioural or psychological factors. Serial killers can be considered quite homogeneous with respect to common psychological traits, which manifest at an early age, and with respect to background, which is mostly dysfunctional, involving sexual or physical abuse, drugs or alcoholism (Schechter & Everitt, 1997; Schechter, 2004). If astrology works, then one would say that serial killers should display common factors in their birth charts. Correlation 25(2) 2008 Jan Ruis: Serial Killers 8 8 Specific sorts of behaviour, such as animal torture, fire setting, bed-wetting, frequent daydreaming, social isolation and chronic lying, characterize the childhood of serial killers. As adults they are addicted to their fantasies, have a lack of empathy, a constant urge for stimuli, a lack of external goals in life, a low self-control and a low sense of personal power. The lack of empathy or remorse, the superficial charm and the inflated self-appraisal are features of psychopathy. Serial killers have also been said to have a form of narcissistic personality disorder with a mental addiction to kill (Vaknin, 2003). 
In many psychological profiles of serial killers the central theme is frequent daydreaming, starting in early childhood and associated with a powerful imagination. It leads to the general fantasy world in which the serial killer begins to live as protection against isolation and feelings of inadequacy arising from this isolation (Ressler & Burgess, 1990). Many serial killers enact their crimes because of the detailed and violent fantasies (power, torture and murder) that have developed in their minds as early as the ages of seven and eight. These aggressive daydreams, developed as children, continue to develop and expand through adolescence into maturity, where they are finally released into the real world (Wilson & Seamen, 1992). With each successive victim, they attempt to fine tune the act, striving to make the real life experiences as perfect as the fantasy (Apsche, 1993). Serial killers, of which 90% are males, must be distinguished from the other type of multiple murderers: rampage killers (Schechter, 2004), which include mass and spree killers. The typical serial killer murders a single victim at separate events, while reverting to normal life in between the killings, and may continue with this pattern for years. In contrast, a mass murderer kills many people at a single event that usually ends with actual or provoked suicide, such as the Columbine High School massacre. A spree killer can be seen as a mobile mass murderer, such as Charles Starkweather and Andrew Cunanan. The FBI definition of a serial killer states that they must have committed at least three murders at different locations with a cooling-off period in between the killings. This definition is criticized because it is not specific enough with respect to the nature of the crimes and the number of kills (Schechter, 2004). A person with the mentality of a serial killer, who gets arrested after the second sexually motivated murder, would not be a serial killer in this definition. Therefore, the National Institutes of Justice have formulated another description, which was adopted in the present study: “a series of two or more murders, committed as separate events, usually, but not always, by one offender acting alone. The crimes may occur over a period of time ranging from hours to years. Quite often the motive is psychological, and the offender’s behaviour and the physical evidence observed at the crime scene will often reflect sadistic, sexual overtones.” Five different categories of serial killer are usually distinguished (Newton, 2006; Schechter & Everitt, 1997, Schechter, 2004): 1. Visionary. Is subject to hallucinations or visions that tell him to kill. Examples are Ed Gein and Herbert Mullin. 2. Missionary. Goes on hunting \"missions\" to eradicate a specific group of people (prostitutes, ethnic groups). Missionary killers believe that their acts are justified on the basis that they are getting rid of a certain type of person and thus doing society a favour. Examples are Gary Ridgway and Carroll Cole. 3. Hedonistic, with two subtypes: Correlation 25(2) 2008 Jan Ruis: Serial Killers 9 9 a. Lust-motivated: associates sexual pleasure with murder. Torturing and necrophilia are eroticised experiences. An example is Jeffrey Dahmer. b. Thrill-motivated: gets a thrill from killing; excitement and euphoria at victim's final anguish. An example is Dennis Rader. 4. Powerand control-seeking. 
The primary motive is the urgent need to assert supremacy over a helpless victim, to compensate for their own deep-seated feelings of worthlessness by completely dominating a victim. An example is Ted Bundy. 5. Gain-motivated. Most criminals who commit multiple murders for financial gain (such as bank robbers, hit men from the drug business or the mafia) are not classified as serial killers, because they are motivated by economic gain rather than psychopathological compulsion. Many serial killers may take a trophy from the crime scene, or even some valuables, but financial gain is not a driving motive. Still, there is no clear boundary between profit killers and other kinds of serial killer. For instance, Marcel Petiot liked to watch his victims die through a peephole after having robbed them of their possessions. Here sadism as a psychological motive was clearly involved. Both sadism and greed also motivated Henry Howard Holmes, and sadism was at least a second motive in “bluebeard” killers such as Harry Powers (who murder a series of wives, fiancées or partners for profit). Schechter (2004) argues that all bluebeards, like Henry Landru, George Joseph Smith and John George Haigh, are driven by both greed and sadism. Other investigators, such as Aamodt from Radford University (2008), categorize bluebeards in the group of power-motivated serial killers. Holmes (1996) distinguishes six types of serial killer: visionary, missionary, lust-oriented hedonist, thrill-oriented hedonist, the power/control freak and the comfort-oriented hedonist. In this typology, bluebeards are placed in the comfort type of serial killer group. Other arguments that bluebeards should be included in the present study are that they fit the serial killer definition of the National Institutes of Justice, and that like typical serial killers, they engage in planning activities, target a specific type of (vulnerable) victim, kill out of free will and at their own initiative, avoid being captured, and pretend to be normal citizens while hiding the crimes. Other multiple killers for profit, such as bank robbers and other armed robbers, hit men from the drugs scene, the mafia or other gangs, are generally not considered serial killers. Neither are other types of multiple murderers such as war criminals, mass murderers (including terrorists), spree killers and murderers who kill their partner out of jealousy. These killers are not incorporated in this study. Since definite boundaries between the different types of multiple murderers are hard to draw (Newton, 2006), I used a checklist in order to define serial killers in this study and to distinguish between serial killers and the other types of multiple murderer. This checklist is based on the characteristics of serial and rampage killers (Holmes, 1996; Schechter, 2004) and is included in Appendix A. For reasons of homogeneity, and because females usually have different motives as compared to males and over 90% of serial killers are males, this investigation was restricted to mal",
"title": ""
},
{
"docid": "e15ee429fd04286d7668486af088e1f2",
"text": "This paper reviews the applications of Augmented Reality with an emphasis on aerospace manufacturing processes. A contextual overview of Lean Manufacturing, aerospace industry, Virtual Reality (VR) and Augmented Reality (AR) is provided. Many AR applications are provided to show that AR can be used in different fields of endeavor with different focuses. This paper shows two case studies in aerospace industries, presenting different forms of AR use in aerospace manufacturing processes to demonstrate the benefits and advantages that can be reached. It is concluded showing that gains of labor qualification, training costs reduction, inspection system and productivity of the business can be provided by the use of AR.",
"title": ""
},
{
"docid": "d840814a871a36479e465736077b375a",
"text": "With the popularity of the Internet, online news media are pouring numerous of news reports into the Internet every day. People get lost in the information explosion. Although the existing methods are able to extract news reports according to key words, and aggregate news reports into stories or events, they just list the related reports or events in order. Moreover, they are unable to provide the evolution relationships between events within a topic, thus people hardly capture the events development vein. In order to mine the underlying evolution relationships between events within the topic, we propose a novel event evolution Model in this paper. This model utilizes TFIEF and Temporal Distance Cost factor (TDC) to model the event evolution relationships. we construct event evolution relationships map to show the events development vein. The experimental evaluation on real dataset show that our technique precedes the baseline technique.",
"title": ""
},
{
"docid": "eb6da64fe7dffde7fbc0a2520b435c87",
"text": "In this paper, we present our system addressing Task 1 of CL-SciSumm Shared Task at BIRNDL 2016. Our system makes use of lexical and syntactic dependency cues, and applies rule-based approach to extract text spans in the Reference Paper that accurately reflect the citances. Further, we make use of lexical cues to identify discourse facets of the paper to which cited text belongs. The lexical and syntactic cues are obtained on pre-processed text of the citances, and the reference paper. We report our results obtained for development set using our system for identifying reference scope of citances in this paper.",
"title": ""
},
{
"docid": "b5290a5df838baff03de94f1f18bf9fa",
"text": "Current Web service technology is evolving towards a simpler approach to define Web service APIs that challenges the assumptions made by existing languages for Web service composition. RESTful Web services introduce a new kind of abstraction, the resource, which does not fit well with the message-oriented paradigm of the Web service description language (WSDL). RESTful Web services are thus hard to compose using the Business Process Execution Language (WS-BPEL), due to its tight coupling to WSDL. The goal of the BPEL for REST extensions presented in this paper is twofold. First, we aim to enable the composition of both RESTful Web services and traditional Web services from within the same process-oriented service composition language. Second, we show how to publish a BPEL process as a RESTful Web service, by exposing selected parts of its execution state using the REST interaction primitives. We include a detailed example on how BPEL for REST can be applied to orchestrate a RESTful e-Commerce scenario and discuss how the proposed extensions affect the architecture of a process execution engine. 2009 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "09e4a9c086638fe436e90b008a873d22",
"text": "Full terms and conditions of use: http://pubsonline.informs.org/page/terms-and-conditions This article may be used only for the purposes of research, teaching, and/or private study. Commercial use or systematic downloading (by robots or other automatic processes) is prohibited without explicit Publisher approval, unless otherwise noted. For more information, contact permissions@informs.org. The Publisher does not warrant or guarantee the article’s accuracy, completeness, merchantability, fitness for a particular purpose, or non-infringement. Descriptions of, or references to, products or publications, or inclusion of an advertisement in this article, neither constitutes nor implies a guarantee, endorsement, or support of claims made of that product, publication, or service. Copyright © 2015, INFORMS",
"title": ""
},
{
"docid": "767b6a698ee56a4859c21f70f52b2b81",
"text": "This article surveyed the main neuromarketing techniques used in the world and the practical results obtained. Specifically, the objectives are (1) to identify the main existing definitions of neuromarketing; (2) to identify the importance and the potential contributions of neuromarketing; (3) to demonstrate the advantages of neuromarketing as a marketing research tool compared to traditional research methods; (4) to identify the ethical issues involved with neuromarketing research; (5) to present the main neuromarketing techniques that are being used in the development of marketing research; (6) to present studies in which neuromarketing research techniques were used; and (7) to identify the main limitations of neuromarketing. The results obtained allow an understanding of the ways to develop, store, Journal of Management Research ISSN 1941-899X 2014, Vol. 6, No. 2 www.macrothink.org/jmr 202 retrieve and use information about consumers, as well as ways to develop the field of neuromarketing. In addition to offering theoretical support for neuromarketing, this article discusses business cases, implementation and achievements.",
"title": ""
},
{
"docid": "804322502b82ad321a0f97d6f83858ee",
"text": "Cheating is a real problem in the Internet of Things. The fundamental question that needs to be answered is how we can trust the validity of the data being generated in the first place. The problem, however, isnt inherent in whether or not to embrace the idea of an open platform and open-source software, but to establish a methodology to verify the trustworthiness and control any access. This paper focuses on building an access control model and system based on trust computing. This is a new field of access control techniques which includes Access Control, Trust Computing, Internet of Things, network attacks, and cheating technologies. Nevertheless, the target access control systems can be very complex to manage. This paper presents an overview of the existing work on trust computing, access control models and systems in IoT. It not only summarizes the latest research progress, but also provides an understanding of the limitations and open issues of the existing work. It is expected to provide useful guidelines for future research. Access Control, Trust Management, Internet of Things Today, our world is characterized by increasing connectivity. Things in this world are increasingly being connected. Smart phones have started an era of global proliferation and rapid consumerization of smart devices. It is predicted that the next disruptive transformation will be the concept of ‘Internet of Things’ [2]. From networked computers to smart devices, and to connected people, we are now moving towards connected ‘things’. Items of daily use are being turned into smart devices as various sensors are embedded in consumer and enterprise equipment, industrial and household appliances and personal devices. Pervasive connectivity mechanisms build bridges between our clothing and vehicles. Interaction among these things/devices can happen with little or no human intervention, thereby conjuring an enormous network, namely the Internet of Things (IoT). One of the primary goals behind IoT is to sense and send data over remote locations to enable detection of significant events, and take relevant actions sooner rather than later [25]. This technological trend is being pursued actively in all areas including the medical and health care fields. IoT provides opportunities to dramatically improve many medical applications, such as glucose level sensing, remote health monitoring (e.g. electrocardiogram, blood pressure, body temperature, and oxygen saturation monitoring, etc), rehabilitation systems, medication management, and ambient assisted living systems. The connectivity offered by IoT extends from humanto-machine to machine-to-machine communications. The interconnected devices collect all kinds of data about patients. Intelligent and ubiquitous services can then be built upon the useful information extracted from the data. During the data aggregation, fusion, and analysis processes, user ar X iv :1 61 0. 01 06 5v 1 [ cs .C R ] 4 O ct 2 01 6 2 Z. Yunpeng and X. Wu privacy and information security become major concerns for IoT services and applications. Security breaches will seriously compromise user acceptance and consumption on IoT applications in the medical and health care areas. The large scale of integration of heterogeneous devices in IoT poses a great challenge for the provision of standard security services. 
Many IoT devices are vulnerable to attacks since no high-level intelligence can be enabled on these passive devices [10], and security vulnerabilities in products uncovered by researchers have spread from cars [13] to garage doors [9] and to skateboards [35]. Technological utopianism surrounding IoT was very real until the emergence of the Volkswagen emissions scandal [4]. The German conglomerate admitted installing software in its diesel cars that recognizes and identifies patterns when vehicles are being tested for nitrogen oxide emissions and cuts them so that they fall within the limits prescribed by US regulators (004 g/km). Once the test is over, the car returns to its normal state: emitting nitrogen oxides (nitric oxide and nitrogen dioxide) at up to 35 times the US legal limit. The focus of IoT is not the thing itself, but the data generated by the devices and the value therein. What Volkswagen has brought to light goes far beyond protecting data and privacy, preventing intrusion, and keeping the integrity of the data. It casts doubts on the credibility of the IoT industry and its ability to secure data, reach agreement on standards, or indeed guarantee that consumer privacy rights are upheld. All in all, IoT holds tremendous potential to improve our health, make our environment safer, boost productivity and efficiency, and conserve both water and energy. IoT needs to improve its trustworthiness, however, before it can be used to solve challenging economic and environmental problems tied to our social lives. The fundamental question that needs to be answered is how we can trust the validity of the data being generated in the first place. If a node of IoT cheats, how does a system identify the cheating node and prevent a malicious attack from misbehaving nodes? This paper focuses on an access control mechanism that will only grant network access permission to trustworthy nodes. Embedding trust management into access control will improve the systems ability to discover untrustworthy participating nodes and prevent discriminatory attacks. There has been substantial research in this domain, most of which has been related to attacks like self-promotion and ballot stuffing where a node falsely promotes its importance and boosts the reputation of a malicious node (by providing good recommendations) to engage in a collusion-style attack. The traditional trust computation model is inefficient in differentiating a participant object in IoT, which is designed to win trust by cheating. In particular, the trust computation model will fail when a malicious node intelligently adjusts its behavior to hide its defect and obtain a higher trust value for its own gain. 1 Access Control Model and System IoT comprises the following three Access Control types Access Control in Internet of Things: A Survey 3 – Role-based access control (RBAC) – Credential-based access control (CBAC) — in order to access some resources and data, users require certain certificate information that falls into the following two types: 1. Attribute-Based access control (ABAC) : If a user has some special attributes, it is possible to access a particular resource or piece of data. 2. Capability-Based access control (Cap-BAC): A capability is a communicable, unforgeable rights markup, which corresponds to a value that uniquely specifies certain access rights to objects owned by subjects. – Trust-based access control (TBAC) In addition, there are also combinations of the aforementioned three methods. 
In order to improve the security of the system, some of the access control methods include encryption and key management mechanisms.",
"title": ""
},
{
"docid": "492b99428b8c0b4a5921c78518fece50",
"text": "Over the past few decades, significant progress has been made in clustering high-dimensional data sets distributed around a collection of linear and affine subspaces. This article presented a review of such progress, which included a number of existing subspace clustering algorithms together with an experimental evaluation on the motion segmentation and face clustering problems in computer vision.",
"title": ""
},
{
"docid": "eda40814ecaecbe5d15ccba49f8a0d43",
"text": "The problem of achieving COnlUnCtlve goals has been central to domain-independent planning research, the nonhnear constraint-posting approach has been most successful Previous planners of this type have been comphcated, heurtstw, and ill-defined 1 have combmed and dtstdled the state of the art into a simple, precise, Implemented algorithm (TWEAK) which I have proved correct and complete 1 analyze previous work on domam-mdependent conlunctwe plannmg; tn retrospect tt becomes clear that all conluncttve planners, hnear and nonhnear, work the same way The efficiency and correctness of these planners depends on the traditional add/ delete-hst representation for actions, which drastically limits their usefulness I present theorems that suggest that efficient general purpose planning with more expressive action representations ts impossible, and suggest ways to avoid this problem",
"title": ""
},
{
"docid": "9ea9b364e2123d8917d4a2f25e69e084",
"text": "Movement observation and imagery are increasingly propagandized for motor rehabilitation. Both observation and imagery are thought to improve motor function through repeated activation of mental motor representations. However, it is unknown what stimulation parameters or imagery conditions are optimal for rehabilitation purposes. A better understanding of the mechanisms underlying movement observation and imagery is essential for the optimization of functional outcome using these training conditions. This study systematically assessed the corticospinal excitability during rest, observation, imagery and execution of a simple and a complex finger-tapping sequence in healthy controls using transcranial magnetic stimulation (TMS). Observation was conducted passively (without prior instructions) as well as actively (in order to imitate). Imagery was performed visually and kinesthetically. A larger increase in corticospinal excitability was found during active observation in comparison with passive observation and visual or kinesthetic imagery. No significant difference between kinesthetic and visual imagery was found. Overall, the complex task led to a higher corticospinal excitability in comparison with the simple task. In conclusion, the corticospinal excitability was modulated during both movement observation and imagery. Specifically, active observation of a complex motor task resulted in increased corticospinal excitability. Active observation may be more effective than imagery for motor rehabilitation purposes. In addition, the activation of mental motor representations may be optimized by varying task-complexity.",
"title": ""
},
{
"docid": "8bf63451cf6b83f3da4d4378de7bfd7f",
"text": "This paper presents a high-efficiency and smoothtransition buck-boost (BB) converter to extend the battery life of portable devices. Owing to the usage of four switches, the BB control topology needs to minimize the switching and conduction losses at the same time. Therefore, over a wide input voltage range, the proposed BB converter consumes minimum switching loss like the basic operation of buck or boost converter. Besides, the conduction loss is reduced by means of the reduction of the inductor current level. Especially, the proposed BB converter offers good line/load regulation and thus provides a smooth and stable output voltage when the battery voltage decreases. Simulation results show that the output voltage drops is very small during the whole battery life time and the output transition is very smooth during the mode transition by the proposed BB control scheme.",
"title": ""
},
{
"docid": "0c7ba527445c6d8fc39d942f78901259",
"text": "Physically Unclonable Functions (PUFs) are impacted by environmental variations and aging which can reduce their acceptance in identification and authentication applications. Prior approaches to improve PUF reliability include bit analysis across environmental conditions, better design, and post-processing error correction, but these are of high cost in terms of test time and design overheads, making them unsuitable for high volume production. In this paper, we aim to address this issue for SRAM PUFs with novel bit analysis and bit selection algorithms. Our analysis of real SRAM PUFs reveals (i) critical conditions on which to select stable SRAM cells for PUF at low-cost (ii) unexplored spatial correlation between stable bits, i.e., cells that are the most stable tend to be surrounded by stable cells determined during enrollment. We develop a bit selection procedure around these observations that produces very stable bits for the PUF generated ID/key. Experimental data from real SRAM PUFs show that our approaches can effectively reduce number of errors in PUF IDs/keys with fewer enrollment steps.",
"title": ""
},
{
"docid": "3500278940baaf6f510ad47463cbf5ed",
"text": "Different word embedding models capture different aspects of linguistic properties. This inspired us to propose a model (MMaxLSTM-CNN) for employing multiple sets of word embeddings for evaluating sentence similarity/relation. Representing each word by multiple word embeddings, the MaxLSTM-CNN encoder generates a novel sentence embedding. We then learn the similarity/relation between our sentence embeddings via Multi-level comparison. Our method M-MaxLSTMCNN consistently shows strong performances in several tasks (i.e., measure textual similarity, identify paraphrase, recognize textual entailment). According to the experimental results on STS Benchmark dataset and SICK dataset from SemEval, M-MaxLSTM-CNN outperforms the state-of-the-art methods for textual similarity tasks. Our model does not use hand-crafted features (e.g., alignment features, Ngram overlaps, dependency features) as well as does not require pretrained word embeddings to have the same dimension.",
"title": ""
},
{
"docid": "47432aed7a46f1591597208dd25e8425",
"text": "Successful breastfeeding is dependent upon an infant's ability to correctly latch onto a mother's breast. If an infant is born with oral soft tissue abnormalities such as tongue-tie or lip-tie, breastfeeding may become challenging or impossible. During the oral evaluation of an infant presenting with breastfeeding problems, one area that is often overlooked and undiagnosed and, thus, untreated is the attachment of the upper lip to the maxillary gingival tissue. Historically, this tissue has been described as the superior labial frenum, median labial frenum, or maxillary labial frenum. These terms all refer to a segment of the mucous membrane in the midline of the upper lip containing loose connective tissue that inserts into the maxillary arch's loose, unattached gingival or tight, attached gingival tissue. There is no muscle contained within this tissue. In severe instances, this tissue may extend into the area behind the upper central incisors and incisive papilla. The author has defined and identified the restrictions of mobility of this tissue as a lip-tie, which reflects the clinical attachment of the upper lip to the maxillary arch. This article discusses the diagnosis and classifications of the lip-tie, as it affects an infant's latch onto the mother's breast. As more and more women choose to breastfeed, lip-ties must be considered as an impediment to breastfeeding, recognizing that they can affect a successful, painless latch and milk transfer.",
"title": ""
},
{
"docid": "3860b1d259317da9ac6fe2c2ab161ce3",
"text": "In recent years, state-of-the-art methods in computer vision have utilized increasingly deep convolutional neural network architectures (CNNs), with some of the most successful models employing hundreds or even thousands of layers. A variety of pathologies such as vanishing/exploding gradients make training such deep networks challenging. While residual connections and batch normalization do enable training at these depths, it has remained unclear whether such specialized architecture designs are truly necessary to train deep CNNs. In this work, we demonstrate that it is possible to train vanilla CNNs with ten thousand layers or more simply by using an appropriate initialization scheme. We derive this initialization scheme theoretically by developing a mean field theory for signal propagation and by characterizing the conditions for dynamical isometry, the equilibration of singular values of the input-output Jacobian matrix. These conditions require that the convolution operator be an orthogonal transformation in the sense that it is norm-preserving. We present an algorithm for generating such random initial orthogonal convolution kernels and demonstrate empirically that they enable efficient training of extremely deep architectures.",
"title": ""
},
{
"docid": "00f9290840ba201e23d0ea6149f344e4",
"text": "Despite the plethora of security advice and online education materials offered to end-users, there exists no standard measurement tool for end-user security behaviors. We present the creation of such a tool. We surveyed the most common computer security advice that experts offer to end-users in order to construct a set of Likert scale questions to probe the extent to which respondents claim to follow this advice. Using these questions, we iteratively surveyed a pool of 3,619 computer users to refine our question set such that each question was applicable to a large percentage of the population, exhibited adequate variance between respondents, and had high reliability (i.e., desirable psychometric properties). After performing both exploratory and confirmatory factor analysis, we identified a 16-item scale consisting of four sub-scales that measures attitudes towards choosing passwords, device securement, staying up-to-date, and proactive awareness.",
"title": ""
},
{
"docid": "02dfbd00fcff9601a8f70a334e3da9ba",
"text": "Visual sentiment analysis framework can predict the sentiment of an image by analyzing the image contents. Nowadays, people are uploading millions of images in social networks such as Twitter, Facebook, Google Plus, and Flickr. These images play a crucial part in expressing emotions of users in online social networks. As a result, image sentiment analysis has become important in the area of online multimedia big data research. Several research works are focusing on analyzing the sentiment of the textual contents. However, little investigation has been done to develop models that can predict sentiment of visual content. In this paper, we propose a novel visual sentiment analysis framework using transfer learning approach to predict sentiment. We use hyper-parameters learned from a very deep convolutional neural network to initialize our network model to prevent overfitting. We conduct extensive experiments on a Twitter image dataset and prove that our model achieves better performance than the current state-of-the-art.",
"title": ""
},
{
"docid": "b14a77c6e663af1445e466a3e90d4e5f",
"text": "This paper proposes an approach for applying GANs to NMT. We build a conditional sequence generative adversarial net which comprises of two adversarial sub models, a generator and a discriminator. The generator aims to generate sentences which are hard to be discriminated from human-translated sentences ( i.e., the golden target sentences); And the discriminator makes efforts to discriminate the machine-generated sentences from humantranslated ones. The two sub models play a mini-max game and achieve the win-win situation when they reach a Nash Equilibrium. Additionally, the static sentence-level BLEU is utilized as the reinforced objective for the generator, which biases the generation towards high BLEU points. During training, both the dynamic discriminator and the static BLEU objective are employed to evaluate the generated sentences and feedback the evaluations to guide the learning of the generator. Experimental results show that the proposed model consistently outperforms the traditional RNNSearch and the newly emerged state-ofthe-art Transformer on English-German and Chinese-English translation tasks.",
"title": ""
}
] | scidocsrr |
f06686b4ea6fdc98f10a76b15d4e1d26 | Sensing and Modeling Human Behavior Using Social Media and Mobile Data | [
{
"docid": "8fad55f682270afe6434ec595dbbdeb3",
"text": "It is becoming harder to find an app on one's smart phone due to the increasing number of apps available and installed on smart phones today. We collect sensory data including app use from smart phones, to perform a comprehensive analysis of the context related to mobile app use, and build prediction models that calculate the probability of an app in the current context. Based on these models, we developed a dynamic home screen application that presents icons for the most probable apps on the main screen of the phone and highlights the most probable one. Our models outperformed other strategies, and, in particular, improved prediction accuracy by 8% over Most Frequently Used from 79.8% to 87.8% (for 9 candidate apps). Also, we found that the dynamic home screen improved accessibility to apps on the phone, compared to the conventional static home screen in terms of accuracy, required touch input and app selection time.",
"title": ""
}
] | [
{
"docid": "3fc784bb6e21cd26a5398973d1252029",
"text": "Robots are slowly finding their way into the hands of search and rescue groups. One of the robots contributing to this effort is the Inuktun VGTV-Xtreme series by American Standard Robotics. This capable robot is one of the only robots engineered specifically for the search and rescue domain. This paper describes the adaptation of the VGTV platform from an industrial inspection robot into a capable and versatile search and rescue robot. These adaptations were based on growing requirements established by rescue groups, academic research, and extensive field trials. A narrative description of a successful search of a damaged building during the aftermath of Hurricane Katrina is included to support these claims. Finally, lessons learned from these deployments and guidelines for future robot development is discussed.",
"title": ""
},
{
"docid": "aa0dc468b1b7402e9eb03848af31216e",
"text": "This paper discusses the construction of speech databases for research into speech information processing and describes a problem illustrated by the case of emotional speech synthesis. It introduces a project for the processing of expressive speech, and describes the data collection techniques and the subsequent analysis of supra-linguistic, and emotional features signalled in the speech. It presents annotation guidelines for distinguishing speaking-style differences, and argues that the focus of analysis for expressive speech processing applications should be on the speaker relationships (defined herein), rather than on emotions.",
"title": ""
},
{
"docid": "ea1072f2972dbf15ef8c2d38704a0095",
"text": "The reliability of the microinverter is a very important feature that will determine the reliability of the ac-module photovoltaic (PV) system. Recently, many topologies and techniques have been proposed to improve its reliability. This paper presents a thorough study for different power decoupling techniques in single-phase microinverters for grid-tie PV applications. These power decoupling techniques are categorized into three groups in terms of the decoupling capacitor locations: 1) PV-side decoupling; 2) dc-link decoupling; and 3) ac-side decoupling. Various techniques and topologies are presented, compared, and scrutinized in scope of the size of decoupling capacitor, efficiency, and control complexity. Also, a systematic performance comparison is presented for potential power decoupling topologies and techniques.",
"title": ""
},
{
"docid": "3bca3446ce76b1f1560e037e4041a1de",
"text": "PURPOSE\nThe aim was to describe the demographic and clinical data of 116 consecutive cases of ocular dermoids.\n\n\nMETHODS\nThis was a retrospective case series and a review of clinical records of all the patients diagnosed with ocular dermoids. Both demographic and clinical data were recorded. Statistical analysis was performed with SPSS v. 18. Descriptive statistics are reported.\n\n\nRESULTS\nThe study included 116 consecutive patients with diagnosis consistent with ocular dermoids: corneal 18% (21), dermolipomas 38% (44), and orbital 44% (51). Sixty-five percent (71) were female, and 46% (54) were detected at birth. Secondary manifestations: amblyopia was present in 14% (3), and strabismus was detected in 6.8% (8). The Goldenhar syndrome was the most frequent syndromic entity in 7.5% (12) of the patients. Surgical resection was required on 49% (25) of orbital dermoids, 24% (5) of corneal dermoids, and 13% (6) of dermolipomas.\n\n\nCONCLUSIONS\nOrbital dermoids were the most frequent variety, followed by conjunctival and corneal. In contrast to other reports, corneal dermoids were significantly more prevalent in women. Goldenhar syndrome was the most frequent syndromatic entity.",
"title": ""
},
{
"docid": "41d546266db9b3e9ec5071e4926abb8d",
"text": "Estimating the shape of transparent and refractive objects is one of the few open problems in 3D reconstruction. Under the assumption that the rays refract only twice when traveling through the object, we present the first approach to simultaneously reconstructing the 3D positions and normals of the object's surface at both refraction locations. Our acquisition setup requires only two cameras and one monitor, which serves as the light source. After acquiring the ray-ray correspondences between each camera and the monitor, we solve an optimization function which enforces a new position-normal consistency constraint. That is, the 3D positions of surface points shall agree with the normals required to refract the rays under Snell's law. Experimental results using both synthetic and real data demonstrate the robustness and accuracy of the proposed approach.",
"title": ""
},
{
"docid": "1beb1c36b24f186de59d6c8ef5348dcd",
"text": "We present a new corpus, PersonaBank, consisting of 108 personal stories from weblogs that have been annotated with their STORY INTENTION GRAPHS, a deep representation of the fabula of a story. We describe the topics of the stories and the basis of the STORY INTENTION GRAPH representation, as well as the process of annotating the stories to produce the STORY INTENTION GRAPHs and the challenges of adapting the tool to this new personal narrative domain We also discuss how the corpus can be used in applications that retell the story using different styles of tellings, co-tellings, or as a content planner.",
"title": ""
},
{
"docid": "4f6b8ea6fb0884bbcf6d4a6a4f658e52",
"text": "Ballistocardiography (BCG) enables the recording of heartbeat, respiration, and body movement data from an unconscious human subject. In this paper, we propose a new heartbeat detection algorithm for calculating heart rate (HR) and heart rate variability (HRV) from the BCG signal. The proposed algorithm consists of a moving dispersion calculation method to effectively highlight the respective heartbeat locations and an adaptive heartbeat peak detection method that can set a heartbeat detection window by automatically predicting the next heartbeat location. To evaluate the proposed algorithm, we compared it with other reference algorithms using a filter, waveform analysis and envelope calculation of signal by setting the ECG lead I as the gold standard. The heartbeat detection in BCG should be able to measure sensitively in the regions for lower and higher HR. However, previous detection algorithms are optimized mainly in the region of HR range (60~90 bpm) without considering the HR range of lower (40~60 bpm) and higher (90~110 bpm) HR. Therefore, we proposed an improved method in wide HR range that 40~110 bpm. The proposed algorithm detected the heartbeat greater stability in varying and wider heartbeat intervals as comparing with other previous algorithms. Our proposed algorithm achieved a relative accuracy of 98.29% with a root mean square error (RMSE) of 1.83 bpm for HR, as well as coverage of 97.63% and relative accuracy of 94.36% for HRV. And we obtained the root mean square (RMS) value of 1.67 for separated ranges in HR.",
"title": ""
},
{
"docid": "be447131554900aaba025be449944613",
"text": "Attackers increasingly take advantage of innocent users who tend to casually open email messages assumed to be benign, carrying malicious documents. Recent targeted attacks aimed at organizations utilize the new Microsoft Word documents (*.docx). Anti-virus software fails to detect new unknown malicious files, including malicious docx files. In this paper, we present ALDOCX, a framework aimed at accurate detection of new unknown malicious docx files that also efficiently enhances the framework’s detection capabilities over time. Detection relies upon our new structural feature extraction methodology (SFEM), which is performed statically using meta-features extracted from docx files. Using machine-learning algorithms with SFEM, we created a detection model that successfully detects new unknown malicious docx files. In addition, because it is crucial to maintain the detection model’s updatability and incorporate new malicious files created daily, ALDOCX integrates our active-learning (AL) methods, which are designed to efficiently assist anti-virus vendors by better focusing their experts’ analytical efforts and enhance detection capability. ALDOCX identifies and acquires new docx files that are most likely malicious, as well as informative benign files. These files are used for enhancing the knowledge stores of both the detection model and the anti-virus software. The evaluation results show that by using ALDOCX and SFEM, we achieved a high detection rate of malicious docx files (94.44% TPR) compared with the anti-virus software (85.9% TPR)—with very low FPR rates (0.19%). ALDOCX’s AL methods used only 14% of the labeled docx files, which led to a reduction of 95.5% in security experts’ labeling efforts compared with the passive learning and the support vector machine (SVM)-Margin (existing active-learning method). Our AL methods also showed a significant improvement of 91% in number of unknown docx malware acquired, compared with the passive learning and the SVM-Margin, thus providing an improved updating solution for the detection model, as well as the anti-virus software widely used within organizations.",
"title": ""
},
{
"docid": "68720a44720b4d80e661b58079679763",
"text": "The value of involving people as ‘users’ or ‘participants’ in the design process is increasingly becoming a point of debate. In this paper we describe a new framework, called ‘informant design’, which advocates efficiency of input from different people: maximizing the value of contributions tlom various informants and design team members at different stages of the design process. To illustrate how this can be achieved we describe a project that uses children and teachers as informants at difTerent stages to help us design an interactive learning environment for teaching ecology.",
"title": ""
},
{
"docid": "d79f92819d5485f2631897befd686416",
"text": "Information visualization is meant to support the analysis and comprehension of (often large) datasets through techniques intended to show/enhance features, patterns, clusters and trends, not always visible even when using a graphical representation. During the development of information visualization techniques the designer has to take into account the users' tasks to choose the graphical metaphor as well as the interactive methods to be provided. Testing and evaluating the usability of information visualization techniques are still a research question, and methodologies based on real or experimental users often yield significant results. To be comprehensive, however, experiments with users must rely on a set of tasks that covers the situations a real user will face when using the visualization tool. The present work reports and discusses the results of three case studies conducted as Multi-dimensional In-depth Long-term Case studies. The case studies were carried out to investigate MILCs-based usability evaluation methods for visualization tools.",
"title": ""
},
{
"docid": "17ec5256082713e85c819bb0a0dd3453",
"text": "Scholarly documents contain multiple figures representing experimental findings. These figures are generated from data which is not reported anywhere else in the paper. We propose a modular architecture for analyzing such figures. Our architecture consists of the following modules: 1. An extractor for figures and associated metadata (figure captions and mentions) from PDF documents; 2. A Search engine on the extracted figures and metadata; 3. An image processing module for automated data extraction from the figures and 4. A natural language processing module to understand the semantics of the figure. We discuss the challenges in each step, report an extractor algorithm to extract vector graphics from scholarly documents and a classification algorithm for figures. Our extractor algorithm improves the state of the art by more than 10% and the classification process is very scalable, yet achieves 85\\% accuracy. We also describe a semi-automatic system for data extraction from figures which is integrated with our search engine to improve user experience.",
"title": ""
},
{
"docid": "2ae3a8bf304cfce89e8fcd331d1ec733",
"text": "Linear Discriminant Analysis (LDA) is among the most optimal dimension reduction methods for classification, which provides a high degree of class separability for numerous applications from science and engineering. However, problems arise with this classical method when one or both of the scatter matrices is singular. Singular scatter matrices are not unusual in many applications, especially for highdimensional data. For high-dimensional undersampled and oversampled problems, the classical LDA requires modification in order to solve a wider range of problems. In recent work the generalized singular value decomposition (GSVD) has been shown to mitigate the issue of singular scatter matrices, and a new algorithm, LDA/GSVD, has been shown to be very robust for many applications in machine learning. However, the GSVD inherently has a considerable computational overhead. In this paper, we propose fast algorithms based on the QR decomposition and regularization that solve the LDA/GSVD computational bottleneck. In addition, we present fast algorithms for classical LDA and regularized LDA utilizing the framework based on LDA/GSVD and preprocessing by the Cholesky decomposition. Experimental results are presented that demonstrate substantial speedup in all of classical LDA, regularized LDA, and LDA/GSVD algorithms without any sacrifice in classification performance for a wide range of machine learning applications.",
"title": ""
},
{
"docid": "66f76354b6470a49f18300f67e47abd0",
"text": "Technologies in museums often support learning goals, providing information about exhibits. However, museum visitors also desire meaningful experiences and enjoy the social aspects of museum-going, values ignored by most museum technologies. We present ArtLinks, a visualization with three goals: helping visitors make connections to exhibits and other visitors by highlighting those visitors who share their thoughts; encouraging visitors' reflection on the social and liminal aspects of museum-going and their expectations of technology in museums; and doing this with transparency, aligning aesthetically pleasing elements of the design with the goals of connection and reflection. Deploying ArtLinks revealed that people have strong expectations of technology as an information appliance. Despite these expectations, people valued connections to other people, both for their own sake and as a way to support meaningful experience. We also found several of our design choices in the name of transparency led to unforeseen tradeoffs between the social and the liminal.",
"title": ""
},
{
"docid": "db7edbb1a255e9de8486abbf466f9583",
"text": "Nowadays, adopting an optimized irrigation system has become a necessity due to the lack of the world water resource. The system has a distributed wireless network of soil-moisture and temperature sensors. This project focuses on a smart irrigation system which is cost effective. As the technology is growing and changing rapidly, Wireless sensing Network (WSN) helps to upgrade the technology where automation is playing important role in human life. Automation allows us to control various appliances automatically. DC motor based vehicle is designed for irrigation purpose. The objectives of this paper were to control the water supply to each plant automatically depending on values of temperature and soil moisture sensors. Mechanism is done such that soil moisture sensor electrodes are inserted in front of each soil. It also monitors the plant growth using various parameters like height and width. Android app.",
"title": ""
},
{
"docid": "99fdab0b77428f98e9486d1cc7430757",
"text": "Self organizing Maps (SOMs) are most well-known, unsupervised approach of neural network that is used for clustering and are very efficient in handling large and high dimensional dataset. As SOMs can be applied on large complex set, so it can be implemented to detect credit card fraud. Online banking and ecommerce has been experiencing rapid growth over past years and will show tremendous growth even in future. So, it is very necessary to keep an eye on fraudsters and find out some ways to depreciate the rate of frauds. This paper focuses on Real Time Credit Card Fraud Detection and presents a new and innovative approach to detect the fraud by the help of SOM. Keywords— Self-Organizing Map, Unsupervised Learning, Transaction Introduction The fast and rapid growth in the credit card issuers, online merchants and card users have made them very conscious about the online frauds. Card users just want to make safe transactions while purchasing their goods and on the other hand, banks want to differentiate the legitimate as well as fraudulent users. The merchants that is mostly affected as they do not have any kind of evidence like Digital Signature wants to sell their goods only to the legitimate users to make profit and want to use a great secure system that avoid them from a great loss. Our approach of Self Organizing map can work in the large complex datasets and can cluster even unaware datasets. It is an unsupervised neural network that works even in the absence of an external teacher and provides fruitful results in detecting credit card frauds. It is interesting to note that credit card fraud affect owner the least and merchant the most. The existing legislation and card holder protection policies as well as insurance scheme affect most the merchant and customer the least. Card issuer bank also has to pay the administrative cost and infrastructure cost. Studies show that average time lag between the fraudulent transaction dates and charge back notification 1344 Mitali Bansal and Suman can be high as 72 days, thereby giving fraudster sufficient time to cause severe damage. In this paper first, you will see a brief survey of different approaches on credit card fraud detection systems,. In Section 2 we explain the design and architecture of SOM to detect Credit Card Fraud. Section 3, will represent results. Finally, Conclusion are presented in Section 4. A Survey of Credit card fraud Detection Fraud Detection Systems work by trying to identify anomalies in an environment [1]. At the early stage, the research focus lies in using rule based expert systems. The model’s rule constructed through the input of many fraud experts within the bank [2]. But when their processing is encountered, their output become was worst. Because the rule based expert system totally lies on the prior information of the data set that is generally not available easily in the case of credit card frauds. After these many Artificial Neural Network (ANN) is mostly used and solved very complex problems in a very efficient way [3]. Some believe that unsupervised methods are best to detect credit card frauds because these methods work well even in absence of external teacher. While supervised methods are based on prior data knowledge and surely needs an external teacher. Unsupervised method is used [4] [5] to detect some kind of anomalies like fraud. 
They do not cluster the data but provide a ranking over the list of all segments; through this ranking they indicate how anomalous a segment is compared with the whole data set or other segments [6]. Dempster-Shafer theory [1] is also able to detect anomalous data; the authors conducted an experiment to detect infected e-mails with the help of D-S theory. This theory is relevant here because, in the modern era, new card information is often sent through e-mails by banks. Various other approaches have also been used to detect credit card fraud, one of which is the ID3 pre-pruning method, in which a decision tree is formed to detect anomalous data [7]. Artificial Neural Networks are another efficient and intelligent method for detecting credit card fraud; a compound method based on rule-based systems and ANN was used to detect credit card fraud by Brause et al. [8]. Our work is based on the Self-Organizing Map, an unsupervised approach to detecting credit card fraud. We focus on detecting anomalous data by forming clusters so that legitimate and fraudulent transactions can be differentiated. Collection of data and its pre-processing are also explained with an example in fraud detection. System Design Architecture: The SOM works well in detecting credit card fraud, and its interesting properties have already been discussed. Here we provide a detailed prototype and the working of SOM in fraud detection. Our approach to detecting credit card fraud using SOM: Our approach to real-time credit card fraud detection is modelled by a prototype. It is a multilayered approach: 1. Initial selection of the data set. 2. Conversion of the data from a symbolic to a numerical data set. 3. Implementation of SOM. 4. A layer of further review and decision making. This multilayered approach works well in the detection of credit card fraud. As the approach is based on SOM, it finally clusters the data into fraudulent and genuine sets. By further review, the sets can be analyzed and proper decisions can be taken based on those results. The algorithm implemented to detect credit card fraud using the Self-Organizing Map is represented in Figure 1: 1. Initially choose all neurons (weight vectors wi) randomly. 2. For each input vector Ii { 2.1) Convert all symbolic input to numerical input by applying mean and standard deviation formulas. 2.2) Perform the initial authentication process, such as verification of PIN, address, expiry date, etc. } 3. Choose the learning rate parameter randomly, e.g. 0.5. 4. Initially update all neurons for each input vector Ii. 5. Apply the unsupervised approach to separate the transactions into fraudulent and non-fraudulent clusters. 5.1) Iterate until a specific cluster is formed for an input vector. 6. By applying SOM we can divide the transactions into fraudulent (Fk) and genuine (Gk) vectors. 7. Perform a manual review decision. 8. Obtain the optimized result. (Figure 1: Algorithm to detect credit card fraud.) Initial Selection of the Data Set: Input vectors are generally high-dimensional real-world quantities which are fed to a neuron matrix. These quantities are generally divided as follows [9] (Figure 2: Division of transactions to form an input matrix): account-related quantities include the account number, currency of the account, account opening date, last date of credit or debit, available balance, etc.
Customer-related quantities include the customer ID and the customer type (e.g., high profile, low profile). Transaction-related quantities include the transaction number, location, currency, timestamp, etc. Conversion of Symbolic Data into Numeric: In credit card fraud detection, most banking transaction data (for example location, name, customer ID) is symbolic, so there is a need to convert the symbolic data into numeric form. Converting this data requires a normal-distribution mechanism on the basis of frequency. The data is normalized using Z = (Ni - M) / S, where Ni is the frequency of occurrence of a particular entity, M is the mean, and S is the standard deviation. After this procedure we arrive at normalized values [9]. Implementation of SOM: After obtaining all the normalized values, we build an input vector matrix. A weight vector is then selected randomly; this is generally termed the neuron matrix, and its dimension is the same as that of the input vector matrix. A learning parameter α is also chosen at random; it is a small positive value that can be adjusted during the process. The commonly used similarity measure is the Euclidean distance given by equation (1): j_X(p) = min_j ||X - W_j(p)|| = [ Σ_i (X_i - W_ij(p))^2 ]^(1/2), j = 1, 2, ..., m, (1) where W is the neuron (weight) matrix and X is the input vector. The main output of the SOM is the patterns and clusters it produces as output vectors. In credit card fraud detection the clusters take the form of a fraudulent set and a genuine set, represented as Fk and Gk respectively. Review and Decision Making: The clustering of input data into fraudulent and genuine sets shows the categories of transactions performed frequently as well as rarely by each customer. Since relationships as well as hidden patterns are unearthed with the help of SOM, we obtain more accurate results. If the extent of suspicious activity exceeds a certain threshold value, the transaction can be sent for review, which reduces overall processing time and complexity. Results: The numbers of transactions taken in Test1, Test2, Test3, and Test4 are 500, 1000, 1500, and 2000 respectively. When compared to the ID3 algorithm, our approach gives a much more efficient result, as shown in Figure 3. Conclusion: The results show that SOM gives better results in detecting credit card fraud; all parameters are verified and well represented in the plots. The uniqueness of our approach lies in using the normalization and clustering mechanism of SOM for detecting credit card fraud. This helps in detecting hidden patterns in the transactions which cannot be identified by other traditional methods. With an appropriate number of weight neurons and with the help of thousands of iterations, the network is trained and the result is then verified on new transactions. The concept of normalization will help to normalize the values in other fraud cases, and SOM will be helpful in detecting anomalies in credit card fraud cases",
"title": ""
},
{
"docid": "fa1b427e152ee84b8c38687ab84d1f7c",
"text": "We investigate learning to probabilistically bypass computations in a network architecture. Our approach is motivated by AIG [44], where layers are conditionally executed depending on their inputs, and the network is trained against a target bypass rate using a per-layer loss. We propose a per-batch loss function, and describe strategies for handling probabilistic bypass during inference as well as training. Per-batch loss allows the network additional flexibility. In particular, a form of mode collapse becomes plausible, where some layers are nearly always bypassed and some almost never; such a configuration is strongly discouraged by AIG’s per-layer loss. We explore several inference-time strategies, including the natural MAP approach. With data-dependent bypass, we demonstrate improved performance over AIG. With data-independent bypass, as in stochastic depth [18], we observe mode collapse and effectively prune layers. We demonstrate our techniques on ResNet-50 and ResNet-101 [11] for ImageNet [3], where our techniques produce improved accuracy (.15–.41% in precision@1) with substantially less computation (bypassing 25–40% of the layers).",
"title": ""
},
{
"docid": "7e17c1842a70e416f0a90bdcade31a8e",
"text": "A novel feeding system using substrate integrated waveguide (SIW) technique for antipodal linearly tapered slot array antenna (ALTSA) is presented in this paper. After making studies by simulations for a SIW fed ALTSA cell, a 1/spl times/8 ALTSA array fed by SIW feeding system at X-band is fabricated and measured, and the measured results show that this array antenna has a wide bandwidth and good performances.",
"title": ""
},
{
"docid": "0dbca0a2aec1b27542463ff80fc4f59d",
"text": "An emerging research area named Learning-to-Rank (LtR) has shown that effective solutions to the ranking problem can leverage machine learning techniques applied to a large set of features capturing the relevance of a candidate document for the user query. Large-scale search systems must however answer user queries very fast, and the computation of the features for candidate documents must comply with strict back-end latency constraints. The number of features cannot thus grow beyond a given limit, and Feature Selection (FS) techniques have to be exploited to find a subset of features that both meets latency requirements and leads to high effectiveness of the trained models. In this paper, we propose three new algorithms for FS specifically designed for the LtR context where hundreds of continuous or categorical features can be involved. We present a comprehensive experimental analysis conducted on publicly available LtR datasets and we show that the proposed strategies outperform a well-known state-of-the-art competitor.",
"title": ""
},
{
"docid": "dacf68b5e159211d6e9bb8983ef8bb3c",
"text": "Analog-to-Digital converters plays vital role in medical and signal processing applications. Normally low power ADC's were required for long term and battery operated applications. SAR ADC is best suited for low power, medium resolution and moderate speed applications. This paper presents a 10-bit low power SAR ADC which is simulated in 180nm CMOS technology. Based on literature survey, low power consumption is attained by using Capacitive DAC. Capacitive DAC also incorporate Sample-and-Hold circuit in it. Dynamic latch comparator is used to increase in speed of operation and to get lower power consumption.",
"title": ""
},
{
"docid": "feef714b024ad00086a5303a8b74b0a4",
"text": "Detecting and recognizing text in natural scene images is a challenging, yet not completely solved task. In recent years several new systems that try to solve at least one of the two sub-tasks (text detection and text recognition) have been proposed. In this paper we present STN-OCR, a step towards semi-supervised neural networks for scene text recognition that can be optimized end-to-end. In contrast to most existing works that consist of multiple deep neural networks and several pre-processing steps we propose to use a single deep neural network that learns to detect and recognize text from natural images in a semi-supervised way. STN-OCR is a network that integrates and jointly learns a spatial transformer network [16], that can learn to detect text regions in an image, and a text recognition network that takes the identified text regions and recognizes their textual content. We investigate how our model behaves on a range of different tasks (detection and recognition of characters, and lines of text). Experimental results on public benchmark datasets show the ability of our model to handle a variety of different tasks, without substantial changes in its overall network structure.",
"title": ""
}
] | scidocsrr |
6ea8102cc982f2bec5f454d7772f7c77 | A Humanized Version of Foxp2 Affects Cortico-Basal Ganglia Circuits in Mice | [
{
"docid": "54a8620e5f7ea945eabd0ed5420cefb3",
"text": "The cellular heterogeneity of the brain confounds efforts to elucidate the biological properties of distinct neuronal populations. Using bacterial artificial chromosome (BAC) transgenic mice that express EGFP-tagged ribosomal protein L10a in defined cell populations, we have developed a methodology for affinity purification of polysomal mRNAs from genetically defined cell populations in the brain. The utility of this approach is illustrated by the comparative analysis of four types of neurons, revealing hundreds of genes that distinguish these four cell populations. We find that even two morphologically indistinguishable, intermixed subclasses of medium spiny neurons display vastly different translational profiles and present examples of the physiological significance of such differences. This genetically targeted translating ribosome affinity purification (TRAP) methodology is a generalizable method useful for the identification of molecular changes in any genetically defined cell type in response to genetic alterations, disease, or pharmacological perturbations.",
"title": ""
}
] | [
{
"docid": "7e1f0cd43cdc9685474e19b7fd65791b",
"text": "Understanding human actions is a key problem in computer vision. However, recognizing actions is only the first step of understanding what a person is doing. In this paper, we introduce the problem of predicting why a person has performed an action in images. This problem has many applications in human activity understanding, such as anticipating or explaining an action. To study this problem, we introduce a new dataset of people performing actions annotated with likely motivations. However, the information in an image alone may not be sufficient to automatically solve this task. Since humans can rely on their lifetime of experiences to infer motivation, we propose to give computer vision systems access to some of these experiences by using recently developed natural language models to mine knowledge stored in massive amounts of text. While we are still far away from fully understanding motivation, our results suggest that transferring knowledge from language into vision can help machines understand why people in images might be performing an action.",
"title": ""
},
{
"docid": "0e218dd5654ae9125d40bdd5c0a326d6",
"text": "Dynamic data race detection incurs heavy runtime overheads. Recently, many sampling techniques have been proposed to detect data races. However, some sampling techniques (e.g., Pacer) are based on traditional happens-before relation and incur a large basic overhead. Others utilize hardware to reduce their sampling overhead (e.g., DataCollider) and they, however, detect a race only when the race really occurs by delaying program executions. In this paper, we study the limitations of existing techniques and propose a new data race definition, named as Clock Races, for low overhead sampling purpose. The innovation of clock races is that the detection of them does not rely on concrete locks and also avoids heavy basic overhead from tracking happens-before relation. We further propose CRSampler (Clock Race Sampler) to detect clock races via hardware based sampling without directly delaying program executions, to further reduce runtime overhead. We evaluated CRSampler on Dacapo benchmarks. The results show that CRSampler incurred less than 5% overhead on average at 1% sampling rate. Whereas, Pacer and DataCollider incurred larger than 25% and 96% overhead, respectively. Besides, at the same sampling rate, CRSampler detected significantly more data races than that by Pacer and DataCollider.",
"title": ""
},
{
"docid": "a1bef11b10bc94f84914d103311a5941",
"text": "Class imbalance and class overlap are two of the major problems in data mining and machine learning. Several studies have shown that these data complexities may affect the performance or behavior of artificial neural networks. Strategies proposed to face with both challenges have been separately applied. In this paper, we introduce a hybrid method for handling both class imbalance and class overlap simultaneously in multi-class learning problems. Experimental results on five remote sensing data show that the combined approach is a promising method. 2012 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "522a9deb3926d067686d4c26354a78f7",
"text": "The golden age of cannabis pharmacology began in the 1960s as Raphael Mechoulam and his colleagues in Israel isolated and synthesized cannabidiol, tetrahydrocannabinol, and other phytocannabinoids. Initially, THC garnered most research interest with sporadic attention to cannabidiol, which has only rekindled in the last 15 years through a demonstration of its remarkably versatile pharmacology and synergy with THC. Gradually a cognizance of the potential of other phytocannabinoids has developed. Contemporaneous assessment of cannabis pharmacology must be even far more inclusive. Medical and recreational consumers alike have long believed in unique attributes of certain cannabis chemovars despite their similarity in cannabinoid profiles. This has focused additional research on the pharmacological contributions of mono- and sesquiterpenoids to the effects of cannabis flower preparations. Investigation reveals these aromatic compounds to contribute modulatory and therapeutic roles in the cannabis entourage far beyond expectations considering their modest concentrations in the plant. Synergistic relationships of the terpenoids to cannabinoids will be highlighted and include many complementary roles to boost therapeutic efficacy in treatment of pain, psychiatric disorders, cancer, and numerous other areas. Additional parts of the cannabis plant provide a wide and distinct variety of other compounds of pharmacological interest, including the triterpenoid friedelin from the roots, canniprene from the fan leaves, cannabisin from seed coats, and cannflavin A from seed sprouts. This chapter will explore the unique attributes of these agents and demonstrate how cannabis may yet fulfil its potential as Mechoulam's professed \"pharmacological treasure trove.\"",
"title": ""
},
{
"docid": "5bdf4585df04c00ebcf00ce94a86ab38",
"text": "High-voltage pulse-generators can be used effectively for bacterial decontamination in water treatment applications. Applying a pulsed electric field to the infected water sample guarantees killing of harmful germs and bacteria. In this paper, a modular high-voltage pulse-generator with sequential charging is proposed for water treatment via underwater pulsed streamer corona discharge. The proposed generator consists of series-connected modules similar to an arm of a modular multilevel converter. The modules' capacitors are charged sequentially from a relatively low-voltage dc supply, then they are connected in series and discharged into the load. Two configurations are proposed in this paper, one for low repetitive pulse rate applications, and the other for high repetitive pulse rates. In the first topology, the equivalent resistance of the infected water sample is used as a charging resistance for the generator's capacitors during the charging process. While in the second topology, the water resistance is bypassed during the charging process, and an external charging resistance with proper value is used instead. In this paper, detailed designs for the proposed pulse-generators are presented and validated by simulation results using MATLAB. A scaled down experimental setup has been built to show the viability of the proposed concept.",
"title": ""
},
{
"docid": "11ce5bca8989b3829683430abe2aee47",
"text": "Android is the most popular smartphone operating system with a market share of 80%, but as a consequence, also the platform most targeted by malware. To deal with the increasing number of malicious Android apps in the wild, malware analysts typically rely on analysis tools to extract characteristic information about an app in an automated fashion. While the importance of such tools has been addressed by the research community, the resulting prototypes remain limited in terms of analysis capabilities and availability. In this paper we present ANDRUBIS, a fully automated, publicly available and comprehensive analysis system for Android apps. ANDRUBIS combines static analysis with dynamic analysis on both Dalvik VM and system level, as well as several stimulation techniques to increase code coverage. With ANDRUBIS, we collected a dataset of over 1,000,000 Android apps, including 40% malicious apps. This dataset allows us to discuss trends in malware behavior observed from apps dating back as far as 2010, as well as to present insights gained from operating ANDRUBIS as a publicly available service for the past two years.",
"title": ""
},
{
"docid": "a633e3714f730d53c7dd9719a18496de",
"text": "This paper addresses the problem of controlling a robot arm executing a cooperative task with a human who guides the robot through direct physical interaction. This problem is tackled by allowing the end effector to comply according to an impedance control law defined in the Cartesian space. While, in principle, the robot's dynamics can be fully compensated and any impedance behaviour can be imposed by the control, the stability of the coupled human-robot system is not guaranteed for any value of the impedance parameters. Moreover, if the robot is kinematically or functionally redundant, the redundant degrees of freedom play an important role. The idea proposed here is to use redundancy to ensure a decoupled apparent inertia at the end effector. Through an extensive experimental study on a 7-DOF KUKA LWR4 arm, we show that inertial decoupling enables a more flexible choice of the impedance parameters and improves the performance during manual guidance.",
"title": ""
},
{
"docid": "27b5e0594305a81c6fad15567ba1f3b9",
"text": "A novel approach to the design of series-fed antenna arrays has been presented, in which a modified three-way slot power divider is applied. In the proposed coupler, the power division is adjusted by changing the slot inclination with respect to the transmission line, whereas coupled transmission lines are perpendicular. The proposed modification reduces electrical length of the feeding line to <formula formulatype=\"inline\"><tex Notation=\"TeX\">$1 \\lambda$</tex></formula>, hence results in dissipation losses' reduction. The theoretical analysis and measurement results of the 2<formula formulatype=\"inline\"> <tex Notation=\"TeX\">$\\, \\times \\,$</tex></formula>8 microstrip antenna array operating within 10.5-GHz frequency range are shown in the letter, proving the novel inclined-slot power divider's capability to provide appropriate power distribution and its potential application in the large antenna arrays.",
"title": ""
},
{
"docid": "ed33b5fae6bc0af64668b137a3a64202",
"text": "In this study the effect of the Edmodo social learning environment on mobile assisted language learning (MALL) was examined by seeking the opinions of students. Using a quantitative experimental approach, this study was conducted by conducting a questionnaire before and after using the social learning network Edmodo. Students attended lessons with their mobile devices. The course materials were shared in the network via Edmodo group sharing tools. The students exchanged idea and developed projects, and felt as though they were in a real classroom setting. The students were also able to access various multimedia content. The results of the study indicate that Edmodo improves students’ foreign language learning, increases their success, strengthens communication among students, and serves as an entertaining learning environment for them. The educationally suitable sharing structure and the positive user opinions described in this study indicate that Edmodo is also usable in other lessons. Edmodo can be used on various mobile devices, including smartphones, in addition to the web. This advantageous feature contributes to the usefulness of Edmodo as a scaffold for education.",
"title": ""
},
{
"docid": "e0919f53691d17c7cb495c19914683f8",
"text": "Carpooling has long held the promise of reducing gas consumption by decreasing mileage to deliver coriders. Although ad hoc carpools already exist in the real world through private arrangements, little research on the topic has been done. In this article, we present the first systematic work to design, implement, and evaluate a carpool service, called coRide, in a large-scale taxicab network intended to reduce total mileage for less gas consumption. Our coRide system consists of three components, a dispatching cloud server, passenger clients, and an onboard customized device, called TaxiBox. In the coRide design, in response to the delivery requests of passengers, dispatching cloud servers calculate cost-efficient carpool routes for taxicab drivers and thus lower fares for the individual passengers.\n To improve coRide’s efficiency in mileage reduction, we formulate an NP-hard route calculation problem under different practical constraints. We then provide (1) an optimal algorithm using Linear Programming, (2) a 2-approximation algorithm with a polynomial complexity, and (3) its corresponding online version with a linear complexity. To encourage coRide’s adoption, we present a win-win fare model as the incentive mechanism for passengers and drivers to participate. We test the performance of coRide by a comprehensive evaluation with a real-world trial implementation and a data-driven simulation with 14,000 taxi data from the Chinese city Shenzhen. The results show that compared with the ground truth, our service can reduce 33% of total mileage; with our win-win fare model, we can lower passenger fares by 49% and simultaneously increase driver profit by 76%.",
"title": ""
},
{
"docid": "f35db13e8b2afd0f23c421bd8828af35",
"text": "In this paper, we report a novel flexible tactile sensor array for an anthropomorphic artificial hand with the capability of measuring both normal and shear force distributions using quantum tunneling composite as a base material. There are four fan-shaped electrodes in a cell that decompose the contact force into normal and shear components. The sensor has been realized in a 2 × 6 array of unit sensors, and each unit sensor responds to normal and shear stresses in all three axes. By applying separated drops of conductive polymer instead of a full layer, cross-talk between the sensor cells is decreased. Furthermore, the voltage mirror method is used in this circuit to avoid crosstalk effect, which is based on a programmable system-on-chip. The measurement of a single sensor shows that the full-scale range of detectable forces are about 20, 8, and 8 N for the x-, y-, and z-directions, respectively. The sensitivities of a cell measured with a current setup are 0.47, 0.45, and 0.16 mV/mN for the x-, y-, and y-directions, respectively. The sensor showed a high repeatability, low hysteresis, and minimum tactile crosstalk. The proposed flexible three-axial tactile sensor array can be applied in a curved or compliant surface that requires slip detection and flexibility, such as a robotic finger.",
"title": ""
},
{
"docid": "ab474cc2128d488a884602a247b4e7b2",
"text": "Trajectory outlier detection is a fundamental building block for many location-based service (LBS) applications, with a large application base. We dedicate this paper on detecting the outliers from vehicle trajectories efficiently and effectively. In addition, we want our solution to be able to issue an alarm early when an outlier trajectory is only partially observed (i.e., the trajectory has not yet reached the destination). Most existing works study the problem on general Euclidean trajectories and require accesses to the historical trajectory database or computations on the distance metric that are very expensive. Furthermore, few of existing works consider some specific characteristics of vehicles trajectories (e.g., their movements are constrained by the underlying road networks), and majority of them require the input of complete trajectories. Motivated by this, we propose a vehicle outlier detection approach namely DB-TOD which is based on probabilistic model via modeling the driving behavior/preferences from the set of historical trajectories. We design outlier detection algorithms on both complete trajectory and partial one. Our probabilistic model-based approach makes detecting trajectory outlier extremely efficient while preserving the effectiveness, contributed by the relatively accurate model on driving behavior. We conduct comprehensive experiments using real datasets and the results justify both effectiveness and efficiency of our approach.",
"title": ""
},
{
"docid": "e6bb77b8f16e17b674d6baada5ac9b87",
"text": "Art is a uniquely human activity associated fundamentally with symbolic and abstract cognition. Its practice in human societies throughout the world, coupled with seeming non-functionality, has led to three major brain theories of art. (1) The localized brain regions and pathways theory links art to multiple neural regions. (2) The display of art and its aesthetics theory is tied to the biological motivation of courtship signals and mate selection strategies in animals. (3) The evolutionary theory links the symbolic nature of art to critical pivotal brain changes in Homo sapiens supporting increased development of language and hierarchical social grouping. Collectively, these theories point to art as a multi-process cognition dependent on diverse brain regions and on redundancy in art-related functional representation.",
"title": ""
},
{
"docid": "e6cba9e178f568c402be7b25c4f0777f",
"text": "This paper is a tutorial introduction to the Viterbi Algorithm, this is reinforced by an example use of the Viterbi Algorithm in the area of error correction in communications channels. Some extensions to the basic algorithm are also discussed briefly. Some of the many application areas where the Viterbi Algorithm has been used are considered, including it's use in communications, target tracking and pattern recognition problems. A proposal for further research into the use of the Viterbi Algorithm in Signature Verification is then presented, and is the area of present research at the moment.",
"title": ""
},
{
"docid": "9a79a9b2c351873143a8209d37b46f64",
"text": "The authors review research on police effectiveness in reducing crime, disorder, and fear in the context of a typology of innovation in police practices. That typology emphasizes two dimensions: one concerning the diversity of approaches, and the other, the level of focus. The authors find that little evidence supports the standard model of policing—low on both of these dimensions. In contrast, research evidence does support continued investment in police innovations that call for greater focus and tailoring of police efforts, combined with an expansion of the tool box of policing beyond simple law enforcement. The strongest evidence of police effectiveness in reducing crime and disorder is found in the case of geographically focused police practices, such as hot-spots policing. Community policing practices are found to reduce fear of crime, but the authors do not find consistent evidence that community policing (when it is implemented without models of problem-oriented policing) affects either crime or disorder. A developing body of evidence points to the effectiveness of problemoriented policing in reducing crime, disorder, and fear. More generally, the authors find that many policing practices applied broadly throughout the United States either have not been the subject of systematic research or have been examined in the context of research designs that do not allow practitioners or policy makers to draw very strong conclusions.",
"title": ""
},
{
"docid": "a4fb1919a1bf92608a55bc3feedf897d",
"text": "We develop an algebraic framework, Logic Programming Doctrines, for the syntax, proof theory, operational semantics and model theory of Horn Clause logic programming based on indexed premonoidal categories. Our aim is to provide a uniform framework for logic programming and its extensions capable of incorporating constraints, abstract data types, features imported from other programming language paradigms and a mathematical description of the state space in a declarative manner. We define a new way to embed information about data into logic programming derivations by building a sketch-like description of data structures directly into an indexed category of proofs. We give an algebraic axiomatization of bottom-up semantics in this general setting, describing categorical models as fixed points of a continuous operator. © 2009 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "b151d236ce17b4d03b384a29dbb91330",
"text": "To investigate the blood supply to the nipple areola complex (NAC) on thoracic CT angiograms (CTA) to improve breast pedicle design in reduction mammoplasty. In a single centre, CT scans of the thorax were retrospectively reviewed for suitability by a cardiothoracic radiologist. Suitable scans had one or both breasts visible in extended fields, with contrast enhancement of breast vasculature in a female patient. The arterial sources, intercostal space perforated, glandular/subcutaneous course, vessel entry point, and the presence of periareolar anastomoses were recorded for the NAC of each breast. From 69 patients, 132 breasts were suitable for inclusion. The most reproducible arterial contribution to the NAC was perforating branches arising from the internal thoracic artery (ITA) (n = 108, 81.8%), followed by the long thoracic artery (LTA) (n = 31, 23.5%) and anterior intercostal arteries (AI) (n = 21, 15.9%). Blood supply was superficial versus deep in (n = 86, 79.6%) of ITA sources, (n = 28, 90.3%) of LTA sources, and 10 (47.6%) of AI sources. The most vascularly reliable breast pedicle would be asymmetrical in 7.9% as a conservative estimate. We suggest that breast CT angiography can provide valuable information about NAC blood supply to aid customised pedicle design, especially in high-risk, large-volume breast reductions where the risk of vascular-dependent complications is the greatest and asymmetrical dominant vasculature may be present. Superficial ITA perforator supplies are predominant in a majority of women, followed by LTA- and AIA-based sources, respectively.",
"title": ""
},
{
"docid": "26db4ecbc2ad4b8db0805b06b55fe27d",
"text": "The advent of high voltage (HV) wide band-gap power semiconductor devices has enabled the medium voltage (MV) grid tied operation of non-cascaded neutral point clamped (NPC) converters. This results in increased power density, efficiency as well as lesser control complexity. The multi-chip 15 kV/40 A SiC IGBT and 15 kV/20 A SiC MOSFET are two such devices which have gained attention for MV grid interface applications. Such converters based on these devices find application in active power filters, STATCOM or as active front end converters for solid state transformers. This paper presents an experimental comparative evaluation of these two SiC devices for 3-phase grid connected applications using a 3-level NPC converter as reference. The IGBTs are generally used for high power applications due to their lower conduction loss while MOSFETs are used for high frequency applications due to their lower switching loss. The thermal performance of these devices are compared based on device loss characteristics, device heat-run tests, 3-level pole heat-run tests, PLECS thermal simulation based loss comparison and MV experiments on developed hardware prototypes. The impact of switching frequency on the harmonic control of the grid connected converter is also discussed and suitable device is selected for better grid current THD.",
"title": ""
},
{
"docid": "fb7c268419d798587e1675a5a1a37232",
"text": "Recently, image representation built upon Convolutional Neural Network (CNN) has been shown to provide effective descriptors for image search, outperforming pre-CNN features as short-vector representations. Yet such models are not compatible with geometry-aware re-ranking methods and still outperformed, on some particular object retrieval benchmarks, by traditional image search systems relying on precise descriptor matching, geometric re-ranking, or query expansion. This work revisits both retrieval stages, namely initial search and re-ranking, by employing the same primitive information derived from the CNN. We build compact feature vectors that encode several image regions without the need to feed multiple inputs to the network. Furthermore, we extend integral images to handle max-pooling on convolutional layer activations, allowing us to efficiently localize matching objects. The resulting bounding box is finally used for image reranking. As a result, this paper significantly improves existing CNN-based recognition pipeline: We report for the first time results competing with traditional methods on the challenging Oxford5k and Paris6k datasets.",
"title": ""
},
{
"docid": "f383dd5dd7210105406c2da80cf72f89",
"text": "We present a new, \"greedy\", channel-router that is quick, simple, and highly effective. It always succeeds, usually using no more than one track more than required by channel density. (It may be forced in rare cases to make a few connections \"off the end\" of the channel, in order to succeed.) It assumes that all pins and wiring lie on a common grid, and that vertical wires are on one layer, horizontal on another. The greedy router wires up the channel in a left-to-right, column-by-column manner, wiring each column completely before starting the next. Within each column the router tries to maximize the utility of the wiring produced, using simple, \"greedy\" heuristics. It may place a net on more than one track for a few columns, and \"collapse\" the net to a single track later on, using a vertical jog. It may also use a jog to move a net to a track closer to its pin in some future column. The router may occasionally add a new track to the channel, to avoid \"getting stuck\".",
"title": ""
}
] | scidocsrr |
fcc403f4319dc81eba63c968aaaf8c51 | Thin Structures in Image Based Rendering | [
{
"docid": "112f10eb825a484850561afa7c23e71f",
"text": "We describe an image based rendering approach that generalizes many current image based rendering algorithms, including light field rendering and view-dependent texture mapping. In particular, it allows for lumigraph-style rendering from a set of input cameras in arbitrary configurations (i.e., not restricted to a plane or to any specific manifold). In the case of regular and planar input camera positions, our algorithm reduces to a typical lumigraph approach. When presented with fewer cameras and good approximate geometry, our algorithm behaves like view-dependent texture mapping. The algorithm achieves this flexibility because it is designed to meet a set of specific goals that we describe. We demonstrate this flexibility with a variety of examples.",
"title": ""
},
{
"docid": "d82553a7bf94647aaf60eb36748e567f",
"text": "We propose a novel image-based rendering algorithm for handling complex scenes that may include reflective surfaces. Our key contribution lies in treating the problem in the gradient domain. We use a standard technique to estimate scene depth, but assign depths to image gradients rather than pixels. A novel view is obtained by rendering the horizontal and vertical gradients, from which the final result is reconstructed through Poisson integration using an approximate solution as a data term. Our algorithm is able to handle general scenes including reflections and similar effects without explicitly separating the scene into reflective and transmissive parts, as required by previous work. Our prototype renderer is fully implemented on the GPU and runs in real time on commodity hardware.",
"title": ""
}
] | [
{
"docid": "dd911eff60469b32330c5627c288f19f",
"text": "Routing Algorithms are driving the growth of the data transmission in wireless sensor networks. Contextually, many algorithms considered the data gathering and data aggregation. This paper uses the scenario of clustering and its impact over the SPIN protocol and also finds out the effect over the energy consumption in SPIN after uses of clustering. The proposed scheme is implemented using TCL/C++ programming language and evaluated using Ns2.34 simulator and compare with LEACH. Simulation shows proposed protocol exhibits significant performance gains over the LEACH for lifetime of network and guaranteed data transmission.",
"title": ""
},
{
"docid": "ad5943b20597be07646cca1af9d23660",
"text": "Defects in safety critical processes can lead to accidents that result in harm to people or damage to property. Therefore, it is important to find ways to detect and remove defects from such processes. Earlier work has shown that Fault Tree Analysis (FTA) [3] can be effective in detecting safety critical process defects. Unfortunately, it is difficult to build a comprehensive set of Fault Trees for a complex process, especially if this process is not completely welldefined. The Little-JIL process definition language has been shown to be effective for defining complex processes clearly and precisely at whatever level of granularity is desired [1]. In this work, we present an algorithm for generating Fault Trees from Little-JIL process definitions. We demonstrate the value of this work by showing how FTA can identify safety defects in the process from which the Fault Trees were automatically derived.",
"title": ""
},
{
"docid": "5387c752db7b4335a125df91372099b3",
"text": "We examine how people’s different uses of the Internet predict their later scores on a standard measure of depression, and how their existing social resources moderate these effects. In a longitudinal US survey conducted in 2001 and 2002, almost all respondents reported using the Internet for information, and entertainment and escape; these uses of the Internet had no impact on changes in respondents’ level of depression. Almost all respondents also used the Internet for communicating with friends and family, and they showed lower depression scores six months later. Only about 20 percent of this sample reported using the Internet to meet new people and talk in online groups. Doing so changed their depression scores depending on their initial levels of social support. Those having high or medium levels of social support showed higher depression scores; those with low levels of social support did not experience these increases in depression. Our results suggest that individual differences in social resources and people’s choices of how they use the Internet may account for the different outcomes reported in the literature.",
"title": ""
},
{
"docid": "4933f3f3007dab687fc852e9c2b1ab0a",
"text": "This paper presents a topology for bidirectional solid-state transformers with a minimal device count. The topology, referenced as dynamic-current or Dyna-C, has two current-source inverter stages with a high-frequency galvanic isolation, requiring 12 switches for four-quadrant three-phase ac/ac power conversion. The topology has voltage step-up/down capability, and the input and output can have arbitrary power factors and frequencies. Further, the Dyna-C can be configured as isolated power converters for single- or multiterminal dc, and single- or multiphase ac systems. The modular nature of the Dyna-C lends itself to be connected in series and/or parallel for high-voltage high-power applications. The proposed converter topology can find a broad range of applications such as isolated battery chargers, uninterruptible power supplies, renewable energy integration, smart grid, and power conversion for space-critical applications including aviation, locomotives, and ships. This paper outlines various configurations of the Dyna-C, as well as the relative operation and controls. The converter functionality is validated through simulations and experimental measurements of a 50-kVA prototype.",
"title": ""
},
{
"docid": "529b6b658674a52191d4a8fed97e44eb",
"text": "We present a joint audio-visual model for isolating a single speech signal from a mixture of sounds such as other speakers and background noise. Solving this task using only audio as input is extremely challenging and does not provide an association of the separated speech signals with speakers in the video. In this paper, we present a deep network-based model that incorporates both visual and auditory signals to solve this task. The visual features are used to \"focus\" the audio on desired speakers in a scene and to improve the speech separation quality. To train our joint audio-visual model, we introduce AVSpeech, a new dataset comprised of thousands of hours of video segments from the Web. We demonstrate the applicability of our method to classic speech separation tasks, as well as real-world scenarios involving heated interviews, noisy bars, and screaming children, only requiring the user to specify the face of the person in the video whose speech they want to isolate. Our method shows clear advantage over state-of-the-art audio-only speech separation in cases of mixed speech. In addition, our model, which is speaker-independent (trained once, applicable to any speaker), produces better results than recent audio-visual speech separation methods that are speaker-dependent (require training a separate model for each speaker of interest).",
"title": ""
},
{
"docid": "ba2632b7a323e785b57328d32a26bc99",
"text": "Modern malware is designed with mutation characteristics, namely polymorphism and metamorphism, which causes an enormous growth in the number of variants of malware samples. Categorization of malware samples on the basis of their behaviors is essential for the computer security community, because they receive huge number of malware everyday, and the signature extraction process is usually based on malicious parts characterizing malware families. Microsoft released a malware classification challenge in 2015 with a huge dataset of near 0.5 terabytes of data, containing more than 20K malware samples. The analysis of this dataset inspired the development of a novel paradigm that is effective in categorizing malware variants into their actual family groups. This paradigm is presented and discussed in the present paper, where emphasis has been given to the phases related to the extraction, and selection of a set of novel features for the effective representation of malware samples. Features can be grouped according to different characteristics of malware behavior, and their fusion is performed according to a per-class weighting paradigm. The proposed method achieved a very high accuracy ($\\approx$ 0.998) on the Microsoft Malware Challenge dataset.",
"title": ""
},
{
"docid": "c1477b801a49df62eb978b537fd3935e",
"text": "The striatum is thought to play an essential role in the acquisition of a wide range of motor, perceptual, and cognitive skills, but neuroimaging has not yet demonstrated striatal activation during nonmotor skill learning. Functional magnetic resonance imaging was performed while participants learned probabilistic classification, a cognitive task known to rely on procedural memory early in learning and declarative memory later in learning. Multiple brain regions were active during probabilistic classification compared with a perceptual-motor control task, including bilateral frontal cortices, occipital cortex, and the right caudate nucleus in the striatum. The left hippocampus was less active bilaterally during probabilistic classification than during the control task, and the time course of this hippocampal deactivation paralleled the expected involvement of medial temporal structures based on behavioral studies of amnesic patients. Findings provide initial evidence for the role of frontostriatal systems in normal cognitive skill learning.",
"title": ""
},
{
"docid": "f301f87dee3c13d06e34f533bb69cf01",
"text": "Representation of news events as latent feature vectors is essential for several tasks, such as news recommendation, news event linking, etc. However, representations proposed in the past fail to capture the complex network structure of news events. In this paper we propose Event2Vec, a novel way to learn latent feature vectors for news events using a network. We use recently proposed network embedding techniques, which are proven to be very effective for various prediction tasks in networks. As events involve different classes of nodes, such as named entities, temporal information, etc, general purpose network embeddings are agnostic to event semantics. To address this problem, we propose biased random walks that are tailored to capture the neighborhoods of news events in event networks. We then show that these learned embeddings are effective for news event recommendation and news event linking tasks using strong baselines, such as vanilla Node2Vec, and other state-of-the-art graph-based event ranking techniques.",
"title": ""
},
{
"docid": "3e00367b754777a6659578963f006a69",
"text": "This paper presents a study on a three-phase 24-pulse Transformer Rectifier Unit (TRU) for use in aircraft electric power system. Four three-phase systems with 15°, 30°, 45°, and 60° phase shifts are obtained by interconnection of conventional transformers in zig-zag configuration. The system is modeled in details using Simulink (SimPowerSystems). Simulation results are presented and the obtained performance is compared with those of a 12-pulse TRU.",
"title": ""
},
{
"docid": "1df9ac95778bbe7ad750810e9b5a9756",
"text": "To characterize muscle synergy organization underlying multidirectional control of stance posture, electromyographic activity was recorded from 11 lower limb and trunk muscles of 7 healthy subjects while they were subjected to horizontal surface translations in 12 different, randomly presented directions. The latency and amplitude of muscle responses were quantified for each perturbation direction. Tuning curves for each muscle were examined to relate the amplitude of the muscle response to the direction of surface translation. The latencies of responses for the shank and thigh muscles were constant, regardless of perturbation direction. In contrast, the latencies for another thigh [tensor fascia latae (TFL)] and two trunk muscles [rectus abdominis (RAB) and erector spinae (ESP)] were either early or late, depending on the perturbation direction. These three muscles with direction-specific latencies may play different roles in postural control as prime movers or as stabilizers for different translation directions, depending on the timing of recruitment. Most muscle tuning curves were within one quadrant, having one direction of maximal activity, generally in response to diagonal surface translations. Two trunk muscles (RAB and ESP) and two lower limb muscles (semimembranosus and peroneus longus) had bipolar tuning curves, with two different directions of maximal activity, suggesting that these muscle can play different roles as part of different synergies, depending on translation direction. Muscle tuning curves tended to group into one of three regions in response to 12 different directions of perturbations. Two muscles [rectus femoris (RFM) and TFL] were maximally active in response to lateral surface translations. The remaining muscles clustered into one of two diagonal regions. The diagonal regions corresponded to the two primary directions of active horizontal force vector responses. Two muscles (RFM and adductor longus) were maximally active orthogonal to their predicted direction of maximal activity based on anatomic orientation. Some of the muscles in each of the synergic regions were not anatomic synergists, suggesting a complex central organization for recruitment of muscles. The results suggest that neither a simple reflex mechanism nor a fixed muscle synergy organization is adequate to explain the muscle activation patterns observed in this postural control task. Our results are consistent with a centrally mediated pattern of muscle latencies combined with peripheral influence on muscle magnitude. We suggest that a flexible continuum of muscle synergies that are modifiable in a task-dependent manner be used for equilibrium control in stance.",
"title": ""
},
{
"docid": "f25afc147ceb24fb1aca320caa939f10",
"text": "Third party intervention is a typical response to destructive and persistent social conflict and comes in a number of different forms attended by a variety of issues. Mediation is a common form of intervention designed to facilitate a negotiated settlement on substantive issues between conflicting parties. Mediators are usually external to the parties and carry an identity, motives and competencies required to play a useful role in addressing the dispute. While impartiality is generally seen as an important prerequisite for effective intervention, biased mediators also appear to have a role to play. This article lays out the different forms of third-party intervention in a taxonomy of six methods, and proposes a contingency model which matches each type of intervention to the appropriate stage of conflict escalation. Interventions are then sequenced, in order to assist the parties in de-escalating and resolving the conflict. It must be pointed out, however, that the mixing of interventions with different power bases raises a number of ethical and moral questions about the use of reward and coercive power by third parties. The article then discusses several issues around the practice of intervention. It is essential to give these issues careful consideration if third-party methods are to play their proper and useful role in the wider process of conflict transformation. Psychology from the University of Saskatchewan and a Ph.D. in Social Psychology from the University of Michigan. He has provided training and consulting services to various organizations and international institutes in conflict management. His current interests include third party intervention, interactive conflict resolution, and reconciliation in situations of ethnopolitical conflict. A b s t r a c t A b o u t t h e C o n t r i b u t o r",
"title": ""
},
{
"docid": "36fdd31b04f53f7aef27b9d4af5f479f",
"text": "Smart meters have been deployed in many countries across the world since early 2000s. The smart meter as a key element for the smart grid is expected to provide economic, social, and environmental benefits for multiple stakeholders. There has been much debate over the real values of smart meters. One of the key factors that will determine the success of smart meters is smart meter data analytics, which deals with data acquisition, transmission, processing, and interpretation that bring benefits to all stakeholders. This paper presents a comprehensive survey of smart electricity meters and their utilization focusing on key aspects of the metering process, different stakeholder interests, and the technologies used to satisfy stakeholder interests. Furthermore, the paper highlights challenges as well as opportunities arising due to the advent of big data and the increasing popularity of cloud environments.",
"title": ""
},
{
"docid": "75952b1d2c9c2f358c4c2e3401a00245",
"text": "This book is an outstanding contribution to the philosophical study of language and mind, by one of the most influential thinkers of our time. In a series of penetrating essays, Noam Chomsky cuts through the confusion and prejudice which has infected the study of language and mind, bringing new solutions to traditional philosophical puzzles and fresh perspectives on issues of general interest, ranging from the mind–body problem to the unification of science. Using a range of imaginative and deceptively simple linguistic analyses, Chomsky argues that there is no coherent notion of “language” external to the human mind, and that the study of language should take as its focus the mental construct which constitutes our knowledge of language. Human language is therefore a psychological, ultimately a “biological object,” and should be analysed using the methodology of the natural sciences. His examples and analyses come together in this book to give a unique and compelling perspective on language and the mind.",
"title": ""
},
{
"docid": "be3e02812e35000b39e4608afc61f229",
"text": "The growing use of control access systems based on face recognition shed light over the need for even more accurate systems to detect face spoofing attacks. In this paper, an extensive analysis on face spoofing detection works published in the last decade is presented. The analyzed works are categorized by their fundamental parts, i.e., descriptors and classifiers. This structured survey also brings a comparative performance analysis of the works considering the most important public data sets in the field. The methodology followed in this work is particularly relevant to observe temporal evolution of the field, trends in the existing approaches, Corresponding author: Luciano Oliveira, tel. +55 71 3283-9472 Email addresses: luiz.otavio@ufba.br (Luiz Souza), lrebouca@ufba.br (Luciano Oliveira), mauricio@dcc.ufba.br (Mauricio Pamplona), papa@fc.unesp.br (Joao Papa) to discuss still opened issues, and to propose new perspectives for the future of face spoofing detection.",
"title": ""
},
{
"docid": "b645d8f57b60703e3910e2e5ce60117b",
"text": "We propose a multi-lingual multi-task architecture to develop supervised models with a minimal amount of labeled data for sequence labeling. In this new architecture, we combine various transfer models using two layers of parameter sharing. On the first layer, we construct the basis of the architecture to provide universal word representation and feature extraction capability for all models. On the second level, we adopt different parameter sharing strategies for different transfer schemes. This architecture proves to be particularly effective for low-resource settings, when there are less than 200 training sentences for the target task. Using Name Tagging as a target task, our approach achieved 4.3%-50.5% absolute Fscore gains compared to the mono-lingual single-task baseline model. 1",
"title": ""
},
{
"docid": "aa562b52c51fa6c4563280a6ce82f8c0",
"text": "We propose a framework that learns a representation transferable across different domains and tasks in a label efficient manner. Our approach battles domain shift with a domain adversarial loss, and generalizes the embedding to novel task using a metric learning-based approach. Our model is simultaneously optimized on labeled source data and unlabeled or sparsely labeled data in the target domain. Our method shows compelling results on novel classes within a new domain even when only a few labeled examples per class are available, outperforming the prevalent fine-tuning approach. In addition, we demonstrate the effectiveness of our framework on the transfer learning task from image object recognition to video action recognition.",
"title": ""
},
{
"docid": "e07377cb36e31c8190d5ac96f3891f2a",
"text": "We offer a new metric for big data platforms, COST, or the Configuration that Outperforms a Single Thread. The COST of a given platform for a given problem is the hardware configuration required before the platform outperforms a competent single-threaded implementation. COST weighs a system’s scalability against the overheads introduced by the system, and indicates the actual performance gains of the system, without rewarding systems that bring substantial but parallelizable overheads. We survey measurements of data-parallel systems recently reported in SOSP and OSDI, and find that many systems have either a surprisingly large COST, often hundreds of cores, or simply underperform one thread for all of their reported configurations.",
"title": ""
},
{
"docid": "b4dd76179734fb43e74c9c1daef15bbf",
"text": "Breast cancer represents one of the diseases that make a high number of deaths every year. It is the most common type of all cancers and the main cause of women’s deaths worldwide. Classification and data mining methods are an effective way to classify data. Especially in medical field, where those methods are widely used in diagnosis and analysis to make decisions. In this paper, a performance comparison between different machine learning algorithms: Support Vector Machine (SVM), Decision Tree (C4.5), Naive Bayes (NB) and k Nearest Neighbors (k-NN) on the Wisconsin Breast Cancer (original) datasets is conducted. The main objective is to assess the correctness in classifying data with respect to efficiency and effectiveness of each algorithm in terms of accuracy, precision, sensitivity and specificity. Experimental results show that SVM gives the highest accuracy (97.13%) with lowest error rate. All experiments are executed within a simulation environment and conducted in WEKA data mining tool. © 2016 The Authors. Published by Elsevier B.V. Peer-review under responsibility of the Conference Program Chairs.",
"title": ""
},
{
"docid": "f3f15a37a1d1a2a3a3647dc14f075297",
"text": "Stress is known to inhibit neuronal growth in the hippocampus. In addition to reducing the size and complexity of the dendritic tree, stress and elevated glucocorticoid levels are known to inhibit adult neurogenesis. Despite the negative effects of stress hormones on progenitor cell proliferation in the hippocampus, some experiences which produce robust increases in glucocorticoid levels actually promote neuronal growth. These experiences, including running, mating, enriched environment living, and intracranial self-stimulation, all share in common a strong hedonic component. Taken together, the findings suggest that rewarding experiences buffer progenitor cells in the dentate gyrus from the negative effects of elevated stress hormones. This chapter considers the evidence that stress and glucocorticoids inhibit neuronal growth along with the paradoxical findings of enhanced neuronal growth under rewarding conditions with a view toward understanding the underlying biological mechanisms.",
"title": ""
},
{
"docid": "815a9db2fb8c2aeadc766270a85517fd",
"text": "Resistive-switching random access memory (RRAM) based on the formation and the dissolution of a conductive filament (CF) through insulating materials, e.g., transition metal oxides, may find applications as novel memory and logic devices. Understanding the resistive-switching mechanism is essential for predicting and controlling the scaling and reliability performances of the RRAM. This paper addresses the set/reset characteristics of RRAM devices based on $\\hbox{HfO}_{x}$. The set process is analyzed as a function of the initial high-resistance state and of the current compliance. The reset process is studied as a function of the initial low-resistance state. Finally, the intermediate set states, obtained by set at variable compliance current, and reset states, obtained by reset at variable stopping voltage, are characterized with respect to their reset voltage, allowing for a microscopic interpretation of intermediate states in terms of different filament morphologies.",
"title": ""
}
] | scidocsrr |
35ebc67bbdc3701184c6ed579dff44bb | ALIZE 3.0 - open source toolkit for state-of-the-art speaker recognition | [
{
"docid": "978dd8a7f33df74d4a5cea149be6ebb0",
"text": "A tutorial on the design and development of automatic speakerrecognition systems is presented. Automatic speaker recognition is the use of a machine to recognize a person from a spoken phrase. These systems can operate in two modes: to identify a particular person or toverify a person’s claimed identity. Speech processing and the basic components of automatic speakerrecognition systems are shown and design tradeoffs are discussed. Then, a new automatic speaker-recognition system is given. This recognizer performs with 98.9% correct identification. Last, the performances of various systems are compared.",
"title": ""
}
] | [
{
"docid": "7c5abed8220171f38e3801298f660bfa",
"text": "Heavy metal remediation of aqueous streams is of special concern due to recalcitrant and persistency of heavy metals in environment. Conventional treatment technologies for the removal of these toxic heavy metals are not economical and further generate huge quantity of toxic chemical sludge. Biosorption is emerging as a potential alternative to the existing conventional technologies for the removal and/or recovery of metal ions from aqueous solutions. The major advantages of biosorption over conventional treatment methods include: low cost, high efficiency, minimization of chemical or biological sludge, regeneration of biosorbents and possibility of metal recovery. Cellulosic agricultural waste materials are an abundant source for significant metal biosorption. The functional groups present in agricultural waste biomass viz. acetamido, alcoholic, carbonyl, phenolic, amido, amino, sulphydryl groups etc. have affinity for heavy metal ions to form metal complexes or chelates. The mechanism of biosorption process includes chemisorption, complexation, adsorption on surface, diffusion through pores and ion exchange etc. The purpose of this review article is to provide the scattered available information on various aspects of utilization of the agricultural waste materials for heavy metal removal. Agricultural waste material being highly efficient, low cost and renewable source of biomass can be exploited for heavy metal remediation. Further these biosorbents can be modified for better efficiency and multiple reuses to enhance their applicability at industrial scale.",
"title": ""
},
{
"docid": "5b31cdfd19e40a2ee5f1094e33366902",
"text": "Much of the early literature on 'cultural competence' focuses on the 'categorical' or 'multicultural' approach, in which providers learn relevant attitudes, values, beliefs, and behaviors of certain cultural groups. In essence, this involves learning key 'dos and don'ts' for each group. Literature and educational materials of this kind focus on broad ethnic, racial, religious, or national groups, such as 'African American', 'Hispanic', or 'Asian'. The problem with this categorical or 'list of traits' approach to clinical cultural competence is that culture is multidimensional and dynamic. Culture comprises multiple variables, affecting all aspects of experience. Cultural processes frequently differ within the same ethnic or social group because of differences in age cohort, gender, political association, class, religion, ethnicity, and even personality. Culture is therefore a very elusive and nebulous concept, like art. The multicultural approach to cultural competence results in stereotypical thinking rather than clinical competence. A newer, cross cultural approach to culturally competent clinical practice focuses on foundational communication skills, awareness of cross-cutting cultural and social issues, and health beliefs that are present in all cultures. We can think of these as universal human beliefs, needs, and traits. This patient centered approach relies on identifying and negotiating different styles of communication, decision-making preferences, roles of family, sexual and gender issues, and issues of mistrust, prejudice, and racism, among other factors. In the current paper, we describe 'cultural' challenges that arise in the care of four patients from disparate cultures, each of whom has advanced colon cancer that is no longer responding to chemotherapy. We then illustrate how to apply principles of patient centered care to these challenges.",
"title": ""
},
{
"docid": "11ffdc076696536cef886a7ba130f049",
"text": "This work is about recognizing human activities occurring in videos at distinct semantic levels, including individual actions, interactions, and group activities. The recognition is realized using a two-level hierarchy of Long Short-Term Memory (LSTM) networks, forming a feed-forward deep architecture, which can be trained end-to-end. In comparison with existing architectures of LSTMs, we make two key contributions giving the name to our approach as Confidence-Energy Recurrent Network – CERN. First, instead of using the common softmax layer for prediction, we specify a novel energy layer (EL) for estimating the energy of our predictions. Second, rather than finding the common minimum-energy class assignment, which may be numerically unstable under uncertainty, we specify that the EL additionally computes the p-values of the solutions, and in this way estimates the most confident energy minimum. The evaluation on the Collective Activity and Volleyball datasets demonstrates: (i) advantages of our two contributions relative to the common softmax and energy-minimization formulations and (ii) a superior performance relative to the state-of-the-art approaches.",
"title": ""
},
{
"docid": "ea544ffc7eeee772388541d0d01812a7",
"text": "Despite the fact that MRI has evolved to become the standard method for diagnosis and monitoring of patients with brain tumours, conventional MRI sequences have two key limitations: the inability to show the full extent of the tumour and the inability to differentiate neoplastic tissue from nonspecific, treatment-related changes after surgery, radiotherapy, chemotherapy or immunotherapy. In the past decade, PET involving the use of radiolabelled amino acids has developed into an important diagnostic tool to overcome some of the shortcomings of conventional MRI. The Response Assessment in Neuro-Oncology working group — an international effort to develop new standardized response criteria for clinical trials in brain tumours — has recommended the additional use of amino acid PET imaging for brain tumour management. Concurrently, a number of advanced MRI techniques such as magnetic resonance spectroscopic imaging and perfusion weighted imaging are under clinical evaluation to target the same diagnostic problems. This Review summarizes the clinical role of amino acid PET in relation to advanced MRI techniques for differential diagnosis of brain tumours; delineation of tumour extent for treatment planning and biopsy guidance; post-treatment differentiation between tumour progression or recurrence versus treatment-related changes; and monitoring response to therapy. An outlook for future developments in PET and MRI techniques is also presented.",
"title": ""
},
{
"docid": "e79df31bd411d7c62d625a047dde61ce",
"text": "The depth resolution achieved by a continuous wave time-of-flight (C-ToF) imaging system is determined by the coding (modulation and demodulation) functions that it uses. Almost all current C-ToF systems use sinusoid or square coding functions, resulting in a limited depth resolution. In this article, we present a mathematical framework for exploring and characterizing the space of C-ToF coding functions in a geometrically intuitive space. Using this framework, we design families of novel coding functions that are based on Hamiltonian cycles on hypercube graphs. Given a fixed total source power and acquisition time, the new Hamiltonian coding scheme can achieve up to an order of magnitude higher resolution as compared to the current state-of-the-art methods, especially in low signal-to-noise ratio (SNR) settings. We also develop a comprehensive physically-motivated simulator for C-ToF cameras that can be used to evaluate various coding schemes prior to a real hardware implementation. Since most off-the-shelf C-ToF sensors use sinusoid or square functions, we develop a hardware prototype that can implement a wide range of coding functions. Using this prototype and our software simulator, we demonstrate the performance advantages of the proposed Hamiltonian coding functions in a wide range of imaging settings.",
"title": ""
},
{
"docid": "a1af04cc0616533bd47bb660f0eff3cd",
"text": "Separating point clouds into ground and non-ground measurements is an essential step to generate digital terrain models (DTMs) from airborne LiDAR (light detection and ranging) data. However, most filtering algorithms need to carefully set up a number of complicated parameters to achieve high accuracy. In this paper, we present a new filtering method which only needs a few easy-to-set integer and Boolean parameters. Within the proposed approach, a LiDAR point cloud is inverted, and then a rigid cloth is used to cover the inverted surface. By analyzing the interactions between the cloth nodes and the corresponding LiDAR points, the locations of the cloth nodes can be determined to generate an approximation of the ground surface. Finally, the ground points can be extracted from the LiDAR point cloud by comparing the original LiDAR points and the generated surface. Benchmark datasets provided by ISPRS (International Society for Photogrammetry and Remote Sensing) working Group III/3 are used to validate the proposed filtering method, and the experimental results yield an average total error of 4.58%, which is comparable with most of the state-of-the-art filtering algorithms. The proposed easy-to-use filtering method may help the users without much experience to use LiDAR data and related technology in their own applications more easily.",
"title": ""
},
{
"docid": "fda9db396d7c35ba64a7a5453aaa80dc",
"text": "A novel dynamic latched comparator with offset voltage compensation is presented. The proposed comparator uses one phase clock signal for its operation and can drive a larger capacitive load with complementary version of the regenerative output latch stage. As it provides a larger voltage gain up to 22 V/V to the regenerative latch, the inputreferred offset voltage of the latch is reduced and metastability is improved. The proposed comparator is designed using 90 nm PTM technology and 1 V power supply voltage. It demonstrates up to 24.6% less offset voltage and 30.0% less sensitivity of delay to decreasing input voltage difference (17 ps/decade) than the conventional double-tail latched comparator at approximately the same area and power consumption. In addition, with a digitally controlled capacitive offset calibration technique, the offset voltage of the proposed comparator is further reduced from 6.03 to 1.10 mV at 1-sigma at the operating clock frequency of 3 GHz, and it consumes 54 lW/GHz after calibration.",
"title": ""
},
{
"docid": "faac043b0c32bad5a44d52b93e468b78",
"text": "Comparative genomic analyses of primates offer considerable potential to define and understand the processes that mold, shape, and transform the human genome. However, primate taxonomy is both complex and controversial, with marginal unifying consensus of the evolutionary hierarchy of extant primate species. Here we provide new genomic sequence (~8 Mb) from 186 primates representing 61 (~90%) of the described genera, and we include outgroup species from Dermoptera, Scandentia, and Lagomorpha. The resultant phylogeny is exceptionally robust and illuminates events in primate evolution from ancient to recent, clarifying numerous taxonomic controversies and providing new data on human evolution. Ongoing speciation, reticulate evolution, ancient relic lineages, unequal rates of evolution, and disparate distributions of insertions/deletions among the reconstructed primate lineages are uncovered. Our resolution of the primate phylogeny provides an essential evolutionary framework with far-reaching applications including: human selection and adaptation, global emergence of zoonotic diseases, mammalian comparative genomics, primate taxonomy, and conservation of endangered species.",
"title": ""
},
{
"docid": "88d9c077f588e9e02453bd0ea40cfcae",
"text": "This study explored the prevalence of and motivations behind 'drunkorexia' – restricting food intake prior to drinking alcohol. For both male and female university students (N = 3409), intentionally changing eating behaviour prior to drinking alcohol was common practice (46%). Analyses performed on a targeted sample of women (n = 226) revealed that food restriction prior to alcohol use was associated with greater symptomology than eating more food. Those who restrict eating prior to drinking to avoid weight gain scored higher on measures of disordered eating, whereas those who restrict to get intoxicated faster scored higher on measures of alcohol abuse.",
"title": ""
},
{
"docid": "3b1b829e6d017d574562e901f4963bc4",
"text": "Many problems in AI are simplified by clever representations of sensory or symbolic input. How to discover such representations automatically, from large amounts of unlabeled data, remains a fundamental challenge. The goal of statistical methods for dimensionality reduction is to detect and discover low dimensional structure in high dimensional data. In this paper, we review a recently proposed algorithm— maximum variance unfolding—for learning faithful low dimensional representations of high dimensional data. The algorithm relies on modern tools in convex optimization that are proving increasingly useful in many areas of machine learning.",
"title": ""
},
{
"docid": "e56bd360fe21949d0617c6e1ddafefff",
"text": "This study addresses the problem of identifying the meaning of unknown words or entities in a discourse with respect to the word embedding approaches used in neural language models. We proposed a method for on-the-fly construction and exploitation of word embeddings in both the input and output layers of a neural model by tracking contexts. This extends the dynamic entity representation used in Kobayashi et al. (2016) and incorporates a copy mechanism proposed independently by Gu et al. (2016) and Gulcehre et al. (2016). In addition, we construct a new task and dataset called Anonymized Language Modeling for evaluating the ability to capture word meanings while reading. Experiments conducted using our novel dataset show that the proposed variant of RNN language model outperformed the baseline model. Furthermore, the experiments also demonstrate that dynamic updates of an output layer help a model predict reappearing entities, whereas those of an input layer are effective to predict words following reappearing entities.",
"title": ""
},
{
"docid": "924e10782437c323b8421b156db50584",
"text": "Ontology Learning greatly facilitates the construction of ontologies by the ontology engineer. The notion of ontology learning that we propose here includes a number of complementary disciplines that feed on different types of unstructured and semi-structured data in order to support a semi-automatic, cooperative ontology engineering process. Our ontology learning framework proceeds through ontology import, extraction, pruning, and refinement, giving the ontology engineer a wealth of coordinated tools for ontology modelling. Besides of the general architecture, we show in this paper some exemplary techniques in the ontology learning cycle that we have implemented in our ontology learning environment, KAON Text-To-Onto.",
"title": ""
},
{
"docid": "23eb979ec3e17db2b162b659e296a10e",
"text": "The authors would like to thank the Marketing Science Institute for their generous assistance in funding this research. We would also like to thank Claritas for providing us with data. We are indebted to Vincent Bastien, former CEO of Louis Vuitton, for the time he has spent with us critiquing our framework.",
"title": ""
},
{
"docid": "2e0547228597476a28c6b99b6f927299",
"text": "Several virtual reality (VR) applications for the understanding, assessment and treatment of mental health problems have been developed in the last 10 years. The purpose of this review is to outline the current state of virtual reality research in the treatment of mental health problems. PubMed and PsycINFO were searched for all articles containing the words “virtual reality”. In addition a manual search of the references contained in the papers resulting from this search was conducted and relevant periodicals were searched. Studies reporting the results of treatment utilizing VR in the mental health field and involving at least one patient were identified. More than 50 studies using VR were identified, the majority of which were case studies. Seventeen employed a between groups design: 4 involved patients with fear of flying; 3 involved patients with fear of heights; 3 involved patients with social phobia/public speaking anxiety; 2 involved people with spider phobia; 2 involved patients with agoraphobia; 2 involved patients with body image disturbance and 1 involved obese patients. There are both advantages in terms of delivery and disadvantages in terms of side effects to using VR. Although virtual reality based therapy appears to be superior to no treatment the effectiveness of VR therapy over traditional therapeutic approaches is not supported by the research currently available. There is a lack of good quality research on the effectiveness of VR therapy. Before clinicians will be able to make effective use of this emerging technology greater emphasis must be placed on controlled trials with clinically identified populations.",
"title": ""
},
{
"docid": "9a5fd2fa3ec899fad8969de102f55379",
"text": "The development of Machine Translation system for ancient language such as Sanskrit language is much more fascinating and challenging task. Due to lack of linguistic community, there are no wide work accomplish in Sanskrit translation while it is mother language by virtue of its importance in cultural heritage of India. In this paper, we integrate a traditional rule based approach of machine translation with Artificial Neural Network (ANN) model which translates an English sentence (source language sentence) into equivalent Sanskrit sentence (target language sentence). We use feed forward ANN for the selection of Sanskrit word like noun, verb, object, adjective etc from English to Sanskrit User Data Vector (UDV). Due to morphological richness of Sanskrit language, this system makes limited use of syntax and uses only morphological markings to identify Subject, Object, Verb, Preposition, Adjective, Adverb and as well as Conjunctive sentences also. It uses limited parsing for part of speech (POS) tagging, identification of clause, its Subject, Object, Verb etc and Gender-Number-Person (GNP) of noun, adjective and object. This system represents the translation between the SVO and SOV classes of languages. This system gives translation result in GUI form and handles English sentences of different classes.",
"title": ""
},
{
"docid": "f1635d5cf51f0a4d70090f5f672de605",
"text": "Enrichment analysis is a popular method for analyzing gene sets generated by genome-wide experiments. Here we present a significant update to one of the tools in this domain called Enrichr. Enrichr currently contains a large collection of diverse gene set libraries available for analysis and download. In total, Enrichr currently contains 180 184 annotated gene sets from 102 gene set libraries. New features have been added to Enrichr including the ability to submit fuzzy sets, upload BED files, improved application programming interface and visualization of the results as clustergrams. Overall, Enrichr is a comprehensive resource for curated gene sets and a search engine that accumulates biological knowledge for further biological discoveries. Enrichr is freely available at: http://amp.pharm.mssm.edu/Enrichr.",
"title": ""
},
{
"docid": "18acdeb37257f2f7f10a5baa8957a257",
"text": "Time-memory trade-off methods provide means to invert one way functions. Such attacks offer a flexible trade-off between running time and memory cost in accordance to users' computational resources. In particular, they can be applied to hash values of passwords in order to recover the plaintext. They were introduced by Martin Hellman and later improved by Philippe Oechslin with the introduction of rainbow tables. The drawbacks of rainbow tables are that they do not always guarantee a successful inversion. We address this issue in this paper. In the context of passwords, it is pertinent that frequently used passwords are incorporated in the rainbow table. It has been known that up to 4 given passwords can be incorporated into a chain but it is an open problem if more than 4 passwords can be achieved. We solve this problem by showing that it is possible to incorporate more of such passwords along a chain. Furthermore, we prove that this results in faster recovery of such passwords during the online running phase as opposed to assigning them at the beginning of the chains. For large chain lengths, the average improvement translates to 3 times the speed increase during the online recovery time.",
"title": ""
},
{
"docid": "09c27f3f680188637177e7f2913c1ef7",
"text": "The implementation of a monitoring and control system for the induction motor based on programmable logic controller (PLC) technology is described. Also, the implementation of the hardware and software for speed control and protection with the results obtained from tests on induction motor performance is provided. The PLC correlates the operational parameters to the speed requested by the user and monitors the system during normal operation and under trip conditions. Tests of the induction motor system driven by inverter and controlled by PLC prove a higher accuracy in speed regulation as compared to a conventional V/f control system. The efficiency of PLC control is increased at high speeds up to 95% of the synchronous speed. Thus, PLC proves themselves as a very versatile and effective tool in industrial control of electric drives.",
"title": ""
},
{
"docid": "45a8fea3e8d780c65811cee79082237f",
"text": "Pedestrian dead reckoning, especially on smart-phones, is likely to play an increasingly important role in indoor tracking and navigation, due to its low cost and ability to work without any additional infrastructure. A challenge however, is that positioning, both in terms of step detection and heading estimation, must be accurate and reliable, even when the use of the device is so varied in terms of placement (e.g. handheld or in a pocket) or orientation (e.g holding the device in either portrait or landscape mode). Furthermore, the placement can vary over time as a user performs different tasks, such as making a call or carrying the device in a bag. A second challenge is to be able to distinguish between a true step and other periodic motion such as swinging an arm or tapping when the placement and orientation of the device is unknown. If this is not done correctly, then the PDR system typically overestimates the number of steps taken, leading to a significant long term error. We present a fresh approach, robust PDR (R-PDR), based on exploiting how bipedal motion impacts acquired sensor waveforms. Rather than attempting to recognize different placements through sensor data, we instead simply determine whether the motion of one or both legs impact the measurements. In addition, we formulate a set of techniques to accurately estimate the device orientation, which allows us to very accurately (typically over 99%) reject false positives. We demonstrate that regardless of device placement, we are able to detect the number of steps taken with >99.4% accuracy. R-PDR thus addresses the two main limitations facing existing PDR techniques.",
"title": ""
},
{
"docid": "53d1ddf4809ab735aa61f4059a1a38b1",
"text": "In this paper we present a wearable Haptic Feedback Device to convey intuitive motion direction to the user through haptic feedback based on vibrotactile illusions. Vibrotactile illusions occur on the skin when two or more vibrotactile actuators in proximity are actuated in coordinated sequence, causing the user to feel combined sensations, instead of separate ones. By combining these illusions we can produce various sensation patterns that are discernible by the user, thus allowing to convey different information with each pattern. A method to provide information about direction through vibrotactile illusions is introduced on this paper. This method uses a grid of vibrotactile actuators around the arm actuated in coordination. The sensation felt on the skin is consistent with the desired direction of motion, so the desired motion can be intuitively understood. We show that the users can recognize the conveyed direction, and implemented a proof of concept of the proposed method to guide users' elbow flexion/extension motion.",
"title": ""
}
] | scidocsrr |
0ea7a1202a3a2df640f7dbf9a0451d2d | Exploitation and exploration in a performance based contextual advertising system | [
{
"docid": "341b0588f323d199275e89d8c33d6b47",
"text": "We propose novel multi-armed bandit (explore/exploit) schemes to maximize total clicks on a content module published regularly on Yahoo! Intuitively, one can ``explore'' each candidate item by displaying it to a small fraction of user visits to estimate the item's click-through rate (CTR), and then ``exploit'' high CTR items in order to maximize clicks. While bandit methods that seek to find the optimal trade-off between explore and exploit have been studied for decades, existing solutions are not satisfactory for web content publishing applications where dynamic set of items with short lifetimes, delayed feedback and non-stationary reward (CTR) distributions are typical. In this paper, we develop a Bayesian solution and extend several existing schemes to our setting. Through extensive evaluation with nine bandit schemes, we show that our Bayesian solution is uniformly better in several scenarios. We also study the empirical characteristics of our schemes and provide useful insights on the strengths and weaknesses of each. Finally, we validate our results with a ``side-by-side'' comparison of schemes through live experiments conducted on a random sample of real user visits to Yahoo!",
"title": ""
},
{
"docid": "cce513c48e630ab3f072f334d00b67dc",
"text": "We consider two algorithms for on-line prediction based on a linear model. The algorithms are the well-known gradient descent (GD) algorithm and a new algorithm, which we call EG. They both maintain a weight vector using simple updates. For the GD algorithm, the update is based on subtracting the gradient of the squared error made on a prediction. The EG algorithm uses the components of the gradient in the exponents of factors that are used in updating the weight vector multiplicatively. We present worst-case loss bounds for EG and compare them to previously known bounds for the GD algorithm. The bounds suggest that the losses of the algorithms are in general incomparable, but EG has a much smaller loss if only few components of the input are relevant for the predictions. We have performed experiments which show that our worst-case upper bounds are quite tight already on simple artificial data. ] 1997 Academic Press",
"title": ""
}
] | [
{
"docid": "7d08501a0123d773f9fe755f1612e57e",
"text": "Language-music comparative studies have highlighted the potential for shared resources or neural overlap in auditory short-term memory. However, there is a lack of behavioral methodologies for comparing verbal and musical serial recall. We developed a visual grid response that allowed both musicians and nonmusicians to perform serial recall of letter and tone sequences. The new method was used to compare the phonological similarity effect with the impact of an operationalized musical equivalent-pitch proximity. Over the course of three experiments, we found that short-term memory for tones had several similarities to verbal memory, including limited capacity and a significant effect of pitch proximity in nonmusicians. Despite being vulnerable to phonological similarity when recalling letters, however, musicians showed no effect of pitch proximity, a result that we suggest might reflect strategy differences. Overall, the findings support a limited degree of correspondence in the way that verbal and musical sounds are processed in auditory short-term memory.",
"title": ""
},
{
"docid": "5b3ca1cc607d2e8f0394371f30d9e83a",
"text": "We present a machine learning algorithm that takes as input a 2D RGB image and synthesizes a 4D RGBD light field (color and depth of the scene in each ray direction). For training, we introduce the largest public light field dataset, consisting of over 3300 plenoptic camera light fields of scenes containing flowers and plants. Our synthesis pipeline consists of a convolutional neural network (CNN) that estimates scene geometry, a stage that renders a Lambertian light field using that geometry, and a second CNN that predicts occluded rays and non-Lambertian effects. Our algorithm builds on recent view synthesis methods, but is unique in predicting RGBD for each light field ray and improving unsupervised single image depth estimation by enforcing consistency of ray depths that should intersect the same scene point.",
"title": ""
},
{
"docid": "d81cadc01ab599fd34d2ccfa8377de51",
"text": "1. The Situation in Cognition The situated cognition movement in the cognitive sciences, like those sciences themselves, is a loose-knit family of approaches to understanding the mind and cognition. While it has both philosophical and psychological antecedents in thought stretching back over the last century (see Gallagher, this volume, Clancey, this volume,), it has developed primarily since the late 1970s as an alternative to, or a modification of, the then predominant paradigms for exploring the mind within the cognitive sciences. For this reason it has been common to characterize situated cognition in terms of what it is not, a cluster of \"anti-isms\". Situated cognition has thus been described as opposed to Platonism, Cartesianism, individualism, representationalism, and even",
"title": ""
},
{
"docid": "aeba4012971d339a9a953a7b86f57eb8",
"text": "Bridging the ‘reality gap’ that separates simulated robotics from experiments on hardware could accelerate robotic research through improved data availability. This paper explores domain randomization, a simple technique for training models on simulated images that transfer to real images by randomizing rendering in the simulator. With enough variability in the simulator, the real world may appear to the model as just another variation. We focus on the task of object localization, which is a stepping stone to general robotic manipulation skills. We find that it is possible to train a real-world object detector that is accurate to 1.5 cm and robust to distractors and partial occlusions using only data from a simulator with non-realistic random textures. To demonstrate the capabilities of our detectors, we show they can be used to perform grasping in a cluttered environment. To our knowledge, this is the first successful transfer of a deep neural network trained only on simulated RGB images (without pre-training on real images) to the real world for the purpose of robotic control.",
"title": ""
},
{
"docid": "f4b270b09649ba05dd22d681a2e3e3b7",
"text": "Advanced analytical techniques are gaining popularity in addressing complex classification type decision problems in many fields including healthcare and medicine. In this exemplary study, using digitized signal data, we developed predictive models employing three machine learning methods to diagnose an asthma patient based solely on the sounds acquired from the chest of the patient in a clinical laboratory. Although, the performances varied slightly, ensemble models (i.e., Random Forest and AdaBoost combined with Random Forest) achieved about 90% accuracy on predicting asthma patients, compared to artificial neural networks models that achieved about 80% predictive accuracy. Our results show that noninvasive, computerized lung sound analysis that rely on low-cost microphones and an embedded real-time microprocessor system would help physicians to make faster and better diagnostic decisions, especially in situations where x-ray and CT-scans are not reachable or not available. This study is a testament to the improving capabilities of analytic techniques in support of better decision making, especially in situations constraint by limited resources.",
"title": ""
},
{
"docid": "5eb65797b9b5e90d5aa3968d5274ae72",
"text": "Blockchains enable tamper-proof, ordered logging for transactional data in a decentralized manner over open-access, overlay peer-to-peer networks. In this paper, we propose a decentralized framework of proactive caching in a hierarchical wireless network based on blockchains. We employ the blockchain-based smart contracts to construct an autonomous content caching market. In the market, the cache helpers are able to autonomously adapt their caching strategies according to the market statistics obtained from the blockchain, and the truthfulness of trustless nodes are financially enforced by smart contract terms. Further, we propose an incentive-compatible consensus mechanism based on proof-of-stake to financially encourage the cache helpers to stay active in service. We model the interaction between the cache helpers and the content providers as a Chinese restaurant game. Based on the theoretical analysis regarding the Nash equilibrium of the game, we propose a decentralized strategy-searching algorithm using sequential best response. The simulation results demonstrate both the efficiency and reliability of the proposed equilibrium searching algorithm.",
"title": ""
},
{
"docid": "e4914b41b7d38ff04b0e5a9b88cf1dc6",
"text": "In this paper, we investigate the secure nearest neighbor (SNN) problem, in which a client issues an encrypted query point E(q) to a cloud service provider and asks for an encrypted data point in E(D) (the encrypted database) that is closest to the query point, without allowing the server to learn the plaintexts of the data or the query (and its result). We show that efficient attacks exist for existing SNN methods [21], [15], even though they were claimed to be secure in standard security models (such as indistinguishability under chosen plaintext or ciphertext attacks). We also establish a relationship between the SNN problem and the order-preserving encryption (OPE) problem from the cryptography field [6], [5], and we show that SNN is at least as hard as OPE. Since it is impossible to construct secure OPE schemes in standard security models [6], [5], our results imply that one cannot expect to find the exact (encrypted) nearest neighbor based on only E(q) and E(D). Given this hardness result, we design new SNN methods by asking the server, given only E(q) and E(D), to return a relevant (encrypted) partition E(G) from E(D) (i.e., G ⊆ D), such that that E(G) is guaranteed to contain the answer for the SNN query. Our methods provide customizable tradeoff between efficiency and communication cost, and they are as secure as the encryption scheme E used to encrypt the query and the database, where E can be any well-established encryption schemes.",
"title": ""
},
{
"docid": "4a7a4db8497b0d13c8411100dab1b207",
"text": "A novel and simple resolver-to-dc converter is presented. It is shown that by appropriate processing of the sine and cosine resolver signals, the proposed converter may produce an output voltage proportional to the shaft angle. A dedicated compensation method is applied to produce an almost perfectly linear output. This enables determination of the angle with reasonable accuracy without a processor and/or a look-up table. The tests carried out under various operating conditions are satisfactory and in good agreement with theory. This paper gives the theoretical analysis, the computer simulation, the full circuit details, and experimental results of the proposed scheme.",
"title": ""
},
{
"docid": "f9b99ad1fcf9963cca29e7ddfca20428",
"text": "Nested Named Entities (nested NEs), one containing another, are commonly seen in biomedical text, e.g., accounting for 16.7% of all named entities in GENIA corpus. While many works have been done in recognizing non-nested NEs, nested NEs have been largely neglected. In this work, we treat the task as a binary classification problem and solve it using Support Vector Machines. For each token in nested NEs, we use two schemes to set its class label: labeling as the outmost entity or the inner entity. Our preliminary results show that while the outmost labeling tends to work better in recognizing the outmost entities, the inner labeling recognizes the inner NEs better. This result should be useful for recognition of nested NEs.",
"title": ""
},
{
"docid": "90125582272e3f16a34d5d0c885f573a",
"text": "RNAs have been shown to undergo transfer between mammalian cells, although the mechanism behind this phenomenon and its overall importance to cell physiology is not well understood. Numerous publications have suggested that RNAs (microRNAs and incomplete mRNAs) undergo transfer via extracellular vesicles (e.g., exosomes). However, in contrast to a diffusion-based transfer mechanism, we find that full-length mRNAs undergo direct cell-cell transfer via cytoplasmic extensions characteristic of membrane nanotubes (mNTs), which connect donor and acceptor cells. By employing a simple coculture experimental model and using single-molecule imaging, we provide quantitative data showing that mRNAs are transferred between cells in contact. Examples of mRNAs that undergo transfer include those encoding GFP, mouse β-actin, and human Cyclin D1, BRCA1, MT2A, and HER2. We show that intercellular mRNA transfer occurs in all coculture models tested (e.g., between primary cells, immortalized cells, and in cocultures of immortalized human and murine cells). Rapid mRNA transfer is dependent upon actin but is independent of de novo protein synthesis and is modulated by stress conditions and gene-expression levels. Hence, this work supports the hypothesis that full-length mRNAs undergo transfer between cells through a refined structural connection. Importantly, unlike the transfer of miRNA or RNA fragments, this process of communication transfers genetic information that could potentially alter the acceptor cell proteome. This phenomenon may prove important for the proper development and functioning of tissues as well as for host-parasite or symbiotic interactions.",
"title": ""
},
{
"docid": "a4ddf6920fa7a5c09fa0f62f9b96a2e3",
"text": "In this paper, a class of single-phase Z-source (ZS) ac–ac converters is proposed with high-frequency transformer (HFT) isolation. The proposed HFT isolated (HFTI) ZS ac–ac converters possess all the features of their nonisolated counterparts, such as providing wide range of buck-boost output voltage with reversing or maintaining the phase angle, suppressing the in-rush and harmonic currents, and improved reliability. In addition, the proposed converters incorporate HFT for electrical isolation and safety, and therefore can save an external bulky line frequency transformer, for applications such as dynamic voltage restorers, etc. The proposed HFTI ZS converters are obtained from conventional (nonisolated) ZS ac–ac converters by adding only one extra bidirectional switch, and replacing two inductors with an HFT, thus saving one magnetic core. The switching signals for buck and boost modes are presented with safe-commutation strategy to remove the switch voltage spikes. A quasi-ZS-based HFTI ac–ac is used to discuss the operation principle and circuit analysis of the proposed class of HFTI ZS ac–ac converters. Various ZS-based HFTI proposed ac–ac converters are also presented thereafter. Moreover, a laboratory prototype of the proposed converter is constructed and experiments are conducted to produce output voltage of 110 Vrms / 60 Hz, which verify the operation of the proposed converters.",
"title": ""
},
{
"docid": "7e6573b3e080481949a2b45eb6c68a42",
"text": "We study the problem of minimizing the sum of a smooth convex function and a convex blockseparable regularizer and propose a new randomized coordinate descent method, which we call ALPHA. Our method at every iteration updates a random subset of coordinates, following an arbitrary distribution. No coordinate descent methods capable to handle an arbitrary sampling have been studied in the literature before for this problem. ALPHA is a remarkably flexible algorithm: in special cases, it reduces to deterministic and randomized methods such as gradient descent, coordinate descent, parallel coordinate descent and distributed coordinate descent – both in nonaccelerated and accelerated variants. The variants with arbitrary (or importance) sampling are new. We provide a complexity analysis of ALPHA, from which we deduce as a direct corollary complexity bounds for its many variants, all matching or improving best known bounds.",
"title": ""
},
{
"docid": "d68bf9cd549c6d3fe067f343bd38c439",
"text": "Most multiobjective evolutionary algorithms are based on Pareto dominance for measuring the quality of solutions during their search, among them NSGA-II is well-known. A very few algorithms are based on decomposition and implicitly or explicitly try to optimize aggregations of the objectives. MOEA/D is a very recent such an algorithm. One of the major advantages of MOEA/D is that it is very easy to use well-developed single optimization local search within it. This paper compares the performance of MOEA/D and NSGA-II on the multiobjective travelling salesman problem and studies the effect of local search on the performance of MOEA/D.",
"title": ""
},
{
"docid": "5190176eb4e743b8ac356fa97c06aa7c",
"text": "This paper presents a flexible control technique of active and reactive power for single phase grid-tied photovoltaic inverter, supplied from PV array, based on quarter cycle phase delay methodology to generate the fictitious quadrature signal in order to emulate the PQ theory of three-phase systems. The investigated scheme is characterized by independent control of active and reactive power owing to the independent PQ reference signals that can satisfy the features and new functions of modern grid-tied inverters fed from renewable energy resources. The study is conducted on 10 kW PV array using PSIM program. The obtained results demonstrate the high capability to provide quick and accurate control of the injected active and reactive power to the main grid. The harmonic spectra of power components and the resultant grid current indicate that the single-phase PQ control scheme guarantees and satisfies the power quality requirements and constrains, which permits application of such scheme on a wide scale integrated with other PV inverters where independent PQ reference signals would be generated locally by energy management unit in case of microgrid, or from remote data center in case of smart grid.",
"title": ""
},
{
"docid": "8c9155ce72bc3ba11bd4680d46ad69b5",
"text": "Many theorists assume that the cognitive system is composed of a collection of encapsulated processing components or modules, each dedicated to performing a particular cognitive function. On this view, selective impairments of cognitive tasks following brain damage, as evidenced by double dissociations, are naturally interpreted in terms of the loss of particular processing components. By contrast, the current investigation examines in detail a double dissociation between concrete and abstract work reading after damage to a connectionist network that pronounces words via meaning and yet has no separable components (Plaut & Shallice, 1993). The functional specialization in the network that gives rise to the double dissociation is not transparently related to the network's structure, as modular theories assume. Furthermore, a consideration of the distribution of effects across quantitatively equivalent individual lesions in the network raises specific concerns about the interpretation of single-case studies. The findings underscore the necessity of relating neuropsychological data to cognitive theories in the context of specific computational assumptions about how the cognitive system operates normally and after damage.",
"title": ""
},
{
"docid": "aafaffb28d171e2cddadbd9b65539e21",
"text": "LCD column drivers have traditionally used nonlinear R-string style digital-to-analog converters (DAC). This paper describes an architecture that uses 840 linear charge redistribution 10/12-bit DACs to implement a 420-output column driver. Each DAC performs its conversion in less than 15 /spl mu/s and draws less than 5 /spl mu/A. This architecture allows 10-bit independent color control in a 17 mm/sup 2/ die for the LCD television market.",
"title": ""
},
{
"docid": "480c066863a97bde11b0acc32b427f4e",
"text": "When computer security incidents occur, it's critical that organizations be able to handle them in a timely manner. The speed with which an organization can recognize, analyze, and respond to an incident will affect the damage and lower recovery costs. Organized incident management requires defined, repeatable processes and the ability to learn from incidents that threaten the confidentiality, availability, and integrity of critical systems and data. Some organizations assign responsibility for incident management to a defined group of people or a designated unit, such as a computer security incident response team. This article looks at the development, purpose, and evolution of such specialized teams; the evolving nature of attacks they must deal with; and methods to evaluate the performance of such teams as well as the emergence of information sharing as a core service.",
"title": ""
},
{
"docid": "026a49cd48c7100b5b9f8f7197e71a1f",
"text": "In-wheel motors have tremendous potential to create an advanced all-wheel drive system. In this paper, a novel power assisted steering technology and its torque distribution control system were proposed, due to the independent driving characteristics of four-wheel-independent-drive electric vehicle. The first part of this study deals with the full description of the basic theory of differential drive assisted steering system. After that, 4-wheel-drive (4WD) electric vehicle dynamics model as well as driver model were built. Furthermore, the differential drive assisted steering control system, as well as the drive torque distribution and compensation control system, was also presented. Therein, the proportional–integral (PI) feedback control loop was employed to track the reference steering effort by controlling the drive torque distribution between the two sides wheels of the front axle. After that, the direct yaw moment control subsystem and the traction control subsystem were introduced, which were both employed to make the differential drive assisted steering work as well as wished. Finally, the open-loop and closed-loop simulation for validation were performed. The results verified that, the proposed differential drive torque assisted steering system cannot only reduce the steering efforts significantly, as well as ensure a stiffer steering feel at high vehicle speed and improve the returnability of the vehicle, but also keep the lateral stability of the vehicle. 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
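The abstract above hinges on a PI feedback loop that tracks a reference steering effort by redistributing drive torque across the front axle. As a purely illustrative aid (the gains, time step, and signal names below are assumptions, not values from the paper), here is a minimal discrete-time PI controller sketch in Python:

```python
# Minimal discrete-time PI controller of the kind used to track a reference
# steering effort; all numbers here are illustrative placeholders.
class PIController:
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def step(self, reference, measurement):
        error = reference - measurement
        self.integral += error * self.dt          # accumulate tracking error
        return self.kp * error + self.ki * self.integral

pi = PIController(kp=50.0, ki=10.0, dt=0.01)
# A positive output would shift drive torque toward one front wheel, a negative one toward the other.
delta_torque = pi.step(reference=2.0, measurement=2.4)
print(delta_torque)
```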
{
"docid": "b05a72a6fa5e381b341ba8c9107a690c",
"text": "Acknowledgments are widely used in scientific articles to express gratitude and credit collaborators. Despite suggestions that indexing acknowledgments automatically will give interesting insights, there is currently, to the best of our knowledge, no such system to track acknowledgments and index them. In this paper we introduce AckSeer, a search engine and a repository for automatically extracted acknowledgments in digital libraries. AckSeer is a fully automated system that scans items in digital libraries including conference papers, journals, and books extracting acknowledgment sections and identifying acknowledged entities mentioned within. We describe the architecture of AckSeer and discuss the extraction algorithms that achieve a F1 measure above 83%. We use multiple Named Entity Recognition (NER) tools and propose a method for merging the outcome from different recognizers. The resulting entities are stored in a database then made searchable by adding them to the AckSeer index along with the metadata of the containing paper/book.\n We build AckSeer on top of the documents in CiteSeerx digital library yielding more than 500,000 acknowledgments and more than 4 million mentioned entities.",
"title": ""
},
{
"docid": "2b0969dd0089bd2a2054957477ea4ce1",
"text": "A self-signaling action is an action chosen partly to secure good news about one’s traits or abilities, even when the action has no causal impact on these traits and abilities. We discuss some of the odd things that happen when self-signaling is introduced into an otherwise rational conception of action. We employ a signaling game perspective in which the diagnostic signals are an endogenous part of the equilibrium choice. We are interested (1) in pure self-signaling, separate from any desire to be regarded well by others, and (2) purely diagnostic motivation, that is, caring about what an action might reveal about a trait even when that action has no causal impact on it. When diagnostic motivation is strong, the person’s actions exhibit a rigidity characteristic of personal rules. Our model also predicts that a boost in self-image positively affects actions even though it leaves true preferences unchanged — we call this a “moral placebo effect.” 1 The chapter draws on (co-authored) Chapter 3 of Bodner’s doctoral dissertation (Bodner, 1995) and an unpublished MIT working paper (Bodner and Prelec, 1997). The authors thank Bodner’s dissertation advisors France Leclerc and Richard Thaler, workshop discussants Thomas Schelling, Russell Winer, and Mathias Dewatripont, and George Ainslie, Michael Bratman, Juan Carillo, Itzakh Gilboa, George Loewenstein, Al Mela, Matthew Rabin, Duncan Simester and Florian Zettelmeyer for comments on these ideas (with the usual disclaimer). We are grateful to Birger Wernerfelt for drawing attention to Bernheim's work on social conformity. Author addresses: Bodner – Director, Learning Innovations, 13\\4 Shimshon St., Jerusalem, 93501, Israel, learning@netvision.net.il; Prelec — E56-320, MIT, Sloan School, 38 Memorial Drive, Cambridge, MA 02139, dprelec@mit.edu. 1 Psychological evidence When we make a choice we reveal something of our inner traits or dispositions, not only to others, but also to ourselves. After the fact, this can be a source of pleasure or pain, depending on whether we were impressed or disappointed by our actions. Before the fact, the anticipation of future pride or remorse can influence what we choose to do. In a previous paper (Bodner and Prelec, 1997), we described how the model of a utility maximizing individual could be expanded to include diagnostic utility as a separate motive for action. We review the basic elements of that proposal here. The inspiration comes directly from signaling games in which actions of one person provide an informative signal to others, which in turn affects esteem (Bernheim, 1994). Here, however, actions provide a signal to ourselves, that is, actions are selfsignaling. For example, a person who takes the daily jog in spite of the rain may see that as a gratifying signal of willpower, dedication, or future well being. For someone uncertain about where he or she stands with respect to these dispositions, each new choice can provide a bit of good or bad \"news.” We incorporate the value of such \"news\" into the person's utility function. The notion that a person may draw inferences from an action he enacted partially in order to gain that inference has been posed as a philosophical paradox (e.g. Campbell and Sawden, 1985; Elster, 1985, 1989). A key problem is the following: Suppose that the disposition in question is altruism, and a person interprets a 25¢ donation to a panhandler as evidence of altruism. 
If the boost in self-esteem makes it worth giving the quarter even when there is no concern for the poor, than clearly, such a donation is not valid evidence of altruism. Logically, giving is valid evidence of high altruism only if a person with low altruism would not have given the quarter. This reasoning motivates our equilibrium approach, in which inferences from actions are an endogenous part of the equilibrium choice. As an empirical matter several studies have demonstrated that diagnostic considerations do indeed affect behavior (Quattrone and Tversky, 1984; Shafir and Tversky, 1992; Bodner, 1995). An elegant experiment by Quattrone and Tversky (1984) both defines the self-signaling phenomenon and demonstrates its existence. Quattrone and Tversky first asked each subject to take a cold pressor pain test in which the subject's arm is submerged in a container of cold water until the subject can no longer tolerate the pain. Subsequently the subject was told that recent medical studies had discovered a certain inborn heart condition, and that people with this condition are “frequently ill, prone to heart-disease, and have shorter-than-average life expectancy.” Subjects were also told that this type could be identified by the effect of exercise on the cold pressor test. Subjects were randomly assigned to one of two conditions in which they were told that the bad type of heart was associated with either increases or with decreases in tolerance to the cold water after exercise. Subjects then repeated the cold pressor test, after riding an Exercycle for one minute. As predicted, the vast majority of subjects showed changes in tolerance on the second cold pressor trial in the direction correlated of “good news”—if told that decreased tolerance is diagnostic of a bad heart they endured the near-freezing water longer (and vice versa). The result shows that people are willing to bear painful consequences for a behavior that is a signal, though not a cause, of a medical diagnosis. An experiment by Shafir and Tversky (1992) on \"Newcomb's paradox\" reinforces the same point. In the philosophical version of the paradox, a person is (hypothetically) presented with two boxes, A and B. Box A contains either nothing or some large amount of money deposited by an \"omniscient being.\" Box B contains a small amount of money for sure. The decision-maker doesn’t know what Box A contains choice, and has to choose whether to take the contents of that box (A) or of both boxes (A+B). What makes the problem a paradox is that the person is asked to believe that the omniscient being has already predicted her choice, and on that basis has already either \"punished\" a greedy choice of (A+B) with no deposit in A or \"rewarded\" a choice of (A) with a large deposit. The dominance principle argues in favor of choosing both boxes, because the deposits are fixed at the moment of choice. This is the philosophical statement of the problem. In the actual experiment, Shafir and Tversky presented a variant of Newcomb’s problem at the end of another, longer experiment, in which subjects repeatedly played a Prisoner’s Dilemma game against (virtual) opponents via computer terminals. After finishing these games, a final “bonus” problem appeared, with the two Newcomb boxes, and subjects had to choose whether to take money from one box or from both boxes. 
The experimental cover story did not mention an omniscient being but instead informed the subjects that \"a program developed at MIT recently was applied during the entire session [of Prisoner’s Dilemma choices] to analyze the pattern of your preference.” Ostensibly, this mighty program could predict choices, one or two boxes, with 85% accuracy, and, of course, if the program predicted a choice of both boxes it would then put nothing in Box A. Although it was evident that the money amounts were already set at the moment of choice, most experimental subjects opted for the single box. It is “as if” they believed that by declining to take the money in Box B, they could change the amount of money already deposited in box A. Although these are relatively recent experiments, their results are consistent with a long stream of psychological research, going back at least to the James-Lange theory of emotions which claimed that people infer their own states from behavior (e.g., they feel afraid if they see themselves running). The notion that people adopt the perspective of an outside observer when interpreting their own actions has been extensively explored in the research on self-perception (Bem, 1972). In a similar vein, there is an extensive literature confirming the existence of “self-handicapping” strategies, where a person might get too little sleep or under-prepare for an examination. In such a case, a successful performance could be attributed to ability while unsuccessful performance could be externalized as due to the lack of proper preparation (e.g. Berglas and Jones, 1978; Berglas and Baumeister, 1993). This broader context of psychological research suggests that we should view the results of Quattrone and Tversky, and Shafir and Tversky not as mere curiosities, applying to only contrived experimental situations, but instead as evidence of a general motivational “short circuit.” Motivation does not require causality, even when the lack of causality is utterly transparent. If anything, these experiments probably underestimate the impact of diagnosticity in realistic decisions, where the absence of causal links between actions and dispositions is less evident. Formally, our model distinguishes between outcome utility — the utility of the anticipated causal consequences of choice — and diagnostic utility — the value of the adjusted estimate of one’s disposition, adjusted in light of the choice. Individuals act so as to maximize some combination of the two sources of utility, and (in one version of the model) make correct inferences about what their choices imply about their dispositions. When diagnostic utility is sufficiently important, the individual chooses the same action independent of disposition. We interpret this as a personal rule. We describe other ways in which the behavior of self-signaling individuals is qualitatively different from that of standard economic agents. First, a self-signaling person will be more likely to reveal discrepancies between resolutions and actions when resolutions pertain to actions that are contingent or delayed. Thus she might honestly commit to do some worthy action if the circumstances requiring t",
"title": ""
}
] | scidocsrr |
5c6d6cf604f72d7b6dd6225ef5617bfd | Exploiting Kernel Sparsity and Entropy for Interpretable CNN Compression | [
{
"docid": "b9aa1b23ee957f61337e731611a6301a",
"text": "We propose DoReFa-Net, a method to train convolutional neural networks that have low bitwidth weights and activations using low bitwidth parameter gradients. In particular, during backward pass, parameter gradients are stochastically quantized to low bitwidth numbers before being propagated to convolutional layers. As convolutions during forward/backward passes can now operate on low bitwidth weights and activations/gradients respectively, DoReFa-Net can use bit convolution kernels to accelerate both training and inference. Moreover, as bit convolutions can be efficiently implemented on CPU, FPGA, ASIC and GPU, DoReFatNet opens the way to accelerate training of low bitwidth neural network on these hardware. Our experiments on SVHN and ImageNet datasets prove that DoReFa-Net can achieve comparable prediction accuracy as 32-bit counterparts. For example, a DoReFa-Net derived from AlexNet that has 1-bit weights, 2-bit activations, can be trained from scratch using 4-bit gradients to get 47% top-1 accuracy on ImageNet validation set.1 The DoReFa-Net AlexNet model is released publicly.",
"title": ""
},
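Because the DoReFa-Net abstract above centres on low-bitwidth quantization, a small sketch may help readers unfamiliar with it. The k-bit uniform quantizer below follows the general form used for DoReFa-style weight quantization, but the straight-through gradient estimator, the activation and gradient quantizers, and the special 1-bit weight rule are all omitted, and the function names are my own:

```python
# Hedged numpy sketch of k-bit uniform weight quantization (forward pass only).
import numpy as np

def quantize_k(x, k):
    """Uniformly quantize values in [0, 1] onto 2**k discrete levels."""
    n = 2 ** k - 1
    return np.round(x * n) / n

def quantize_weights(w, k):
    """Squash real-valued weights into [0, 1], quantize, then map back to [-1, 1]."""
    t = np.tanh(w)
    x = t / (2 * np.abs(t).max()) + 0.5
    return 2 * quantize_k(x, k) - 1

w = np.random.randn(4, 4).astype(np.float32)
print(quantize_weights(w, k=2))   # each entry now takes one of four levels in [-1, 1]
```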
{
"docid": "3921107e01c28a9b739f10c51a48505f",
"text": "The deployment of deep convolutional neural networks (CNNs) in many real world applications is largely hindered by their high computational cost. In this paper, we propose a novel learning scheme for CNNs to simultaneously 1) reduce the model size; 2) decrease the run-time memory footprint; and 3) lower the number of computing operations, without compromising accuracy. This is achieved by enforcing channel-level sparsity in the network in a simple but effective way. Different from many existing approaches, the proposed method directly applies to modern CNN architectures, introduces minimum overhead to the training process, and requires no special software/hardware accelerators for the resulting models. We call our approach network slimming, which takes wide and large networks as input models, but during training insignificant channels are automatically identified and pruned afterwards, yielding thin and compact models with comparable accuracy. We empirically demonstrate the effectiveness of our approach with several state-of-the-art CNN models, including VGGNet, ResNet and DenseNet, on various image classification datasets. For VGGNet, a multi-pass version of network slimming gives a 20× reduction in model size and a 5× reduction in computing operations.",
"title": ""
},
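The channel-level sparsity described above is usually realized by putting an L1 penalty on the scaling factors of batch-normalization layers and pruning channels whose factors are driven toward zero. The PyTorch fragment below is a hedged sketch of that idea rather than the authors' code; the penalty strength and pruning threshold are arbitrary placeholders:

```python
# Sketch of channel-level sparsity via an L1 subgradient on BatchNorm scale factors.
import torch
import torch.nn as nn

def add_bn_l1_subgradient(model, strength=1e-4):
    """Call after loss.backward(): pushes BN scaling factors (gamma) toward zero."""
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            m.weight.grad.data.add_(strength * torch.sign(m.weight.data))

def channels_to_keep(bn, threshold=1e-2):
    """Indices of channels whose learned scale survived the sparsity pressure."""
    return (bn.weight.data.abs() > threshold).nonzero(as_tuple=True)[0]
```

In training, the first function would be invoked each step between the backward pass and the optimizer update; after training, the second selects which channels to carry over into the slimmed network.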
{
"docid": "8860af067ed1af9aba072d85f3e6171b",
"text": "In this paper, we introduce a new channel pruning method to accelerate very deep convolutional neural networks. Given a trained CNN model, we propose an iterative two-step algorithm to effectively prune each layer, by a LASSO regression based channel selection and least square reconstruction. We further generalize this algorithm to multi-layer and multi-branch cases. Our method reduces the accumulated error and enhance the compatibility with various architectures. Our pruned VGG-16 achieves the state-of-the-art results by 5× speed-up along with only 0.3% increase of error. More importantly, our method is able to accelerate modern networks like ResNet, Xception and suffers only 1.4%, 1.0% accuracy loss under 2× speedup respectively, which is significant.",
"title": ""
},
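To make the two-step procedure above concrete, here is a deliberately simplified numpy/scikit-learn sketch: LASSO selects informative channels from their contributions to sampled responses, and least squares reconstructs the response on the kept channels. The scalar-per-channel simplification, shapes, and names are illustrative assumptions, not the paper's exact formulation:

```python
# Simplified sketch: LASSO-based channel selection + least-squares reconstruction.
import numpy as np
from sklearn.linear_model import Lasso

def prune_channels(X, Y, keep_ratio=0.5, alpha=1e-3):
    """
    X: (N, C) sampled per-channel contributions to a layer's responses
    Y: (N,)   original responses that pruning should preserve
    Returns kept channel indices and least-squares reconstruction weights.
    """
    lasso = Lasso(alpha=alpha)
    lasso.fit(X, Y)                                        # sparse channel importances
    order = np.argsort(-np.abs(lasso.coef_))               # most useful channels first
    keep = np.sort(order[: max(1, int(keep_ratio * X.shape[1]))])
    w, *_ = np.linalg.lstsq(X[:, keep], Y, rcond=None)     # reconstruct on kept channels
    return keep, w
```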
{
"docid": "e743bfe8c4f19f1f9a233106919c99a7",
"text": "We propose a general framework called Network Dissection for quantifying the interpretability of latent representations of CNNs by evaluating the alignment between individual hidden units and a set of semantic concepts. Given any CNN model, the proposed method draws on a data set of concepts to score the semantics of hidden units at each intermediate convolutional layer. The units with semantics are labeled across a broad range of visual concepts including objects, parts, scenes, textures, materials, and colors. We use the proposed method to test the hypothesis that interpretability is an axis-independent property of the representation space, then we apply the method to compare the latent representations of various networks when trained to solve different classification problems. We further analyze the effect of training iterations, compare networks trained with different initializations, and measure the effect of dropout and batch normalization on the interpretability of deep visual representations. We demonstrate that the proposed method can shed light on characteristics of CNN models and training methods that go beyond measurements of their discriminative power.",
"title": ""
},
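The alignment score at the heart of the framework described above is an intersection-over-union between a unit's thresholded activation map and a concept's segmentation mask. A hedged numpy sketch follows; the threshold quantile and array shapes are assumptions for illustration:

```python
# IoU between a unit's top-activation mask and a concept's ground-truth mask.
import numpy as np

def unit_concept_iou(activation_maps, concept_masks, quantile=0.995):
    """
    activation_maps: (N, H, W) upsampled activations of one unit over N images
    concept_masks:   (N, H, W) boolean segmentation masks for one concept
    """
    threshold = np.quantile(activation_maps, quantile)   # top-activation cutoff
    unit_masks = activation_maps > threshold
    inter = np.logical_and(unit_masks, concept_masks).sum()
    union = np.logical_or(unit_masks, concept_masks).sum()
    return inter / max(union, 1)
```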
{
"docid": "26dac00bc328dc9c8065ff105d1f8233",
"text": "Recently, convolutional neural networks (CNN) have demonstrated impressive performance in various computer vision tasks. However, high performance hardware is typically indispensable for the application of CNN models due to the high computation complexity, which prohibits their further extensions. In this paper, we propose an efficient framework, namely Quantized CNN, to simultaneously speed-up the computation and reduce the storage and memory overhead of CNN models. Both filter kernels in convolutional layers and weighting matrices in fully-connected layers are quantized, aiming at minimizing the estimation error of each layer's response. Extensive experiments on the ILSVRC-12 benchmark demonstrate 4 ~ 6× speed-up and 15 ~ 20× compression with merely one percentage loss of classification accuracy. With our quantized CNN model, even mobile devices can accurately classify images within one second.",
"title": ""
}
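As a rough illustration of the quantization idea in the passage above, the sketch below clusters weight sub-vectors with k-means and replaces each one by its nearest codeword. The actual method minimizes the estimation error of each layer's response rather than plain weight error, so this is only a simplified stand-in with made-up sizes:

```python
# Simplified codebook quantization of a weight matrix via k-means on sub-vectors.
import numpy as np
from sklearn.cluster import KMeans

def quantize_matrix(W, subvector_len=4, codewords=16):
    """Split W into sub-vectors, learn a codebook, snap each sub-vector to a codeword."""
    rows, cols = W.shape
    assert cols % subvector_len == 0
    subs = W.reshape(-1, subvector_len)                    # all sub-vectors, stacked
    km = KMeans(n_clusters=codewords, n_init=10, random_state=0).fit(subs)
    quantized = km.cluster_centers_[km.labels_]            # codebook lookup
    return quantized.reshape(rows, cols), km.cluster_centers_

W = np.random.randn(8, 16)
W_q, codebook = quantize_matrix(W)
print(np.abs(W - W_q).mean())                              # average quantization error
```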
] | [
{
"docid": "9d28358ff86447c9c301b13a967461c4",
"text": "According to a simple anatomical and functional model of word reading, letters displayed in one hemifield are first analysed through a cascade of contralateral retinotopic areas, which compute increasingly abstract representations. Eventually, an invariant representation of letter identities is created in the visual word form area (VWFA), reproducibly located within the left occipito-temporal sulcus. The VWFA then projects to structures involved in phonological or lexico-semantic processing. This model yields detailed predictions on the reading impairments that may follow left occipitotemporal lesions. Those predictions were confronted to behavioural, anatomical and functional MRI data gathered in normals and in patients suffering from left posterior cerebral artery infarcts. In normal subjects, alphabetic stimuli activated both the VWFA and the right-hemispheric symmetrical region (R-VWFA) relative to fixation, but only the VWFA showed a preference for alphabetic strings over simple chequerboards. The comparison of normalized brain lesions with reading-induced activations showed that the critical lesion site for the classical syndrome of pure alexia can be tightly localized to the VWFA. Reading impairments resulting from deafferentation of an intact VWFA from right- or left-hemispheric input were dissected using the same methods, shedding light on the connectivity of the VWFA. Finally, the putative role of right-hemispheric processing in the letter-by-letter reading strategy was clarified. In a letter-by-letter reader, the R-VWFA assumed some of the functional properties normally specific to the VWFA. These data corroborate our initial model of normal word perception and underline that an alternative right-hemispheric pathway can underlie functional recovery from alexia.",
"title": ""
},
{
"docid": "32c3c226186b5d10b50ce4bac8f20630",
"text": "A sub-1 V CMOS low-dropout (LDO) voltage regulator with 103 nA low-quiescent current is presented in this paper. The proposed LDO uses a digital error amplifier that can make the quiescent current lower than other LDOs with the traditional error amplifier. Besides, the LDO can be stable even without the output capacitor. With a 0.9 V power supply, the output voltage is designed as 0.5 V. The maximum output current of the LDO is 50 mA at an output of 0.5 V. The prototype of the LDO is fabricated with TSMC 0.35 mum CMOS processes. The active area without pads is only 240 mum times 400 mum.",
"title": ""
},
{
"docid": "a1f838270925e4769e15edfb37b281fd",
"text": "Assess extensor carpi ulnaris (ECU) tendon position in the ulnar groove, determine the frequency of tendon “dislocation” with the forearm prone, neutral, and supine, and determine if an association exists between ulnar groove morphology and tendon position in asymptomatic volunteers. Axial proton density-weighted MR was performed through the distal radioulnar joint with the forearm prone, neutral, and supine in 38 asymptomatic wrists. The percentage of the tendon located beyond the ulnar-most border of the ulnar groove was recorded. Ulnar groove depth and length was measured and ECU tendon signal was assessed. 15.8 % of tendons remained within the groove in all forearm positions. In 76.3 %, the tendon translated medially from prone to supine. The tendon “dislocated” in 0, 10.5, and 39.5 % with the forearm prone, neutral and supine, respectively. In 7.9 % prone, 5.3 % neutral, and 10.5 % supine exams, the tendon was 51–99 % beyond the ulnar border of the ulnar groove. Mean ulnar groove depth and length were 1.6 and 7.7 mm, respectively, with an overall trend towards greater degrees of tendon translation in shorter, shallower ulnar grooves. The ECU tendon shifts in a medial direction when the forearm is supine; however, tendon “dislocation” has not been previously documented in asymptomatic volunteers. The ECU tendon medially translated or frankly dislocated from the ulnar groove in the majority of our asymptomatic volunteers, particularly when the forearm is supine. Overall greater degrees of tendon translation were observed in shorter and shallower ulnar grooves.",
"title": ""
},
{
"docid": "189c27376ac9d6345e3ace59e7030d01",
"text": "A probabilistic or weighted grammar implies a posterior probability distribution over possible parses of a given input sentence. One often needs to extract information from this distribution, by computing the expected counts (in the unknown parse) of various grammar rules, constituents, transitions, or states. This requires an algorithm such as inside-outside or forward-backward that is tailored to the grammar formalism. Conveniently, each such algorithm can be obtained by automatically differentiating an “inside” algorithm that merely computes the log-probability of the evidence (the sentence). This mechanical procedure produces correct and efficient code. As for any other instance of back-propagation, it can be carried out manually or by software. This pedagogical paper carefully spells out the construction and relates it to traditional and nontraditional views of these algorithms.",
"title": ""
},
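The pedagogical point summarized above can be demonstrated in a few lines: differentiate the log-probability produced by an "inside"/forward pass with respect to the log-parameters, and the gradients are exactly the posterior expected counts that forward-backward would compute. The toy HMM below is my own hedged illustration (unnormalized scores, zero initial-state scores), not code from the paper:

```python
# Forward algorithm whose backward pass yields expected counts (forward-backward).
import torch

def hmm_log_forward(log_trans, log_emit, obs):
    """Log-space forward pass; returns the log-score of one observation sequence."""
    alpha = log_emit[:, obs[0]]                 # zero initial-state scores for simplicity
    for t in obs[1:]:
        alpha = torch.logsumexp(alpha.unsqueeze(1) + log_trans, dim=0) + log_emit[:, t]
    return torch.logsumexp(alpha, dim=0)

S, V = 3, 4                                          # toy sizes: states, vocabulary
log_trans = torch.randn(S, S, requires_grad=True)    # unnormalized transition scores
log_emit = torch.randn(S, V, requires_grad=True)     # unnormalized emission scores
obs = [0, 2, 1, 3]

hmm_log_forward(log_trans, log_emit, obs).backward()
# Under the path distribution these scores define, grad[i, j] is the posterior
# expected number of i -> j transitions given the observations.
print(log_trans.grad)
```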
{
"docid": "6dddd252eec80ec4f3535a82e25809cf",
"text": "The design and construction of truly humanoid robots that can perceive and interact with the environment depends significantly on their perception capabilities. In this paper we present the Karlsruhe Humanoid Head, which has been designed to be used both as part of our humanoid robots ARMAR-IIIa and ARMAR-IIIb and as a stand-alone robot head for studying various visual perception tasks in the context of object recognition and human-robot interaction. The head has seven degrees of freedom (DoF). The eyes have a common tilt and can pan independently. Each eye is equipped with two digital color cameras, one with a wide-angle lens for peripheral vision and one with a narrow-angle lens for foveal vision to allow simple visuo-motor behaviors. Among these are tracking and saccadic motions towards salient regions, as well as more complex visual tasks such as hand-eye coordination. We present the mechatronic design concept, the motor control system, the sensor system and the computational system. To demonstrate the capabilities of the head, we present accuracy test results, and the implementation of both open-loop and closed-loop control on the head.",
"title": ""
},
{
"docid": "bf4cb9dd54258890b11651a43cefdec4",
"text": "In the 50 years since their discovery, the aminoglycoside antibiotics have seen unprecedented use. Discovered in the 1940s, they were the long-sought remedy for tuberculosis and other serious bacterial infections. The side effects of renal and auditory toxicity, however, led to a decline of their use in most countries in the 1970s and 1980s. Nevertheless, today the aminoglycosides are still the most commonly used antibiotics worldwide thanks to the combination of their high efficacy with low cost. This review first summarizes the history, chemistry, antibacterial actions and acute side effects of the drugs. It then details the pathophysiology of aminoglycoside ototoxicity including experimental and clinical observations, risk factors and incidence. Pharmacokinetics, cellular actions and our current understanding of the underlying molecular mechanisms of ototoxicity are discussed at length. The review concludes with recent advances towards therapeutic intervention to prevent aminoglycoside ototoxicity.",
"title": ""
},
{
"docid": "7182814fb9304323a060242d36b10b8a",
"text": "Consumer reviews are now part of everyday decision-making. Yet, the credibility of these reviews is fundamentally undermined when businesses commit review fraud, creating fake reviews for themselves or their competitors. We investigate the economic incentives to commit review fraud on the popular review platform Yelp, using two complementary approaches and datasets. We begin by analyzing restaurant reviews that are identified by Yelp’s filtering algorithm as suspicious, or fake – and treat these as a proxy for review fraud (an assumption we provide evidence for). We present four main findings. First, roughly 16% of restaurant reviews on Yelp are filtered. These reviews tend to be more extreme (favorable or unfavorable) than other reviews, and the prevalence of suspicious reviews has grown significantly over time. Second, a restaurant is more likely to commit review fraud when its reputation is weak, i.e., when it has few reviews, or it has recently received bad reviews. Third, chain restaurants – which benefit less from Yelp – are also less likely to commit review fraud. Fourth, when restaurants face increased competition, they become more likely to receive unfavorable fake reviews. Using a separate dataset, we analyze businesses that were caught soliciting fake reviews through a sting conducted by Yelp. These data support our main results, and shed further light on the economic incentives behind a business’s decision to leave fake reviews.",
"title": ""
},
{
"docid": "96c1f90ff04e7fd37d8b8a16bc4b9c54",
"text": "Graph triangulation, which finds all triangles in a graph, has been actively studied due to its wide range of applications in the network analysis and data mining. With the rapid growth of graph data size, disk-based triangulation methods are in demand but little researched. To handle a large-scale graph which does not fit in memory, we must iteratively load small parts of the graph. In the existing literature, achieving the ideal cost has been considered to be impossible for billion-scale graphs due to the memory size constraint. In this paper, we propose an overlapped and parallel disk-based triangulation framework for billion-scale graphs, OPT, which achieves the ideal cost by (1) full overlap of the CPU and I/O operations and (2) full parallelism of multi-core CPU and FlashSSD I/O. In OPT, triangles in memory are called the internal triangles while triangles constituting vertices in memory and vertices in external memory are called the external triangles. At the macro level, OPT overlaps the internal triangulation and the external triangulation, while it overlaps the CPU and I/O operations at the micro level. Thereby, the cost of OPT is close to the ideal cost. Moreover, OPT instantiates both vertex-iterator and edge-iterator models and benefits from multi-thread parallelism on both types of triangulation. Extensive experiments conducted on large-scale datasets showed that (1) OPT achieved the elapsed time close to that of the ideal method with less than 7% of overhead under the limited memory budget, (2) OPT achieved linear speed-up with an increasing number of CPU cores, (3) OPT outperforms the state-of-the-art parallel method by up to an order of magnitude with 6 CPU cores, and (4) for the first time in the literature, the triangulation results are reported for a billion-vertex scale real-world graph.",
"title": ""
},
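For readers unfamiliar with the in-memory kernel that a framework like the one above overlaps with disk I/O, here is a tiny edge-iterator triangle-listing sketch in plain Python. Nothing in it reflects the external-memory, overlapping, or parallel machinery the abstract is actually about:

```python
# Edge-iterator triangle listing on an in-memory adjacency-set representation.
def triangles(adj):
    """adj maps each vertex to the set of its neighbours (undirected graph)."""
    found = []
    for u in adj:
        for v in adj[u]:
            if u < v:                           # consider each edge once
                for w in adj[u] & adj[v]:       # common neighbours close a triangle
                    if v < w:
                        found.append((u, v, w))
    return sorted(found)

adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1, 3}, 3: {0, 2}}
print(triangles(adj))   # [(0, 1, 2), (0, 2, 3)]
```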
{
"docid": "a73f07080a2f93a09b05b58184acf306",
"text": "This survey paper categorises, compares, and summarises from almost all published technical and review articles in automated fraud detection within the last 10 years. It defines the professional fraudster, formalises the main types and subtypes of known fraud, and presents the nature of data evidence collected within affected industries. Within the business context of mining the data to achieve higher cost savings, this research presents methods and techniques together with their problems. Compared to all related reviews on fraud detection, this survey covers much more technical articles and is the only one, to the best of our knowledge, which proposes alternative data and solutions from related domains.",
"title": ""
},
{
"docid": "13529522be402878286138168f264478",
"text": "I. Cantador (), P. Castells Universidad Autónoma de Madrid 28049 Madrid, Spain e-mails: ivan.cantador@uam.es, pablo.castells@uam.es Abstract An increasingly important type of recommender systems comprises those that generate suggestions for groups rather than for individuals. In this chapter, we revise state of the art approaches on group formation, modelling and recommendation, and present challenging problems to be included in the group recommender system research agenda in the context of the Social Web.",
"title": ""
},
{
"docid": "87fa8c6c894208e24328aa9dbb71a889",
"text": "In this paper, the design and measurements of a 8-12GHz high-efficiency MMIC high power amplifier (HPA) implemented in a 0.25μm GaAS pHEMT process is described. The 3-stage amplifier has demonstrated from 37% to 54% power-added efficiency (PAE) with 12W of output power and up to 27dB of small signal gain range from 8-12GHz. In particular, over the frequency band of 9-11 GHz, the circuit achieved above 45% PAE. The key to this design is determining and matching the optimum source and load impedance for PAE at the first two harmonics in output stage.",
"title": ""
},
{
"docid": "e18131e86ee96edf815cbf8f80f3ab24",
"text": "This dissertation investigates the use of hierarchy and problem decomposition as a means of solving large, stochastic, sequential decision problems. These problems are framed as Markov decision problems (MDPs). The new technical content of this dissertation begins with a discussion of the concept of temporal abstraction. Temporal abstraction is shown to be equivalent to the transformation of a policy deened over a region of an MDP to an action in a semi-Markov decision problem (SMDP). Several algorithms are presented for performing this transformation eeciently. This dissertation introduces the HAM method for generating hierarchical, temporally abstract actions. This method permits the partial speciication of abstract actions in a way that corresponds to an abstract plan or strategy. Abstract actions speciied as HAMs can be optimally reened for new tasks by solving a reduced SMDP. The formal results show that traditional MDP algorithms can be used to optimally reene HAMs for new tasks. This can be achieved in much less time than it would take to learn a new policy for the task from scratch. HAMs complement some novel decomposition algorithms that are presented in this dissertation. These algorithms work by constructing a cache of policies for diierent regions of the MDP and then optimally combining the cached solution to produce a global solution that is within provable bounds of the optimal solution. Together, the methods developed in this dissertation provide important tools for 2 producing good policies for large MDPs. Unlike some ad-hoc methods, these methods provide strong formal guarantees. They use prior knowledge in a principled way, and they reduce larger MDPs into smaller ones while maintaining a well-deened relationship between the smaller problem and the larger problem.",
"title": ""
},
{
"docid": "5a9113dc952bb51faf40d242e91db09c",
"text": "This study highlights the changes in lycopene and β-carotene retention in tomato juice subjected to combined pressure-temperature (P-T) treatments ((high-pressure processing (HPP; 500-700 MPa, 30 °C), pressure-assisted thermal processing (PATP; 500-700 MPa, 100 °C), and thermal processing (TP; 0.1 MPa, 100 °C)) for up to 10 min. Processing treatments utilized raw (untreated) and hot break (∼93 °C, 60 s) tomato juice as controls. Changes in bioaccessibility of these carotenoids as a result of processing were also studied. Microscopy was applied to better understand processing-induced microscopic changes. TP did not alter the lycopene content of the tomato juice. HPP and PATP treatments resulted in up to 12% increases in lycopene extractability. all-trans-β-Carotene showed significant degradation (p < 0.05) as a function of pressure, temperature, and time. Its retention in processed samples varied between 60 and 95% of levels originally present in the control. Regardless of the processing conditions used, <0.5% lycopene appeared in the form of micelles (<0.5% bioaccessibility). Electron microscopy images showed more prominent lycopene crystals in HPP and PATP processed juice than in thermally processed juice. However, lycopene crystals did appear to be enveloped regardless of the processing conditions used. The processed juice (HPP, PATP, TP) showed significantly higher (p < 0.05) all-trans-β-carotene micellarization as compared to the raw unprocessed juice (control). Interestingly, hot break juice subjected to combined P-T treatments showed 15-30% more all-trans-β-carotene micellarization than the raw juice subjected to combined P-T treatments. This study demonstrates that combined pressure-heat treatments increase lycopene extractability. However, the in vitro bioaccessibility of carotenoids was not significantly different among the treatments (TP, PATP, HPP) investigated.",
"title": ""
},
{
"docid": "b4f82364c5c4900058f50325ccc9e4c4",
"text": "OBJECTIVE\nThis study reports the psychometric properties of the 24-item version of the Diabetes Knowledge Questionnaire (DKQ).\n\n\nRESEARCH DESIGN AND METHODS\nThe original 60-item DKQ was administered to 502 adult Mexican-Americans with type 2 diabetes who are part of the Starr County Diabetes Education Study. The sample was composed of 252 participants and 250 support partners. The subjects were randomly assigned to the educational and social support intervention (n = 250) or to the wait-listed control group (n = 252). A shortened 24-item version of the DKQ was derived from the original instrument after data collection was completed. Reliability was assessed by means of Cronbach's coefficient alpha. To determine validity, differentiation between the experimental and control groups was conducted at baseline and after the educational portion of the intervention.\n\n\nRESULTS\nThe 24-item version of the DKQ (DKQ-24) attained a reliability coefficient of 0.78, indicating internal consistency, and showed sensitivity to the intervention, suggesting construct validation.\n\n\nCONCLUSIONS\nThe DKQ-24 is a reliable and valid measure of diabetes-related knowledge that is relatively easy to administer to either English or Spanish speakers.",
"title": ""
},
{
"docid": "8e00a3e7a07b69bce89a66fc6d4934aa",
"text": "This article is organised in five main sections. First, the sub-area of task-based instruction is introduced and contextualised. Its origins within communicative language teaching and second language acquisition research are sketched, and the notion of a task in language learning is defined. There is also brief coverage of the different and sometimes contrasting groups who are interested in the use of tasks. The second section surveys research into tasks, covering the different perspectives (interactional, cognitive) which have been influential. Then a third section explores how performance on tasks has been measured, generally in terms of how complex the language used is, how accurate it is, and how fluent. There is also discussion of approaches to measuring interaction. A fourth section explores the pedagogic and interventionist dimension of the use of tasks. The article concludes with a survey of the various critiques of tasks that have been made in recent years.",
"title": ""
},
{
"docid": "8c6ec02821d17fbcf79d1a42ed92a971",
"text": "OBJECTIVE\nTo explore whether an association exists between oocyte meiotic spindle morphology visualized by polarized light microscopy at the time of intracytoplasmic sperm injection and the ploidy of the resulting embryo.\n\n\nDESIGN\nProspective cohort study.\n\n\nSETTING\nPrivate IVF clinic.\n\n\nPATIENT(S)\nPatients undergoing preimplantation genetic screening/diagnosis (n = 113 patients).\n\n\nINTERVENTION(S)\nOocyte meiotic spindles were assessed by polarized light microscopy and classified at the time of intracytoplasmic sperm injection as normal, dysmorphic, translucent, telophase, or no visible spindle. Single blastomere biopsy was performed on day 3 of culture for analysis by array comparative genomic hybridization.\n\n\nMAIN OUTCOME MEASURE(S)\nSpindle morphology and embryo ploidy association was evaluated by regression methods accounting for non-independence of data.\n\n\nRESULT(S)\nThe frequency of euploidy in embryos derived from oocytes with normal spindle morphology was significantly higher than all other spindle classifications combined (odds ratio [OR] 1.93, 95% confidence interval [CI] 1.33-2.79). Oocytes with translucent (OR 0.25, 95% CI 0.13-0.46) and no visible spindle morphology (OR 0.35, 95% CI 0.19-0.63) were significantly less likely to result in euploid embryos when compared with oocytes with normal spindle morphology. There was no significant difference between normal and dysmorphic spindle morphology (OR 0.73, 95% CI 0.49-1.08), whereas no telophase spindles resulted in euploid embryos (n = 11). Assessment of spindle morphology was found to be independently associated with embryo euploidy after controlling for embryo quality (OR 1.73, 95% CI 1.16-2.60).\n\n\nCONCLUSION(S)\nOocyte spindle morphology is associated with the resulting embryo's ploidy. Oocytes with normal spindle morphology are significantly more likely to produce euploid embryos compared with oocytes with meiotic spindles that are translucent or not visible.",
"title": ""
},
{
"docid": "bbc936a3b4cd942ba3f2e1905d237b82",
"text": "Silkworm silk is among the most widely used natural fibers for textile and biomedical applications due to its extraordinary mechanical properties and superior biocompatibility. A number of physical and chemical processes have also been developed to reconstruct silk into various forms or to artificially produce silk-like materials. In addition to the direct use and the delicate replication of silk's natural structure and properties, there is a growing interest to introduce more new functionalities into silk while maintaining its advantageous intrinsic properties. In this review we assess various methods and their merits to produce functional silk, specifically those with color and luminescence, through post-processing steps as well as biological approaches. There is a highlight on intrinsically colored and luminescent silk produced directly from silkworms for a wide range of applications, and a discussion on the suitable molecular properties for being incorporated effectively into silk while it is being produced in the silk gland. With these understanding, a new generation of silk containing various functional materials (e.g., drugs, antibiotics and stimuli-sensitive dyes) would be produced for novel applications such as cancer therapy with controlled release feature, wound dressing with monitoring/sensing feature, tissue engineering scaffolds with antibacterial, anticoagulant or anti-inflammatory feature, and many others.",
"title": ""
},
{
"docid": "1b27922ab1693a15d230301c3a868afd",
"text": "Model based iterative reconstruction (MBIR) algorithms for low-dose X-ray CT are computationally complex because of the repeated use of the forward and backward projection. Inspired by this success of deep learning in computer vision applications, we recently proposed a deep convolutional neural network (CNN) for low-dose X-ray CT and won the second place in 2016 AAPM Low-Dose CT Grand Challenge. However, some of the texture are not fully recovered, which was unfamiliar to the radiologists. To cope with this problem, here we propose a direct residual learning approach on directional wavelet domain to solve this problem and to improve the performance against previous work. In particular, the new network estimates the noise of each input wavelet transform, and then the de-noised wavelet coefficients are obtained by subtracting the noise from the input wavelet transform bands. The experimental results confirm that the proposed network has significantly improved performance, preserving the detail texture of the original images.",
"title": ""
},
{
"docid": "7d523681adcc8e33eecce8442f6cd5b9",
"text": "Infrared thermography is a widely used technique to measure and portray the surface temperature of an object in form of thermal images. Two-dimensional images, however, have some inherent limitations with regard to the fidelity with which they can depict the surface temperature of a three dimensional object. In the past two decades, there have been several works describing different techniques to generate 3-D models textured with thermal information using various combinations of sensors in order to address some of these limitations. Most of these approaches generate 3-D thermograms of an object from a single perspective with bulky measurement systems and therefore do not address problems that arise when scanning objects in a continuous manner from multiple perspectives. But reductions in cost, size, and weight of infrared and depth-sensing cameras as well as a significant increase in computational power of personal computers have enabled the development of low cost, handheld, real-time 3-D thermal imaging systems. This paper elaborates through a series of experiments on the main factors that affect the real-time generation of 3-D thermograms with such a system and demonstrates how taking these factors into consideration significantly improves the appearance and fidelity of the generated 3-D thermogram. Most of the insight gained in this paper can be transferred to 3-D thermal imaging systems based on other combination of sensors.",
"title": ""
},
{
"docid": "7bf3adb52e9f2c40d419872f82429a06",
"text": "OBJECTIVES\nWe examine recent published research on the extraction of information from textual documents in the Electronic Health Record (EHR).\n\n\nMETHODS\nLiterature review of the research published after 1995, based on PubMed, conference proceedings, and the ACM Digital Library, as well as on relevant publications referenced in papers already included.\n\n\nRESULTS\n174 publications were selected and are discussed in this review in terms of methods used, pre-processing of textual documents, contextual features detection and analysis, extraction of information in general, extraction of codes and of information for decision-support and enrichment of the EHR, information extraction for surveillance, research, automated terminology management, and data mining, and de-identification of clinical text.\n\n\nCONCLUSIONS\nPerformance of information extraction systems with clinical text has improved since the last systematic review in 1995, but they are still rarely applied outside of the laboratory they have been developed in. Competitive challenges for information extraction from clinical text, along with the availability of annotated clinical text corpora, and further improvements in system performance are important factors to stimulate advances in this field and to increase the acceptance and usage of these systems in concrete clinical and biomedical research contexts.",
"title": ""
}
] | scidocsrr |
044423195a1a39eb794ddbb010b857d7 | Goal-Driven Conceptual Blending: A Computational Approach for Creativity | [
{
"docid": "c5f6a559d8361ad509ec10bbb6c3cc9b",
"text": "In this paper we present a system for automatic story generation that reuses existing stories to produce a new story that matches a given user query. The plot structure is obtained by a case-based reasoning (CBR) process over a case base of tales and an ontology of explicitly declared relevant knowledge. The resulting story is generated as a sketch of a plot described in natural language by means of natural language generation (NLG) techniques.",
"title": ""
}
] | [
{
"docid": "227786365219fe1efab6414bae0d8cdb",
"text": "Predicting the occurrence of links is a fundamental problem in networks. In the link prediction problem we are given a snapshot of a network and would like to infer which interactions among existing members are likely to occur in the near future or which existing interactions are we missing. Although this problem has been extensively studied, the challenge of how to effectively combine the information from the network structure with rich node and edge attribute data remains largely open.\n We develop an algorithm based on Supervised Random Walks that naturally combines the information from the network structure with node and edge level attributes. We achieve this by using these attributes to guide a random walk on the graph. We formulate a supervised learning task where the goal is to learn a function that assigns strengths to edges in the network such that a random walker is more likely to visit the nodes to which new links will be created in the future. We develop an efficient training algorithm to directly learn the edge strength estimation function.\n Our experiments on the Facebook social graph and large collaboration networks show that our approach outperforms state-of-the-art unsupervised approaches as well as approaches that are based on feature extraction.",
"title": ""
},
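To ground the abstract above, the fragment below shows only the inference side of the idea: a learned weight vector turns edge features into positive edge strengths, the strengths bias a random walk with restart from the source node, and the stationary scores rank candidate links. The strength function, shapes, and treatment of dangling rows are illustrative assumptions, and the training of the weights is omitted entirely:

```python
# Edge-strength-biased random walk with restart (inference side only).
import numpy as np

def edge_strength(features, w):
    return np.exp(features @ w)                  # one common choice: positive strengths

def stationary_scores(adj_features, w, source, alpha=0.15, iters=100):
    """
    adj_features: (n, n, d) edge-feature tensor, all-zero where no edge exists
    Returns restart-biased stationary scores used to rank candidate new links.
    """
    n = adj_features.shape[0]
    has_edge = adj_features.sum(axis=2) != 0
    A = edge_strength(adj_features, w) * has_edge
    P = A / np.maximum(A.sum(axis=1, keepdims=True), 1e-12)   # row-stochastic walk matrix
    restart = np.zeros(n)
    restart[source] = 1.0
    p = np.full(n, 1.0 / n)
    for _ in range(iters):
        p = (1 - alpha) * (p @ P) + alpha * restart
    return p
```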
{
"docid": "d07f3937b0500c63fea93db8f0ca33e2",
"text": "Style is a familiar category for the analysis of art. It is less so in the history of anatomical illustration. The great Renaissance and Baroque picture books of anatomy illustrated with stylish woodcuts and engravings, such as those by Charles Estienne, Andreas Vesalius and Govard Bidloo, showed figures in dramatic action in keeping with philosophical and theological ideas about human nature. Parallels can be found in paintings of the period, such as those by Titian, Michelangelo and Hans Baldung Grien. The anatomists also claimed to portray the body in an objective manner, and showed themselves as heroes of the discovery of human knowledge. Rembrandt's painting of Dr Nicholas Tulp is the best-known image of the anatomist as hero. The British empirical tradition in the 18th century saw William Cheselden and William Hunter working with techniques of representation that were intended to guarantee detailed realism. The ambition to portray forms life-size led to massive volumes, such as those by Antonio Mascagni. John Bell, the Scottish anatomist, criticized the size and pretensions of the earlier books and argued for a plain style adapted to the needs of teaching and surgery. Henry Gray's famous Anatomy of 1858, illustrated by Henry Vandyke Carter, aspired to a simple descriptive mode of functional representation that avoided stylishness, resulting in a style of its own. Successive editions of Gray progressively saw the replacement of Gray's method and of all his illustrations. The 150th anniversary edition, edited by Susan Standring, radically re-thinks the role of Gray's book within the teaching of medicine.",
"title": ""
},
{
"docid": "e6e3a5499991b2bbbcd5d4c95ae5c111",
"text": "Compelling evidence from many animal taxa indicates that male genitalia are often under postcopulatory sexual selection for characteristics that increase a male's relative fertilization success. There could, however, also be direct precopulatory female mate choice based on male genital traits. Before clothing, the nonretractable human penis would have been conspicuous to potential mates. This observation has generated suggestions that human penis size partly evolved because of female choice. Here we show, based upon female assessment of digitally projected life-size, computer-generated images, that penis size interacts with body shape and height to determine male sexual attractiveness. Positive linear selection was detected for penis size, but the marginal increase in attractiveness eventually declined with greater penis size (i.e., quadratic selection). Penis size had a stronger effect on attractiveness in taller men than in shorter men. There was a similar increase in the positive effect of penis size on attractiveness with a more masculine body shape (i.e., greater shoulder-to-hip ratio). Surprisingly, larger penis size and greater height had almost equivalent positive effects on male attractiveness. Our results support the hypothesis that female mate choice could have driven the evolution of larger penises in humans. More broadly, our results show that precopulatory sexual selection can play a role in the evolution of genital traits.",
"title": ""
},
{
"docid": "009a7247ef27758f6c303cea8108dae1",
"text": "We describe a method for automatic generation of a learning path for education or selfeducation. As a knowledge base, our method uses the semantic structure view from Wikipedia, leveraging on its broad variety of covered concepts. We evaluate our results by comparing them with the learning paths suggested by a group of teachers. Our algorithm is a useful tool for instructional design process.",
"title": ""
},
{
"docid": "cc0687b22e2ba514a2eef5a7aa88963a",
"text": "In this paper, we face the problem of phonetic segmentation under the hierarchical clustering framework. We extend the framework with an unsupervised segmentation algorithm based on a divisive clustering technique and compare both approaches: agglomerative nesting (Bottom-up) against divisive analysis (Top-down). As both approaches require prior knowledge of the number of segments to be estimated, we present a stopping criterion in order to make these algorithms become standalone. This criterion provides an estimation of the underlying number of segments inside the speech acoustic data. The evaluation of both approaches using the stopping criterion reveals good compromise between boundary estimation (Hit rate) and number of segments estimation (over-under segmentation).",
"title": ""
},
{
"docid": "b1e3fe6f24823a9e0dde74f6393d1348",
"text": "The dynamic tree is an abstract data type that allows the maintenance of a collection of trees subject to joining by adding edges (linking) and splitting by deleting edges (cutting), while at the same time allowing reporting of certain combinations of vertex or edge values. For many applications of dynamic trees, values must be combined along paths. For other applications, values must be combined over entire trees. For the latter situation, we show that an idea used originally in parallel graph algorithms, to represent trees by Euler tours, leads to a simple implementation with a time of O(log n) per tree operation, where n is the number of tree vertices. We apply this representation to the implementation of two versions of the network simplex algorithm, resulting in a time of O(log n) per pivot, where n is the number of vertices in the problem network.",
"title": ""
},
{
"docid": "44e7e452b9b27d2028d15c88256eff30",
"text": "In social media communication, multilingual speakers often switch between languages, and, in such an environment, automatic language identification becomes both a necessary and challenging task. In this paper, we describe our work in progress on the problem of automatic language identification for the language of social media. We describe a new dataset that we are in the process of creating, which contains Facebook posts and comments that exhibit code mixing between Bengali, English and Hindi. We also present some preliminary word-level language identification experiments using this dataset. Different techniques are employed, including a simple unsupervised dictionary-based approach, supervised word-level classification with and without contextual clues, and sequence labelling using Conditional Random Fields. We find that the dictionary-based approach is surpassed by supervised classification and sequence labelling, and that it is important to take contextual clues into consideration.",
"title": ""
},
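The unsupervised dictionary-based baseline mentioned above is easy to picture with a toy sketch: label each token by which language's word list contains it, falling back to an unknown label on misses or ties. The word lists and tie-breaking rule below are invented for illustration only:

```python
# Toy word-level language identification by dictionary lookup.
LEXICONS = {
    "en": {"the", "is", "good", "movie"},
    "hi": {"bahut", "accha", "hai"},
    "bn": {"khub", "bhalo", "chilo"},
}

def label_tokens(tokens):
    labels = []
    for tok in tokens:
        hits = [lang for lang, words in LEXICONS.items() if tok.lower() in words]
        labels.append(hits[0] if len(hits) == 1 else "unk")   # ambiguous or unseen -> unk
    return labels

print(label_tokens("the movie bahut accha chilo".split()))
# ['en', 'en', 'hi', 'hi', 'bn']
```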
{
"docid": "18f13858b5f9e9a8e123d80b159c4d72",
"text": "Cryptocurrency, and its underlying technologies, has been gaining popularity for transaction management beyond financial transactions. Transaction information is maintained in the blockchain, which can be used to audit the integrity of the transaction. The focus on this paper is the potential availability of block-chain technology of other transactional uses. Block-chain is one of the most stable open ledgers that preserves transaction information, and is difficult to forge. Since the information stored in block-chain is not related to personally identifiable information, it has the characteristics of anonymity. Also, the block-chain allows for transparent transaction verification since all information in the block-chain is open to the public. These characteristics are the same as the requirements for a voting system. That is, strong robustness, anonymity, and transparency. In this paper, we propose an electronic voting system as an application of blockchain, and describe block-chain based voting at a national level through examples.",
"title": ""
},
{
"docid": "ac0b86c5a0e7949c5e77610cee865e2b",
"text": "BACKGROUND\nDegenerative lumbosacral stenosis is a common problem in large breed dogs. For severe degenerative lumbosacral stenosis, conservative treatment is often not effective and surgical intervention remains as the last treatment option. The objective of this retrospective study was to assess the middle to long term outcome of treatment of severe degenerative lumbosacral stenosis with pedicle screw-rod fixation with or without evidence of radiological discospondylitis.\n\n\nRESULTS\nTwelve client-owned dogs with severe degenerative lumbosacral stenosis underwent pedicle screw-rod fixation of the lumbosacral junction. During long term follow-up, dogs were monitored by clinical evaluation, diagnostic imaging, force plate analysis, and by using questionnaires to owners. Clinical evaluation, force plate data, and responses to questionnaires completed by the owners showed resolution (n = 8) or improvement (n = 4) of clinical signs after pedicle screw-rod fixation in 12 dogs. There were no implant failures, however, no interbody vertebral bone fusion of the lumbosacral junction was observed in the follow-up period. Four dogs developed mild recurrent low back pain that could easily be controlled by pain medication and an altered exercise regime.\n\n\nCONCLUSIONS\nPedicle screw-rod fixation offers a surgical treatment option for large breed dogs with severe degenerative lumbosacral stenosis with or without evidence of radiological discospondylitis in which no other treatment is available. Pedicle screw-rod fixation alone does not result in interbody vertebral bone fusion between L7 and S1.",
"title": ""
},
{
"docid": "bf164afc6315bf29a07e6026a3db4a26",
"text": "iBeacons are a new way to interact with hardware. An iBeacon is a Bluetooth Low Energy device that only sends a signal in a specific format. They are like a lighthouse that sends light signals to boats. This paper explains what an iBeacon is, how it works and how it can simplify your daily life, what restriction comes with iBeacon and how to improve this restriction., as well as, how to use Location-based Services to track items. E.g., every time you touchdown at an airport and wait for your suitcase at the luggage reclaim, you have no information when your luggage will arrive at the conveyor belt. With an iBeacon inside your suitcase, it is possible to track the luggage and to receive a push notification about it even before you can see it. This is just one possible solution to use them. iBeacon can create a completely new shopping experience or make your home smarter. This paper demonstrates the luggage tracking use case and evaluates its possibilities and restrictions.",
"title": ""
},
{
"docid": "a094869c9f79d0fccbc6892a345fec8b",
"text": "Recent years have seen an exploration of data volumes from a myriad of IoT devices, such as various sensors and ubiquitous cameras. The deluge of IoT data creates enormous opportunities for us to explore the physical world, especially with the help of deep learning techniques. Traditionally, the Cloud is the option for deploying deep learning based applications. However, the challenges of Cloud-centric IoT systems are increasing due to significant data movement overhead, escalating energy needs, and privacy issues. Rather than constantly moving a tremendous amount of raw data to the Cloud, it would be beneficial to leverage the emerging powerful IoT devices to perform the inference task. Nevertheless, the statically trained model could not efficiently handle the dynamic data in the real in-situ environments, which leads to low accuracy. Moreover, the big raw IoT data challenges the traditional supervised training method in the Cloud. To tackle the above challenges, we propose In-situ AI, the first Autonomous and Incremental computing framework and architecture for deep learning based IoT applications. We equip deep learning based IoT system with autonomous IoT data diagnosis (minimize data movement), and incremental and unsupervised training method (tackle the big raw IoT data generated in ever-changing in-situ environments). To provide efficient architectural support for this new computing paradigm, we first characterize the two In-situ AI tasks (i.e. inference and diagnosis tasks) on two popular IoT devices (i.e. mobile GPU and FPGA) and explore the design space and tradeoffs. Based on the characterization results, we propose two working modes for the In-situ AI tasks, including Single-running and Co-running modes. Moreover, we craft analytical models for these two modes to guide the best configuration selection. We also develop a novel two-level weight shared In-situ AI architecture to efficiently deploy In-situ tasks to IoT node. Compared with traditional IoT systems, our In-situ AI can reduce data movement by 28-71%, which further yields 1.4X-3.3X speedup on model update and contributes to 30-70% energy saving.",
"title": ""
},
{
"docid": "016ba468269a1693cb49005712e00d52",
"text": "In 2011, Google released a one-month production trace with hundreds of thousands of jobs running across over 12,000 heterogeneous hosts. In order to perform in-depth research based on the trace, it is necessary to construct a close-to-practice simulation system. In this paper, we devise a distributed cloud simulator (or toolkit) based on virtual machines, with three important features. (1) The dynamic changing resource amounts (such as CPU rate and memory size) consumed by the reproduced jobs can be emulated as closely as possible to the real values in the trace. (2) Various types of events (e.g., kill/evict event) can be emulated precisely based on the trace. (3) Our simulation toolkit is able to emulate more complex and useful cases beyond the original trace to adapt to various research demands. We evaluate the system on a real cluster environment with 16×8=128 cores and 112 virtual machines (VMs) constructed by XEN hypervisor. To the best of our knowledge, this is the first work to reproduce Google cloud environment with real experimental system setting and real-world large scale production trace. Experiments show that our simulation system could effectively reproduce the real checkpointing/restart events based on Google trace, by leveraging Berkeley Lab Checkpoint/Restart (BLCR) tool. It can simultaneously process up to 1200 emulated Google jobs over the 112 VMs. Such a simulation toolkit has been released as a GNU GPL v3 software for free downloading, and it has been successfully applied to the fundamental research on the optimization of checkpoint intervals for Google tasks. Copyright c ⃝ 2013 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "fae921cbf39b45fd73f8e8e8cb3cc92f",
"text": "Analyses of areal variations in the subsidence and rebound occurring over stressed aquifer systems, in conjunction with measurements of the hydraulic head fluctuations causing these displacements, can yield valuable information about the compressibility and storage properties of the aquifer system. Historically, stress-strain relationships have been derived from paired extensometer/piezometer installations, which provide only point source data. Because of the general unavailability of spatially detailed deformation data, areal stress-strain relations and their variability are not commonly considered in constraining conceptual and numerical models of aquifer systems. Interferometric synthetic aperture radar (InSAR) techniques can map ground displacements at a spatial scale of tens of meters over 100 km wide swaths. InSAR has been used previously to characterize larger magnitude, generally permanent aquifer system compaction and land subsidence at yearly and longer timescales, caused by sustained drawdown of groundwater levels that produces intergranular stresses consistently greater than the maximum historical stress. We present InSAR measurements of the typically small-magnitude, generally recoverable deformations of the Las Vegas Valley aquifer system occurring at seasonal timescales. From these we derive estimates of the elastic storage coefficient for the aquifer system at several locations in Las Vegas Valley. These high-resolution measurements offer great potential for future investigations into the mechanics of aquifer systems and the spatial heterogeneity of aquifer system structure and material properties as well as for monitoring ongoing aquifer system compaction and land subsidence.",
"title": ""
},
{
"docid": "b754b1d245aa68aeeb37cf78cf54682f",
"text": "This paper postulates that water structure is altered by biomolecules as well as by disease-enabling entities such as certain solvated ions, and in turn water dynamics and structure affect the function of biomolecular interactions. Although the structural and dynamical alterations are subtle, they perturb a well-balanced system sufficiently to facilitate disease. We propose that the disruption of water dynamics between and within cells underlies many disease conditions. We survey recent advances in magnetobiology, nanobiology, and colloid and interface science that point compellingly to the crucial role played by the unique physical properties of quantum coherent nanomolecular clusters of magnetized water in enabling life at the cellular level by solving the “problems” of thermal diffusion, intracellular crowding, and molecular self-assembly. Interphase water and cellular surface tension, normally maintained by biological sulfates at membrane surfaces, are compromised by exogenous interfacial water stressors such as cationic aluminum, with consequences that include greater local water hydrophobicity, increased water tension, and interphase stretching. The ultimate result is greater “stiffness” in the extracellular matrix and either the “soft” cancerous state or the “soft” neurodegenerative state within cells. Our hypothesis provides a basis for understanding why so many idiopathic diseases of today are highly stereotyped and pluricausal. OPEN ACCESS Entropy 2013, 15 3823",
"title": ""
},
{
"docid": "53821da1274fd420fe0f7eeba024b95d",
"text": "An empirical study was performed to train naive subjects in the use of a prototype Boolean logic-based information retrieval system on a bibliographic database. Subjects were undergraduates with little or no prior computing experience. Subjects trained with a conceptual model of the system performed better than subjects trained with procedural instructions, but only on complex, problem-solving tasks. Performance was equal on simple tasks. Differences in patterns of interaction with the system (based on a stochastic process model) showed parallel results. Most subjects were able to articulate some description of the system's operation, but few articulated a model similar to the card catalog analogy provided in training. Eleven of 43 subjects were unable to achieve minimal competency in system use. The failure rate was equal between training conditions and genders; the only differences found between those passing and failing the benchmark test were academic major and in frequency of library use.",
"title": ""
},
{
"docid": "450f39fb29cc8b9a51e67da5a4d723c5",
"text": "Trends in data mining are increasing over the time. Current world is of internet and everything is available over internet, which leads to criminal and malicious activity. So the identity of available content is now a need. Available content is always in the form of text data. Authorship analysis is the statistical study of linguistic and computational characteristics of the written documents of individuals. This paper describes review of various methods for authorship analysis and identification for a set of provided text. Surely research in authorship analysis and identification will continue and even increase over decades. In this article, we put our vision of future authorship analysis and identification with high performance and solution for behavioral feature extraction from set of text documents.",
"title": ""
},
{
"docid": "b776307764d3946fc4e7f6158b656435",
"text": "Recent development advances have allowed silicon (Si) semiconductor technology to approach the theoretical limits of the Si material; however, power device requirements for many applications are at a point that the present Si-based power devices can not handle. The requirements include higher blocking voltages, switching frequencies, efficiency, and reliability. To overcome these limitations, new semiconductor materials for power device applications are needed. For high power requirements, wide band gap semiconductors like silicon carbide (SiC), gallium nitride (GaN), and diamond with their superior electrical properties are likely candidates to replace Si in the near future. This paper compares all the aforementioned wide bandgap semiconductors with respect to their promise and applicability for power applications and predicts the future of power device semiconductor materials.",
"title": ""
},
{
"docid": "a8edc02eb78637f18fc948d81397fc75",
"text": "When we are investigating an object in a data set, which itself may or may not be an outlier, can we identify unusual (i.e., outlying) aspects of the object? In this paper, we identify the novel problem of mining outlying aspects on numeric data. Given a query object $$o$$ o in a multidimensional numeric data set $$O$$ O , in which subspace is $$o$$ o most outlying? Technically, we use the rank of the probability density of an object in a subspace to measure the outlyingness of the object in the subspace. A minimal subspace where the query object is ranked the best is an outlying aspect. Computing the outlying aspects of a query object is far from trivial. A naïve method has to calculate the probability densities of all objects and rank them in every subspace, which is very costly when the dimensionality is high. We systematically develop a heuristic method that is capable of searching data sets with tens of dimensions efficiently. Our empirical study using both real data and synthetic data demonstrates that our method is effective and efficient.",
"title": ""
},
{
"docid": "d12a485101f9453abcd2437c4cfccb01",
"text": "This report describes a low cost indoor position sensing system utilising a combination of radio frequency and ultrasonics. Using a single rf transmitter and four ceiling mounted ultrasonic transmitters it provides coverage in a typical room in an area greater than 8m by 8m. As well as finding position within a room, it uses data encoded into the rf signal to determine the relevant web server for a building, and which floor and room the user is in. It is intended to be used primarily by wearable/mobile computers, though it has also been extended for use as a tracking system.",
"title": ""
},
{
"docid": "47eef1318d313e2f89bb700f8cd34472",
"text": "This paper sets out to detect controversial news reports using online discussions as a source of information. We define controversy as a public discussion that divides society and demonstrate that a content and stylometric analysis of these debates yields useful signals for extracting disputed news items. Moreover, we argue that a debate-based approach could produce more generic models, since the discussion architectures we exploit to measure controversy occur on many different platforms.",
"title": ""
}
] | scidocsrr |
1f3a590a37044a2a27bfe3bdd913f1a3 | Adaptive Semi-Supervised Learning with Discriminative Least Squares Regression | [
{
"docid": "25442f28ef0964869966213df255d3be",
"text": "In this paper, we propose a novel ℓ1-norm graph model to perform unsupervised and semi-supervised learning methods. Instead of minimizing the ℓ2-norm of spectral embedding as traditional graph based learning methods, our new graph learning model minimizes the ℓ1-norm of spectral embedding with well motivation. The sparsity produced by the ℓ1-norm minimization results in the solutions with much clearer cluster structures, which are suitable for both image clustering and classification tasks. We introduce a new efficient iterative algorithm to solve the ℓ1-norm of spectral embedding minimization problem, and prove the convergence of the algorithm. More specifically, our algorithm adaptively re-weight the original weights of graph to discover clearer cluster structure. Experimental results on both toy data and real image data sets show the effectiveness and advantages of our proposed method.",
"title": ""
},
{
"docid": "6228f059be27fa5f909f58fb60b2f063",
"text": "We propose a unified manifold learning framework for semi-supervised and unsupervised dimension reduction by employing a simple but effective linear regression function to map the new data points. For semi-supervised dimension reduction, we aim to find the optimal prediction labels F for all the training samples X, the linear regression function h(X) and the regression residue F0 = F - h(X) simultaneously. Our new objective function integrates two terms related to label fitness and manifold smoothness as well as a flexible penalty term defined on the residue F0. Our Semi-Supervised learning framework, referred to as flexible manifold embedding (FME), can effectively utilize label information from labeled data as well as a manifold structure from both labeled and unlabeled data. By modeling the mismatch between h(X) and F, we show that FME relaxes the hard linear constraint F = h(X) in manifold regularization (MR), making it better cope with the data sampled from a nonlinear manifold. In addition, we propose a simplified version (referred to as FME/U) for unsupervised dimension reduction. We also show that our proposed framework provides a unified view to explain and understand many semi-supervised, supervised and unsupervised dimension reduction techniques. Comprehensive experiments on several benchmark databases demonstrate the significant improvement over existing dimension reduction algorithms.",
"title": ""
},
{
"docid": "04ba17b4fc6b506ee236ba501d6cb0cf",
"text": "We propose a family of learning algorithms based on a new form f regularization that allows us to exploit the geometry of the marginal distribution. We foc us on a semi-supervised framework that incorporates labeled and unlabeled data in a general-p u pose learner. Some transductive graph learning algorithms and standard methods including Suppor t Vector Machines and Regularized Least Squares can be obtained as special cases. We utilize pr op rties of Reproducing Kernel Hilbert spaces to prove new Representer theorems that provide theor e ical basis for the algorithms. As a result (in contrast to purely graph-based approaches) we ob tain a natural out-of-sample extension to novel examples and so are able to handle both transductive and truly semi-supervised settings. We present experimental evidence suggesting that our semiupervised algorithms are able to use unlabeled data effectively. Finally we have a brief discuss ion of unsupervised and fully supervised learning within our general framework.",
"title": ""
}
] | [
{
"docid": "0ff1ea411bcdd28b6c8bc773176f8e1c",
"text": "The paper presents a generalization of Haskell's IO monad suitable for synchronous concurrent programming. The new monad integrates the deterministic concurrency paradigm of synchronous programming with the powerful abstraction features of functional languages and with full support for imperative programming. For event-driven applications, it offers an alternative to the use of existing, thread-based concurrency extensions of functional languages. The concepts presented have been applied in practice in a framework for programming interactive graphics.",
"title": ""
},
{
"docid": "24a3924f15cb058668e8bcb7ba53ee66",
"text": "This paper presents a latest survey of different technologies used in medical image segmentation using Fuzzy C Means (FCM).The conventional fuzzy c-means algorithm is an efficient clustering algorithm that is used in medical image segmentation. To update the study of image segmentation the survey has performed. The techniques used for this survey are Brain Tumor Detection Using Segmentation Based on Hierarchical Self Organizing Map, Robust Image Segmentation in Low Depth Of Field Images, Fuzzy C-Means Technique with Histogram Based Centroid Initialization for Brain Tissue Segmentation in MRI of Head Scans.",
"title": ""
},
{
"docid": "6c937adbdfe7f86a83948f1a28d67649",
"text": "BACKGROUND\nViral warts are a common skin condition, which can range in severity from a minor nuisance that resolve spontaneously to a troublesome, chronic condition. Many different topical treatments are available.\n\n\nOBJECTIVES\nTo evaluate the efficacy of local treatments for cutaneous non-genital warts in healthy, immunocompetent adults and children.\n\n\nSEARCH METHODS\nWe updated our searches of the following databases to May 2011: the Cochrane Skin Group Specialised Register, CENTRAL in The Cochrane Library, MEDLINE (from 2005), EMBASE (from 2010), AMED (from 1985), LILACS (from 1982), and CINAHL (from 1981). We searched reference lists of articles and online trials registries for ongoing trials.\n\n\nSELECTION CRITERIA\nRandomised controlled trials (RCTs) of topical treatments for cutaneous non-genital warts.\n\n\nDATA COLLECTION AND ANALYSIS\nTwo authors independently selected trials and extracted data; a third author resolved any disagreements.\n\n\nMAIN RESULTS\nWe included 85 trials involving a total of 8815 randomised participants (26 new studies were included in this update). There was a wide range of different treatments and a variety of trial designs. Many of the studies were judged to be at high risk of bias in one or more areas of trial design.Trials of salicylic acid (SA) versus placebo showed that the former significantly increased the chance of clearance of warts at all sites (RR (risk ratio) 1.56, 95% CI (confidence interval) 1.20 to 2.03). Subgroup analysis for different sites, hands (RR 2.67, 95% CI 1.43 to 5.01) and feet (RR 1.29, 95% CI 1.07 to 1.55), suggested it might be more effective for hands than feet.A meta-analysis of cryotherapy versus placebo for warts at all sites favoured neither intervention nor control (RR 1.45, 95% CI 0.65 to 3.23). Subgroup analysis for different sites, hands (RR 2.63, 95% CI 0.43 to 15.94) and feet (RR 0.90, 95% CI 0.26 to 3.07), again suggested better outcomes for hands than feet. One trial showed cryotherapy to be better than both placebo and SA, but only for hand warts.There was no significant difference in cure rates between cryotherapy at 2-, 3-, and 4-weekly intervals.Aggressive cryotherapy appeared more effective than gentle cryotherapy (RR 1.90, 95% CI 1.15 to 3.15), but with increased adverse effects.Meta-analysis did not demonstrate a significant difference in effectiveness between cryotherapy and SA at all sites (RR 1.23, 95% CI 0.88 to 1.71) or in subgroup analyses for hands and feet.Two trials with 328 participants showed that SA and cryotherapy combined appeared more effective than SA alone (RR 1.24, 95% CI 1.07 to 1.43).The benefit of intralesional bleomycin remains uncertain as the evidence was inconsistent. 
The most informative trial with 31 participants showed no significant difference in cure rate between bleomycin and saline injections (RR 1.28, 95% CI 0.92 to 1.78).Dinitrochlorobenzene was more than twice as effective as placebo in 2 trials with 80 participants (RR 2.12, 95% CI 1.38 to 3.26).Two trials of clear duct tape with 193 participants demonstrated no advantage over placebo (RR 1.43, 95% CI 0.51 to 4.05).We could not combine data from trials of the following treatments: intralesional 5-fluorouracil, topical zinc, silver nitrate (which demonstrated possible beneficial effects), topical 5-fluorouracil, pulsed dye laser, photodynamic therapy, 80% phenol, 5% imiquimod cream, intralesional antigen, and topical alpha-lactalbumin-oleic acid (which showed no advantage over placebo).We did not identify any RCTs that evaluated surgery (curettage, excision), formaldehyde, podophyllotoxin, cantharidin, diphencyprone, or squaric acid dibutylester.\n\n\nAUTHORS' CONCLUSIONS\nData from two new trials comparing SA and cryotherapy have allowed a better appraisal of their effectiveness. The evidence remains more consistent for SA, but only shows a modest therapeutic effect. Overall, trials comparing cryotherapy with placebo showed no significant difference in effectiveness, but the same was also true for trials comparing cryotherapy with SA. Only one trial showed cryotherapy to be better than both SA and placebo, and this was only for hand warts. Adverse effects, such as pain, blistering, and scarring, were not consistently reported but are probably more common with cryotherapy.None of the other reviewed treatments appeared safer or more effective than SA and cryotherapy. Two trials of clear duct tape demonstrated no advantage over placebo. Dinitrochlorobenzene (and possibly other similar contact sensitisers) may be useful for the treatment of refractory warts.",
"title": ""
},
{
"docid": "fbcebe9e6b22049918f262dae0dcd099",
"text": "Trust is a fundamental concern in large-scale open distributed systems. It lies at the core of all interactions between the entities that have to operate in such uncertain and constantly changing environments. Given this complexity, these components, and the ensuing system, are increasingly being conceptualised, designed, and built using agent-based techniques and, to this end, this paper examines the specific role of trust in multi-agent systems. In particular, we survey the state of the art and provide an account of the main directions along which research efforts are being focused. In so doing, we critically evaluate the relative strengths and weaknesses of the main models that have been proposed and show how, fundamentally, they all seek to minimise the uncertainty in interactions. Finally, we outline the areas that require further research in order to develop a comprehensive treatment of trust in complex computational settings.",
"title": ""
},
{
"docid": "dded827d0b9c513ad504663547018749",
"text": "In this paper, various key points in the rotor design of a low-cost permanent-magnet-assisted synchronous reluctance motor (PMa-SynRM) are introduced and their effects are studied. Finite-element approach has been utilized to show the effects of these parameters on the developed average electromagnetic torque and total d-q inductances. One of the features considered in the design of this motor is the magnetization of the permanent magnets mounted in the rotor core using the stator windings. This feature will cause a reduction in cost and ease of manufacturing. Effectiveness of the design procedure is validated by presenting simulation and experimental results of a 1.5-kW prototype PMa-SynRM",
"title": ""
},
{
"docid": "a531694dba7fc479b43d0725bc68de15",
"text": "This paper gives an introduction to the essential challenges of software engineering and requirements that software has to fulfill in the domain of automation. Besides, the functional characteristics, specific constraints and circumstances are considered for deriving requirements concerning usability, the technical process, the automation functions, used platform and the well-established models, which are described in detail. On the other hand, challenges result from the circumstances at different points in the single phases of the life cycle of the automated system. The requirements for life-cycle-management, tools and the changeability during runtime are described in detail.",
"title": ""
},
{
"docid": "2747952e921f9e0c2beb524957edf2a0",
"text": "AngloGold Ashanti is an international gold mining company that has recently implemented an information security awareness program worldwide at all of their operations. Following the implementation, there was a normal business need to evaluate and measure the success and effectiveness of the program. A measuring tool that can be applied globally and that addressed AngloGold Ashanti’s unique requirements was developed and applied at the mining sites located in the West Africa region. The objective of this paper is, firstly, to give a brief overview on the measuring tool developed and, secondly to report on the application and results in the West Africa region.",
"title": ""
},
{
"docid": "c43164c1828b7889137fe26afce61f58",
"text": "We describe an artificial ant colony capable of solving the traveling salesman problem (TSP). Ants of the artificial colony are able to generate successively shorter feasible tours by using information accumulated in the form of a pheromone trail deposited on the edges of the TSP graph. Computer simulations demonstrate that the artificial ant colony is capable of generating good solutions to both symmetric and asymmetric instances of the TSP. The method is an example, like simulated annealing, neural networks, and evolutionary computation, of the successful use of a natural metaphor to design an optimization algorithm.",
"title": ""
},
{
"docid": "d846edbd57098464fa2b0f05e0e54942",
"text": "This paper explores recent developments in agile systems engineering. We draw a distinction between agility in the systems engineering process versus agility in the resulting system itself. In the first case the emphasis is on carefully exploring the space of design alternatives and to delay the freeze point as long as possible as new information becomes available during product development. In the second case we are interested in systems that can respond to changed requirements after initial fielding of the system. We provide a list of known and emerging methods in both domains and explore a number of illustrative examples such as the case of the Iridium satellite constellation or recent developments in the automobile industry.",
"title": ""
},
{
"docid": "2fdf6538c561e05741baafe43ec6f145",
"text": "Models based on deep convolutional networks have dominated recent image interpretation tasks; we investigate whether models which are also recurrent are effective for tasks involving sequences, visual and otherwise. We describe a class of recurrent convolutional architectures which is end-to-end trainable and suitable for large-scale visual understanding tasks, and demonstrate the value of these models for activity recognition, image captioning, and video description. In contrast to previous models which assume a fixed visual representation or perform simple temporal averaging for sequential processing, recurrent convolutional models are “doubly deep” in that they learn compositional representations in space and time. Learning long-term dependencies is possible when nonlinearities are incorporated into the network state updates. Differentiable recurrent models are appealing in that they can directly map variable-length inputs (e.g., videos) to variable-length outputs (e.g., natural language text) and can model complex temporal dynamics; yet they can be optimized with backpropagation. Our recurrent sequence models are directly connected to modern visual convolutional network models and can be jointly trained to learn temporal dynamics and convolutional perceptual representations. Our results show that such models have distinct advantages over state-of-the-art models for recognition or generation which are separately defined or optimized.",
"title": ""
},
{
"docid": "e79699c7578d30ab42ce173a7c1055f8",
"text": "Cellulose is the major component of plant biomass, which is readily available and does not compete with the food supply. Hydrolysis of cellulose as the entry point of biorefinery schemes is an important process for chemical and biochemical industries based on sugars, especially for fuel ethanol production. However, cellulose not only provides a renewable carbon source, but also offers challenges to researchers due to the structural recalcitrance. 2] Considerable efforts have been devoted to the study of hydrolysis of cellulose by enzymes, acids, and supercritical water. Recently, acid-catalyzed hydrolysis has attracted increasing attention. To overcome resistance to degradation through breaking hydrogen bonds and b-1,4glycosidic bonds, ionic liquids have been employed to form homogeneous solutions of cellulose prior to hydrolysis. Although the homogeneous hydrolysis can be carried out under mild conditions with a high glucose yield, the workup for separation of sugars, dehydrated products, and unreacted cellulose from the ionic liquid is normally difficult. 4c, 5] Hydrolysis of cellulose in diluted acids has been practiced for a long time. However, acid-waste generation and corrosion hazards are significant drawbacks of this process. To move towards more environmentally sustainable approaches, Onda et al. demonstrated that sulfonated activated carbon can convert amorphous ball-milled cellulose into glucose with a yield of 41 %. Almost simultaneously, Hara et al. completely hydrolyzed cellulose to water soluble b-1,4-glucans at 100 8C using a more robust sulfonated activated carbon catalyst. Hydrolysis of cellobiose and cellulose catalyzed by layered niobium molybdate was achieved by Takagaki et al. , athough the yield of glucose from cellulose was low. Fukuoka et al. found that mesoporouscarbon-supported Ru catalysts were also able to catalyze the hydrolysis of cellulose into glucose. More recently, sulfonated silica/carbon cellulose hydrolysis catalysts were synthesized by Jacobs et al. , affording glucose in 50 % yield. Zhang et al. employed sulfonated carbon with mesoporous structure for hydrolysis of cellulose, giving a glucose yield of 75 %, which is the highest recorded yield on a solid acid. Considering the real biomass components and practical process for glucose production, two challenges remain for a catalytic system. Firstly, solid catalysts should be readily separated from the solid residues. Although cellulose can be converted almost completely in some cases, 11] lignin components can not be converted and humins are formed sometimes as solid residues. Secondly, to achieve a high yield of glucose, the reaction was usually conducted at low cellulose/liquid ratio (ca. 1:100). 8–11] However, concentration of the glucose solution, prior to the production of ethanol or other compounds, is energy-consuming. Thus effective treatment of high cellulose loadings is required. In view of the importance of the cellulose/liquid ratio, Jacobs et al. carried out hexitol production from concentrated cellulose with heteropoly acid and Ru/C. We designed and synthesized a magnetic solid acid catalyst for the hydrolysis of cellulose at high cellulose/liquid ratio (1:10 or 1:15; Scheme 1). Sulfonic acid-functionalized mesopo-",
"title": ""
},
{
"docid": "91bf2f458111b34eb752c9e3c88eb10a",
"text": "The scope of this paper is to explore, analyze and develop a universal architecture that supports mobile payments and mobile banking, taking into consideration the third and the emerging fourth generation communication technologies. Interaction and cooperation between payment and banking systems, integration of existing technologies and exploitation of intelligent procedures provide the prospect to develop an open financial services architecture (OFSA), which satisfies requirements of all involved entities. A unified scenario is designed and a prototype is implemented to demonstrate the feasibility of the proposed architecture. 2007 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "c32db73f8d9ef779b91bfffa9caeb946",
"text": "Sir: Negative symptoms like blunted affect, lack of spontaneity , and emotional and social withdrawal are disabling conditions for many schizophrenic patients. Pharmacologic strategies alone are frequently insufficient in the treatment of negative symptoms. New treatment approaches are therefore required. Bright light therapy is the treatment of choice for seasonal depression , but is now also shown to be efficacious in nonseasonal depression. 1 Until now, no studies of bright light therapy in schizophrenic patients have been published. This is the first study to evaluate the safety and tolerability of bright light therapy in patients diagnosed with the residual subtype of schizophrenia. Method. Ten patients (8 men and 2 women) with a diagnosis of schizophrenia (DSM-IV criteria) were included in the study, which was conducted from January 2001 to October 2003. At study entry, the mean age of all patients was 41.8 years. Inclusion criteria were residual subtype of schizophrenia (295.6) and stable antipsychotic medication treatment for at least 4 weeks. Antidepressants were not allowed, and any medication inducing photosensitivity was an exclusion criterion. All patients signed informed consent statements before they were enrolled in the study, and the study was approved by the local human subjects research committee. Bright light therapy with 10,000 lux (Chronolux CL–100; Samarit; Aachen, Germany) was applied 1 hour daily, 5 days a week, for 4 weeks. All patients were evaluated with the Positive and Negative Syndrome Scale (PANSS), 2 a visual analog scale (VAS) for mood and a VAS for drive (ranging from 0 mm [abso-lute best mood or drive] to 100 mm [absolute worst mood or drive]), the Clinical Global Impressions scale (CGI), 3 and the Hamilton Rating Scale for Depression (17 items). 4 Measurements were conducted by blinded raters at the screening visit and at weeks 1, 2, and 4, and follow-up examinations were conducted at weeks 8 and 12. Statistical analyses were conducted using SPSS, Version 12 (SPSS Inc.; Chicago, Ill.). The effect of light therapy on the time course of the outcome variables listed above was tested with the Friedman test, as the assumption of normality was not met. Post hoc comparisons between individual time points were performed using the Wilcoxon test. During the treatment period (weeks 1–4), patients were analyzed by an intent-to-treat method, replacing missing data by the last-observation-carried-forward method. Results. Nine patients concluded 4 weeks of treatment, and 1 patient discontinued after 2 weeks for personal reasons. …",
"title": ""
},
{
"docid": "ed0736d1f8c35ec8b0c2f5bb9adfb7f9",
"text": "Neff's (2003a, 2003b) notion of self-compassion emphasizes kindness towards one's self, a feeling of connectedness with others, and mindful awareness of distressing experiences. Because exposure to trauma and subsequent posttraumatic stress symptoms (PSS) may be associated with self-criticism and avoidance of internal experiences, the authors examined the relationship between self-compassion and PSS. Out of a sample of 210 university students, 100 endorsed experiencing a Criterion A trauma. Avoidance symptoms significantly correlated with self-compassion, but reexperiencing and hyperarousal did not. Individuals high in self-compassion may engage in less avoidance strategies following trauma exposure, allowing for a natural exposure process.",
"title": ""
},
{
"docid": "d39f806d1a8ecb33fab4b5ebb49b0dd1",
"text": "Texture analysis has been a particularly dynamic field with different computer vision and image processing applications. Most of the existing texture analysis techniques yield to significant results in different applications but fail in difficult situations with high sensitivity to noise. Inspired by previous works on texture analysis by structure layer modeling, this paper deals with representing the texture's structure layer using the structure tensor field. Based on texture pattern size approximation, the proposed algorithm investigates the adaptability of the structure tensor to the local geometry of textures by automatically estimating the sub-optimal structure tensor size. An extension of the algorithm targeting non-structured textures is also proposed. Results show that using the proposed tensor size regularization method, relevant local information can be extracted by eliminating the need of repetitive tensor field computation with different tensor size to reach an acceptable performance.",
"title": ""
},
{
"docid": "3a3a2261e1063770a9ccbd0d594aa561",
"text": "This paper describes an advanced care and alert portable telemedical monitor (AMON), a wearable medical monitoring and alert system targeting high-risk cardiac/respiratory patients. The system includes continuous collection and evaluation of multiple vital signs, intelligent multiparameter medical emergency detection, and a cellular connection to a medical center. By integrating the whole system in an unobtrusive, wrist-worn enclosure and applying aggressive low-power design techniques, continuous long-term monitoring can be performed without interfering with the patients' everyday activities and without restricting their mobility. In the first two and a half years of this EU IST sponsored project, the AMON consortium has designed, implemented, and tested the described wrist-worn device, a communication link, and a comprehensive medical center software package. The performance of the system has been validated by a medical study with a set of 33 subjects. The paper describes the main concepts behind the AMON system and presents details of the individual subsystems and solutions as well as the results of the medical validation.",
"title": ""
},
{
"docid": "66b7ed8c1d20bceafb0a1a4194cd91e8",
"text": "In this paper a novel watermarking scheme for image authentication and recovery is presented. The algorithm can detect modified regions in images and is able to recover a good approximation of the original content of the tampered regions. For this purpose, two different watermarks have been used: a semi-fragile watermark for image authentication and a robust watermark for image recovery, both embedded in the Discrete Wavelet Transform domain. The proposed method achieves good image quality with mean Peak Signal-to-Noise Ratio values of the watermarked images of 42 dB and identifies image tampering of up to 20% of the original image.",
"title": ""
},
{
"docid": "d9599c4140819670a661bd4955680bb7",
"text": "The paper assesses the demand for rural electricity services and contrasts it with the technology options available for rural electrification. Decentralised Distributed Generation can be economically viable as reflected by case studies reported in literature and analysed in our field study. Project success is driven by economically viable technology choice; however it is largely contingent on organisational leadership and appropriate institutional structures. While individual leadership can compensate for deployment barriers, we argue that a large scale roll out of rural electrification requires an alignment of economic incentives and institutional structures to implement, operate and maintain the scheme. This is demonstrated with the help of seven case studies of projects across north India. 1 Introduction We explore the contribution that decentralised and renewable energy technologies can make to rural electricity supply in India. We take a case study approach, looking at seven sites across northern India where renewable energy technologies have been established to provide electrification for rural communities. We supplement our case studies with stakeholder interviews and household surveys, estimating levels of demand for electricity services from willingness and ability to pay. We also assess the overall viability of Distributed Decentralised Generation (DDG) projects by investigating the costs of implementation as well as institutional and organisational barriers to their operation and replication. Renewable energy technologies represent some of the most promising options available for distributed and decentralised electrification. Demand for reliable electricity services is significant. It represents a key driver behind economic development and raising basic standards of living. This is especially applicable to rural India home to 70% of the nation's population and over 25% of the world's poor. Access to reliable and affordable electricity can help support income-generating activity and allow utilisation of modern appliances and agricultural equipment whilst replacing inefficient and polluting kerosene lighting. Presently only around 55% of households are electrified (MOSPI 2006) leaving over 20 million households without power. The supply of electricity across India currently lacks both quality and quantity with an extensive shortfall in supply, a poor record for outages, high levels of transmission and distribution (T&D) losses and an overall need for extended and improved infrastructure (GoI 2006). The Indian Government recently outlined an ambitious plan for 100% village level electrification by the end of 2007 and total household electrification by 2012. To achieve this, a major programme of grid extension and strengthening of the rural electricity infrastructure has been initiated under …",
"title": ""
},
{
"docid": "13c2c1a1bd4ff886f93d8f89a14e39e2",
"text": "One of the key elements in qualitative data analysis is the systematic coding of text (Strauss and Corbin 1990:57%60; Miles and Huberman 1994:56). Codes are the building blocks for theory or model building and the foundation on which the analyst’s arguments rest. Implicitly or explicitly, they embody the assumptions underlying the analysis. Given the context of the interdisciplinary nature of research at the Centers for Disease Control and Prevention (CDC), we have sought to develop explicit guidelines for all aspects of qualitative data analysis, including codebook development.",
"title": ""
},
{
"docid": "1607be849e72e9fe2ba172b86cf98bd6",
"text": "Phishing is an internet fraud that acquires a user‘s credentials by deceptions. It includes theft of password, credit card number, bank account details, and other confidential information. It is the criminal scheme to steal the user‘s confidential data. There are many anti-phishing techniques used to protect users against phishing attacks. The statistical of APWG trends report for 1 st quarter 2013 says that now a day the maximum phishing attacks are done using URL Obfuscation phishing technique. Due to the different characteristics and methods used in URL Obfuscation, the detection of Obfuscated URL is complex. The current URL Obfuscation anti-phishing technique cannot detect all the counterfeit URLs. In this paper we have reviewed URL Obfuscation phishing technique and the detection of that Obfuscated URLs. Keywords— Anti-phishing, Hyperlink, Internet Security, Phishing, URL Obfuscation",
"title": ""
}
] | scidocsrr |
b3a429a245088e0a5defbc505c4091b6 | Can Computer Playfulness and Cognitive Absorption Lead to Problematic Technology Usage? | [
{
"docid": "6dc4e4949d4f37f884a23ac397624922",
"text": "Research indicates that maladaptive patterns of Internet use constitute behavioral addiction. This article explores the research on the social effects of Internet addiction. There are four major sections. The Introduction section overviews the field and introduces definitions, terminology, and assessments. The second section reviews research findings and focuses on several key factors related to Internet addiction, including Internet use and time, identifiable problems, gender differences, psychosocial variables, and computer attitudes. The third section considers the addictive potential of the Internet in terms of the Internet, its users, and the interaction of the two. The fourth section addresses current and projected treatments of Internet addiction, suggests future research agendas, and provides implications for educational psychologists.",
"title": ""
},
{
"docid": "2a617a0388cc6653e4d014fc3019e724",
"text": "What kinds of psychological features do people have when they are overly involved in usage of the internet? Internet users in Korea were investigated in terms of internet over-use and related psychological profiles by the level of internet use. We used a modified Young's Internet Addiction Scale, and 13,588 users (7,878 males, 5,710 females), out of 20 million from a major portal site in Korea, participated in this study. Among the sample, 3.5% had been diagnosed as internet addicts (IA), while 18.4% of them were classified as possible internet addicts (PA). The Internet Addiction Scale showed a strong relationship with dysfunctional social behaviors. More IA tried to escape from reality than PA and Non-addicts (NA). When they got stressed out by work or were just depressed, IA showed a high tendency to access the internet. The IA group also reported the highest degree of loneliness, depressed mood, and compulsivity compared to the other groups. The IA group seemed to be more vulnerable to interpersonal dangers than others, showing an unusually close feeling for strangers. Further study is needed to investigate the direct relationship between psychological well-being and internet dependency.",
"title": ""
}
] | [
{
"docid": "9003a12f984d2bf2fd84984a994770f0",
"text": "Sulfated polysaccharides and their lower molecular weight oligosaccharide derivatives from marine macroalgae have been shown to possess a variety of biological activities. The present paper will review the recent progress in research on the structural chemistry and the bioactivities of these marine algal biomaterials. In particular, it will provide an update on the structural chemistry of the major sulfated polysaccharides synthesized by seaweeds including the galactans (e.g., agarans and carrageenans), ulvans, and fucans. It will then review the recent findings on the anticoagulant/antithrombotic, antiviral, immuno-inflammatory, antilipidemic and antioxidant activities of sulfated polysaccharides and their potential for therapeutic application.",
"title": ""
},
{
"docid": "6d6e21d332a022cc747325439b7cac74",
"text": "We present a computational analysis of the language of drug users when talking about their drug experiences. We introduce a new dataset of over 4,000 descriptions of experiences reported by users of four main drug types, and show that we can predict with an F1-score of up to 88% the drug behind a certain experience. We also perform an analysis of the dominant psycholinguistic processes and dominant emotions associated with each drug type, which sheds light on the characteristics of drug users.",
"title": ""
},
{
"docid": "c00c6539b78ed195224063bcff16fb12",
"text": "Information Retrieval (IR) systems assist users in finding information from the myriad of information resources available on the Web. A traditional characteristic of IR systems is that if different users submit the same query, the system would yield the same list of results, regardless of the user. Personalised Information Retrieval (PIR) systems take a step further to better satisfy the user’s specific information needs by providing search results that are not only of relevance to the query but are also of particular relevance to the user who submitted the query. PIR has thereby attracted increasing research and commercial attention as information portals aim at achieving user loyalty by improving their performance in terms of effectiveness and user satisfaction. In order to provide a personalised service, a PIR system maintains information about the users and the history of their interactions with the system. This information is then used to adapt the users’ queries or the results so that information that is more relevant to the users is retrieved and presented. This survey paper features a critical review of PIR systems, with a focus on personalised search. The survey provides an insight into the stages involved in building and evaluating PIR systems, namely: information gathering, information representation, personalisation execution, and system evaluation. Moreover, the survey provides an analysis of PIR systems with respect to the scope of personalisation addressed. The survey proposes a classification of PIR systems into three scopes: individualised systems, community-based systems, and aggregate-level systems. Based on the conducted survey, the paper concludes by highlighting challenges and future research directions in the field of PIR.",
"title": ""
},
{
"docid": "a3e730ef71a91e1303d4cd92407fed26",
"text": "Purpose – This paper investigates the interplay among the configuration dimensions (network structure, network flow, relationship governance, and service architecture) of LastMile Supply Networks (LMSN) and the underlying mechanisms influencing omnichannel performance. Design/methodology/approach – Based on mixed-method design incorporating a multiple embedded case study, mapping, survey and archival records, this research involved undertaking in-depth withinand cross-case analyses to examine seven LMSNs, employing a configuration approach. Findings – The existing literature in the operations management (OM) field was shown to provide limited understanding of LMSNs within the emerging omnichannel context. Case results suggest that particular configurations have intrinsic capabilities, and that these directly influence omnichannel performance. The study further proposes a taxonomy of LMSNs comprising six forms, with two hybrids, supporting the notion of equifinality in configuration theory. Propositions are developed to further explore interdependencies between configurational attributes, refining the relationships between LMSN types and factors influencing LMSN performance. Practical implications – The findings provide retailers a set of design parameters for the (re)configuration of LMSNs and facilitate performance evaluation using the concept of fit between configurational attributes. The developed model sheds light on the consequential effects when certain configurational attributes are altered, providing design indications. Given the global trend in urbanization, improved LMSN performance would have positive societal impacts in terms of service and resource efficiency. Originality/value – This is one of the first studies in the OM field to critically analyze LMSNs and their behaviors in omnichannel. Additionally, the paper offers several important avenues for future research.",
"title": ""
},
{
"docid": "ea3fd6ece19949b09fd2f5f2de57e519",
"text": "Multiple myeloma is the second most common hematologic malignancy. The treatment of this disease has changed considerably over the last two decades with the introduction to the clinical practice of novel agents such as proteasome inhibitors and immunomodulatory drugs. Basic research efforts towards better understanding of normal and missing immune surveillence in myeloma have led to development of new strategies and therapies that require the engagement of the immune system. Many of these treatments are under clinical development and have already started providing encouraging results. We, for the second time in the last two decades, are about to witness another shift of the paradigm in the management of this ailment. This review will summarize the major approaches in myeloma immunotherapies.",
"title": ""
},
{
"docid": "65ddfd636299f556117e53b5deb7c7e5",
"text": "BACKGROUND\nMobile phone use is near ubiquitous in teenagers. Paralleling the rise in mobile phone use is an equally rapid decline in the amount of time teenagers are spending asleep at night. Prior research indicates that there might be a relationship between daytime sleepiness and nocturnal mobile phone use in teenagers in a variety of countries. As such, the aim of this study was to see if there was an association between mobile phone use, especially at night, and sleepiness in a group of U.S. teenagers.\n\n\nMETHODS\nA questionnaire containing an Epworth Sleepiness Scale (ESS) modified for use in teens and questions about qualitative and quantitative use of the mobile phone was completed by students attending Mountain View High School in Mountain View, California (n = 211).\n\n\nRESULTS\nMultivariate regression analysis indicated that ESS score was significantly associated with being female, feeling a need to be accessible by mobile phone all of the time, and a past attempt to reduce mobile phone use. The number of daily texts or phone calls was not directly associated with ESS. Those individuals who felt they needed to be accessible and those who had attempted to reduce mobile phone use were also ones who stayed up later to use the mobile phone and were awakened more often at night by the mobile phone.\n\n\nCONCLUSIONS\nThe relationship between daytime sleepiness and mobile phone use was not directly related to the volume of texting but may be related to the temporal pattern of mobile phone use.",
"title": ""
},
{
"docid": "66df2a7148d67ffd3aac5fc91e09ee5d",
"text": "Tree boosting, which combines weak learners (typically decision trees) to generate a strong learner, is a highly effective and widely used machine learning method. However, the development of a high performance tree boosting model is a time-consuming process that requires numerous trial-and-error experiments. To tackle this issue, we have developed a visual diagnosis tool, BOOSTVis, to help experts quickly analyze and diagnose the training process of tree boosting. In particular, we have designed a temporal confusion matrix visualization, and combined it with a t-SNE projection and a tree visualization. These visualization components work together to provide a comprehensive overview of a tree boosting model, and enable an effective diagnosis of an unsatisfactory training process. Two case studies that were conducted on the Otto Group Product Classification Challenge dataset demonstrate that BOOSTVis can provide informative feedback and guidance to improve understanding and diagnosis of tree boosting algorithms.",
"title": ""
},
{
"docid": "97cb7718c75b266a086441912e4b22c3",
"text": "Introduction Teacher education finds itself in a critical stage. The pressure towards more school-based programs which is visible in many countries is a sign that not only teachers, but also parents and politicians, are often dissatisfied with the traditional approaches in teacher education In some countries a major part of preservice teacher education has now become the responsibility of the schools, creating a situation in which to a large degree teacher education takes the form of 'training on the job'. The argument for this tendency is that traditional teacher education programs are said to fail in preparing prospective teachers for the realities of the classroom (Goodlad, 1990). Many teacher educators object that a professional teacher should acquire more than just practical tools for managing classroom situations and that it is their job to present student teachers with a broader view on education and to offer them a proper grounding in psychology, sociology, etcetera. This is what Clandinin (1995) calls \" the sacred theory-practice story \" : teacher education conceived as the translation of theory on good teaching into practice. However, many studies have shown that the transfer of theory to practice is meager or even non-existent. Zeichner and Tabachnick (1981), for example, showed that many notions and educational conceptions, developed during preservice teacher education, were \"washed out\" during field experiences. Comparable findings were reported by Cole and Knowles (1993) and Veenman (1984), who also points towards the severe problems teachers experience once they have left preservice teacher education. Lortie (1975) presented us with another early study into the socialization process of teachers, showing the dominant role of practice in shaping teacher development. At Konstanz University in Germany, research has been carried out into the phenomenon of the \"transition shock\" (Müller-Fohrbrodt et al. It showed that, during their induction in the profession, teachers encounter a huge gap between theory and practice. As a consequence, they pass through a quite distinct attitude shift during their first year of teaching, in general creating an adjustment to current practices in the schools and not to recent scientific insights into learning and teaching.",
"title": ""
},
{
"docid": "a73917d842c18ed9c36a13fe9187ea4c",
"text": "Brain Magnetic Resonance Image (MRI) plays a non-substitutive role in clinical diagnosis. The symptom of many diseases corresponds to the structural variants of brain. Automatic structure segmentation in brain MRI is of great importance in modern medical research. Some methods were developed for automatic segmenting of brain MRI but failed to achieve desired accuracy. In this paper, we proposed a new patch-based approach for automatic segmentation of brain MRI using convolutional neural network (CNN). Each brain MRI acquired from a small portion of public dataset is firstly divided into patches. All of these patches are then used for training CNN, which is used for automatic segmentation of brain MRI. Experimental results showed that our approach achieved better segmentation accuracy compared with other deep learning methods.",
"title": ""
},
{
"docid": "ec1120018899c6c9fe16240b8e35efac",
"text": "Redundant collagen deposition at sites of healing dermal wounds results in hypertrophic scars. Adipose-derived stem cells (ADSCs) exhibit promise in a variety of anti-fibrosis applications by attenuating collagen deposition. The objective of this study was to explore the influence of an intralesional injection of ADSCs on hypertrophic scar formation by using an established rabbit ear model. Twelve New Zealand albino rabbits were equally divided into three groups, and six identical punch defects were made on each ear. On postoperative day 14 when all wounds were completely re-epithelialized, the first group received an intralesional injection of ADSCs on their right ears and Dulbecco’s modified Eagle’s medium (DMEM) on their left ears as an internal control. Rabbits in the second group were injected with conditioned medium of the ADSCs (ADSCs-CM) on their right ears and DMEM on their left ears as an internal control. Right ears of the third group remained untreated, and left ears received DMEM. We quantified scar hypertrophy by measuring the scar elevation index (SEI) on postoperative days 14, 21, 28, and 35 with ultrasonography. Wounds were harvested 35 days later for histomorphometric and gene expression analysis. Intralesional injections of ADSCs or ADSCs-CM both led to scars with a far more normal appearance and significantly decreased SEI (44.04 % and 32.48 %, respectively, both P <0.01) in the rabbit ears compared with their internal controls. Furthermore, we confirmed that collagen was organized more regularly and that there was a decreased expression of alpha-smooth muscle actin (α-SMA) and collagen type Ι in the ADSC- and ADSCs-CM-injected scars according to histomorphometric and real-time quantitative polymerase chain reaction analysis. There was no difference between DMEM-injected and untreated scars. An intralesional injection of ADSCs reduces the formation of rabbit ear hypertrophic scars by decreasing the α-SMA and collagen type Ι gene expression and ameliorating collagen deposition and this may result in an effective and innovative anti-scarring therapy.",
"title": ""
},
{
"docid": "8a0cc5438a082ed9afd28ad8ed272034",
"text": "Researchers analyzed 23 blockchain implementation projects, each tracked for design decisions and architectural alignment showing benefits, detriments, or no effects from blockchain use. The results provide the basis for a framework that lets engineers, architects, investors, and project leaders evaluate blockchain technology’s suitability for a given application. This analysis also led to an understanding of why some domains are inherently problematic for blockchains. Blockchains can be used to solve some trust-based problems but aren’t always the best or optimal technology. Some problems that can be solved using them can also be solved using simpler methods that don’t necessitate as big an investment.",
"title": ""
},
{
"docid": "eea86b8c7d332edb903c213c5df89a53",
"text": "We introduce the syntactic scaffold, an approach to incorporating syntactic information into semantic tasks. Syntactic scaffolds avoid expensive syntactic processing at runtime, only making use of a treebank during training, through a multitask objective. We improve over strong baselines on PropBank semantics, frame semantics, and coreference resolution, achieving competitive performance on all three tasks.",
"title": ""
},
{
"docid": "0a1f6c27cd13735858e7a6686fc5c2c9",
"text": "We address the problem of learning hierarchical deep neural network policies for reinforcement learning. In contrast to methods that explicitly restrict or cripple lower layers of a hierarchy to force them to use higher-level modulating signals, each layer in our framework is trained to directly solve the task, but acquires a range of diverse strategies via a maximum entropy reinforcement learning objective. Each layer is also augmented with latent random variables, which are sampled from a prior distribution during the training of that layer. The maximum entropy objective causes these latent variables to be incorporated into the layer’s policy, and the higher level layer can directly control the behavior of the lower layer through this latent space. Furthermore, by constraining the mapping from latent variables to actions to be invertible, higher layers retain full expressivity: neither the higher layers nor the lower layers are constrained in their behavior. Our experimental evaluation demonstrates that we can improve on the performance of single-layer policies on standard benchmark tasks simply by adding additional layers, and that our method can solve more complex sparse-reward tasks by learning higher-level policies on top of high-entropy skills optimized for simple low-level objectives.",
"title": ""
},
{
"docid": "fd4cd4edfd9fa8fe463643f02b90b21a",
"text": "We propose a generic method for iteratively approximating various second-order gradient steps-Newton, Gauss-Newton, Levenberg-Marquardt, and natural gradient-in linear time per iteration, using special curvature matrix-vector products that can be computed in O(n). Two recent acceleration techniques for on-line learning, matrix momentum and stochastic meta-descent (SMD), implement this approach. Since both were originally derived by very different routes, this offers fresh insight into their operation, resulting in further improvements to SMD.",
"title": ""
},
{
"docid": "5a011a87ce3f37dc6b944d2686fa2f73",
"text": "Agents are self-contained objects within a software model that are capable of autonomously interacting with the environment and with other agents. Basing a model around agents (building an agent-based model, or ABM) allows the user to build complex models from the bottom up by specifying agent behaviors and the environment within which they operate. This is often a more natural perspective than the system-level perspective required of other modeling paradigms, and it allows greater flexibility to use agents in novel applications. This flexibility makes them ideal as virtual laboratories and testbeds, particularly in the social sciences where direct experimentation may be infeasible or unethical. ABMs have been applied successfully in a broad variety of areas, including heuristic search methods, social science models, combat modeling, and supply chains. This tutorial provides an introduction to tools and resources for prospective modelers, and illustrates ABM flexibility with a basic war-gaming example.",
"title": ""
},
{
"docid": "39838881287fd15b29c20f18b7e1d1eb",
"text": "In the software industry, a challenge firms often face is how to effectively commercialize innovations. An emerging business model increasingly embraced by entrepreneurs, called freemium, combines “free” and “premium” consumption in association with a product or service. In a nutshell, this model involves giving away for free a certain level or type of consumption while making money on premium consumption. We develop a unifying multi-period microeconomic framework with network externalities embedded into consumer learning in order to capture the essence of conventional for-fee models, several key freemium business models such as feature-limited or time-limited, and uniform market seeding models. Under moderate informativeness of word-of-mouth signals, we fully characterize conditions under which firms prefer freemium models, depending on consumer priors on the value of individual software modules, perceptions of crossmodule synergies, and overall value distribution across modules. Within our framework, we show that uniform seeding is always dominated by either freemium models or conventional for-fee models. We further discuss managerial and policy implications based on our analysis. Interestingly, we show that freemium, in one form or another, is always preferred from the social welfare perspective, and we provide guidance on when the firms need to be incentivized to align their interests with the society’s. Finally, we discuss how relaxing some of the assumptions of our model regarding costs or informativeness and heterogeneity of word of mouth may reduce the profit gap between seeding and the other models, and potentially lead to seeding becoming the preferred approach for the firm.",
"title": ""
},
{
"docid": "81f82ecbc43653566319c7e04f098aeb",
"text": "Social microblogs such as Twitter and Weibo are experiencing an explosive growth with billions of global users sharing their daily observations and thoughts. Beyond public interests (e.g., sports, music), microblogs can provide highly detailed information for those interested in public health, homeland security, and financial analysis. However, the language used in Twitter is heavily informal, ungrammatical, and dynamic. Existing data mining algorithms require extensive manually labeling to build and maintain a supervised system. This paper presents STED, a semi-supervised system that helps users to automatically detect and interactively visualize events of a targeted type from twitter, such as crimes, civil unrests, and disease outbreaks. Our model first applies transfer learning and label propagation to automatically generate labeled data, then learns a customized text classifier based on mini-clustering, and finally applies fast spatial scan statistics to estimate the locations of events. We demonstrate STED’s usage and benefits using twitter data collected from Latin America countries, and show how our system helps to detect and track example events such as civil unrests and crimes.",
"title": ""
},
{
"docid": "fcd0c523e74717c572c288a90c588259",
"text": "From analyzing 100 assessments of coping, the authors critiqued strategies and identified best practices for constructing category systems. From current systems, a list of 400 ways of coping was compiled. For constructing lower order categories, the authors concluded that confirmatory factor analysis should replace the 2 most common strategies (exploratory factor analysis and rational sorting). For higher order categories, they recommend that the 3 most common distinctions (problem- vs. emotion-focused, approach vs. avoidance, and cognitive vs. behavioral) no longer be used. Instead, the authors recommend hierarchical systems of action types (e.g., proximity seeking, accommodation). From analysis of 6 such systems, 13 potential core families of coping were identified. Future steps involve deciding how to organize these families, using their functional homogeneity and distinctiveness, and especially their links to adaptive processes.",
"title": ""
},
{
"docid": "387e9609e2fe3c6893b8ce0a1613f98a",
"text": "Many fault-tolerant and intrusion-tolerant systems require the ability to execute unsafe programs in a realistic environment without leaving permanent damages. Virtual machine technology meets this requirement perfectly because it provides an execution environment that is both realistic and isolated. In this paper, we introduce an OS level virtual machine architecture for Windows applications called Feather-weight Virtual Machine (FVM), under which virtual machines share as many resources of the host machine as possible while still isolated from one another and from the host machine. The key technique behind FVM is namespace virtualization, which isolates virtual machines by renaming resources at the OS system call interface. Through a copy-on-write scheme, FVM allows multiple virtual machines to physically share resources but logically isolate their resources from each other. A main technical challenge in FVM is how to achieve strong isolation among different virtual machines and the host machine, due to numerous namespaces and interprocess communication mechanisms on Windows. Experimental results demonstrate that FVM is more flexible and scalable, requires less system resource, incurs lower start-up and run-time performance overhead than existing hardware-level virtual machine technologies, and thus makes a compelling building block for security and fault-tolerant applications.",
"title": ""
}
] | scidocsrr |
aca37317ed979441b3d09fccf5d0561d | Neural Sketch Learning for Conditional Program Generation | [
{
"docid": "75a1c22e950ccb135c054353acb8571a",
"text": "We study the problem of building generative models of natural source code (NSC); that is, source code written and understood by humans. Our primary contribution is to describe a family of generative models for NSC that have three key properties: First, they incorporate both sequential and hierarchical structure. Second, we learn a distributed representation of source code elements. Finally, they integrate closely with a compiler, which allows leveraging compiler logic and abstractions when building structure into the model. We also develop an extension that includes more complex structure, refining how the model generates identifier tokens based on what variables are currently in scope. Our models can be learned efficiently, and we show empirically that including appropriate structure greatly improves the models, measured by the probability of generating test programs.",
"title": ""
},
{
"docid": "5bf4a17592eca1881a93cd4930f4187d",
"text": "The problem of automatically generating a computer program from some specification has been studied since the early days of AI. Recently, two competing approaches for automatic program learning have received significant attention: (1) neural program synthesis, where a neural network is conditioned on input/output (I/O) examples and learns to generate a program, and (2) neural program induction, where a neural network generates new outputs directly using a latent program representation. Here, for the first time, we directly compare both approaches on a large-scale, real-world learning task and we additionally contrast to rule-based program synthesis, which uses hand-crafted semantics to guide the program generation. Our neural models use a modified attention RNN to allow encoding of variable-sized sets of I/O pairs, which achieve 92% accuracy on a real-world test set, compared to the 34% accuracy of the previous best neural synthesis approach. The synthesis model also outperforms a comparable induction model on this task, but we more importantly demonstrate that the strength of each approach is highly dependent on the evaluation metric and end-user application. Finally, we show that we can train our neural models to remain very robust to the type of noise expected in real-world data (e.g., typos), while a highly-engineered rule-based system fails entirely.",
"title": ""
}
] | [
{
"docid": "2ebb21cb1c6982d2d3839e2616cac839",
"text": "In order to reduce micromouse dashing time in complex maze, and improve micromouse’s stability in high speed dashing, diagonal dashing method was proposed. Considering the actual dashing trajectory of micromouse in diagonal path, the path was decomposed into three different trajectories; Fully consider turning in and turning out of micromouse dashing action in diagonal, leading and passing of the every turning was used to realize micromouse posture adjustment, with the help of accelerometer sensor ADXL202, rotation angle error compensation was done and the micromouse realized its precise position correction; For the diagonal dashing, front sensor S1,S6 and accelerometer sensor ADXL202 were used to ensure micromouse dashing posture. Principle of new diagonal dashing method is verified by micromouse based on STM32F103. Experiments of micromouse dashing show that diagonal dashing method can greatly improve its stability, and also can reduce its dashing time in complex maze.",
"title": ""
},
{
"docid": "ad88d2e2213624270328be0aa019b5cd",
"text": "The traditional decision-making framework for newsvendor models is to assume a distribution of the underlying demand. However, the resulting optimal policy is typically sensitive to the choice of the distribution. A more conservative approach is to assume that the distribution belongs to a set parameterized by a few known moments. An ambiguity-averse newsvendor would choose to maximize the worst-case profit. Most models of this type assume that only the mean and the variance are known, but do not attempt to include asymmetry properties of the distribution. Other recent models address asymmetry by including skewness and kurtosis. However, closed-form expressions on the optimal bounds are difficult to find for such models. In this paper, we propose a framework under which the expectation of a piecewise linear objective function is optimized over a set of distributions with known asymmetry properties. This asymmetry is represented by the first two moments of multiple random variables that result from partitioning the original distribution. In the simplest case, this reduces to semivariance. The optimal bounds can be solved through a second-order cone programming (SOCP) problem. This framework can be applied to the risk-averse and risk-neutral newsvendor problems and option pricing. We provide a closed-form expression for the worst-case newsvendor profit with only mean, variance and semivariance information.",
"title": ""
},
{
"docid": "5a4a6328fc88fbe32a81c904135b05c9",
"text": "Semi-supervised learning plays a significant role in multi-class classification, where a small number of labeled data are more deterministic while substantial unlabeled data might cause large uncertainties and potential threats. In this paper, we distinguish the label fitting of labeled and unlabeled training data through a probabilistic vector with an adaptive parameter, which always ensures the significant importance of labeled data and characterizes the contribution of unlabeled instance according to its uncertainty. Instead of using traditional least squares regression (LSR) for classification, we develop a new discriminative LSR by equipping each label with an adjustment vector. This strategy avoids incorrect penalization on samples that are far away from the boundary and simultaneously facilitates multi-class classification by enlarging the geometrical distance of instances belonging to different classes. An efficient alternative algorithm is exploited to solve the proposed model with closed form solution for each updating rule. We also analyze the convergence and complexity of the proposed algorithm theoretically. Experimental results on several benchmark datasets demonstrate the effectiveness and superiority of the proposed model for multi-class classification tasks.",
"title": ""
},
{
"docid": "48036770f56e84df8b05c198e8a89018",
"text": "Advances in low power VLSI design, along with the potentially low duty cycle of wireless sensor nodes open up the possibility of powering small wireless computing devices from scavenged ambient power. A broad review of potential power scavenging technologies and conventional energy sources is first presented. Low-level vibrations occurring in common household and office environments as a potential power source are studied in depth. The goal of this paper is not to suggest that the conversion of vibrations is the best or most versatile method to scavenge ambient power, but to study its potential as a viable power source for applications where vibrations are present. Different conversion mechanisms are investigated and evaluated leading to specific optimized designs for both capacitive MicroElectroMechancial Systems (MEMS) and piezoelectric converters. Simulations show that the potential power density from piezoelectric conversion is significantly higher. Experiments using an off-the-shelf PZT piezoelectric bimorph verify the accuracy of the models for piezoelectric converters. A power density of 70 mW/cm has been demonstrated with the PZT bimorph. Simulations show that an optimized design would be capable of 250 mW/cm from a vibration source with an acceleration amplitude of 2.5 m/s at 120 Hz. q 2002 Elsevier Science B.V.. All rights reserved.",
"title": ""
},
{
"docid": "4e97003a5609901f1f18be1ccbf9db46",
"text": "Fog computing is strongly emerging as a relevant and interest-attracting paradigm+technology for both the academic and industrial communities. However, architecture and methodological approaches are still prevalent in the literature, while few research activities have specifically targeted so far the issues of practical feasibility, cost-effectiveness, and efficiency of fog solutions over easily-deployable environments. In this perspective, this paper originally presents i) our fog-oriented framework for Internet-of-Things applications based on innovative scalability extensions of the open-source Kura gateway and ii) its Docker-based containerization over challenging and resource-limited fog nodes, i.e., RaspberryPi devices. Our practical experience and experimental work show the feasibility of using even extremely constrained nodes as fog gateways; the reported results demonstrate that good scalability and limited overhead can be coupled, via proper configuration tuning and implementation optimizations, with the significant advantages of containerization in terms of flexibility and easy deployment, also when working on top of existing, off-the-shelf, and limited-cost gateway nodes.",
"title": ""
},
{
"docid": "b91b887b3ec5d5b3100d711e1550f64b",
"text": "In this paper we describe the implementation of a complete ANN training procedure for speech recognition using the block mode back-propagation learning algorithm. We exploit the high performance SIMD architecture of GPU using CUDA and its C-like language interface. We also compare the speed-up obtained implementing the training procedure only taking advantage of the multi-thread capabilities of multi-core processors. Our approach has been tested by training acoustic models for large vocabulary speech recognition tasks, showing a 6 times reduction of the time required to train real-world large size networks with respect to an already optimized implementation using the Intel MKL libraries.",
"title": ""
},
{
"docid": "482063f167e0c2e677c4ca8fbd8228c0",
"text": "In this paper we present a novel method for real-time high quality previsualization and cinematic relighting. The physically based Path Tracing algorithm is used within an Augmented Reality setup to preview high-quality light transport. A novel differential version of progressive path tracing is proposed, which calculates two global light transport solutions that are required for differential rendering. A real-time previsualization framework is presented, which renders the solution with a low number of samples during interaction and allows for progressive quality improvement. If a user requests the high-quality solution of a certain view, the tracking is stopped and the algorithm progressively converges to an accurate solution. The problem of rendering complex light paths is solved by using photon mapping. Specular global illumination effects like caustics can easily be rendered. Our framework utilizes the massive parallel power of modern GPUs to achieve fast rendering with complex global illumination, a depth of field effect, and antialiasing.",
"title": ""
},
{
"docid": "d45c7f39c315bf5e8eab3052e75354bb",
"text": "Predicting the future in real-world settings, particularly from raw sensory observations such as images, is exceptionally challenging. Real-world events can be stochastic and unpredictable, and the high dimensionality and complexity of natural images require the predictive model to build an intricate understanding of the natural world. Many existing methods tackle this problem by making simplifying assumptions about the environment. One common assumption is that the outcome is deterministic and there is only one plausible future. This can lead to low-quality predictions in real-world settings with stochastic dynamics. In this paper, we develop a stochastic variational video prediction (SV2P) method that predicts a different possible future for each sample of its latent variables. To the best of our knowledge, our model is the first to provide effective stochastic multi-frame prediction for real-world videos. We demonstrate the capability of the proposed method in predicting detailed future frames of videos on multiple real-world datasets, both action-free and action-conditioned. We find that our proposed method produces substantially improved video predictions when compared to the same model without stochasticity, and to other stochastic video prediction methods. Our SV2P implementation will be open sourced upon publication.",
"title": ""
},
{
"docid": "672fa729e41d20bdd396f9de4ead36b3",
"text": "Data that encompasses relationships is represented by a graph of interconnected nodes. Social network analysis is the study of such graphs which examines questions related to structures and patterns that can lead to the understanding of the data and predicting the trends of social networks. Static analysis, where the time of interaction is not considered (i.e., the network is frozen in time), misses the opportunity to capture the evolutionary patterns in dynamic networks. Specifically, detecting the community evolutions, the community structures that changes in time, provides insight into the underlying behaviour of the network. Recently, a number of researchers have started focusing on identifying critical events that characterize the evolution of communities in dynamic scenarios. In this paper, we present a framework for modeling and detecting community evolution in social networks, where a series of significant events is defined for each community. A community matching algorithm is also proposed to efficiently identify and track similar communities over time. We also define the concept of meta community which is a series of similar communities captured in different timeframes and detected by our matching algorithm. We illustrate the capabilities and potential of our framework by applying it to two real datasets. Furthermore, the events detected by the framework is supplemented by extraction and investigation of the topics discovered for each community. c © 2011 Published by Elsevier Ltd.",
"title": ""
},
{
"docid": "44fe5a6d0d9c7b12fd88961d82778868",
"text": "Traumatic brain injury (TBI) remains a major cause of death and disability worldwide. Increasing evidence indicates that TBI is an important risk factor for neurodegenerative diseases including Alzheimer's disease, Parkinson's disease, and chronic traumatic encephalopathy. Despite improved supportive and rehabilitative care of TBI patients, unfortunately, all late phase clinical trials in TBI have yet to yield a safe and effective neuroprotective treatment. The disappointing clinical trials may be attributed to variability in treatment approaches and heterogeneity of the population of TBI patients as well as a race against time to prevent or reduce inexorable cell death. TBI is not just an acute event but a chronic disease. Among many mechanisms involved in secondary injury after TBI, emerging preclinical studies indicate that posttraumatic prolonged and progressive neuroinflammation is associated with neurodegeneration which may be treatable long after the initiating brain injury. This review provides an overview of recent understanding of neuroinflammation in TBI and preclinical cell-based therapies that target neuroinflammation and promote functional recovery after TBI.",
"title": ""
},
{
"docid": "0daa6d62dedf410bf782af662639507e",
"text": "The paper presents a novel, ultra-compact, reduced-area implementation of a D-type flip-flop using the GaAs Enhancement-Depletion (ED) PHEMT process of the OMMIC with the gate metal layout modified, at the device process level. The D cell has been developed as the building block of a serial to parallel 13-bit shifter embedded within an integrated core-chip for satellite X band SAR applications, but can be exploited for a wide set of logical GaAs-based applications. The novel D cell design, based on the Enhancement-Depletion Super-Buffer (EDSB) logical family, allows for an area reduction of about 20%, with respect to the conventional design, and simplified interconnections. Design rules have been developed to optimize the cell performances. Measured and simulated NOR transfer characteristics show good agreement. A dedicated layout for RF probing has been developed to test the D-type flip-flop behaviour and performances.",
"title": ""
},
{
"docid": "6f518559d8c99ea1e6368ec8c108cabe",
"text": "This paper introduces an integrated Local Interconnect Network (LIN) transceiver which sets a new performance benchmark in terms of electromagnetic compatibility (EMC). The proposed topology succeeds in an extraordinary high robustness against RF disturbances which are injected into the BUS and in very low electromagnetic emissions (EMEs) radiated by the LIN network without adding any external components for filtering. In order to evaluate the circuits superior EMC performance, it was designed using a HV-BiCMOS technology for automotive applications, the EMC behavior was measured and the results were compared with a state of the art topology.",
"title": ""
},
{
"docid": "8672ab8f10baf109492127ee599effdd",
"text": "In the embryonic and adult brain, neural stem cells proliferate and give rise to neurons and glia through highly regulated processes. Epigenetic mechanisms — including DNA and histone modifications, as well as regulation by non-coding RNAs — have pivotal roles in different stages of neurogenesis. Aberrant epigenetic regulation also contributes to the pathogenesis of various brain disorders. Here, we review recent advances in our understanding of epigenetic regulation in neurogenesis and its dysregulation in brain disorders, including discussion of newly identified DNA cytosine modifications. We also briefly cover the emerging field of epitranscriptomics, which involves modifications of mRNAs and long non-coding RNAs.",
"title": ""
},
{
"docid": "59344cfe759a89a68e7bc4b0a5c971b1",
"text": "A non-linear support vector machine (NLSVM) seizure classification SoC with 8-channel EEG data acquisition and storage for epileptic patients is presented. The proposed SoC is the first work in literature that integrates a feature extraction (FE) engine, patient specific hardware-efficient NLSVM classification engine, 96 KB SRAM for EEG data storage and low-noise, high dynamic range readout circuits. To achieve on-chip integration of the NLSVM classification engine with minimum area and energy consumption, the FE engine utilizes time division multiplexing (TDM)-BPF architecture. The implemented log-linear Gaussian basis function (LL-GBF) NLSVM classifier exploits the linearization to achieve energy consumption of 0.39 μ J/operation and reduces the area by 28.2% compared to conventional GBF implementation. The readout circuits incorporate a chopper-stabilized DC servo loop to minimize the noise level elevation and achieve noise RTI of 0.81 μ Vrms for 0.5-100 Hz bandwidth with an NEF of 4.0. The 5 × 5 mm (2) SoC is implemented in a 0.18 μm 1P6M CMOS process consuming 1.83 μ J/classification for 8-channel operation. SoC verification has been done with the Children's Hospital Boston-MIT EEG database, as well as with a specific rapid eye-blink pattern detection test, which results in an average detection rate, average false alarm rate and latency of 95.1%, 0.94% (0.27 false alarms/hour) and 2 s, respectively.",
"title": ""
},
{
"docid": "9d9086fbdfa46ded883b14152df7f5a5",
"text": "This paper presents a low power continuous time 2nd order Low Pass Butterworth filter operating at power supply of 0.5V suitably designed for biomedical applications. A 3-dB bandwidth of 100 Hz using technology node of 0.18μm is achieved. The operational transconductance amplifier is a significant building block in continuous time filter design. To achieve necessary voltage headroom a pseudo-differential architecture is used to design bulk driven transconductor. In contrast, to the gate-driven OTA bulk-driven have the ability to operate over a wide input range. The output common mode voltage of the transconductor is set by a Common Mode Feedback (CMFB) circuit. The simulation results show that the filter has a peak-to-peak signal swing of 150mV (differential) for 1% THD, a dynamic range of 74.62 dB and consumes a total power of 0.225μW when operating at a supply voltage of 0.5V. The Figure of Merit (FOM) achieved by the filter is 0.055 fJ, lowest among similar low-voltage filters found in the literature.",
"title": ""
},
{
"docid": "6cd5b8ef199d926bccc583b7e058d9ee",
"text": "Over the last three decades, a large number of evolutionary algorithms have been developed for solving multi-objective optimization problems. However, there lacks an upto-date and comprehensive software platform for researchers to properly benchmark existing algorithms and for practitioners to apply selected algorithms to solve their real-world problems. The demand of such a common tool becomes even more urgent, when the source code of many proposed algorithms has not been made publicly available. To address these issues, we have developed a MATLAB platform for evolutionary multi-objective optimization in this paper, called PlatEMO, which includes more than 50 multiobjective evolutionary algorithms and more than 100 multi-objective test problems, along with several widely used performance indicators. With a user-friendly graphical user interface, PlatEMO enables users to easily compare several evolutionary algorithms at one time and collect statistical results in Excel or LaTeX files. More importantly, PlatEMO is completely open source, such that users are able to develop new algorithms on the basis of it. This paper introduces the main features of PlatEMO and illustrates how to use it for performing comparative experiments, embedding new algorithms, creating new test problems, and developing performance indicators. Source code of PlatEMO is now available at: http://bimk.ahu.edu.cn/index.php?s=/Index/Software/index.html.",
"title": ""
},
{
"docid": "27b2148c05febeb1051c1d1229a397d6",
"text": "Modern database management systems essentially solve the problem of accessing and managing large volumes of related data on a single platform, or on a cluster of tightly-coupled platforms. But many problems remain when two or more databases need to work together. A fundamental problem is raised by semantic heterogeneity the fact that data duplicated across multiple databases is represented differently in the underlying database schemas. This tutorial describes fundamental problems raised by semantic heterogeneity and surveys theoretical frameworks that can provide solutions for them. The tutorial considers the following topics: (1) representative architectures for supporting database interoperation; (2) notions for comparing the “information capacity” of database schemas; (3) providing support for read-only integrated views of data, including the .virtual and materialized approaches; (4) providing support for read-write integrated views of data, including the issue of workflows on heterogeneous databases; and (5) research and tools for accessing and effectively using meta-data, e.g., to identify the relationships between schemas of different databases.",
"title": ""
},
{
"docid": "3cc07ea28720245f9c4983b0a4b1a66d",
"text": "A first line of attack in exploratory data analysis is data visualization, i.e., generating a 2-dimensional representation of data that makes clusters of similar points visually identifiable. Standard JohnsonLindenstrauss dimensionality reduction does not produce data visualizations. The t-SNE heuristic of van der Maaten and Hinton, which is based on non-convex optimization, has become the de facto standard for visualization in a wide range of applications. This work gives a formal framework for the problem of data visualization – finding a 2-dimensional embedding of clusterable data that correctly separates individual clusters to make them visually identifiable. We then give a rigorous analysis of the performance of t-SNE under a natural, deterministic condition on the “ground-truth” clusters (similar to conditions assumed in earlier analyses of clustering) in the underlying data. These are the first provable guarantees on t-SNE for constructing good data visualizations. We show that our deterministic condition is satisfied by considerably general probabilistic generative models for clusterable data such as mixtures of well-separated log-concave distributions. Finally, we give theoretical evidence that t-SNE provably succeeds in partially recovering cluster structure even when the above deterministic condition is not met.",
"title": ""
},
{
"docid": "6200d3c4435ae34e912fc8d2f92e904b",
"text": "The problem of cross-modal retrieval, e.g., using a text query to search for images and vice-versa, is considered in this paper. A novel model involving correspondence autoencoder (Corr-AE) is proposed here for solving this problem. The model is constructed by correlating hidden representations of two uni-modal autoencoders. A novel optimal objective, which minimizes a linear combination of representation learning errors for each modality and correlation learning error between hidden representations of two modalities, is used to train the model as a whole. Minimization of correlation learning error forces the model to learn hidden representations with only common information in different modalities, while minimization of representation learning error makes hidden representations are good enough to reconstruct input of each modality. A parameter $\\alpha$ is used to balance the representation learning error and the correlation learning error. Based on two different multi-modal autoencoders, Corr-AE is extended to other two correspondence models, here we called Corr-Cross-AE and Corr-Full-AE. The proposed models are evaluated on three publicly available data sets from real scenes. We demonstrate that the three correspondence autoencoders perform significantly better than three canonical correlation analysis based models and two popular multi-modal deep models on cross-modal retrieval tasks.",
"title": ""
}
] | scidocsrr |
faa5e078449e45aa488e8c0194a567af | Alcohol addiction and the attachment system: an empirical study of attachment style, alexithymia, and psychiatric disorders in alcoholic inpatients. | [
{
"docid": "e89cf17cf4d336468f75173767af63a5",
"text": "This article explores the possibility that romantic love is an attachment process--a biosocial process by which affectional bonds are formed between adult lovers, just as affectional bonds are formed earlier in life between human infants and their parents. Key components of attachment theory, developed by Bowlby, Ainsworth, and others to explain the development of affectional bonds in infancy, were translated into terms appropriate to adult romantic love. The translation centered on the three major styles of attachment in infancy--secure, avoidant, and anxious/ambivalent--and on the notion that continuity of relationship style is due in part to mental models (Bowlby's \"inner working models\") of self and social life. These models, and hence a person's attachment style, are seen as determined in part by childhood relationships with parents. Two questionnaire studies indicated that relative prevalence of the three attachment styles is roughly the same in adulthood as in infancy, the three kinds of adults differ predictably in the way they experience romantic love, and attachment style is related in theoretically meaningful ways to mental models of self and social relationships and to relationship experiences with parents. Implications for theories of romantic love are discussed, as are measurement problems and other issues related to future tests of the attachment perspective.",
"title": ""
}
] | [
{
"docid": "cd0c68845416f111307ae7e14bfb7491",
"text": "Traditionally, static units of analysis such as administrative units are used when studying obesity. However, using these fixed contextual units ignores environmental influences experienced by individuals in areas beyond their residential neighborhood and may render the results unreliable. This problem has been articulated as the uncertain geographic context problem (UGCoP). This study investigates the UGCoP through exploring the relationships between the built environment and obesity based on individuals' activity space. First, a survey was conducted to collect individuals' daily activity and weight information in Guangzhou in January 2016. Then, the data were used to calculate and compare the values of several built environment variables based on seven activity space delineations, including home buffers, workplace buffers (WPB), fitness place buffers (FPB), the standard deviational ellipse at two standard deviations (SDE2), the weighted standard deviational ellipse at two standard deviations (WSDE2), the minimum convex polygon (MCP), and road network buffers (RNB). Lastly, we conducted comparative analysis and regression analysis based on different activity space measures. The results indicate that significant differences exist between variables obtained with different activity space delineations. Further, regression analyses show that the activity space delineations used in the analysis have a significant influence on the results concerning the relationships between the built environment and obesity. The study sheds light on the UGCoP in analyzing the relationships between obesity and the built environment.",
"title": ""
},
{
"docid": "b43178b53f927eb90473e2850f948cb6",
"text": "We study the problem of learning a navigation policy for a robot to actively search for an object of interest in an indoor environment solely from its visual inputs. While scene-driven visual navigation has been widely studied, prior efforts on learning navigation policies for robots to find objects are limited. The problem is often more challenging than target scene finding as the target objects can be very small in the view and can be in an arbitrary pose. We approach the problem from an active perceiver perspective, and propose a novel framework that integrates a deep neural network based object recognition module and a deep reinforcement learning based action prediction mechanism. To validate our method, we conduct experiments on both a simulation dataset (AI2-THOR)and a real-world environment with a physical robot. We further propose a new decaying reward function to learn the control policy specific to the object searching task. Experimental results validate the efficacy of our method, which outperforms competing methods in both average trajectory length and success rate.",
"title": ""
},
{
"docid": "2f1ad82127aa6fb65b712d395c31f690",
"text": "This paper presents a 100-300-GHz quasi-optical network analyzer using compact transmitter and receiver modules. The transmitter includes a wideband double bow-tie slot antenna and employs a Schottky diode as a frequency harmonic multiplier. The receiver includes a similar antenna, a Schottky diode used as a subharmonic mixer, and an LO/IF diplexer. The 100-300-GHz RF signals are the 5th-11th harmonics generated by the frequency multiplier when an 18-27-GHz LO signal is applied. The measured transmitter conversion gain with Pin = 18$ dBm is from -35 to -59 dB for the 5th-11th harmonic, respectively, and results in a transmitter EIRP from +3 to -20 dBm up to 300 GHz. The measured mixer conversion gain is from -30 to -47 dB at the 5th-11th harmonic, respectively. The system has a dynamic range > 60 dB at 200 GHz in a 100-Hz bandwidth for a transmit and receive system based on 12-mm lenses and spaced 60 cm from each other. Frequency-selective surfaces at 150 and 200 GHz are tested by the proposed design and their measured results agree with simulations. Application areas are low-cost scalar network analyzers for wideband quasi-optical 100 GHz-1 THz measurements.",
"title": ""
},
{
"docid": "e58e294dbacf605e40ff2f59cc4f8a6a",
"text": "There are fundamental similarities between sleep in mammals and quiescence in the arthropod Drosophila melanogaster, suggesting that sleep-like states are evolutionarily ancient. The nematode Caenorhabditis elegans also has a quiescent behavioural state during a period called lethargus, which occurs before each of the four moults. Like sleep, lethargus maintains a constant temporal relationship with the expression of the C. elegans Period homologue LIN-42 (ref. 5). Here we show that quiescence associated with lethargus has the additional sleep-like properties of reversibility, reduced responsiveness and homeostasis. We identify the cGMP-dependent protein kinase (PKG) gene egl-4 as a regulator of sleep-like behaviour, and show that egl-4 functions in sensory neurons to promote the C. elegans sleep-like state. Conserved effects on sleep-like behaviour of homologous genes in C. elegans and Drosophila suggest a common genetic regulation of sleep-like states in arthropods and nematodes. Our results indicate that C. elegans is a suitable model system for the study of sleep regulation. The association of this C. elegans sleep-like state with developmental changes that occur with larval moults suggests that sleep may have evolved to allow for developmental changes.",
"title": ""
},
{
"docid": "3f6f191d3d60cd68238545f4b809d4b4",
"text": "This paper examines the dependence of the healthcare waste (HCW) generation rate on several social-economic and environmental parameters. Correlations were calculated between the quantities of healthcare waste generated (expressed in kg/bed/day) versus economic indices (GDP, healthcare expenditure per capita), social indices (HDI, IHDI, MPI, life expectancy, mean years of schooling, HIV prevalence, deaths due to tuberculosis and malaria, and under five mortality rate), and an environmental sustainability index (total CO2 emissions) from 42 countries worldwide. The statistical analysis included the examination of the normality of the data and the formation of linear multiple regression models to further investigate the correlation between those indices and HCW generation rates. Pearson and Spearman correlation coefficients were also calculated for all pairwise comparisons. Results showed that the life expectancy, the HDI, the mean years of schooling and the CO2 emissions positively affect the HCW generation rates and can be used as statistical predictors of those rates. The resulting best reduced regression model included the life expectancy and the CO2 emissions and explained 85% of the variability of the response.",
"title": ""
},
{
"docid": "7f1ad50ce66c855776aaacd0d53279aa",
"text": "A method to synchronize and control a system of parallel single-phase inverters without communication is presented. Inspired by the phenomenon of synchronization in networks of coupled oscillators, we propose that each inverter be controlled to emulate the dynamics of a nonlinear dead-zone oscillator. As a consequence of the electrical coupling between inverters, they synchronize and share the load in proportion to their ratings. We outline a sufficient condition for global asymptotic synchronization and formulate a methodology for controller design such that the inverter terminal voltages oscillate at the desired frequency, and the load voltage is maintained within prescribed bounds. We also introduce a technique to facilitate the seamless addition of inverters controlled with the proposed approach into an energized system. Experimental results for a system of three inverters demonstrate power sharing in proportion to power ratings for both linear and nonlinear loads.",
"title": ""
},
{
"docid": "097da6ee2d13e0b4b2f84a26752574f4",
"text": "Objective A sound theoretical foundation to guide practice is enhanced by the ability of nurses to critique research. This article provides a structured route to questioning the methodology of nursing research. Primary Argument Nurses may find critiquing a research paper a particularly daunting experience when faced with their first paper. Knowing what questions the nurse should be asking is perhaps difficult to determine when there may be unfamiliar research terms to grasp. Nurses may benefit from a structured approach which helps them understand the sequence of the text and the subsequent value of a research paper. Conclusion A framework is provided within this article to assist in the analysis of a research paper in a systematic, logical order. The questions presented in the framework may lead the nurse to conclusions about the strengths and weaknesses of the research methods presented in a research article. The framework does not intend to separate quantitative or qualitative paradigms but to assist the nurse in making broad observations about the nature of the research.",
"title": ""
},
{
"docid": "159cd44503cb9def6276cb2b9d33c40e",
"text": "In the airline industry, data analysis and data mining are a prerequisite to push customer relationship management (CRM) ahead. Knowledge about data mining methods, marketing strategies and airline business processes has to be combined to successfully implement CRM. This paper is a case study and gives an overview about distinct issues, which have to be taken into account in order to provide a first solution to run CRM processes. We do not focus on each individual task of the project; rather we give a sketch about important steps like data preparation, customer valuation and segmentation and also explain the limitation of the solutions.",
"title": ""
},
{
"docid": "01a5bc92db5ae56c3bae8ddc84a1aa9b",
"text": "Accurate and automatic detection and delineation of cervical cells are two critical precursor steps to automatic Pap smear image analysis and detecting pre-cancerous changes in the uterine cervix. To overcome noise and cell occlusion, many segmentation methods resort to incorporating shape priors, mostly enforcing elliptical shapes (e.g. [1]). However, elliptical shapes do not accurately model cervical cells. In this paper, we propose a new continuous variational segmentation framework with star-shape prior using directional derivatives to segment overlapping cervical cells in Pap smear images. We show that our star-shape constraint better models the underlying problem and outperforms state-of-the-art methods in terms of accuracy and speed.",
"title": ""
},
{
"docid": "84f2072f32d2a29d372eef0f4622ddce",
"text": "This paper presents a new methodology for synthesis of broadband equivalent circuits for multi-port high speed interconnect systems from numerically obtained and/or measured frequency-domain and time-domain response data. The equivalent circuit synthesis is based on the rational function fitting of admittance matrix, which combines the frequency-domain vector fitting process, VECTFIT with its time-domain analog, TDVF to yield a robust and versatile fitting algorithm. The generated rational fit is directly converted into a SPICE-compatible circuit after passivity enforcement. The accuracy of the resulting algorithm is demonstrated through its application to the fitting of the admittance matrix of a power/ground plane structure",
"title": ""
},
{
"docid": "39fdfa5258c2cb22ed2d7f1f5b2afeaf",
"text": "Calling for research on automatic oversight for artificial intelligence systems.",
"title": ""
},
{
"docid": "5b61b6d96b7a4af62bf30b535a18e14a",
"text": "schooling were as universally endorsed as homework. Educators, parents, and policymakers of all political and pedagogical stripes insisted that homework is good and more is better—a view that was promoted most visibly in A Nation at Risk (National Commission on Excellence in Education, 1983) and What Works (U.S. Department of Education, 1986).1 Indeed, never in the history of American education was there a stronger professional and public consensus in favor of homework (see Gill & Schlossman, 1996; Gill & Schlossman, 2000). Homework has been touted for academic and character-building purposes, and for promoting America’s international competitiveness (see, e.g., Cooper, 2001; Keith, 1986; Maeroff, 1992; Maeroff, 1989; The Economist, 1995). It has been viewed as a key symbol, method, and yardstick of serious commitment to educational re-",
"title": ""
},
{
"docid": "268e434cedbf5439612b2197be73a521",
"text": "We have recently developed a chaotic gas turbine whose rotational motion might simulate turbulent Rayleigh-Bénard convection. The nondimensionalized equations of motion of our turbine are expressed as a star network of N Lorenz subsystems, referred to as augmented Lorenz equations. Here, we propose an application of the augmented Lorenz equations to chaotic cryptography, as a type of symmetric secret-key cryptographic method, wherein message encryption is performed by superimposing the chaotic signal generated from the equations on a plaintext in much the same way as in one-time pad cryptography. The ciphertext is decrypted by unmasking the chaotic signal precisely reproduced with a secret key consisting of 2N-1 (e.g., N=101) real numbers that specify the augmented Lorenz equations. The transmitter and receiver are assumed to be connected via both a quantum communication channel on which the secret key is distributed using a quantum key distribution protocol and a classical data communication channel on which the ciphertext is transmitted. We discuss the security and feasibility of our cryptographic method.",
"title": ""
},
{
"docid": "63046d1ca19a158052a62c8719f5f707",
"text": "Cloud machine learning (CML) techniques offer contemporary machine learning services, with pre-trained models and a service to generate own personalized models. This paper presents a completely unique emotional modeling methodology for incorporating human feeling into intelligent systems. The projected approach includes a technique to elicit emotion factors from users, a replacement illustration of emotions and a framework for predicting and pursuit user’s emotional mechanical phenomenon over time. The neural network based CML service has better training concert and enlarged exactness compare to other large scale deep learning systems. Opinions are important to almost all human activities and cloud based sentiment analysis is concerned with the automatic extraction of sentiment related information from text. With the rising popularity and availability of opinion rich resources such as personal blogs and online appraisal sites, new opportunities and issues arise as people now, actively use information technologies to explore and capture others opinions. In the existing system, a segmentation ranking model is designed to score the usefulness of a segmentation candidate for sentiment classification. A classification model is used for predicting the sentiment polarity of segmentation. The joint framework is trained directly using the sentences annotated with only sentiment polarity, without the use of any syntactic or sentiment annotations in segmentation level. However the existing system still has issue with classification accuracy results. To improve the classification performance, in the proposed system, cloud integrate the support vector machine, naive bayes and neural network algorithms along with joint segmentation approaches has been proposed to classify the very positive, positive, neutral, negative and very negative features more effectively using important feature selection. Also to handle the outliers we apply modified k-means clustering method on the given dataset. It is used to cloud cluster the outliers and hence the label as well as unlabeled features is handled efficiently. From the experimental result, we conclude that the proposed system yields better performance than the existing system.",
"title": ""
},
{
"docid": "d14da110523c56d3c1ab2be9d3fbcf8e",
"text": "Women are generally more risk averse than men. We investigated whether between- and within-gender variation in financial risk aversion was accounted for by variation in salivary concentrations of testosterone and in markers of prenatal testosterone exposure in a sample of >500 MBA students. Higher levels of circulating testosterone were associated with lower risk aversion among women, but not among men. At comparably low concentrations of salivary testosterone, however, the gender difference in risk aversion disappeared, suggesting that testosterone has nonlinear effects on risk aversion regardless of gender. A similar relationship between risk aversion and testosterone was also found using markers of prenatal testosterone exposure. Finally, both testosterone levels and risk aversion predicted career choices after graduation: Individuals high in testosterone and low in risk aversion were more likely to choose risky careers in finance. These results suggest that testosterone has both organizational and activational effects on risk-sensitive financial decisions and long-term career choices.",
"title": ""
},
{
"docid": "f78fcf875104f8bab2fa465c414331c6",
"text": "In this paper, we present a systematic framework for recognizing realistic actions from videos “in the wild”. Such unconstrained videos are abundant in personal collections as well as on the Web. Recognizing action from such videos has not been addressed extensively, primarily due to the tremendous variations that result from camera motion, background clutter, changes in object appearance, and scale, etc. The main challenge is how to extract reliable and informative features from the unconstrained videos. We extract both motion and static features from the videos. Since the raw features of both types are dense yet noisy, we propose strategies to prune these features. We use motion statistics to acquire stable motion features and clean static features. Furthermore, PageRank is used to mine the most informative static features. In order to further construct compact yet discriminative visual vocabularies, a divisive information-theoretic algorithm is employed to group semantically related features. Finally, AdaBoost is chosen to integrate all the heterogeneous yet complementary features for recognition. We have tested the framework on the KTH dataset and our own dataset consisting of 11 categories of actions collected from YouTube and personal videos, and have obtained impressive results for action recognition and action localization.",
"title": ""
},
{
"docid": "cec10dde2a3988b39d8b2e7655e92a3c",
"text": "As the performance gap between the CPU and main memory continues to grow, techniques to hide memory latency are essential to deliver a high performance computer system. Prefetching can often overlap memory latency with computation for array-based numeric applications. However, prefetching for pointer-intensive applications still remains a challenging problem. Prefetching linked data structures (LDS) is difficult because the address sequence of LDS traversal does not present the same arithmetic regularity as array-based applications and the data dependence of pointer dereferences can serialize the address generation process.\nIn this paper, we propose a cooperative hardware/software mechanism to reduce memory access latencies for linked data structures. Instead of relying on the past address history to predict future accesses, we identify the load instructions that traverse the LDS, and execute them ahead of the actual computation. To overcome the serial nature of the LDS address generation, we attach a prefetch controller to each level of the memory hierarchy and push, rather than pull, data to the CPU. Our simulations, using four pointer-intensive applications, show that the push model can achieve between 4% and 30% larger reductions in execution time compared to the pull model.",
"title": ""
},
{
"docid": "89feab547a2ab97f41ee9ea47a78ebd7",
"text": "Yarrowia lipolytica 3589, a tropical marine yeast, grew aerobically on a broad range of bromoalkanes varying in carbon chain length and differing in degree and position of bromide group. Amongst the bromoalkanes studied, viz. 2-bromopropane (2-BP), 1-bromobutane (1-BB), 1,5-dibromopentane (1,5-DBP) and 1-bromodecane (1-BD), the best utilized was 1-BD, with a maximal growth rate (μ(max) ) of 0.055 h⁻¹ and an affinity ratio (μ(max) /K(s) ) of 0.022. Utilization of these bromoalkanes as growth substrates was associated with a concomitant release of bromide (8202.9 µm) and cell mass (36 × 10⁹ cells/ml), occurring maximally on 1-BD. Adherence of yeast cells to these hydrophobic bromoalkanes was observed microscopically, with an increase in cell size and surface hydrophobicity. The maximal cell diameter was for 1-BD (4.66 µm), resulting in an increase in the calculated cell surface area (68.19 µm²) and sedimentation velocity (1.31 µm/s). Cell surface hydrophobicity values by microbial adhesion to solvents (MATS) analysis for yeasts grown on bromoalkanes and glucose were significantly high, i.e. >80%. Similarly, water contact angles also indicate that the cell surface of yeast cells grown in glucose possess a relatively more hydrophilic cell surface (θ = 49.1°), whereas cells grown in 1-BD possess a more hydrophobic cell surface (θ = 90.7°). No significant change in emulsification activity or surface tension was detected in the cell-free supernatant. Thus adherence to the bromoalkane droplets by an increase in cell size and surface hydrophobicity leading to debromination of the substrate might be the strategy employed in bromoalkane utilization and growth by Y. lipolytica 3589.",
"title": ""
},
{
"docid": "8721382dd1674fac3194d015b9c64f94",
"text": "fines excipients as “substances, other than the active drug substance of finished dosage form, which have been appropriately evaluated for safety and are included in a drug delivery system to either aid the processing of the drug delivery system during its manufacture; protect; support; enhance stability, bioavailability, or patient acceptability; assist in product identification; or enhance any other attributes of the overall safety and effectiveness of the drug delivery system during storage or use” (1). This definition implies that excipients serve a purpose in a formulation and contrasts with the old terminology, inactive excipients, which hints at the property of inertness. With a literal interpretation of this definition, an excipient can include diverse molecules or moieties such as replication incompetent viruses (adenoviral or retroviral vectors), bacterial protein components, monoclonal antibodies, bacteriophages, fusion proteins, and molecular chimera. For example, using gene-directed enzyme prodrug therapy, research indicated that chimera containing a transcriptional regulatory DNA sequence capable of being selectively activated in mammalian cells was linked to a sequence that encodes a -lactamase enzyme and delivered to target cells (2). The expressed enzyme in the targeted cells catalyzes the conversion of a subsequently administered prodrug to a toxic agent. A similar purpose is achieved by using an antibody conjugated to an enzyme followed by the administration of a noncytotoxic substance that is converted in vivo by the enzyme to its toxic form (3). In these examples, the chimera or the enzyme-linked antibody would qualify as excipients. Furthermore, many emerging delivery systems use a drug or gene covalently linked to the molecules, polymers, antibody, or chimera responsible for drug targeting, internalization, or transfection. Conventional wisdom dictates that such an entity be classified as the active substance or prodrug for regulatory purposes and be subject to one set of specifications for the entire molecule. The fact remains, however, that only a discrete part of this prodrug is responsible for the therapeutic effect, and a similar effect may be obtained by physically entrapping the drug as opposed to covalent conjugation. The situation is further complicated when fusion proteins are used as a combination of drug and delivery system or when the excipients themselves",
"title": ""
},
{
"docid": "65e3890edd57a0a6de65b4e38f3cea1c",
"text": "This article presents novel results concerning the recovery of signals from undersampled data in the common situation where such signals are not sparse in an orthonormal basis or incoherent dictionary, but in a truly redundant dictionary. This work thus bridges a gap in the literature and shows not only that compressed sensing is viable in this context, but also that accurate recovery is possible via an `1-analysis optimization problem. We introduce a condition on the measurement/sensing matrix, which is a natural generalization of the now well-known restricted isometry property, and which guarantees accurate recovery of signals that are nearly sparse in (possibly) highly overcomplete and coherent dictionaries. This condition imposes no incoherence restriction on the dictionary and our results may be the first of this kind. We discuss practical examples and the implications of our results on those applications, and complement our study by demonstrating the potential of `1-analysis for such problems.",
"title": ""
}
] | scidocsrr |
f45d110ac512a7916525b8f457d0a45c | Active Learning for Multivariate Time Series Classification with Positive Unlabeled Data | [
{
"docid": "8a5ae40bc5921d7614ca34ddf53cebbc",
"text": "In natural language processing community, sentiment classification based on insufficient labeled data is a well-known challenging problem. In this paper, a novel semi-supervised learning algorithm called active deep network (ADN) is proposed to address this problem. First, we propose the semi-supervised learning framework of ADN. ADN is constructed by restricted Boltzmann machines (RBM) with unsupervised fine-tuned by gradient-descent based supervised learning with an exponential loss function. Second, in the semi-supervised learning framework, we apply active learning to identify reviews that should be labeled as training data, then using the selected labeled reviews and all unlabeled reviews to train ADN architecture. Moreover, we combine the information density with ADN, and propose information ADN (IADN) method, which can apply the information density of all unlabeled reviews in choosing the manual labeled reviews. Experiments on five sentiment classification datasets show that ADN and IADN outperform classical semi-supervised learning algorithms, and deep learning techniques applied for sentiment classification. & 2013 Elsevier B.V. All rights reserved.",
"title": ""
}
] | [
{
"docid": "2e6193301f53719e58782bece34cb55a",
"text": "There is an increasing trend in using robots for medical purposes. One specific area is the rehabilitation. There are some commercial exercise machines used for rehabilitation purposes. However, these machines have limited use because of their insufficient motion freedom. In addition, these types of machines are not actively controlled and therefore can not accommodate complicated exercises required during rehabilitation. In this study, a rule based intelligent control methodology is proposed to imitate the faculties of an experienced physiotherapist. These involve interpretation of patient reactions, storing the information received, acting according to the available data, and learning from the previous experiences. Robot manipulator is driven by a servo motor and controlled by a computer using force/torque and position sensor information. Impedance control technique is selected for the force control.",
"title": ""
},
{
"docid": "ef9b5b0fbfd71c8d939bfe947c60292d",
"text": "OBJECTIVE\nSome prolonged and turbulent grief reactions include symptoms that differ from the DSM-IV criteria for major depressive disorder. The authors investigated a new diagnosis that would include these symptoms.\n\n\nMETHOD\nThey developed observer-based definitions of 30 symptoms noted clinically in previous longitudinal interviews of bereaved persons and then designed a plan to investigate whether any combination of these would serve as criteria for a possible new diagnosis of complicated grief disorder. Using a structured diagnostic interview, they assessed 70 subjects whose spouses had died. Latent class model analyses and signal detection procedures were used to calibrate the data against global clinical ratings and self-report measures of grief-specific distress.\n\n\nRESULTS\nComplicated grief disorder was found to be characterized by a smaller set of the assessed symptoms. Subjects elected by an algorithm for these symptoms patterns did not significantly overlap with subjects who received a diagnosis of major depressive disorder.\n\n\nCONCLUSIONS\nA new diagnosis of complicated grief disorder may be indicated. Its criteria would include the current experience (more than a year after a loss) of intense intrusive thoughts, pangs of severe emotion, distressing yearnings, feeling excessively alone and empty, excessively avoiding tasks reminiscent of the deceased, unusual sleep disturbances, and maladaptive levels of loss of interest in personal activities.",
"title": ""
},
{
"docid": "bd1ebfe449a1a95ac37f6c084e3e6dad",
"text": "Within the last years educational games have attracted some attention from the academic community. Multiple enhancements of the learning experience are usually attributed to educational games, although the most cited is their potential to improve students' motivation. In spite of these expected advantages, how to introduce video games in the learning process is an issue that is not completely clear yet, which reduces the potential impact of educational video games. Our goal at the <;e-UCM> research group is to identify the barriers that are limiting the integration of games in the learning process and propose approaches to tackle them. The result of this work is the <;e-Adventure> platform, an educational game authoring tool that aims to make of video games just another educational tool at the disposal of the instructors. In this paper we describe how <;e-Adventure> contributes to the integration of games in the learning process through three main focuses: reduction of the high development costs of educational games, involvement of instructors in the development process to enhance the educational value, and the production of the games using a white-box model. In addition we describe the current research that we are conducting using the platform as a test-bed.",
"title": ""
},
{
"docid": "083d5b88cc1bf5490a0783a4a94e9fb2",
"text": "Taking care and maintenance of a healthy population is the Strategy of each country. Information and communication technologies in the health care system have led to many changes in order to improve the quality of health care services to patients, rational spending time and reduce costs. In the booming field of IT research, the reach of drug delivery, information on grouping of similar drugs has been lacking. The wealth distribution and drug affordability at a certain demographic has been interlinked and proposed in this paper. Looking at the demographic we analyze and group the drugs based on target action and link this to the wealth and the people to medicine ratio, which can be accomplished via data mining and web mining. The data thus mined will be analysed and made available to public and commercial purpose for their further knowledge and benefit.",
"title": ""
},
{
"docid": "238adc0417c167aeb64c23b576f434d0",
"text": "This paper studies the problem of matching images captured from an unmanned ground vehicle (UGV) to those from a satellite or high-flying vehicle. We focus on situations where the UGV navigates in remote areas with few man-made structures. This is a difficult problem due to the drastic change in perspective between the ground and aerial imagery and the lack of environmental features for image comparison. We do not rely on GPS, which may be jammed or uncertain. We propose a two-step approach: (1) the UGV images are warped to obtain a bird's eye view of the ground, and (2) this view is compared to a grid of satellite locations using whole-image descriptors. We analyze the performance of a variety of descriptors for different satellite map sizes and various terrain and environment types. We incorporate the air-ground matching into a particle-filter framework for localization using the best-performing descriptor. The results show that vision-based UGV localization from satellite maps is not only possible, but often provides better position estimates than GPS estimates, enabling us to improve the location estimates of Google Street View.",
"title": ""
},
{
"docid": "877d7d467711e8cb0fd03a941c7dc9da",
"text": "Film clips are widely utilized to elicit emotion in a variety of research studies. Normative ratings for scenes selected for these purposes support the idea that selected clips correspond to the intended target emotion, but studies reporting normative ratings are limited. Using an ethnically diverse sample of college undergraduates, selected clips were rated for intensity, discreteness, valence, and arousal. Variables hypothesized to affect the perception of stimuli (i.e., gender, race-ethnicity, and familiarity) were also examined. Our analyses generally indicated that males reacted strongly to positively valenced film clips, whereas females reacted more strongly to negatively valenced film clips. Caucasian participants tended to react more strongly to the film clips, and we found some variation by race-ethnicity across target emotions. Finally, familiarity with the films tended to produce higher ratings for positively valenced film clips, and lower ratings for negatively valenced film clips. These findings provide normative ratings for a useful set of film clips for the study of emotion, and they underscore factors to be considered in research that utilizes scenes from film for emotion elicitation.",
"title": ""
},
{
"docid": "e0e33d26cc65569e80213069cb5ad857",
"text": "Capsule Networks have great potential to tackle problems in structural biology because of their aention to hierarchical relationships. is paper describes the implementation and application of a Capsule Network architecture to the classication of RAS protein family structures on GPU-based computational resources. e proposed Capsule Network trained on 2D and 3D structural encodings can successfully classify HRAS and KRAS structures. e Capsule Network can also classify a protein-based dataset derived from a PSI-BLAST search on sequences of KRAS and HRAS mutations. Our results show an accuracy improvement compared to traditional convolutional networks, while improving interpretability through visualization of activation vectors.",
"title": ""
},
{
"docid": "10f6ae0e9c254279b0cf0f5e98caa9cd",
"text": "The automatic assessment of photo quality from an aesthetic perspective is a very challenging problem. Most existing research has predominantly focused on the learning of a universal aesthetic model based on hand-crafted visual descriptors . However, this research paradigm can achieve only limited success because (1) such hand-crafted descriptors cannot well preserve abstract aesthetic properties , and (2) such a universal model cannot always capture the full diversity of visual content. To address these challenges, we propose in this paper a novel query-dependent aesthetic model with deep learning for photo quality assessment. In our method, deep aesthetic abstractions are discovered from massive images , whereas the aesthetic assessment model is learned in a query- dependent manner. Our work addresses the first problem by learning mid-level aesthetic feature abstractions via powerful deep convolutional neural networks to automatically capture the underlying aesthetic characteristics of the massive training images . Regarding the second problem, because photographers tend to employ different rules of photography for capturing different images , the aesthetic model should also be query- dependent . Specifically, given an image to be assessed, we first identify which aesthetic model should be applied for this particular image. Then, we build a unique aesthetic model of this type to assess its aesthetic quality. We conducted extensive experiments on two large-scale datasets and demonstrated that the proposed query-dependent model equipped with learned deep aesthetic abstractions significantly and consistently outperforms state-of-the-art hand-crafted feature -based and universal model-based methods.",
"title": ""
},
{
"docid": "0998097311e16ad38e2404435a778dcb",
"text": "Civilian Global Positioning System (GPS) receivers are vulnerable to a number of different attacks such as blocking, jamming, and spoofing. The goal of such attacks is either to prevent a position lock (blocking and jamming), or to feed the receiver false information so that it computes an erroneous time or location (spoofing). GPS receivers are generally aware of when blocking or jamming is occurring because they have a loss of signal. Spoofing, however, is a surreptitious attack. Currently, no countermeasures are in use for detecting spoofing attacks. We believe, however, that it is possible to implement simple, low-cost countermeasures that can be retrofitted onto existing GPS receivers. This would, at the very least, greatly complicate spoofing attacks. Introduction: The civilian Global Positioning System (GPS) is widely used by both government and private industry for many important applications. Some of these applications include public safety services such as police, fire, rescue and ambulance. The cargo industry, buses, taxis, railcars, delivery vehicles, agricultural harvesters, private automobiles, spacecraft, marine and airborne traffic also use GPS systems for navigation. In fact, the Federal Aviation Administration (FAA) is in the process of drafting an instruction requiring that all radio navigation systems aboard aircraft use GPS [1]. Additional uses include hiking and surveying, as well as being used in robotics, cell phones, animal tracking and even GPS wristwatches. Utility companies and telecommunication companies use GPS timing signals to regulate the base frequency of their distribution grids. GPS timing signals are also used by the financial industry, the broadcast industry, mobile telecommunication providers, the international financial industry, banking (for money transfers and time locks), and other distributed computer network applications [2,3]. In short, anyone who wants to know their exact location, velocity, or time might find GPS useful. Unfortunately, the civilian GPS signals are not secure [1]. Only the military GPS signals are encrypted (authenticated), but these are generally unavailable to civilians, foreign governments, and most of the U.S. government, including most of the Department of Defense (DoD). Plans are underway to upgrade the existing GPS system, but they apparently do not include adding encryption or authentication to the civilian GPS signal [4,5]. The GPS signal strength measured at the surface of the Earth is about –160dBw (1x10-16 Watts), which is roughly equivalent to viewing a 25-Watt light bulb from a distance of 10,000 miles. This weak signal can be easily blocked by destroying or shielding the GPS receiver’s antenna. The GPS signal can also be effectively jammed by a signal of a similar frequency, but greater strength. Blocking and jamming, however, are not the greatest security risk, because the GPS receiver will be fully aware it is not receiving the GPS signals needed to determine position and time. A more pernicious attack involves feeding the GPS receiver fake GPS signals so that it believes it is located somewhere in space and time that it is not. This “spoofing” attack is more elegant than jamming because it is surreptitious. The Vulnerability Assessment Team (VAT) at Los Alamos National Laboratory (LANL) has recently demonstrated the ease with which civilian GPS spoofing attacks can be implemented [6]. This spoofing is most easily accomplished by using a GPS satellite simulator. 
Such GPS satellite simulators are uncontrolled, and widely available. To conduct the spoofing attack, an adversary broadcasts a fake GPS signal with a higher signal strength than the true GPS signal. The GPS receiver believes that the fake signal is actually the true GPS signal from space, and ignores the true GPS signal. The receiver then proceeds to calculate erroneous position or time information based on this false signal. How Does GPS work? The GPS is operated by DoD. It consists of a constellation of 27 satellites (24 active and 3 standby) in 6 separate orbits and reached full official operational capability status on July 17, 1995 [7]. GPS users have the ability to obtain a 3-D position, velocity and time fix in all types of weather, 24-hours a day. GPS users can locate their position to within ± 18 ft on average or ± 60-90 ft for a worst case 3-D fix [8]. Each GPS satellite broadcasts two signals, a civilian unencrypted signal and a military encrypted signal. The civilian GPS signal was never intended for critical or security applications, though that is, unfortunately, how it is now often used. The DoD reserves the military encrypted GPS signal for sensitive applications such as smart weapons. This paper will be focusing on the civilian (unencrypted) GPS signal. Any discussion of civilian GPS vulnerabilities are fully unclassified [9]. The carrier wave for the civilian signal is the same frequency (1575.2 MHz) for all of the GPS satellites. The C/A code provides the GPS receiver on the Earth’s surface with a unique identification number (a.k.a. PRN or Pseudo Random Noise code). In this manner, each satellite transmits a unique identification number that allows the GPS receiver to know which satellites it is receiving signals from. The Nav/System data provides the GPS receiver with information about the position of all the satellites in the constellation as well as precise timing data from the atomic clocks aboard the satellites. L1 Carrier 1575.2 MHz",
"title": ""
},
{
"docid": "723cf2a8b6142a7e52a0ff3fb74c3985",
"text": "The Internet of Mobile Things (IoMT) requires support for a data lifecycle process ranging from sorting, cleaning and monitoring data streams to more complex tasks such as querying, aggregation, and analytics. Current solutions for stream data management in IoMT have been focused on partial aspects of a data lifecycle process, with special emphasis on sensor networks. This paper aims to address this problem by developing an offline and real-time data lifecycle process that incorporates a layered, data-flow centric, and an edge/cloud computing approach that is needed for handling heterogeneous, streaming and geographicallydispersed IoMT data streams. We propose an end to end architecture to support an instant intra-layer communication that establishes a stream data flow in real-time to respond to immediate data lifecycle tasks at the edge layer of the system. Our architecture also provides offline functionalities for later analytics and visualization of IoMT data streams at the core layer of the system. Communication and process are thus the defining factors in the design of our stream data management solution for IoMT. We describe and evaluate our prototype implementation using real-time transit data feeds and a commercial edge-based platform. Preliminary results are showing the advantages of running data lifecycle tasks at the edge of the network for reducing the volume of data streams that are redundant and should not be transported to the cloud. Keywords—stream data lifecycle, edge computing, cloud computing, Internet of Mobile Things, end to end architectures",
"title": ""
},
{
"docid": "54cef03846f090678efd5b67d3cb5b17",
"text": "This paper based on the speed control of induction motor (IM) using proportional integral controller (PI controller) and proportional integral derivative controller (PID controller) with the use of vector control technique. The conventional PID controller is compared with the conventional PI controller for full load condition. MATLAB simulation is carried out and results are investigated for speed control of Induction Motor without any controller, with PI controller and with PID controller on full load condition.",
"title": ""
},
{
"docid": "1fcdfd02a6ecb12dec5799d6580c67d4",
"text": "One of the major problems in developing countries is maintenance of roads. Well maintained roads contribute a major portion to the country's economy. Identification of pavement distress such as potholes and humps not only helps drivers to avoid accidents or vehicle damages, but also helps authorities to maintain roads. This paper discusses previous pothole detection methods that have been developed and proposes a cost-effective solution to identify the potholes and humps on roads and provide timely alerts to drivers to avoid accidents or vehicle damages. Ultrasonic sensors are used to identify the potholes and humps and also to measure their depth and height, respectively. The proposed system captures the geographical location coordinates of the potholes and humps using a global positioning system receiver. The sensed-data includes pothole depth, height of hump, and geographic location, which is stored in the database (cloud). This serves as a valuable source of information to the government authorities and vehicle drivers. An android application is used to alert drivers so that precautionary measures can be taken to evade accidents. Alerts are given in the form of a flash messages with an audio beep.",
"title": ""
},
{
"docid": "1d29d30089ffd9748c925a20f8a1216e",
"text": "• Users may freely distribute the URL that is used to identify this publication. • Users may download and/or print one copy of the publication from the University of Birmingham research portal for the purpose of private study or non-commercial research. • User may use extracts from the document in line with the concept of ‘fair dealing’ under the Copyright, Designs and Patents Act 1988 (?) • Users may not further distribute the material nor use it for the purposes of commercial gain.",
"title": ""
},
{
"docid": "68b2608c91525f3147f74b41612a9064",
"text": "Protective effects of sweet orange (Citrus sinensis) peel and their bioactive compounds on oxidative stress were investigated. According to HPLC-DAD and HPLC-MS/MS analysis, hesperidin (HD), hesperetin (HT), nobiletin (NT), and tangeretin (TT) were present in water extracts of sweet orange peel (WESP). The cytotoxic effect in 0.2mM t-BHP-induced HepG2 cells was inhibited by WESP and their bioactive compounds. The protective effect of WESP and their bioactive compounds in 0.2mM t-BHP-induced HepG2 cells may be associated with positive regulation of GSH levels and antioxidant enzymes, decrease in ROS formation and TBARS generation, increase in the mitochondria membrane potential and Bcl-2/Bax ratio, as well as decrease in caspase-3 activation. Overall, WESP displayed a significant cytoprotective effect against oxidative stress, which may be most likely because of the phenolics-related bioactive compounds in WESP, leading to maintenance of the normal redox status of cells.",
"title": ""
},
{
"docid": "62f67cf8f628be029ce748121ff52c42",
"text": "This paper reviews interface design of web pages for e-commerce. Different tasks in e-commerce are contrasted. A systems model is used to illustrate the information flow between three subsystems in e-commerce: store environment, customer, and web technology. A customer makes several decisions: to enter the store, to navigate, to purchase, to pay, and to keep the merchandize. This artificial environment must be designed so that it can support customer decision-making. To retain customers it must be pleasing and fun, and create a task with natural flow. Customers have different needs, competence and motivation, which affect decision-making. It may therefore be important to customize the design of the e-store environment. Future ergonomics research will have to investigate perceptual aspects, such as presentation of merchandize, and cognitive issues, such as product search and navigation, as well as decision making while considering various economic parameters. Five theories on e-commerce research are presented.",
"title": ""
},
{
"docid": "0185d09853600b950f5a1af27e0cdd91",
"text": "In this paper, the problem of matching pairs of correlated random graphs with multi-valued edge attributes is considered. Graph matching problems of this nature arise in several settings of practical interest including social network de-anonymization, study of biological data, and web graphs. An achievable region of graph parameters for successful matching is derived by analyzing a new matching algorithm that we refer to as typicality matching. The algorithm operates by investigating the joint typicality of the adjacency matrices of the two correlated graphs. Our main result shows that the achievable region depends on the mutual information between the variables corresponding to the edge probabilities of the two graphs. The result is based on bounds on the typicality of permutations of sequences of random variables that might be of independent interest.",
"title": ""
},
{
"docid": "671952f18fb9041e7335f205666bf1f5",
"text": "This new handbook is an efficient way to keep up with the continuing advances in antenna technology and applications. The handbook is uniformly well written, up-to-date, and filled with a wealth of practical information. This makes it a useful reference for most antenna engineers and graduate students.",
"title": ""
},
{
"docid": "19f1f1156ca9464759169dd2d4005bf6",
"text": "We first consider the problem of partitioning the edges of a graph ~ into bipartite cliques such that the total order of the cliques is minimized, where the order of a clique is the number of vertices in it. It is shown that the problem is NP-complete. We then prove the existence of a partition of small total order in a sufficiently dense graph and devise an efilcient algorithm to compute such a partition. It turns out that our algorithm exhibits a trade-off between the total order of the partition and the running time. Next, we define the notion of a compression of a graph ~ and use the result on graph partitioning to efficiently compute an optimal compression for graphs of a given size. An interesting application of the graph compression result arises from the fact that several graph algorithms can be adapted to work with the compressed rep~esentation of the input graph, thereby improving the bound on their running times particularly on dense graphs. This makes use of the trade-off result we obtain from our partitioning algorithm. The algorithms analyzed include those for matchings, vertex connectivity, edge connectivity and shortest paths. In each case, we improve upon the running times of the best-known algorithms for these problems.",
"title": ""
},
{
"docid": "e22f9516948725be20d8e331d5bafa56",
"text": "Spatial information captured from optical remote sensors on board unmanned aerial vehicles (UAVs) has great potential in automatic surveillance of electrical infrastructure. For an automatic vision-based power line inspection system, detecting power lines from a cluttered background is one of the most important and challenging tasks. In this paper, a novel method is proposed, specifically for power line detection from aerial images. A pulse coupled neural filter is developed to remove background noise and generate an edge map prior to the Hough transform being employed to detect straight lines. An improved Hough transform is used by performing knowledge-based line clustering in Hough space to refine the detection results. The experiment on real image data captured from a UAV platform demonstrates that the proposed approach is effective for automatic power line detection.",
"title": ""
},
{
"docid": "fb66a74a7cb4aa27556b428e378353a8",
"text": "This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible. Abstract—High-resolution radar sensors are able to resolve multiple measurements per object and therefore provide valuable information for vehicle environment perception. For instance, multiple measurements allow to infer the size of an object or to more precisely measure the object’s motion. Yet, the increased amount of data raises the demands on tracking modules: measurement models that are able to process multiple measurements for an object are necessary and measurement-toobject associations become more complex. This paper presents a new variational radar model for vehicles and demonstrates how this model can be incorporated in a Random-Finite-Setbased multi-object tracker. The measurement model is learned from actual data using variational Gaussian mixtures and avoids excessive manual engineering. In combination with the multiobject tracker, the entire process chain from the raw measurements to the resulting tracks is formulated probabilistically. The presented approach is evaluated on experimental data and it is demonstrated that data-driven measurement model outperforms a manually designed model.",
"title": ""
}
] | scidocsrr |
09af52aae82202026c94950754f17f7c | Wireless body sensor networks for health-monitoring applications. | [
{
"docid": "a75ab88f3b7f672bc357429793e74635",
"text": "To save life, casualty care requires that trauma injuries are accurately and expeditiously assessed in the field. This paper describes the initial bench testing of a wireless wearable pulse oximeter developed based on a small forehead mounted sensor. The battery operated device employs a lightweight optical reflectance sensor and incorporates an annular photodetector to reduce power consumption. The system also has short range wireless communication capabilities to transfer arterial oxygen saturation (SpO2), heart rate (HR), body acceleration, and posture information to a PDA. It has the potential for use in combat casualty care, such as for remote triage, and by first responders, such as firefighters",
"title": ""
}
] | [
{
"docid": "b120095067684a67fe3327d18860e760",
"text": "We present a flexible method for fusing information from optical and range sensors based on an accelerated high-dimensional filtering approach. Our system takes as input a sequence of monocular camera images as well as a stream of sparse range measurements as obtained from a laser or other sensor system. In contrast with existing approaches, we do not assume that the depth and color data streams have the same data rates or that the observed scene is fully static. Our method produces a dense, high-resolution depth map of the scene, automatically generating confidence values for every interpolated depth point. We describe how to integrate priors on object motion and appearance and how to achieve an efficient implementation using parallel processing hardware such as GPUs.",
"title": ""
},
{
"docid": "e6cd81dfc8c6c505161e84faaf51fa04",
"text": "It was assumed that the degraded image H was of the form H= W*S, where W is the original image, S is the point spread function, and * denotes the operation of convolution. It was also assumed that W, S, and H are discrete probability-frequency functions, not necessarily normalized. That is, the numerical value of a point of W, S, or H is considered as a measure of the frequency of the occurrence of an event at that point. S is usually in normalized form. Units of energy (which may be considered unique events) originating at a point in W are distributed at points in H according to the frequencies indicated by S. H then represents the resulting sums of the effects of the units of energy originating at all points of W. In what follows, each of the three letters has two uses when subscripted. For example, Wi indicates either the ith location in the array W or the value associated with the ith location. The unsubscripted letter refers to the entire array or the value associated with the array as in W = E i Wi. The doublesubscripted Wi j in two dimensions is interpreted similarly to Wi in one dimension. In the approximation formulas, a subscript r appears, which is the number of the iteration.",
"title": ""
},
{
"docid": "0994065c757a88373a4d97e5facfee85",
"text": "Scholarly literature suggests digital marketing skills gaps in industry, but these skills gaps are not clearly identified. The research aims to specify any digital marketing skills gaps encountered by professionals working in communication industries. In-depth interviews were undertaken with 20 communication industry professionals. A focus group followed, testing the rigour of the data. We find that a lack of specific technical skills; a need for best practice guidance on evaluation metrics, and a lack of intelligent futureproofing for dynamic technological change and development are skills gaps currently challenging the communication industry. However, the challenge of integrating digital marketing approaches with established marketing practice emerges as the key skills gap. Emerging from the key findings, a Digital Marketer Model was developed, highlighting the key competencies and skills needed by an excellent digital marketer. The research concludes that guidance on best practice, focusing upon evaluation metrics, futureproofing and strategic integration, needs to be developed for the communication industry. The Digital Marketing Model should be subject to further testing in industry and academia. Suggestions for further research are discussed.",
"title": ""
},
{
"docid": "d54e33049b3f5170ec8bd09d8f17c05c",
"text": "Deep learning algorithms seek to exploit the unknown structure in the input distribution in order to discover good representations, often at multiple levels, with higher-level learned features defined in terms of lower-level features. The objective is to make these higherlevel representations more abstract, with their individual features more invariant to most of the variations that are typically present in the training distribution, while collectively preserving as much as possible of the information in the input. Ideally, we would like these representations to disentangle the unknown factors of variation that underlie the training distribution. Such unsupervised learning of representations can be exploited usefully under the hypothesis that the input distribution P (x) is structurally related to some task of interest, say predicting P (y|x). This paper focusses on why unsupervised pre-training of representations can be useful, and how it can be exploited in the transfer learning scenario, where we care about predictions on examples that are not from the same distribution as the training distribution.",
"title": ""
},
{
"docid": "48b2d263a0f547c5c284c25a9e43828e",
"text": "This paper presents hierarchical topic models for integrating sentiment analysis with collaborative filtering. Our goal is to automatically predict future reviews to a given author from previous reviews. For this goal, we focus on differentiating author's preference, while previous sentiment analysis models process these review articles without this difference. Therefore, we propose a Latent Evaluation Topic model (LET) that infer each author's preference by introducing novel latent variables into author and his/her document layer. Because these variables distinguish the variety of words in each article by merging similar word distributions, LET incorporates the difference of writers' preferences into sentiment analysis. Consequently LET can determine the attitude of writers, and predict their reviews based on like-minded writers' reviews in the collaborative filtering approach. Experiments on review articles show that the proposed model can reduce the dimensionality of reviews to the low-dimensional set of these latent variables, and is a significant improvement over standard sentiment analysis models and collaborative filtering algorithms.",
"title": ""
},
{
"docid": "49e875364e2551dda40b682bd37d4ea6",
"text": "The short-circuit current calculation of any equipment in the power system is very important for selection of appropriate relay characteristics and circuit breaker for the protection of the system. The power system is undergoing changes because of large scale penetration of renewable energy sources in the conventional system. Major renewable sources which are included in the power system are wind energy and solar energy sources. The wind energy is supplied by wind turbine generators of various types. Type III generators i.e. Doubly Fed Induction Generator (DFIG) is the most common types of generator employed offering different behavior compared to conventionally employed synchronous generators. In this paper; the short circuit current contribution of DFIG is calculated analytically and the same is validated by PSCAD/EMTDC software under various wind speeds and by considering certain voltage drops of the generator output.",
"title": ""
},
{
"docid": "370b1775eddfb6241078285872e1a009",
"text": "Methods for Named Entity Recognition and Disambiguation (NERD) perform NER and NED in two separate stages. Therefore, NED may be penalized with respect to precision by NER false positives, and suffers in recall from NER false negatives. Conversely, NED does not fully exploit information computed by NER such as types of mentions. This paper presents J-NERD, a new approach to perform NER and NED jointly, by means of a probabilistic graphical model that captures mention spans, mention types, and the mapping of mentions to entities in a knowledge base. We present experiments with different kinds of texts from the CoNLL’03, ACE’05, and ClueWeb’09-FACC1 corpora. J-NERD consistently outperforms state-of-the-art competitors in end-to-end NERD precision, recall, and F1.",
"title": ""
},
{
"docid": "9fefe5e216dec9b11f389c7d62175742",
"text": "Physical interaction in robotics is a complex problem that requires not only accurate reproduction of the kinematic trajectories but also of the forces and torques exhibited during the movement. We base our approach on Movement Primitives (MP), as MPs provide a framework for modelling complex movements and introduce useful operations on the movements, such as generalization to novel situations, time scaling, and others. Usually, MPs are trained with imitation learning, where an expert demonstrates the trajectories. However, MPs used in physical interaction either require additional learning approaches, e.g., reinforcement learning, or are based on handcrafted solutions. Our goal is to learn and generate movements for physical interaction that are learned with imitation learning, from a small set of demonstrated trajectories. The Probabilistic Movement Primitives (ProMPs) framework is a recent MP approach that introduces beneficial properties, such as combination and blending of MPs, and represents the correlations present in the movement. The ProMPs provides a variable stiffness controller that reproduces the movement but it requires a dynamics model of the system. Learning such a model is not a trivial task, and, therefore, we introduce the model-free ProMPs, that are learning jointly the movement and the necessary actions from a few demonstrations. We derive a variable stiffness controller analytically. We further extent the ProMPs to include force and torque signals, necessary for physical interaction. We evaluate our approach in simulated and real robot tasks.",
"title": ""
},
{
"docid": "016891dcefdf3668b6359d95617536b3",
"text": "While most steps in the modern object detection methods are learnable, the region feature extraction step remains largely handcrafted, featured by RoI pooling methods. This work proposes a general viewpoint that unifies existing region feature extraction methods and a novel method that is end-to-end learnable. The proposed method removes most heuristic choices and outperforms its RoI pooling counterparts. It moves further towards fully learnable object detection.",
"title": ""
},
{
"docid": "8c174dbb8468b1ce6f4be3676d314719",
"text": "An estimated 24 million people worldwide have dementia, the majority of whom are thought to have Alzheimer's disease. Thus, Alzheimer's disease represents a major public health concern and has been identified as a research priority. Although there are licensed treatments that can alleviate symptoms of Alzheimer's disease, there is a pressing need to improve our understanding of pathogenesis to enable development of disease-modifying treatments. Methods for improving diagnosis are also moving forward, but a better consensus is needed for development of a panel of biological and neuroimaging biomarkers that support clinical diagnosis. There is now strong evidence of potential risk and protective factors for Alzheimer's disease, dementia, and cognitive decline, but further work is needed to understand these better and to establish whether interventions can substantially lower these risks. In this Seminar, we provide an overview of recent evidence regarding the epidemiology, pathogenesis, diagnosis, and treatment of Alzheimer's disease, and discuss potential ways to reduce the risk of developing the disease.",
"title": ""
},
{
"docid": "3f157067ce2d5d6b6b4c9d9faaca267b",
"text": "The rise of network forms of organization is a key consequence of the ongoing information revolution. Business organizations are being newly energized by networking, and many professional militaries are experimenting with flatter forms of organization. In this chapter, we explore the impact of networks on terrorist capabilities, and consider how this development may be associated with a move away from emphasis on traditional, episodic efforts at coercion to a new view of terror as a form of protracted warfare. Seen in this light, the recent bombings of U.S. embassies in East Africa, along with the retaliatory American missile strikes, may prove to be the opening shots of a war between a leading state and a terror network. We consider both the likely context and the conduct of such a war, and offer some insights that might inform policies aimed at defending against and countering terrorism.",
"title": ""
},
{
"docid": "a45ac7298f57a1be7bf5a968a3d4f10b",
"text": "Recent work has shown that tight concentration of the entire spectrum of singular values of a deep network’s input-output Jacobian around one at initialization can speed up learning by orders of magnitude. Therefore, to guide important design choices, it is important to build a full theoretical understanding of the spectra of Jacobians at initialization. To this end, we leverage powerful tools from free probability theory to provide a detailed analytic understanding of how a deep network’s Jacobian spectrum depends on various hyperparameters including the nonlinearity, the weight and bias distributions, and the depth. For a variety of nonlinearities, our work reveals the emergence of new universal limiting spectral distributions that remain concentrated around one even as the depth goes to infinity.",
"title": ""
},
{
"docid": "5174b54a546002863a50362c70921176",
"text": "The intrinsic complexity of the brain can lead one to set aside issues related to its relationships with the body, but the field of embodied cognition emphasizes that understanding brain function at the system level requires one to address the role of the brain-body interface. It has only recently been appreciated that this interface performs huge amounts of computation that does not have to be repeated by the brain, and thus affords the brain great simplifications in its representations. In effect the brain's abstract states can refer to coded representations of the world created by the body. But even if the brain can communicate with the world through abstractions, the severe speed limitations in its neural circuitry mean that vast amounts of indexing must be performed during development so that appropriate behavioral responses can be rapidly accessed. One way this could happen would be if the brain used a decomposition whereby behavioral primitives could be quickly accessed and combined. This realization motivates our study of independent sensorimotor task solvers, which we call modules, in directing behavior. The issue we focus on herein is how an embodied agent can learn to calibrate such individual visuomotor modules while pursuing multiple goals. The biologically plausible standard for module programming is that of reinforcement given during exploration of the environment. However this formulation contains a substantial issue when sensorimotor modules are used in combination: The credit for their overall performance must be divided amongst them. We show that this problem can be solved and that diverse task combinations are beneficial in learning and not a complication, as usually assumed. Our simulations show that fast algorithms are available that allot credit correctly and are insensitive to measurement noise.",
"title": ""
},
{
"docid": "ea5a455bca9ff0dbb1996bd97d89dfe5",
"text": "Single exon genes (SEG) are archetypical of prokaryotes. Hence, their presence in intron-rich, multi-cellular eukaryotic genomes is perplexing. Consequently, a study on SEG origin and evolution is important. Towards this goal, we took the first initiative of identifying and counting SEG in nine completely sequenced eukaryotic organisms--four of which are unicellular (E. cuniculi, S. cerevisiae, S. pombe, P. falciparum) and five of which are multi-cellular (C. elegans, A. thaliana, D. melanogaster, M. musculus, H. sapiens). This exercise enabled us to compare their proportion in unicellular and multi-cellular genomes. The comparison suggests that the SEG fraction decreases with gene count (r = -0.80) and increases with gene density (r = 0.88) in these genomes. We also examined the distribution patterns of their protein lengths in different genomes.",
"title": ""
},
{
"docid": "95b23060ff9ee6393acc7b8a7f0c0535",
"text": "The increased price and the limited supply of rare-earth materials have been recognized as a problem by the international clean energy community. Rare-earth permanent magnets are widely used in electrical motors in hybrid and pure electrical vehicles, which are prized for improving fuel efficiency and reducing carbon dioxide (CO2) emissions. Such motors must have characteristics of high efficiency, compactness, and high torque density, as well as a wide range of operating speeds. So far, these demands have not been achieved without the use of rare-earth permanent magnets. Here, we show that a switched reluctance motor that is competitive with rare-earth permanent-magnet motors can be designed. The developed motor contains no rare-earth permanent magnets, but rather, employs high-silicon steel with low iron loss to improve efficiency. Experiments showed that the developed motor has competitive or better efficiency, torque density, compactness, and range of operating speeds compared with a standard rare-earth permanent-magnet motor. Our results demonstrate how a rare-earth-free motor could be developed to be competitive with rare-earth permanent-magnet motors, for use as a more affordable and sustainable alternative, not only in electric and hybrid vehicles, but also in the wide variety of industrial applications.",
"title": ""
},
{
"docid": "e9b438cfe853e98f05b661f9149c0408",
"text": "Misinformation and fact-checking are opposite forces in the news environment: the former creates inaccuracies to mislead people, while the latter provides evidence to rebut the former. These news articles are often posted on social media and attract user engagement in the form of comments. In this paper, we investigate linguistic (especially emotional and topical) signals expressed in user comments in the presence of misinformation and fact-checking. We collect and analyze a dataset of 5,303 social media posts with 2,614,374 user comments from Facebook, Twitter, and YouTube, and associate these posts to fact-check articles from Snopes and PolitiFact for veracity rulings (i.e., from true to false). We find that linguistic signals in user comments vary significantly with the veracity of posts, e.g., we observe more misinformation-awareness signals and extensive emoji and swear word usage with falser posts. We further show that these signals can help to detect misinformation. In addition, we find that while there are signals indicating positive effects after fact-checking, there are also signals indicating potential \"backfire\" effects.",
"title": ""
},
{
"docid": "f80458241f0a33aebd8044bf85bd25ec",
"text": "Brachial–ankle pulse wave velocity (baPWV) is a promising technique to assess arterial stiffness conveniently. However, it is not known whether baPWV is associated with well-established indices of central arterial stiffness. We determined the relation of baPWV with aortic (carotid-femoral) PWV, leg (femoral-ankle) PWV, and carotid augmentation index (AI) by using both cross-sectional and interventional approaches. First, we studied 409 healthy adults aged 18–76 years. baPWV correlated significantly with aortic PWV (r=0.76), leg PWV (r=0.76), and carotid AI (r=0.52). A stepwise regression analysis revealed that aortic PWV was the primary independent correlate of baPWV, explaining 58% of the total variance in baPWV. Additional 23% of the variance was explained by leg PWV. Second, 13 sedentary healthy men were studied before and after a 16-week moderate aerobic exercise intervention (brisk walking to jogging; 30–45 min/day; 4–5 days/week). Reductions in aortic PWV observed with the exercise intervention were significantly and positively associated with the corresponding changes in baPWV (r=0.74). A stepwise regression analysis revealed that changes in aortic PWV were the only independent correlate of changes in baPWV (β=0.74), explaining 55% of the total variance. These results suggest that baPWV may provide qualitatively similar information to those derived from central arterial stiffness although some portions of baPWV may be determined by peripheral arterial stiffness.",
"title": ""
},
{
"docid": "5455a8fd6e6be03e3a4163665425247d",
"text": "The change in spring phenology is recognized to exert a major influence on carbon balance dynamics in temperate ecosystems. Over the past several decades, several studies focused on shifts in spring phenology; however, large uncertainties still exist, and one understudied source could be the method implemented in retrieving satellite-derived spring phenology. To account for this potential uncertainty, we conducted a multimethod investigation to quantify changes in vegetation green-up date from 1982 to 2010 over temperate China, and to characterize climatic controls on spring phenology. Over temperate China, the five methods estimated that the vegetation green-up onset date advanced, on average, at a rate of 1.3 ± 0.6 days per decade (ranging from 0.4 to 1.9 days per decade) over the last 29 years. Moreover, the sign of the trends in vegetation green-up date derived from the five methods were broadly consistent spatially and for different vegetation types, but with large differences in the magnitude of the trend. The large intermethod variance was notably observed in arid and semiarid vegetation types. Our results also showed that change in vegetation green-up date is more closely correlated with temperature than with precipitation. However, the temperature sensitivity of spring vegetation green-up date became higher as precipitation increased, implying that precipitation is an important regulator of the response of vegetation spring phenology to change in temperature. This intricate linkage between spring phenology and precipitation must be taken into account in current phenological models which are mostly driven by temperature.",
"title": ""
},
{
"docid": "3b9b49f8c2773497f8e05bff4a594207",
"text": "SSD (Single Shot Detector) is one of the state-of-the-art object detection algorithms, and it combines high detection accuracy with real-time speed. However, it is widely recognized that SSD is less accurate in detecting small objects compared to large objects, because it ignores the context from outside the proposal boxes. In this paper, we present CSSD–a shorthand for context-aware single-shot multibox object detector. CSSD is built on top of SSD, with additional layers modeling multi-scale contexts. We describe two variants of CSSD, which differ in their context layers, using dilated convolution layers (DiCSSD) and deconvolution layers (DeCSSD) respectively. The experimental results show that the multi-scale context modeling significantly improves the detection accuracy. In addition, we study the relationship between effective receptive fields (ERFs) and the theoretical receptive fields (TRFs), particularly on a VGGNet. The empirical results further strengthen our conclusion that SSD coupled with context layers achieves better detection results especially for small objects (+3.2%AP@0.5 on MSCOCO compared to the newest SSD), while maintaining comparable runtime performance.",
"title": ""
},
{
"docid": "feef714b024ad00086a5303a8b74b0a4",
"text": "Detecting and recognizing text in natural scene images is a challenging, yet not completely solved task. In recent years several new systems that try to solve at least one of the two sub-tasks (text detection and text recognition) have been proposed. In this paper we present STN-OCR, a step towards semi-supervised neural networks for scene text recognition that can be optimized end-to-end. In contrast to most existing works that consist of multiple deep neural networks and several pre-processing steps we propose to use a single deep neural network that learns to detect and recognize text from natural images in a semi-supervised way. STN-OCR is a network that integrates and jointly learns a spatial transformer network [16], that can learn to detect text regions in an image, and a text recognition network that takes the identified text regions and recognizes their textual content. We investigate how our model behaves on a range of different tasks (detection and recognition of characters, and lines of text). Experimental results on public benchmark datasets show the ability of our model to handle a variety of different tasks, without substantial changes in its overall network structure.",
"title": ""
}
] | scidocsrr |
ae915c34345204fff23600f7737930a7 | Treatment planning of the edentulous mandible | [
{
"docid": "0ad4432a79ea6b3eefbe940adf55ff7b",
"text": "This study reviews the long-term outcome of prostheses and fixtures (implants) in 759 totally edentulous jaws of 700 patients. A total of 4,636 standard fixtures were placed and followed according to the osseointegration method for a maximum of 24 years by the original team at the University of Göteborg. Standardized annual clinical and radiographic examinations were conducted as far as possible. A lifetable approach was applied for statistical analysis. Sufficient numbers of fixtures and prostheses for a detailed statistical analysis were present for observation times up to 15 years. More than 95% of maxillae had continuous prosthesis stability at 5 and 10 years, and at least 92% at 15 years. The figure for mandibles was 99% at all time intervals. Calculated from the time of fixture placement, the estimated survival rates for individual fixtures in the maxilla were 84%, 89%, and 92% at 5 years; 81% and 82% at 10 years; and 78% at 15 years. In the mandible they were 91%, 98%, and 99% at 5 years; 89% and 98% at 10 years; and 86% at 15 years. (The different percentages at 5 and 10 years refer to results for different routine groups of fixtures with 5 to 10, 10 to 15, and 1 to 5 years of observation time, respectively.) The results of this study concur with multicenter and earlier results for the osseointegration method.",
"title": ""
}
] | [
{
"docid": "525f9a7321a7b45111a19f458c9b976a",
"text": "This paper provides a literature review on Adaptive Line Enhancer (ALE) methods based on adaptive noise cancellation systems. Such methods have been used in various applications, including communication systems, biomedical engineering, and industrial applications. Developments in ALE in noise cancellation are reviewed, including the principles, adaptive algorithms, and recent modifications on the filter design proposed to increase the convergence rate and reduce the computational complexity for future implementation. The advantages and drawbacks of various adaptive algorithms, such as the Least Mean Square, Recursive Least Square, Affine Projection Algorithm, and their variants, are discussed in this review. Design modifications of filter structures used in ALE are also evaluated. Such filters include Finite Impulse Response, Infinite Impulse Response, lattice, and nonlinear adaptive filters. These structural modifications aim to achieve better adaptive filter performance in ALE systems. Finally, a perspective of future research on ALE systems is presented for further consideration.",
"title": ""
},
{
"docid": "188df015d60168b57f37e39089f3b14e",
"text": "Implementation of a nutrition programme for team sports involves application of scientific research together with the social skills necessary to work with a sports medicine and coaching staff. Both field and court team sports are characterized by intermittent activity requiring a heavy reliance on dietary carbohydrate sources to maintain and replenish glycogen. Energy and substrate demands are high during pre-season training and matches, and moderate during training in the competitive season. Dietary planning must include enough carbohydrate on a moderate energy budget, while also meeting protein needs. Strength and power team sports require muscle-building programmes that must be accompanied by adequate nutrition, and simple anthropometric measurements can help the nutrition practitioner monitor and assess body composition periodically. Use of a body mass scale and a urine specific gravity refractometer can help identify athletes prone to dehydration. Sports beverages and caffeine are the most common supplements, while opinion on the practical effectiveness of creatine is divided. Late-maturing adolescent athletes become concerned about gaining size and muscle, and assessment of maturity status can be carried out with anthropometric procedures. An overriding consideration is that an individual approach is needed to meet each athlete's nutritional needs.",
"title": ""
},
{
"docid": "1d3eb22e6f244fbe05d0cc0f7ee37b84",
"text": "Robots that use learned perceptual models in the real world must be able to safely handle cases where they are forced to make decisions in scenarios that are unlike any of their training examples. However, state-of-the-art deep learning methods are known to produce erratic or unsafe predictions when faced with novel inputs. Furthermore, recent ensemble, bootstrap and dropout methods for quantifying neural network uncertainty may not efficiently provide accurate uncertainty estimates when queried with inputs that are very different from their training data. Rather than unconditionally trusting the predictions of a neural network for unpredictable real-world data, we use an autoencoder to recognize when a query is novel, and revert to a safe prior behavior. With this capability, we can deploy an autonomous deep learning system in arbitrary environments, without concern for whether it has received the appropriate training. We demonstrate our method with a vision-guided robot that can leverage its deep neural network to navigate 50% faster than a safe baseline policy in familiar types of environments, while reverting to the prior behavior in novel environments so that it can safely collect additional training data and continually improve. A video illustrating our approach is available at: http://groups.csail.mit.edu/rrg/videos/safe visual navigation.",
"title": ""
},
{
"docid": "bf08bc98eb9ef7a18163fc310b10bcf6",
"text": "An ultra-low voltage, low power, low line sensitivity MOSFET-only sub-threshold voltage reference with no amplifiers is presented. The low sensitivity is realized by the difference between two complementary currents and second-order compensation improves the temperature stability. The bulk-driven technique is used and most of the transistors work in the sub-threshold region, which allow a remarkable reduction in the minimum supply voltage and power consumption. Moreover, a trimming circuit is adopted to compensate the process-related reference voltage variation while the line sensitivity is not affected. The proposed voltage reference has been fabricated in the 0.18 μm 1.8 V CMOS process. The measurement results show that the reference could operate on a 0.45 V supply voltage. For supply voltages ranging from 0.45 to 1.8 V the power consumption is 15.6 nW, and the average temperature coefficient is 59.4 ppm/°C across a temperature range of -40 to 85 °C and a mean line sensitivity of 0.033%. The power supply rejection ratio measured at 100 Hz is -50.3 dB. In addition, the chip area is 0.013 mm2.",
"title": ""
},
{
"docid": "5f5258cec772f97c18a5ccda25f7a617",
"text": "While most prognostics approaches focus on accurate computation of the degradation rate and the Remaining Useful Life (RUL) of individual components, it is the rate at which the performance of subsystems and systems degrade that is of greater interest to the operators and maintenance personnel of these systems. Accurate and reliable predictions make it possible to plan the future operations of the system, optimize maintenance scheduling activities, and maximize system life. In system-level prognostics, we are interested in determining when the performance of a system will fall below pre-defined levels of acceptable performance. Our focus in this paper is on developing a comprehensive methodology for system-level prognostics under uncertainty that combines the use of an estimation scheme that tracks system state and degradation parameters, along with a prediction scheme that computes the RUL as a stochastic distribution over the life of the system. Two parallel methods have been developed for prediction: (1) methods based on stochastic simulation and (2) optimization methods, such as first order reliability method (FORM). We compare the computational complexity and the accuracy of the two prediction approaches using a case study of a system with several degrading components.",
"title": ""
},
{
"docid": "80fd067dd6cf2fe85ade3c632e82c04c",
"text": "0957-4174/$ see front matter 2009 Elsevier Ltd. A doi:10.1016/j.eswa.2009.03.046 * Corresponding author. Tel.: +98 09126121921. E-mail address: shahbazi_mo@yahoo.com (M. Sha Recommender systems are powerful tools that allow companies to present personalized offers to their customers and defined as a system which recommends an appropriate product or service after learning the customers’ preferences and desires. Extracting users’ preferences through their buying behavior and history of purchased products is the most important element of such systems. Due to users’ unlimited and unpredictable desires, identifying their preferences is very complicated process. In most researches, less attention has been paid to user’s preferences varieties in different product categories. This may decrease quality of recommended items. In this paper, we introduced a technique of recommendation in the context of online retail store which extracts user preferences in each product category separately and provides more personalized recommendations through employing product taxonomy, attributes of product categories, web usage mining and combination of two well-known filtering methods: collaborative and content-based filtering. Experimental results show that proposed technique improves quality, as compared to similar approaches. 2009 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "ef1f901e0fb01a99728282f743cc1c65",
"text": "Matching facial sketches to digital face images has widespread application in law enforcement scenarios. Recent advancements in technology have led to the availability of sketch generation tools, minimizing the requirement of a sketch artist. While these sketches have helped in manual authentication, matching composite sketches with digital mugshot photos automatically show high modality gap. This research aims to address the task of matching a composite face sketch image to digital images by proposing a transfer learning based evolutionary algorithm. A new feature descriptor, Histogram of Image Moments, has also been presented for encoding features across modalities. Moreover, IIITD Composite Face Sketch Database of 150 subjects is presented to fill the gap due to limited availability of databases in this problem domain. Experimental evaluation and analysis on the proposed dataset show the effectiveness of the transfer learning approach for performing cross-modality recognition.",
"title": ""
},
{
"docid": "1cbc333cce4870cc0f465bb76b6e4d3c",
"text": "This note attempts to raise awareness within the network research community about the security of the interdomain routing infrastructure. We identify several attack objectives and mechanisms, assuming that one or more BGP routers have been compromised. Then, we review the existing and proposed countermeasures, showing that they are either generally ineffective (route filtering), or probably too heavyweight to deploy (S-BGP). We also review several recent proposals, and conclude by arguing that a significant research effort is urgently needed in the area of routing security.",
"title": ""
},
{
"docid": "04476184ca103b9d8012827615fc84a5",
"text": "In order to investigate the local filtering behavior of the Retinex model, we propose a new implementation in which paths are replaced by 2-D pixel sprays, hence the name \"random spray Retinex.\" A peculiar feature of this implementation is the way its parameters can be controlled to perform spatial investigation. The parameters' tuning is accomplished by an unsupervised method based on quantitative measures. This procedure has been validated via user panel tests. Furthermore, the spray approach has faster performances than the path-wise one. Tests and results are presented and discussed",
"title": ""
},
{
"docid": "760edd83045a80dbb2231c0ffbef2ea7",
"text": "This paper proposes a method to modify a traditional convolutional neural network (CNN) into an interpretable CNN, in order to clarify knowledge representations in high conv-layers of the CNN. In an interpretable CNN, each filter in a high conv-layer represents a specific object part. Our interpretable CNNs use the same training data as ordinary CNNs without a need for any annotations of object parts or textures for supervision. The interpretable CNN automatically assigns each filter in a high conv-layer with an object part during the learning process. We can apply our method to different types of CNNs with various structures. The explicit knowledge representation in an interpretable CNN can help people understand the logic inside a CNN, i.e. what patterns are memorized by the CNN for prediction. Experiments have shown that filters in an interpretable CNN are more semantically meaningful than those in a traditional CNN. The code is available at https://github.com/zqs1022/interpretableCNN.",
"title": ""
},
{
"docid": "8785e51ebe39057012b81c37a6ddc097",
"text": "In this paper, we present a set of distributed algorithms for estimating the electro-mechanical oscillation modes of large power system networks using synchrophasors. With the number of phasor measurement units (PMUs) in the North American grid scaling up to the thousands, system operators are gradually inclining toward distributed cyber-physical architectures for executing wide-area monitoring and control operations. Traditional centralized approaches, in fact, are anticipated to become untenable soon due to various factors such as data volume, security, communication overhead, and failure to adhere to real-time deadlines. To address this challenge, we propose three different communication and computational architectures by which estimators located at the control centers of various utility companies can run local optimization algorithms using local PMU data, and thereafter communicate with other estimators to reach a global solution. Both synchronous and asynchronous communications are considered. Each architecture integrates a centralized Prony-based algorithm with several variants of alternating direction method of multipliers (ADMM). We discuss the relative advantages and bottlenecks of each architecture using simulations of IEEE 68-bus and IEEE 145-bus power system, as well as an Exo-GENI-based software defined network.",
"title": ""
},
{
"docid": "2eac0a94204b24132e496639d759f545",
"text": "Numerous algorithms have been proposed for transferring knowledge from a label-rich domain (source) to a label-scarce domain (target). Most of them are proposed for closed-set scenario, where the source and the target domain completely share the class of their samples. However, in practice, a target domain can contain samples of classes that are not shared by the source domain. We call such classes the “unknown class” and algorithms that work well in the open set situation are very practical. However, most existing distribution matching methods for domain adaptation do not work well in this setting because unknown target samples should not be aligned with the source. In this paper, we propose a method for an open set domain adaptation scenario, which utilizes adversarial training. This approach allows to extract features that separate unknown target from known target samples. During training, we assign two options to the feature generator: aligning target samples with source known ones or rejecting them as unknown target ones. Our method was extensively evaluated and outperformed other methods with a large margin in most settings.",
"title": ""
},
{
"docid": "6aebae4d8ed0af23a38a945b85c3b6ff",
"text": "Modern web applications are conglomerations of JavaScript written by multiple authors: application developers routinely incorporate code from third-party libraries, and mashup applications synthesize data and code hosted at different sites. In current browsers, a web application’s developer and user must trust third-party code in libraries not to leak the user’s sensitive information from within applications. Even worse, in the status quo, the only way to implement some mashups is for the user to give her login credentials for one site to the operator of another site. Fundamentally, today’s browser security model trades privacy for flexibility because it lacks a sufficient mechanism for confining untrusted code. We present COWL, a robust JavaScript confinement system for modern web browsers. COWL introduces label-based mandatory access control to browsing contexts in a way that is fully backwardcompatible with legacy web content. We use a series of case-study applications to motivate COWL’s design and demonstrate how COWL allows both the inclusion of untrusted scripts in applications and the building of mashups that combine sensitive information from multiple mutually distrusting origins, all while protecting users’ privacy. Measurements of two COWL implementations, one in Firefox and one in Chromium, demonstrate a virtually imperceptible increase in page-load latency.",
"title": ""
},
{
"docid": "4a9474c0813646708400fc02c344a976",
"text": "Over the years, the Web has shrunk the world, allowing individuals to share viewpoints with many more people than they are able to in real life. At the same time, however, it has also enabled anti-social and toxic behavior to occur at an unprecedented scale. Video sharing platforms like YouTube receive uploads from millions of users, covering a wide variety of topics and allowing others to comment and interact in response. Unfortunately, these communities are periodically plagued with aggression and hate attacks. In particular, recent work has showed how these attacks often take place as a result of “raids,” i.e., organized efforts coordinated by ad-hoc mobs from third-party communities. Despite the increasing relevance of this phenomenon, online services often lack effective countermeasures to mitigate it. Unlike well-studied problems like spam and phishing, coordinated aggressive behavior both targets and is perpetrated by humans, making defense mechanisms that look for automated activity unsuitable. Therefore, the de-facto solution is to reactively rely on user reports and human reviews. In this paper, we propose an automated solution to identify videos that are likely to be targeted by coordinated harassers. First, we characterize and model YouTube videos along several axes (metadata, audio transcripts, thumbnails) based on a ground truth dataset of raid victims. Then, we use an ensemble of classifiers to determine the likelihood that a video will be raided with high accuracy (AUC up to 94%). Overall, our work paves the way for providing video platforms like YouTube with proactive systems to detect and mitigate coordinated hate attacks.",
"title": ""
},
{
"docid": "5e2c4ebf3c2b4f0e9aabc5eacd2d4b80",
"text": "Manually annotating object bounding boxes is central to building computer vision datasets, and it is very time consuming (annotating ILSVRC [53] took 35s for one high-quality box [62]). It involves clicking on imaginary comers of a tight box around the object. This is difficult as these comers are often outside the actual object and several adjustments are required to obtain a tight box. We propose extreme clicking instead: we ask the annotator to click on four physical points on the object: the top, bottom, left- and right-most points. This task is more natural and these points are easy to find. We crowd-source extreme point annotations for PASCAL VOC 2007 and 2012 and show that (1) annotation time is only 7s per box, 5 × faster than the traditional way of drawing boxes [62]: (2) the quality of the boxes is as good as the original ground-truth drawn the traditional way: (3) detectors trained on our annotations are as accurate as those trained on the original ground-truth. Moreover, our extreme clicking strategy not only yields box coordinates, but also four accurate boundary points. We show (4) how to incorporate them into GrabCut to obtain more accurate segmentations than those delivered when initializing it from bounding boxes: (5) semantic segmentations models trained on these segmentations outperform those trained on segmentations derived from bounding boxes.",
"title": ""
},
{
"docid": "0453d395af40160b4f66787bb9ac8e96",
"text": "Two aspect of programming languages, recursive definitions and type declarations are analyzed in detail. Church's %-calculus is used as a model of a programming language for purposes of the analysis. The main result on recursion is an analogue to Kleene's first recursion theorem: If A = FA for any %-expressions A and F, then A is an extension of YF in the sense that if E[YF], any expression containing YF, has a normal form then E[YF] = E[A]. Y is Curry's paradoxical combinator. The result is shown to be invariant for many different versions of Y. A system of types and type declarations is developed for the %-calculus and its semantic assumptions are identified. The system is shown to be adequate in the sense that it permits a preprocessor to check formulae prior to evaluation to prevent type errors. It is shown that any formula with a valid assignment of types to all its subexpressions must have a normal form. Thesis Supervisor: John M. Wozencraft Title: Professor of Electrical Engineering",
"title": ""
},
{
"docid": "5809c27155986612b0e4a9ef48b3b930",
"text": "Using the same technologies for both work and private life is an intensifying phenomenon. Mostly driven by the availability of consumer IT in the marketplace, individuals—more often than not—are tempted to use privately-owned IT rather than enterprise IT in order to get their job done. However, this dual-use of technologies comes at a price. It intensifies the blurring of the boundaries between work and private life—a development in stark contrast to the widely spread desire of employees to segment more clearly between their two lives. If employees cannot follow their segmentation preference, it is proposed that this misfit will result in work-to-life conflict (WtLC). This paper investigates the relationship between organizational encouragement for dual use and WtLC. Via a quantitative survey, we find a significant relationship between the two concepts. In line with boundary theory, the effect is stronger for people that strive for work-life segmentation.",
"title": ""
},
{
"docid": "5cc07ca331deb81681b3f18355c0e586",
"text": "BACKGROUND\nHyaluronic acid (HA) formulations are used for aesthetic applications. Different cross-linking technologies result in HA dermal fillers with specific characteristic visco-elastic properties.\n\n\nOBJECTIVE\nBio-integration of three CE-marked HA dermal fillers, a cohesive (monophasic) polydensified, a cohesive (monophasic) monodensified and a non-cohesive (biphasic) filler, was analysed with a follow-up of 114 days after injection. Our aim was to study the tolerability and inflammatory response of these fillers, their patterns of distribution in the dermis, and influence on tissue integrity.\n\n\nMETHODS\nThree HA formulations were injected intradermally into the iliac crest region in 15 subjects. Tissue samples were analysed after 8 and 114 days by histology and immunohistochemistry, and visualized using optical and transmission electron microscopy.\n\n\nRESULTS\nHistological results demonstrated that the tested HA fillers showed specific characteristic bio-integration patterns in the reticular dermis. Observations under the optical and electron microscopes revealed morphological conservation of cutaneous structures. Immunohistochemical results confirmed absence of inflammation, immune response and granuloma.\n\n\nCONCLUSION\nThe three tested dermal fillers show an excellent tolerability and preservation of the dermal cells and matrix components. Their tissue integration was dependent on their visco-elastic properties. The cohesive polydensified filler showed the most homogeneous integration with an optimal spreading within the reticular dermis, which is achieved by filling even the smallest spaces between collagen bundles and elastin fibrils, while preserving the structural integrity of the latter. Absence of adverse reactions confirms safety of the tested HA dermal fillers.",
"title": ""
},
{
"docid": "d646a27556108caebd7ee5691c98d642",
"text": "■ Abstract Theory and research on small group performance and decision making is reviewed. Recent trends in group performance research have found that process gains as well as losses are possible, and both are frequently explained by situational and procedural contexts that differentially affect motivation and resource coordination. Research has continued on classic topics (e.g., brainstorming, group goal setting, stress, and group performance) and relatively new areas (e.g., collective induction). Group decision making research has focused on preference combination for continuous response distributions and group information processing. New approaches (e.g., group-level signal detection) and traditional topics (e.g., groupthink) are discussed. New directions, such as nonlinear dynamic systems, evolutionary adaptation, and technological advances, should keep small group research vigorous well into the future.",
"title": ""
},
{
"docid": "9adb3374f58016ee9bec1daf7392a64e",
"text": "To develop a less genotype-dependent maize-transformation procedure, we used 10-month-old Type I callus as target tissue for microprojectile bombardment. Twelve transgenic callus lines were obtained from two of the three anther-culture-derived callus cultures representing different gentic backgrounds. Multiple fertile transgenic plants (T0) were regenerated from each transgenic callus line. Transgenic leaves treated with the herbicide Basta showed no symptoms, indicating that one of the two introduced genes, bar, was functionally expressing. Data from DNA hybridization analysis confirmed that the introduced genes (bar and uidA) were integrated into the plant genome and that all lines derived from independent transformation events. Transmission of the introduced genes and the functional expression of bar in T1 progeny was also confirmed. Germination of T1 immature embryos in the presence of bialaphos was used as a screen for functional expression of bar; however, leaf painting of T1 plants proved a more accurate predictor of bar expression in plants. This study suggests that maize Type I callus can be transformed efficiently through microprojectile bombardment and that fertile transgenic plants can be recovered. This system should facilitate the direct introduction of agronomically important genes in to commercial genotypes.",
"title": ""
}
] | scidocsrr |
9c84b6482fb0227033fad7f84394f593 | Surveying instructor and learner attitudes toward e-learning | [
{
"docid": "6e70435f2d434581f00962b5677facfa",
"text": "Many institutions of Higher Education and Corporate Training Institutes are resorting to e-Learning as a means of solving authentic learning and performance problems, while other institutions are hopping onto the bandwagon simply because they do not want to be left behind. Success is crucial because an unsuccessful effort to implement e-Learning will be clearly reflected in terms of the return of investment. One of the most crucial prerequisites for successful implementation of e-Learning is the need for careful consideration of the underlying pedagogy, or how learning takes place online. In practice, however, this is often the most neglected aspect in any effort to implement e-Learning. The purpose of this paper is to identify the pedagogical principles underlying the teaching and learning activities that constitute effective e-Learning. An analysis and synthesis of the principles and ideas by the practicing e-Learning company employing the author will also be presented, in the perspective of deploying an effective Learning Management Systems (LMS). D 2002 Published by Elsevier Science Inc.",
"title": ""
},
{
"docid": "57b945df75d8cd446caa82ae02074c3a",
"text": "A key issue facing information systems researchers and practitioners has been the difficulty in creating favorable user reactions to new technologies. Insufficient or ineffective training has been identified as one of the key factors underlying this disappointing reality. Among the various enhancements to training being examined in research, the role of intrinsic motivation as a lever to create favorable user perceptions has not been sufficiently exploited. In this research, two studies were conducted to compare a traditional training method with a training method that included a component aimed at enhancing intrinsic motivation. The results strongly favored the use of an intrinsic motivator during training. Key implications for theory and practice are discussed. 1Allen Lee was the accepting senior editor for this paper. Sometimes when I am at my computer, I say to my wife, \"1'11 be done in just a minute\" and the next thing I know she's standing over me saying, \"It's been an hour!\" (Collins 1989, p. 11). Investment in emerging information technology applications can lead to productivity gains only if they are accepted and used. Several theoretical perspectives have emphasized the importance of user perceptions of ease of use as a key factor affecting acceptance of information technology. Favorable ease of use perceptions are necessary for initial acceptance (Davis et al. 1989), which of course is essential for adoption and continued use. During the early stages of learning and use, ease of use perceptions are significantly affected by training (e.g., Venkatesh and Davis 1996). Investments in training by organizations have been very high and have continued to grow rapidly. Kelly (1982) reported a figure of $100B, which doubled in about a decade (McKenna 1990). In spite of such large investments in training , only 10% of training leads to a change in behavior On trainees' jobs (Georgenson 1982). Therefore, it is important to understand the most effective training methods (e.g., Facteau et al. 1995) and to improve existing training methods in order to foster favorable perceptions among users about the ease of use of a technology, which in turn should lead to acceptance and usage. Prior research in psychology (e.g., Deci 1975) suggests that intrinsic motivation during training leads to beneficial outcomes. However, traditional training methods in information systems research have tended to emphasize imparting knowledge to potential users (e.g., Nelson and Cheney 1987) while not paying Sufficient attention to intrinsic motivation during training. The two field …",
"title": ""
}
] | [
{
"docid": "a1317e75e1616b2922e5df02f69076d9",
"text": "Fixed-length embeddings of words are very useful for a variety of tasks in speech and language processing. Here we systematically explore two methods of computing fixed-length embeddings for variable-length sequences. We evaluate their susceptibility to phonetic and speaker-specific variability on English, a high resource language, and Xitsonga, a low resource language, using two evaluation metrics: ABX word discrimination and ROC-AUC on same-different phoneme n-grams. We show that a simple downsampling method supplemented with length information can be competitive with the variable-length input feature representation on both evaluations. Recurrent autoencoders trained without supervision can yield even better results at the expense of increased computational complexity.",
"title": ""
},
{
"docid": "a258c6b5abf18cb3880e4bc7a436c887",
"text": "We propose a reactive controller framework for robust quadrupedal locomotion, designed to cope with terrain irregularities, trajectory tracking errors and poor state estimation. The framework comprises two main modules: One related to the generation of elliptic trajectories for the feet and the other for control of the stability of the whole robot. We propose a task space CPG-based trajectory generation that can be modulated according to terrain irregularities and the posture of the robot trunk. To improve the robot's stability, we implemented a null space based attitude control for the trunk and a push recovery algorithm based on the concept of capture points. Simulations and experimental results on the hydraulically actuated quadruped robot HyQ will be presented to demonstrate the effectiveness of our framework.",
"title": ""
},
{
"docid": "fa294b6e6cc033474a9bd6671299088a",
"text": "A compact microstrip bandpass filter is proposed in this paper for millimeter-wave applications. The filter consists of three new non-uniform coupled-line modified hairpin resonators. The filter is designed to exhibit a fractional bandwidth of about 5.0 % at a center of frequency of approximately 34 GHz. The shapes of the hairpin resonators are modified to suppress the unwanted spurious harmonic response. As a result, the filter exhibits very wide stopband with a rejection level better than 10-dB. In addition, the filter shows a transmission zero close to upper edge of the desired passband which enhances the selectivity of the passband. The filter design is successfully realized on a RT/Duroid 6002 with a dielectric constant of 2.94 and a thickness of 127μm. The filter is simulated, and fabricated to demonstrate the proposed technique where excellent agreement is obtained.",
"title": ""
},
{
"docid": "b9a32c7b3e56174016d920c9ec4c1456",
"text": "When an individual has been inoculated with a plasmodium parasite, a variety of clinical effects may follow, within the sequence: Infection?asymptomatic parasitaemia?uncomplicated illness?severe malaria?death. Many factors influence the disease manifestations of the infection and the likelihood of progression to the last two categories. These factors include the species of the infecting parasite, the levels of innate and acquired immunity of the host, and the timing and efficacy of treatment, if any.",
"title": ""
},
{
"docid": "333bffc73983bc159248420d76afc7e6",
"text": "In this paper we study approximate landmark-based methods for point-to-point distance estimation in very large networks. These methods involve selecting a subset of nodes as landmarks and computing offline the distances from each node in the graph to those landmarks. At runtime, when the distance between a pair of nodes is needed, it can be estimated quickly by combining the precomputed distances. We prove that selecting the optimal set of landmarks is an NP-hard problem, and thus heuristic solutions need to be employed. We therefore explore theoretical insights to devise a variety of simple methods that scale well in very large networks. The efficiency of the suggested techniques is tested experimentally using five real-world graphs having millions of edges. While theoretical bounds support the claim that random landmarks work well in practice, our extensive experimentation shows that smart landmark selection can yield dramatically more accurate results: for a given target accuracy, our methods require as much as 250 times less space than selecting landmarks at random. In addition, we demonstrate that at a very small accuracy loss our techniques are several orders of magnitude faster than the state-of-the-art exact methods. Finally, we study an application of our methods to the task of social search in large graphs.",
"title": ""
},
{
"docid": "8ff903bfdc620639013c62a7e123ef54",
"text": "A new aging model for Lithium Ion batteries is proposed based on theoretical models of crack propagation. This provides an exponential dependence of aging on stress such as depth of discharge. A measure of stress is derived from arbitrary charge and discharge histories to include mixed use in vehicles or vehicle to grid operations. This aging model is combined with an empirical equivalent circuit model, to provide time and state of charge dependent charge and discharge characteristics at any rate and temperature. This choice of model results in a cycle life prediction with few parameters to be fitted to a particular cell.",
"title": ""
},
{
"docid": "365f1706f0eb13b2ca34227960d430be",
"text": "This paper describes a unified neural architecture for identifying and classifying multi-typed semantic relations between words in a sentence. We investigate two typical and well-studied tasks: semantic role labeling (SRL) which identifies the relations between predicates and arguments, and relation classification (RC) which focuses on the relation between two entities or nominals. While mostly studied separately in prior work, we show that the two tasks can be effectively connected and modeled using a general architecture. Experiments on CoNLL-2009 benchmark datasets show that our SRL models significantly outperform state-of-the-art approaches. Our RC models also yield competitive performance with the best published records. Furthermore, we show that the two tasks can be trained jointly with multi-task learning, resulting in additive significant improvements for SRL.",
"title": ""
},
{
"docid": "29822df06340218a43fbcf046cbeb264",
"text": "Twitter provides search services to help people find new users to follow by recommending popular users or their friends' friends. However, these services do not offer the most relevant users to follow for a user. Furthermore, Twitter does not provide yet the search services to find the most interesting tweet messages for a user either. In this paper, we propose TWITOBI, a recommendation system for Twitter using probabilistic modeling for collaborative filtering which can recommend top-K users to follow and top-K tweets to read for a user. Our novel probabilistic model utilizes not only tweet messages but also the relationships between users. We develop an estimation algorithm for learning our model parameters and present its parallelized algorithm using MapReduce to handle large data. Our performance study with real-life data sets confirms the effectiveness and scalability of our algorithms.",
"title": ""
},
{
"docid": "b17325ed7fd45b6e8bd47303dbc52fb7",
"text": "The compute-intensive and power-efficient brain has been a source of inspiration for a broad range of neural networks to solve recognition and classification tasks. Compared to the supervised deep neural networks (DNNs) that have been very successful on well-defined labeled datasets, bio-plausible spiking neural networks (SNNs) with unsupervised learning rules could be well-suited for training and learning representations from the massive amount of unlabeled data. To design dense and low-power hardware for such unsupervised SNNs, we employ digital CMOS circuits for neuromorphic processors, which can exploit transistor scaling and dynamic voltage scaling to the utmost. As exemplary works, we present two neuromorphic processor designs. First, a 45nm neuromorphic chip is designed for a small-scale network of spiking neurons. Through tight integration of memory (64k SRAM synapses) and computation (256 digital neurons), the chip demonstrates on-chip learning on pattern recognition tasks down to 0.53V supply. Secondly, a 65nm neuromorphic processor that performs unsupervised on-line spike-clustering for brain sensing applications is implemented with 1.2k digital neurons and 4.7k latch-based synapses. The processor exhibits a power consumption of 9.3μW/ch at 0.3V supply. Synapse hardware precision, efficient synapse memory array access, overfitting, and voltage scaling will be discussed for dense and power-efficient on-chip learning for CMOS spiking neural networks.",
"title": ""
},
{
"docid": "f7e63d994615d0a2902483bb2409f653",
"text": "A novel half-rate source-series-terminated (SST) transmitter in 65nm bulk CMOS technology is presented in this paper. Compared to previous half-rate SST transmitters, the proposed one consists of four binary-weighted slices increasing proportionally as 1x, 2x, 4x and 8x and the range of pre-emphasis level is increased greatly by the clock-match block to adapt to different channel. The half-rate transmitter can adjust the pre-emphasis level from 1.2dB to 23dB. The transmitter output impedance is adjustable from 33ohms to 64ohms. A power consumption of 24mW is measured at a transmit rate of 6 GB/s which is power-efficient compared to previous half-rate SST transmitter.",
"title": ""
},
{
"docid": "2d3bcc3c6759c584c2deaa8b99fcbfb0",
"text": "We develop a dynamic programming algorithm for haplotype block partitioning to minimize the number of representative single nucleotide polymorphisms (SNPs) required to account for most of the common haplotypes in each block. Any measure of haplotype quality can be used in the algorithm and of course the measure should depend on the specific application. The dynamic programming algorithm is applied to analyze the chromosome 21 haplotype data of Patil et al. [Patil, N., Berno, A. J., Hinds, D. A., Barrett, W. A., Doshi, J. M., Hacker, C. R., Kautzer, C. R., Lee, D. H., Marjoribanks, C., McDonough, D. P., et al. (2001) Science 294, 1719-1723], who searched for blocks of limited haplotype diversity. Using the same criteria as in Patil et al., we identify a total of 3,582 representative SNPs and 2,575 blocks that are 21.5% and 37.7% smaller, respectively, than those identified using a greedy algorithm of Patil et al. We also apply the dynamic programming algorithm to the same data set based on haplotype diversity. A total of 3,982 representative SNPs and 1,884 blocks are identified to account for 95% of the haplotype diversity in each block.",
"title": ""
},
{
"docid": "a84b5b928c5082185e658a9cbcce0e45",
"text": "Failures of Internet services and enterprise systems lead to user dissatisfaction and considerable loss of revenue. Since manual diagnosis is often laborious and slow, there is considerable interest in tools that can diagnose the cause of failures quickly and automatically from system-monitoring data. This paper identifies two key data-mining problems arising in a platform for automated diagnosis called {\\em Fa}. Fa uses monitoring data to construct a database of{\\em failure signatures} against which data from undiagnosed failures can be matched. Two novel challenges we address are to make signatures robust to the noisy monitoring data in production systems, and to generate reliable confidence estimates for matches. Fa uses a new technique called {\\em anomaly-based clustering} when the signature database has no high-confidence match for an undiagnosed failure. This technique clusters monitoring data based on how it differs from the failure data, and pinpoints attributes linked to the failure. We show the effectiveness of Fa through a comprehensive experimental evaluation based on failures from a production setting, a variety of failures injected in a testbed, and synthetic data.",
"title": ""
},
{
"docid": "26439bd538c8f0b5d6fba3140e609aab",
"text": "A planar antenna with a broadband feeding structure is presented and analyzed for ultrawideband applications. The proposed antenna consists of a suspended radiator fed by an n-shape microstrip feed. Study shows that this antenna achieves an impedance bandwidth from 3.1-5.1 GHz (48%) for a reflection of coefficient of iotaS11iota < -10 dB, and an average gain of 7.7 dBi. Stable boresight radiation patterns are achieved across the entire operating frequency band, by suppressing the high order mode resonances. This design exhibits good mechanical tolerance and manufacturability.",
"title": ""
},
{
"docid": "c57d9c4f62606e8fccef34ddd22edaec",
"text": "Based on research into learning programming and a review of program visualization research, we designed an educational software tool that aims to target students' apparent fragile knowledge of elementary programming which manifests as difficulties in tracing and writing even simple programs. Most existing tools build on a single supporting technology and focus on one aspect of learning. For example, visualization tools support the development of a conceptual-level understanding of how programs work, and automatic assessment tools give feedback on submitted tasks. We implemented a combined tool that closely integrates programming tasks with visualizations of program execution and thus lets students practice writing code and more easily transition to visually tracing it in order to locate programming errors. In this paper we present Jype, a web-based tool that provides an environment for visualizing the line-by-line execution of Python programs and for solving programming exercises with support for immediate automatic feedback and an integrated visual debugger. Moreover, the debugger allows stepping back in the visualization of the execution as if executing in reverse. Jype is built for Python, when most research in programming education support tools revolves around Java.",
"title": ""
},
{
"docid": "863fbc4e33b1af53dd89e237d4c00ccd",
"text": "BACKGROUND\nRhesus macaques are widely used in biomedical research. Automated behavior monitoring can be useful in various fields (including neuroscience), as well as having applications to animal welfare but current technology lags behind that developed for other species. One difficulty facing developers is the reliable identification of individual macaques within a group especially as pair- and group-housing of macaques becomes standard. Current published methods require either implantation or wearing of a tracking device.\n\n\nNEW METHOD\nI present face recognition, in combination with face detection, as a method to non-invasively identify individual rhesus macaques in videos. The face recognition method utilizes local-binary patterns in combination with a local discriminant classification algorithm.\n\n\nRESULTS\nA classification accuracy of between 90 and 96% was achieved for four different groups. Group size, number of training images and challenging image conditions such as high contrast all had an impact on classification accuracy. I demonstrate that these methods can be applied in real time using standard affordable hardware and a potential application to studies of social structure.\n\n\nCOMPARISON WITH EXISTING METHOD(S)\nFace recognition methods have been reported for humans and other primate species such as chimpanzees but not rhesus macaques. The classification accuracy with this method is comparable to that for chimpanzees. Face recognition has the advantage over other methods for identifying rhesus macaques such as tags and collars of being non-invasive.\n\n\nCONCLUSIONS\nThis is the first reported method for face recognition of rhesus macaques, has high classification accuracy and can be implemented in real time.",
"title": ""
},
{
"docid": "5f57fdeba1afdfb7dcbd8832f806bc48",
"text": "OBJECTIVES\nAdolescents spend increasingly more time on electronic devices, and sleep deficiency rising in adolescents constitutes a major public health concern. The aim of the present study was to investigate daytime screen use and use of electronic devices before bedtime in relation to sleep.\n\n\nDESIGN\nA large cross-sectional population-based survey study from 2012, the youth@hordaland study, in Hordaland County in Norway.\n\n\nSETTING\nCross-sectional general community-based study.\n\n\nPARTICIPANTS\n9846 adolescents from three age cohorts aged 16-19. The main independent variables were type and frequency of electronic devices at bedtime and hours of screen-time during leisure time.\n\n\nOUTCOMES\nSleep variables calculated based on self-report including bedtime, rise time, time in bed, sleep duration, sleep onset latency and wake after sleep onset.\n\n\nRESULTS\nAdolescents spent a large amount of time during the day and at bedtime using electronic devices. Daytime and bedtime use of electronic devices were both related to sleep measures, with an increased risk of short sleep duration, long sleep onset latency and increased sleep deficiency. A dose-response relationship emerged between sleep duration and use of electronic devices, exemplified by the association between PC use and risk of less than 5 h of sleep (OR=2.70, 95% CI 2.14 to 3.39), and comparable lower odds for 7-8 h of sleep (OR=1.64, 95% CI 1.38 to 1.96).\n\n\nCONCLUSIONS\nUse of electronic devices is frequent in adolescence, during the day as well as at bedtime. The results demonstrate a negative relation between use of technology and sleep, suggesting that recommendations on healthy media use could include restrictions on electronic devices.",
"title": ""
},
{
"docid": "a620202abaa0f11d2d324b05a29986dd",
"text": "Haze is an atmospheric phenomenon that significantly degrades the visibility of outdoor scenes. This is mainly due to the atmosphere particles that absorb and scatter the light. This paper introduces a novel single image approach that enhances the visibility of such degraded images. Our method is a fusion-based strategy that derives from two original hazy image inputs by applying a white balance and a contrast enhancing procedure. To blend effectively the information of the derived inputs to preserve the regions with good visibility, we filter their important features by computing three measures (weight maps): luminance, chromaticity, and saliency. To minimize artifacts introduced by the weight maps, our approach is designed in a multiscale fashion, using a Laplacian pyramid representation. We are the first to demonstrate the utility and effectiveness of a fusion-based technique for dehazing based on a single degraded image. The method performs in a per-pixel fashion, which is straightforward to implement. The experimental results demonstrate that the method yields results comparative to and even better than the more complex state-of-the-art techniques, having the advantage of being appropriate for real-time applications.",
"title": ""
},
{
"docid": "17935656561dd7af52b8aa2bf9d0fbf8",
"text": "In this paper, we present a new method to build public-key Cryptosystem. The method is based on the state explosion problem occurred in the computing of average number of tokens in the places of Stochastic Petri Net (SPN). The reachable markings in the coverability tree of SPN are used as the encryption keys. Accordingly, multiple encryption keys can be generated, thus we can perform multiple encryption to get as strong security as we expect. The decryption is realized through solving a group of ordinary differential equations from Continuous Petri Net (CPN), which has the same underlying Petri net as that of SPN. The decipherment difficulty for attackers is in exponential order. The contribution of this paper is that we can use continuous mathematics to design cryptosystems besides discrete mathematics.",
"title": ""
},
{
"docid": "97f8aac5ba3c218542e35d17e7bb46e4",
"text": "MOTIVATION\nComparison of multimegabase genomic DNA sequences is a popular technique for finding and annotating conserved genome features. Performing such comparisons entails finding many short local alignments between sequences up to tens of megabases in length. To process such long sequences efficiently, existing algorithms find alignments by expanding around short runs of matching bases with no substitutions or other differences. Unfortunately, exact matches that are short enough to occur often in significant alignments also occur frequently by chance in the background sequence. Thus, these algorithms must trade off between efficiency and sensitivity to features without long exact matches.\n\n\nRESULTS\nWe introduce a new algorithm, LSH-ALL-PAIRS, to find ungapped local alignments in genomic sequence with up to a specified fraction of substitutions. The length and substitution rate of these alignments can be chosen so that they appear frequently in significant similarities yet still remain rare in the background sequence. The algorithm finds ungapped alignments efficiently using a randomized search technique, locality-sensitive hashing. We have found LSH-ALL-PAIRS to be both efficient and sensitive for finding local similarities with as little as 63% identity in mammalian genomic sequences up to tens of megabases in length",
"title": ""
},
{
"docid": "9bd7df9356b87225948cf42bf3ea4604",
"text": "Machine learning techniques work best when the data used for training resembles the data used for evaluation. This holds true for learned single-image denoising algorithms, which are applied to real raw camera sensor readings but, due to practical constraints, are often trained on synthetic image data. Though it is understood that generalizing from synthetic to real images requires careful consideration of the noise properties of camera sensors, the other aspects of an image processing pipeline (such as gain, color correction, and tone mapping) are often overlooked, despite their significant effect on how raw measurements are transformed into finished images. To address this, we present a technique to “unprocess” images by inverting each step of an image processing pipeline, thereby allowing us to synthesize realistic raw sensor measurements from commonly available Internet photos. We additionally model the relevant components of an image processing pipeline when evaluating our loss function, which allows training to be aware of all relevant photometric processing that will occur after denoising. By unprocessing and processing training data and model outputs in this way, we are able to train a simple convolutional neural network that has 14%-38% lower error rates and is 9×-18× faster than the previous state of the art on the Darmstadt Noise Dataset [30], and generalizes to sensors outside of that dataset as well.",
"title": ""
}
] | scidocsrr |
3d555d54137c0098a464f1a3dd17c129 | Combinatorial Energy Learning for Image Segmentation | [
{
"docid": "305084bdd1a4a33c8d9fd102f864fb52",
"text": "We present a method for hierarchical image segmentation that defines a disaffinity graph on the image, over-segments it into watershed basins, defines a new graph on the basins, and then merges basins with a modified, size-dependent version of single linkage clustering. The quasilinear runtime of the method makes it suitable for segmenting large images. We illustrate the method on the challenging problem of segmenting 3D electron microscopic brain images.",
"title": ""
},
{
"docid": "ef3ec9af6f5fe3ff71f5c54a1de262d8",
"text": "This paper proposes an information theoretic criterion for comparing two partitions, or clusterings, of the same data set. The criterion, called variation of information (VI), measures the amount of information lost and gained in changing from clustering C to clustering C′. The basic properties of VI are presented and discussed. We focus on two kinds of properties: (1) those that help one build intuition about the new criterion (in particular, it is shown the VI is a true metric on the space of clusterings), and (2) those that pertain to the comparability of VI values over different experimental conditions. As the latter properties have rarely been discussed explicitly before, other existing comparison criteria are also examined in their light. Finally we present the VI from an axiomatic point of view, showing that it is the only “sensible” criterion for comparing partitions that is both aligned to the lattice and convexely additive. As a consequence, we prove an impossibility result for comparing partitions: there is no criterion for comparing partitions that simultaneoulsly satisfies the above two desirable properties and is bounded.",
"title": ""
}
] | [
{
"docid": "bed6069b49afd9c238267c6a276f1ede",
"text": "Today's top high performance computing systems run applications with hundreds of thousands of processes, contain hundreds of storage nodes, and must meet massive I/O requirements for capacity and performance. These leadership-class systems face daunting challenges to deploying scalable I/O systems. In this paper we present a case study of the I/O challenges to performance and scalability on Intrepid, the IBM Blue Gene/P system at the Argonne Leadership Computing Facility. Listed in the top 5 fastest supercomputers of 2008, Intrepid runs computational science applications with intensive demands on the I/O system. We show that Intrepid's file and storage system sustain high performance under varying workloads as the applications scale with the number of processes.",
"title": ""
},
{
"docid": "17f8affa7807932f58950303c3b62296",
"text": "The Internet of Things (IoT) has grown in recent years to a huge branch of research: RFID, sensors and actuators as typical IoT devices are increasingly used as resources integrated into new value added applications of the Future Internet and are intelligently combined using standardised software services. While most of the current work on IoT integration focuses on areas of the actual technical implementation, little attention has been given to the integration of the IoT paradigm and its devices coming with native software components as resources in business processes of traditional enterprise resource planning systems. In this paper, we identify and integrate IoT resources as a novel automatic resource type on the business process layer beyond the classical human resource task-centric view of the business process model in order to face expanding resource planning challenges of future enterprise environments.",
"title": ""
},
{
"docid": "d652a2ffb4708b76d8fa70d7a452ae9f",
"text": "If we are to achieve natural human–robot interaction, we may need to complement current vision and speech interfaces. Touch may provide us with an extra tool in this quest. In this paper we demonstrate the role of touch in interaction between a robot and a human. We show how infrared sensors located on robots can be easily used to detect and distinguish human interaction, in this case interaction with individual children. This application of infrared sensors potentially has many uses; for example, in entertainment or service robotics. This system could also benefit therapy or rehabilitation, where the observation and recording of movement and interaction is important. In the long term, this technique might enable robots to adapt to individuals or individual types of user. c © 2006 Published by Elsevier B.V.",
"title": ""
},
{
"docid": "99f57f28f8c262d4234d07deb9dcf49d",
"text": "Historically, conversational systems have focused on goal-directed interaction and this focus defined much of the work in the field of spoken dialog systems. More recently researchers have started to focus on nongoal-oriented dialog systems often referred to as ”chat” systems. We can refer to these as Chat-oriented Dialog (CHAD)systems. CHAD systems are not task-oriented and focus on what can be described as social conversation where the goal is to interact while maintaining an appropriate level of engagement with a human interlocutor. Work to date has identified a number of techniques that can be used to implement working CHADs but it has also highlighted important limitations. This note describes CHAD characteristics and proposes a research agenda.",
"title": ""
},
{
"docid": "5d6553e68c63827fba9a2bf237e596a3",
"text": "Risk management has become a vital topic both in academia and practice during the past several decades. Most business intelligence tools have been used to enhance risk management, and the risk management tools have benefited from business intelligence approaches. This introductory article provides a review of the state-ofthe-art research in business intelligence in risk management, and of the work that has been accepted for publication in this issue.",
"title": ""
},
{
"docid": "77016e263b6b864536dc9b017e822cd0",
"text": "Recent findings identified electroencephalography (EEG) microstates as the electrophysiological correlates of fMRI resting-state networks. Microstates are defined as short periods (100 ms) during which the EEG scalp topography remains quasi-stable; that is, the global topography is fixed but strength might vary and polarity invert. Microstates represent the subsecond coherent activation within global functional brain networks. Surprisingly, these rapidly changing EEG microstates correlate significantly with activity in fMRI resting-state networks after convolution with the hemodynamic response function that constitutes a strong temporal smoothing filter. We postulate here that microstate sequences should reveal scale-free, self-similar dynamics to explain this remarkable effect and thus that microstate time series show dependencies over long time ranges. To that aim, we deploy wavelet-based fractal analysis that allows determining scale-free behavior. We find strong statistical evidence that microstate sequences are scale free over six dyadic scales covering the 256-ms to 16-s range. The degree of long-range dependency is maintained when shuffling the local microstate labels but becomes indistinguishable from white noise when equalizing microstate durations, which indicates that temporal dynamics are their key characteristic. These results advance the understanding of temporal dynamics of brain-scale neuronal network models such as the global workspace model. Whereas microstates can be considered the \"atoms of thoughts,\" the shortest constituting elements of cognition, they carry a dynamic signature that is reminiscent at characteristic timescales up to multiple seconds. The scale-free dynamics of the microstates might be the basis for the rapid reorganization and adaptation of the functional networks of the brain.",
"title": ""
},
{
"docid": "57ffea840501c5e9a77a2c7e0d609d07",
"text": "Datasets power computer vison research and drive breakthroughs. Larger and larger datasets are needed to better utilize the exponentially increasing computing power. However, datasets generation is both time consuming and expensive as human beings are required for image labelling. Human labelling cannot scale well. How can we generate larger image datasets easier and faster? In this paper, we provide a new approach for large scale datasets generation. We generate images from 3D object models directly. The large volume of freely available 3D CAD models and mature computer graphics techniques make generating large scale image datasets from 3D models very efficient. As little human effort involved in this process, it can scale very well. Rather than releasing a static dataset, we will also provide a software library for dataset generation so that the computer vision community can easily extend or modify the datasets accordingly.",
"title": ""
},
{
"docid": "1e4a96b76c36ffd9a54d161c85699722",
"text": "Apoptosis and autophagy are both tightly regulated biological processes that play a central role in tissue homeostasis, development, and disease. The anti-apoptotic protein, Bcl-2, interacts with the evolutionarily conserved autophagy protein, Beclin 1. However, little is known about the functional significance of this interaction. Here, we show that wild-type Bcl-2 antiapoptotic proteins, but not Beclin 1 binding defective mutants of Bcl-2, inhibit Beclin 1-dependent autophagy in yeast and mammalian cells and that cardiac Bcl-2 transgenic expression inhibits autophagy in mouse heart muscle. Furthermore, Beclin 1 mutants that cannot bind to Bcl-2 induce more autophagy than wild-type Beclin 1 and, unlike wild-type Beclin 1, promote cell death. Thus, Bcl-2 not only functions as an antiapoptotic protein, but also as an antiautophagy protein via its inhibitory interaction with Beclin 1. This antiautophagy function of Bcl-2 may help maintain autophagy at levels that are compatible with cell survival, rather than cell death.",
"title": ""
},
{
"docid": "3e767477d7b2f36badd1f581262794cd",
"text": "Inspired by the path transform (PT) algorithm of Zelinsky et al. the novel algorithm of complete coverage called complete coverage D* (CCD*) algorithm is developed, based on the D* search of the two-dimensional occupancy grid map of the environment. Unlike the original PT algorithm the CCD* algorithm takes the robot’s dimension into account, with emphasis on safety of motion and reductions of path length and search time. Additionally, the proposed CCD* algorithm has ability to produce new complete coverage path as the environment changes. The algorithms were tested on a Pioneer 3DX mobile robot equipped with a laser range finder.",
"title": ""
},
{
"docid": "a25145ff3cee8f8b3e590e803e651294",
"text": "Search personalization aims to tailor search results to each specific user based on the user’s personal interests and preferences (i.e., the user profile). Recent research approaches to search personalization by modelling the potential 3-way relationship between the submitted query, the user and the search results (i.e., documents). That relationship is then used to personalize the search results to that user. In this paper, we introduce a novel embedding model based on capsule network, which recently is a breakthrough in deep learning, to model the 3-way relationships for search personalization. In the model, each user (submitted query or returned document) is embedded by a vector in the same vector space. The 3-way relationship is described as a triple of (query, user, document) which is then modeled as a 3-column matrix containing the three embedding vectors. After that, the 3-column matrix is fed into a deep learning architecture to re-rank the search results returned by a basis ranker. Experimental results on query logs from a commercial web search engine show that our model achieves better performances than the basis ranker as well as strong search personalization baselines.",
"title": ""
},
{
"docid": "3d3101e08720513e1b7891cddead8967",
"text": "Conclusion A new model with the self-adaptive attention temperature for the softness of attention distribution; Improved results on the datasets and showed that attention temperature differs for decoding diverse words; Try to figure out better demonstration for the effects of temperature. Abstract A new NMT model with self-adaptive attention temperature; Attention varies at each time step based on the temperature; Improved results on the benchmark datasets; Analysis shows that temperatures vary when translating words of different types.",
"title": ""
},
{
"docid": "c5d54aec7e0a3189143f86d4574d0829",
"text": "Donald Scavia, J. David Allan, Kristin K. Arend, Steven Bartell, Dmitry Beletsky, Nate S. Bosch, Stephen B. Brandt, Ruth D. Briland, Irem Daloğlu, Joseph V. DePinto, David M. Dolan, Mary Anne Evans, Troy M. Farmer, Daisuke Goto, Haejin Han, Tomas O. Höök, Roger Knight, Stuart A. Ludsin, Doran Mason, Anna M. Michalak, R. Peter Richards, James J. Roberts, Daniel K. Rucinski, Edward Rutherford, David J. Schwab, Timothy M. Sesterhenn, Hongyan Zhang, Yuntao Zhou",
"title": ""
},
{
"docid": "9f81e82aa60f06f3eac37d9bce3c9707",
"text": "Active contours are image segmentation methods that minimize the total energy of the contour to be segmented. Among the active contour methods, the radial methods have lower computational complexity and can be applied in real time. This work aims to present a new radial active contour technique, called pSnakes, using the 1D Hilbert transform as external energy. The pSnakes method is based on the fact that the beams in ultrasound equipment diverge from a single point of the probe, thus enabling the use of polar coordinates in the segmentation. The control points or nodes of the active contour are obtained in pairs and are called twin nodes. The internal energies as well as the external one, Hilbertian energy, are redefined. The results showed that pSnakes can be used in image segmentation of short-axis echocardiogram images and that they were effective in image segmentation of the left ventricle. The echo-cardiologist's golden standard showed that the pSnakes was the best method when compared with other methods. The main contributions of this work are the use of pSnakes and Hilbertian energy, as the external energy, in image segmentation. The Hilbertian energy is calculated by the 1D Hilbert transform. Compared with traditional methods, the pSnakes method is more suitable for ultrasound images because it is not affected by variations in image contrast, such as noise. The experimental results obtained by the left ventricle segmentation of echocardiographic images demonstrated the advantages of the proposed model. The results presented in this paper are justified due to an improved performance of the Hilbert energy in the presence of speckle noise.",
"title": ""
},
{
"docid": "d18d4780cc259da28da90485bd3f0974",
"text": "L'ostéogenèse imparfaite (OI) est un groupe hétérogène de maladies affectant le collagène de type I et caractérisées par une fragilité osseuse. Les formes létales sont rares et se caractérisent par une micromélie avec déformation des membres. Un diagnostic anténatal d'OI létale a été fait dans deux cas, par échographie à 17 et à 25 semaines d'aménorrhée, complélées par un scanner du squelette fœtal dans un cas. Une interruption thérapeutique de grossesse a été indiquée dans les deux cas. Pan African Medical Journal. 2016; 25:88 doi:10.11604/pamj.2016.25.88.5871 This article is available online at: http://www.panafrican-med-journal.com/content/article/25/88/full/ © Houda EL Mhabrech et al. The Pan African Medical Journal ISSN 1937-8688. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Pan African Medical Journal – ISSN: 19378688 (www.panafrican-med-journal.com) Published in partnership with the African Field Epidemiology Network (AFENET). (www.afenet.net) Case report Open Access",
"title": ""
},
{
"docid": "195a57e6aaf0e8496e808366ff4d1bca",
"text": "BACKGROUND AND PURPOSE\nThe Mini-Mental State Examination (MMSE) is insensitive to mild cognitive impairment and executive function. The more recently developed Montreal Cognitive Assessment (MoCA), an alternative, brief 30-point global cognitive screen, might pick up more cognitive abnormalities in patients with cerebrovascular disease.\n\n\nMETHODS\nIn a population-based study (Oxford Vascular Study) of transient ischemic attack and stroke, the MMSE and MoCA were administered to consecutive patients at 6-month or 5-year follow-up. Accepted cutoffs of MMSE <27 and MoCA <26 were taken to indicate cognitive impairment.\n\n\nRESULTS\nOf 493 patients, 413 (84%) were testable. Untestable patients were older (75.5 versus 69.9 years, P<0.001) and often had dysphasia (24%) or dementia (15%). Although MMSE and MoCA scores were highly correlated (r(2)=0.80, P<0.001), MMSE scores were skewed toward higher values, whereas MoCA scores were normally distributed: median and interquartile range 28 (26 to 29) and 23 (20 to 26), respectively. Two hundred ninety-one of 413 (70%) patients had MoCA <26 of whom 162 had MMSE > or =27, whereas only 5 patients had MoCA > or =26 and MMSE <27 (P<0.0001). In patients with MMSE > or =27, MoCA <26 was associated with higher Rankin scores (P=0.0003) and deficits in delayed recall, abstraction, visuospatial/executive function, and sustained attention.\n\n\nCONCLUSIONS\nThe MoCA picked up substantially more cognitive abnormalities after transient ischemic attack and stroke than the MMSE, demonstrating deficits in executive function, attention, and delayed recall.",
"title": ""
},
{
"docid": "608ff895020f03ce670ea4acb875daa1",
"text": "String diagrams turn algebraic equations into topological moves. These moves have often-recurring shapes, involving the sliding of one diagram past another. In the past, this fact has mostly been considered in terms of its computational convenience; this thesis investigates its deeper reasons. In the first part of the thesis, we individuate, at its root, the dual nature of polygraphs — freely generated higher categories — as presentations of higher algebraic theories, and as combinatorial descriptions of “directed spaces”: CW complexes whose cells have their boundary subdivided into an input and an output region. Operations of polygraphs modelled on operations of topological spaces, including an asymmetric tensor product refining the topological product of spaces, can then be used as the foundation of a compositional universal algebra, where sliding moves of string diagrams for an algebraic theory arise from tensor products of sub-theories. Such compositions come automatically with higher-dimensional coherence cells. We provide several examples of compositional reconstructions of higher algebraic theories, including theories of braids, homomorphisms of algebras, and distributive laws of monads, from operations on polygraphs. In this regard, the standard formalism of polygraphs, based on strict ω-categories, suffers from some technical problems and inadequacies, including the difficulty of computing tensor products. As a solution, we propose a notion of regular polygraph, barring cell boundaries that are degenerate, that is, not homeomorphic to a disk of the appropriate dimension. We develop the theory of globular posets, based on ideas of poset topology, in order to specify a category of shapes for cells of regular polygraphs. We prove that these shapes satisfy our non-degeneracy requirement, and show how to calculate their tensor products. Then, we introduce a notion of weak unit for regular polygraphs, allowing us to recover weakly degenerate boundaries in low dimensions. We prove that the existence of weak units is equivalent to the existence of cells satisfying certain divisibility properties — an elementary notion of equivalence cell — which prompts new questions on the relation between units and equivalences in higher categories. In the second part of the thesis, we turn to specific applications of diagrammatic algebra to quantum theory. First, we re-evaluate certain aspects of quantum theory from the point of view of categorical universal algebra, which leads us to define a 2-dimensional refinement of the category of Hilbert spaces. Then, we focus on the problem of axiomatising fragments of quantum theory, and present the ZW calculus, the first complete diagrammatic axiomatisation of the theory of qubits. The ZW calculus has specific advantages over its predecessors, the ZX calculi, including the existence of a computationally meaningful normal form, and of a fragment whose diagrams can be interpreted physically as setups of fermionic oscillators. Moreover, the choice of its generators reflects an operational classification of entangled quantum states, which is not well-understood for states of more than 3 qubits: our result opens the way for a compositional understanding of entanglement for generic multipartite states. We conclude with preliminary results on generalisations of the ZW calculus to quantum systems of arbitrary finite dimension, including the definition of a universal set of generators in each dimension.",
"title": ""
},
{
"docid": "bd0b233e4f19abaf97dcb85042114155",
"text": "BACKGROUND/PURPOSE\nHair straighteners are very popular around the world, although they can cause great damage to the hair. Thus, the characterization of the mechanical properties of curly hair using advanced techniques is very important to clarify how hair straighteners act on hair fibers and to contribute to the development of effective products. On this basis, we chose two nonconventional hair straighteners (formaldehyde and glyoxylic acid) to investigate how hair straightening treatments affect the mechanical properties of curly hair.\n\n\nMETHODS\nThe mechanical properties of curly hair were evaluated using a tensile test, differential scanning calorimetry (DSC) measurements, scanning electronic microscopy (SEM), a torsion modulus, dynamic vapor sorption (DVS), and Fourier transform infrared spectroscopy (FTIR) analysis.\n\n\nRESULTS\nThe techniques used effectively helped the understanding of the influence of nonconventional hair straighteners on hair properties. For the break stress and the break extension tests, formaldehyde showed a marked decrease in these parameters, with great hair damage. Glyoxylic acid had a slight effect compared to formaldehyde treatment. Both treatments showed an increase in shear modulus, a decrease in water sorption and damage to the hair surface.\n\n\nCONCLUSIONS\nA combination of the techniques used in this study permitted a better understanding of nonconventional hair straightener treatments and also supported the choice of the better treatment, considering a good relationship between efficacy and safety. Thus, it is very important to determine the properties of hair for the development of cosmetics used to improve the beauty of curly hair.",
"title": ""
},
{
"docid": "d3e8dce306eb20a31ac6b686364d0415",
"text": "Lung diseases are the deadliest disease in the world. The computer aided detection system in lung diseases needed accurate lung segmentation to preplan the pulmonary treatment and surgeries. The researchers undergone the lung segmentation need a deep study and understanding of the traditional and recent papers developed in the lung segmentation field so that they can continue their research journey in an efficient way with successful outcomes. The need of reviewing the research papers is now a most wanted one for researches so this paper makes a survey on recent trends of pulmonary lung segmentation. Seven recent papers are carried out to analyze the performance characterization of themselves. The working methods, purpose for development, name of algorithm and drawbacks of the method are taken into consideration for the survey work. The tables and charts are drawn based on the reviewed papers. The study of lung segmentation research is more helpful to new and fresh researchers who are committed their research in lung segmentation.",
"title": ""
},
{
"docid": "424f871e0e2eabf8b1e636f73d0b1c7d",
"text": "Simultaneous localization and mapping (SLAM) methods provide real-time estimation of 3-D models from the sole input of a handheld camera, routinely in mobile robotics scenarios. Medical endoscopic sequences mimic a robotic scenario in which a handheld camera (monocular endoscope) moves along an unknown trajectory while observing an unknown cavity. However, the feasibility and accuracy of SLAM methods have not been extensively validated with human in vivo image sequences. In this work, we propose a monocular visual SLAM algorithm tailored to deal with medical image sequences in order to provide an up-to-scale 3-D map of the observed cavity and the endoscope trajectory at frame rate. The algorithm is validated over synthetic data and human in vivo sequences corresponding to 15 laparoscopic hernioplasties where accurate ground-truth distances are available. It can be concluded that the proposed procedure is: 1) noninvasive, because only a standard monocular endoscope and a surgical tool are used; 2) convenient, because only a hand-controlled exploratory motion is needed; 3) fast, because the algorithm provides the 3-D map and the trajectory in real time; 4) accurate, because it has been validated with respect to ground-truth; and 5) robust to inter-patient variability, because it has performed successfully over the validation sequences.",
"title": ""
},
{
"docid": "7d3b8f381710cb196ba126f2b1942d57",
"text": "Radar devices can be used in nonintrusive situations to monitor vital sign, through clothes or behind walls. By detecting and extracting body motion linked to physiological activity, accurate simultaneous estimations of both heart rate (HR) and respiration rate (RR) is possible. However, most research to date has focused on front monitoring of superficial motion of the chest. In this paper, body penetration of electromagnetic (EM) wave is investigated to perform back monitoring of human subjects. Using body-coupled antennas and an ultra-wideband (UWB) pulsed radar, in-body monitoring of lungs and heart motion was achieved. An optimised location of measurement in the back of a subject is presented, to enhance signal-to-noise ratio and limit attenuation of reflected radar signals. Phase-based detection techniques are then investigated for back measurements of vital sign, in conjunction with frequency estimation methods that reduce the impact of parasite signals. Finally, an algorithm combining these techniques is presented to allow robust and real-time estimation of both HR and RR. Static and dynamic tests were conducted, and demonstrated the possibility of using this sensor in future health monitoring systems, especially in the form of a smart car seat for driver monitoring.",
"title": ""
}
] | scidocsrr |
b30189c05d6d8215f8eaaa093a562443 | Computer aided clothing pattern design with 3D editing and pattern alteration | [
{
"docid": "84e0768338a7c643dc93fb6fbdc16ac4",
"text": "Clothing computer design systems include three integrated parts: garment pattern design in 2D/3D, virtual try-on and realistic clothing simulation. Some important results have been obtained in pattern design and clothing simulation since the 1980s. However, in the area of virtual try-on, only limited methods have been proposed which are applicable to some defined garment styles or under restrictive sewing assumptions. This paper presents a series of new techniques from virtually sewing up complex garment patterns on human models to visualizing design effects through physical-based real-time simulation. We first employ an hierarchy of ellipsoids to approximate human models in which the bounding ellipsoids are optimized recursively.Wealso present a newscheme for including contact friction and resolving collisions. Four types of user interactive operation are introduced to manipulate cloth patterns for pre-positioning, virtual sewing and later obtaining cloth simulation. In the cloth simulation, we propose a simplified cloth dynamic model and an integration scheme to realize a high quality realtime cloth simulation.We demonstrate the robustness of our proposed systems by complex garment style virtual try-on and cloth simulation. © 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "9197a5d92bd19ad29a82679bb2a94285",
"text": "Equation (1.1) expresses v0 as a convex combination of the neighbouring points v1, . . . , vk. In the simplest case k = 3, the weights λ1, λ2, λ3 are uniquely determined by (1.1) and (1.2) alone; they are the barycentric coordinates of v0 with respect to the triangle [v1, v2, v3], and they are positive. This motivates calling any set of non-negative weights satisfying (1.1–1.2) for general k, a set of coordinates for v0 with respect to v1, . . . , vk. There has long been an interest in generalizing barycentric coordinates to k-sided polygons with a view to possible multisided extensions of Bézier surfaces; see for example [8 ]. In this setting, one would normally be free to choose v1, . . . , vk to form a convex polygon but would need to allow v0 to be any point inside the polygon or on the polygon, i.e. on an edge or equal to a vertex. More recently, the need for such coordinates arose in methods for parameterization [2 ] and morphing [5 ], [6 ] of triangulations. Here the points v0, v1, . . . , vk will be vertices of a (planar) triangulation and so the point v0 will never lie on an edge of the polygon formed by v1, . . . , vk. If we require no particular properties of the coordinates, the problem is easily solved. Because v0 lies in the convex hull of v1, . . . , vk, there must exist at least one triangle T = [vi1 , vi2 , vi3 ] which contains v0, and so we can take λi1 , λi2 , λi3 to be the three barycentric coordinates of v0 with respect to T , and make the remaining coordinates zero. However, these coordinates depend randomly on the choice of triangle. An improvement is to take an average of such coordinates over certain covering triangles, as proposed in [2 ]. The resulting coordinates depend continuously on v0, v1, . . . , vk, yet still not smoothly. The",
"title": ""
}
] | [
{
"docid": "097fd4372f5a17c5de5c6a6a3fdaeaa8",
"text": "Discriminative training in query spelling correction is difficult due to the complex internal structures of the data. Recent work on query spelling correction suggests a two stage approach a noisy channel model that is used to retrieve a number of candidate corrections, followed by discriminatively trained ranker applied to these candidates. The ranker, however, suffers from the fact the low recall of the first, suboptimal, search stage. This paper proposes to directly optimize the search stage with a discriminative model based on latent structural SVM. In this model, we treat query spelling correction as a multiclass classification problem with structured input and output. The latent structural information is used to model the alignment of words in the spelling correction process. Experiment results show that as a standalone speller, our model outperforms all the baseline systems. It also attains a higher recall compared with the noisy channel model, and can therefore serve as a better filtering stage when combined with a ranker.",
"title": ""
},
{
"docid": "322161b4a43b56e4770d239fe4d2c4c0",
"text": "Graph pattern matching has become a routine process in emerging applications such as social networks. In practice a data graph is typically large, and is frequently updated with small changes. It is often prohibitively expensive to recompute matches from scratch via batch algorithms when the graph is updated. With this comes the need for incremental algorithms that compute changes to the matches in response to updates, to minimize unnecessary recomputation. This paper investigates incremental algorithms for graph pattern matching defined in terms of graph simulation, bounded simulation and subgraph isomorphism. (1) For simulation, we provide incremental algorithms for unit updates and certain graph patterns. These algorithms are optimal: in linear time in the size of the changes in the input and output, which characterizes the cost that is inherent to the problem itself. For general patterns we show that the incremental matching problem is unbounded, i.e., its cost is not determined by the size of the changes alone. (2) For bounded simulation, we show that the problem is unbounded even for unit updates and path patterns. (3) For subgraph isomorphism, we show that the problem is intractable and unbounded for unit updates and path patterns. (4) For multiple updates, we develop an incremental algorithm for each of simulation, bounded simulation and subgraph isomorphism. We experimentally verify that these incremental algorithms significantly outperform their batch counterparts in response to small changes, using real-life data and synthetic data.",
"title": ""
},
{
"docid": "6e30387a3706dea2b7d18668c08bb31b",
"text": "The semantic web vision is one in which rich, ontology-based semantic markup will become widely available. The availability of semantic arkup on the web opens the way to novel, sophisticated forms of question answering. AquaLog is a portable question-answering system which akes queries expressed in natural language and an ontology as input, and returns answers drawn from one or more knowledge bases (KBs). We ay that AquaLog is portable because the configuration time required to customize the system for a particular ontology is negligible. AquaLog resents an elegant solution in which different strategies are combined together in a novel way. It makes use of the GATE NLP platform, string etric algorithms, WordNet and a novel ontology-based relation similarity service to make sense of user queries with respect to the target KB. oreover it also includes a learning component, which ensures that the performance of the system improves over the time, in response to the articular community jargon used by end users. 2007 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "be82ba26b91658ee90b6075c75c5f7bd",
"text": "In this paper, we propose a content-based recommendation Algorithm which extends and updates the Minkowski distance in order to address the challenge of matching people and jobs. The proposed algorithm FoDRA (Four Dimensions Recommendation Algorithm) quantifies the suitability of a job seeker for a job position in a more flexible way, using a structured form of the job and the candidate's profile, produced from a content analysis of the unstructured form of the job description and the candidate's CV. We conduct an experimental evaluation in order to check the quality and the effectiveness of FoDRA. Our primary study shows that FoDRA produces promising results and creates new prospects in the area of Job Recommender Systems (JRSs).",
"title": ""
},
{
"docid": "53e6fe645eb83bcc0f86638ee7ce5578",
"text": "Multi-hop reading comprehension focuses on one type of factoid question, where a system needs to properly integrate multiple pieces of evidence to correctly answer a question. Previous work approximates global evidence with local coreference information, encoding coreference chains with DAG-styled GRU layers within a gated-attention reader. However, coreference is limited in providing information for rich inference. We introduce a new method for better connecting global evidence, which forms more complex graphs compared to DAGs. To perform evidence integration on our graphs, we investigate two recent graph neural networks, namely graph convolutional network (GCN) and graph recurrent network (GRN). Experiments on two standard datasets show that richer global information leads to better answers. Our method performs better than all published results on these datasets.",
"title": ""
},
{
"docid": "e507c60b8eb437cbd6ca9692f1bf8727",
"text": "We propose an efficient method to estimate the accuracy of classifiers using only unlabeled data. We consider a setting with multiple classification problems where the target classes may be tied together through logical constraints. For example, a set of classes may be mutually exclusive, meaning that a data instance can belong to at most one of them. The proposed method is based on the intuition that: (i) when classifiers agree, they are more likely to be correct, and (ii) when the classifiers make a prediction that violates the constraints, at least one classifier must be making an error. Experiments on four real-world data sets produce accuracy estimates within a few percent of the true accuracy, using solely unlabeled data. Our models also outperform existing state-of-the-art solutions in both estimating accuracies, and combining multiple classifier outputs. The results emphasize the utility of logical constraints in estimating accuracy, thus validating our intuition.",
"title": ""
},
{
"docid": "9cf5fc6b50010d1489f12d161f302428",
"text": "With the advent of large code repositories and sophisticated search capabilities, code search is increasingly becoming a key software development activity. In this work we shed some light into how developers search for code through a case study performed at Google, using a combination of survey and log-analysis methodologies. Our study provides insights into what developers are doing and trying to learn when per- forming a search, search scope, query properties, and what a search session under different contexts usually entails. Our results indicate that programmers search for code very frequently, conducting an average of five search sessions with 12 total queries each workday. The search queries are often targeted at a particular code location and programmers are typically looking for code with which they are somewhat familiar. Further, programmers are generally seeking answers to questions about how to use an API, what code does, why something is failing, or where code is located.",
"title": ""
},
{
"docid": "2dc24d2ecaf2494543128f5e9e5f4864",
"text": "Design of a multiphase hybrid permanent magnet (HPM) generator for series hybrid electric vehicle (SHEV) application is presented in this paper. The proposed hybrid excitation topology together with an integral passive rectifier replaces the permanent magnet (PM) machine and active power electronics converter in hybrid/electric vehicles, facilitating the control over constant PM flux-linkage. The HPM topology includes two rotor elements: a PM and a wound field (WF) rotor with a 30% split ratio, coupled on the same shaft in one machine housing. Both rotors share a nine-phase stator that results in higher output voltage and power density when compared to three-phase design. The HPM generator design is based on a 3-kW benchmark PM machine to ensure the feasibility and validity of design tools and procedures. The WF rotor is designed to realize the same pole shape and number as in the PM section and to obtain the same flux-density in the air-gap while minimizing the WF input energy. Having designed and analyzed the machine using equivalent magnetic circuit and finite element analysis, a laboratory prototype HPM generator is built and tested with the measurements compared to predicted results confirming the designed characteristics and machine performance. The paper also presents comprehensive machine loss and mass audits.",
"title": ""
},
{
"docid": "5a9209f792ddd738d44f17b1175afe64",
"text": "PURPOSE\nIncrease in muscle force, endurance, and flexibility is desired in elite athletes to improve performance and to avoid injuries, but it is often hindered by the occurrence of myofascial trigger points. Dry needling (DN) has been shown effective in eliminating myofascial trigger points.\n\n\nMETHODS\nThis randomized controlled study in 30 elite youth soccer players of a professional soccer Bundesliga Club investigated the effects of four weekly sessions of DN plus water pressure massage on thigh muscle force and range of motion of hip flexion. A group receiving placebo laser plus water pressure massage and a group with no intervention served as controls. Data were collected at baseline (M1), treatment end (M2), and 4 wk follow-up (M3). Furthermore, a 5-month muscle injury follow-up was performed.\n\n\nRESULTS\nDN showed significant improvement of muscular endurance of knee extensors at M2 (P = 0.039) and M3 (P = 0.008) compared with M1 (M1:294.6 ± 15.4 N·m·s, M2:311 ± 25 N·m·s; M3:316.0 ± 28.6 N·m·s) and knee flexors at M2 compared with M1 (M1:163.5 ± 10.9 N·m·s, M2:188.5 ± 16.3 N·m·s) as well as hip flexion (M1: 81.5° ± 3.3°, M2:89.8° ± 2.8°; M3:91.8° ± 3.8°). Compared with placebo (3.8° ± 3.8°) and control (1.4° ± 2.9°), DN (10.3° ± 3.5°) showed a significant (P = 0.01 and P = 0.0002) effect at M3 compared with M1 on hip flexion; compared with nontreatment control (-10 ± 11.9 N·m), DN (5.2 ± 10.2 N·m) also significantly (P = 0.049) improved maximum force of knee extensors at M3 compared with M1. During the rest of the season, muscle injuries were less frequent in the DN group compared with the control group.\n\n\nCONCLUSION\nDN showed a significant effect on muscular endurance and hip flexion range of motion that persisted 4 wk posttreatment. Compared with placebo, it showed a significant effect on hip flexion that persisted 4 wk posttreatment, and compared with nonintervention control, it showed a significant effect on maximum force of knee extensors 4 wk posttreatment in elite soccer players.",
"title": ""
},
{
"docid": "7a52fecf868040da5db3bd6fcbdcc0b2",
"text": "Mobile edge computing (MEC) is a promising paradigm to provide cloud-computing capabilities in close proximity to mobile devices in fifth-generation (5G) networks. In this paper, we study energy-efficient computation offloading (EECO) mechanisms for MEC in 5G heterogeneous networks. We formulate an optimization problem to minimize the energy consumption of the offloading system, where the energy cost of both task computing and file transmission are taken into consideration. Incorporating the multi-access characteristics of the 5G heterogeneous network, we then design an EECO scheme, which jointly optimizes offloading and radio resource allocation to obtain the minimal energy consumption under the latency constraints. Numerical results demonstrate energy efficiency improvement of our proposed EECO scheme.",
"title": ""
},
{
"docid": "b18f53b2a33546a361d3efa1787510ef",
"text": "How do International Monetary Fund (IMF) policy reforms-so-called 'conditionalities'-affect government health expenditures? We collected archival documents on IMF programmes from 1995 to 2014 to identify the pathways and impact of conditionality on government health spending in 16 West African countries. Based on a qualitative analysis of the data, we find that IMF policy reforms reduce fiscal space for investment in health, limit staff expansion of doctors and nurses, and lead to budget execution challenges in health systems. Further, we use cross-national fixed effects models to evaluate the relationship between IMF-mandated policy reforms and government health spending, adjusting for confounding economic and demographic factors and for selection bias. Each additional binding IMF policy reform reduces government health expenditure per capita by 0.248 percent (95% CI -0.435 to -0.060). Overall, our findings suggest that IMF conditionality impedes progress toward the attainment of universal health coverage.",
"title": ""
},
{
"docid": "733f5029329072adf5635f0b4d0ad1cb",
"text": "We present a new approach to scalable training of deep learning machines by incremental block training with intra-block parallel optimization to leverage data parallelism and blockwise model-update filtering to stabilize learning process. By using an implementation on a distributed GPU cluster with an MPI-based HPC machine learning framework to coordinate parallel job scheduling and collective communication, we have trained successfully deep bidirectional long short-term memory (LSTM) recurrent neural networks (RNNs) and fully-connected feed-forward deep neural networks (DNNs) for large vocabulary continuous speech recognition on two benchmark tasks, namely 309-hour Switchboard-I task and 1,860-hour \"Switch-board+Fisher\" task. We achieve almost linear speedup up to 16 GPU cards on LSTM task and 64 GPU cards on DNN task, with either no degradation or improved recognition accuracy in comparison with that of running a traditional mini-batch based stochastic gradient descent training on a single GPU.",
"title": ""
},
{
"docid": "78e21364224b9aa95f86ac31e38916ef",
"text": "Gamification is the use of game design elements and game mechanics in non-game contexts. This idea has been used successfully in many web based businesses to increase user engagement. Some researchers suggest that it could also be used in web based education as a tool to increase student motivation and engagement. In an attempt to verify those theories, we have designed and built a gamification plugin for a well-known e-learning platform. We have made an experiment using this plugin in a university course, collecting quantitative and qualitative data in the process. Our findings suggest that some common beliefs about the benefits obtained when using games in education can be challenged. Students who completed the gamified experience got better scores in practical assignments and in overall score, but our findings also suggest that these students performed poorly on written assignments and participated less on class activities, although their initial motivation was higher. 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "203c797bea19fa0d4d66d65832ccbded",
"text": "In soccer, scoring goals is a fundamental objective which depends on many conditions and constraints. Considering the RoboCup soccer 2D-simulator, this paper presents a data mining-based decision system to identify the best time and direction to kick the ball towards the goal to maximize the overall chances of scoring during a simulated soccer match. Following the CRISP-DM methodology, data for modeling were extracted from matches of major international tournaments (10691 kicks), knowledge about soccer was embedded via transformation of variables and a Multilayer Perceptron was used to estimate the scoring chance. Experimental performance assessment to compare this approach against previous LDA-based approach was conducted from 100 matches. Several statistical metrics were used to analyze the performance of the system and the results showed an increase of 7.7% in the number of kicks, producing an overall increase of 78% in the number of goals scored.",
"title": ""
},
{
"docid": "9f16e90dc9b166682ac9e2a8b54e611a",
"text": "Lua is a programming language designed as scripting language, which is fast, lightweight, and suitable for embedded applications. Due to its features, Lua is widely used in the development of games and interactive applications for digital TV. However, during the development phase of such applications, some errors may be introduced, such as deadlock, arithmetic overflow, and division by zero. This paper describes a novel verification approach for software written in Lua, using as backend the Efficient SMTBased Context-Bounded Model Checker (ESBMC). Such an approach, called bounded model checking - Lua (BMCLua), consists in translating Lua programs into ANSI-C source code, which is then verified with ESBMC. Experimental results show that the proposed verification methodology is effective and efficient, when verifying safety properties in Lua programs. The performed experiments have shown that BMCLua produces an ANSI-C code that is more efficient for verification, when compared with other existing approaches. To the best of our knowledge, this work is the first that applies bounded model checking to the verification of Lua programs.",
"title": ""
},
{
"docid": "300b599e2e3cc3b63bc38276f9621a16",
"text": "Swarm intelligence (SI) is based on collective behavior of selforganized systems. Typical swarm intelligence schemes include Particle Swarm Optimization (PSO), Ant Colony System (ACS), Stochastic Diffusion Search (SDS), Bacteria Foraging (BF), the Artificial Bee Colony (ABC), and so on. Besides the applications to conventional optimization problems, SI can be used in controlling robots and unmanned vehicles, predicting social behaviors, enhancing the telecommunication and computer networks, etc. Indeed, the use of swarm optimization can be applied to a variety of fields in engineering and social sciences. In this paper, we review some popular algorithms in the field of swarm intelligence for problems of optimization. The overview and experiments of PSO, ACS, and ABC are given. Enhanced versions of these are also introduced. In addition, some comparisons are made between these algorithms.",
"title": ""
},
{
"docid": "28e1c4c2622353fc87d3d8a971b9e874",
"text": "In-memory key/value store (KV-store) is a key building block for many systems like databases and large websites. Two key requirements for such systems are efficiency and availability, which demand a KV-store to continuously handle millions of requests per second. A common approach to availability is using replication, such as primary-backup (PBR), which, however, requires M+1 times memory to tolerate M failures. This renders scarce memory unable to handle useful user jobs.\n This article makes the first case of building highly available in-memory KV-store by integrating erasure coding to achieve memory efficiency, while not notably degrading performance. A main challenge is that an in-memory KV-store has much scattered metadata. A single KV put may cause excessive coding operations and parity updates due to excessive small updates to metadata. Our approach, namely Cocytus, addresses this challenge by using a hybrid scheme that leverages PBR for small-sized and scattered data (e.g., metadata and key), while only applying erasure coding to relatively large data (e.g., value). To mitigate well-known issues like lengthy recovery of erasure coding, Cocytus uses an online recovery scheme by leveraging the replicated metadata information to continuously serve KV requests. To further demonstrate the usefulness of Cocytus, we have built a transaction layer by using Cocytus as a fast and reliable storage layer to store database records and transaction logs. We have integrated the design of Cocytus to Memcached and extend it to support in-memory transactions. Evaluation using YCSB with different KV configurations shows that Cocytus incurs low overhead for latency and throughput, can tolerate node failures with fast online recovery, while saving 33% to 46% memory compared to PBR when tolerating two failures. A further evaluation using the SmallBank OLTP benchmark shows that in-memory transactions can run atop Cocytus with high throughput, low latency, and low abort rate and recover fast from consecutive failures.",
"title": ""
},
{
"docid": "b5fd22854e75a29507cde380999705a2",
"text": "This study presents a high-efficiency-isolated single-input multiple-output bidirectional (HISMB) converter for a power storage system. According to the power management, the proposed HISMB converter can operate at a step-up state (energy release) and a step-down state (energy storage). At the step-up state, it can boost the voltage of a low-voltage input power source to a high-voltage-side dc bus and middle-voltage terminals. When the high-voltage-side dc bus has excess energy, one can reversely transmit the energy. The high-voltage dc bus can take as the main power, and middle-voltage output terminals can supply powers for individual middle-voltage dc loads or to charge auxiliary power sources (e.g., battery modules). In this study, a coupled-inductor-based HISMB converter accomplishes the bidirectional power control with the properties of voltage clamping and soft switching, and the corresponding device specifications are adequately designed. As a result, the energy of the leakage inductor of the coupled inductor can be recycled and released to the high-voltage-side dc bus and auxiliary power sources, and the voltage stresses on power switches can be greatly reduced. Moreover, the switching losses can be significantly decreased because of all power switches with zero-voltage-switching features. Therefore, the objectives of high-efficiency power conversion, electric isolation, bidirectional energy transmission, and various output voltage with different levels can be obtained. The effectiveness of the proposed HISMB converter is verified by experimental results of a kW-level prototype in practical applications.",
"title": ""
},
{
"docid": "b9d9fc6782c6ed9952d28309199e141d",
"text": "Recently, Edge Computing has emerged as a new computing paradigm dedicated for mobile applications for performance enhancement and energy efficiency purposes. Specifically, it benefits today's interactive applications on power-constrained devices by offloading compute-intensive tasks to the edge nodes which is in close proximity. Meanwhile, Field Programmable Gate Array (FPGA) is well known for its excellence in accelerating compute-intensive tasks such as deep learning algorithms in a high performance and energy efficiency manner due to its hardware-customizable nature. In this paper, we make the first attempt to leverage and combine the advantages of these two, and proposed a new network-assisted computing model, namely FPGA-based edge computing. As a case study, we choose three computer vision (CV)-based interactive mobile applications, and implement their backend computation parts on FPGA. By deploying such application-customized accelerator modules for computation offloading at the network edge, we experimentally demonstrate that this approach can effectively reduce response time for the applications and energy consumption for the entire system in comparison with traditional CPU-based edge/cloud offloading approach.",
"title": ""
}
] | scidocsrr |
c70bb419225959cbe49a6461f384f56b | Brain tumor segmentation using Cuckoo Search optimization for Magnetic Resonance Images | [
{
"docid": "9f76ca13fd4e61905f82a1009982adb9",
"text": "Image segmentation is an important processing step in many image, video and computer vision applications. Extensive research has been done in creating many different approaches and algorithms for image segmentation, but it is still difficult to assess whether one algorithm produces more accurate segmentations than another, whether it be for a particular image or set of images, or more generally, for a whole class of images. To date, the most common method for evaluating the effectiveness of a segmentation method is subjective evaluation, in which a human visually compares the image segmentation results for separate segmentation algorithms, which is a tedious process and inherently limits the depth of evaluation to a relatively small number of segmentation comparisons over a predetermined set of images. Another common evaluation alternative is supervised evaluation, in which a segmented image is compared against a manuallysegmented or pre-processed reference image. Evaluation methods that require user assistance, such as subjective evaluation and supervised evaluation, are infeasible in many vision applications, so unsupervised methods are necessary. Unsupervised evaluation enables the objective comparison of both different segmentation methods and different parameterizations of a single method, without requiring human visual comparisons or comparison with a manually-segmented or pre-processed reference image. Additionally, unsupervised methods generate results for individual images and images whose characteristics may not be known until evaluation time. Unsupervised methods are crucial to real-time segmentation evaluation, and can furthermore enable self-tuning of algorithm parameters based on evaluation results. In this paper, we examine the unsupervised objective evaluation methods that have been proposed in the literature. An extensive evaluation of these methods are presented. The advantages and shortcomings of the underlying design mechanisms in these methods are discussed and analyzed through analytical evaluation and empirical evaluation. Finally, possible future directions for research in unsupervised evaluation are proposed. 2007 Elsevier Inc. All rights reserved.",
"title": ""
}
] | [
{
"docid": "a38e863016bfcead5fd9af46365d4d5c",
"text": "Social networks generate a large amount of text content over time because of continuous interaction between participants. The mining of such social streams is more challenging than traditional text streams, because of the presence of both text content and implicit network structure within the stream. The problem of event detection is also closely related to clustering, because the events can only be inferred from aggregate trend changes in the stream. In this paper, we will study the two related problems of clustering and event detection in social streams. We will study both the supervised and unsupervised case for the event detection problem. We present experimental results illustrating the effectiveness of incorporating network structure in event discovery over purely content-based",
"title": ""
},
{
"docid": "2272d3ac8770f456c1cf2e461eba2da9",
"text": "EXECUTiVE SUMMARY This quarter, work continued on the design and construction of a robotic fingerspelling hand. The hand is being designed to aid in communication for individuals who are both deaf and blind. In the winter quarter, research was centered on determining an effective method of actuation for the robotic hand. This spring 2008 quarter, time was spent designing the mechanisms needed to mimic the size and motions of a human hand. Several methods were used to determine a proper size for the robotic hand, including using the ManneQuinPro human modeling system to approximate the size of an average male human hand and using the golden ratio to approximate the length of bone sections within the hand. After a proper average hand size was determined, a finger mechanism was designed in the SolidWorks design program that could be built and used in the robotic hand.",
"title": ""
},
{
"docid": "1212637c91d8c57299c922b6bde91ce8",
"text": "BACKGROUND\nIn the late 1980's, occupational science was introduced as a basic discipline that would provide a foundation for occupational therapy. As occupational science grows and develops, some question its relationship to occupational therapy and criticize the direction and extent of its growth and development.\n\n\nPURPOSE\nThis study was designed to describe and critically analyze the growth and development of occupational science and characterize how this has shaped its current status and relationship to occupational therapy.\n\n\nMETHOD\nUsing a mixed methods design, 54 occupational science documents published in the years 1990 and 2000 were critically analyzed to describe changes in the discipline between two points in time. Data describing a range of variables related to authorship, publication source, stated goals for occupational science and type of research were collected.\n\n\nRESULTS\nDescriptive statistics, themes and future directions are presented and discussed.\n\n\nPRACTICE IMPLICATIONS\nThrough the support of a discipline that is dedicated to the pursuit of a full understanding of occupation, occupational therapy will help to create a new and complex body of knowledge concerning occupation. However, occupational therapy must continue to make decisions about how knowledge produced within occupational science and other disciplines can be best used in practice.",
"title": ""
},
{
"docid": "9554640e49aea8bec5283463d5a2be1f",
"text": "In this paper, we study the problem of packing unequal circle s into a2D rectangular container. We solve this problem by proposing two greedy algorithms. Th e first algorithm, denoted by B1.0, selects the next circle to place according to the maximum hole degree rule , which is inspired from human activity in packing. The second algorithm, denoted by B1.5, improves B1.0 with aself look-ahead search strategy . The comparisons with the published methods on several inst ances taken from the literature show the good performance of our ap p oach.",
"title": ""
},
{
"docid": "0a263c6abbfc97faa169b95d415c9896",
"text": "We introduce ChronoStream, a distributed system specifically designed for elastic stateful stream computation in the cloud. ChronoStream treats internal state as a first-class citizen and aims at providing flexible elastic support in both vertical and horizontal dimensions to cope with workload fluctuation and dynamic resource reclamation. With a clear separation between application-level computation parallelism and OS-level execution concurrency, ChronoStream enables transparent dynamic scaling and failure recovery by eliminating any network I/O and state-synchronization overhead. Our evaluation on dozens of computing nodes shows that ChronoStream can scale linearly and achieve transparent elasticity and high availability without sacrificing system performance or affecting collocated tenants.",
"title": ""
},
{
"docid": "8a56dfbe83fbdd45d85c6b2ac793338b",
"text": "Idioms of distress communicate suffering via reference to shared ethnopsychologies, and better understanding of idioms of distress can contribute to effective clinical and public health communication. This systematic review is a qualitative synthesis of \"thinking too much\" idioms globally, to determine their applicability and variability across cultures. We searched eight databases and retained publications if they included empirical quantitative, qualitative, or mixed-methods research regarding a \"thinking too much\" idiom and were in English. In total, 138 publications from 1979 to 2014 met inclusion criteria. We examined the descriptive epidemiology, phenomenology, etiology, and course of \"thinking too much\" idioms and compared them to psychiatric constructs. \"Thinking too much\" idioms typically reference ruminative, intrusive, and anxious thoughts and result in a range of perceived complications, physical and mental illnesses, or even death. These idioms appear to have variable overlap with common psychiatric constructs, including depression, anxiety, and PTSD. However, \"thinking too much\" idioms reflect aspects of experience, distress, and social positioning not captured by psychiatric diagnoses and often show wide within-cultural variation, in addition to between-cultural differences. Taken together, these findings suggest that \"thinking too much\" should not be interpreted as a gloss for psychiatric disorder nor assumed to be a unitary symptom or syndrome within a culture. We suggest five key ways in which engagement with \"thinking too much\" idioms can improve global mental health research and interventions: it (1) incorporates a key idiom of distress into measurement and screening to improve validity of efforts at identifying those in need of services and tracking treatment outcomes; (2) facilitates exploration of ethnopsychology in order to bolster cultural appropriateness of interventions; (3) strengthens public health communication to encourage engagement in treatment; (4) reduces stigma by enhancing understanding, promoting treatment-seeking, and avoiding unintentionally contributing to stigmatization; and (5) identifies a key locally salient treatment target.",
"title": ""
},
{
"docid": "d06393c467e19b0827eea5f86bbf4e98",
"text": "This paper presents the results of a systematic review of existing literature on the integration of agile software development with user-centered design approaches. It shows that a common process model underlies such approaches and discusses which artifacts are used to support the collaboration between designers and developers.",
"title": ""
},
{
"docid": "02e1994a5f6ecd3f6f4cc362b6e5af3b",
"text": "Risk management has been recognized as an effective way to reduce system development failure. Information system development (ISD) is a highly complex and unpredictable activity associated with high risks. With more and more organizations outsource or offshore substantial resources in system development, organizations face up new challenges and risks not common to traditional development models. Classical risk management approaches have relied on tactical, bottomup analysis, which do not readily scale to distributed environment. Therefore, risk management in distributed environment is becoming a critical area of concern. This paper uses a systemic approach developed by Software Engineering Institute to identify risks of ISD in distributed environment. Four key risk factors were identified from prior literature: objective, preparation, execution, and environment. In addition, the impact of these four risk factors on the success of information system development will also be examined.",
"title": ""
},
{
"docid": "db3f317940f308407d217bbedf14aaf0",
"text": "Imagine your daily activities. Perhaps you will be at home today, relaxing and completing chores. Maybe you are a scientist, and plan to conduct a long series of experiments in a laboratory. You might work in an office building: you walk about your floor, greeting others, getting coffee, preparing documents, etc. There are many activities you perform regularly in large environments. If a system understood your intentions it could help you achieve your goals, or automate aspects of your environment. More generally, an understanding of human intentions would benefit, and is perhaps prerequisite for, AI systems that assist and augment human capabilities. We present a framework that continuously forecasts long-term spatial and semantic intentions (what you will do and where you will go) of a first-person camera wearer. We term our algorithm “Demonstrating Agent Rewards for K-futures Online” (DARKO). We use a first-person camera to meet the challenge of observing the wearer’s behavior everywhere. In Figure 1, DARKO forecasts multiple quantities: (1) the user intends to go to the shower (out of all possible destinations in their house), (2) their trajectory through Figure 1: Forecasting future behavior from first-person video. The overhead map shows where the person is likely to go, predicted from the first frame. Each s",
"title": ""
},
{
"docid": "3dcf758545558c5d3c98947c30f99842",
"text": "Problematic smartphone use is an important public health challenge and is linked with poor mental health outcomes. However, little is known about the mechanisms that maintain this behavior. We recruited a sample of 308 participants from Amazon’s Mechanical Turk labor market. Participants responded to standardized measures of problematic smartphone use, and frequency of smartphone use, depression and anxiety and possible mechanisms including behavioral activation, need for touch, fear of missing out (FoMO), and emotion regulation. Problematic smartphone use was most correlated with anxiety, need for touch and FoMO. The frequency of use was most correlated (inversely) with depression. In regression models, problematic smartphone use was associated with FoMO, depression (inversely), anxiety, and need for touch. Frequency of use was associated with need for touch, and (inversely) with depressive symptoms. Behavioral activation mediated associations between smartphone use (both problematic and usage frequency) and depression and anxiety symptoms. Emotional suppression also mediated the association between problematic smartphone use and anxiety. Results demonstrate the importance of social and tactile need fulfillment variables such as FoMO and need for touch as critical mechanisms that can explain problematic smartphone use and its association with depression and",
"title": ""
},
{
"docid": "7655ddc0c703bb96df16b8a67958c34e",
"text": "This paper describes the design and experiment results of 25 Gbps, 4 channels optical transmitter which consist of a vertical-cavity surface emitting laser (VCSEL) driver with an asymmetric pre-emphasis circuit and an electrical receiver. To make data transfers faster in directly modulated a VCSEL-based optical communications, the driver circuit requires an asymmetric pre-emphasis signal to compensate for the nonlinear characteristics of VCSEL. An asymmetric pre-emphasis signal can be created by the adjusting a duty ratio with a delay circuit. A test chip was fabricated in the 65-nm standard CMOS process and demonstrated. An experimental evaluation showed that this transmitter enlarged the eye opening of a 25 Gbps, PRBS=29-1 test signal by 8.8% and achieve four channels fully optical link with an optical receiver at a power of 10.3 mW=Gbps=ch at 25 Gbps.",
"title": ""
},
{
"docid": "0f5bbaeb27ef89892ce2125a8cc94af7",
"text": "Hidden Markov models (HMMs) and Gaussian mixture models (GMMs) are the two most common types of acoustic models used in statistical parametric approaches for generating low-level speech waveforms from high-level symbolic inputs via intermediate acoustic feature sequences. However, these models have their limitations in representing complex, nonlinear relationships between the speech generation inputs and the acoustic features. Inspired by the intrinsically hierarchical process of human speech production and by the successful application of deep neural networks (DNNs) to automatic speech recognition (ASR), deep learning techniques have also been applied successfully to speech generation, as reported in recent literature. This article systematically reviews these emerging speech generation approaches, with the dual goal of helping readers gain a better understanding of the existing techniques as well as stimulating new work in the burgeoning area of deep learning for parametric speech generation.",
"title": ""
},
{
"docid": "bc28f28d21605990854ac9649d244413",
"text": "Mobile devices can provide people with contextual information. This information may benefit a primary activity, assuming it is easily accessible. In this paper, we present DisplaySkin, a pose-aware device with a flexible display circling the wrist. DisplaySkin creates a kinematic model of a user's arm and uses it to place information in view, independent of body pose. In doing so, DisplaySkin aims to minimize the cost of accessing information without being intrusive. We evaluated our pose-aware display with a rotational pointing task, which was interrupted by a notification on DisplaySkin. Results show that a pose-aware display reduces the time required to respond to notifications on the wrist.",
"title": ""
},
{
"docid": "1a1077a20e261e6a846706720a567094",
"text": "Proposed new actuation mechanism realizes active or semi-active mobility for flexible long cables such as fiberscopes and scope cameras. A ciliary vibration mechanism was developed using flexible ciliary tapes that can be attached easily to existing cables. Driving characteristics of the active cables were confirmed through experiments and numerical analyses. Finally, the actuation mechanism was applied for an advanced scope camera that can reduce friction with obstacles and avoid stuck or tangled cables",
"title": ""
},
{
"docid": "872d589cd879dee7d88185851b9546ab",
"text": "Considering few treatments are available to slow or stop neurodegenerative disorders, such as Alzheimer’s disease and related dementias (ADRD), modifying lifestyle factors to prevent disease onset are recommended. The Voice, Activity, and Location Monitoring system for Alzheimer’s disease (VALMA) is a novel ambulatory sensor system designed to capture natural behaviours across multiple domains to profile lifestyle risk factors related to ADRD. Objective measures of physical activity and sleep are provided by lower limb accelerometry. Audio and GPS location records provide verbal and mobility activity, respectively. Based on a familiar smartphone package, data collection with the system has proven to be feasible in community-dwelling older adults. Objective assessments of everyday activity will impact diagnosis of disease and design of exercise, sleep, and social interventions to prevent and/or slow disease progression.",
"title": ""
},
{
"docid": "a1f930147ad3c3ef48b6352e83d645d0",
"text": "Database applications such as online transaction processing (OLTP) and decision support systems (DSS) constitute the largest and fastest-growing segment of the market for multiprocessor servers. However, most current system designs have been optimized to perform well on scientific and engineering workloads. Given the radically different behavior of database workloads (especially OLTP), it is important to re-evaluate key system design decisions in the context of this important class of applications.This paper examines the behavior of database workloads on shared-memory multiprocessors with aggressive out-of-order processors, and considers simple optimizations that can provide further performance improvements. Our study is based on detailed simulations of the Oracle commercial database engine. The results show that the combination of out-of-order execution and multiple instruction issue is indeed effective in improving performance of database workloads, providing gains of 1.5 and 2.6 times over an in-order single-issue processor for OLTP and DSS, respectively. In addition, speculative techniques enable optimized implementations of memory consistency models that significantly improve the performance of stricter consistency models, bringing the performance to within 10--15% of the performance of more relaxed models.The second part of our study focuses on the more challenging OLTP workload. We show that an instruction stream buffer is effective in reducing the remaining instruction stalls in OLTP, providing a 17% reduction in execution time (approaching a perfect instruction cache to within 15%). Furthermore, our characterization shows that a large fraction of the data communication misses in OLTP exhibit migratory behavior; our preliminary results show that software prefetch and writeback/flush hints can be used for this data to further reduce execution time by 12%.",
"title": ""
},
{
"docid": "408f58b7dd6cb1e6be9060f112773888",
"text": "Semantic hashing has become a powerful paradigm for fast similarity search in many information retrieval systems. While fairly successful, previous techniques generally require two-stage training, and the binary constraints are handled ad-hoc. In this paper, we present an end-to-end Neural Architecture for Semantic Hashing (NASH), where the binary hashing codes are treated as Bernoulli latent variables. A neural variational inference framework is proposed for training, where gradients are directly backpropagated through the discrete latent variable to optimize the hash function. We also draw connections between proposed method and rate-distortion theory, which provides a theoretical foundation for the effectiveness of the proposed framework. Experimental results on three public datasets demonstrate that our method significantly outperforms several state-of-the-art models on both unsupervised and supervised scenarios.",
"title": ""
},
{
"docid": "37b5ab95b1b488c5aee9a5cfed87c095",
"text": "A key step in the understanding of printed documents is their classification based on the nature of information they contain and their layout. In this work we consider a dynamic scenario in which document classes are not known a priori and new classes can appear at any time. This open world setting is both realistic and highly challenging. We use an SVM-based classifier based only on image-level features and use a nearest-neighbor approach for detecting new classes. We assess our proposal on a real-world dataset composed of 562 invoices belonging to 68 different classes. These documents were digitalized after being handled by a corporate environment, thus they are quite noisy---e.g., big stamps and handwritten signatures at unfortunate positions and alike. The experimental results are highly promising.",
"title": ""
},
{
"docid": "bd9009579020d6ed1b4de90d41f1c353",
"text": "The design, prototyping, and characterization of a radiation pattern reconfigurable antenna (RA) targeting 5G communications are presented. The RA is based on a reconfigurable parasitic layer technique in which a driven dipole antenna is located along the central axis of a 3-D parasitic layer structure enclosing it. The reconfigurable parasitic structure is similar to a hexagonal prism, where the top/bottom bases are formed by a hexagonal domed structure. The surfaces of the parasitic structure house electrically small metallic pixels with various geometries. The adjacent pixels are connected by PIN diode switches to change the geometry of the parasitic surface, thus providing reconfigurability in the radiation pattern. This RA is designed to operate over a 4.8–5.2 GHz frequency band, producing various radiation patterns with a beam-steering capability in both the azimuth (<inline-formula> <tex-math notation=\"LaTeX\">$0 {^{\\circ }} <\\phi < 360 {^{\\circ }}$ </tex-math></inline-formula>) and elevation planes (<inline-formula> <tex-math notation=\"LaTeX\">$-18 {^{\\circ }} <\\theta < 18 {^{\\circ }}$ </tex-math></inline-formula>). Small-cell access points equipped with RAs are used to investigate the system level performances for 5G heterogeneous networks. The results show that using distributed mode optimization, RA equipped small-cell systems could provide up to 29% capacity gains and 13% coverage improvements as compared to legacy omnidirectional antenna equipped systems.",
"title": ""
},
{
"docid": "226607ad7be61174871fcab384ac31b4",
"text": "Traditional image stitching using parametric transforms such as homography, only produces perceptually correct composites for planar scenes or parallax free camera motion between source frames. This limits mosaicing to source images taken from the same physical location. In this paper, we introduce a smoothly varying affine stitching field which is flexible enough to handle parallax while retaining the good extrapolation and occlusion handling properties of parametric transforms. Our algorithm which jointly estimates both the stitching field and correspondence, permits the stitching of general motion source images, provided the scenes do not contain abrupt protrusions.",
"title": ""
}
] | scidocsrr |
7c864c37d20aa08948af106b46b42ca3 | UA-DETRAC 2017: Report of AVSS 2017 & IWT4S Challenge on Advanced Traffic Monitoring | [
{
"docid": "198311a68ad3b9ee8020b91d0b029a3c",
"text": "Online multi-object tracking aims at producing complete tracks of multiple objects using the information accumulated up to the present moment. It still remains a difficult problem in complex scenes, because of frequent occlusion by clutter or other objects, similar appearances of different objects, and other factors. In this paper, we propose a robust online multi-object tracking method that can handle these difficulties effectively. We first propose the tracklet confidence using the detectability and continuity of a tracklet, and formulate a multi-object tracking problem based on the tracklet confidence. The multi-object tracking problem is then solved by associating tracklets in different ways according to their confidence values. Based on this strategy, tracklets sequentially grow with online-provided detections, and fragmented tracklets are linked up with others without any iterative and expensive associations. Here, for reliable association between tracklets and detections, we also propose a novel online learning method using an incremental linear discriminant analysis for discriminating the appearances of objects. By exploiting the proposed learning method, tracklet association can be successfully achieved even under severe occlusion. Experiments with challenging public datasets show distinct performance improvement over other batch and online tracking methods.",
"title": ""
},
{
"docid": "a77eddf9436652d68093946fbe1d2ed0",
"text": "The Pascal Visual Object Classes (VOC) challenge consists of two components: (i) a publicly available dataset of images together with ground truth annotation and standardised evaluation software; and (ii) an annual competition and workshop. There are five challenges: classification, detection, segmentation, action classification, and person layout. In this paper we provide a review of the challenge from 2008–2012. The paper is intended for two audiences: algorithm designers, researchers who want to see what the state of the art is, as measured by performance on the VOC datasets, along with the limitations and weak points of the current generation of algorithms; and, challenge designers, who want to see what we as organisers have learnt from the process and our recommendations for the organisation of future challenges. To analyse the performance of submitted algorithms on the VOC datasets we introduce a number of novel evaluation methods: a bootstrapping method for determining whether differences in the performance of two algorithms are significant or not; a normalised average precision so that performance can be compared across classes with different proportions of positive instances; a clustering method for visualising the performance across multiple algorithms so that the hard and easy images can be identified; and the use of a joint classifier over the submitted algorithms in order to measure their complementarity and combined performance. We also analyse the community’s progress through time using the methods of Hoiem et al. (Proceedings of European Conference on Computer Vision, 2012) to identify the types of occurring errors. We conclude the paper with an appraisal of the aspects of the challenge that worked well, and those that could be improved in future challenges.",
"title": ""
}
] | [
{
"docid": "5a8d4bfb89468d432b7482062a0cbf2d",
"text": "While “no one size fits all” is a sound philosophy for system designers to follow, it poses multiple challenges for application developers and system administrators. It can be hard for an application developer to pick one system when the needs of her application match the features of multiple “one size” systems. The choice becomes considerably harder when different components of an application fit the features of different “one size” systems. Considerable manual effort goes into creating and tuning such multi-system applications. An application’s data and workload properties may change over time, often in unpredictable and bursty ways. Consequently, the “one size” system that is best for an application can change over time. Adapting to change can be hard when application development is coupled tightly with any individual “one size” system. In this paper, we make the case for developing a new breed of Database Management Systems that we term DBMS. A DBMS contains multiple “one size” systems internally. An application specifies its execution requirements on aspects like performance, availability, consistency, change, and cost to the DBMS declaratively. For all requests (e.g., queries) made by the application, the DBMS will select the execution plan that meets the application’s requirements best. A unique aspect of the execution plan in a DBMS is that the plan includes the selection of one or more “one size” systems. The plan is then deployed and managed automatically on the selected system(s). If application requirements change beyond what was planned for originally by the DBMS, then the application can be reoptimized and redeployed; usually with no additional effort required from the application developer. The DBMS approach has the potential to address the challenges that application developers and system administrators face from the vast and growing number of “one size” systems today. However, this approach poses many research challenges that we discuss in this paper. We are taking the DBMS approach in a platform, called Cyclops, that we are building for continuous query execution. We will use Cyclops throughout the paper to give concrete illustrations of the benefits and challenges of the DBMS approach. This article is published under a Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0/), which permits distribution and reproduction in any medium as well allowing derivative works, provided that you attribute the original work to the author(s) and CIDR 2013. 6 Biennial Conference on Innovative Data Systems Research (CIDR ’13) January 6-9, 2013, Asilomar, California, USA.",
"title": ""
},
{
"docid": "17e761f30e9f8cffa84a5a2c142e4665",
"text": "In this paper, a neural-dynamic optimization-based nonlinear model predictive control (NMPC) is developed for controlling leader-follower mobile robots formation. Consider obstacles in the environments, a control strategy is proposed for the formations which includes separation-bearing-orientation scheme (SBOS) for regular leader-follower formation and separation-distance scheme (SDS) for obstacle avoidance. During the formation motion, the leader robot shall track a desired trajectory and the desire leader-follower relationship can be maintained through SBOS method; meanwhile, the followers can avoid the collision by applying the SDS. The formation-error kinematics of both SBOS and SDS are derived and a constrained quadratic programming (QP) can be obtained by transforming the MPC method. Then, over a finite-receding horizon, the QP problem can be solved by utilizing the primal-dual neural network (PDNN) with parallel capability. The computation complexity can be greatly reduced by the implemented neural-dynamic optimization. Compared with other existing formation control approaches, the developed solution in this paper is rooted in NMPC techniques with input constraints and the novel QP problem formulation. Finally, experimental studies of the proposed formation control approach have been performed on several mobile robots to verify the effectiveness.",
"title": ""
},
{
"docid": "41c718697d19ee3ca0914255426a38ab",
"text": "Migraine is a debilitating neurological disorder that affects about 12% of the population. In the past decade, the role of the neuropeptide calcitonin gene-related peptide (CGRP) in migraine has been firmly established by clinical studies. CGRP administration can trigger migraines, and CGRP receptor antagonists ameliorate migraine. In this review, we will describe multifunctional activities of CGRP that could potentially contribute to migraine. These include roles in light aversion, neurogenic inflammation, peripheral and central sensitization of nociceptive pathways, cortical spreading depression, and regulation of nitric oxide production. Yet clearly there will be many other contributing genes that could act in concert with CGRP. One candidate is pituitary adenylate cyclase-activating peptide (PACAP), which shares some of the same actions as CGRP, including the ability to induce migraine in migraineurs and light aversive behavior in rodents. Interestingly, both CGRP and PACAP act on receptors that share an accessory subunit called receptor activity modifying protein-1 (RAMP1). Thus, comparisons between the actions of these two migraine-inducing neuropeptides, CGRP and PACAP, may provide new insights into migraine pathophysiology.",
"title": ""
},
{
"docid": "338efe667e608779f4f41d1cdb1839bb",
"text": "In ASP.NET, Programmers maybe use POST or GET to pass parameter's value. Two methods are easy to come true. But In ASP.NET, It is not easy to pass parameter's value. In ASP.NET, Programmers maybe use many methods to pass parameter's value, such as using Application, Session, Querying, Cookies, and Forms variables. In this paper, by way of pass value from WebForm1.aspx to WebForm2.aspx and show out the value on WebForm2. We can give and explain actually examples in ASP.NET language to introduce these methods.",
"title": ""
},
{
"docid": "643be78202e4d118e745149ed389b5ef",
"text": "Little clinical research exists on the contribution of the intrinsic foot muscles (IFM) to gait or on the specific clinical evaluation or retraining of these muscles. The purpose of this clinical paper is to review the potential functions of the IFM and their role in maintaining and dynamically controlling the medial longitudinal arch. Clinically applicable methods of evaluation and retraining of these muscles for the effective management of various foot and ankle pain syndromes are discussed.",
"title": ""
},
{
"docid": "8f0073815a64e4f5d3e4e8cb9290fa65",
"text": "In this paper, we investigate the benefits of applying a form of network coding known as random linear coding (RLC) to unicast applications in disruption-tolerant networks (DTNs). Under RLC, nodes store and forward random linear combinations of packets as they encounter each other. For the case of a single group of packets originating from the same source and destined for the same destination, we prove a lower bound on the probability that the RLC scheme achieves the minimum time to deliver the group of packets. Although RLC significantly reduces group delivery delays, it fares worse in terms of average packet delivery delay and network transmissions. When replication control is employed, RLC schemes reduce group delivery delays without increasing the number of transmissions. In general, the benefits achieved by RLC are more significant under stringent resource (bandwidth and buffer) constraints, limited signaling, highly dynamic networks, and when applied to packets in the same flow. For more practical settings with multiple continuous flows in the network, we show the importance of deploying RLC schemes with a carefully tuned replication control in order to achieve reduction in average delay, which is observed to be as large as 20% when buffer space is constrained.",
"title": ""
},
{
"docid": "21511302800cd18d21dbc410bec3cbb2",
"text": "We investigate theoretical and practical aspects of the design of far-field RF power extraction systems consisting of antennas, impedance matching networks and rectifiers. Fundamental physical relationships that link the operating bandwidth and range are related to technology dependent quantities like threshold voltage and parasitic capacitances. This allows us to design efficient planar antennas, coupled resonator impedance matching networks and low-power rectifiers in standard CMOS technologies (0.5-mum and 0.18-mum) and accurately predict their performance. Experimental results from a prototype power extraction system that operates around 950 MHz and integrates these components together are presented. Our measured RF power-up threshold (in 0.18-mum, at 1 muW load) was 6 muWplusmn10%, closely matching the predicted value of 5.2 muW.",
"title": ""
},
{
"docid": "9696e2f6ff6e16f378ae377798ee3332",
"text": "0957-4174/$ see front matter 2008 Elsevier Ltd. A doi:10.1016/j.eswa.2008.06.054 * Corresponding author. Address: School of Compu ogy, Beijing Jiaotong University, Beijing 100044, Chin E-mail address: jnchen06@163.com (J. Chen). As an important preprocessing technology in text classification, feature selection can improve the scalability, efficiency and accuracy of a text classifier. In general, a good feature selection method should consider domain and algorithm characteristics. As the Naïve Bayesian classifier is very simple and efficient and highly sensitive to feature selection, so the research of feature selection specially for it is significant. This paper presents two feature evaluation metrics for the Naïve Bayesian classifier applied on multiclass text datasets: Multi-class Odds Ratio (MOR), and Class Discriminating Measure (CDM). Experiments of text classification with Naïve Bayesian classifiers were carried out on two multi-class texts collections. As the results indicate, CDM and MOR gain obviously better selecting effect than other feature selection approaches. 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "a7fe6b1ba27c13c95d1a48ca401e25fd",
"text": "BACKGROUND\nselecting the correct statistical test and data mining method depends highly on the measurement scale of data, type of variables, and purpose of the analysis. Different measurement scales are studied in details and statistical comparison, modeling, and data mining methods are studied based upon using several medical examples. We have presented two ordinal-variables clustering examples, as more challenging variable in analysis, using Wisconsin Breast Cancer Data (WBCD).\n\n\nORDINAL-TO-INTERVAL SCALE CONVERSION EXAMPLE\na breast cancer database of nine 10-level ordinal variables for 683 patients was analyzed by two ordinal-scale clustering methods. The performance of the clustering methods was assessed by comparison with the gold standard groups of malignant and benign cases that had been identified by clinical tests.\n\n\nRESULTS\nthe sensitivity and accuracy of the two clustering methods were 98% and 96%, respectively. Their specificity was comparable.\n\n\nCONCLUSION\nby using appropriate clustering algorithm based on the measurement scale of the variables in the study, high performance is granted. Moreover, descriptive and inferential statistics in addition to modeling approach must be selected based on the scale of the variables.",
"title": ""
},
{
"docid": "8016e80e506dcbae5c85fdabf1304719",
"text": "We introduce globally normalized convolutional neural networks for joint entity classification and relation extraction. In particular, we propose a way to utilize a linear-chain conditional random field output layer for predicting entity types and relations between entities at the same time. Our experiments show that global normalization outperforms a locally normalized softmax layer on a benchmark dataset.",
"title": ""
},
{
"docid": "e457ab9e14f6fa104a15421d9263815a",
"text": "Many aquaculture systems generate high amounts of wastewater containing compounds such as suspended solids, total nitrogen and total phosphorus. Today, aquaculture is imperative because fish demand is increasing. However, the load of waste is directly proportional to the fish production. Therefore, it is necessary to develop more intensive fish culture with efficient systems for wastewater treatment. A number of physical, chemical and biological methods used in conventional wastewater treatment have been applied in aquaculture systems. Constructed wetlands technology is becoming more and more important in recirculating aquaculture systems (RAS) because wetlands have proven to be well-established and a cost-effective method for treating wastewater. This review gives an overview about possibilities to avoid the pollution of water resources; it focuses initially on the use of systems combining aquaculture and plants with a historical review of aquaculture and the treatment of its effluents. It discusses the present state, taking into account the load of pollutants in wastewater such as nitrates and phosphates, and finishes with recommendations to prevent or at least reduce the pollution of water resources in the future.",
"title": ""
},
{
"docid": "a2d699f3c600743c732b26071639038a",
"text": "A novel rectifying circuit topology is proposed for converting electromagnetic pulse waves (PWs), that are collected by a wideband antenna, into dc voltage. The typical incident signal considered in this paper consists of 10-ns pulses modulated around 2.4 GHz with a repetition period of 100 ns. The proposed rectifying circuit topology comprises a double-current architecture with inductances that collect the energy during the pulse delivery as well as an output capacitance that maintains the dc output voltage between the pulses. Experimental results show that the efficiency of the rectifier reaches 64% for a mean available incident power of 4 dBm. Similar performances are achieved when a wideband antenna is combined with the rectifier in order to realize a rectenna. By increasing the repetition period of the incident PWs to 400 ns, the rectifier still operates with an efficiency of 52% for a mean available incident pulse power of −8 dBm. Finally, the proposed PW rectenna is tested for a wireless energy transmission application in a low- $Q$ cavity. The time reversal technique is applied to focus PWs around the desired rectenna. Results show that the rectenna is still efficient when noisy PW is handled.",
"title": ""
},
{
"docid": "29d08d266bc84ba761283bb8ae827d0b",
"text": "Statistical classifiers typically build (parametric) probabilistic models of the training data, and compute the probability that an unknown sample belongs to each of the possible classes using these models. We utilize two established measures to compare the performance of statistical classifiers namely; classification accuracy (or error rate) and the area under ROC. Naïve Bayes has obtained much relevance in data classification for machine learning and datamining. In our work, a comparative analysis of the accuracy performance of statistical classifiers namely Naïve Bayes (NB), MDL discretized NB, 4 different variants of NB and 8 popular non-NB classifiers was carried out on 21 medical datasets using classification accuracy and true positive rate. Our results indicate that the classification accuracy of Naïve Bayes (MDL discretized) on the average is the best performer. The significance of this work through the results of the comparative analysis, we are of the opinion that medical datamining with generative methods like Naïve Bayes is computationally simple yet effective and are to be used whenever possible as the benchmark for statistical classifiers.",
"title": ""
},
{
"docid": "f18a19159e71e4d2a92a465217f93366",
"text": "Extra-linguistic factors influence language use, and are accounted for by speakers and listeners. Most natural language processing (NLP) tasks to date, however, treat language as uniform. This assumption can harm performance. We investigate the effect of including demographic information on performance in a variety of text-classification tasks. We find that by including age or gender information, we consistently and significantly improve performance over demographic-agnostic models. These results hold across three text-classification tasks in five languages.",
"title": ""
},
{
"docid": "eb083b4c46d49a6cc639a89b74b1f269",
"text": "ROC analyses generated low area under the curve (.695, 95% confidence interval (.637.752)) and cutoff scores with poor sensitivity/specificity balance. BDI-II. Because the distribution of BDI-II scores was not normal, percentile ranks for raw scores were provided for the total sample and separately by gender. symptoms two scales were used: The Beck Depression Inventory-II (BDIII) smokers and non smokers, we found that the mean scores on the BDI-II (9.21 vs.",
"title": ""
},
{
"docid": "4855ecd626160518339ee2caf8f9c2cf",
"text": "The Metamorphoses Greek myth includes a story about a woman raised as a male falling in love with another woman, and being transformed into a man prior to a wedding ceremony and staying with her. It is therefore considered that people who desire to live as though they have the opposite gender have existed since ancient times. People who express a sense of discomfort with their anatomical sex and related roles have been reported in the medical literature since the middle of the 19th century. However, homosexual, fetishism, gender identity disorder, and associated conditions were mixed together and regarded as types of sexual perversion that were considered ethically objectionable until the 1950s. The first performance of sex-reassignment surgery in 1952 attracted considerable attention, and the sexologist Harry Benjamin reported a case of 'a woman kept in the body of a man', which was called transsexualism. John William Money studied the sexual consciousness about disorders of sex development and advocated the concept of gender in 1957. Thereafter the disparity between anatomical sex and gender identity was referred to as the psychopathological condition of gender identity disorder, and this was used for its diagnostic name when it was introduced into DSM-III in 1980. However, gender identity disorder encompasses a spectrum of conditions, and DSM-III -R categorized it into three types: transsexualism, nontranssexualism, and not otherwise specified. The first two types were subsequently combined and standardized into the official diagnostic name of 'gender identity disorder' in DSM-IV. In contrast, gender identity disorder was categorized into four groups (including transsexualism and dual-role transvestism) in ICD-10. A draft proposal of DSM-5 has been submitted, in which the diagnostic name of gender identity disorder has been changed to gender dysphoria. Also, it refers to 'assigned gender' rather than to 'sex', and includes disorders of sexual development. Moreover, the subclassifications regarding sexual orientation have been deleted. The proposed DSM-5 reflects an attempt to include only a medical designation of people who have suffered due to the gender disparity, thereby respecting the concept of transgender in accepting the diversity of the role of gender. This indicates that transgender issues are now at a turning point.",
"title": ""
},
{
"docid": "f715f471118b169502941797d17ceac6",
"text": "Software is a knowledge intensive product, which can only evolve if there is effective and efficient information exchange between developers. Complying to coding conventions improves information exchange by improving the readability of source code. However, without some form of enforcement, compliance to coding conventions is limited. We look at the problem of information exchange in code and propose gamification as a way to motivate developers to invest in compliance. Our concept consists of a technical prototype and its integration into a Scrum environment. By means of two experiments with agile software teams and subsequent surveys, we show that gamification can effectively improve adherence to coding conventions.",
"title": ""
},
{
"docid": "7e8b58b88a1a139f9eb6642a69eb697a",
"text": "We present a fully convolutional autoencoder for light fields, which jointly encodes stacks of horizontal and vertical epipolar plane images through a deep network of residual layers. The complex structure of the light field is thus reduced to a comparatively low-dimensional representation, which can be decoded in a variety of ways. The different pathways of upconvolution we currently support are for disparity estimation and separation of the lightfield into diffuse and specular intrinsic components. The key idea is that we can jointly perform unsupervised training for the autoencoder path of the network, and supervised training for the other decoders. This way, we find features which are both tailored to the respective tasks and generalize well to datasets for which only example light fields are available. We provide an extensive evaluation on synthetic light field data, and show that the network yields good results on previously unseen real world data captured by a Lytro Illum camera and various gantries.",
"title": ""
},
{
"docid": "0cd2da131bf78526c890dae72514a8f0",
"text": "This paper presents a research model to explicate that the level of consumers’ participation on companies’ brand microblogs is influenced by their brand attachment process. That is, self-congruence and partner quality affect consumers’ trust and commitment toward companies’ brands, which in turn influence participation on brand microblogs. Further, we propose that gender has important moderating effects in our research model. We empirically test the research hypotheses through an online survey. The findings illustrate that self-congruence and partner quality have positive effects on trust and commitment. Trust affects commitment and participation, while participation is also influenced by commitment. More importantly, the effects of self-congruence on trust and commitment are found to be stronger for male consumers than females. In contrast, the effects of partner quality on trust and commitment are stronger for female consumers than males. Trust posits stronger effects on commitment and participation for males, while commitment has a stronger effect on participation for females. We believe that our findings contribute to the literature on consumer participation behavior and gender differences on brand microblogs. Companies can also apply our findings to strengthen their brand building and participation level of different consumers on their microblogs. 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "0b9dde7982cf2b99a979dbc0d6dfceba",
"text": "PURPOSE\nTo develop a reliable and valid questionnaire of bilingual language status with predictable relationships between self-reported and behavioral measures.\n\n\nMETHOD\nIn Study 1, the internal validity of the Language Experience and Proficiency Questionnaire (LEAP-Q) was established on the basis of self-reported data from 52 multilingual adult participants. In Study 2, criterion-based validity was established on the basis of standardized language tests and self-reported measures from 50 adult Spanish-English bilinguals. Reliability and validity of the questionnaire were established on healthy adults whose literacy levels were equivalent to that of someone with a high school education or higher.\n\n\nRESULTS\nFactor analyses revealed consistent factors across both studies and suggested that the LEAP-Q was internally valid. Multiple regression and correlation analyses established criterion-based validity and suggested that self-reports were reliable indicators of language performance. Self-reported reading proficiency was a more accurate predictor of first-language performance, and self-reported speaking proficiency was a more accurate predictor of second-language performance. Although global measures of self-reported proficiency were generally predictive of language ability, deriving a precise estimate of performance on a particular task required that specific aspects of language history be taken into account.\n\n\nCONCLUSION\nThe LEAP-Q is a valid, reliable, and efficient tool for assessing the language profiles of multilingual, neurologically intact adult populations in research settings.",
"title": ""
}
] | scidocsrr |
5c04bb6549016dbcaf0b700e1a48b69b | Time-series data mining | [
{
"docid": "96be7a58f4aec960e2ad2273dea26adb",
"text": "Because time series are a ubiquitous and increasingly prevalent type of data, there has been much research effort devoted to time series data mining recently. As with all data mining problems, the key to effective and scalable algorithms is choosing the right representation of the data. Many high level representations of time series have been proposed for data mining. In this work, we introduce a new technique based on a bit level approximation of the data. The representation has several important advantages over existing techniques. One unique advantage is that it allows raw data to be directly compared to the reduced representation, while still guaranteeing lower bounds to Euclidean distance. This fact can be exploited to produce faster exact algorithms for similarly search. In addition, we demonstrate that our new representation allows time series clustering to scale to much larger datasets.",
"title": ""
}
] | [
{
"docid": "00129c31c4f37d3d44540c4ad97e5cca",
"text": "To understand how function arises from the interactions between neurons, it is necessary to use methods that allow the monitoring of brain activity at the single-neuron, single-spike level and the targeted manipulation of the diverse neuron types selectively in a closed-loop manner. Large-scale recordings of neuronal spiking combined with optogenetic perturbation of identified individual neurons has emerged as a suitable method for such tasks in behaving animals. To fully exploit the potential power of these methods, multiple steps of technical innovation are needed. We highlight the current state of the art in electrophysiological recording methods, combined with optogenetics, and discuss directions for progress. In addition, we point to areas where rapid development is in progress and discuss topics where near-term improvements are possible and needed.",
"title": ""
},
{
"docid": "a537edc6579892249d157e2dc2f31077",
"text": "An efficient decoupling feeding network is proposed in this letter. It is composed of two directional couplers and two sections of transmission line for connection use. By connecting the two couplers, an indirect coupling with controlled magnitude and phase is introduced, which can be used to cancel out the direct coupling caused by space waves and surface waves between array elements. To demonstrate the method, a two-element microstrip antenna array with the proposed network has been designed, fabricated and measured. Both simulated and measured results have simultaneously proved that the proposed method presents excellent decoupling performance. The measured mutual coupling can be reduced to below -58 dB at center frequency. Meanwhile it has little influence on return loss and radiation patterns. The decoupling mechanism is simple and straightforward which can be easily applied in phased array antennas and MIMO systems.",
"title": ""
},
{
"docid": "ed2464f8cf0495e10d8b2a75a7d8bc3b",
"text": "Personalized services such as news recommendations are becoming an integral part of our digital lives. The problem is that they extract a steep cost in terms of privacy. The service providers collect and analyze user's personal data to provide the service, but can infer sensitive information about the user in the process. In this work we ask the question \"How can we provide personalized news recommendation without sharing sensitive data with the provider?\"\n We propose a local private intelligence assistance framework (PrIA), which collects user data and builds a profile about the user and provides recommendations, all on the user's personal device. It decouples aggregation and personalization: it uses the existing aggregation services on the cloud to obtain candidate articles but makes the personalized recommendations locally. Our proof-of-concept implementation and small scale user study shows the feasibility of a local news recommendation system. In building a private profile, PrIA avoids sharing sensitive information with the cloud-based recommendation service. However, the trade-off is that unlike cloud-based services, PrIA cannot leverage collective knowledge from large number of users. We quantify this trade-off by comparing PrIA with Google's cloud-based recommendation service. We find that the average precision of PrIA's recommendation is only 14% lower than that of Google's service. Rather than choose between privacy or personalization, this result motivates further study of systems that can provide both with acceptable trade-offs.",
"title": ""
},
{
"docid": "64bb57e2cc7d278b490b3cd7389585b2",
"text": "Prior data pertaining to transient entrainment and associated phenomena have been best explained by pacing capture of a reentrant circuit. On this basis, we hypothesized that rapid pacing from a single site of two different constant pacing rates could constantly capture an appropriately selected bipolar electrogram recording site from one direction with a constant stimulus-to-electrogram interval during pacing at one rate, yet be constantly captured from another direction with a different constant stimulus-to-electrogram interval when pacing at a different constant pacing rate. To test this hypothesis, we studied a group of patients, each with a representative tachycardia (ventricular tachycardia, circus-movement tachycardia involving an atrioventricular bypass pathway, atrial tachycardia, and atrial flutter). For each tachycardia, pacing was performed from a single site for at least two different constant rates faster than the spontaneous rate of the tachycardia. We observed in these patients that a local bipolar recording site was constantly captured from different directions at two different pacing rates without interrupting the tachycardia at pacing termination. The evidence that the same site was captured from a different direction at two different pacing rates was supported by demonstrating a change in conduction time to that site associated with a change in the bipolar electrogram morphology at that site when comparing pacing at each rate. The mean conduction time (stimulus-to-recording site electrogram interval) was 319 +/- 69 msec while pacing at a mean cycle length of 265 +/- 50 msec, yet only 81 +/- 38 msec while pacing at a second mean cycle length of 233 +/- 51 msec, a mean change in conduction time of 238 +/- 56 msec. Remarkably, the faster pacing rate resulted in a shorter conduction time. The fact that the same electrode recording site was activated from different directions without interruption of the spontaneous tachycardia at pacing termination is difficult to explain on any mechanistic basis other than reentry. Also, these changes in conduction time and electrogram morphology occurred in parallel with the demonstration of progressive fusion beats on the electrocardiogram, the latter being an established criterion for transient entrainment.(ABSTRACT TRUNCATED AT 400 WORDS)",
"title": ""
},
{
"docid": "caead07ebeea66cb5d8e57c956a11289",
"text": "End-to-end bandwidth estimation tools like Iperf though fairly accurate are intrusive. In this paper, we describe how with an instrumented TCP stack (Web100), we can estimate the end-to-end bandwidth accurately, while consuming significantly less network bandwidth and time. We modified Iperf to use Web100 to detect the end of slow-start and estimate the end-toend bandwidth by measuring the amount of data sent for a short period (1 second) after the slow-start, when the TCP throughput is relatively stable. We obtained bandwidth estimates differing by less than 10% when compared to running Iperf for 20 seconds, and savings in bandwidth estimation time of up to 94% and savings in network traffic of up to 92%.",
"title": ""
},
{
"docid": "f34562a98d4a9768f08bc607aec796a5",
"text": "The greyfin croaker Pennahia anea is one of the most common croakers currently on retail sale in Hong Kong, but there are no regional studies on its biology or fishery. The reproductive biology of the species, based on 464 individuals obtained from local wet markets, was studied over 16 months (January 2008–April 2009) using gonadosomatic index (GSI) and gonad histology. Sizes used in this study ranged from 8.0 to 19.0 cm in standard length (SL). Both the larger and smaller size classes were missing from samples, implying that they are infrequently caught in the fishery. Based on GSI data, the approximate minimum sizes for male and female maturation were 12 cm SL. The size at 50% maturity for females was 14.3 cm SL, while all males in the samples were mature. Both GSI and gonad histology suggest that spawning activity occurred from March–April to June, with a peak in May. Since large croakers are declining in the local and regional fisheries, small species such as P. anea are becoming important, although they are mostly taken as bycatch. In view of unmanaged fishing pressure, and given the decline in large croakers and sizes of P. anea presently caught, proper management of the species is suggested.",
"title": ""
},
{
"docid": "ac0b86c5a0e7949c5e77610cee865e2b",
"text": "BACKGROUND\nDegenerative lumbosacral stenosis is a common problem in large breed dogs. For severe degenerative lumbosacral stenosis, conservative treatment is often not effective and surgical intervention remains as the last treatment option. The objective of this retrospective study was to assess the middle to long term outcome of treatment of severe degenerative lumbosacral stenosis with pedicle screw-rod fixation with or without evidence of radiological discospondylitis.\n\n\nRESULTS\nTwelve client-owned dogs with severe degenerative lumbosacral stenosis underwent pedicle screw-rod fixation of the lumbosacral junction. During long term follow-up, dogs were monitored by clinical evaluation, diagnostic imaging, force plate analysis, and by using questionnaires to owners. Clinical evaluation, force plate data, and responses to questionnaires completed by the owners showed resolution (n = 8) or improvement (n = 4) of clinical signs after pedicle screw-rod fixation in 12 dogs. There were no implant failures, however, no interbody vertebral bone fusion of the lumbosacral junction was observed in the follow-up period. Four dogs developed mild recurrent low back pain that could easily be controlled by pain medication and an altered exercise regime.\n\n\nCONCLUSIONS\nPedicle screw-rod fixation offers a surgical treatment option for large breed dogs with severe degenerative lumbosacral stenosis with or without evidence of radiological discospondylitis in which no other treatment is available. Pedicle screw-rod fixation alone does not result in interbody vertebral bone fusion between L7 and S1.",
"title": ""
},
{
"docid": "988c161ceae388f5dbcdcc575a9fa465",
"text": "This work presents an architecture for single source, single point noise cancellation that seeks adequate gain margin and high performance for both stationary and nonstationary noise sources by combining feedforward and feedback control. Gain margins and noise reduction performance of the hybrid control architecture are validated experimentally using an earcup from a circumaural hearing protector. Results show that the hybrid system provides 5 to 30 dB active performance in the frequency range 50-800 Hz for tonal noise and 18-27 dB active performance in the same frequency range for nonstationary noise, such as aircraft or helicopter cockpit noise, improving low frequency (> 100 Hz) performance by up to 15 dB over either control component acting individually.",
"title": ""
},
{
"docid": "e3663ed1bea4b2639369146db302d1bd",
"text": "In recent years, heterogeneous face biometrics has attracted more attentions in the face recognition community. After published in 2009, the HFB database has been applied by tens of research groups and widely used for Near infrared vs. Visible light (NIR-VIS) face recognition. Despite its success the HFB database has two disadvantages: a limited number of subjects, lacking specific evaluation protocols. To address these issues we collected the NIR-VIS 2.0 database. It contains 725 subjects, imaged by VIS and NIR cameras in four recording sessions. Because the 3D modality in the HFB database was less used in the literature, we don't consider it in the current version. In this paper, we describe the composition of the database, evaluation protocols and present the baseline performance of PCA on the database. Moreover, two interesting tricks, the facial symmetry and heterogeneous component analysis (HCA) are also introduced to improve the performance.",
"title": ""
},
{
"docid": "f028a403190899f96fcd6d6f9efbd2f1",
"text": "It is aimed to design a X-band monopulse microstrip antenna array that can be used almost in all modern tracking radars and having superior properties in angle detection and angular accuracy than the classical ones. In order to create a monopulse antenna array, a rectangular microstrip antenna is designed and 16 of it gathered together using the nonlinear central feeding to suppress the side lobe level (SLL) of the antenna. The monopulse antenna is created by the combining 4 of these 4×4 array antennas with a microstrip comparator designed using four branch line coupler. Good agreement is noted between the simulation and measurement results.",
"title": ""
},
{
"docid": "588129d869fefae4abb657a8396232e0",
"text": "A cold-adapted lipase producing bacterium, designated SS-33T, was isolated from sea sediment collected from the Bay of Bengal, India, and subjected to a polyphasic taxonomic study. Strain SS-33T exhibited the highest 16S rRNA gene sequence similarity with Staphylococcus cohnii subsp. urealyticus (97.18 %), Staphylococcus saprophyticus subsp. bovis (97.16 %) and Staphylococcus cohnii subsp. cohnii (97.04 %). Phylogenetic analysis based on the 16S rRNA gene sequences showed that strain SS-33T belongs to the genus Staphylococcus. Cells of strain SS-33T were Gram-positive, coccus-shaped, non-spore-forming, non-motile, catalase-positive and oxidase-negative. The major fatty acid detected in strain SS-33T was anteiso-C15:0 and the menaquinone was MK-7. The genomic DNA G + C content was 33 mol%. The DNA-DNA hybridization among strain SS-33T and the closely related species indicated that strain SS-33T represents a novel species of the genus Staphylococcus. On the basis of the morphological, physiological and chemotaxonomic characteristics, the results of phylogenetic analysis and the DNA-DNA hybridization, a novel species is proposed for strain SS-33T, with the name Staphylococcus lipolyticus sp. nov. The strain type is SS-33T (=MTCC 10101T = JCM 16560T). Staphylococcus lipolyticus SS-33T hydrolyzed various substrates including tributyrin, olive oil, Tween 20, Tween 40, Tween 60, and Tween 80 at low temperatures, as well as mesophilic temperatures. Lipase from strain SS-33T was partially purified by acetone precipitation. The molecular weight of lipase protein was determined 67 kDa by SDS-PAGE. Zymography was performed to monitor the lipase activity in Native-PAGE. Calcium ions increased lipase activity twofold. The optimum pH of lipase was pH 7.0 and optimum temperature was 30 °C. However, lipase exhibited 90 % activity of its optimum temperature at 10 °C and became more stable at 10 °C as compared to 30 °C. The lipase activity and stability at low temperature has wide ranging applications in various industrial processes. Therefore, cold-adapted mesophilic lipase from strain SS-33T may be used for industrial applications. This is the first report of the production of cold-adapted mesophilic lipase by any Staphylococcus species.",
"title": ""
},
{
"docid": "d7dbaa82fcabd2071d59cb0847a583a0",
"text": "CONTEXT\nA number of studies suggest a positive association between breastfeeding and cognitive development in early and middle childhood. However, the only previous study that investigated the relationship between breastfeeding and intelligence in adults had several methodological shortcomings.\n\n\nOBJECTIVE\nTo determine the association between duration of infant breastfeeding and intelligence in young adulthood.\n\n\nDESIGN, SETTING, AND PARTICIPANTS\nProspective longitudinal birth cohort study conducted in a sample of 973 men and women and a sample of 2280 men, all of whom were born in Copenhagen, Denmark, between October 1959 and December 1961. The samples were divided into 5 categories based on duration of breastfeeding, as assessed by physician interview with mothers at a 1-year examination.\n\n\nMAIN OUTCOME MEASURES\nIntelligence, assessed using the Wechsler Adult Intelligence Scale (WAIS) at a mean age of 27.2 years in the mixed-sex sample and the Børge Priens Prøve (BPP) test at a mean age of 18.7 years in the all-male sample. Thirteen potential confounders were included as covariates: parental social status and education; single mother status; mother's height, age, and weight gain during pregnancy and cigarette consumption during the third trimester; number of pregnancies; estimated gestational age; birth weight; birth length; and indexes of pregnancy and delivery complications.\n\n\nRESULTS\nDuration of breastfeeding was associated with significantly higher scores on the Verbal, Performance, and Full Scale WAIS IQs. With regression adjustment for potential confounding factors, the mean Full Scale WAIS IQs were 99.4, 101.7, 102.3, 106.0, and 104.0 for breastfeeding durations of less than 1 month, 2 to 3 months, 4 to 6 months, 7 to 9 months, and more than 9 months, respectively (P =.003 for overall F test). The corresponding mean scores on the BPP were 38.0, 39.2, 39.9, 40.1, and 40.1 (P =.01 for overall F test).\n\n\nCONCLUSION\nIndependent of a wide range of possible confounding factors, a significant positive association between duration of breastfeeding and intelligence was observed in 2 independent samples of young adults, assessed with 2 different intelligence tests.",
"title": ""
},
{
"docid": "56c7c065c390d1ed5f454f663289788d",
"text": "This paper presents a novel approach to character identification, that is an entity linking task that maps mentions to characters in dialogues from TV show transcripts. We first augment and correct several cases of annotation errors in an existing corpus so the corpus is clearer and cleaner for statistical learning. We also introduce the agglomerative convolutional neural network that takes groups of features and learns mention and mention-pair embeddings for coreference resolution. We then propose another neural model that employs the embeddings learned and creates cluster embeddings for entity linking. Our coreference resolution model shows comparable results to other state-of-the-art systems. Our entity linking model significantly outperforms the previous work, showing the F1 score of 86.76% and the accuracy of 95.30% for character identification.",
"title": ""
},
{
"docid": "cd59460d293aa7ecbb9d7b96ed451b9a",
"text": "PURPOSE\nThe prevalence of work-related upper extremity musculoskeletal disorders and visual symptoms reported in the USA has increased dramatically during the past two decades. This study examined the factors of computer use, workspace design, psychosocial factors, and organizational ergonomics resources on musculoskeletal and visual discomfort and their impact on the safety and health of computer work employees.\n\n\nMETHODS\nA large-scale, cross-sectional survey was administered to a US manufacturing company to investigate these relationships (n = 1259). Associations between these study variables were tested along with moderating effects framed within a conceptual model.\n\n\nRESULTS\nSignificant relationships were found between computer use and psychosocial factors of co-worker support and supervisory relations with visual and musculoskeletal discomfort. Co-worker support was found to be significantly related to reports of eyestrain, headaches, and musculoskeletal discomfort. Supervisor relations partially moderated the relationship between workspace design satisfaction and visual and musculoskeletal discomfort.\n\n\nCONCLUSION\nThis study provides guidance for developing systematic, preventive measures and recommendations in designing office ergonomics interventions with the goal of reducing musculoskeletal and visual discomfort while enhancing office and computer workers' performance and safety.",
"title": ""
},
{
"docid": "b6c9844bdad60c5373cac2bcd018d899",
"text": "Cloud computing is currently gaining enormous momentum due to a number of promised benefits: ease of use in terms of deployment, administration, and maintenance, along with high scalability and flexibility to create new services. However, as more personal and business applications migrate to the cloud, service quality will become an important differentiator between providers. In particular, quality of experience as perceived by users has the potential to become the guiding paradigm for managing quality in the cloud. In this article, we discuss technical challenges emerging from shifting services to the cloud, as well as how this shift impacts QoE and QoE management. Thereby, a particular focus is on multimedia cloud applications. Together with a novel QoE-based classification scheme of cloud applications, these challenges drive the research agenda on QoE management for cloud applications.",
"title": ""
},
{
"docid": "48ce635355fbb5ffb7d6166948b4f135",
"text": "Computational generation of literary artifacts very often resorts to template-like schemas that can be instantiated into complex structures. With this view in mind, the present paper reviews a number of existing attempts to provide an elementary set of patterns for basic plots. An attempt is made to formulate these descriptions of possible plots in terms of character functions, an abstraction of plot-bearing elements of a story originally formulated by Vladimir Propp. These character functions act as the building blocks of the Propper system, an existing framework for computational story generation. The paper explores the set of extensions required to the original set of character functions to allow for a basic representation of the analysed schemata, and a solution for automatic generation of stories based on this formulation of the narrative schemas. This solution uncovers important insights on the relative expressive power of the representation of narrative in terms of character functions, and their impact on the generative potential of the framework is discussed. 1998 ACM Subject Classification F.4.1 Knowledge Representation Formalisms and Methods",
"title": ""
},
{
"docid": "7b5f90b4b0b11ffdb25ececb2eaf56f6",
"text": "The human ABO(H) blood group phenotypes arise from the evolutionarily oldest genetic system found in primate populations. While the blood group antigen A is considered the ancestral primordial structure, under the selective pressure of life-threatening diseases blood group O(H) came to dominate as the most frequently occurring blood group worldwide. Non-O(H) phenotypes demonstrate impaired formation of adaptive and innate immunoglobulin specificities due to clonal selection and phenotype formation in plasma proteins. Compared with individuals with blood group O(H), blood group A individuals not only have a significantly higher risk of developing certain types of cancer but also exhibit high susceptibility to malaria tropica or infection by Plasmodium falciparum. The phenotype-determining blood group A glycotransferase(s), which affect the levels of anti-A/Tn cross-reactive immunoglobulins in phenotypic glycosidic accommodation, might also mediate adhesion and entry of the parasite to host cells via trans-species O-GalNAc glycosylation of abundantly expressed serine residues that arise throughout the parasite's life cycle, while excluding the possibility of antibody formation against the resulting hybrid Tn antigen. In contrast, human blood group O(H), lacking this enzyme, is indicated to confer a survival advantage regarding the overall risk of developing cancer, and individuals with this blood group rarely develop life-threatening infections involving evolutionarily selective malaria strains.",
"title": ""
},
{
"docid": "9d803b0ce1f1af621466b1d7f97b7edf",
"text": "This research paper addresses the methodology and approaches to managing criminal computer forensic investigations in a law enforcement environment with management controls, operational controls, and technical controls. Management controls cover policy and standard operating procedures (SOP's), methodology, and guidance. Operational controls cover SOP requirements, seizing evidence, evidence handling, best practices, and education, training and awareness. Technical controls cover acquisition and analysis procedures, data integrity, rules of evidence, presenting findings, proficiency testing, and data archiving.",
"title": ""
},
{
"docid": "f7a6102ec2ebab9970233e90060bfb9c",
"text": "The malar region has been a crucial target in many facial rejuvenation techniques because the beauty and youthful contour of a convex midface and a smooth eyelid–cheek sulcus are key features of a pleasing face-lift result. The full midface subperiosteal lift has helped to address these issues. However, the desire of patients currently for a rapid recovery and return to work with a natural-looking result has influenced procedural selection. Another concern is for safer procedures with reduced potential risk. Progressively fewer invasive techniques, such as the minimal access cranial suspension (MACS) lift, have been a response to these core concerns. After 3 years of performing the conventional three purse-string suture MACS lift, the author developed a practical procedural modification. For a total of 17 patients, the author combined limited regional subperiosteal lift and periosteal fixation with a simple sling approach to the more fully released malar tissue mass to make a single-point suspension just above the lateral orbital rim. The percutaneous sling lift appears to offer a degree and naturalness of rejuvenation of the malar region similar to those of the MACS lift and the full subperiosteal midface lift, but with fewer suspension points and less undermining. Also, the author observed less ecchymosis and edema than with the full midface subperiosteal lift, as would be expected. In all 17 cases, the need for the second and third purse-string sutures was eliminated. The early results for the percutaneous sling lift indicate that it offers promising results, rapid recovery, and reduced risk of serious complications.",
"title": ""
},
{
"docid": "2db786fb0d27992950e7b8238a76226d",
"text": "Alberto González, Advisor This study draws concepts from rhetorical criticism, vernacular rhetoric, visual rhetoric, and whiteness studies, to investigate how Asian/Asian Americans’ online identities are being constructed and mediated by Internet memes. This study examines the use of Internet memes as persuasive discourses for entertainment, spreading stereotypes, and online activism by examining the meme images and texts, including their content, rhetorical components, and structure. Internet memes that directly depict Asian/Asian Americans are collected from three popular meme websites: Reddit, Know Your Meme, and Tumblr. The findings indicate that Internet memes complicate the construction of racial identity, invoking the negotiation and conflicts concerning racial identities described by dominant as well as vernacular discourses. They not only function as entertaining jokes but also reflect the social conflicts surrounding race. However, the prevalence and development of memes also bring new possibilities for social justice movements. Furthermore, the study provides implications of memes for users and anti-racist activities, as well as suggests future research directions mainly in the context of globalization.",
"title": ""
}
] | scidocsrr |
eb3ab27f99915abd020a21b269292bca | MahNMF: Manhattan Non-negative Matrix Factorization | [
{
"docid": "a21d1956026b29bc67b92f8508a62e1c",
"text": "We introduce several new formulations for sparse nonnegative matrix approximation. Subsequently, we solve these formulations by developing generic algorithms. Further, to help selecting a particular sparse formulation, we briefly discuss the interpretation of each formulation. Finally, preliminary experiments are presented to illustrate the behavior of our formulations and algorithms.",
"title": ""
},
{
"docid": "9edfe5895b369c0bab8d83838661ea0a",
"text": "(57) Data collected from devices and human condition may be used to forewarn of critical events such as machine/structural failure or events from brain/heart wave data stroke. By moni toring the data, and determining what values are indicative of a failure forewarning, one can provide adequate notice of the impending failure in order to take preventive measures. This disclosure teaches a computer-based method to convert dynamical numeric data representing physical objects (un structured data) into discrete-phase-space states, and hence into a graph (Structured data) for extraction of condition change. ABSTRACT",
"title": ""
},
{
"docid": "e2867713be67291ee8c25afa3e2d1319",
"text": "In recent years the <i>l</i><sub>1</sub>, <sub>∞</sub> norm has been proposed for joint regularization. In essence, this type of regularization aims at extending the <i>l</i><sub>1</sub> framework for learning sparse models to a setting where the goal is to learn a set of jointly sparse models. In this paper we derive a simple and effective projected gradient method for optimization of <i>l</i><sub>1</sub>, <sub>∞</sub> regularized problems. The main challenge in developing such a method resides on being able to compute efficient projections to the <i>l</i><sub>1</sub>, <sub>∞</sub> ball. We present an algorithm that works in <i>O</i>(<i>n</i> log <i>n</i>) time and <i>O</i>(<i>n</i>) memory where <i>n</i> is the number of parameters. We test our algorithm in a multi-task image annotation problem. Our results show that <i>l</i><sub>1</sub>, <sub>∞</sub> leads to better performance than both <i>l</i><sub>2</sub> and <i>l</i><sub>1</sub> regularization and that it is is effective in discovering jointly sparse solutions.",
"title": ""
}
] | [
{
"docid": "2d87e26389b9d4ebf896bd9cbd281e69",
"text": "Finger-vein biometrics has been extensively investigated for personal authentication. One of the open issues in finger-vein verification is the lack of robustness against image-quality degradation. Spurious and missing features in poor-quality images may degrade the system’s performance. Despite recent advances in finger-vein quality assessment, current solutions depend on domain knowledge. In this paper, we propose a deep neural network (DNN) for representation learning to predict image quality using very limited knowledge. Driven by the primary target of biometric quality assessment, i.e., verification error minimization, we assume that low-quality images are falsely rejected in a verification system. Based on this assumption, the low- and high-quality images are labeled automatically. We then train a DNN on the resulting data set to predict the image quality. To further improve the DNN’s robustness, the finger-vein image is divided into various patches, on which a patch-based DNN is trained. The deepest layers associated with the patches form together a complementary and an over-complete representation. Subsequently, the quality of each patch from a testing image is estimated and the quality scores from the image patches are conjointly input to probabilistic support vector machines (P-SVM) to boost quality-assessment performance. To the best of our knowledge, this is the first proposed work of deep learning-based quality assessment, not only for finger-vein biometrics, but also for other biometrics in general. The experimental results on two public finger-vein databases show that the proposed scheme accurately identifies high- and low-quality images and significantly outperforms existing approaches in terms of the impact on equal error-rate decrease.",
"title": ""
},
{
"docid": "bb1554d174df80e7db20e943b4a69249",
"text": "Any static, global analysis of the expression and data relationships in a program requires a knowledge of the control flow of the program. Since one of the primary reasons for doing such a global analysis in a compiler is to produce optimized programs, control flow analysis has been embedded in many compilers and has been described in several papers. An early paper by Prosser [5] described the use of Boolean matrices (or, more particularly, connectivity matrices) in flow analysis. The use of “dominance” relationships in flow analysis was first introduced by Prosser and much expanded by Lowry and Medlock [6]. References [6,8,9] describe compilers which use various forms of control flow analysis for optimization. Some recent developments in the area are reported in [4] and in [7].\n The underlying motivation in all the different types of control flow analysis is the need to codify the flow relationships in the program. The codification may be in connectivity matrices, in predecessor-successor tables, in dominance lists, etc. Whatever the form, the purpose is to facilitate determining what the flow relationships are; in other words to facilitate answering such questions as: is this an inner loop?, if an expression is removed from the loop where can it be correctly and profitably placed?, which variable definitions can affect this use?\n In this paper the basic control flow relationships are expressed in a directed graph. Various graph constructs are then found and shown to codify interesting global relationships.",
"title": ""
},
{
"docid": "c7c5fde8197d87f2551a2897d5fd4487",
"text": "The Parallel Meaning Bank is a corpus of translations annotated with shared, formal meaning representations comprising over 11 million words divided over four languages (English, German, Italian, and Dutch). Our approach is based on cross-lingual projection: automatically produced (and manually corrected) semantic annotations for English sentences are mapped onto their word-aligned translations, assuming that the translations are meaning-preserving. The semantic annotation consists of five main steps: (i) segmentation of the text in sentences and lexical items; (ii) syntactic parsing with Combinatory Categorial Grammar; (iii) universal semantic tagging; (iv) symbolization; and (v) compositional semantic analysis based on Discourse Representation Theory. These steps are performed using statistical models trained in a semisupervised manner. The employed annotation models are all language-neutral. Our first results are promising.",
"title": ""
},
{
"docid": "0efc0e61946979158277aa9314227426",
"text": "Many chronic diseases possess a shared biology. Therapies designed for patients at risk of multiple diseases need to account for the shared impact they may have on related diseases to ensure maximum overall well-being. Learning from data in this setting differs from classical survival analysis methods since the incidence of an event of interest may be obscured by other related competing events. We develop a semiparametric Bayesian regression model for survival analysis with competing risks, which can be used for jointly assessing a patient’s risk of multiple (competing) adverse outcomes. We construct a Hierarchical Bayesian Mixture (HBM) model to describe survival paths in which a patient’s covariates influence both the estimation of the type of adverse event and the subsequent survival trajectory through Multivariate Random Forests. In addition variable importance measures, which are essential for clinical interpretability are induced naturally by our model. We aim with this setting to provide accurate individual estimates but also interpretable conclusions for use as a clinical decision support tool. We compare our method with various state-of-the-art benchmarks on both synthetic and clinical data.",
"title": ""
},
{
"docid": "e28ba2ea209537cf9867428e3cf7fdd7",
"text": "People take their mobile phones everywhere they go. In Saudi Arabia, the mobile penetration is very high and students use their phones for different reasons in the classroom. The use of mobile devices in classroom triggers an alert of the impact it might have on students’ learning. This study investigates the association between the use of mobile phones during classroom and the learners’ performance and satisfaction. Results showed that students get distracted, and that this diversion of their attention is reflected in their academic success. However, this is not applicable for all. Some students received high scores even though they declared using mobile phones in classroom, which triggers a request for a deeper study.",
"title": ""
},
{
"docid": "443191f41aba37614c895ba3533f80ed",
"text": "De novo engineering of gene circuits inside cells is extremely difficult, and efforts to realize predictable and robust performance must deal with noise in gene expression and variation in phenotypes between cells. Here we demonstrate that by coupling gene expression to cell survival and death using cell–cell communication, we can programme the dynamics of a population despite variability in the behaviour of individual cells. Specifically, we have built and characterized a ‘population control’ circuit that autonomously regulates the density of an Escherichia coli population. The cell density is broadcasted and detected by elements from a bacterial quorum-sensing system, which in turn regulate the death rate. As predicted by a simple mathematical model, the circuit can set a stable steady state in terms of cell density and gene expression that is easily tunable by varying the stability of the cell–cell communication signal. This circuit incorporates a mechanism for programmed death in response to changes in the environment, and allows us to probe the design principles of its more complex natural counterparts.",
"title": ""
},
{
"docid": "d6d275b719451982fa67d442c55c186c",
"text": "Waterfall development is still a widely used way of working in software development companies. Many problems have been reported related to the model. Commonly accepted problems are for example to cope with change and that defects all too often are detected too late in the software development process. However, many of the problems mentioned in literature are based on beliefs and experiences, and not on empirical evidence. To address this research gap, we compare the problems in literature with the results of a case study at Ericsson AB in Sweden, investigating issues in the waterfall model. The case study aims at validating or contradicting the beliefs of what the problems are in waterfall development through empirical research.",
"title": ""
},
{
"docid": "17dce24f26d7cc196e56a889255f92a8",
"text": "As known, to finish this book, you may not need to get it at once in a day. Doing the activities along the day may make you feel so bored. If you try to force reading, you may prefer to do other entertaining activities. But, one of concepts we want you to have this book is that it will not make you feel bored. Feeling bored when reading will be only unless you don't like the book. computational principles of mobile robotics really offers what everybody wants.",
"title": ""
},
{
"docid": "ee5b04d7b62186775a7b6ab77b8bbd60",
"text": "Answers submitted to CQA forums are often elaborate, contain spam, are marred by slurs and business promotions. It is difficult for a reader to go through numerous such answers to gauge community opinion. As a result summarization becomes a prioritized task. However, there is a dearth of neural approaches for CQA summarization due to the lack of large scale annotated dataset. We create CQASUMM, the first annotated CQA summarization dataset by filtering the 4.4 million Yahoo! Answers L6 dataset. We sample threads where the best answer can double up as a reference and build hundred word summaries from them. We provide scripts1 to reconstruct the dataset and introduce the new task of Community Question Answering Summarization.\n Multi document summarization(MDS) has been widely studied using news corpora. However documents in CQA have higher variance and contradicting opinion. We compare the popular MDS techniques and evaluate their performance on our CQA corpora. We find that most MDS workflows are built for the entirely factual news corpora, whereas our corpus has a fair share of opinion based instances too. We therefore introduce OpinioSumm, a new MDS which outperforms the best baseline by 4.6% w.r.t ROUGE-1 score.",
"title": ""
},
{
"docid": "51a750fcc6cff4e51095aa80ce25c7d2",
"text": "We present an information-theoretic framework for understanding trade-offs in unsupervised learning of deep latent-variables models using variational inference. This framework emphasizes the need to consider latent-variable models along two dimensions: the ability to reconstruct inputs (distortion) and the communication cost (rate). We derive the optimal frontier of generative models in the two-dimensional rate-distortion plane, and show how the standard evidence lower bound objective is insufficient to select between points along this frontier. However, by performing targeted optimization to learn generative models with different rates, we are able to learn many models that can achieve similar generative performance but make vastly different trade-offs in terms of the usage of the latent variable. Through experiments on MNIST and Omniglot with a variety of architectures, we show how our framework sheds light on many recent proposed extensions to the variational autoencoder family.",
"title": ""
},
{
"docid": "3770720cff3a36596df097835f4f10a9",
"text": "As mobile computing technologies have been more powerful and inclusive in people’s daily life, the issue of mobile assisted language learning (MALL) has also been widely explored in CALL research. Many researches on MALL consider the emerging mobile technologies have considerable potentials for the effective language learning. This review study focuses on the investigation of newly emerging mobile technologies and their pedagogical applications for language teachers and learners. Recent research or review on mobile assisted language learning tends to focus on more detailed applications of newly emerging mobile technology, rather than has given a broader point focusing on types of mobile device itself. In this paper, I thus reviewed recent research and conference papers for the last decade, which utilized newly emerging and integrated mobile technology. Its pedagogical benefits and challenges are discussed.",
"title": ""
},
{
"docid": "b145483a8c91b846876f571f5a138f48",
"text": "Please cite this article in press as: N. Gra doi:10.1016/j.imavis.2008.04.014 This paper presents a novel approach for combining a set of registered images into a composite mosaic with no visible seams and minimal texture distortion. To promote execution speed in building large area mosaics, the mosaic space is divided into disjoint regions of image intersection based on a geometric criterion. Pair-wise image blending is performed independently in each region by means of watershed segmentation and graph cut optimization. A contribution of this work – use of watershed segmentation on image differences to find possible cuts over areas of low photometric difference – allows for searching over a much smaller set of watershed segments, instead of over the entire set of pixels in the intersection zone. Watershed transform seeks areas of low difference when creating boundaries of each segment. Constraining the overall cutting lines to be a sequence of watershed segment boundaries results in significant reduction of search space. The solution is found efficiently via graph cut, using a photometric criterion. The proposed method presents several advantages. The use of graph cuts over image pairs guarantees the globally optimal solution for each intersection region. The independence of such regions makes the algorithm suitable for parallel implementation. The separated use of the geometric and photometric criteria leads to reduced memory requirements and a compact storage of the input data. Finally, it allows the efficient creation of large mosaics, without user intervention. We illustrate the performance of the approach on image sequences with prominent 3-D content and moving objects. 2008 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "2f2c36452ab45c4234904d9b11f28eb7",
"text": "Bitcoin is a potentially disruptive new crypto-currency based on a decentralized opensource protocol which is gradually gaining popularity. Perhaps the most important question that will affect Bitcoin’s success, is whether or not it will be able to scale to support the high volume of transactions required from a global currency system. We investigate the restrictions on the rate of transaction processing in Bitcoin as a function of both the bandwidth available to nodes and the network delay, both of which lower the efficiency of Bitcoin’s transaction processing. The security analysis done by Bitcoin’s creator Satoshi Nakamoto [12] assumes that block propagation delays are negligible compared to the time between blocks—an assumption that does not hold when the protocol is required to process transactions at high rates. We improve upon the original analysis and remove this assumption. Using our results, we are able to give bounds on the number of transactions per second the protocol can handle securely. Building on previously published measurements by Decker and Wattenhofer [5], we show these bounds are currently more restrictive by an order of magnitude than the bandwidth needed to stream all transactions. We additionally show how currently planned improvements to the protocol, namely the use of transaction hashes in blocks (instead of complete transaction records), will dramatically alleviate these restrictions. Finally, we present an easily implementable modification to the way Bitcoin constructs its main data structure, the blockchain, that immensely improves security from attackers, especially when the network operates at high rates. This improvement allows for further increases in the number of transactions processed per second. We show that with our proposed modification, significant speedups can be gained in confirmation time of transactions as well. The block generation rate can be securely increased to more than one block per second – a 600 fold speedup compared to today’s rate, while still allowing the network to processes many transactions per second.",
"title": ""
},
{
"docid": "2265121606a423d581ca696a9b7cee31",
"text": "Heterochromatin protein 1 (HP1) was first described in Drosophila melanogaster as a heterochromatin associated protein with dose-dependent effect on gene silencing. The HP1 family is evolutionarily highly conserved and there are multiple members within the same species. The multi-functionality of HP1 reflects its ability to interact with diverse nuclear proteins, ranging from histones and transcriptional co-repressors to cohesion and DNA replication factors. As its name suggests, HP1 is well-known as a silencing protein found at pericentromeres and telomeres. In contrast to previous views that heterochromatin is transcriptionally inactive; noncoding RNAs transcribed from heterochromatic DNA repeats regulates the assembly and function of heterochromatin ranging from fission yeast to animals. Moreover, more recent progress has shed light on the paradoxical properties of HP1 in the nucleus and has revealed, unexpectedly, its existence in the euchromatin. Therefore, HP1 proteins might participate in both transcription repression in heterochromatin and euchromatin.",
"title": ""
},
{
"docid": "6b01a80b6502cb818024e0ac3b00114b",
"text": "BACKGROUND\nArithmetical skills are essential to the effective exercise of citizenship in a numerate society. How these skills are acquired, or fail to be acquired, is of great importance not only to individual children but to the organisation of formal education and its role in society.\n\n\nMETHOD\nThe evidence on the normal and abnormal developmental progression of arithmetical abilities is reviewed; in particular, evidence for arithmetical ability arising from innate specific cognitive skills (innate numerosity) vs. general cognitive abilities (the Piagetian view) is compared.\n\n\nRESULTS\nThese include evidence from infancy research, neuropsychological studies of developmental dyscalculia, neuroimaging and genetics. The development of arithmetical abilities can be described in terms of the idea of numerosity -- the number of objects in a set. Early arithmetic is usually thought of as the effects on numerosity of operations on sets such as set union. The child's concept of numerosity appears to be innate, as infants, even in the first week of life, seem to discriminate visual arrays on the basis of numerosity. Development can be seen in terms of an increasingly sophisticated understanding of numerosity and its implications, and in increasing skill in manipulating numerosities. The impairment in the capacity to learn arithmetic -- dyscalculia -- can be interpreted in many cases as a deficit in the concept in the child's concept of numerosity. The neuroanatomical bases of arithmetical development and other outstanding issues are discussed.\n\n\nCONCLUSIONS\nThe evidence broadly supports the idea of an innate specific capacity for acquiring arithmetical skills, but the effects of the content of learning, and the timing of learning in the course of development, requires further investigation.",
"title": ""
},
{
"docid": "3ba011d181a4644c8667b139c63f50ff",
"text": "Recent studies have suggested that positron emission tomography (PET) imaging with 68Ga-labelled DOTA-somatostatin analogues (SST) like octreotide and octreotate is useful in diagnosing neuroendocrine tumours (NETs) and has superior value over both CT and planar and single photon emission computed tomography (SPECT) somatostatin receptor scintigraphy (SRS). The aim of the present study was to evaluate the role of 68Ga-DOTA-1-NaI3-octreotide (68Ga-DOTANOC) in patients with SST receptor-expressing tumours and to compare the results of 68Ga-DOTA-D-Phe1-Tyr3-octreotate (68Ga-DOTATATE) in the same patient population. Twenty SRS were included in the study. Patients’ age (n = 20) ranged from 25 to 75 years (mean 55.4 ± 12.7 years). There were eight patients with well-differentiated neuroendocrine tumour (WDNET) grade1, eight patients with WDNET grade 2, one patient with poorly differentiated neuroendocrine carcinoma (PDNEC) grade 3 and one patient with mixed adenoneuroendocrine tumour (MANEC). All patients had two consecutive PET studies with 68Ga-DOTATATE and 68Ga-DOTANOC. All images were evaluated visually and maximum standardized uptake values (SUVmax) were also calculated for quantitative evaluation. On visual evaluation both tracers produced equally excellent image quality and similar body distribution. The physiological uptake sites of pituitary and salivary glands showed higher uptake in 68Ga-DOTATATE images. Liver and spleen uptake values were evaluated as equal. Both 68Ga-DOTATATE and 68Ga-DOTANOC were negative in 6 (30 %) patients and positive in 14 (70 %) patients. In 68Ga-DOTANOC images only 116 of 130 (89 %) lesions could be defined and 14 lesions were missed because of lack of any uptake. SUVmax values of lesions were significantly higher on 68Ga-DOTATATE images. Our study demonstrated that the images obtained by 68Ga-DOTATATE and 68Ga-DOTANOC have comparable diagnostic accuracy. However, 68Ga-DOTATATE seems to have a higher lesion uptake and may have a potential advantage.",
"title": ""
},
{
"docid": "0e54be77f69c6afbc83dfabc0b8b4178",
"text": "Spinal muscular atrophy (SMA) is a neurodegenerative disease characterized by loss of motor neurons in the anterior horn of the spinal cord and resultant weakness. The most common form of SMA, accounting for 95% of cases, is autosomal recessive proximal SMA associated with mutations in the survival of motor neurons (SMN1) gene. Relentless progress during the past 15 years in the understanding of the molecular genetics and pathophysiology of SMA has resulted in a unique opportunity for rational, effective therapeutic trials. The goal of SMA therapy is to increase the expression levels of the SMN protein in the correct cells at the right time. With this target in sight, investigators can now effectively screen potential therapies in vitro, test them in accurate, reliable animal models, move promising agents forward to clinical trials, and accurately diagnose patients at an early or presymptomatic stage of disease. A major challenge for the SMA community will be to prioritize and develop the most promising therapies in an efficient, timely, and safe manner with the guidance of the appropriate regulatory agencies. This review will take a historical perspective to highlight important milestones on the road to developing effective therapies for SMA.",
"title": ""
},
{
"docid": "6bbbddca9ba258afb25d6e8af9bfec82",
"text": "With the ever increasing popularity of electronic commerce, the evaluation of antecedents and of customer satisfaction have become very important for the cyber shopping store (CSS) and for researchers. The various models of customer satisfaction that researchers have provided so far are mostly based on the traditional business channels and thus may not be appropriate for CSSs. This research has employed case and survey methods to study the antecedents of customer satisfaction. Though case methods a research model with hypotheses is developed. And through survey methods, the relationships between antecedents and satisfaction are further examined and analyzed. We find five antecedents of customer satisfaction to be more appropriate for online shopping on the Internet. Among them homepage presentation is a new and unique antecedent which has not existed in traditional marketing.",
"title": ""
},
{
"docid": "df5ef1235844aa1593203f96cd2130bd",
"text": "It is generally well acknowledged that humans are capable of having a theory of mind (ToM) of others. We present here a model which borrows mechanisms from three dissenting explanations of how ToM develops and functions, and show that our model behaves in accordance with various ToM experiments (Wellman, Cross, & Watson, 2001; Leslie, German, & Polizzi, 2005).",
"title": ""
},
{
"docid": "ed23845ded235d204914bd1140f034c3",
"text": "We propose a general framework to learn deep generative models via Variational Gradient Flow (VGrow) on probability spaces. The evolving distribution that asymptotically converges to the target distribution is governed by a vector field, which is the negative gradient of the first variation of the f -divergence between them. We prove that the evolving distribution coincides with the pushforward distribution through the infinitesimal time composition of residual maps that are perturbations of the identity map along the vector field. The vector field depends on the density ratio of the pushforward distribution and the target distribution, which can be consistently learned from a binary classification problem. Connections of our proposed VGrow method with other popular methods, such as VAE, GAN and flow-based methods, have been established in this framework, gaining new insights of deep generative learning. We also evaluated several commonly used divergences, including KullbackLeibler, Jensen-Shannon, Jeffrey divergences as well as our newly discovered “logD” divergence which serves as the objective function of the logD-trick GAN. Experimental results on benchmark datasets demonstrate that VGrow can generate high-fidelity images in a stable and efficient manner, achieving competitive performance with stateof-the-art GANs. ∗Yuling Jiao (yulingjiaomath@whu.edu.cn) †Can Yang (macyang@ust.hk) 1 ar X iv :1 90 1. 08 46 9v 2 [ cs .L G ] 7 F eb 2 01 9",
"title": ""
}
] | scidocsrr |
90ac93734d1255e3fed9569138c05db8 | Generalizing the Convolution Operator to Extend CNNs to Irregular Domains | [
{
"docid": "be593352763133428b837f1c593f30cf",
"text": "Deep Learning’s recent successes have mostly relied on Convolutional Networks, which exploit fundamental statistical properties of images, sounds and video data: the local stationarity and multi-scale compositional structure, that allows expressing long range interactions in terms of shorter, localized interactions. However, there exist other important examples, such as text documents or bioinformatic data, that may lack some or all of these strong statistical regularities. In this paper we consider the general question of how to construct deep architectures with small learning complexity on general non-Euclidean domains, which are typically unknown and need to be estimated from the data. In particular, we develop an extension of Spectral Networks which incorporates a Graph Estimation procedure, that we test on large-scale classification problems, matching or improving over Dropout Networks with far less parameters to estimate.",
"title": ""
},
{
"docid": "645395d46f653358d942742711d50c0b",
"text": "Feature descriptors play a crucial role in a wide range of geometry analysis and processing applications, including shape correspondence, retrieval, and segmentation. In this paper, we propose ShapeNet, a generalization of the popular convolutional neural networks (CNN) paradigm to non-Euclidean manifolds. Our construction is based on a local geodesic system of polar coordinates to extract “patches”, which are then passed through a cascade of filters and linear and non-linear operators. The coefficients of the filters and linear combination weights are optimization variables that are learned to minimize a task-specific cost function. We use ShapeNet to learn invariant shape feature descriptors that significantly outperform recent state-of-the-art methods, and show that previous approaches such as heat and wave kernel signatures, optimal spectral descriptors, and intrinsic shape contexts can be obtained as particular configurations of ShapeNet. CR Categories: I.2.6 [Artificial Intelligence]: Learning— Connectionism and neural nets",
"title": ""
}
] | [
{
"docid": "0cd96187b257ee09060768650432fe6d",
"text": "Sustainable urban mobility is an important dimension in a Smart City, and one of the key issues for city sustainability. However, innovative and often costly mobility policies and solutions introduced by cities are liable to fail, if not combined with initiatives aimed at increasing the awareness of citizens, and promoting their behavioural change. This paper explores the potential of gamification mechanisms to incentivize voluntary behavioural changes towards sustainable mobility solutions. We present a service-based gamification framework, developed within the STREETLIFE EU Project, which can be used to develop games on top of existing services and systems within a Smart City, and discuss the empirical findings of an experiment conducted in the city of Rovereto on the effectiveness of gamification to promote sustainable urban mobility.",
"title": ""
},
{
"docid": "ee5b46719023b5dbae96997bbf9925b0",
"text": "The teaching of reading in different languages should be informed by an effective evidence base. Although most children will eventually become competent, indeed skilled, readers of their languages, the pre-reading (e.g. phonological awareness) and language skills that they bring to school may differ in systematic ways for different language environments. A thorough understanding of potential differences is required if literacy teaching is to be optimized in different languages. Here we propose a theoretical framework based on a psycholinguistic grain size approach to guide the collection of evidence in different countries. We argue that the development of reading depends on children's phonological awareness in all languages studied to date. However, we propose that because languages vary in the consistency with which phonology is represented in orthography, there are developmental differences in the grain size of lexical representations, and accompanying differences in developmental reading strategies across orthographies.",
"title": ""
},
{
"docid": "5387c752db7b4335a125df91372099b3",
"text": "We examine how people’s different uses of the Internet predict their later scores on a standard measure of depression, and how their existing social resources moderate these effects. In a longitudinal US survey conducted in 2001 and 2002, almost all respondents reported using the Internet for information, and entertainment and escape; these uses of the Internet had no impact on changes in respondents’ level of depression. Almost all respondents also used the Internet for communicating with friends and family, and they showed lower depression scores six months later. Only about 20 percent of this sample reported using the Internet to meet new people and talk in online groups. Doing so changed their depression scores depending on their initial levels of social support. Those having high or medium levels of social support showed higher depression scores; those with low levels of social support did not experience these increases in depression. Our results suggest that individual differences in social resources and people’s choices of how they use the Internet may account for the different outcomes reported in the literature.",
"title": ""
},
{
"docid": "91599bb49aef3e65ee158ced65277d80",
"text": "We introduce a general model for a network of quantum sensors, and we use this model to consider the following question: When can entanglement between the sensors, and/or global measurements, enhance the precision with which the network can measure a set of unknown parameters? We rigorously answer this question by presenting precise theorems proving that for a broad class of problems there is, at most, a very limited intrinsic advantage to using entangled states or global measurements. Moreover, for many estimation problems separable states and local measurements are optimal, and can achieve the ultimate quantum limit on the estimation uncertainty. This immediately implies that there are broad conditions under which simultaneous estimation of multiple parameters cannot outperform individual, independent estimations. Our results apply to any situation in which spatially localized sensors are unitarily encoded with independent parameters, such as when estimating multiple linear or nonlinear optical phase shifts in quantum imaging, or when mapping out the spatial profile of an unknown magnetic field. We conclude by showing that entangling the sensors can enhance the estimation precision when the parameters of interest are global properties of the entire network.",
"title": ""
},
{
"docid": "947bb564a2a4207d33ca545d8194add4",
"text": "Classical theories of the firm assume access to reliable signals to measure the causal impact of choice variables on profit. For advertising expenditure we show, using twenty-five online field experiments (representing $2.8 million) with major U.S. retailers and brokerages, that this assumption typically does not hold. Statistical evidence from the randomized trials is very weak because individual-level sales are incredibly volatile relative to the per capita cost of a campaign—a “small” impact on a noisy dependent variable can generate positive returns. A concise statistical argument shows that the required sample size for an experiment to generate sufficiently informative confidence intervals is typically in excess of ten million person-weeks. This also implies that heterogeneity bias (or model misspecification) unaccounted for by observational methods only needs to explain a tiny fraction of the variation in sales to severely bias estimates. The weak informational feedback means most firms cannot even approach profit maximization.",
"title": ""
},
{
"docid": "553ec50cb948fb96d96b5481ada71399",
"text": "Enormous amount of online information, available in legal domain, has made legal text processing an important area of research. In this paper, we attempt to survey different text summarization techniques that have taken place in the recent past. We put special emphasis on the issue of legal text summarization, as it is one of the most important areas in legal domain. We start with general introduction to text summarization, briefly touch the recent advances in single and multi-document summarization, and then delve into extraction based legal text summarization. We discuss different datasets and metrics used in summarization and compare performances of different approaches, first in general and then focused to legal text. we also mention highlights of different summarization techniques. We briefly cover a few software tools used in legal text summarization. We finally conclude with some future research directions.",
"title": ""
},
{
"docid": "577b9ea82dd60b394ad3024452986d96",
"text": "Financial fraud is an issue with far reaching consequences in the finance industry, government, corporate sectors, and for ordinary consumers. Increasing dependence on new technologies such as cloud and mobile computing in recent years has compounded the problem. Traditional methods involving manual detection are not only time consuming, expensive and inaccurate, but in the age of big data they are also impractical. Not surprisingly, financial institutions have turned to automated processes using statistical and computational methods. This paper presents a comprehensive review of financial fraud detection research using such data mining methods, with a particular focus on computational intelligence (CI)-based techniques. Over fifty scientific literature, primarily spanning the period 2004-2014, were analysed in this study; literature that reported empirical studies focusing specifically on CI-based financial fraud detection were considered in particular. Research gap was identified as none of the existing review articles addresses the association among fraud types, CIbased detection algorithms and their performance, as reported in the literature. We have presented a comprehensive classification as well as analysis of existing fraud detection literature based on key aspects such as detection algorithm used, fraud type investigated, and performance of the detection methods for specific financial fraud types. Some of the key issues and challenges associated with the current practices and potential future direction of research have also",
"title": ""
},
{
"docid": "338d3b05db192186bb6caf6f36904dd0",
"text": "The threat of malicious insiders to organizations is persistent and increasing. We examine 15 real cases of insider threat sabotage of IT systems to identify several key points in the attack time-line, such as when the insider clearly became disgruntled, began attack preparations, and carried out the attack. We also determine when the attack stopped, when it was detected, and when action was taken on the insider. We found that 7 of the insiders we studied clearly became disgruntled more than 28 days prior to attack, but 9 did not carry out malicious acts until less than a day prior to attack. Of the 15 attacks, 8 ended within a day, 12 were detected within a week, and in 10 cases action was taken on the insider within a month. This exercise is a proof-of-concept for future work on larger data sets, and in this paper we detail our study methods and results, discuss challenges we faced, and identify potential new research directions.",
"title": ""
},
{
"docid": "3256b2050c603ca16659384a0e98a22c",
"text": "In this paper, we propose a Hough transform-based method to identify low-contrast defects in unevenly illuminated images, and especially focus on the inspection of mura defects in liquid crystal display (LCD) panels. The proposed method works on 1-D gray-level profiles in the horizontal and vertical directions of the surface image. A point distinctly deviated from the ideal line of a profile can be identified as a defect one. A 1-D gray-level profile in the unevenly illuminated image results in a nonstationary line signal. The most commonly used technique for straight line detection in a noisy image is Hough transform (HT). The standard HT requires a sufficient number of points lie exactly on the same straight line at a given parameter resolution so that the accumulator will show a distinct peak in the parameter space. It fails to detect a line in a nonstationary signal. In the proposed HT scheme, the points that contribute to the vote do not have to lie on a line. Instead, a distance tolerance to the line sought is first given. Any point with the distance to the line falls within the tolerance will be accumulated by taking the distance as the voting weight. A fast search procedure to tighten the possible ranges of line parameters is also proposed for mura detection in LCD images.",
"title": ""
},
{
"docid": "e775fbbad557e2335268111ab7fc1875",
"text": "In recent times the rate at which information is being processed and shared through the internet has tremendously increased. Internet users are in need of systems and tools that will help them manage this information overload. Search engines and recommendation systems have been recently adopted to help solve this problem. The aim of this research is to model a spontaneous research paper recommender system that recommends serendipitous research papers from two large normally mismatched information spaces or domains using BisoNets. Set and graph theory methods were employed to model the problem, whereas text mining methodologies were used to develop nodes and links of the BisoNets. Nodes were constructed from keywords, while links between nodes were established through weighting that was determined from the co-occurrence of corresponding keywords in the same title and domain. Preliminary results from the word clouds indicates that there is no obvious relationship between the two domains. The strongest links in the established information networks can be exploited to display associations that can be discovered between the two matrices. Research paper recommender systems exploit these latent relationships to recommend serendipitous articles when Bisociative Knowledge Discovery techniques and methodologies are utilized appropriately.",
"title": ""
},
{
"docid": "9d849042d1775cf9008678f98f1a3452",
"text": "Nonuniform sampling can be utilized to achieve certain desirable results. Periodic nonuniform sampling can decrease the required sampling rate for signals. Random sampling can be used as a digital alias-free signal processing method in analog-to-digital conversion. In this paper, we first present the fractional spectrum estimation of signals that are bandlimited in the fractional Fourier domain based on the general periodic random sampling approach. To show the estimation effect, the unbiasedness, the variance, and the optimal estimation condition are analyzed. The reconstruction of the fractional spectrum from the periodic random samples is also proposed. Second, the effects of sampling jitters and observation errors on the performance of the fractional spectrum estimation are analyzed, where the new defined fractional characteristic function is used to compensate the estimation bias from sampling jitters. Furthermore, we investigate the fractional spectral analysis from two widely used random sampling schemes, i.e., simple random sampling and stratified random sampling. Finally, all of the analysis results are applied and verified using a radar signal processing system.",
"title": ""
},
{
"docid": "5ccda95046b0e5d1cfc345011b1e350d",
"text": "Considerable emphasis is currently placed on reducing healthcare-associated infection through improving hand hygiene compliance among healthcare professionals. There is also increasing discussion in the lay media of perceived poor hand hygiene compliance among healthcare staff. Our aim was to report the outcomes of a systematic search for peer-reviewed, published studies - especially clinical trials - that focused on hand hygiene compliance among healthcare professionals. Literature published between December 2009, after publication of the World Health Organization (WHO) hand hygiene guidelines, and February 2014, which was indexed in PubMed and CINAHL on the topic of hand hygiene compliance, was searched. Following examination of relevance and methodology of the 57 publications initially retrieved, 16 clinical trials were finally included in the review. The majority of studies were conducted in the USA and Europe. The intensive care unit emerged as the predominant focus of studies followed by facilities for care of the elderly. The category of healthcare worker most often the focus of the research was the nurse, followed by the healthcare assistant and the doctor. The unit of analysis reported for hand hygiene compliance was 'hand hygiene opportunity'; four studies adopted the 'my five moments for hand hygiene' framework, as set out in the WHO guidelines, whereas other papers focused on unique multimodal strategies of varying design. We concluded that adopting a multimodal approach to hand hygiene improvement intervention strategies, whether guided by the WHO framework or by another tested multimodal framework, results in moderate improvements in hand hygiene compliance.",
"title": ""
},
{
"docid": "4fc67f5a4616db0906b943d7f13c856d",
"text": "Overview. A blockchain is best understood in the model of state-machine replication [8], where a service maintains some state and clients invoke operations that transform the state and generate outputs. A blockchain emulates a “trusted” computing service through a distributed protocol, run by nodes connected over the Internet. The service represents or creates an asset, in which all nodes have some stake. The nodes share the common goal of running the service but do not necessarily trust each other for more. In a “permissionless” blockchain such as the one underlying the Bitcoin cryptocurrency, anyone can operate a node and participate through spending CPU cycles and demonstrating a “proof-of-work.” On the other hand, blockchains in the “permissioned” model control who participates in validation and in the protocol; these nodes typically have established identities and form a consortium. A report of Swanson compares the two models [9].",
"title": ""
},
{
"docid": "ecbdb56c52a59f26cf8e33fc533d608f",
"text": "The ethical nature of transformational leadership has been hotly debated. This debate is demonstrated in the range of descriptors that have been used to label transformational leaders including narcissistic, manipulative, and self-centred, but also ethical, just and effective. Therefore, the purpose of the present research was to address this issue directly by assessing the statistical relationship between perceived leader integrity and transformational leadership using the Perceived Leader Integrity Scale (PLIS) and the Multi-Factor Leadership Questionnaire (MLQ). In a national sample of 1354 managers a moderate to strong positive relationship was found between perceived integrity and the demonstration of transformational leadership behaviours. A similar relationship was found between perceived integrity and developmental exchange leadership. A systematic leniency bias was identified when respondents rated subordinates vis-à-vis peer ratings. In support of previous findings, perceived integrity was also found to correlate positively with leader and organisational effectiveness measures.",
"title": ""
},
{
"docid": "27464fdcd9a56975bf381773fd4da76d",
"text": "Although evidence with respect to its prevalence is mixed, it is clear that fathers perpetrate a serious proportion of filicide. There also seems to be a consensus that paternal filicide has attracted less research attention than its maternal counterpart and is therefore less well understood. National registries are a very rich source of data, but they generally provide limited information about the perpetrator as psychiatric, psychological and behavioral data are often lacking. This paper presents a fully documented case of a paternal filicide. Noteworthy is that two motives were present: spousal revenge as well as altruism. The choice of the victim was in line with emerging evidence indicating that children with disabilities in general and with autism in particular are frequent victims of filicide-suicide. Finally, a schizoid personality disorder was diagnosed. Although research is quite scarce on that matter, some research outcomes have showed an association between schizoid personality disorder and homicide and violence.",
"title": ""
},
{
"docid": "7eac260700c56178533ec687159ac244",
"text": "Chat robot, a computer program that simulates human conversation, or chat, through artificial intelligence an intelligence chat bot will be used to give information or answers to any question asked by user related to bank. It is more like a virtual assistant, people feel like they are talking with real person. They speak the same language we do, can answer questions. In banks, at user care centres and enquiry desks, human is insufficient and usually takes long time to process the single request which results in wastage of time and also reduce quality of user service. The primary goal of this chat bot is user can interact with mentioning their queries in plain English and the chat bot can resolve their queries with appropriate response in return The proposed system would help duplicate the user utility experience with one difference that employee and yet get the queries attended and resolved. It can extend daily life, by providing solutions to help desks, telephone answering systems, user care centers. This paper defines the dataset that we have prepared from FAQs of bank websites, architecture and methodology used for developing such chatbot. Also this paper discusses the comparison of seven ML classification algorithm used for getting the class of input to chat bot.",
"title": ""
},
{
"docid": "9c09cf2c1fd62e7d24f472e03b615017",
"text": "Summarization is the process of reducing a text document to create a summary that retains the most important points of the original document. Extractive summarizers work on the given text to extract sentences that best convey the message hidden in the text. Most extractive summarization techniques revolve around the concept of finding keywords and extracting sentences that have more keywords than the rest. Keyword extraction usually is done by extracting relevant words having a higher frequency than others, with stress on important ones'. Manual extraction or annotation of keywords is a tedious process brimming with errors involving lots of manual effort and time. In this paper, we proposed an algorithm to extract keyword automatically for text summarization in e-newspaper datasets. The proposed algorithm is compared with the experimental result of articles having the similar title in four different e-Newspapers to check the similarity and consistency in summarized results.",
"title": ""
},
{
"docid": "9cf81f7fc9fdfcf5718aba0a67b89a45",
"text": "Many modern games provide environments in which agents perform decision making at several levels of granularity. In the domain of real-time strategy games, an effective agent must make high-level strategic decisions while simultaneously controlling individual units in battle. We advocate reactive planning as a powerful technique for building multi-scale game AI and demonstrate that it enables the specification of complex, real-time agents in a unified agent architecture. We present several idioms used to enable authoring of an agent that concurrently pursues strategic and tactical goals, and an agent for playing the real-time strategy game StarCraft that uses these design patterns.",
"title": ""
},
{
"docid": "ce0f21b03d669b72dd954352e2c35ab1",
"text": "In this letter, a new technique is proposed for the design of a compact high-power low-pass rectangular waveguide filter with a wide spurious-free frequency behavior. Specifically, the new filter is intended for the suppression of the fundamental mode over a wide band in much higher power applications than the classical corrugated filter with the same frequency specifications. Moreover, the filter length is dramatically reduced when compared to alternative techniques previously considered.",
"title": ""
},
{
"docid": "9d7a67f2cd12a6fd033ad102fb9c526e",
"text": "We begin by pretraining the source task model, fS , using the task loss on the labeled source data. Next, we perform pixel-level adaptation using our image space GAN losses together with semantic consistency and cycle consistency losses. This yeilds learned parameters for the image transformations, GS!T and GT!S , image discriminators, DS and DT , as well as an initial setting of the task model, fT , which is trained using pixel transformed source images and the corresponding source pixel labels. Finally, we perform feature space adpatation in order to update the target semantic model, fT , to have features which are aligned between the source images mapped into target style and the real target images. During this phase, we learn the feature discriminator, Dfeat and use this to guide the representation update to fT . In general, our method could also perform phases 2 and 3 simultaneously, but this would require more GPU memory then available at the time of these experiments.",
"title": ""
}
] | scidocsrr |
65e66ad82fb578764ca436453dbc2756 | User acceptance of a G2B system: a case of electronic procurement system in Malaysia | [
{
"docid": "a4197ab8a70142ac331599c506996bc9",
"text": "This paper presents the findings of two studies that replicate previous work by Fred Davis on the subject of perceived usefulness, ease of use, and usage of information technology. The two studies focus on evaluating the psychometric properties of the ease of use and usefulness scales, while examining the relationship between ease of use, usefulness, and system usage. Study 1 provides a strong assessment of the convergent validity of the two scales by examining heterogeneous user groups dealing with heterogeneous implementations of messaging technology. In addition, because one might expect users to share similar perspectives about voice and electronic mail, the study also represents a strong test of discriminant validity. In this study a total of 118 respondents from 10 different organizations were surveyed for their attitudes toward two messaging technologies: voice and electronic mail. Study 2 complements the approach taken in Study 1 by focusing on the ability to demonstrate discriminant validity. Three popular software applications (WordPerfect, Lotus 1-2-3, and Harvard Graphics) were examined based on the expectation that they would all be rated highly on both scales. In this study a total of 73 users rated the three packages in terms of ease of use and usefulness. The results of the studies demonstrate reliable and valid scales for measurement of perceived ease of use and usefulness. In addition, the paper tests the relationships between ease of use, usefulness, and usage using structural equation modelling. The results of this model are consistent with previous research for Study 1, suggesting that usefulness is an important determinant of system use. For Study 2 the results are somewhat mixed, but indicate the importance of both ease of use and usefulness. Differences in conditions of usage are explored to explain these findings.",
"title": ""
},
{
"docid": "669fcb6f51aa8883d037e1de18b1513f",
"text": "Purpose – The purpose of this paper is to present a multi-faceted summary and classification of the existing literature in the field of quality of service for e-government and outline the main components of a quality model for e-government services. Design/methodology/approach – Starting with fundamental quality principles the paper examines and analyzes 36 different quality approaches concerning public sector services, e-services in general and more specifically e-government services. Based on the dimensions measured by each approach the paper classifies the approaches and concludes on the basic factors needed for the development of a complete quality model of e-government services. Findings – Based on the classification of literature approaches, the paper provides information about the main components of a quality model that may be used for the continuous monitoring and measuring of public e-services’ quality. The classification forms the basis for answering questions that must be addressed by the quality model, such as: What to assess?; Who will perform the assessment? and How the assessment will be done? Practical implications – This model can be used by the management of public organizations in order to measure and monitor the quality of e-services delivered to citizens. Originality/value – The results of the work presented in this paper form the basis for the development of a quality model for e-government services.",
"title": ""
}
] | [
{
"docid": "0ccf6d97ff8a6b664a73056ec8e39dc7",
"text": "1. Resilient healthcare This integrative review focuses on the methodological strategies employed by studies on resilient healthcare. Resilience engineering (RE), which involves the study of coping with complexity (Woods and Hollnagel, 2006) in modern socio-technical systems (Bergström et al., 2015); emerged in about 2000. The RE discipline is quickly developing, and it has been applied to healthcare, aviation, the petrochemical industry, nuclear power plants, railways, manufacturing, natural disasters and other fields (Righi et al., 2015). The term ‘resilient healthcare’ (RHC) refers to the application of the concepts and methods of RE in the healthcare field, specifically regarding patient safety (Hollnagel et al., 2013a). Instead of the traditional risk management approach based on retrospective analyses of errors, RHC focuses on ‘everyday clinical work’, specifically on the ways it unfolds in practice (Braithwaite et al., 2017). Wears et al. (2015) defined RHC as follows. The ability of the health care system (a clinic, a ward, a hospital, a county) to adjust its functioning prior to, during, or following events (changes, disturbances or opportunities), and thereby sustain required operations under both expected and unexpected conditions. (p. xxvii) After more than a decade of theoretical development in the field of resilience, scholars are beginning to identify its methodological challenges (Woods, 2015; Nemeth and Herrera, 2015). The lack of welldefined constructs to conceptualize resilience challenges the ability to operationalize those constructs in empirical research (Righi et al., 2015; Wiig and Fahlbruch, forthcoming). Further, studying complexity requires challenging methodological designs to obtain evidence about the tested constructs to inform and further develop theory (Bergström and Dekker, 2014). It is imperative to gather emerging knowledge on applied methodology in empirical RHC research to map and discuss the methodological strategies in the healthcare domain. The insights gained might create and refine methodological designs to enable further development of RHC concepts and theory. This study aimed to describe and synthesize the methodological strategies currently applied in https://doi.org/10.1016/j.ssci.2018.08.025 Received 10 October 2016; Received in revised form 13 August 2018; Accepted 27 August 2018 ⁎ Corresponding author. E-mail addresses: siv.hilde.berg@sus.no (S.H. Berg), Kristin.akerjordet@uis.no (K. Akerjordet), mirjam.ekstedt@lnu.se (M. Ekstedt), karina.aase@uis.no (K. Aase). Safety Science 110 (2018) 300–312 Available online 05 September 2018 0925-7535/ © 2018 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/BY-NC-ND/4.0/). T empirical RHC research in terms of the empirical fields, applied research designs, methods, analytical strategies, main topics and data collection sources at different systemic levels, and to assess the quality of those studies. We argue that one implication of studying sociotechnical systems is that multiple levels in a given system must be addressed, as proposed by, for example, Rasmussen (1997). As such, this study synthesized the ways that RHC studies have approached empirical data at various systemic levels. 2. Methodology in resilient healthcare research ‘Research methodology’ is a strategy or plan of action that shapes the choices and uses of various methods and links them to desired outcomes (Crotty, 1998). 
This study broadly used the term ‘methodological strategy’ to denote an observed study’s overall research design, data collection sources, data collection methods and analytical methods at different systemic levels. The methodological issues discussed in the RHC literature to date have concerned the methods used to study everyday clinical practice, healthcare complexity and the operationalization of the constructs measuring resilience. 2.1. Methods of studying healthcare complexity RE research is characterized by its study of complexities. In a review of the rationale behind resilience research, Bergström et al. (2015) found that RE researchers typically justified their research by referring to the complexity of modern socio-technical systems that makes them inherently risky. Additionally, in the healthcare field, references are made to the complex adaptive system (CAS) perspective (Braithwaite et al., 2013). CAS emerged from complexity theory, and it takes a dynamic approach to human and nonhuman agents (Urry, 2003). Healthcare is part of a complex socio-technical system and an example of a CAS comprising professionals, patients, managers, policymakers and technologies, all of which interact with and rely on trade-offs and adjustments to succeed in everyday clinical work (Braithwaite et al., 2013). Under complexity theory, complex systems are viewed as open systems that interact with their environments, implying a need to understand the systems’ environments before understanding the systems. Because these environments are complex, no standard methodology can provide a complete understanding (Bergström and Dekker, 2014), and the opportunities for experimental research are limited. Controlled studies might not be able to identify the complex interconnections and multiple variables that influence care; thus, non-linear methods are necessary to describe and understand those systems. Consequently, research on complexity imposes methodological challenges related to the development of valid evidence (Braithwaite et al., 2013). It has been argued that triangulation is necessary to study complex work settings in order to reveal actual phenomena and minimize bias leading to misinterpretation (Nemeth et al., 2011). Methodological triangulation has been suggested, as well as data triangulation, as a strategic way to increase the internal and external validity of RE/RHC research (Nemeth et al., 2011; Mendonca, 2008). Data triangulation involves collecting data from various sources, such as reports, policy documents, multiple professional groups and patient feedback, whereas methodological triangulation involves combining different qualitative methods or mixing qualitative and quantitative methods. Multiple methods have been suggested for research on everyday clinical practice and healthcare complexity. Hollnagel (2014) suggested qualitative methods, such as qualitative interviews, field observations and organizational development techniques (e.g. appreciative inquiry and cooperative inquiry). Nemeth and Herrera (2015) proposed observation in actual settings as a core value of the RE field of practice. Drawing on the methods of cognitive system engineering, Nemeth et al. (2011) described the uses of cognitive task analysis (CTA) to study resilience. CTA comprises numerous methods, one of which is the critical decision method (CDM). CDM is a retrospective interview in which subjects are asked about critical events and decisions. 
Other proposed methods for studying complex work settings were work domain analysis (WDA), process tracing, artefact analysis and rapid prototyping. System modelling, using methods such as trend analysis, cluster analysis, social network analysis and log linear modelling, has been proposed as a way to study resilience from a socio-technical/CAS perspective (Braithwaite et al., 2013; Anderson et al., 2013). The functional resonance analysis method (FRAM) has been employed to study interactions and dependencies as they develop in specific situations. FRAM is presented as a way to study how complex and dynamic sociotechnical systems work (Hollnagel, 2012). In addition, Leveson et al. (2006) suggested STAMP, a model of accident causation based on systems theory, as a method to analyse resilience. 2.2. Operationalization of resilience A vast amount of the RE literature has been devoted to developing theories on resilience, emphasizing that the domain is in a theory development stage (Righi et al., 2015). This process of theory development is reflected in the diverse definitions and indicators of resilience proposed over the past decade e.g. 3, (Woods, 2006, 2011; Wreathall, 2006). Numerous constructs have been developed, such as resilient abilities (Woods, 2011; Hollnagel, 2008, 2010; Nemeth et al., 2008; Hollnagel et al., 2013b), Safety-II (Hollnagel, 2014), Work-as-done (WAD) and Work-as-imagined (WAI) (Hollnagel et al., 2015), and performance variability (Hollnagel, 2014). The operationalization of these constructs has been a topic of discussion. According to Westrum (2013), one challenge to determining measures of resilience in healthcare relates to the characteristics of resilience as a family of related ideas rather than as a single construct. The applied definitions of ‘resilience’ in RE research have focused on a given system’s adaptive capacities and its abilities to adopt or absorb disturbing conditions. This conceptual understanding of resilience has been applied to RHC [6, p. xxvii]. By understanding resilience as a ‘system’s ability’, the healthcare system is perceived as a separate ontological category. The system is regarded as a unit that might have individual goals, actions or abilities not necessarily shared by its members. Therefore, RHC is greater than the sum of its members’ individual actions, which is a perspective found in methodological holism (Ylikoski, 2012). The challenge is to operationalize the study of ‘the system as a whole’. Some scholars have advocated on behalf of locating the empirical basis of resilience by studying individual performances and aggregating those data to develop a theory of resilience (Mendonca, 2008; Furniss et al., 2011). This approach uses the strategy of finding the properties of the whole (the healthcare system) within the parts at the micro level, which is found in methodological individualism. The WAD and performance variability constructs bring resilience closer to an empirical ground by fr",
"title": ""
},
{
"docid": "a86114aeee4c0bc1d6c9a761b50217d4",
"text": "OBJECTIVE\nThe purpose of this study was to investigate the effect of antidepressant treatment on hippocampal volumes in patients with major depression.\n\n\nMETHOD\nFor 38 female outpatients, the total time each had been in a depressive episode was divided into days during which the patient was receiving antidepressant medication and days during which no antidepressant treatment was received. Hippocampal gray matter volumes were determined by high resolution magnetic resonance imaging and unbiased stereological measurement.\n\n\nRESULTS\nLonger durations during which depressive episodes went untreated with antidepressant medication were associated with reductions in hippocampal volume. There was no significant relationship between hippocampal volume loss and time depressed while taking antidepressant medication or with lifetime exposure to antidepressants.\n\n\nCONCLUSIONS\nAntidepressants may have a neuroprotective effect during depression.",
"title": ""
},
{
"docid": "f033c98f752c8484dc616425ebb7ce5b",
"text": "Ethnography is the study of social interactions, behaviours, and perceptions that occur within groups, teams, organisations, and communities. Its roots canbe traced back to anthropological studies of small, rural (andoften remote) societies thatwereundertaken in the early 1900s, when researchers such as Bronislaw Malinowski and Alfred Radcliffe-Brown participated in these societies over long periods and documented their social arrangements and belief systems. This approach was later adopted by members of the Chicago School of Sociology (for example, Everett Hughes, Robert Park, Louis Wirth) and applied to a variety of urban settings in their studies of social life. The central aim of ethnography is to provide rich, holistic insights into people’s views and actions, as well as the nature (that is, sights, sounds) of the location they inhabit, through the collection of detailed observations and interviews. As Hammersley states, “The task [of ethnographers] is to document the culture, the perspectives and practices, of the people in these settings.The aim is to ‘get inside’ theway each groupof people sees theworld.” Box 1 outlines the key features of ethnographic research. Examples of ethnographic researchwithin thehealth services literature include Strauss’s study of achieving and maintaining order between managers, clinicians, and patients within psychiatric hospital settings; Taxis and Barber’s exploration of intravenous medication errors in acute care hospitals; Costello’s examination of death and dying in elderly care wards; and Østerlund’s work on doctors’ and nurses’ use of traditional and digital information systems in their clinical communications. Becker and colleagues’ Boys in White, an ethnographic study of medical education in the late 1950s, remains a classic in this field. Newer developments in ethnographic inquiry include auto-ethnography, in which researchers’ own thoughts andperspectives fromtheir social interactions form the central element of a study; meta-ethnography, in which qualitative research texts are analysed and synthesised to empirically create new insights and knowledge; and online (or virtual) ethnography, which extends traditional notions of ethnographic study from situated observation and face to face researcher-participant interaction to technologically mediated interactions in online networks and communities.",
"title": ""
},
{
"docid": "cc12a6ccdfbe2242eb4f9f72d5a17cd2",
"text": "Software is everywhere, from mission critical systems such as industrial power stations, pacemakers and even household appliances. This growing dependence on technology and the increasing complexity software has serious security implications as it means we are potentially surrounded by software that contain exploitable vulnerabilities. These challenges have made binary analysis an important area of research in computer science and has emphasized the need for building automated analysis systems that can operate at scale, speed and efficacy; all while performing with the skill of a human expert. Though great progress has been made in this area of research, there remains limitations and open challenges to be addressed. Recognizing this need, DARPA sponsored the Cyber Grand Challenge (CGC), a competition to showcase the current state of the art in systems that perform; automated vulnerability detection, exploit generation and software patching. This paper is a survey of the vulnerability detection and exploit generation techniques, underlying technologies and related works of two of the winning systems Mayhem and Mechanical Phish. Keywords—Cyber reasoning systems, automated binary analysis, automated exploit generation, dynamic symbolic execution, fuzzing",
"title": ""
},
{
"docid": "d76980f3a0b4e0dab21583b75ee16318",
"text": "We present a gold standard annotation of syntactic dependencies in the English Web Treebank corpus using the Stanford Dependencies standard. This resource addresses the lack of a gold standard dependency treebank for English, as well as the limited availability of gold standard syntactic annotations for informal genres of English text. We also present experiments on the use of this resource, both for training dependency parsers and for evaluating dependency parsers like the one included as part of the Stanford Parser. We show that training a dependency parser on a mix of newswire and web data improves performance on that type of data without greatly hurting performance on newswire text, and therefore gold standard annotations for non-canonical text can be valuable for parsing in general. Furthermore, the systematic annotation effort has informed both the SD formalism and its implementation in the Stanford Parser’s dependency converter. In response to the challenges encountered by annotators in the EWT corpus, we revised and extended the Stanford Dependencies standard, and improved the Stanford Parser’s dependency converter.",
"title": ""
},
{
"docid": "3af338a01d1419189b7706375feec0c2",
"text": "Like E. Paul Torrance, my colleagues and I have tried to understand the nature of creativity, to assess it, and to improve instruction by teaching for creativity as well as teaching students to think creatively. This article reviews our investment theory of creativity, propulsion theory of creative contributions, and some of the data we have collected with regard to creativity. It also describes the propulsion theory of creative contributions. Finally, it draws",
"title": ""
},
{
"docid": "1657df28bba01b18fb26bb8c823ad4b4",
"text": "Come with us to read a new book that is coming recently. Yeah, this is a new coming book that many people really want to read will you be one of them? Of course, you should be. It will not make you feel so hard to enjoy your life. Even some people think that reading is a hard to do, you must be sure that you can do it. Hard will be felt when you have no ideas about what kind of book to read. Or sometimes, your reading material is not interesting enough.",
"title": ""
},
{
"docid": "a9a7916c7cb3d2c56457b0cc5cb0471c",
"text": "In this paper, we propose a novel approach to integrating inertial sensor data into a pose-graph free dense mapping algorithm that we call GravityFusion. A range of dense mapping algorithms have recently been proposed, though few integrate inertial sensing. We build on ElasticFusion, a particularly elegant approach that fuses color and depth information directly into small surface patches called surfels. Traditional inertial integration happens at the level of camera motion, however, a pose graph is not available here. Instead, we present a novel approach that incorporates the gravity measurements directly into the map: Each surfel is annotated by a gravity measurement, and that measurement is updated with each new observation of the surfel. We use mesh deformation, the same mechanism used for loop closure in ElasticFusion, to enforce a consistent gravity direction among all the surfels. This eliminates drift in two degrees of freedom, avoiding the typical curving of maps that are particularly pronounced in long hallways, as we qualitatively show in the experimental evaluation.",
"title": ""
},
{
"docid": "585c589cdab52eaa63186a70ac81742d",
"text": "BACKGROUND\nThere has been a rapid increase in the use of technology-based activity trackers to promote behavior change. However, little is known about how individuals use these trackers on a day-to-day basis or how tracker use relates to increasing physical activity.\n\n\nOBJECTIVE\nThe aims were to use minute level data collected from a Fitbit tracker throughout a physical activity intervention to examine patterns of Fitbit use and activity and their relationships with success in the intervention based on ActiGraph-measured moderate to vigorous physical activity (MVPA).\n\n\nMETHODS\nParticipants included 42 female breast cancer survivors randomized to the physical activity intervention arm of a 12-week randomized controlled trial. The Fitbit One was worn daily throughout the 12-week intervention. ActiGraph GT3X+ accelerometer was worn for 7 days at baseline (prerandomization) and end of intervention (week 12). Self-reported frequency of looking at activity data on the Fitbit tracker and app or website was collected at week 12.\n\n\nRESULTS\nAdherence to wearing the Fitbit was high and stable, with a mean of 88.13% of valid days over 12 weeks (SD 14.49%). Greater adherence to wearing the Fitbit was associated with greater increases in ActiGraph-measured MVPA (binteraction=0.35, P<.001). Participants averaged 182.6 minutes/week (SD 143.9) of MVPA on the Fitbit, with significant variation in MVPA over the 12 weeks (F=1.91, P=.04). The majority (68%, 27/40) of participants reported looking at their tracker or looking at the Fitbit app or website once a day or more. Changes in Actigraph-measured MVPA were associated with frequency of looking at one's data on the tracker (b=-1.36, P=.07) but not significantly associated with frequency of looking at one's data on the app or website (P=.36).\n\n\nCONCLUSIONS\nThis is one of the first studies to explore the relationship between use of a commercially available activity tracker and success in a physical activity intervention. A deeper understanding of how individuals engage with technology-based trackers may enable us to more effectively use these types of trackers to promote behavior change.\n\n\nTRIAL REGISTRATION\nClinicalTrials.gov NCT02332876; https://clinicaltrials.gov/ct2/show/NCT02332876?term=NCT02332876 &rank=1 (Archived by WebCite at http://www.webcitation.org/6wplEeg8i).",
"title": ""
},
{
"docid": "2ce789863ff0d3359f741adddb09b9f1",
"text": "The largest source of sound events is web videos. Most videos lack sound event labels at segment level, however, a significant number of them do respond to text queries, from a match found using metadata by search engines. In this paper we explore the extent to which a search query can be used as the true label for detection of sound events in videos. We present a framework for large-scale sound event recognition on web videos. The framework crawls videos using search queries corresponding to 78 sound event labels drawn from three datasets. The datasets are used to train three classifiers, and we obtain a prediction on 3.7 million web video segments. We evaluated performance using the search query as true label and compare it with human labeling. Both types of ground truth exhibited close performance, to within 10%, and similar performance trend with increasing number of evaluated segments. Hence, our experiments show potential for using search query as a preliminary true label for sound event recognition in web videos.",
"title": ""
},
{
"docid": "38f19c7087d5529e2f6b84beca42de3a",
"text": "We investigate the design challenges of constructing effective and efficient neural sequence labeling systems, by reproducing twelve neural sequence labeling models, which include most of the state-of-the-art structures, and conduct a systematic model comparison on three benchmarks (i.e. NER, Chunking, and POS tagging). Misconceptions and inconsistent conclusions in existing literature are examined and clarified under statistical experiments. In the comparison and analysis process, we reach several practical conclusions which can be useful to practitioners.",
"title": ""
},
{
"docid": "7fd21ee95850fec1f1e00b766eebbc06",
"text": "HP’s StoreAll with Express Query is a scalable commercial file archiving product that offers sophisticated file metadata management and search capabilities [3]. A new REST API enables fast, efficient searching to find all files that meet a given set of metadata criteria and the ability to tag files with custom metadata fields. The product brings together two significant systems: a scale out file system and a metadata database based on LazyBase [10]. In designing and building the combined product, we identified several real-world issues in using a pipelined database system in a distributed environment, and overcame several interesting design challenges that were not contemplated by the original research prototype. This paper highlights our experiences.",
"title": ""
},
{
"docid": "3d9f1288235847f6c4e9b2c0966c51e9",
"text": "Over the past decade, many laboratories have begun to explore brain-computer interface (BCI) technology as a radically new communication option for those with neuromuscular impairments that prevent them from using conventional augmentative communication methods. BCI's provide these users with communication channels that do not depend on peripheral nerves and muscles. This article summarizes the first international meeting devoted to BCI research and development. Current BCI's use electroencephalographic (EEG) activity recorded at the scalp or single-unit activity recorded from within cortex to control cursor movement, select letters or icons, or operate a neuroprosthesis. The central element in each BCI is a translation algorithm that converts electrophysiological input from the user into output that controls external devices. BCI operation depends on effective interaction between two adaptive controllers, the user who encodes his or her commands in the electrophysiological input provided to the BCI, and the BCI which recognizes the commands contained in the input and expresses them in device control. Current BCI's have maximum information transfer rates of 5-25 b/min. Achievement of greater speed and accuracy depends on improvements in signal processing, translation algorithms, and user training. These improvements depend on increased interdisciplinary cooperation between neuroscientists, engineers, computer programmers, psychologists, and rehabilitation specialists, and on adoption and widespread application of objective methods for evaluating alternative methods. The practical use of BCI technology depends on the development of appropriate applications, identification of appropriate user groups, and careful attention to the needs and desires of individual users. BCI research and development will also benefit from greater emphasis on peer-reviewed publications, and from adoption of standard venues for presentations and discussion.",
"title": ""
},
{
"docid": "1d0a84f55e336175fa60d3fa9eec9664",
"text": "In this paper, we propose a novel method for image inpainting based on a Deep Convolutional Generative Adversarial Network (DCGAN). We define a loss function consisting of two parts: (1) a contextual loss that preserves similarity between the input corrupted image and the recovered image, and (2) a perceptual loss that ensures a perceptually realistic output image. Given a corrupted image with missing values, we use back-propagation on this loss to map the corrupted image to a smaller latent space. The mapped vector is then passed through the generative model to predict the missing content. The proposed framework is evaluated on the CelebA and SVHN datasets for two challenging inpainting tasks with random 80% corruption and large blocky corruption. Experiments show that our method can successfully predict semantic information in the missing region and achieve pixel-level photorealism, which is impossible by almost all existing methods.",
"title": ""
},
{
"docid": "1590742097219610170bd62eb3799590",
"text": "In this paper, we develop a vision-based system that employs a combined RGB and depth descriptor to classify hand gestures. The method is studied for a human-machine interface application in the car. Two interconnected modules are employed: one that detects a hand in the region of interaction and performs user classification, and another that performs gesture recognition. The feasibility of the system is demonstrated using a challenging RGBD hand gesture data set collected under settings of common illumination variation and occlusion.",
"title": ""
},
{
"docid": "36867b8478a8bd6be79902efd5e9d929",
"text": "Most state-of-the-art commercial storage virtualization systems focus only on one particular storage attribute, capacity. This paper describes the design, implementation and evaluation of a multi-dimensional storage virtualization system called Stonehenge, which is able to virtualize a cluster-based physical storage system along multiple dimensions, including bandwidth, capacity, and latency. As a result, Stonehenge is able to multiplex multiple virtual disks, each with a distinct bandwidth, capacity, and latency attribute, on a single physical storage system as if they are separate physical disks. A key enabling technology for Stonehenge is an efficiency-aware real-time disk scheduling algorithm called dual-queue disk scheduling, which maximizes disk utilization efficiency while providing Quality of Service (QoS) guarantees. To optimize disk utilization efficiency, Stonehenge exploits run-time measurements extensively, for admission control, computing latency-derived bandwidth requirement, and predicting disk service time.",
"title": ""
},
{
"docid": "c743c63848ca96f0eb47090ea648d897",
"text": "Cyber-Physical Systems (CPSs) are the future generation of highly connected embedded systems having applications in diverse domains including Oil and Gas. Employing Product Line Engineering (PLE) is believed to bring potential benefits with respect to reduced cost, higher productivity, higher quality, and faster time-to-market. However, relatively few industrial field studies are reported regarding the application of PLE to develop large-scale systems, and more specifically CPSs. In this paper, we report about our experiences and insights gained from investigating the application of model-based PLE at a large international organization developing subsea production systems (typical CPSs) to manage the exploitation of oil and gas production fields. We report in this paper 1) how two systematic domain analyses (on requirements engineering and product configuration/derivation) were conducted to elicit CPS PLE requirements and challenges, 2) key results of the domain analysis (commonly observed in other domains), and 3) our initial experience of developing and applying two Model Based System Engineering (MBSE) PLE solution to address some of the requirements and challenges elicited during the domain analyses.",
"title": ""
},
{
"docid": "cbf10563c5eb251f765b93be554b7439",
"text": "BACKGROUND\nAlthough fine-needle aspiration (FNA) is a safe and accurate diagnostic procedure for assessing thyroid nodules, it has limitations in diagnosing follicular neoplasms due to its relatively high false-positive rate. The purpose of the present study was to evaluate the diagnostic role of core-needle biopsy (CNB) for thyroid nodules with follicular neoplasm (FN) in comparison with FNA.\n\n\nMETHODS\nA series of 107 patients (24 men, 83 women; mean age, 47.4 years) from 231 FNAs and 107 patients (29 men, 78 women; mean age, 46.3 years) from 186 CNBs with FN readings, all of whom underwent surgery, from October 2008 to December 2013 were retrospectively analyzed. The false-positive rate, unnecessary surgery rate, and malignancy rate for the FNA and CNB patients according to the final diagnosis following surgery were evaluated.\n\n\nRESULTS\nThe CNB showed a significantly lower false-positive and unnecessary surgery rate than the FNA (4.7% versus 30.8%, 3.7% versus 26.2%, p < 0.001, respectively). In the FNA group, 33 patients (30.8%) had non-neoplasms, including nodular hyperplasia (n = 32) and chronic lymphocytic thyroiditis (n = 1). In the CNB group, 5 patients (4.7%) had non-neoplasms, all of which were nodular hyperplasia. Moreover, the CNB group showed a significantly higher malignancy rate than FNA (57.9% versus 28%, p < 0.001).\n\n\nCONCLUSIONS\nCNB showed a significantly lower false-positive rate and a higher malignancy rate than FNA in diagnosing FN. Therefore, CNB could minimize unnecessary surgery and provide diagnostic confidence when managing patients with FN to perform surgery.",
"title": ""
},
{
"docid": "1738a8ccb1860e5b85e2364f437d4058",
"text": "We describe a new algorithm for finding the hypothesis in a recognition lattice that is expected to minimize the word er ror rate (WER). Our approach thus overcomes the mismatch between the word-based performance metric and the standard MAP scoring paradigm that is sentence-based, and that can le ad to sub-optimal recognition results. To this end we first find a complete alignment of all words in the recognition lattice, identifying mutually supporting and competing word hypotheses . Finally, a new sentence hypothesis is formed by concatenating the words with maximal posterior probabilities. Experimental ly, this approach leads to a significant WER reduction in a large vocab ulary recognition task.",
"title": ""
},
{
"docid": "59e49a798fed8479df98435003f4647e",
"text": "The recent advancement of motion recognition using Microsoft Kinect stimulates many new ideas in motion capture and virtual reality applications. Utilizing a pattern recognition algorithm, Kinect can determine the positions of different body parts from the user. However, due to the use of a single-depth camera, recognition accuracy drops significantly when the parts are occluded. This hugely limits the usability of applications that involve interaction with external objects, such as sport training or exercising systems. The problem becomes more critical when Kinect incorrectly perceives body parts. This is because applications have limited information about the recognition correctness, and using those parts to synthesize body postures would result in serious visual artifacts. In this paper, we propose a new method to reconstruct valid movement from incomplete and noisy postures captured by Kinect. We first design a set of measurements that objectively evaluates the degree of reliability on each tracked body part. By incorporating the reliability estimation into a motion database query during run time, we obtain a set of similar postures that are kinematically valid. These postures are used to construct a latent space, which is known as the natural posture space in our system, with local principle component analysis. We finally apply frame-based optimization in the space to synthesize a new posture that closely resembles the true user posture while satisfying kinematic constraints. Experimental results show that our method can significantly improve the quality of the recognized posture under severely occluded environments, such as a person exercising with a basketball or moving in a small room.",
"title": ""
}
] | scidocsrr |
ade1510581160486c98f3131a7f24f81 | Theia: A Fast and Scalable Structure-from-Motion Library | [
{
"docid": "bf1bd9bdbe8e4a93e814ea9dc91e6eb3",
"text": "A new robust matching method is proposed. The progressive sample consensus (PROSAC) algorithm exploits the linear ordering defined on the set of correspondences by a similarity function used in establishing tentative correspondences. Unlike RANSAC, which treats all correspondences equally and draws random samples uniformly from the full set, PROSAC samples are drawn from progressively larger sets of top-ranked correspondences. Under the mild assumption that the similarity measure predicts correctness of a match better than random guessing, we show that PROSAC achieves large computational savings. Experiments demonstrate it is often significantly faster (up to more than hundred times) than RANSAC. For the derived size of the sampled set of correspondences as a function of the number of samples already drawn, PROSAC converges towards RANSAC in the worst case. The power of the method is demonstrated on wide-baseline matching problems.",
"title": ""
},
{
"docid": "c1797ddf6dd23374e17490d09d6e70b2",
"text": "This paper presents a general solution to the determination of the pose of a perspective camera with unknown focal length from images of four 3D reference points. Our problem is a generalization of the P3P and P4P problems previously developed for fully calibrated cameras. Given four 2D-to-3D correspondences, we estimate camera position, orientation and recover the camera focal length. We formulate the problem and provide a minimal solution from four points by solving a system of algebraic equations. We compare the Hidden variable resultant and Grobner basis techniques for solving the algebraic equations of our problem. By evaluating them on synthetic and on real-data, we show that the Grobner basis technique provides stable results.",
"title": ""
}
] | [
{
"docid": "97353be7c54dd2ded69815bf93545793",
"text": "In recent years, with the rapid development of deep learning, it has achieved great success in the field of image recognition. In this paper, we applied the convolution neural network (CNN) on supermarket commodity identification, contributing to the study of supermarket commodity identification. Different from the QR code identification of supermarket commodity, our work applied the CNN using the collected images of commodity as input. This method has the characteristics of fast and non-contact. In this paper, we mainly did the following works: 1. Collected a small dataset of supermarket goods. 2. Built Different convolutional neural network frameworks in caffe and trained the dataset using the built networks. 3. Improved train methods by finetuning the trained model.",
"title": ""
},
{
"docid": "4287db8deb3c4de5d7f2f5695c3e2e70",
"text": "The brain is complex and dynamic. The spatial scales of interest to the neurobiologist range from individual synapses (approximately 1 microm) to neural circuits (centimeters); the timescales range from the flickering of channels (less than a millisecond) to long-term memory (years). Remarkably, fluorescence microscopy has the potential to revolutionize research on all of these spatial and temporal scales. Two-photon excitation (2PE) laser scanning microscopy allows high-resolution and high-sensitivity fluorescence microscopy in intact neural tissue, which is hostile to traditional forms of microscopy. Over the last 10 years, applications of 2PE, including microscopy and photostimulation, have contributed to our understanding of a broad array of neurobiological phenomena, including the dynamics of single channels in individual synapses and the functional organization of cortical maps. Here we review the principles of 2PE microscopy, highlight recent applications, discuss its limitations, and point to areas for future research and development.",
"title": ""
},
{
"docid": "58a75098bc32cb853504a91ddc53e1e8",
"text": "In this study, forest type mapping data set taken from UCI (University of California, Irvine) machine learning repository database has been classified using different machine learning algorithms including Multilayer Perceptron, k-NN, J48, Naïve Bayes, Bayes Net and KStar. In this dataset, there are 27 spectral values showing the type of three different forests (Sugi, Hinoki, mixed broadleaf). As the performance measure criteria, the classification accuracy has been used to evaluate the classifier algorithms and then to select the best method. The best classification rates have been obtained 90.43% with MLP, and 89.1013% with k-NN classifier (for k=5). As can be seen from the obtained results, the machine learning algorithms including MLP and k-NN classifier have obtained very promising results in the classification of forest type with 27 spectral features.",
"title": ""
},
{
"docid": "8171294a51cb3a83c43243ed96948c3d",
"text": "The multiple measurement vector (MMV) problem addresses the identification of unknown input vectors that share common sparse support. Even though MMV problems have been traditionally addressed within the context of sensor array signal processing, the recent trend is to apply compressive sensing (CS) due to its capability to estimate sparse support even with an insufficient number of snapshots, in which case classical array signal processing fails. However, CS guarantees the accurate recovery in a probabilistic manner, which often shows inferior performance in the regime where the traditional array signal processing approaches succeed. The apparent dichotomy between the probabilistic CS and deterministic sensor array signal processing has not been fully understood. The main contribution of the present article is a unified approach that revisits the link between CS and array signal processing first unveiled in the mid 1990s by Feng and Bresler. The new algorithm, which we call compressive MUSIC, identifies the parts of support using CS, after which the remaining supports are estimated using a novel generalized MUSIC criterion. Using a large system MMV model, we show that our compressive MUSIC requires a smaller number of sensor elements for accurate support recovery than the existing CS methods and that it can approach the optimal -bound with finite number of snapshots even in cases where the signals are linearly dependent.",
"title": ""
},
{
"docid": "7af9293fbe12f3e859ee579d0f8739a5",
"text": "We present the findings from a Dutch field study of 30 outsourcing deals totaling to more than 100 million Euro, where both customers and corresponding IT-outsourcing providers participated. The main objective of the study was to examine from a number of well-known factors whether they discriminate between IT-outsourcing success and failure in the early phase of service delivery and to determine their impact on the chance on a successful deal. We investigated controllable factors to increase the odds during sourcing and rigid factors as a warning sign before closing a deal. Based on 250 interviews we collected 28 thousand data points. From the data and the perceived failure or success of the closed deals we investigated the discriminative power of the determinants (ex post). We found three statistically significant controllable factors that discriminated in an early phase between failure and success. They are: working according to the transition plan, demand management and, to our surprise, communication within the supplier organisation (so not between client and supplier). These factors also turned out to be the only significant factors for a (logistic) model predicting the chance of a successful IT-outsourcing. Improving demand management and internal communication at the supplier increases the odds the most. Sticking to the transition plan only modestly. Other controllable factors were not significant in our study. They are managing the business case, transfer of staff or assets, retention of expertise and communication within the client organisation. Of the rigid factors, the motive to outsource, cultural differences, and the type of work were insignificant. The motive of the supplier was significant: internal motivations like increasing profit margins or business volume decreased the chance of success while external motivations like increasing market share or becoming a player increased the success rate. From the data we inferred that the degree of experience with sourcing did not show to be a convincing factor of success. Hiring sourcing consultants worked contra-productive: it lowered chances of success.",
"title": ""
},
{
"docid": "b6d71f472848de18eadff0944eab6191",
"text": "Traditional approaches for object discovery assume that there are common characteristics among objects, and then attempt to extract features specific to objects in order to discriminate objects from background. However, the assumption “common features” may not hold, considering different variations between and within objects. Instead, we look at this problem from a different angle: if we can identify background regions, then the rest should belong to foreground. In this paper, we propose to model background to localize possible object regions. Our method is based on the observations: (1) background has limited categories, such as sky, tree, water, ground, etc., and can be easier to recognize, while there are millions of objects in our world with different shapes, colors and textures; (2) background is occluded because of foreground objects. Thus, we can localize objects based on voting from fore/background occlusion boundary. Our contribution lies: (1) we use graph-based image segmentation to yield high quality segments, which effectively leverages both flat segmentation and hierarchical segmentation approaches; (2) we model background to infer and rank object hypotheses. More specifically, we use background appearance and discriminative patches around fore/background boundary to build the background model. The experimental results show that our method can generate good quality object proposals and rank them where objects are covered highly within a small pool of proposed regions. © 2016 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "c4caf2968f7f2509b199d8d0ce5eec2d",
"text": "for competition that is based on information, their ability to exploit intangible assets has become far more decisive than their ability to invest in and manage physical assets. Several years ago, in recognition of this change, we introduced a concept we called the balanced scorecard. The balanced scorecard supplemented traditional fi nancial measures with criteria that measured performance from three additional perspectives – those of customers, internal business processes, and learning and growth. (See the exhibit “Translating Vision and Strategy: Four Perspectives.”) It therefore enabled companies to track fi nancial results while simultaneously monitoring progress in building the capabilities and acquiring the intangible assets they would need for future growth. The scorecard wasn’t Editor’s Note: In 1992, Robert S. Kaplan and David P. Norton’s concept of the balanced scorecard revolutionized conventional thinking about performance metrics. By going beyond traditional measures of fi nancial performance, the concept has given a generation of managers a better understanding of how their companies are really doing. These nonfi nancial metrics are so valuable mainly because they predict future fi nancial performance rather than simply report what’s already happened. This article, fi rst published in 1996, describes how the balanced scorecard can help senior managers systematically link current actions with tomorrow’s goals, focusing on that place where, in the words of the authors, “the rubber meets the sky.” Using the Balanced Scorecard as a Strategic Management System",
"title": ""
},
{
"docid": "734fc66c7c745498ca6b2b7fc6780919",
"text": "In this paper, we investigate the use of an unsupervised label clustering technique and demonstrate that it enables substantial improvements in visual relationship prediction accuracy on the Person in Context (PIC) dataset. We propose to group object labels with similar patterns of relationship distribution in the dataset into fewer categories. Label clustering not only mitigates both the large classification space and class imbalance issues, but also potentially increases data samples for each clustered category. We further propose to incorporate depth information as an additional feature into the instance segmentation model. The additional depth prediction path supplements the relationship prediction model in a way that bounding boxes or segmentation masks are unable to deliver. We have rigorously evaluated the proposed techniques and performed various ablation analysis to validate the benefits of them.",
"title": ""
},
{
"docid": "471f4399e42aa0b00effac824a309ad6",
"text": "Resource management in Cloud Computing has been dominated by system-level virtual machines to enable the management of resources using a coarse grained approach, largely in a manner independent from the applications running on these infrastructures. However, in such environments, although different types of applications can be running, the resources are delivered equally to each one, missing the opportunity to manage the available resources in a more efficient and application driven way. So, as more applications target managed runtimes, high level virtualization is a relevant abstraction layer that has not been properly explored to enhance resource usage, control, and effectiveness. We propose a VM economics model to manage cloud infrastructures, governed by a quality-of-execution (QoE) metric and implemented by an extended virtual machine. The Adaptive and Resource-Aware Java Virtual Machine (ARA-JVM) is a cluster-enabled virtual execution environment with the ability to monitor base mechanisms (e.g. thread cheduling, garbage collection, memory or network consumptions) to assess application's performance and reconfigure these mechanisms in runtime according to previously defined resource allocation policies. Reconfiguration is driven by incremental gains in quality-of-execution (QoE), used by the VM economics model to balance relative resource savings and perceived performance degradation. Our work in progress, aims to allow cloud providers to exchange resource slices among virtual machines, continually addressing where those resources are required, while being able to determine where the reduction will be more economically effective, i.e., will contribute in lesser extent to performance degradation.",
"title": ""
},
{
"docid": "e388d63d917358d6c3733c0b2e598511",
"text": "This paper integrates theory, ethnography, and collaborative artwork to explore improvisational activity as both topic and tool of multidisciplinary HCI inquiry. Building on theories of improvisation drawn from art, music, HCI and social science, and two ethnographic studies based on interviews, participant observation and collaborative art practice, we seek to elucidate the improvisational nature of practice in both art and ordinary action, including human-computer interaction. We identify five key features of improvisational action -- reflexivity, transgression, tension, listening, and interdependence -- and show how these can deepen and extend both linear and open-ended methodologies in HCI and design. We conclude by highlighting collaborative engagement based on 'intermodulation' as a tool of multidisciplinary inquiry for HCI research and design.",
"title": ""
},
{
"docid": "002b890e5a9065027bc8749487b208e7",
"text": "The Manuka rendering architecture has been designed in the spirit of the classic reyes rendering architecture: to enable the creation of visually rich computer generated imagery for visual effects in movie production. Following in the footsteps of reyes over the past 30 years, this means supporting extremely complex geometry, texturing, and shading. In the current generation of renderers, it is essential to support very accurate global illumination as a means to naturally tie together different assets in a picture.\n This is commonly achieved with Monte Carlo path tracing, using a paradigm often called shade on hit, in which the renderer alternates tracing rays with running shaders on the various ray hits. The shaders take the role of generating the inputs of the local material structure, which is then used by path-sampling logic to evaluate contributions and to inform what\n further rays to cast through the scene. We propose a shade before hit paradigm instead and minimise I/O strain on the system, leveraging locality of reference by running pattern generation shaders before we execute light transport simulation by path sampling.\n We describe a full architecture built around this approach, featuring spectral light transport and a flexible implementation of multiple importance sampling (mis), resulting in a system able to support a comparable amount of extensibility to what made the reyes rendering architecture successful over many decades.",
"title": ""
},
{
"docid": "aeadbf476331a67bec51d5d6fb6cc80b",
"text": "Gamification, an emerging idea for using game-design elements and principles to make everyday tasks more engaging, is permeating many different types of information systems. Excitement surrounding gamification results from its many potential organizational benefits. However, little research and design guidelines exist regarding gamified information systems. We therefore write this commentary to call upon information systems scholars to investigate the design and use of gamified information systems from a variety of disciplinary perspectives and theories, including behavioral economics, psychology, social psychology, information systems, etc. We first explicate the idea of gamified information systems, provide real-world examples of successful and unsuccessful systems, and based on a synthesis of the available literature, present a taxonomy of gamification design elements. We then develop a framework for research and design: its main theme is to create meaningful engagement for users, that is, gamified information systems should be designed to address the dual goals of instrumental and experiential outcomes. Using this framework, we develop a set of design principles and research questions, using a running case to illustrate some of our ideas. We conclude with a summary of opportunities for IS researchers to extend our knowledge of gamified information systems, and at the same time, advance",
"title": ""
},
{
"docid": "cda6d8c94602170e2534fc29973ecff8",
"text": "In 1912, Max Wertheimer published his paper on phi motion, widely recognized as the start of Gestalt psychology. Because of its continued relevance in modern psychology, this centennial anniversary is an excellent opportunity to take stock of what Gestalt psychology has offered and how it has changed since its inception. We first introduce the key findings and ideas in the Berlin school of Gestalt psychology, and then briefly sketch its development, rise, and fall. Next, we discuss its empirical and conceptual problems, and indicate how they are addressed in contemporary research on perceptual grouping and figure-ground organization. In particular, we review the principles of grouping, both classical (e.g., proximity, similarity, common fate, good continuation, closure, symmetry, parallelism) and new (e.g., synchrony, common region, element and uniform connectedness), and their role in contour integration and completion. We then review classic and new image-based principles of figure-ground organization, how it is influenced by past experience and attention, and how it relates to shape and depth perception. After an integrated review of the neural mechanisms involved in contour grouping, border ownership, and figure-ground perception, we conclude by evaluating what modern vision science has offered compared to traditional Gestalt psychology, whether we can speak of a Gestalt revival, and where the remaining limitations and challenges lie. A better integration of this research tradition with the rest of vision science requires further progress regarding the conceptual and theoretical foundations of the Gestalt approach, which is the focus of a second review article.",
"title": ""
},
{
"docid": "2b00f2b02fa07cdd270f9f7a308c52c5",
"text": "A noninvasive and easy-operation measurement of the heart rate has great potential in home healthcare. We present a simple and high running efficiency method for measuring heart rate from a video. By only tracking one feature point which is selected from a small ROI (Region of Interest) in the head area, we extract trajectories of this point in both X-axis and Y-axis. After a series of processes including signal filtering, interpolation, the Independent Component Analysis (ICA) is used to obtain a periodic signal, and then the heart rate can be calculated. We evaluated on 10 subjects and compared to a commercial heart rate measuring instrument (YUYUE YE680B) and achieved high degree of agreement. A running time comparison experiment to the previous proposed motion-based method is carried out and the result shows that the time cost is greatly reduced in our method.",
"title": ""
},
{
"docid": "2fb41c981494c285f663f74e1dae6299",
"text": "OMNIGLOT is a dataset containing 1,623 hand-written characters from 50 various alphabets. Each character is represented by about 20 images that makes the problem very challenging. The dataset is split into 24,345 training datapoints and 8,070 test images. We randomly pick 1,345 training examples for validation. During training we applied dynamic binarization of data similarly to dynamic MNIST.",
"title": ""
},
{
"docid": "869748c038d81976938b50652827f89c",
"text": "Complex elbow fractures are exceedingly challenging to treat. Treatment of severe distal humeral fractures fails because of either displacement or nonunion at the supracondylar level or stiffness resulting from prolonged immobilization. Coronal shear fractures of the capitellum and trochlea are difficult to repair and may require extensile exposure. Olecranon fracture-dislocations are complex fractures of the olecranon associated with subluxation or dislocation of the radial head and/or the coronoid process. The radioulnar relationship usually is preserved in anterior but disrupted in posterior fracture-dislocations. A skeletal distractor can be useful in facilitating reduction. Coronoid fractures can be classified according to whether the fracture involves the tip, the anteromedial facet, or the base (body) of the coronoid. Anteromedial coronoid fractures are actually varus posteromedial rotatory fracture subluxations and are often serious injuries. These patterns of injury predict associated injuries and instability as well as surgical approach and treatment. The radial head is the bone most commonly fractured in the adult elbow. If the coronoid is fractured, the radial head becomes a critical factor in elbow stability. Its role becomes increasingly important as other soft-tissue and bony constraints are compromised. Articular injury to the radial head is commonly more severe than noted on plain radiographs. Fracture fragments are often anterior. Implants applied to the surface of the radial head must be placed in a safe zone.",
"title": ""
},
{
"docid": "7e1438d99cf737335fbdc871ecaa1486",
"text": "Based on LDA(Latent Dirichlet Allocation) topic model, a generative model for multi-document summarization, namely Titled-LDA that simultaneously models the content of documents and the titles of document is proposed. This generative model represents each document with a mixture of topics, and extends these approaches to title modeling by allowing the mixture weights for topics to be determined by the titles of the document. In the mixing stage, the algorithm can learn the weight in an adaptive asymmetric learning way based on two kinds of information entropies. In this way, the final model incorporated the title information and the content information appropriately, which helped the performance of summarization. The experiments showed that the proposed algorithm achieved better performance compared the other state-of-the-art algorithms on DUC2002 corpus.",
"title": ""
},
{
"docid": "1d9b50bf7fa39c11cca4e864bbec5cf3",
"text": "FPGA-based embedded soft vector processors can exceed the performance and energy-efficiency of embedded GPUs and DSPs for lightweight deep learning applications. For low complexity deep neural networks targeting resource constrained platforms, we develop optimized Caffe-compatible deep learning library routines that target a range of embedded accelerator-based systems between 4 -- 8 W power budgets such as the Xilinx Zedboard (with MXP soft vector processor), NVIDIA Jetson TK1 (GPU), InForce 6410 (DSP), TI EVM5432 (DSP) as well as the Adapteva Parallella board (custom multi-core with NoC). For MNIST (28×28 images) and CIFAR10 (32×32 images), the deep layer structure is amenable to MXP-enhanced FPGA mappings to deliver 1.4 -- 5× higher energy efficiency than all other platforms. Not surprisingly, embedded GPU works better for complex networks with large image resolutions.",
"title": ""
},
{
"docid": "16b8a948e76a04b1703646d5e6111afe",
"text": "Nanotechnology offers many potential benefits to cancer research through passive and active targeting, increased solubility/bioavailablility, and novel therapies. However, preclinical characterization of nanoparticles is complicated by the variety of materials, their unique surface properties, reactivity, and the task of tracking the individual components of multicomponent, multifunctional nanoparticle therapeutics in in vivo studies. There are also regulatory considerations and scale-up challenges that must be addressed. Despite these hurdles, cancer research has seen appreciable improvements in efficacy and quite a decrease in the toxicity of chemotherapeutics because of 'nanotech' formulations, and several engineered nanoparticle clinical trials are well underway. This article reviews some of the challenges and benefits of nanomedicine for cancer therapeutics and diagnostics.",
"title": ""
},
{
"docid": "acc960b2fd1066efce4655da837213f4",
"text": "0957-4174/$ see front matter 2013 Elsevier Ltd. A http://dx.doi.org/10.1016/j.eswa.2012.12.082 ⇑ Corresponding author. Tel.: +562 978 4834. E-mail addresses: goberreu@ing.uchile.cl (G. Ober (J.D. Velásquez). URL: http://wi.dii.uchile.cl/ (J.D. Velásquez). Plagiarism detection is of special interest to educational institutions, and with the proliferation of digital documents on the Web the use of computational systems for such a task has become important. While traditional methods for automatic detection of plagiarism compute the similarity measures on a document-to-document basis, this is not always possible since the potential source documents are not always available. We do text mining, exploring the use of words as a linguistic feature for analyzing a document by modeling the writing style present in it. The main goal is to discover deviations in the style, looking for segments of the document that could have been written by another person. This can be considered as a classification problem using self-based information where paragraphs with significant deviations in style are treated as outliers. This so-called intrinsic plagiarism detection approach does not need comparison against possible sources at all, and our model relies only on the use of words, so it is not language specific. We demonstrate that this feature shows promise in this area, achieving reasonable results compared to benchmark models. 2013 Elsevier Ltd. All rights reserved.",
"title": ""
}
] | scidocsrr |
e5d837e197c7527a53d1a4487e340db0 | Social media and loneliness: Why an Instagram picture may be worth more than a thousand Twitter words | [
{
"docid": "96bb4155000096c1cba6285ad82c9a4d",
"text": "0747-5632/$ see front matter 2011 Elsevier Ltd. A doi:10.1016/j.chb.2011.10.002 ⇑ Corresponding author. Tel.: +65 6790 6636; fax: + E-mail addresses: leecs@ntu.edu.sg (C.S. Lee), malo 1 Tel.: +65 67905772; fax: +65 6791 5214. Recent events indicate that sharing news in social media has become a phenomenon of increasing social, economic and political importance because individuals can now participate in news production and diffusion in large global virtual communities. Yet, knowledge about factors influencing news sharing in social media remains limited. Drawing from the uses and gratifications (U&G) and social cognitive theories (SCT), this study explored the influences of information seeking, socializing, entertainment, status seeking and prior social media sharing experience on news sharing intention. A survey was designed and administered to 203 students in a large local university. Results from structural equation modeling (SEM) analysis revealed that respondents who were driven by gratifications of information seeking, socializing, and status seeking were more likely to share news in social media platforms. Prior experience with social media was also a significant determinant of news sharing intention. Implications and directions for future work are discussed. 2011 Elsevier Ltd. All rights reserved.",
"title": ""
}
] | [
{
"docid": "f9876540ce148d7b27bab53839f1bf19",
"text": "Recent research endeavors have shown the potential of using feed-forward convolutional neural networks to accomplish fast style transfer for images. In this work, we take one step further to explore the possibility of exploiting a feed-forward network to perform style transfer for videos and simultaneously maintain temporal consistency among stylized video frames. Our feed-forward network is trained by enforcing the outputs of consecutive frames to be both well stylized and temporally consistent. More specifically, a hybrid loss is proposed to capitalize on the content information of input frames, the style information of a given style image, and the temporal information of consecutive frames. To calculate the temporal loss during the training stage, a novel two-frame synergic training mechanism is proposed. Compared with directly applying an existing image style transfer method to videos, our proposed method employs the trained network to yield temporally consistent stylized videos which are much more visually pleasant. In contrast to the prior video style transfer method which relies on time-consuming optimization on the fly, our method runs in real time while generating competitive visual results.",
"title": ""
},
{
"docid": "f5b3519d4ec0fd7f9cb67bf409bec5ac",
"text": "The AECOO industry is highly fragmented; therefore, efficient information sharing and exchange between various players are evidently needed. Furthermore, the information about facility components should be managed throughout the lifecycle and be easily accessible for all players in the AECOO industry. BIM is emerging as a method of creating, sharing, exchanging and managing the information throughout the lifecycle between all the stakeholders. RFID, on the other hand, has emerged as an automatic data collection and information storage technology, and has been used in different applications in AECOO. This research proposes permanently attaching RFID tags to facility components where the memory of the tags is populated with accumulated lifecycle information of the components taken from a standard BIM database. This information is used to enhance different processes throughout the lifecycle. A conceptual RFID-based system structure and data storage/retrieval design are elaborated. To explore the technical feasibility of the proposed approach, two case studies have been implemented and tested.",
"title": ""
},
{
"docid": "14fe4e2fb865539ad6f767b9fc9c1ff5",
"text": "BACKGROUND\nFetal tachyarrhythmia may result in low cardiac output and death. Consequently, antiarrhythmic treatment is offered in most affected pregnancies. We compared 3 drugs commonly used to control supraventricular tachycardia (SVT) and atrial flutter (AF).\n\n\nMETHODS AND RESULTS\nWe reviewed 159 consecutive referrals with fetal SVT (n=114) and AF (n=45). Of these, 75 fetuses with SVT and 36 with AF were treated nonrandomly with transplacental flecainide (n=35), sotalol (n=52), or digoxin (n=24) as a first-line agent. Prenatal treatment failure was associated with an incessant versus intermittent arrhythmia pattern (n=85; hazard ratio [HR]=3.1; P<0.001) and, for SVT, with fetal hydrops (n=28; HR=1.8; P=0.04). Atrial flutter had a lower rate of conversion to sinus rhythm before delivery than SVT (HR=2.0; P=0.005). Cardioversion at 5 and 10 days occurred in 50% and 63% of treated SVT cases, respectively, but in only 25% and 41% of treated AF cases. Sotalol was associated with higher rates of prenatal AF termination than digoxin (HR=5.4; P=0.05) or flecainide (HR=7.4; P=0.03). If incessant AF/SVT persisted to day 5 (n=45), median ventricular rates declined more with flecainide (-22%) and digoxin (-13%) than with sotalol (-5%; P<0.001). Flecainide (HR=2.1; P=0.02) and digoxin (HR=2.9; P=0.01) were also associated with a higher rate of conversion of fetal SVT to a normal rhythm over time. No serious drug-related adverse events were observed, but arrhythmia-related mortality was 5%.\n\n\nCONCLUSION\nFlecainide and digoxin were superior to sotalol in converting SVT to a normal rhythm and in slowing both AF and SVT to better-tolerated ventricular rates and therefore might be considered first to treat significant fetal tachyarrhythmia.",
"title": ""
},
{
"docid": "1557392e8482bafe53eb50fccfd60157",
"text": "A common practice among servers in restaurants is to give their dining parties an unexpected gift in the form of candy when delivering the check. Two studies were conducted to evaluate the impact of this gesture on the tip percentages received by servers. Study 1 found that customers who received a small piece of chocolate along with the check tipped more than did customers who received no candy. Study 2 found that tips varied with the amount of the candy given to the customers as well as with the manner in which it was offered. It is argued that reciprocity is a stronger explanation for these findings than either impression management or the good mood effect.",
"title": ""
},
{
"docid": "cb8dc6127632eb50f1a51a2ea115ad83",
"text": "This paper proposes a new design of a SPOKE-type permanent magnet brushless direct current (BLDC) motor by using pushing magnet. A numerical analysis is developed to calculate the maximum value of air-gap flux density. First, the analytical model of the SPOKE-type motor was established, and Laplace equations of magnetic scalar potential and a series of boundary conditions were given. Then, the analytical expressions of magnet field strength and magnet flux density were obtained in the air gap produced by ferrite permanent magnets. The developed analytical model was obtained by solving the magnetic scalar potential. Finally, the air-gap field distribution and back-electromotive force of spoke type machine was analyzed. The analysis works for internal rotor motor topologies, and either radial or parallel magnetized permanent magnets. This paper validates results of the analytical model by finite-element analysis as well as with the experimental analysis for SPOKE-type BLDC motors.",
"title": ""
},
{
"docid": "0724e800d88d1d7cd1576729f975b09a",
"text": "Neural networks are investigated for predicting the magnitude of the largest seismic event in the following month based on the analysis of eight mathematically computed parameters known as seismicity indicators. The indicators are selected based on the Gutenberg-Richter and characteristic earthquake magnitude distribution and also on the conclusions drawn by recent earthquake prediction studies. Since there is no known established mathematical or even empirical relationship between these indicators and the location and magnitude of a succeeding earthquake in a particular time window, the problem is modeled using three different neural networks: a feed-forward Levenberg-Marquardt backpropagation (LMBP) neural network, a recurrent neural network, and a radial basis function (RBF) neural network. Prediction accuracies of the models are evaluated using four different statistical measures: the probability of detection, the false alarm ratio, the frequency bias, and the true skill score or R score. The models are trained and tested using data for two seismically different regions: Southern California and the San Francisco bay region. Overall the recurrent neural network model yields the best prediction accuracies compared with LMBP and RBF networks. While at the present earthquake prediction cannot be made with a high degree of certainty this research provides a scientific approach for evaluating the short-term seismic hazard potential of a region.",
"title": ""
},
{
"docid": "28ccab4b6b7c9c70bc07e4b3219d99d4",
"text": "The Wireless Networking After Next (WNaN) radio is a handheld-sized radio that delivers unicast, multicast, and disruption-tolerant traffic in networks of hundreds of radios. This paper addresses scalability of the network from the routing control traffic point of view. Starting from a basic version of an existing mobile ad-hoc network (MANET) proactive link-state routing protocol, we describe the enhancements that were necessary to provide good performance in these conditions. We focus on techniques to reduce control traffic while maintaining route integrity. We present simulation results from 250-node mobile networks demonstrating the effectiveness of the routing mechanisms. Any MANET with design parameters and constraints similar to the WNaN radio will benefit from these improvements.",
"title": ""
},
{
"docid": "56e7fba1f9730b85c52403c2ddad9417",
"text": "probe such as a category name, and that a norm is This work was supported by the Office of Naval Research under Grant NR 197-058, and by grants from the National Science and Engineering Research Council of Canada and from the Social Sciences and Humanities Research Council of Canada (410-68-0583). Dale Griffin, Leslie McPherson, and Daniel Read provided valuable assistance. Many friends and colleagues commented helpfully on earlier versions. The comments of Anne Treisman and Amos Tversky were especially influential. The preparation of the manuscript benefited greatly from a workshop entitled \"The Priority of the Specific,\" organized by Lee Brooks and Larry Jacoby at Elora, Ontario, in June of 1983. Correspondence concerning this article should be addressed to Daniel Kahneman, who is now at the Department of Psychology, University of California, Berkeley, California 94720, or Dale Miller, Department of Psychology, Simon Fraser University, Burnaby, British Columbia V5A 1S6, Canada. produced by aggregating the set of recruited representations. The assumptions of distributed activation and rapid aggregation are not unique to this treatment. Related ideas have been advanced in theories of adaptation level (Helson, 1964; Restle, 1978a, 1978b) and other theories of context effects in judgment (N. H. Anderson, 1981; Birnbaum, 1982; Parducci, 1965, 1974); in connectionist models of distributed processing (Hinton & Anderson, 1981; McClelland, 1985; McClelland & Rumelhart, 1985); and in holographic models of memory (Eich, 1982; Metcalfe Eich, 1985; Murdock, 1982). The present analysis relates most closely to exemplar models of concepts (Brooks, 1978, in press; Hintzman, in press; Hintzman & Ludlam, 1980; Jacoby & Brooks, 1984; Medin & Schaffer, 1978; Smith & Medin, 1981). We were drawn to exemplar models in large part because they provide the only satisfactory account of the norms evoked by questions about arbitrary categories, such as \"Is this person friendlier than most other people on your block?\" Exemplar models assume that several representations are evoked at once and that activation varies in degree. They do not require the representations of exemplars to be accessible to conscious and explicit retrieval, and they allow representations to be fragmentary. The present model of norms adopts all of these assumptions. In addition, we propose that events are sometimes compared to counterfactual alternatives that are constructed ad hoc rather than retrieved from past experience. These ideas extend previous work on the availability and simulation heuristics (Kahneman & Tversky, 1982; Tversky & Kahneman, 1973). A constructive process must be invoked to explain some cases of surprise. Thus, an observer who knows Marty's affection for his aunt and his propensity for emotional displays may be surprised if Marty does not cry at her funeral—even if Marty rarely cries and if no one else cries at that funeral. Surprise is produced in such cases by the contrast between a stimulus and a counterfactual alternative that is constructed, not retrieved. Constructed elements also play a crucial role in counterfactual emotions such as frustration or regret, in which reality is compared to an imagined view of what might have been (Kahneman & Tversky, 1982). At the core of the present analysis are the rules and constraints that govern the spontaneous retrieval or construction of alter-",
"title": ""
},
{
"docid": "8218ce22ac1cccd73b942a184c819d8c",
"text": "The extended SMAS facelift techniques gave plastic surgeons the ability to correct the nasolabial fold and medial cheek. Retensioning the SMAS transmits the benefit through the multilinked fibrous support system of the facial soft tissues. The effect is to provide a recontouring of the ptotic soft tissues, which fills out the cheeks as it reduces nasolabial fullness. Indirectly, dermal tightening occurs to a lesser but more natural degree than with traditional facelift surgery. Although details of current techniques may be superseded, the emerging surgical principles are becoming more clearly defined. This article presents these principles and describes the author's current surgical technique.",
"title": ""
},
{
"docid": "dd01611bcbc8a50fbe20bdc676326ce5",
"text": "PURPOSE\nWe evaluated the accuracy of magnetic resonance imaging in determining the size and shape of localized prostate cancer.\n\n\nMATERIALS AND METHODS\nThe subjects were 114 men who underwent multiparametric magnetic resonance imaging before radical prostatectomy with patient specific mold processing of the specimen from 2013 to 2015. T2-weighted images were used to contour the prostate capsule and cancer suspicious regions of interest. The contours were used to design and print 3-dimensional custom molds, which permitted alignment of excised prostates with magnetic resonance imaging scans. Tumors were reconstructed in 3 dimensions from digitized whole mount sections. Tumors were then matched with regions of interest and the relative geometries were compared.\n\n\nRESULTS\nOf the 222 tumors evident on whole mount sections 118 had been identified on magnetic resonance imaging. For the 118 regions of interest mean volume was 0.8 cc and the longest 3-dimensional diameter was 17 mm. However, for matched pathological tumors, of which most were Gleason score 3 + 4 or greater, mean volume was 2.5 cc and the longest 3-dimensional diameter was 28 mm. The median tumor had a 13.5 mm maximal extent beyond the magnetic resonance imaging contour and 80% of cancer volume from matched tumors was outside region of interest boundaries. Size estimation was most accurate in the axial plane and least accurate along the base-apex axis.\n\n\nCONCLUSIONS\nMagnetic resonance imaging consistently underestimates the size and extent of prostate tumors. Prostate cancer foci had an average diameter 11 mm longer and a volume 3 times greater than T2-weighted magnetic resonance imaging segmentations. These results may have important implications for the assessment and treatment of prostate cancer.",
"title": ""
},
{
"docid": "645f320514b0fa5a8b122c4635bc3df6",
"text": "A critical decision problem for top management, and the focus of this study, is whether the CEO (chief executive officer) and CIO (chief information officer) should commit their time to formal planning with the expectation of producing an information technology (IT)-based competitive advantage. Using the perspective of the resource-based view, a model is presented that examines how strategic IT alignment can produce enhanced organizational strategies that yield competitive advantage. One hundred sixty-one CIOs provided data using a postal survey. Results supported seven of the eight hypotheses. They showed that information intensity is an important antecedent to strategic IT alignment, that strategic IT alignment is best explained by multiple constructs which operationalize both process and content measures, and that alignment between the IT plan and the business plan is significantly related to the use of IT for competitive advantage. Study results raise questions about the effect of CEO participation, which appears to be the weak link in the process, and also about the perception of the CIO on the importance of CEO involvement. The paper contributes to our understanding of how knowledge sharing in the alignment process contributes to the creation of superior organizational strategies, provides a framework of the alignment-performance relationship, and furnishes several new constructs. Subject Areas: Competitive Advantage, Information Systems Planning, Knowledge Sharing, Resource-Based View, Strategic Planning, and Structural Equation Modeling.",
"title": ""
},
{
"docid": "8f227f66fc7c86c19edae8036c571579",
"text": "Traditionally, the most commonly used source of bibliometric data is Thomson ISI Web of Knowledge, in particular the Web of Science and the Journal Citation Reports (JCR), which provide the yearly Journal Impact Factors (JIF). This paper presents an alternative source of data (Google Scholar, GS) as well as 3 alternatives to the JIF to assess journal impact (h-index, g-index and the number of citations per paper). Because of its broader range of data sources, the use of GS generally results in more comprehensive citation coverage in the area of management and international business. The use of GS particularly benefits academics publishing in sources that are not (well) covered in ISI. Among these are books, conference papers, non-US journals, and in general journals in the field of strategy and international business. The 3 alternative GS-based metrics showed strong correlations with the traditional JIF. As such, they provide academics and universities committed to JIFs with a good alternative for journals that are not ISI-indexed. However, we argue that these metrics provide additional advantages over the JIF and that the free availability of GS allows for a democratization of citation analysis as it provides every academic access to citation data regardless of their institution’s financial means.",
"title": ""
},
{
"docid": "0397514e0d4a87bd8b59d9b317f8c660",
"text": "Formula 1 motorsport is a platform for maximum race car driving performance resulting from high-tech developments in the area of lightweight materials and aerodynamic design. In order to ensure the driver’s safety in case of high-speed crashes, special impact structures are designed to absorb the race car’s kinetic energy and limit the decelerations acting on the human body. These energy absorbing structures are made of laminated composite sandwich materials like the whole monocoque chassis and have to meet defined crash test requirements specified by the FIA. This study covers the crash behaviour of the nose cone as the F1 racing car front impact structure. Finite element models for dynamic simulations with the explicit solver LS-DYNA are developed with the emphasis on the composite material modelling. Numerical results are compared to crash test data in terms of deceleration levels, absorbed energy and crushing mechanisms. The validation led to satisfying results and the overall conclusion that dynamic simulations with LS-DYNA can be a helpful tool in the design phase of an F1 racing car front impact structure.",
"title": ""
},
{
"docid": "90ba7add9e8b265c787efd6ebddb1a58",
"text": "Program Synthesis by Sketching by Armando Solar-Lezama Doctor in Philosophy in Engineering-Electrical Engineering and Computer Science University of California, Berkeley Rastislav Bodik, Chair The goal of software synthesis is to generate programs automatically from highlevel speci cations. However, e cient implementations for challenging programs require a combination of high-level algorithmic insights and low-level implementation details. Deriving the low-level details is a natural job for a computer, but the synthesizer can not replace the human insight. Therefore, one of the central challenges for software synthesis is to establish a synergy between the programmer and the synthesizer, exploiting the programmer's expertise to reduce the burden on the synthesizer. This thesis introduces sketching, a new style of synthesis that o ers a fresh approach to the synergy problem. Previous approaches have relied on meta-programming, or variations of interactive theorem proving to help the synthesizer deduce an e cient implementation. The resulting systems are very powerful, but they require the programmer to master new formalisms far removed from traditional programming models. To make synthesis accessible, programmers must be able to provide their insight e ortlessly, using formalisms they already understand. In Sketching, insight is communicated through a partial program, a sketch that expresses the high-level structure of an implementation but leaves holes in place of the lowlevel details. This form of synthesis is made possible by a new SAT-based inductive synthesis procedure that can e ciently synthesize an implementation from a small number of test cases. This algorithm forms the core of a new counterexample guided inductive synthesis procedure (CEGIS) which combines the inductive synthesizer with a validation procedure to automatically generate test inputs and ensure that the generated program satis es its",
"title": ""
},
{
"docid": "5dee244ee673909c3ba3d3d174a7bf83",
"text": "Fingerprint has remained a very vital index for human recognition. In the field of security, series of Automatic Fingerprint Identification Systems (AFIS) have been developed. One of the indices for evaluating the contributions of these systems to the enforcement of security is the degree with which they appropriately verify or identify input fingerprints. This degree is generally determined by the quality of the fingerprint images and the efficiency of the algorithm. In this paper, some of the sub-models of an existing mathematical algorithm for the fingerprint image enhancement were modified to obtain new and improved versions. The new versions consist of different mathematical models for fingerprint image segmentation, normalization, ridge orientation estimation, ridge frequency estimation, Gabor filtering, binarization and thinning. The implementation was carried out in an environment characterized by Window Vista Home Basic operating system as platform and Matrix Laboratory (MatLab) as frontend engine. Synthetic images as well as real fingerprints obtained from the FVC2004 fingerprint database DB3 set A were used to test the adequacy of the modified sub-models and the resulting algorithm. The results show that the modified sub-models perform well with significant improvement over the original versions. The results also show the necessity of each level of the enhancement. KeywordAFIS; Pattern recognition; pattern matching; fingerprint; minutiae; image enhancement.",
"title": ""
},
{
"docid": "5d170dcd5d2c9c1f4e5645217444fd98",
"text": "In this paper, we present a Multi-Task Deep Neural Network (MT-DNN) for learning representations across multiple natural language understanding (NLU) tasks. MT-DNN not only leverages large amounts of cross-task data, but also benefits from a regularization effect that leads to more general representations to help adapt to new tasks and domains. MTDNN extends the model proposed in Liu et al. (2015) by incorporating a pre-trained bidirectional transformer language model, known as BERT (Devlin et al., 2018). MT-DNN obtains new state-of-the-art results on ten NLU tasks, including SNLI, SciTail, and eight out of nine GLUE tasks, pushing the GLUE benchmark to 82.2% (1.8% absolute improvement). We also demonstrate using the SNLI and SciTail datasets that the representations learned by MT-DNN allow domain adaptation with substantially fewer in-domain labels than the pre-trained BERT representations. Our code and pre-trained models will be made publicly available.",
"title": ""
},
{
"docid": "f08b294c1107372d81c39f13ee2caa34",
"text": "The success of deep learning methodologies draws a huge attention to their applications in medical image analysis. One of the applications of deep learning is in segmentation of retinal vessel and severity classification of diabetic retinopathy (DR) from retinal funduscopic image. This paper studies U-Net model performance in segmenting retinal vessel with different settings of dropout and batch normalization and use it to investigate the effect of retina vessel in DR classification. Pre-trained Inception V1 network was used to classify the DR severity. Two sets of retinal images, with and without the presence of vessel, were created from MESSIDOR dataset. The vessel extraction process was done using the best trained U-Net on DRIVE dataset. Final analysis showed that retinal vessel is a good feature in classifying both severe and early cases of DR stage.",
"title": ""
},
{
"docid": "3f9f01e3b3f5ab541cbe78fb210cf744",
"text": "The reliable and effective localization system is the basis of Automatic Guided Vehicle (AGV) to complete given tasks automatically in warehouse environment. However, there are no obvious features that can be used for localization of AGV to be extracted in warehouse environment and it dose make it difficult to realize the localization of AGV. So in this paper, we concentrate on the problem of optimal landmarks placement in warehouse so as to improve the reliability of localization. Firstly, we take the practical warehouse environment into consideration and transform the problem of landmarks placement into an optimization problem which aims at maximizing the difference degree between each basic unit of localization. Then Genetic Algorithm (GA) is used to solve the optimization problem. Then we match the observed landmarks with the already known ones stored in the map and the Triangulation method is used to estimate the position of AGV after the matching has been done. Finally, experiments in a real warehouse environment validate the effectiveness and reliability of our method.",
"title": ""
},
{
"docid": "6ce2529ff446db2d647337f30773cdc9",
"text": "The physical demands in soccer have been studied intensively, and the aim of the present review is to provide an overview of metabolic changes during a game and their relation to the development of fatigue. Heart-rate and body-temperature measurements suggest that for elite soccer players the average oxygen uptake during a match is around 70% of maximum oxygen uptake (VO2max). A top-class player has 150 to 250 brief intense actions during a game, indicating that the rates of creatine-phosphate (CP) utilization and glycolysis are frequently high during a game, which is supported by findings of reduced muscle CP levels and severalfold increases in blood and muscle lactate concentrations. Likewise, muscle pH is lowered and muscle inosine monophosphate (IMP) elevated during a soccer game. Fatigue appears to occur temporarily during a game, but it is not likely to be caused by elevated muscle lactate, lowered muscle pH, or change in muscle-energy status. It is unclear what causes the transient reduced ability of players to perform maximally. Muscle glycogen is reduced by 40% to 90% during a game and is probably the most important substrate for energy production, and fatigue toward the end of a game might be related to depletion of glycogen in some muscle fibers. Blood glucose and catecholamines are elevated and insulin lowered during a game. The blood free-fatty-acid levels increase progressively during a game, probably reflecting an increasing fat oxidation compensating for the lowering of muscle glycogen. Thus, elite soccer players have high aerobic requirements throughout a game and extensive anaerobic demands during periods of a match leading to major metabolic changes, which might contribute to the observed development of fatigue during and toward the end of a game.",
"title": ""
},
{
"docid": "bf16ccf68804d05201ad7a6f0a2920fe",
"text": "The purpose of this paper is to review and discuss public performance management in general and performance appraisal and pay for performance specifically. Performance is a topic that is a popular catch-cry and performance management has become a new organizational ideology. Under the global economic crisis, almost every public and private organization is struggling with a performance challenge, one way or another. Various aspects of performance management have been extensively discussed in the literature. Many researchers and experts assert that sets of guidelines for design of performance management systems would lead to high performance (Kaplan and Norton, 1996, 2006). A long time ago, the traditional performance measurement was developed from cost and management accounting and such purely financial perspective of performance measures was perceived to be inappropriate so that multi-dimensional performance management was development in the 1970s (Radnor and McGuire, 2004).",
"title": ""
}
] | scidocsrr |
777bbe1278ca8be1d239feb3d34eceec | BSIF: Binarized statistical image features | [
{
"docid": "13cb08194cf7254932b49b7f7aff97d1",
"text": "When there are many people who don't need to expect something more than the benefits to take, we will suggest you to have willing to reach all benefits. Be sure and surely do to take this computer vision using local binary patterns that gives the best reasons to read. When you really need to get the reason why, this computer vision using local binary patterns book will probably make you feel curious.",
"title": ""
}
] | [
{
"docid": "a9d516ede8966dde5e79ea1304bbedb9",
"text": "Successful implementation of Information Technology can be judged or predicted from the user acceptance. Technology acceptance model (TAM) is a model that is built to analyze and understand the factors that influence the acceptance of the use of technologies based on the user's perspective. In other words, TAM offers a powerful explanation related to acceptance of the technology and its behavior. TAM model has been applied widely to evaluate various information systems or information technology (IS/IT), but it is the lack of research related to the evaluation of the TAM model itself. This study aims to determine whether the model used TAM is still relevant today considering rapid development of information & communication technology (ICT). In other words, this study would like to test whether the TAM measurement indicators are valid and can represent each dimension of the model. The method used is quantitative method with factor analysis approach. The results showed that all indicators valid and can represent each dimension of TAM, those are perceived usefulness, perceived ease of use and behavioral intention to use. Thus the TAM model is still relevant used to measure the user acceptance of technology.",
"title": ""
},
{
"docid": "5aa14ba34672f4afa9c27f7f863d8c57",
"text": "Knowledge distillation is an effective approach to transferring knowledge from a teacher neural network to a student target network for satisfying the low-memory and fast running requirements in practice use. Whilst being able to create stronger target networks compared to the vanilla non-teacher based learning strategy, this scheme needs to train additionally a large teacher model with expensive computational cost. In this work, we present a Self-Referenced Deep Learning (SRDL) strategy. Unlike both vanilla optimisation and existing knowledge distillation, SRDL distils the knowledge discovered by the in-training target model back to itself to regularise the subsequent learning procedure therefore eliminating the need for training a large teacher model. SRDL improves the model generalisation performance compared to vanilla learning and conventional knowledge distillation approaches with negligible extra computational cost. Extensive evaluations show that a variety of deep networks benefit from SRDL resulting in enhanced deployment performance on both coarse-grained object categorisation tasks (CIFAR10, CIFAR100, Tiny ImageNet, and ImageNet) and fine-grained person instance identification tasks (Market-1501).",
"title": ""
},
{
"docid": "909ec68a644cfd1d338270ee67144c23",
"text": "We have constructed an optical tweezer using two lasers (830 nm and 1064 nm) combined with micropipette manipulation having sub-pN force sensitivity. Sample position is controlled within nanometer accuracy using XYZ piezo-electric stage. The position of the bead in the trap is monitored using single particle laser backscattering technique. The instrument is automated to operate in constant force, constant velocity or constant position measurement. We present data on single DNA force-extension, dynamics of DNA integration on membranes and optically trapped bead–cell interactions. A quantitative analysis of single DNA and protein mechanics, assembly and dynamics opens up new possibilities in nanobioscience.",
"title": ""
},
{
"docid": "cde1b5f21bdc05aa5a86aa819688d63c",
"text": "This paper presents two fuzzy portfolio selection models where the objective is to minimize the downside risk constrained by a given expected return. We assume that the rates of returns on securities are approximated as LR-fuzzy numbers of the same shape, and that the expected return and risk are evaluated by interval-valued means. We establish the relationship between those mean-interval definitions for a given fuzzy portfolio by using suitable ordering relations. Finally, we formulate the portfolio selection problem as a linear program when the returns on the assets are of trapezoidal form. © 2006 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "ea0e8b5bf62de6205bd993610f663f50",
"text": "Design Thinking has collected theories and best-practices to foster creativity and innovation in group processes. This is in particular valuable for sketchy and complex problems. Other disciplines can learn from this body-of-behaviors and values to tackle their complex problems. In this paper, using four Design Thinking qualities, we propose a framework to identify the level of Design Thinkingness in existing analytical software engineering tools: Q1) Iterative Creation Cycles, Q2) Human Integration in Design, Q3) Suitability for Heterogeneity, and Q4) Media Accessibility. We believe that our framework can also be used to transform tools in various engineering areas to support abductive and divergent thinking processes. We argue, based on insights gained from the successful transformation of classical business process modeling into tangible business process modeling. This was achieved by incorporating rapid prototyping, human integration, knowledge base heterogeneity and the media-models theory. The latter is given special attention as it allows us to break free from the limiting factors of the exiting analytic tools.",
"title": ""
},
{
"docid": "786a70f221a70038f930352e8022ae29",
"text": "We present IndoNet, a multilingual lexical knowledge base for Indian languages. It is a linked structure of wordnets of 18 different Indian languages, Universal Word dictionary and the Suggested Upper Merged Ontology (SUMO). We discuss various benefits of the network and challenges involved in the development. The system is encoded in Lexical Markup Framework (LMF) and we propose modifications in LMF to accommodate Universal Word Dictionary and SUMO. This standardized version of lexical knowledge base of Indian Languages can now easily be linked to similar global resources.",
"title": ""
},
{
"docid": "63c550438679c0353c2f175032a73369",
"text": "Large screens or projections in public and private settings have become part of our daily lives, as they enable the collaboration and presentation of information in many diverse ways. When discussing the shown information with other persons, we often point to a displayed object with our index finger or a laser pointer in order to talk about it. Although mobile phone-based interactions with remote screens have been investigated intensively in the last decade, none of them considered such direct pointing interactions for application in everyday tasks. In this paper, we present the concept and design space of PointerPhone which enables users to directly point at objects on a remote screen with their mobile phone and interact with them in a natural and seamless way. We detail the design space and distinguish three categories of interactions including low-level interactions using the mobile phone as a precise and fast pointing device, as well as an input and output device. We detail the category of widgetlevel interactions. Further, we demonstrate versatile high-level interaction techniques and show their application in a collaborative presentation scenario. Based on the results of a qualitative study, we provide design implications for application designs.",
"title": ""
},
{
"docid": "6d777bd24d9e869189c388af94384fa1",
"text": "OBJECTIVE\nThe aim of this study was to explore the efficacy of insulin-loaded trimethylchitosan nanoparticles on certain destructive effects of diabetes type one.\n\n\nMATERIALS AND METHODS\nTwenty-five male Wistar rats were randomly divided into three control groups (n=5) and two treatment groups (n=5). The control groups included normal diabetic rats without treatment and diabetic rats treated with the nanoparticles. The treatment groups included diabetic rats treated with the insulin-loaded trimethylchitosan nanoparticles and the diabetic rats treated with trade insulin. The experiment period was eight weeks and the rats were treated for the last two weeks.\n\n\nRESULT\nThe livers of the rats receiving both forms of insulin showed less severe microvascular steatosis and fatty degeneration, and ameliorated blood glucose, serum biomarkers, and oxidant/antioxidant parameters with no significant differences. The gene expression of pyruvate kinase could be compensated by both the treatment protocols and the new coated form of insulin could not significantly influence the gene expression of glucokinase (p<0.05). The result of the present study showed the potency of the nanoparticle form of insulin to attenuate hyperglycemia, oxidative stress, and inflammation in diabetes, which indicate the bioavailability of insulin-encapsulated trimethylchitosan nanoparticles.",
"title": ""
},
{
"docid": "376ea61271c36d1d8edbd869da910666",
"text": "Purpose – Many thought leaders are promoting information technology (IT) governance and its supporting practices as an approach to improve business/IT alignment. This paper aims to further explore this assumed positive relationship between IT governance practices and business/IT alignment. Design/methodology/approach – This paper explores the relationship between the use of IT governance practices and business/IT alignment, by creating a business/IT alignment maturity benchmark and qualitatively comparing the use of IT governance practices in the extreme cases. Findings – The main conclusion of the research is that all extreme case organisations are leveraging a broad set of IT governance practices, and that IT governance practices need to obtain at least a maturity level 2 (on a scale of 5) to positively influence business/IT alignment. Also, a list of 11 key enabling IT governance practices is identified. Research limitations/implications – This research adheres to the process theory, implying a limited definition of prediction. An important opportunity for future research lies in the domain of complementary statistical correlation research. Practical implications – This research identifies key IT governance practices that organisations can leverage to improve business/IT alignment. Originality/value – This research contributes to new theory building in the IT governance and alignment domain and provides practitioners with insight on how to implement IT governance in their organisations.",
"title": ""
},
{
"docid": "49d714c778b820fca5946b9a587d1e17",
"text": "The current Web of Data is producing increasingly large RDF datasets. Massive publication efforts of RDF data driven by initiatives like the Linked Open Data movement, and the need to exchange large datasets has unveiled the drawbacks of traditional RDF representations, inspired and designed by a documentcentric and human-readable Web. Among the main problems are high levels of verbosity/redundancy and weak machine-processable capabilities in the description of these datasets. This scenario calls for efficient formats for publication and exchange. This article presents a binary RDF representation addressing these issues. Based on a set of metrics that characterizes the skewed structure of real-world RDF data, we develop a proposal of an RDF representation that modularly partitions and efficiently represents three components of RDF datasets: Header information, a Dictionary, and the actual Triples structure (thus called HDT). Our experimental evaluation shows that datasets in HDT format can be compacted by more than fifteen times as compared to current naive representations, improving both parsing and processing while keeping a consistent publication scheme. Specific compression techniques over HDT further improve these compression rates and prove to outperform existing compression solutions for efficient RDF exchange. © 2013 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "d657085072f829db812a2735d0e7f41c",
"text": "Recently, increasing attention has been drawn to training semantic segmentation models using synthetic data and computer-generated annotation. However, domain gap remains a major barrier and prevents models learned from synthetic data from generalizing well to real-world applications. In this work, we take the advantage of additional geometric information from synthetic data, a powerful yet largely neglected cue, to bridge the domain gap. Such geometric information can be generated easily from synthetic data, and is proven to be closely coupled with semantic information. With the geometric information, we propose a model to reduce domain shift on two levels: on the input level, we augment the traditional image translation network with the additional geometric information to translate synthetic images into realistic styles; on the output level, we build a task network which simultaneously performs depth estimation and semantic segmentation on the synthetic data. Meanwhile, we encourage the network to preserve the correlation between depth and semantics by adversarial training on the output space. We then validate our method on two pairs of synthetic to real dataset: Virtual KITTI→KITTI, and SYNTHIA→Cityscapes, where we achieve a significant performance gain compared to the non-adaptive baseline and methods without using geometric information. This demonstrates the usefulness of geometric information from synthetic data for cross-domain semantic segmentation.",
"title": ""
},
{
"docid": "6cf7a5286a03190b0910380830968351",
"text": "In this paper, the mechanical and aerodynamic design, carbon composite production, hierarchical control system design and vertical flight tests of a new unmanned aerial vehicle, which is capable of VTOL (vertical takeoff and landing) like a helicopter and long range horizontal flight like an airplane, are presented. Real flight tests show that the aerial vehicle can successfully operate in VTOL mode. Kalman filtering is employed to obtain accurate roll and pitch angle estimations.",
"title": ""
},
{
"docid": "5ed8f3b58ae1320411f15a4d7c0f5634",
"text": "With the advent of the ubiquitous era, context-based music recommendation has become one of rapidly emerging applications. Context-based music recommendation requires multidisciplinary efforts including low level feature extraction, music mood classification and human emotion prediction. Especially, in this paper, we focus on the implementation issues of context-based mood classification and music recommendation. For mood classification, we reformulate it into a regression problem based on support vector regression (SVR). Through the use of the SVR-based mood classifier, we achieved 87.8% accuracy. For music recommendation, we reason about the user's mood and situation using both collaborative filtering and ontology technology. We implement a prototype music recommendation system based on this scheme and report some of the results that we obtained.",
"title": ""
},
{
"docid": "bed9bdf4d4965610b85378f2fdbfab2a",
"text": "Application of data mining techniques to the World Wide Web, referred to as Web mining, has been the focus of several recent research projects and papers. However, there is n o established vocabulary, leading to confusion when comparing research efforts. The t e r m W e b mining has been used in two distinct ways. T h e first, called Web content mining in this paper, is the process of information discovery f rom sources across the World Wide Web. The second, called Web m a g e mining, is the process of mining f o r user browsing and access patterns. I n this paper we define W e b mining and present an overview of the various research issues, techniques, and development e f forts . W e briefly describe W E B M I N E R , a system for Web usage mining, and conclude this paper by listing research issues.",
"title": ""
},
{
"docid": "1987ba476be524db448cce1835460a33",
"text": "We report on the main features of the IJCAI’07 program, including its theme, and its schedule and organization. In particular, we discuss an effective and novel presentation format at IJCAI in which oral and poster papers were presented in the same sessions categorized by topic area.",
"title": ""
},
{
"docid": "2343f238e92a74e3f456b2215b18ad20",
"text": "Nonlinear activation function is one of the main building blocks of artificial neural networks. Hyperbolic tangent and sigmoid are the most used nonlinear activation functions. Accurate implementation of these transfer functions in digital networks faces certain challenges. In this paper, an efficient approximation scheme for hyperbolic tangent function is proposed. The approximation is based on a mathematical analysis considering the maximum allowable error as design parameter. Hardware implementation of the proposed approximation scheme is presented, which shows that the proposed structure compares favorably with previous architectures in terms of area and delay. The proposed structure requires less output bits for the same maximum allowable error when compared to the state-of-the-art. The number of output bits of the activation function determines the bit width of multipliers and adders in the network. Therefore, the proposed activation function results in reduction in area, delay, and power in VLSI implementation of artificial neural networks with hyperbolic tangent activation function.",
"title": ""
},
{
"docid": "9960d17cb019350a279e4daccccb8e87",
"text": "Deep learning with neural networks is applied by an increasing number of people outside of classic research environments, due to the vast success of the methodology on a wide range of machine perception tasks. While this interest is fueled by beautiful success stories, practical work in deep learning on novel tasks without existing baselines remains challenging. This paper explores the specific challenges arising in the realm of real world tasks, based on case studies from research & development in conjunction with industry, and extracts lessons learned from them. It thus fills a gap between the publication of latest algorithmic and methodical developments, and the usually omitted nitty-gritty of how to make them work. Specifically, we give insight into deep learning projects on face matching, print media monitoring, industrial quality control, music scanning, strategy game playing, and automated machine learning, thereby providing best practices for deep learning in practice.",
"title": ""
},
{
"docid": "10706a3915da7a66696816af7bd1f638",
"text": "In this paper, we present a family of fluxgate magnetic sensors on printed circuit boards (PCBs), suitable for an electronic compass. This fabrication process is simple and inexpensive and uses commercially available thin ferromagnetic materials. We developed and analyzed the prototype sensors with software tools based on the finite-element method. We developed both singleand double-axis planar fluxgate magnetic sensors as well as front-end circuitry based on second-harmonic detection. Two amorphous magnetic materials, Vitrovac 6025X (25 mum thick) and Vitrovac 6025Z (20 mum thick), were used as the ferromagnetic core. We found that the same structures can be made with Metglas ferromagnetic core. The double-axis fluxgate magnetic sensor has a sensitivity of about 1.25 mV/muT with a linearity error of 1.5% full scale, which is suitable for detecting Earth's magnetic field (plusmn60 muT full-scale) in an electronic compass",
"title": ""
},
{
"docid": "8d9be82bfc32a4631f1b1f24e1d962a9",
"text": "Determine an optimal set of design parameter of PR whose DW fits a prescribed workspace as closely as possible is an important and foremost design task before manufacturing. In this paper, an optimal design method of a linear Delta robot (LDR) to obtain the prescribed cuboid dexterous workspace (PCDW) is proposed. The optical algorithms are based on the concept of performance chart. The performance chart shows the relationship between a criterion and design parameters graphically and globally. The kinematic problem is analyzed in brief to determine the design parameters and their relation. Two algorithms are designed to determine the maximal inscribed rectangle of dexterous workspace in the O-xy plane and plot the performance chart. As an applying example, a design result of the LDR with a prescribed cuboid dexterous workspace is presented. The optical results shown that every corresponding maximal inscribed rectangle can be obtained for every given RATE by the algorithm and the error of RATE is less than 0.05.The method and the results of this paper are very useful for the design and comparison of the parallel robot. Key-Words: Parallel Robot, Cuboid Dexterous Workspace, Optimal Design, performance chart ∗ This work is supported by Zhejiang Province Education Funded Grant #20051392.",
"title": ""
},
{
"docid": "ed34383cada585951e1dcc62445d08c2",
"text": "The increasing volume of e-mail and other technologically enabled communications are widely regarded as a growing source of stress in people’s lives. Yet research also suggests that new media afford people additional flexibility and control by enabling them to communicate from anywhere at any time. Using a combination of quantitative and qualitative data, this paper builds theory that unravels this apparent contradiction. As the literature would predict, we found that the more time people spent handling e-mail, the greater was their sense of being overloaded, and the more e-mail they processed, the greater their perceived ability to cope. Contrary to assumptions of prior studies, we found no evidence that time spent working mediates e-mail-related overload. Instead, e-mail’s material properties entwined with social norms and interpretations in a way that led informants to single out e-mail as a cultural symbol of the overload they experience in their lives. Moreover, by serving as a symbol, e-mail distracted people from recognizing other sources of overload in their work lives. Our study deepens our understanding of the impact of communication technologies on people’s lives and helps untangle those technologies’ seemingly contradictory influences.",
"title": ""
}
] | scidocsrr |
45a2867442af48a54aa14e21d04e1ad4 | CitNetExplorer: A new software tool for analyzing and visualizing citation networks | [
{
"docid": "1c11672bab0fae36cbfde410ac902852",
"text": "To better understand the topic of this colloquium, we have created a series of databases related to knowledge domains (dynamic systems [small world/Milgram], information visualization [Tufte], co-citation [Small], bibliographic coupling [Kessler], and scientometrics [Scientometrics]). I have used a software package called HistCite which generates chronological maps of subject (topical) collections resulting from searches of the ISI Web of Science or ISI citation indexes (SCI, SSCI, and/or AHCI) on CD-ROM. When a marked list is created on WoS, an export file is created which contains all cited references for each source document captured. These bibliographic collections, saved as ASCII files, are processed by HistCite in order to generate chronological and other tables as well as historiographs which highlight the most-cited works in and outside the collection. HistCite also includes a module for detecting and editing errors or variations in cited references as well as a vocabulary analyzer which generates both ranked word lists and word pairs used in the collection. Ideally the system will be used to help the searcher quickly identify the most significant work on a topic and trace its year-by-year historical development. In addition to the collections mentioned above, historiographs based on collections of papers that cite the Watson-Crick 1953 classic paper identifying the helical structure of DNA were created. Both year-by-year as well as month-by-month displays of papers from 1953 to 1958 were necessary to highlight the publication activity of those years.",
"title": ""
},
{
"docid": "5e07328bf13a9dd2486e9dddbe6a3d8f",
"text": "We present VOSviewer, a freely available computer program that we have developed for constructing and viewing bibliometric maps. Unlike most computer programs that are used for bibliometric mapping, VOSviewer pays special attention to the graphical representation of bibliometric maps. The functionality of VOSviewer is especially useful for displaying large bibliometric maps in an easy-to-interpret way. The paper consists of three parts. In the first part, an overview of VOSviewer’s functionality for displaying bibliometric maps is provided. In the second part, the technical implementation of specific parts of the program is discussed. Finally, in the third part, VOSviewer’s ability to handle large maps is demonstrated by using the program to construct and display a co-citation map of 5,000 major scientific journals.",
"title": ""
}
] | [
{
"docid": "c5ee2a4e38dfa27bc9d77edcd062612f",
"text": "We perform transaction-level analyses of entrusted loans – the largest component of shadow banking in China. There are two types – affiliated and non-affiliated. The latter involve a much higher interest rate than the former and official bank loan rates, and largely flow into the real estate industry. Both involve firms with privileged access to cheap capital to channel funds to less privileged firms and increase when credit is tight. The pricing of entrusted loans, especially that of non-affiliated loans, incorporates fundamental and informational risks. Stock market reactions suggest that both affiliated and non-affiliated loans are fairly-compensated investments.",
"title": ""
},
{
"docid": "82cf154da3bc34c4311cc3ae1b0bfce3",
"text": "Literature in advertising and information systems suggests that advertising in both traditional media and the Internet is either easily ignored by the audience or is perceived with little value. However, these studies assumed that the audience was passive and failed to consider the motives of the users. In light of this, the present study measures consumers attitudes toward advertisements for different purposes/functions (brand building and directional) and different media (traditional and Internet-based). Literature suggests the following factors that contribute to consumers perceptions of ads: entertainment, irritation, informativeness, credibility, and demographic. We believe that interactivity is also a factor that contributes to consumers perceptions. By understanding consumers attitude towards advertising, designers and marketers can better strategize their advertising designs. A better understanding of interactivity can also help to improve the effectiveness of interactive media such as the Internet. A methodology for studying the factors that contribute to consumers perceptions of ads is proposed and implications for Internet-based advertising and e-commerce is discussed.",
"title": ""
},
{
"docid": "c89b94565b7071420017deae01295e23",
"text": "Using cross-sectional data from three waves of the Youth Tobacco Policy Study, which examines the impact of the UK's Tobacco Advertising and Promotion Act (TAPA) on adolescent smoking behaviour, we examined normative pathways between tobacco marketing awareness and smoking intentions. The sample comprised 1121 adolescents in Wave 2 (pre-ban), 1123 in Wave 3 (mid-ban) and 1159 in Wave 4 (post-ban). Structural equation modelling was used to assess the direct effect of tobacco advertising and promotion on intentions at each wave, and also the indirect effect, mediated through normative influences. Pre-ban, higher levels of awareness of advertising and promotion were independently associated with higher levels of perceived sibling approval which, in turn, was positively related to intentions. Independent paths from perceived prevalence and benefits fully mediated the effects of advertising and promotion awareness on intentions mid- and post-ban. Advertising awareness indirectly affected intentions via the interaction between perceived prevalence and benefits pre-ban, whereas the indirect effect on intentions of advertising and promotion awareness was mediated by the interaction of perceived prevalence and benefits mid-ban. Our findings indicate that policy measures such as the TAPA can significantly reduce adolescents' smoking intentions by signifying smoking to be less normative and socially unacceptable.",
"title": ""
},
{
"docid": "fe97095f2af18806e7032176c6ac5d89",
"text": "Targeted social engineering attacks in the form of spear phishing emails, are often the main gimmick used by attackers to infiltrate organizational networks and implant state-of-the-art Advanced Persistent Threats (APTs). Spear phishing is a complex targeted attack in which, an attacker harvests information about the victim prior to the attack. This information is then used to create sophisticated, genuine-looking attack vectors, drawing the victim to compromise confidential information. What makes spear phishing different, and more powerful than normal phishing, is this contextual information about the victim. Online social media services can be one such source for gathering vital information about an individual. In this paper, we characterize and examine a true positive dataset of spear phishing, spam, and normal phishing emails from Symantec's enterprise email scanning service. We then present a model to detect spear phishing emails sent to employees of 14 international organizations, by using social features extracted from LinkedIn. Our dataset consists of 4,742 targeted attack emails sent to 2,434 victims, and 9,353 non targeted attack emails sent to 5,912 non victims; and publicly available information from their LinkedIn profiles. We applied various machine learning algorithms to this labeled data, and achieved an overall maximum accuracy of 97.76% in identifying spear phishing emails. We used a combination of social features from LinkedIn profiles, and stylometric features extracted from email subjects, bodies, and attachments. However, we achieved a slightly better accuracy of 98.28% without the social features. Our analysis revealed that social features extracted from LinkedIn do not help in identifying spear phishing emails. To the best of our knowledge, this is one of the first attempts to make use of a combination of stylometric features extracted from emails, and social features extracted from an online social network to detect targeted spear phishing emails.",
"title": ""
},
{
"docid": "d25c4ed5656c5972591fb7da4f86be83",
"text": "Opinion mining or sentimental analysis plays important role in the data mining process. In the proposed method, opinions are classified using various statistical measures to provide ratings to help the sentimental analysis of big data. Experimental results demonstrate the efficiency of the proposed method to help in analysis of quality of product, marketers evaluation of success of a new product launched, determine which versions of a product or service are popular and identify demographics like or dislike of product features, etc.",
"title": ""
},
{
"docid": "773813311ca5cb2f68662faab7040678",
"text": "This paper presents a latent variable structured prediction model for discriminative supervised clustering of items called the Latent Left-linking Model (LM). We present an online clustering algorithm for LM based on a feature-based item similarity function. We provide a learning framework for estimating the similarity function and present a fast stochastic gradient-based learning technique. In our experiments on coreference resolution and document clustering, LM outperforms several existing online as well as batch supervised clustering techniques.",
"title": ""
},
{
"docid": "9a283f62dad38887bc6779c3ea61979d",
"text": "Recent evidence supports that alterations in hepatocyte-derived exosomes (HDE) may play a role in the pathogenesis of drug-induced liver injury (DILI). HDE-based biomarkers also hold promise to improve the sensitivity of existing in vitro assays for predicting DILI liability. Primary human hepatocytes (PHH) provide a physiologically relevant in vitro model to explore the mechanistic and biomarker potential of HDE in DILI. However, optimal methods to study exosomes in this culture system have not been defined. Here we use HepG2 and HepaRG cells along with PHH to optimize methods for in vitro HDE research. We compared the quantity and purity of HDE enriched from HepG2 cell culture medium by 3 widely used methods: ultracentrifugation (UC), OptiPrep density gradient ultracentrifugation (ODG), and ExoQuick (EQ)-a commercially available exosome precipitation reagent. Although EQ resulted in the highest number of particles, UC resulted in more exosomes as indicated by the relative abundance of exosomal CD63 to cellular prohibitin-1 as well as the comparative absence of contaminating extravesicular material. To determine culture conditions that best supported exosome release, we also assessed the effect of Matrigel matrix overlay at concentrations ranging from 0 to 0.25 mg/ml in HepaRG cells and compared exosome release from fresh and cryopreserved PHH from same donor. Sandwich culture did not impair exosome release, and freshly prepared PHH yielded a higher number of HDE overall. Taken together, our data support the use of UC-based enrichment from fresh preparations of sandwich-cultured PHH for future studies of HDE in DILI.",
"title": ""
},
{
"docid": "8cc3af1b9bb2ed98130871c7d5bae23a",
"text": "BACKGROUND\nAnimal experiments have convincingly demonstrated that prenatal maternal stress affects pregnancy outcome and results in early programming of brain functions with permanent changes in neuroendocrine regulation and behaviour in offspring.\n\n\nAIM\nTo evaluate the existing evidence of comparable effects of prenatal stress on human pregnancy and child development.\n\n\nSTUDY DESIGN\nData sources used included a computerized literature search of PUBMED (1966-2001); Psychlit (1987-2001); and manual search of bibliographies of pertinent articles.\n\n\nRESULTS\nRecent well-controlled human studies indicate that pregnant women with high stress and anxiety levels are at increased risk for spontaneous abortion and preterm labour and for having a malformed or growth-retarded baby (reduced head circumference in particular). Evidence of long-term functional disorders after prenatal exposure to stress is limited, but retrospective studies and two prospective studies support the possibility of such effects. A comprehensive model of putative interrelationships between maternal, placental, and fetal factors is presented.\n\n\nCONCLUSIONS\nApart from the well-known negative effects of biomedical risks, maternal psychological factors may significantly contribute to pregnancy complications and unfavourable development of the (unborn) child. These problems might be reduced by specific stress reduction in high anxious pregnant women, although much more research is needed.",
"title": ""
},
{
"docid": "57cbffa039208b85df59b7b3bc1718d5",
"text": "This paper provides an in-depth analysis of the technological and social factors that led to the successful adoption of groupware by a virtual team in a educational setting. Drawing on a theoretical framework based on the concept of technological frames, we conducted an action research study to analyse the chronological sequence of events in groupware adoption. We argue that groupware adoption can be conceptualised as a three-step process of expanding and aligning individual technological frames towards groupware. The first step comprises activities that bring knowledge of new technological opportunities to the participants. The second step involves facilitating the participants to articulate and evaluate their work practices and their use of tech© Scandinavian Journal of Information Systems, 2006, 18(2):29-68 nology. The third and final step deals with the participants' commitment to, and practical enactment of, groupware technology. The alignment of individual technological frames requires the articulation and re-evaluation of experience with collaborative practice and with the use of technology. One of the key findings is that this activity cannot take place at the outset of groupware adoption.",
"title": ""
},
{
"docid": "76cef1b6d0703127c3ae33bcf71cdef8",
"text": "Risks have a significant impact on a construction project’s performance in terms of cost, time and quality. As the size and complexity of the projects have increased, an ability to manage risks throughout the construction process has become a central element preventing unwanted consequences. How risks are shared between the project actors is to a large extent governed by the procurement option and the content of the related contract documents. Therefore, selecting an appropriate project procurement option is a key issue for project actors. The overall aim of this research is to increase the understanding of risk management in the different procurement options: design-bid-build contracts, designbuild contracts and collaborative form of partnering. Deeper understanding is expected to contribute to a more effective risk management and, therefore, a better project output and better value for both clients and contractors. The study involves nine construction projects recently performed in Sweden and comprises a questionnaire survey and a series of interviews with clients, contractors and consultants involved in these construction projects. The findings of this work show a lack of an iterative approach to risk management, which is a weakness in current procurement practices. This aspect must be addressed if the risk management process is to serve projects and, thus, their clients. The absence of systematic risk management is especially noted in the programme phase, where it arguably has the greatest potential impact. The production phase is where most interest and activity are to be found. As a matter of practice, the communication of risks between the actors simply does not work to the extent that it must if projects are to be delivered with certainty, irrespective of the form of procurement. A clear connection between the procurement option and risk management in construction projects has been found. Traditional design-bid-build contracts do not create opportunities for open discussion of project risks and joint risk management. A number of drivers of and obstacles to effective risk management have been explored in the study. Every actor’s involvement in dialogue, effective communication and information exchange, open attitudes and trustful relationship are the factors that support open discussion of project risks and, therefore, contribute to successful risk management. Based on the findings, a number of recommendations facilitating more effective risk management have been developed for the industry practitioners. Keywords--Risk Management, Risk Allocation, Construction Project, Construction Contract, Design-BidBuild, Design-Build, Partnering",
"title": ""
},
{
"docid": "0f71e64aaf081b6624f442cb95b2220c",
"text": "Objective\nElectronic health record (EHR)-based phenotyping infers whether a patient has a disease based on the information in his or her EHR. A human-annotated training set with gold-standard disease status labels is usually required to build an algorithm for phenotyping based on a set of predictive features. The time intensiveness of annotation and feature curation severely limits the ability to achieve high-throughput phenotyping. While previous studies have successfully automated feature curation, annotation remains a major bottleneck. In this paper, we present PheNorm, a phenotyping algorithm that does not require expert-labeled samples for training.\n\n\nMethods\nThe most predictive features, such as the number of International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) codes or mentions of the target phenotype, are normalized to resemble a normal mixture distribution with high area under the receiver operating curve (AUC) for prediction. The transformed features are then denoised and combined into a score for accurate disease classification.\n\n\nResults\nWe validated the accuracy of PheNorm with 4 phenotypes: coronary artery disease, rheumatoid arthritis, Crohn's disease, and ulcerative colitis. The AUCs of the PheNorm score reached 0.90, 0.94, 0.95, and 0.94 for the 4 phenotypes, respectively, which were comparable to the accuracy of supervised algorithms trained with sample sizes of 100-300, with no statistically significant difference.\n\n\nConclusion\nThe accuracy of the PheNorm algorithms is on par with algorithms trained with annotated samples. PheNorm fully automates the generation of accurate phenotyping algorithms and demonstrates the capacity for EHR-driven annotations to scale to the next level - phenotypic big data.",
"title": ""
},
{
"docid": "52f912cd5a8def1122d7ce6ba7f47271",
"text": "System event logs have been frequently used as a valuable resource in data-driven approaches to enhance system health and stability. A typical procedure in system log analytics is to first parse unstructured logs, and then apply data analysis on the resulting structured data. Previous work on parsing system event logs focused on offline, batch processing of raw log files. But increasingly, applications demand online monitoring and processing. We propose an online streaming method Spell, which utilizes a longest common subsequence based approach, to parse system event logs. We show how to dynamically extract log patterns from incoming logs and how to maintain a set of discovered message types in streaming fashion. Evaluation results on large real system logs demonstrate that even compared with the offline alternatives, Spell shows its superiority in terms of both efficiency and effectiveness.",
"title": ""
},
{
"docid": "0cae4ea322daaaf33a42427b69e8ba9f",
"text": "Background--By leveraging cloud services, organizations can deploy their software systems over a pool of resources. However, organizations heavily depend on their business-critical systems, which have been developed over long periods. These legacy applications are usually deployed on-premise. In recent years, research in cloud migration has been carried out. However, there is no secondary study to consolidate this research. Objective--This paper aims to identify, taxonomically classify, and systematically compare existing research on cloud migration. Method--We conducted a systematic literature review (SLR) of 23 selected studies, published from 2010 to 2013. We classified and compared the selected studies based on a characterization framework that we also introduce in this paper. Results--The research synthesis results in a knowledge base of current solutions for legacy-to-cloud migration. This review also identifies research gaps and directions for future research. Conclusion--This review reveals that cloud migration research is still in early stages of maturity, but is advancing. It identifies the needs for a migration framework to help improving the maturity level and consequently trust into cloud migration. This review shows a lack of tool support to automate migration tasks. This study also identifies needs for architectural adaptation and self-adaptive cloud-enabled systems.",
"title": ""
},
{
"docid": "e2950089f76e1509ad2aa74ea5c738eb",
"text": "In this review the knowledge status of and future research options on a green gas supply based on biogas production by co-digestion is explored. Applications and developments of the (bio)gas supply in The Netherlands have been considered, whereafter literature research has been done into the several stages from production of dairy cattle manure and biomass to green gas injection into the gas grid. An overview of a green gas supply chain has not been made before. In this study it is concluded that on installation level (micro-level) much practical knowledge is available and on macro-level knowledge about availability of biomass. But on meso-level (operations level of a green gas supply) very little research has been done until now. Future research should include the modeling of a green gas supply chain on an operations level, i.e. questions must be answered as where to build digesters based on availability of biomass. Such a model should also advise on technology of upgrading depending on scale factors. Future research might also give insight in the usability of mixing (partly upgraded) biogas with natural gas. The preconditions for mixing would depend on composition of the gas, the ratio of gases to be mixed and the requirements on the mixture.",
"title": ""
},
{
"docid": "42dfa7988f31403dba1c390741aa164c",
"text": "This study explored friendship variables in relation to body image, dietary restraint, extreme weight-loss behaviors (EWEBs), and binge eating in adolescent girls. From 523 girls, 79 friendship cliques were identified using social network analysis. Participants completed questionnaires that assessed body image concerns, eating, friendship relations, and psychological family, and media variables. Similarity was greater for within than for between friendship cliques for body image concerns, dietary restraint, and EWLBs, but not for binge eating. Cliques high in body image concerns and dieting manifested these concerns in ways consistent with a high weight/shape-preoccupied subculture. Friendship attitudes contributed significantly to the prediction of individual body image concern and eating behaviors. Use of EWLBs by friends predicted an individual's own level of use.",
"title": ""
},
{
"docid": "8bcbb5d7ae6c57d60ff34abc1259349c",
"text": "Habitat remnants in urbanized areas typically conserve biodiversity and serve the recreation and urban open-space needs of human populations. Nevertheless, these goals can be in conflict if human activity negatively affects wildlife. Hence, when considering habitat remnants as conservation refuges it is crucial to understand how human activities and land uses affect wildlife use of those and adjacent areas. We used tracking data (animal tracks and den or bed sites) on 10 animal species and information on human activity and environmental factors associated with anthropogenic disturbance in 12 habitat fragments across San Diego County, California, to examine the relationships among habitat fragment characteristics, human activity, and wildlife presence. There were no significant correlations of species presence and abundance with percent plant cover for all species or with different land-use intensities for all species, except the opossum (Didelphis virginiana), which preferred areas with intensive development. Woodrats (Neotoma spp.) and cougars (Puma concolor) were associated significantly and positively and significantly and negatively, respectively, with the presence and prominence of utilities. Woodrats were also negatively associated with the presence of horses. Raccoons (Procyon lotor) and coyotes (Canis latrans) were associated significantly and negatively and significantly and positively, respectively, with plant bulk and permanence. Cougars and gray foxes (Urocyon cinereoargenteus) were negatively associated with the presence of roads. Roadrunners (Geococcyx californianus) were positively associated with litter. The only species that had no significant correlations with any of the environmental variables were black-tailed jackrabbits (Lepus californicus) and mule deer (Odocoileus hemionus). Bobcat tracks were observed more often than gray foxes in the study area and bobcats correlated significantly only with water availability, contrasting with results from other studies. Our results appear to indicate that maintenance of habitat fragments in urban areas is of conservation benefit to some animal species, despite human activity and disturbance, as long as the fragments are large.",
"title": ""
},
{
"docid": "25ca94db4d6a4a2f24831d78d198b129",
"text": "In recent years, Sentiment Analysis has become one of the most interesting topics in AI research due to its promising commercial benefits. An important step in a Sentiment Analysis system for text mining is the preprocessing phase, but it is often underestimated and not extensively covered in literature. In this work, our aim is to highlight the importance of preprocessing techniques and show how they can improve system accuracy. In particular, some different preprocessing methods are presented and the accuracy of each of them is compared with the others. The purpose of this comparison is to evaluate which techniques are effective. In this paper, we also present the reasons why the accuracy improves, by means of a precise analysis of each method.",
"title": ""
},
{
"docid": "951ad18af2b3c9b0ca06147b0c804f65",
"text": "Food photos are widely used in food logs for diet monitoring and in social networks to share social and gastronomic experiences. A large number of these images are taken in restaurants. Dish recognition in general is very challenging, due to different cuisines, cooking styles, and the intrinsic difficulty of modeling food from its visual appearance. However, contextual knowledge can be crucial to improve recognition in such scenario. In particular, geocontext has been widely exploited for outdoor landmark recognition. Similarly, we exploit knowledge about menus and location of restaurants and test images. We first adapt a framework based on discarding unlikely categories located far from the test image. Then, we reformulate the problem using a probabilistic model connecting dishes, restaurants, and locations. We apply that model in three different tasks: dish recognition, restaurant recognition, and location refinement. Experiments on six datasets show that by integrating multiple evidences (visual, location, and external knowledge) our system can boost the performance in all tasks.",
"title": ""
},
{
"docid": "ad45d9a69112010f84ff8d0fae04596d",
"text": "PURPOSE\nWe document the postpubertal outcome of feminizing genitoplasty.\n\n\nMATERIALS AND METHODS\nA total of 14 girls, mean age 13.1 years, with congenital adrenal hyperplasia were assessed under anesthesia by a pediatric urologist, plastic/reconstructive surgeon and gynecologist. Of these patients 13 had previously undergone feminizing genitoplasty in early childhood at 4 different specialist centers in the United Kingdom.\n\n\nRESULTS\nThe outcome of clitoral surgery was unsatisfactory (clitoral atrophy or prominent glans) in 6 girls, including 3 whose genitoplasty had been performed by 3 different specialist pediatric urologists. Additional vaginal surgery was necessary for normal comfortable intercourse in 13 patients. Fibrosis and scarring were most evident in those who had undergone aggressive attempts at vaginal reconstruction in infancy.\n\n\nCONCLUSIONS\nThese disappointing results, even in the hands of specialists, highlight the importance of late followup and challenge the prevailing assumption that total correction can be achieved with a single stage operation in infancy. Although simple exteriorization of a low vagina can reasonably be combined with cosmetic correction of virilized external genitalia in infancy, we now believe that in some cases it may be best to defer definitive reconstruction of the intermediate or high vagina until after puberty. The psychological issues surrounding sexuality in these patients are inadequately researched and poorly understood.",
"title": ""
},
{
"docid": "e648aa29c191885832b4deee5af9b5b5",
"text": "Development of controlled release transdermal dosage form is a complex process involving extensive research. Transdermal patches have been developed to improve clinical efficacy of the drug and to enhance patient compliance by delivering smaller amount of drug at a predetermined rate. This makes evaluation studies even more important in order to ensure their desired performance and reproducibility under the specified environmental conditions. These studies are predictive of transdermal dosage forms and can be classified into following types:",
"title": ""
}
] | scidocsrr |
35ecb6181280a474aa2de6c410750227 | Parallelizing Skip Lists for In-Memory Multi-Core Database Systems | [
{
"docid": "5ea65d6e878d2d6853237a74dbc5a894",
"text": "We study indexing techniques for main memory, including hash indexes, binary search trees, T-trees, B+-trees, interpolation search, and binary search on arrays. In a decision-support context, our primary concerns are the lookup time, and the space occupied by the index structure. Our goal is to provide faster lookup times than binary search by paying attention to reference locality and cache behavior, without using substantial extra space. We propose a new indexing technique called “Cache-Sensitive Search Trees” (CSS-trees). Our technique stores a directory structure on top of a sorted array. Nodes in this directory have size matching the cache-line size of the machine. We store the directory in an array and do not store internal-node pointers; child nodes can be found by performing arithmetic on array offsets. We compare the algorithms based on their time and space requirements. We have implemented all of the techniques, and present a performance study on two popular modern machines. We demonstrate that with ∗This research was supported by a David and Lucile Packard Foundation Fellowship in Science and Engineering, by an NSF Young Investigator Award, by NSF grant number IIS-98-12014, and by NSF CISE award CDA-9625374. Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the VLDB copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Very Large Data Base Endowment. To copy otherwise, or to republish, requires a fee and/or special permission from the Endowment. Proceedings of the 25th VLDB Conference, Edinburgh, Scotland, 1999. a small space overhead, we can reduce the cost of binary search on the array by more than a factor of two. We also show that our technique dominates B+-trees, T-trees, and binary search trees in terms of both space and time. A cache simulation verifies that the gap is due largely to cache misses.",
"title": ""
},
{
"docid": "f10660b168700e38e24110a575b5aafa",
"text": "While the use of MapReduce systems (such as Hadoop) for large scale data analysis has been widely recognized and studied, we have recently seen an explosion in the number of systems developed for cloud data serving. These newer systems address \"cloud OLTP\" applications, though they typically do not support ACID transactions. Examples of systems proposed for cloud serving use include BigTable, PNUTS, Cassandra, HBase, Azure, CouchDB, SimpleDB, Voldemort, and many others. Further, they are being applied to a diverse range of applications that differ considerably from traditional (e.g., TPC-C like) serving workloads. The number of emerging cloud serving systems and the wide range of proposed applications, coupled with a lack of apples-to-apples performance comparisons, makes it difficult to understand the tradeoffs between systems and the workloads for which they are suited. We present the \"Yahoo! Cloud Serving Benchmark\" (YCSB) framework, with the goal of facilitating performance comparisons of the new generation of cloud data serving systems. We define a core set of benchmarks and report results for four widely used systems: Cassandra, HBase, Yahoo!'s PNUTS, and a simple sharded MySQL implementation. We also hope to foster the development of additional cloud benchmark suites that represent other classes of applications by making our benchmark tool available via open source. In this regard, a key feature of the YCSB framework/tool is that it is extensible--it supports easy definition of new workloads, in addition to making it easy to benchmark new systems.",
"title": ""
},
{
"docid": "00f88387c8539fcbed2f6ec4f953438d",
"text": "We present Masstree, a fast key-value database designed for SMP machines. Masstree keeps all data in memory. Its main data structure is a trie-like concatenation of B+-trees, each of which handles a fixed-length slice of a variable-length key. This structure effectively handles arbitrary-length possiblybinary keys, including keys with long shared prefixes. +-tree fanout was chosen to minimize total DRAM delay when descending the tree and prefetching each tree node. Lookups use optimistic concurrency control, a read-copy-update-like technique, and do not write shared data structures; updates lock only affected nodes. Logging and checkpointing provide consistency and durability. Though some of these ideas appear elsewhere, Masstree is the first to combine them. We discuss design variants and their consequences.\n On a 16-core machine, with logging enabled and queries arriving over a network, Masstree executes more than six million simple queries per second. This performance is comparable to that of memcached, a non-persistent hash table server, and higher (often much higher) than that of VoltDB, MongoDB, and Redis.",
"title": ""
},
{
"docid": "45c006e52bdb9cfa73fd4c0ebf692dfe",
"text": "Main memory capacities have grown up to a point where most databases fit into RAM. For main-memory database systems, index structure performance is a critical bottleneck. Traditional in-memory data structures like balanced binary search trees are not efficient on modern hardware, because they do not optimally utilize on-CPU caches. Hash tables, also often used for main-memory indexes, are fast but only support point queries. To overcome these shortcomings, we present ART, an adaptive radix tree (trie) for efficient indexing in main memory. Its lookup performance surpasses highly tuned, read-only search trees, while supporting very efficient insertions and deletions as well. At the same time, ART is very space efficient and solves the problem of excessive worst-case space consumption, which plagues most radix trees, by adaptively choosing compact and efficient data structures for internal nodes. Even though ART's performance is comparable to hash tables, it maintains the data in sorted order, which enables additional operations like range scan and prefix lookup.",
"title": ""
}
] | [
{
"docid": "ddb36948e400c970309bd0886bfcfccb",
"text": "1 Introduction \"S pace\" and \"place\" are familiar words denoting common \"Sexperiences. We live in space. There is no space for an-< • / other building on the lot. The Great Plains look spacious. Place is security, space is freedom: we are attached to the one and long for the other. There is no place like home. What is home? It is the old homestead, the old neighborhood, home-town, or motherland. Geographers study places. Planners would like to evoke \"a sense of place.\" These are unexceptional ways of speaking. Space and place are basic components of the lived world; we take them for granted. When we think about them, however, they may assume unexpected meanings and raise questions we have not thought to ask. What is space? Let an episode in the life of the theologian Paul Tillich focus the question so that it bears on the meaning of space in experience. Tillich was born and brought up in a small town in eastern Germany before the turn of the century. The town was medieval in character. Surrounded by a wall and administered from a medieval town hall, it gave the impression of a small, protected, and self-contained world. To an imaginative child it felt narrow and restrictive. Every year, however young Tillich was able to escape with his family to the Baltic Sea. The flight to the limitless horizon and unrestricted space 3 4 Introduction of the seashore was a great event. Much later Tillich chose a place on the Atlantic Ocean for his days of retirement, a decision that undoubtedly owed much to those early experiences. As a boy Tillich was also able to escape from the narrowness of small-town life by making trips to Berlin. Visits to the big city curiously reminded him of the sea. Berlin, too, gave Tillich a feeling of openness, infinity, unrestricted space. 1 Experiences of this kind make us ponder anew the meaning of a word like \"space\" or \"spaciousness\" that we think we know well. What is a place? What gives a place its identity, its aura? These questions occurred to the physicists Niels Bohr and Werner Heisenberg when they visited Kronberg Castle in Denmark. Bohr said to Heisenberg: Isn't it strange how this castle changes as soon as one imagines that Hamlet lived here? As scientists we believe that a castle consists only of stones, and admire the way the …",
"title": ""
},
{
"docid": "a86dac3d0c47757ce8cad41499090b8e",
"text": "We propose a theory of regret regulation that distinguishes regret from related emotions, specifies the conditions under which regret is felt, the aspects of the decision that are regretted, and the behavioral implications. The theory incorporates hitherto scattered findings and ideas from psychology, economics, marketing, and related disciplines. By identifying strategies that consumers may employ to regulate anticipated and experienced regret, the theory identifies gaps in our current knowledge and thereby outlines opportunities for future research.",
"title": ""
},
{
"docid": "76cc47710ab6fa91446844368821c991",
"text": "Recommender systems (RSs) have been successfully applied to alleviate the problem of information overload and assist users' decision makings. Multi-criteria recommender systems is one of the RSs which utilizes users' multiple ratings on different aspects of the items (i.e., multi-criteria ratings) to predict user preferences. Traditional approaches simply treat these multi-criteria ratings as addons, and aggregate them together to serve for item recommendations. In this paper, we propose the novel approaches which treat criteria preferences as contextual situations. More specifically, we believe that part of multiple criteria preferences can be viewed as contexts, while others can be treated in the traditional way in multi-criteria recommender systems. We compare the recommendation performance among three settings: using all the criteria ratings in the traditional way, treating all the criteria preferences as contexts, and utilizing selected criteria ratings as contexts. Our experiments based on two real-world rating data sets reveal that treating criteria preferences as contexts can improve the performance of item recommendations, but they should be carefully selected. The hybrid model of using selected criteria preferences as contexts and the remaining ones in the traditional way is finally demonstrated as the overall winner in our experiments.",
"title": ""
},
{
"docid": "bdb41d1633c603f4b68dfe0191eb822b",
"text": "Concepts are the elementary units of reason and linguistic meaning. They are conventional and relatively stable. As such, they must somehow be the result of neural activity in the brain. The questions are: Where? and How? A common philosophical position is that all concepts-even concepts about action and perception-are symbolic and abstract, and therefore must be implemented outside the brain's sensory-motor system. We will argue against this position using (1) neuroscientific evidence; (2) results from neural computation; and (3) results about the nature of concepts from cognitive linguistics. We will propose that the sensory-motor system has the right kind of structure to characterise both sensory-motor and more abstract concepts. Central to this picture are the neural theory of language and the theory of cogs, according to which, brain structures in the sensory-motor regions are exploited to characterise the so-called \"abstract\" concepts that constitute the meanings of grammatical constructions and general inference patterns.",
"title": ""
},
{
"docid": "3e9aa3bcc728f8d735f6b02e0d7f0502",
"text": "Linda Marion is a doctoral student at Drexel University. E-mail: Linda.Marion@drexel.edu. Abstract This exploratory study examined 250 online academic librarian employment ads posted during 2000 to determine current requirements for technologically oriented jobs. A content analysis software program was used to categorize the specific skills and characteristics listed in the ads. The results were analyzed using multivariate analysis (cluster analysis and multidimensional scaling). The results, displayed in a three-dimensional concept map, indicate 19 categories comprised of both computer related skills and behavioral characteristics that can be interpreted along three continua: (1) technical skills to people skills; (2) long-established technologies and behaviors to emerging trends; (3) technical service competencies to public service competencies. There was no identifiable “digital librarian” category.",
"title": ""
},
{
"docid": "66432ab91b459c3de8e867c8214029d8",
"text": "Distributional hypothesis lies in the root of most existing word representation models by inferring word meaning from its external contexts. However, distributional models cannot handle rare and morphologically complex words very well and fail to identify some finegrained linguistic regularity as they are ignoring the word forms. On the contrary, morphology points out that words are built from some basic units, i.e., morphemes. Therefore, the meaning and function of such rare words can be inferred from the words sharing the same morphemes, and many syntactic relations can be directly identified based on the word forms. However, the limitation of morphology is that it cannot infer the relationship between two words that do not share any morphemes. Considering the advantages and limitations of both approaches, we propose two novel models to build better word representations by modeling both external contexts and internal morphemes in a jointly predictive way, called BEING and SEING. These two models can also be extended to learn phrase representations according to the distributed morphology theory. We evaluate the proposed models on similarity tasks and analogy tasks. The results demonstrate that the proposed models can outperform state-of-the-art models significantly on both word and phrase representation learning.",
"title": ""
},
{
"docid": "44ff7fa960b3c91cd66c5fbceacfba3d",
"text": "God gifted sense of vision to the human being is an important aspect of our life. But there are some unfortunate people who lack the ability of visualizing things. The visually impaired have to face many challenges in their daily life. The problem gets worse when there is an obstacle in front of them. Blind stick is an innovative stick designed for visually disabled people for improved navigation. The paper presents a theoretical system concept to provide a smart ultrasonic aid for blind people. The system is intended to provide overall measures – Artificial vision and object detection. The aim of the overall system is to provide a low cost and efficient navigation aid for a visually impaired person who gets a sense of artificial vision by providing information about the environmental scenario of static and dynamic objects around them. Ultrasonic sensors are used to calculate distance of the obstacles around the blind person to guide the user towards the available path. Output is in the form of sequence of beep sound which the blind person can hear.",
"title": ""
},
{
"docid": "5a2be4e590d31b0cb553215f11776a15",
"text": "This paper presents a review of the state of the art and a discussion on vertical take-off and landing (VTOL) unmanned aerial vehicles (UAVs) applied to the inspection of power utility assets and other similar civil applications. The first part of the paper presents the authors' view on specific benefits and operation constraints associated with the use of UAVs in power industry applications. The second part cites more than 70 recent publications related to this field of application. Among them, some present complete technologies while others deal with specific subsystems relevant to the application of such mobile platforms to power line inspection. The authors close with a discussion of key factors for successful application of VTOL UAVs to power industry infrastructure inspection.",
"title": ""
},
{
"docid": "ebb8e498650191ea148ce1b97f443b21",
"text": "Many learning algorithms use a metric defined over the input s ace as a principal tool, and their performance critically depends on the quality of this metric. We address the problem of learning metrics using side-information in the form of equi valence constraints. Unlike labels, we demonstrate that this type of side-information can sometim es be automatically obtained without the need of human intervention. We show how such side-inform ation can be used to modify the representation of the data, leading to improved clustering and classification. Specifically, we present the Relevant Component Analysis (R CA) algorithm, which is a simple and efficient algorithm for learning a Mahalanobis metric. W e show that RCA is the solution of an interesting optimization problem, founded on an informa tion theoretic basis. If dimensionality reduction is allowed within RCA, we show that it is optimally accomplished by a version of Fisher’s linear discriminant that uses constraints. Moreover, unde r certain Gaussian assumptions, RCA can be viewed as a Maximum Likelihood estimation of the within cl ass covariance matrix. We conclude with extensive empirical evaluations of RCA, showing its ad v ntage over alternative methods.",
"title": ""
},
{
"docid": "08731e24a7ea5e8829b03d79ef801384",
"text": "A new power-rail ESD clamp circuit designed with PMOS as main ESD clamp device has been proposed and verified in a 65nm 1.2V CMOS process. The new proposed design with adjustable holding voltage controlled by the ESD detection circuit has better immunity against mis-trigger or transient-induced latch-on event. The layout area and the standby leakage current of this new proposed design are much superior to that of traditional RC-based power-rail ESD clamp circuit with NMOS as main ESD clamp device.",
"title": ""
},
{
"docid": "6b8329ef59c6811705688e48bf6c0c08",
"text": "Since the invention of word2vec, the skip-gram model has significantly advanced the research of network embedding, such as the recent emergence of the DeepWalk, LINE, PTE, and node2vec approaches. In this work, we show that all of the aforementioned models with negative sampling can be unified into the matrix factorization framework with closed forms. Our analysis and proofs reveal that: (1) DeepWalk empirically produces a low-rank transformation of a network's normalized Laplacian matrix; (2) LINE, in theory, is a special case of DeepWalk when the size of vertices' context is set to one; (3) As an extension of LINE, PTE can be viewed as the joint factorization of multiple networks» Laplacians; (4) node2vec is factorizing a matrix related to the stationary distribution and transition probability tensor of a 2nd-order random walk. We further provide the theoretical connections between skip-gram based network embedding algorithms and the theory of graph Laplacian. Finally, we present the NetMF method as well as its approximation algorithm for computing network embedding. Our method offers significant improvements over DeepWalk and LINE for conventional network mining tasks. This work lays the theoretical foundation for skip-gram based network embedding methods, leading to a better understanding of latent network representation learning.",
"title": ""
},
{
"docid": "3cc84fda5e04ccd36f5b632d9da3a943",
"text": "We present a new algorithm, called marching cubes, that creates triangle models of constant density surfaces from 3D medical data. Using a divide-and-conquer approach to generate inter-slice connectivity, we create a case table that defines triangle topology. The algorithm processes the 3D medical data in scan-line order and calculates triangle vertices using linear interpolation. We find the gradient of the original data, normalize it, and use it as a basis for shading the models. The detail in images produced from the generated surface models is the result of maintaining the inter-slice connectivity, surface data, and gradient information present in the original 3D data. Results from computed tomography (CT), magnetic resonance (MR), and single-photon emission computed tomography (SPECT) illustrate the quality and functionality of marching cubes. We also discuss improvements that decrease processing time and add solid modeling capabilities.",
"title": ""
},
{
"docid": "c5dacb6e808c30b0e7c603c3ee93fe2b",
"text": "Deep learning presents many opportunities for image-based plant phenotyping. Here we consider the capability of deep convolutional neural networks to perform the leaf counting task. Deep learning techniques typically require large and diverse datasets to learn generalizable models without providing a priori an engineered algorithm for performing the task. This requirement is challenging, however, for applications in the plant phenotyping field, where available datasets are often small and the costs associated with generating new data are high. In this work we propose a new method for augmenting plant phenotyping datasets using rendered images of synthetic plants. We demonstrate that the use of high-quality 3D synthetic plants to augment a dataset can improve performance on the leaf counting task. We also show that the ability of the model to generate an arbitrary distribution of phenotypes mitigates the problem of dataset shift when training and testing on different datasets. Finally, we show that real and synthetic plants are significantly interchangeable when training a neural network on the leaf counting task.",
"title": ""
},
{
"docid": "62b8d1ecb04506794f81a47fccb63269",
"text": "This paper addresses the mode collapse for generative adversarial networks (GANs). We view modes as a geometric structure of data distribution in a metric space. Under this geometric lens, we embed subsamples of the dataset from an arbitrary metric space into the `2 space, while preserving their pairwise distance distribution. Not only does this metric embedding determine the dimensionality of the latent space automatically, it also enables us to construct a mixture of Gaussians to draw latent space random vectors. We use the Gaussian mixture model in tandem with a simple augmentation of the objective function to train GANs. Every major step of our method is supported by theoretical analysis, and our experiments on real and synthetic data confirm that the generator is able to produce samples spreading over most of the modes while avoiding unwanted samples, outperforming several recent GAN variants on a number of metrics and offering new features.",
"title": ""
},
{
"docid": "a5c58dbcbf2dc9c298f5fda2721f87a0",
"text": "The purpose of this study was to investigate how university students perceive their involvement in the cyberbullying phenomenon, and its impact on their well-being. Thus, this study presents a preliminary approach of how college students’ perceived involvement in acts of cyberbullying can be measured. Firstly, Exploratory Factor Analysis (N = 349) revealed a unidimensional structure of the four scales included in the Cyberbullying Inventory for College Students. Then, Item Response Theory (N = 170) was used to analyze the unidimensionality of each scale and the interactions between participants and items. Results revealed good item reliability and Cronbach’s a for each scale. Results also showed the potential of the instrument and how college students underrated their involvement in acts of cyberbullying. Additionally, aggression types, coping strategies and sources of help to deal with cyberbullying were identified and discussed. Lastly, age, gender and course-related issues were considered in the analysis. Implications for researchers and practitioners are discussed. 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "54bf44e04920bdaa7388dbbbbd34a1a8",
"text": "TIDs have been detected using various measurement techniques, including HF sounders, incoherent scatter radars, in-situ measurements, and optical techniques. However, there is still much we do not yet know or understand about TIDs. Observations of TIDs have tended to be sparse, and there is a need for additional observations to provide new scientific insight into the geophysical source phenomenology and wave propagation physics. The dense network of GPS receivers around the globe offers a relatively new data source to observe and monitor TIDs. In this paper, we use Total Electron Content (TEC) measurements from 4000 GPS receivers throughout the continental United States to observe TIDs associated with the 11 March 2011 Tohoku tsunami. The tsunami propagated across the Pacific to the US west coast over several hours, and corresponding TIDs were observed over Hawaii, and via the network of GPS receivers in the US. The network of GPS receivers in effect provides a 2D spatial map of TEC perturbations, which can be used to calculate TID parameters, including horizontal wavelength, speed, and period. Well-formed, planar traveling ionospheric disturbances were detected over the west coast of the US ten hours after the earthquake. Fast Fourier transform analysis of the observed waveforms revealed that the period of the wave was 15.1 minutes with a horizontal wavelength of 194.8 km, phase velocity of 233.0 m/s, and an azimuth of 105.2 (propagating nearly due east in the direction of the tsunami wave). These results are consistent with TID observations in airglow measurements from Hawaii earlier in the day, and with other GPS TEC observations. The vertical wavelength of the TID was found to be 43.5 km. The TIDs moved at the same velocity as the tsunami itself. Much work is still needed in order to fully understand the ocean-atmosphere coupling mechanisms, which could lead to the development of effective tsunami detection/warning systems. The work presented in this paper demonstrates a technique for the study of ionospheric perturbations that can affect navigation, communications and surveillance systems.",
"title": ""
},
{
"docid": "adeebdc680819ca992f9d53e4866122a",
"text": "Large numbers of black kites (Milvus migrans govinda) forage with house crows (Corvus splendens) at garbage dumps in many Indian cities. Such aggregation of many individuals results in aggressiveness where adoption of a suitable behavioral approach is crucial. We studied foraging behavior of black kites in dumping sites adjoining two major corporation markets of Kolkata, India. Black kites used four different foraging tactics which varied and significantly influenced foraging attempts and their success rates. Kleptoparasitism was significantly higher than autonomous foraging events; interspecific kleptoparasitism was highest in occurrence with a low success rate, while ‘autonomous-ground’ was least adopted but had the highest success rate.",
"title": ""
},
{
"docid": "ecd144226fdb065c2325a0d3131fd802",
"text": "The unknown and the invisible exploit the unwary and the uninformed for illicit financial gain and reputation damage.",
"title": ""
},
{
"docid": "df27cb7c7ab82ef44aebfeb45d6c3cf1",
"text": "Nowadays, data is created by humans as well as automatically collected by physical things, which embed electronics, software, sensors and network connectivity. Together, these entities constitute the Internet of Things (IoT). The automated analysis of its data can provide insights into previously unknown relationships between things, their environment and their users, facilitating an optimization of their behavior. Especially the real-time analysis of data, embedded into physical systems, can enable new forms of autonomous control. These in turn may lead to more sustainable applications, reducing waste and saving resources. IoT’s distributed and dynamic nature, resource constraints of sensors and embedded devices as well as the amounts of generated data are challenging even the most advanced automated data analysis methods known today. In particular, the IoT requires a new generation of distributed analysis methods. Many existing surveys have strongly focused on the centralization of data in the cloud and big data analysis, which follows the paradigm of parallel high-performance computing. However, bandwidth and energy can be too limited for the transmission of raw data, or it is prohibited due to privacy constraints. Such communication-constrained scenarios require decentralized analysis algorithms which at least partly work directly on the generating devices. After listing data-driven IoT applications, in contrast to existing surveys, we highlight the differences between cloudbased and decentralized analysis from an algorithmic perspective. We present the opportunities and challenges of research on communication-efficient decentralized analysis algorithms. Here, the focus is on the difficult scenario of vertically partitioned data, which covers common IoT use cases. The comprehensive bibliography aims at providing readers with a good starting point for their own work.",
"title": ""
},
{
"docid": "731a3a94245b67df3e362ac80f41155f",
"text": "Opportunistic networking offers many appealing application perspectives from local social-networking applications to supporting communications in remote areas or in disaster and emergency situations. Yet, despite the increasing penetration of smartphones, opportunistic networking is not feasible with most popular mobile devices. There is still no support for WiFi Ad-Hoc and protocols such as Bluetooth have severe limitations (short range, pairing). We believe that WiFi Ad-Hoc communication will not be supported by most popular mobile OSes (i.e., iOS and Android) and that WiFi Direct will not bring the desired features. Instead, we propose WiFi-Opp, a realistic opportunistic setup relying on (i) open stationary APs and (ii) spontaneous mobile APs (i.e., smartphones in AP or tethering mode), a feature used to share Internet access, which we use to enable opportunistic communications. We compare WiFi-Opp to WiFi Ad-Hoc by replaying real-world contact traces and evaluate their performance in terms of capacity for content dissemination as well as energy consumption. While achieving comparable throughput, WiFi-Opp is up to 10 times more energy efficient than its Ad-Hoc counterpart. Eventually, a proof of concept demonstrates the feasibility of WiFi-Opp, which opens new perspectives for opportunistic networking.",
"title": ""
}
] | scidocsrr |
a31565af70a5a6229d4b9623366bda3f | Creativity: Self-Referential Mistaking, Not Negating | [
{
"docid": "db70e6c202dc2c7f72fab88057f274af",
"text": "Defining structure and detecting the emergence of complexity in nature are inherently subjective, though essential, scientific activities. Despite the difficulties, these problems can be analyzed in terms of how model-building observers infer from measurements the computational capabilities embedded in nonlinear processes. An observer’s notion of what is ordered, what is random, and what is complex in its environment depends directly on its computational resources: the amount of raw measurement data, of memory, and of time available for estimation and inference. The discovery of structure in an environment depends more critically and subtlely, though, on how those resources are organized. The descriptive power of the observer’s chosen (or implicit) computational model class, for example, can be an overwhelming determinant in finding regularity in data. This paper presents an overview of an inductive framework — hierarchical -machine reconstruction — in which the emergence of complexity is associated with the innovation of new computational model classes. Complexity metrics for detecting structure and quantifying emergence, along with an analysis of the constraints on the dynamics of innovation, are outlined. Illustrative examples are drawn from the onset of unpredictability in nonlinear systems, finitary nondeterministic processes, and cellular automata pattern recognition. They demonstrate how finite inference resources drive the innovation of new structures and so lead to the emergence of complexity.",
"title": ""
}
] | [
{
"docid": "815cdf2829b60ff44b38878b16f17695",
"text": "Nowadays, Vending Machines are well known among Japan, Malaysia and Singapore. The quantity of machines in these countries is on the top worldwide. This is due to the modern lifestyles which require fast food processing with high quality. This paper describes the designing of multi select machine using Finite State Machine Model with Auto-Billing Features. Finite State Machine (FSM) modeling is the most crucial part in developing proposed model as this reduces the hardware. In this paper the process of four state (user Selection, Waiting for money insertion, product delivery and servicing) has been modeled using MEALY Machine Model. The proposed model is tested using Spartan 3 development board and its performance is compared with CMOS based machine.",
"title": ""
},
{
"docid": "2a68a1bcdd4b764f7981c76199f96cc9",
"text": "In this paper we present a method for logo detection in image collections and streams. The proposed method is based on features, extracted from reference logo images and test images. Extracted features are combined with respect to their similarity in their descriptors' space and afterwards with respect to their geometric consistency on the image plane. The contribution of this paper is a novel method for fast geometric consistency test. Using state of the art fast matching methods, it produces pairs of similar features between the test image and the reference logo image and then examines which pairs are forming a consistent geometry on both the test and the reference logo image. It is noteworthy that the proposed method is scale, rotation and translation invariant. The key advantage of the proposed method is that it exhibits a much lower computational complexity and better performance than the state of the art methods. Experimental results on large scale datasets are presented to support these statements.",
"title": ""
},
{
"docid": "07409cd81cc5f0178724297245039878",
"text": "In recent years, the number of sensor network deployments for real-life applications has rapidly increased and it is expected to expand even more in the near future. Actually, for a credible deployment in a real environment three properties need to be fulfilled, i.e., energy efficiency, scalability and reliability. In this paper we focus on IEEE 802.15.4 sensor networks and show that they can suffer from a serious MAC unreliability problem, also in an ideal environment where transmission errors never occur. This problem arises whenever power management is enabled - for improving the energy efficiency - and results in a very low delivery ratio, even when the number of nodes in the network is very low (e.g., 5). We carried out an extensive analysis, based on simulations and real measurements, to investigate the ultimate reasons of this problem. We found that it is caused by the default MAC parameter setting suggested by the 802.15.4 standard. We also found that, with a more appropriate parameter setting, it is possible to achieve the desired level of reliability (as well as a better energy efficiency). However, in some scenarios this is possible only by choosing parameter values formally not allowed by the standard.",
"title": ""
},
{
"docid": "bc0804d1fb9494d73f2b4ef39f0a5e78",
"text": "OBJECTIVE\nStudies have shown that stress can delay the healing of experimental punch biopsy wounds. This study examined the relationship between the healing of natural wounds and anxiety and depression.\n\n\nMETHODS\nFifty-three subjects (31 women and 22 men) were studied. Wound healing was rated using a five-point Likert scale. Anxiety and depression were measured using the Hospital Anxiety and Depression Scale (HAD), a well-validated psychometric questionnaire. Psychological and clinical wound assessments were each conducted with raters and subjects blinded to the results of the other assessment.\n\n\nRESULTS\nDelayed healing was associated with a higher mean HAD score (p = .0348). Higher HAD anxiety and depression scores (indicating \"caseness\") were also associated with delayed healing (p = .0476 and p = .0311, respectively). Patients scoring in the top 50% of total HAD scores were four times more likely to have delayed healing than those scoring in the bottom 50% (confidence interval = 1.06-15.08).\n\n\nCONCLUSIONS\nThe relationship between healing of chronic wounds and anxiety and depression as measured by the HAD was statistically significant. Further research in the form of a longitudinal study and/or an interventional study is proposed.",
"title": ""
},
{
"docid": "f97491ae5324d737aadc42e3c402d838",
"text": "Diese Arbeit verknüpft Lernziele, didaktische Methoden und Techniken zur Bearbeitung und Bewertung von Programmieraufgaben in E-Learning-Plattformen. Das Ziel ist dabei, sowohl eine Bewertungsgrundlage für den Einsatz einer Plattform für beliebige Lehrveranstaltungen der Programmierlehre zu schaffen als auch ein Gesamtkonzept für eine idealisierte E-Learning-Anwendung zu präsentieren. Dabei steht bewusst die Kompetenzbildung der Studierenden im Zentrum der Überlegungen – die dafür benötigte technische Unterstützung wird aus den didaktischen Methoden zur Vermittlung der Kompetenzen abgeleitet.",
"title": ""
},
{
"docid": "a5f7a243e68212e211d9d89da06ceae1",
"text": "A new technique to achieve a circularly polarized probe-fed single-layer microstrip-patch antenna with a wideband axial ratio is proposed. The antenna is a modified form of the conventional E-shaped patch, used to broaden the impedance bandwidth of a basic patch antenna. By letting the two parallel slots of the E patch be unequal, asymmetry is introduced. This leads to two orthogonal currents on the patch and, hence, circularly polarized fields are excited. The proposed technique exhibits the advantage of the simplicity of the E-shaped patch design, which requires only the slot lengths, widths, and position parameters to be determined. Investigations of the effect of various dimensions of the antenna have been carried out via parametric analysis. Based on these investigations, a design procedure for a circularly polarized E-shaped patch was developed. A prototype has been designed, following the suggested procedure for the IEEE 802.11big WLAN band. The performance of the fabricated antenna was measured and compared with simulation results. Various examples with different substrate thicknesses and material types are presented and compared with the recently proposed circularly polarized U-slot patch antennas.",
"title": ""
},
{
"docid": "42aca9ffdd5c0d2a2f310280d12afa1a",
"text": "Communication skills courses are an essential component of undergraduate and postgraduate training and effective communication skills are actively promoted by medical defence organisations as a means of decreasing litigation. This article discusses active listening, a difficult discipline for anyone to practise, and examines why this is particularly so for doctors. It draws together themes from key literature in the field of communication skills, and examines how these theories apply in general practice.",
"title": ""
},
{
"docid": "0f2b09447d0cf8189264eda201a5dd8e",
"text": "Owing to its critical role in human cognition, the neural basis of language has occupied the interest of neurologists, psychologists, and cognitive neuroscientists for over 150 years. The language system was initially conceptualized as a left hemisphere circuit with discrete comprehension and production centers. Since then, advances in neuroscience have allowed a much more complex and nuanced understanding of the neural organization of language to emerge. In the course of mapping this complicated architecture, one especially important discovery has been the degree to which the map itself can change. Evidence from lesion studies, neuroimaging, and neuromodulation research demonstrates that the representation of language in the brain is altered by injury of the normal language network, that it changes over the course of language recovery, and that it is influenced by successful treatment interventions. This special issue of RNN is devoted to plasticity in the language system and focuses on changes that occur in the setting of left hemisphere stroke, the most common cause of aphasia. Aphasia—the acquired loss of language ability—is one of the most common and debilitating cognitive consequences of stroke, affecting approximately 20–40% of stroke survivors and impacting",
"title": ""
},
{
"docid": "2d8f76cef3d0c11441bbc8f5487588cb",
"text": "Abstract. It seems natural to assume that the more It seems natural to assume that the more closely robots come to resemble people, the more likely they are to elicit the kinds of responses people direct toward each other. However, subtle flaws in appearance and movement only seem eerie in very humanlike robots. This uncanny phenomenon may be symptomatic of entities that elicit a model of a human other but do not measure up to it. If so, a very humanlike robot may provide the best means of finding out what kinds of behavior are perceived as human, since deviations from a human other are more obvious. In pursuing this line of inquiry, it is essential to identify the mechanisms involved in evaluations of human likeness. One hypothesis is that an uncanny robot elicits an innate fear of death and culturally-supported defenses for coping with death’s inevitability. An experiment, which borrows from the methods of terror management research, was performed to test this hypothesis. Across all questions subjects who were exposed to a still image of an uncanny humanlike robot had on average a heightened preference for worldview supporters and a diminished preference for worldview threats relative to the control group.",
"title": ""
},
{
"docid": "5bee27378a98ff5872f7ae5e899f81e2",
"text": "An algorithmic framework is proposed to process acceleration and surface electromyographic (SEMG) signals for gesture recognition. It includes a novel segmentation scheme, a score-based sensor fusion scheme, and two new features. A Bayes linear classifier and an improved dynamic time-warping algorithm are utilized in the framework. In addition, a prototype system, including a wearable gesture sensing device (embedded with a three-axis accelerometer and four SEMG sensors) and an application program with the proposed algorithmic framework for a mobile phone, is developed to realize gesture-based real-time interaction. With the device worn on the forearm, the user is able to manipulate a mobile phone using 19 predefined gestures or even personalized ones. Results suggest that the developed prototype responded to each gesture instruction within 300 ms on the mobile phone, with the average accuracy of 95.0% in user-dependent testing and 89.6% in user-independent testing. Such performance during the interaction testing, along with positive user experience questionnaire feedback, demonstrates the utility of the framework.",
"title": ""
},
{
"docid": "3f1a9a0e601187836177a54d5fa7cbeb",
"text": "For the last twenty years, different kinds of information systems are developed for different purposes, depending on the need of the business . In today’s business world, there are varieties of information systems such as transaction processing systems (TPS), office automation systems (OAS), management information systems (MIS), decision support system (DSS), and executive information systems (EIS), Expert System (ES) etc . Each plays a different role in organizational hierarchy and management operations. This study attempts to explain the role of each type of information systems in business organizations.",
"title": ""
},
{
"docid": "a814fedf9bedf31911f8db43b0d494a5",
"text": "A critical period for language learning is often defined as a sharp decline in learning outcomes with age. This study examines the relevance of the critical period for English speaking proficiency among immigrants in the US. It uses microdata from the 2000 US Census, a model of language acquisition, and a flexible specification of an estimating equation based on 64 age-at-migration dichotomous variables. Self-reported English speaking proficiency among immigrants declines more-or-less monotonically with age at migration, and this relationship is not characterized by any sharp decline or discontinuity that might be considered consistent with a “critical” period. The findings are robust across the various immigrant samples, and between the genders. (110 words).",
"title": ""
},
{
"docid": "97f2f0dd427c5f18dae178bc2fd620ba",
"text": "NOTICE The contents of this report reflect the views of the author, who is responsible for the facts and accuracy of the data presented herein. The contents do not necessarily reflect policy of the Department of Transportation. This report does not constitute a standard, specification, or regulation. Abstract This report summarizes the historical development of the resistance factors developed for the geotechnical foundation design sections of the AASHTO LRFD Bridge Design Specifications, and recommends how to specifically implement recent developments in resistance factors for geotechnical foundation design. In addition, recommendations regarding the load factor for downdrag loads, based on statistical analysis of available load test data and reliability theory, are provided. The scope of this report is limited to shallow and deep foundation geotechnical design at the strength limit state. 17. Forward With the advent of the AASHTO Load and Resistance Factor (LRFD) Bridge Design Specifications in 1992, there has been considerable focus on the geotechnical aspects of those specifications, since most geotechnical engineers are unfamiliar with LRFD concepts. This is especially true regarding the quantification of the level of safety needed for design. Up to the time of writing of this report, the geotechnical profession has typically used safety factors within an allowable stress design (ASD) framework (also termed working stress design, or WSD). For those agencies that use Load Factor Design (LFD), the safety factors for the foundation design are used in combination with factored loads in accordance with the AASHTO Standard Specifications for Highway Bridges (2002). The adaptation of geotechnical design and the associated safety factors to what would become the first edition of the AASHTO LRFD Bridge Design Specifications began in earnest with the publication of the results of NCHRP Project 24-4 as NCHRP Report 343 (Barker, et al., 1991). The details of the calibrations they conducted are provided in an unpublished Appendix to that report (Appendix A). This is the primary source of resistance factors for foundation design as currently published in AASHTO (2004). Since that report was published, changes have occurred in the specifications regarding load factors and design methodology that have required re-evaluation of the resistance factors. Furthermore, new studies have been or are being conducted that are yet to be implemented in the LRFD specifications. In 2002, the AASHTO Bridge Subcommittee initiated an effort, with the help of the Federal Highway Administration (FHWA), to rewrite the foundation design sections of the AASHTO …",
"title": ""
},
{
"docid": "127ef38020617fda8598971b3f10926f",
"text": "Web services are important for creating distributed applications on the Web. In fact, they're a key enabler for service-oriented architectures that focus on service reuse and interoperability. The World Wide Web Consortium (W3C) has recently finished work on two important standards for describing Web services the Web Services Description Language (WSDL) 2.0 and Semantic Annotations for WSDL and XML Schema (SAWSDL). Here, the authors discuss the latter, which is the first standard for adding semantics to Web service descriptions.",
"title": ""
},
{
"docid": "3956a033021add41b1f4e80864e3b196",
"text": "Recently, most of malicious web pages include obfuscated codes in order to circumvent the detection of signature-based detection systems .It is difficult to decide whether the sting is obfuscated because the shape of obfuscated strings are changed continuously. In this paper, we propose a novel methodology that can detect obfuscated strings in the malicious web pages. We extracted three metrics as rules for detecting obfuscated strings by analyzing patterns of normal and malicious JavaScript codes. They are N-gram, Entropy, and Word Size. N-gram checks how many each byte code is used in strings. Entropy checks distributed of used byte codes. Word size checks whether there is used very long string. Based on the metrics, we implemented a practical tool for our methodology and evaluated it using read malicious web pages. The experiment results showed that our methodology can detect obfuscated strings in web pages effectively.",
"title": ""
},
{
"docid": "56d4abc61377dc2afa3ded978d318646",
"text": "Clothoids, i.e. curves Z(s) in RI whoem curvatures xes) are linear fitting functions of arclength ., have been nued for some time for curve fitting purposes in engineering applications. The first part of the paper deals with some basic interpolation problems for lothoids and studies the existence and uniqueness of their solutions. The second part discusses curve fitting problems for clothoidal spines, i.e. C2-carves, which are composed of finitely many clothoids. An iterative method is described for finding a clothoidal spline Z(aJ passing through given Points Z1 cR 2 . i = 0,1L.. n+ 1, which minimizes the integral frX(S) 2 ds.",
"title": ""
},
{
"docid": "9581483f301b3522b88f6690b2668217",
"text": "AI researchers employ not only the scientific method, but also methodology from mathematics and engineering. However, the use of the scientific method – specifically hypothesis testing – in AI is typically conducted in service of engineering objectives. Growing interest in topics such as fairness and algorithmic bias show that engineering-focused questions only comprise a subset of the important questions about AI systems. This results in the AI Knowledge Gap: the number of unique AI systems grows faster than the number of studies that characterize these systems’ behavior. To close this gap, we argue that the study of AI could benefit from the greater inclusion of researchers who are well positioned to formulate and test hypotheses about the behavior of AI systems. We examine the barriers preventing social and behavioral scientists from conducting such studies. Our diagnosis suggests that accelerating the scientific study of AI systems requires new incentives for academia and industry, mediated by new tools and institutions. To address these needs, we propose a two-sided marketplace called TuringBox. On one side, AI contributors upload existing and novel algorithms to be studied scientifically by others. On the other side, AI examiners develop and post machine intelligence tasks designed to evaluate and characterize algorithmic behavior. We discuss this market’s potential to democratize the scientific study of AI behavior, and thus narrow the AI Knowledge Gap. 1 The Many Facets of AI Research Although AI is a sub-discipline of computer science, AI researchers do not exclusively use the scientific method in their work. For example, the methods used by early AI researchers often drew from logic, a subfield of mathematics, and are distinct from the scientific method we think of today. Indeed AI has adopted many techniques and approaches over time. In this section, we distinguish and explore the history of these ∗Equal contribution. methodologies with a particular emphasis on characterizing the evolving science of AI.",
"title": ""
},
{
"docid": "45f6bb33f098a61c4166e3b942501604",
"text": "Estimating human age automatically via facial image analysis has lots of potential real-world applications, such as human computer interaction and multimedia communication. However, it is still a challenging problem for the existing computer vision systems to automatically and effectively estimate human ages. The aging process is determined by not only the person's gene, but also many external factors, such as health, living style, living location, and weather conditions. Males and females may also age differently. The current age estimation performance is still not good enough for practical use and more effort has to be put into this research direction. In this paper, we introduce the age manifold learning scheme for extracting face aging features and design a locally adjusted robust regressor for learning and prediction of human ages. The novel approach improves the age estimation accuracy significantly over all previous methods. The merit of the proposed approaches for image-based age estimation is shown by extensive experiments on a large internal age database and the public available FG-NET database.",
"title": ""
},
{
"docid": "86b330069b20d410eb2186479fe7f500",
"text": "Pattern classification systems are commonly used in adversarial applications, like biometric authentication, network intrusion detection, and spam filtering, in which data can be purposely manipulated by humans to undermine their operation. As this adversarial scenario is not taken into account by classical design methods, pattern classification systems may exhibit vulnerabilities,whose exploitation may severely affect their performance, and consequently limit their practical utility. Extending pattern classification theory and design methods to adversarial settings is thus a novel and very relevant research direction, which has not yet been pursued in a systematic way. In this paper, we address one of the main open issues: evaluating at design phase the security of pattern classifiers, namely, the performance degradation under potential attacks they may incur during operation. We propose a framework for empirical evaluation of classifier security that formalizes and generalizes the main ideas proposed in the literature, and give examples of its use in three real applications. Reported results show that security evaluation can provide a more complete understanding of the classifier’s behavior in adversarial environments, and lead to better design choices micans infotech +91 90036 28940 +91 94435 11725 MICANS INFOTECH, NO: 8 , 100 FEET ROAD,PONDICHERRY. WWW.MICANSINFOTECH.COM ; MICANSINFOTECH@GMAIL.COM +91 90036 28940; +91 94435 11725 IEEE Projects 100% WORKING CODE + DOCUMENTATION+ EXPLAINATION – BEST PRICE LOW PRICE GUARANTEED",
"title": ""
}
] | scidocsrr |
b3d10d3125708a84dcb956d775f80f92 | Non-inverting buck-boost power-factor-correction converter with wide input-voltage-range applications | [
{
"docid": "8d495d909cb2a93929b34d85c371693b",
"text": "1 This work is supported by Philips Research, Briarcliff Manor, NY, through Colorado Power Electronics Center Abstract – Single-switch step-up/step-down converters, such as the buck-boost, SEPIC and Cuk, have relatively high voltage and current stresses on components compared to the buck or the boost converter. A buck-boost converter with two independently controlled switches can work as a boost or as a buck converter depending on input-output conditions, and thus achieves lower stresses on components. Using the converter synthesis method from [1], families of two-switch buck-boost converters are generated, including several new converter topologies. The two-switch buck-boost converters are evaluated and compared in terms of component stresses in universal-input power-factor-corrector applications. Among them, one new two-switch converter is identified that has low inductor conduction losses (50% of the boost converter), low inductor volt-seconds (72% of the boost converter), and about the same switch conduction losses and voltage stresses as the boost converter.",
"title": ""
}
] | [
{
"docid": "73bf9a956ea7a10648851c85ef740db0",
"text": "Printed atmospheric spark gaps as ESD-protection on PCBs are examined. At first an introduction to the physic behind spark gaps. Afterward the time lag (response time) vs. voltage is measured with high load impedance. The dependable clamp voltage (will be defined later) is measured as a function of the load impedance and the local field in the air gap is simulated with FIT simulation software. At last the observed results are discussed on the basic of the physic and the simulations.",
"title": ""
},
{
"docid": "836001910512e8bd7f71f4ac7448a6dd",
"text": "We have developed a high-speed 1310-nm Al-MQW buried-hetero laser having 29-GHz bandwidth (BW). The laser was used to compare 28-Gbaud four-level pulse-amplitude-modulation (PAM4) and 56-Gb/s nonreturn to zero (NRZ) transmission performance. In both cases, it was possible to meet the 10-km link budget, however, 56-Gb/s NRZ operation achieved a 2-dB better sensitivity, attributable to the wide BW of the directly modulated laser and the larger eye amplitude for the NRZ format. On the other hand, the advantages for 28-Gbaud PAM4 were the reduced BW requirement for both the transmitter and the receiver PIN diode, which enabled us to use a lower bias to the laser and a PIN with a higher responsivity, or conversely enable the possibility of high temperature operation with lower power consumption. Both formats showed a negative dispersion penalty compared to back-to-back sensitivity using a negative fiber dispersion of -60 ps/nm, which was expected from the observed chirp characteristics of the laser. The reliability study up to 11 600 h at 85 °C under accelerated conditions showed no decrease in the output power at a constant bias of 60 mA.",
"title": ""
},
{
"docid": "39d67fe0ea08adf64d1122d4c173a9af",
"text": "Vision-based pedestrian detection has become an active topic in computer vision and autonomous vehicles. It aims at detecting pedestrians appearing ahead of the vehicle using a camera so that autonomous vehicles can assess the danger and take action. Due to varied illumination and appearance, complex background and occlusion pedestrian detection in outdoor environments is a difficult problem. In this paper, we propose a novel hierarchical feature extraction and weighted kernel sparse representation model for pedestrian classification. Initially, hierarchical feature extraction based on a CENTRIST descriptor is used to capture discriminative structures. A max pooling operation is used to enhance the invariance of varying appearance. Then, a kernel sparse representation model is proposed to fully exploit the discrimination information embedded in the hierarchical local features, and a Gaussian weight function as the measure to effectively handle the occlusion in pedestrian images. Extensive experiments are conducted on benchmark databases, including INRIA, Daimler, an artificially generated dataset and a real occluded dataset, demonstrating the more robust performance of the proposed method compared to state-of-the-art pedestrian classification methods.",
"title": ""
},
{
"docid": "162ad6b8d48f5d6c76067d25b320a947",
"text": "Image Understanding is fundamental to systems that need to extract contents and infer concepts from images. In this paper, we develop an architecture for understanding images, through which a system can recognize the content and the underlying concepts of an image and, reason and answer questions about both using a visual module, a reasoning module, and a commonsense knowledge base. In this architecture, visual data combines with background knowledge and; iterates through visual and reasoning modules to answer questions about an image or to generate a textual description of an image. We first provide motivations of such a Deep Image Understanding architecture and then, we describe the necessary components it should include. We also introduce our own preliminary implementation of this architecture and empirically show how this more generic implementation compares with a recent end-to-end Neural approach on specific applications. We address the knowledge-representation challenge in such an architecture by representing an image using a directed labeled graph (called Scene Description Graph). Our implementation uses generic visual recognition techniques and commonsense reasoning1 to extract such graphs from images. Our experiments show that the extracted graphs capture the syntactic and semantic content of an image with reasonable accuracy.",
"title": ""
},
{
"docid": "67733a15509caa529f2dd6068461c91d",
"text": "We used broadband ferromagnetic resonance (FMR) spectroscopy to measure the second- and fourth-order perpendicular magnetic anisotropies in Ta/(t) Co<sub>60</sub>Fe<sub>20</sub>B<sub>20</sub>/MgO layers over a Co<sub>60</sub>Fe<sub>20</sub>B<sub>20</sub> thickness range of 5.0 nm ≥ t ≥ 0.8 nm. Fort > 1.0 nm, the easy axis is in the plane of the film, but when t <; 1.0 nm, the easy axis is directed perpendicular to the surface. However, the presence of a substantial higher order perpendicular anisotropy results in an easy cone state when t = 1.0 nm. Angular-dependent FMR measurements verify the presence of the easy cone state. Measurement of the spectroscopic g-factor via FMR for both the in-plane and out-of-plane geometries suggests a significant change in electronic and/or physical structure at t ≈ 1.0 nm thickness.",
"title": ""
},
{
"docid": "e766e5a45936c53767898c591e6126f8",
"text": "Video completion is a computer vision technique to recover the missing values in video sequences by filling the unknown regions with the known information. In recent research, tensor completion, a generalization of matrix completion for higher order data, emerges as a new solution to estimate the missing information in video with the assumption that the video frames are homogenous and correlated. However, each video clip often stores the heterogeneous episodes and the correlations among all video frames are not high. Thus, the regular tenor completion methods are not suitable to recover the video missing values in practical applications. To solve this problem, we propose a novel spatiallytemporally consistent tensor completion method for recovering the video missing data. Instead of minimizing the average of the trace norms of all matrices unfolded along each mode of a tensor data, we introduce a new smoothness regularization along video time direction to utilize the temporal information between consecutive video frames. Meanwhile, we also minimize the trace norm of each individual video frame to employ the spatial correlations among pixels. Different to previous tensor completion approaches, our new method can keep the spatio-temporal consistency in video and do not assume the global correlation in video frames. Thus, the proposed method can be applied to the general and practical video completion applications. Our method shows promising results in all evaluations on both 3D biomedical image sequence and video benchmark data sets. Video completion is the process of filling in missing pixels or replacing undesirable pixels in a video. The missing values in a video can be caused by many situations, e.g., the natural noise in video capture equipment, the occlusion from the obstacles in environment, segmenting or removing interested objects from videos. Video completion is of great importance to many applications such as video repairing and editing, movie post-production (e.g., remove unwanted objects), etc. Missing information recovery in images is called inpaint∗To whom all correspondence should be addressed. This work was partially supported by US NSF IIS-1117965, IIS-1302675, IIS-1344152. Copyright c © 2014, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. ing, which is usually accomplished by inferring or guessing the missing information from the surrounding regions, i.e. the spatial information. Video completion can be considered as an extension of 2D image inpainting to 3D. Video completion uses the information from the past and the future frames to fill the pixels in the missing region, i.e. the spatiotemporal information, which has been getting increasing attention in recent years. In computer vision, an important application area of artificial intelligence, there are many video completion algorithms. The most representative approaches include video inpainting, analogous to image inpainting (Bertalmio, Bertozzi, and Sapiro 2001), motion layer video completion, which splits the video sequence into different motion layers and completes each motion layer separately (Shiratori et al. 2006), space-time video completion, which is based on texture synthesis and is good but slow (Wexler, Shechtman, and Irani 2004), and video repairing, which repairs static background with motion layers and repairs moving foreground using model alignment (Jia et al. 2004). 
Many video completion methods are less effective because the video is often treated as a set of independent 2D images. Although the temporal independence assumption simplifies the problem, losing temporal consistency in recovered pixels leads to the unsatisfactory performance. On the other hand, temporal information can improve the video completion results (Wexler, Shechtman, and Irani 2004; Matsushita et al. 2005), but to exploit it the computational speeds of most methods are significantly reduced. Thus, how to efficiently and effectively utilize both spatial and temporal information is a challenging problem in video completion. In most recent work, Liu et. al. (Liu et al. 2013) estimated the missing data in video via tensor completion which was generalized from matrix completion methods. In these methods, the rank or rank approximation (trace norm) is used, as a powerful tool, to capture the global information. The tensor completion method (Liu et al. 2013) minimizes the trace norm of a tensor, i.e. the average of the trace norms of all matrices unfolded along each mode. Thus, it assumes the video frames are highly correlated in the temporal direction. If the video records homogenous episodes and all frames describe the similar information, this assumption has no problem. However, one video clip usually includes multiple different episodes and the frames from different episodes …",
"title": ""
},
{
"docid": "41aa05455471ecd660599f4ec285ff29",
"text": "The recent progress of human parsing techniques has been largely driven by the availability of rich data resources. In this work, we demonstrate some critical discrepancies between the current benchmark datasets and the real world human parsing scenarios. For instance, all the human parsing datasets only contain one person per image, while usually multiple persons appear simultaneously in a realistic scene. It is more practically demanded to simultaneously parse multiple persons, which presents a greater challenge to modern human parsing methods. Unfortunately, absence of relevant data resources severely impedes the development of multiple-human parsing methods. To facilitate future human parsing research, we introduce the Multiple-Human Parsing (MHP) dataset, which contains multiple persons in a real world scene per single image. The MHP dataset contains various numbers of persons (from 2 to 16) per image with 18 semantic classes for each parsing annotation. Persons appearing in the MHP images present sufficient variations in pose, occlusion and interaction. To tackle the multiple-human parsing problem, we also propose a novel Multiple-Human Parser (MH-Parser), which considers both the global context and local cues for each person in the parsing process. The model is demonstrated to outperform the naive “detect-and-parse” approach by a large margin, which will serve as a solid baseline and help drive the future research in real world human parsing.",
"title": ""
},
{
"docid": "9e4adad2e248895d80f28cf6134f68c1",
"text": "Maltodextrin (MX) is an ingredient in high demand in the food industry, mainly for its useful physical properties which depend on the dextrose equivalent (DE). The DE has however been shown to be an inaccurate parameter for predicting the performance of the MXs in technological applications, hence commercial MXs were characterized by mass spectrometry (MS) to determine their molecular weight distribution (MWD) and degree of polymerization (DP). Samples were subjected to different water activities (aw). Water adsorption was similar at low aw, but radically increased with the DP at higher aw. The decomposition temperature (Td) showed some variations attributed to the thermal hydrolysis induced by the large amount of adsorbed water and the supplied heat. The glass transition temperature (Tg) linearly decreased with both, aw and DP. The microstructural analysis by X-ray diffraction showed that MXs did not crystallize with the adsorption of water, preserving their amorphous structure. The optical micrographs showed radical changes in the overall appearance of the MXs, indicating a transition from a glassy to a rubbery state. Based on these characterizations, different technological applications for the MXs were suggested.",
"title": ""
},
{
"docid": "616d20b1359cc1cf4fcfb1a0318d721e",
"text": "The Burj Khalifa Project is the tallest structure ever built by man; the tower is 828 meters tall and compromise of 162 floors above grade and 3 basement levels. Early integration of aerodynamic shaping and wind engineering played a major role in the architectural massing and design of this multi-use tower, where mitigating and taming the dynamic wind effects was one of the most important design criteria set forth at the onset of the project design. This paper provides brief description of the tower structural systems, focuses on the key issues considered in construction planning of the key structural components, and briefly outlines the execution of one of the most comprehensive structural health monitoring program in tall buildings.",
"title": ""
},
{
"docid": "462e3be75902bf8a39104c75ec2bea53",
"text": "A new model for associative memory, based on a correlation matrix, is suggested. In this model information is accumulated on memory elements as products of component data. Denoting a key vector by q(p), and the data associated with it by another vector x(p), the pairs (q(p), x(p)) are memorized in the form of a matrix {see the Equation in PDF File} where c is a constant. A randomly selected subset of the elements of Mxq can also be used for memorizing. The recalling of a particular datum x(r) is made by a transformation x(r)=Mxqq(r). This model is failure tolerant and facilitates associative search of information; these are properties that are usually assigned to holographic memories. Two classes of memories are discussed: a complete correlation matrix memory (CCMM), and randomly organized incomplete correlation matrix memories (ICMM). The data recalled from the latter are stochastic variables but the fidelity of recall is shown to have a deterministic limit if the number of memory elements grows without limits. A special case of correlation matrix memories is the auto-associative memory in which any part of the memorized information can be used as a key. The memories are selective with respect to accumulated data. The ICMM exhibits adaptive improvement under certain circumstances. It is also suggested that correlation matrix memories could be applied for the classification of data.",
"title": ""
},
{
"docid": "3a2c37a96407b79f6ddd9d38f9b79741",
"text": "This paper proposes a network-aware resource management scheme that improves the quality of experience (QoE) for adaptive video streaming in CDNs and Information-Centric Networks (ICN) in general, and Dynamic Adaptive Streaming over HTTP (DASH) in particular. By utilizing the DASH manifest, the network (by way of a logically centralized controller) computes the available link resources and schedules the chunk dissemination to edge caches ahead of the end-user's requests. Our approach is optimized for multi-rate DASH videos. We implemented our resource management scheme, and demonstrated that in the scenario when network conditions evolve quickly, our approach can maintain smooth high quality playback. We show on actual video server data and in our own simulation environment that a significant reduction in peak bandwidth of 20% can be achieved using our approach.",
"title": ""
},
{
"docid": "a5a0e1b984eac30c225190c0cba63ab4",
"text": "The traditional intrusion detection system is not flexible in providing security in cloud computing because of the distributed structure of cloud computing. This paper surveys the intrusion detection and prevention techniques and possible solutions in Host Based and Network Based Intrusion Detection System. It discusses DDoS attacks in Cloud environment. Different Intrusion Detection techniques are also discussed namely anomaly based techniques and signature based techniques. It also surveys different approaches of Intrusion Prevention System.",
"title": ""
},
{
"docid": "16932e01fdea801f28ec6c4194f70352",
"text": "Plum pox virus (PPV) causes the most economically-devastating viral disease in Prunus species. Unfortunately, few natural resistance genes are available for the control of PPV. Recessive resistance to some potyviruses is associated with mutations of eukaryotic translation initiation factor 4E (eIF4E) or its isoform eIF(iso)4E. In this study, we used an RNA silencing approach to manipulate the expression of eIF4E and eIF(iso)4E towards the development of PPV resistance in Prunus species. The eIF4E and eIF(iso)4E genes were cloned from plum (Prunus domestica L.). The sequence identity between plum eIF4E and eIF(iso)4E coding sequences is 60.4% at the nucleotide level and 52.1% at the amino acid level. Quantitative real-time RT-PCR analysis showed that these two genes have a similar expression pattern in different tissues. Transgenes allowing the production of hairpin RNAs of plum eIF4E or eIF(iso)4E were introduced into plum via Agrobacterium-mediated transformation. Gene expression analysis confirmed specific reduced expression of eIF4E or eIF(iso)4E in the transgenic lines and this was associated with the accumulation of siRNAs. Transgenic plants were challenged with PPV-D strain and resistance was evaluated by measuring the concentration of viral RNA. Eighty-two percent of the eIF(iso)4E silenced transgenic plants were resistant to PPV, while eIF4E silenced transgenic plants did not show PPV resistance. Physical interaction between PPV-VPg and plum eIF(iso)4E was confirmed. In contrast, no PPV-VPg/eIF4E interaction was observed. These results indicate that eIF(iso)4E is involved in PPV infection in plum, and that silencing of eIF(iso)4E expression can lead to PPV resistance in Prunus species.",
"title": ""
},
{
"docid": "60bdd255a19784ed2d19550222e61b69",
"text": "Haptic feedback on touch-sensitive displays provides significant benefits in terms of reducing error rates, increasing interaction speed and minimizing visual distraction. This particularly holds true for multitasking situations such as the interaction with mobile devices or touch-based in-vehicle systems. In this paper, we explore how the interaction with tactile touchscreens can be modeled and enriched using a 2+1 state transition model. The model expands an approach presented by Buxton. We present HapTouch -- a force-sensitive touchscreen device with haptic feedback that allows the user to explore and manipulate interactive elements using the sense of touch. We describe the results of a preliminary quantitative study to investigate the effects of tactile feedback on the driver's visual attention, driving performance and operating error rate. In particular, we focus on how active tactile feedback allows the accurate interaction with small on-screen elements during driving. Our results show significantly reduced error rates and input time when haptic feedback is given.",
"title": ""
},
{
"docid": "050540ce54975f34b752ddb25b001bf4",
"text": "This paper describes a custom Low Power Motor Controller (LoPoMoCo) that was developed for a 34-axis robot system currently being designed for Minimally Invasive Surgery (MIS) of the upper airways. The robot system will includes three robot arms equipped with small snake-like mechanisms, which challenge the controller design due to their requirement for precise sensing and control of low motor currents. The controller hardware also provides accurate velocity estimate from incremental encoder feedback and can selectively be operated in speed or torque control mode. The experimental results demonstrate that the controller can measure applied loads with a resolution of , even though the transmission is nonbackdriveable. Although the controller was designed for this particular robot, it is applicable to other systems requiring torque monitoring capabilities.",
"title": ""
},
{
"docid": "a48278ee8a21a33ff87b66248c6b0b8a",
"text": "We describe a unified multi-turn multi-task spoken language understanding (SLU) solution capable of handling multiple context sensitive classification (intent determination) and sequence labeling (slot filling) tasks simultaneously. The proposed architecture is based on recurrent convolutional neural networks (RCNN) with shared feature layers and globally normalized sequence modeling components. The temporal dependencies within and across different tasks are encoded succinctly as recurrent connections. The dialog system responses beyond SLU component are also exploited as effective external features. We show with extensive experiments on a number of datasets that the proposed joint learning framework generates state-of-the-art results for both classification and tagging, and the contextual modeling based on recurrent and external features significantly improves the context sensitivity of SLU models.",
"title": ""
},
{
"docid": "6aee20acd54b5d6f2399106075c9fee1",
"text": "BACKGROUND\nThe aim of this study was to compare the effectiveness of the ampicillin plus ceftriaxone (AC) and ampicillin plus gentamicin (AG) combinations for treating Enterococcus faecalis infective endocarditis (EFIE).\n\n\nMETHODS\nAn observational, nonrandomized, comparative multicenter cohort study was conducted at 17 Spanish and 1 Italian hospitals. Consecutive adult patients diagnosed of EFIE were included. Outcome measurements were death during treatment and at 3 months of follow-up, adverse events requiring treatment withdrawal, treatment failure requiring a change of antimicrobials, and relapse.\n\n\nRESULTS\nA larger percentage of AC-treated patients (n = 159) had previous chronic renal failure than AG-treated patients (n = 87) (33% vs 16%, P = .004), and AC patients had a higher incidence of cancer (18% vs 7%, P = .015), transplantation (6% vs 0%, P = .040), and healthcare-acquired infection (59% vs 40%, P = .006). Between AC and AG-treated EFIE patients, there were no differences in mortality while on antimicrobial treatment (22% vs 21%, P = .81) or at 3-month follow-up (8% vs 7%, P = .72), in treatment failure requiring a change in antimicrobials (1% vs 2%, P = .54), or in relapses (3% vs 4%, P = .67). However, interruption of antibiotic treatment due to adverse events was much more frequent in AG-treated patients than in those receiving AC (25% vs 1%, P < .001), mainly due to new renal failure (≥25% increase in baseline creatinine concentration; 23% vs 0%, P < .001).\n\n\nCONCLUSIONS\nAC appears as effective as AG for treating EFIE patients and can be used with virtually no risk of renal failure and regardless of the high-level aminoglycoside resistance status of E. faecalis.",
"title": ""
},
{
"docid": "65af21566422d9f0a11f07d43d7ead13",
"text": "Scene labeling is a challenging computer vision task. It requires the use of both local discriminative features and global context information. We adopt a deep recurrent convolutional neural network (RCNN) for this task, which is originally proposed for object recognition. Different from traditional convolutional neural networks (CNN), this model has intra-layer recurrent connections in the convolutional layers. Therefore each convolutional layer becomes a two-dimensional recurrent neural network. The units receive constant feed-forward inputs from the previous layer and recurrent inputs from their neighborhoods. While recurrent iterations proceed, the region of context captured by each unit expands. In this way, feature extraction and context modulation are seamlessly integrated, which is different from typical methods that entail separate modules for the two steps. To further utilize the context, a multi-scale RCNN is proposed. Over two benchmark datasets, Standford Background and Sift Flow, the model outperforms many state-of-the-art models in accuracy and efficiency.",
"title": ""
},
{
"docid": "101bcd956dcdb0fff3ecf78aa841314a",
"text": "HCI research has increasingly examined how sensing technologies can help people capture and visualize data about their health-related behaviors. Yet, few systems help people reflect more fundamentally on the factors that influence behaviors such as physical activity (PA). To address this research gap, we take a novel approach, examining how such reflections can be stimulated through a medium that generations of families have used for reflection and teaching: storytelling. Through observations and interviews, we studied how 13 families interacted with a low-fidelity prototype, and their attitudes towards this tool. Our prototype used storytelling and interactive prompts to scaffold reflection on factors that impact children's PA. We contribute to HCI research by characterizing how families interacted with a story-driven reflection tool, and how such a tool can encourage critical processes for behavior change. Informed by the Transtheoretical Model, we present design implications for reflective informatics systems.",
"title": ""
},
{
"docid": "378c3b785db68bd5efdf1ad026c901ea",
"text": "Intrinsically switched tunable filters are switched on and off using the tuning elements that tune their center frequencies and/or bandwidths, without requiring an increase in the tuning range of the tuning elements. Because external RF switches are not needed, substantial improvements in insertion loss, linearity, dc power consumption, control complexity, size, and weight are possible compared to conventional approaches. An intrinsically switched varactor-tuned bandstop filter and bandpass filter bank are demonstrated here for the first time. The intrinsically switched bandstop filter prototype has a second-order notch response with more than 50 dB of rejection continuously tunable from 665 to 1000 MHz (50%) with negligible passband ripple in the intrinsic off state. The intrinsically switched tunable bandpass filter bank prototype, comprised of three third-order bandpass filters, has a constant 50-MHz bandwidth response continuously tunable from 740 to 1644 MHz (122%) with less than 5 dB of passband insertion loss and more than 40 dB of isolation between bands.",
"title": ""
}
] | scidocsrr |
32a2c438985fb1e2c9d3e19b35a3da50 | Stochastic properties of the random waypoint mobility model: epoch length, direction distribution, and cell change rate | [
{
"docid": "d339f7d94334a2ccc256c29c63fd936f",
"text": "The random waypoint model is a frequently used mobility model for simulation–based studies of wireless ad hoc networks. This paper investigates the spatial node distribution that results from using this model. We show and interpret simulation results on a square and circular system area, derive an analytical expression of the expected node distribution in one dimension, and give an approximation for the two–dimensional case. Finally, the concept of attraction areas and a modified random waypoint model, the random borderpoint model, is analyzed by simulation.",
"title": ""
}
] | [
{
"docid": "9a8133fbfe2c9422b6962dd88505a9e9",
"text": "The amino acid sequences of 301 glycosyl hydrolases and related enzymes have been compared. A total of 291 sequences corresponding to 39 EC entries could be classified into 35 families. Only ten sequences (less than 5% of the sample) could not be assigned to any family. With the sequences available for this analysis, 18 families were found to be monospecific (containing only one EC number) and 17 were found to be polyspecific (containing at least two EC numbers). Implications on the folding characteristics and mechanism of action of these enzymes and on the evolution of carbohydrate metabolism are discussed. With the steady increase in sequence and structural data, it is suggested that the enzyme classification system should perhaps be revised.",
"title": ""
},
{
"docid": "3b8e716e658176cebfbdb313c8cb22ac",
"text": "To realize the vision of Internet-of-Things (IoT), numerous IoT devices have been developed for improving daily lives, in which smart home devices are among the most popular ones. Smart locks rely on smartphones to ease the burden of physical key management and keep tracking the door opening/close status, the security of which have aroused great interests from the security community. As security is of utmost importance for the IoT environment, we try to investigate the security of IoT by examining smart lock security. Specifically, we focus on analyzing the security of August smart lock. The threat models are illustrated for attacking August smart lock. We then demonstrate several practical attacks based on the threat models toward August smart lock including handshake key leakage, owner account leakage, personal information leakage, and denial-of-service (DoS) attacks. We also propose the corresponding defense methods to counteract these attacks.",
"title": ""
},
{
"docid": "8fd97add7e3b48bad9fd82dc01422e59",
"text": "Anaerobic nitrate-dependent Fe(II) oxidation is widespread in various environments and is known to be performed by both heterotrophic and autotrophic microorganisms. Although Fe(II) oxidation is predominantly biological under acidic conditions, to date most of the studies on nitrate-dependent Fe(II) oxidation were from environments of circumneutral pH. The present study was conducted in Lake Grosse Fuchskuhle, a moderately acidic ecosystem receiving humic acids from an adjacent bog, with the objective of identifying, characterizing and enumerating the microorganisms responsible for this process. The incubations of sediment under chemolithotrophic nitrate-dependent Fe(II)-oxidizing conditions have shown the enrichment of TM3 group of uncultured Actinobacteria. A time-course experiment done on these Actinobacteria showed a consumption of Fe(II) and nitrate in accordance with the expected stoichiometry (1:0.2) required for nitrate-dependent Fe(II) oxidation. Quantifications done by most probable number showed the presence of 1 × 104 autotrophic and 1 × 107 heterotrophic nitrate-dependent Fe(II) oxidizers per gram fresh weight of sediment. The analysis of microbial community by 16S rRNA gene amplicon pyrosequencing showed that these actinobacterial sequences correspond to ∼0.6% of bacterial 16S rRNA gene sequences. Stable isotope probing using 13CO2 was performed with the lake sediment and showed labeling of these Actinobacteria. This indicated that they might be important autotrophs in this environment. Although these Actinobacteria are not dominant members of the sediment microbial community, they could be of functional significance due to their contribution to the regeneration of Fe(III), which has a critical role as an electron acceptor for anaerobic microorganisms mineralizing sediment organic matter. To the best of our knowledge this is the first study to show the autotrophic nitrate-dependent Fe(II)-oxidizing nature of TM3 group of uncultured Actinobacteria.",
"title": ""
},
{
"docid": "2601ff3b4af85883017d8fb7e28e5faa",
"text": "The heterogeneous nature of the applications, technologies and equipment that today's networks have to support has made the management of such infrastructures a complex task. The Software-Defined Networking (SDN) paradigm has emerged as a promising solution to reduce this complexity through the creation of a unified control plane independent of specific vendor equipment. However, designing a SDN-based solution for network resource management raises several challenges as it should exhibit flexibility, scalability and adaptability. In this paper, we present a new SDN-based management and control framework for fixed backbone networks, which provides support for both static and dynamic resource management applications. The framework consists of three layers which interact with each other through a set of interfaces. We develop a placement algorithm to determine the allocation of managers and controllers in the proposed distributed management and control layer. We then show how this layer can satisfy the requirements of two specific applications for adaptive load-balancing and energy management purposes.",
"title": ""
},
{
"docid": "73d3c622c98fba72ae2156df52c860d3",
"text": "We suggest analyzing neural networks through the prism of space constraints. We observe that most training algorithms applied in practice use bounded memory, which enables us to use a new notion introduced in the study of spacetime tradeoffs that we call mixing complexity. This notion was devised in order to measure the (in)ability to learn using a bounded-memory algorithm. In this paper we describe how we use mixing complexity to obtain new results on what can and cannot be learned using neural networks.",
"title": ""
},
{
"docid": "bd376c939a5935838cbec64c55ff88ee",
"text": "We consider the problem of autonomous navigation in an unstr ctu ed outdoor environment. The goal is for a small outdoor robot to come into a ne w area, learn about and map its environment, and move to a given goal at modest spe ed (1 m/s). This problem is especially difficult in outdoor, off-road enviro nments, where tall grass, shadows, deadfall, and other obstacles predominate. Not su rpri ingly, the biggest challenge is acquiring and using a reliable map of the new are a. Although work in outdoor navigation has preferentially used laser rangefi d rs [13, 2, 6], we use stereo vision as the main sensor. Vision sensors allow us to u e more distant objects as landmarks for navigation, and to learn and use color and te xture models of the environment, in looking further ahead than is possible with range sensors alone. In this paper we show how to build a consistent, globally corr ect map in real time, using a combination of the following vision-based tec hniques:",
"title": ""
},
{
"docid": "81840452c52d61024ba5830437e6a2c4",
"text": "Motivated by a real world application, we study the multiple knapsack problem with assignment restrictions (MKAR). We are given a set of items, each with a positive real weight, and a set of knapsacks, each with a positive real capacity. In addition, for each item a set of knapsacks that can hold that item is specified. In a feasible assignment of items to knapsacks, each item is assigned to at most one knapsack, assignment restrictions are satisfied, and knapsack capacities are not exceeded. We consider the objectives of maximizing assigned weight and minimizing utilized capacity. We focus on obtaining approximate solutions in polynomial computational time. We show that simple greedy approaches yield 1/3-approximation algorithms for the objective of maximizing assigned weight. We give two different 1/2-approximation algorithms: the first one solves single knapsack problems successively and the second one is based on rounding the LP relaxation solution. For the bicriteria problem of minimizing utilized capacity subject to a minimum requirement on assigned weight, we give an (1/3,2)-approximation algorithm.",
"title": ""
},
{
"docid": "e0f797ff66a81b88bbc452e86864d7bc",
"text": "A key challenge in radar micro-Doppler classification is the difficulty in obtaining a large amount of training data due to costs in time and human resources. Small training datasets limit the depth of deep neural networks (DNNs), and, hence, attainable classification accuracy. In this work, a novel method for diversifying Kinect-based motion capture (MOCAP) simulations of human micro-Doppler to span a wider range of potential observations, e.g. speed, body size, and style, is proposed. By applying three transformations, a small set of MOCAP measurements is expanded to generate a large training dataset for network initialization of a 30-layer deep residual neural network. Results show that the proposed training methodology and residual DNN yield improved bottleneck feature performance and the highest overall classification accuracy among other DNN architectures, including transfer learning from the 1.5 million sample ImageNet database.",
"title": ""
},
{
"docid": "89513d2cf137e60bf7f341362de2ba84",
"text": "In this paper, we present a visual analytics approach that provides decision makers with a proactive and predictive environment in order to assist them in making effective resource allocation and deployment decisions. The challenges involved with such predictive analytics processes include end-users' understanding, and the application of the underlying statistical algorithms at the right spatiotemporal granularity levels so that good prediction estimates can be established. In our approach, we provide analysts with a suite of natural scale templates and methods that enable them to focus and drill down to appropriate geospatial and temporal resolution levels. Our forecasting technique is based on the Seasonal Trend decomposition based on Loess (STL) method, which we apply in a spatiotemporal visual analytics context to provide analysts with predicted levels of future activity. We also present a novel kernel density estimation technique we have developed, in which the prediction process is influenced by the spatial correlation of recent incidents at nearby locations. We demonstrate our techniques by applying our methodology to Criminal, Traffic and Civil (CTC) incident datasets.",
"title": ""
},
{
"docid": "f0efa93a150ca1be1351277ea30e370b",
"text": "We describe an effort to train a RoboCup soccer-playing agent playing in the Simulation League using casebased reasoning. The agent learns (builds a case base) by observing the behaviour of existing players and determining the spatial configuration of the objects the existing players pay attention to. The agent can then use the case base to determine what actions it should perform given similar spatial configurations. When observing a simple goal-driven, rule-based, stateless agent, the trained player appears to imitate the behaviour of the original and experimental results confirm the observed behaviour. The process requires little human intervention and can be used to train agents exhibiting diverse behaviour in an automated manner.",
"title": ""
},
{
"docid": "15cde62b96f8c87bedb6f721befa3ae4",
"text": "To investigate the dispersion mechanism(s) of ternary dry powder inhaler (DPI) formulations by comparison of the interparticulate adhesions and in vitro performance of a number of carrier–drug–fines combinations. The relative levels of adhesion and cohesion between a lactose carrier and a number of drugs and fine excipients were quantified using the cohesion–adhesion balance (CAB) approach to atomic force microscopy. The in vitro performance of formulations produced using these materials was quantified and the particle size distribution of the aerosol clouds produced from these formulations determined by laser diffraction. Comparison between CAB ratios and formulation performance suggested that the improvement in performance brought about by the addition of fines to which the drug was more adhesive than cohesive might have been due to the formation of agglomerates of drug and fines particles. This was supported by aerosol cloud particle size data. The mechanism(s) underlying the improved performance of ternary formulations where the drug was more cohesive than adhesive to the fines was unclear. The performance of ternary DPI formulations might be increased by the preferential formation of drug–fines agglomerates, which might be subject to greater deagglomeration forces during aerosolisation than smaller agglomerates, thus producing better formulation performance.",
"title": ""
},
{
"docid": "c3838ee9c296364d2bea785556dfd2fb",
"text": "Empirical validation of software metrics suites to predict fault proneness in object-oriented (OO) components is essential to ensure their practical use in industrial settings. In this paper, we empirically validate three OO metrics suites for their ability to predict software quality in terms of fault-proneness: the Chidamber and Kemerer (CK) metrics, Abreu's Metrics for Object-Oriented Design (MOOD), and Bansiya and Davis' Quality Metrics for Object-Oriented Design (QMOOD). Some CK class metrics have previously been shown to be good predictors of initial OO software quality. However, the other two suites have not been heavily validated except by their original proposers. Here, we explore the ability of these three metrics suites to predict fault-prone classes using defect data for six versions of Rhino, an open-source implementation of JavaScript written in Java. We conclude that the CK and QMOOD suites contain similar components and produce statistical models that are effective in detecting error-prone classes. We also conclude that the class components in the MOOD metrics suite are not good class fault-proneness predictors. Analyzing multivariate binary logistic regression models across six Rhino versions indicates these models may be useful in assessing quality in OO classes produced using modern highly iterative or agile software development processes.",
"title": ""
},
{
"docid": "c11e1e156835d98707c383711f4e3953",
"text": "We present an approach for automatically generating provably correct abstractions from C source code that are useful for practical implementation verification. The abstractions are easier for a human verification engineer to reason about than the implementation and increase the productivity of interactive code proof. We guarantee soundness by automatically generating proofs that the abstractions are correct.\n In particular, we show two key abstractions that are critical for verifying systems-level C code: automatically turning potentially overflowing machine-word arithmetic into ideal integers, and transforming low-level C pointer reasoning into separate abstract heaps. Previous work carrying out such transformations has either done so using unverified translations, or required significant proof engineering effort.\n We implement these abstractions in an existing proof-producing specification transformation framework named AutoCorres, developed in Isabelle/HOL, and demonstrate its effectiveness in a number of case studies. We show scalability on multiple OS microkernels, and we show how our changes to AutoCorres improve productivity for total correctness by porting an existing high-level verification of the Schorr-Waite algorithm to a low-level C implementation with minimal effort.",
"title": ""
},
{
"docid": "be220ab28653645e5186a8cefc120215",
"text": "OBJECTIVE\nBoluses are used in high-energy radiotherapy in order to overcome the skin sparing effect. In practice though, commonly used flat boluses fail to make a perfect contact with the irregular surface of the patient's skin, resulting in air gaps. Hence, we fabricated a customized bolus using a 3-dimensional (3D) printer and evaluated its feasibility for radiotherapy.\n\n\nMETHODS\nWe designed two kinds of bolus for production on a 3D printer, one of which was the 3D printed flat bolus for the Blue water phantom and the other was a 3D printed customized bolus for the RANDO phantom. The 3D printed flat bolus was fabricated to verify its physical quality. The resulting 3D printed flat bolus was evaluated by assessing dosimetric parameters such as D1.5 cm, D5 cm, and D10 cm. The 3D printed customized bolus was then fabricated, and its quality and clinical feasibility were evaluated by visual inspection and by assessing dosimetric parameters such as Dmax, Dmin, Dmean, D90%, and V90%.\n\n\nRESULTS\nThe dosimetric parameters of the resulting 3D printed flat bolus showed that it was a useful dose escalating material, equivalent to a commercially available flat bolus. Analysis of the dosimetric parameters of the 3D printed customized bolus demonstrated that it is provided good dose escalation and good contact with the irregular surface of the RANDO phantom.\n\n\nCONCLUSIONS\nA customized bolus produced using a 3D printer could potentially replace commercially available flat boluses.",
"title": ""
},
{
"docid": "07295446da02d11750e05f496be44089",
"text": "As robots become more ubiquitous and capable of performing complex tasks, the importance of enabling untrained users to interact with them has increased. In response, unconstrained natural-language interaction with robots has emerged as a significant research area. We discuss the problem of parsing natural language commands to actions and control structures that can be readily implemented in a robot execution system. Our approach learns a parser based on example pairs of English commands and corresponding control language expressions. We evaluate this approach in the context of following route instructions through an indoor environment, and demonstrate that our system can learn to translate English commands into sequences of desired actions, while correctly capturing the semantic intent of statements involving complex control structures. The procedural nature of our formal representation allows a robot to interpret route instructions online while moving through a previously unknown environment. 1 Motivation and Problem Statement In this paper, we discuss our work on grounding natural language–interpreting human language into semantically informed structures in the context of robotic perception and actuation. To this end, we explore the question of interpreting natural language commands so they can be executed by a robot, specifically in the context of following route instructions through a map. Natural language (NL) is a rich, intuitive mechanism by which humans can interact with systems around them, offering sufficient signal to support robot task planning. Human route instructions include complex language constructs, which robots must be able to execute without being given a fully specified world model such as a map. Our goal is to investigate whether it is possible to learn a parser that produces · All authors are affiliated with the University of Washington, Seattle, USA. · Email: {cynthia,eherbst,lsz,fox}@cs.washington.edu",
"title": ""
},
{
"docid": "0c177af9c2fffa6c4c667d1b4a4d3d79",
"text": "In the last decade, a large number of different software component models have been developed, with different aims and using different principles and technologies. This has resulted in a number of models which have many similarities, but also principal differences, and in many cases unclear concepts. Component-based development has not succeeded in providing standard principles, as has, for example, object-oriented development. In order to increase the understanding of the concepts and to differentiate component models more easily, this paper identifies, discusses, and characterizes fundamental principles of component models and provides a Component Model Classification Framework based on these principles. Further, the paper classifies a large number of component models using this framework.",
"title": ""
},
{
"docid": "4b7eb2b8f4d4ec135ab1978b4811eca4",
"text": "This paper focuses on the problem of vision-based obstacle detection and tracking for unmanned aerial vehicle navigation. A real-time object localization and tracking strategy from monocular image sequences is developed by effectively integrating the object detection and tracking into a dynamic Kalman model. At the detection stage, the object of interest is automatically detected and localized from a saliency map computed via the image background connectivity cue at each frame; at the tracking stage, a Kalman filter is employed to provide a coarse prediction of the object state, which is further refined via a local detector incorporating the saliency map and the temporal information between two consecutive frames. Compared with existing methods, the proposed approach does not require any manual initialization for tracking, runs much faster than the state-of-the-art trackers of its kind, and achieves competitive tracking performance on a large number of image sequences. Extensive experiments demonstrate the effectiveness and superior performance of the proposed approach.",
"title": ""
},
{
"docid": "7462d739a80bf654d6f9df78b4a6e6e3",
"text": "Multi-class pattern classification has many applications including text document classification, speech recognition, object recognition, etc. Multi-class pattern classification using neural networks is not a trivial extension from two-class neural networks. This paper presents a comprehensive and competitive study in multi-class neural learning with focuses on issues including neural network architecture, encoding schemes, training methodology and training time complexity. Our study includes multi-class pattern classification using either a system of multiple neural networks or a single neural network, and modeling pattern classes using one-against-all, one-against-one, one-againsthigher-order, and P-against-Q. We also discuss implementations of these approaches and analyze training time complexity associated with each approach. We evaluate six different neural network system architectures for multi-class pattern classification along the dimensions of imbalanced data, large number of pattern classes, large vs. small training data through experiments conducted on well-known benchmark data. 2006 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "df114f9765d4c0bba7371c243bad8608",
"text": "CAPTCHAs are automated tests to tell computers and humans apart. They are designed to be easily solvable by humans, but unsolvable by machines. With Convolutional Neural Networks these tests can also be solved automatically. However, the strength of CNNs relies on the training data that the classifier is learnt on and especially on the size of the training set. Hence, it is intractable to solve the problem with CNNs in case of insufficient training data. We propose an Active Deep Learning strategy that makes use of the ability to gain new training data for free without any human intervention which is possible in the special case of CAPTCHAs. We discuss how to choose the new samples to re-train the network and present results on an auto-generated CAPTCHA dataset. Our approach dramatically improves the performance of the network if we initially have only few labeled training data.",
"title": ""
}
] | scidocsrr |
513ecae3dde0ac74c17e01d0aad02629 | Automatic program repair with evolutionary computation | [
{
"docid": "c15492fea3db1af99bc8a04bdff71fdc",
"text": "The high cost of locating faults in programs has motivated the development of techniques that assist in fault localization by automating part of the process of searching for faults. Empirical studies that compare these techniques have reported the relative effectiveness of four existing techniques on a set of subjects. These studies compare the rankings that the techniques compute for statements in the subject programs and the effectiveness of these rankings in locating the faults. However, it is unknown how these four techniques compare with Tarantula, another existing fault-localization technique, although this technique also provides a way to rank statements in terms of their suspiciousness. Thus, we performed a study to compare the Tarantula technique with the four techniques previously compared. This paper presents our study---it overviews the Tarantula technique along with the four other techniques studied, describes our experiment, and reports and discusses the results. Our studies show that, on the same set of subjects, the Tarantula technique consistently outperforms the other four techniques in terms of effectiveness in fault localization, and is comparable in efficiency to the least expensive of the other four techniques.",
"title": ""
},
{
"docid": "552545ea9de47c26e1626efc4a0f201e",
"text": "For centuries, scientists have attempted to identify and document analytical laws that underlie physical phenomena in nature. Despite the prevalence of computing power, the process of finding natural laws and their corresponding equations has resisted automation. A key challenge to finding analytic relations automatically is defining algorithmically what makes a correlation in observed data important and insightful. We propose a principle for the identification of nontriviality. We demonstrated this approach by automatically searching motion-tracking data captured from various physical systems, ranging from simple harmonic oscillators to chaotic double-pendula. Without any prior knowledge about physics, kinematics, or geometry, the algorithm discovered Hamiltonians, Lagrangians, and other laws of geometric and momentum conservation. The discovery rate accelerated as laws found for simpler systems were used to bootstrap explanations for more complex systems, gradually uncovering the \"alphabet\" used to describe those systems.",
"title": ""
},
{
"docid": "f8742208fef05beb86d77f1d5b5d25ef",
"text": "The latest book on Genetic Programming, Poli, Langdon and McPhee’s (with contributions from John R. Koza) A Field Guide to Genetic Programming represents an exciting landmark with the authors choosing to make their work freely available by publishing using a form of the Creative Commons License[1]. In so doing they have created a must-read resource which is, to use their words, ’aimed at both newcomers and old-timers’. The book is freely available from the authors companion website [2] and Lulu.com [3] in both pdf and html form. For those who desire the more traditional page turning exercise, inexpensive printed copies can be ordered from Lulu.com. The Field Guides companion website also provides a link to the TinyGP code printed over eight pages of Appendix B, and a Discussion Group centered around the book. The book is divided into four parts with fourteen chapters and two appendices. Part I introduces the basics of Genetic Programming, Part II overviews more advanced topics, Part III highlights some of the real world applications and discusses issues facing the GP researcher or practitioner, while Part IV contains two appendices, the first introducing some key resources and the second appendix describes the TinyGP code. The pdf and html forms of the book have an especially useful feature, providing links to the articles available on-line at the time of publication, and to bibtex entries of the GP Bibliography. Following an overview of the book in chapter 1, chapter 2 introduces the basic concepts of GP focusing on the tree representation, initialisation, selection, and the search operators. Chapter 3 is centered around the preparatory steps in applying GP to a problem, which is followed by an outline of a sample run of GP on a simple instance of symbolic regression in Chapter 4. Overall these chapters provide a compact and useful introduction to GP. The first of the Advanced GP chapters in Part II looks at alternative strategies for initialisation and the search operators for tree-based GP. An overview of Modular, Grammatical and Developmental GP is provided in Chapter 6. While the chapter title",
"title": ""
},
{
"docid": "2b471e61a6b95221d9ca9c740660a726",
"text": "We propose a low-overhead sampling infrastructure for gathering information from the executions experienced by a program's user community. Several example applications illustrate ways to use sampled instrumentation to isolate bugs. Assertion-dense code can be transformed to share the cost of assertions among many users. Lacking assertions, broad guesses can be made about predicates that predict program errors and a process of elimination used to whittle these down to the true bug. Finally, even for non-deterministic bugs such as memory corruption, statistical modeling based on logistic regression allows us to identify program behaviors that are strongly correlated with failure and are therefore likely places to look for the error.",
"title": ""
}
] | [
{
"docid": "11a28e11ba6e7352713b8ee63291cd9c",
"text": "This review focuses on discussing the main changes on the upcoming fourth edition of the WHO Classification of Tumors of the Pituitary Gland emphasizing histopathological and molecular genetics aspects of pituitary neuroendocrine (i.e., pituitary adenomas) and some of the non-neuroendocrine tumors involving the pituitary gland. Instead of a formal review, we introduced the highlights of the new WHO classification by answering select questions relevant to practising pathologists. The revised classification of pituitary adenomas, in addition to hormone immunohistochemistry, recognizes the role of other immunohistochemical markers including but not limited to pituitary transcription factors. Recognizing this novel approach, the fourth edition of the WHO classification has abandoned the concept of \"a hormone-producing pituitary adenoma\" and adopted a pituitary adenohypophyseal cell lineage designation of the adenomas with subsequent categorization of histological variants according to hormone content and specific histological and immunohistochemical features. This new classification does not require a routine ultrastructural examination of these tumors. The new definition of the Null cell adenoma requires the demonstration of immunonegativity for pituitary transcription factors and adenohypophyseal hormones Moreover, the term of atypical pituitary adenoma is no longer recommended. In addition to the accurate tumor subtyping, assessment of the tumor proliferative potential by mitotic count and Ki-67 index, and other clinical parameters such as tumor invasion, is strongly recommended in individual cases for consideration of clinically aggressive adenomas. This classification also recognizes some subtypes of pituitary neuroendocrine tumors as \"high-risk pituitary adenomas\" due to the clinical aggressive behavior; these include the sparsely granulated somatotroph adenoma, the lactotroph adenoma in men, the Crooke's cell adenoma, the silent corticotroph adenoma, and the newly introduced plurihormonal Pit-1-positive adenoma (previously known as silent subtype III pituitary adenoma). An additional novel aspect of the new WHO classification was also the definition of the spectrum of thyroid transcription factor-1 expressing pituitary tumors of the posterior lobe as representing a morphological spectrum of a single nosological entity. These tumors include the pituicytoma, the spindle cell oncocytoma, the granular cell tumor of the neurohypophysis, and the sellar ependymoma.",
"title": ""
},
{
"docid": "054b5be56ae07c58b846cf59667734fc",
"text": "Optical motion capture systems have become a widely used technology in various fields, such as augmented reality, robotics, movie production, etc. Such systems use a large number of cameras to triangulate the position of optical markers. The marker positions are estimated with high accuracy. However, especially when tracking articulated bodies, a fraction of the markers in each timestep is missing from the reconstruction. In this paper, we propose to use a neural network approach to learn how human motion is temporally and spatially correlated, and reconstruct missing markers positions through this model. We experiment with two different models, one LSTM-based and one time-window-based. Both methods produce state-of-the-art results, while working online, as opposed to most of the alternative methods, which require the complete sequence to be known. The implementation is publicly available at https://github.com/Svitozar/NN-for-Missing-Marker-Reconstruction.",
"title": ""
},
{
"docid": "0f25a4cd8a0a94f6666caadb6d4be3d3",
"text": "The tradeoff between the switching energy and electro-thermal robustness is explored for 1.2-kV SiC MOSFET, silicon power MOSFET, and 900-V CoolMOS body diodes at different temperatures. The maximum forward current for dynamic avalanche breakdown is decreased with increasing supply voltage and temperature for all technologies. The CoolMOS exhibited the largest latch-up current followed by the SiC MOSFET and silicon power MOSFET; however, when expressed as current density, the SiC MOSFET comes first followed by the CoolMOS and silicon power MOSFET. For the CoolMOS, the alternating p and n pillars of the superjunctions in the drift region suppress BJT latch-up during reverse recovery by minimizing lateral currents and providing low-resistance paths for carriers. Hence, the temperature dependence of the latch-up current for CoolMOS was the lowest. The switching energy of the CoolMOS body diode is the largest because of its superjunction architecture which means the drift region have higher doping, hence more reverse charge. In spite of having a higher thermal resistance, the SiC MOSFET has approximately the same latch-up current while exhibiting the lowest switching energy because of the least reverse charge. The silicon power MOSFET exhibits intermediate performance on switching energy with lowest dynamic latching current.",
"title": ""
},
{
"docid": "c02fb121399e1ed82458fb62179d2560",
"text": "Most coreference resolution models determine if two mentions are coreferent using a single function over a set of constraints or features. This approach can lead to incorrect decisions as lower precision features often overwhelm the smaller number of high precision ones. To overcome this problem, we propose a simple coreference architecture based on a sieve that applies tiers of deterministic coreference models one at a time from highest to lowest precision. Each tier builds on the previous tier’s entity cluster output. Further, our model propagates global information by sharing attributes (e.g., gender and number) across mentions in the same cluster. This cautious sieve guarantees that stronger features are given precedence over weaker ones and that each decision is made using all of the information available at the time. The framework is highly modular: new coreference modules can be plugged in without any change to the other modules. In spite of its simplicity, our approach outperforms many state-of-the-art supervised and unsupervised models on several standard corpora. This suggests that sievebased approaches could be applied to other NLP tasks.",
"title": ""
},
{
"docid": "44e7ba0be5275047587e9afd22f1de2a",
"text": "Dialogue state tracking plays an important role in statistical dialogue management. Domain-independent rule-based approaches are attractive due to their efficiency, portability and interpretability. However, recent rule-based models are still not quite competitive to statistical tracking approaches. In this paper, a novel framework is proposed to formulate rule-based models in a general way. In the framework, a rule is considered as a special kind of polynomial function satisfying certain linear constraints. Under some particular definitions and assumptions, rule-based models can be seen as feasible solutions of an integer linear programming problem. Experiments showed that the proposed approach can not only achieve competitive performance compared to statistical approaches, but also have good generalisation ability. It is one of the only two entries that outperformed all the four baselines in the third Dialog State Tracking Challenge.",
"title": ""
},
{
"docid": "42bc10578e76a0d006ee5d11484b1488",
"text": "In this paper, we present a wrapper-based acoustic group feature selection system for the INTERSPEECH 2015 Computational Paralinguistics Challenge (ComParE) 2015, Eating Condition (EC) Sub-challenge. The wrapper-based method has two components: the feature subset evaluation and the feature space search. The feature subset evaluation is performed using Support Vector Machine (SVM) classifiers. The wrapper method combined with complex algorithms such as SVM is computationally intensive. To address this, the feature space search uses Best Incremental Ranked Subset (BIRS), a fast and efficient algorithm. Moreover, we investigate considering the feature space in meaningful groups rather than individually. The acoustic feature space is partitioned into groups with each group representing a Low Level Descriptor (LLD). This partitioning reduces the time complexity of the search algorithm and makes the problem more tractable while attempting to gain insight into the relevant acoustic feature groups. Our wrapper-based system achieves improvement over the challenge baseline on the EC Sub-challenge test set using a variant of BIRS algorithm and LLD groups.",
"title": ""
},
{
"docid": "9f32b1e95e163c96ebccb2596a2edb8d",
"text": "This paper is devoted to the control of a cable driven redundant parallel manipulator, which is a challenging problem due the optimal resolution of its inherent redundancy. Additionally to complicated forward kinematics, having a wide workspace makes it difficult to directly measure the pose of the end-effector. The goal of the controller is trajectory tracking in a large and singular free workspace, and to guarantee that the cables are always under tension. A control topology is proposed in this paper which is capable to fulfill the stringent positioning requirements for these type of manipulators. Closed-loop performance of various control topologies are compared by simulation of the closed-loop dynamics of the KNTU CDRPM, while the equations of parallel manipulator dynamics are implicit in structure and only special integration routines can be used for their integration. It is shown that the proposed joint space controller is capable to satisfy the required tracking performance, despite the inherent limitation of task space pose measurement.",
"title": ""
},
{
"docid": "4bf6c59cdd91d60cf6802ae99d84c700",
"text": "This paper describes a network storage system, called Venti, intended for archival data. In this system, a unique hash of a block’s contents acts as the block identifier for read and write operations. This approach enforces a write-once policy, preventing accidental or malicious destruction of data. In addition, duplicate copies of a block can be coalesced, reducing the consumption of storage and simplifying the implementation of clients. Venti is a building block for constructing a variety of storage applications such as logical backup, physical backup, and snapshot file systems. We have built a prototype of the system and present some preliminary performance results. The system uses magnetic disks as the storage technology, resulting in an access time for archival data that is comparable to non-archival data. The feasibility of the write-once model for storage is demonstrated using data from over a decade’s use of two Plan 9 file systems.",
"title": ""
},
{
"docid": "19c5d5563e41fac1fd29833662ad0b6c",
"text": "This paper discusses our contribution to the third RTE Challenge – the SALSA RTE system. It builds on an earlier system based on a relatively deep linguistic analysis, which we complement with a shallow component based on word overlap. We evaluate their (combined) performance on various data sets. However, earlier observations that the combination of features improves the overall accuracy could be replicated only partly.",
"title": ""
},
{
"docid": "17cc2f4ae2286d36748b203492d406e6",
"text": "In this paper, we consider sentence simplification as a special form of translation with the complex sentence as the source and the simple sentence as the target. We propose a Tree-based Simplification Model (TSM), which, to our knowledge, is the first statistical simplification model covering splitting, dropping, reordering and substitution integrally. We also describe an efficient method to train our model with a large-scale parallel dataset obtained from the Wikipedia and Simple Wikipedia. The evaluation shows that our model achieves better readability scores than a set of baseline systems.",
"title": ""
},
{
"docid": "04644fb390a5d3690295551491f63167",
"text": "Massive graphs, such as online social networks and communication networks, have become common today. To efficiently analyze such large graphs, many distributed graph computing systems have been developed. These systems employ the \"think like a vertex\" programming paradigm, where a program proceeds in iterations and at each iteration, vertices exchange messages with each other. However, using Pregel's simple message passing mechanism, some vertices may send/receive significantly more messages than others due to either the high degree of these vertices or the logic of the algorithm used. This forms the communication bottleneck and leads to imbalanced workload among machines in the cluster. In this paper, we propose two effective message reduction techniques: (1)vertex mirroring with message combining, and (2)an additional request-respond API. These techniques not only reduce the total number of messages exchanged through the network, but also bound the number of messages sent/received by any single vertex. We theoretically analyze the effectiveness of our techniques, and implement them on top of our open-source Pregel implementation called Pregel+. Our experiments on various large real graphs demonstrate that our message reduction techniques significantly improve the performance of distributed graph computation.",
"title": ""
},
{
"docid": "ce3cd1edffb0754e55658daaafe18df6",
"text": "Fact finders in legal trials often need to evaluate a mass of weak, contradictory and ambiguous evidence. There are two general ways to accomplish this task: by holistically forming a coherent mental representation of the case, or by atomistically assessing the probative value of each item of evidence and integrating the values according to an algorithm. Parallel constraint satisfaction (PCS) models of cognitive coherence posit that a coherent mental representation is created by discounting contradicting evidence, inflating supporting evidence and interpreting ambivalent evidence in a way coherent with the emerging decision. This leads to inflated support for whichever hypothesis the fact finder accepts as true. Using a Bayesian network to model the direct dependencies between the evidence, the intermediate hypotheses and the main hypothesis, parameterised with (conditional) subjective probabilities elicited from the subjects, I demonstrate experimentally how an atomistic evaluation of evidence leads to a convergence of the computed posterior degrees of belief in the guilt of the defendant of those who convict and those who acquit. The atomistic evaluation preserves the inherent uncertainty that largely disappears in a holistic evaluation. Since the fact finders’ posterior degree of belief in the guilt of the defendant is the relevant standard of proof in many legal systems, this result implies that using an atomistic evaluation of evidence, the threshold level of posterior belief in guilt required for a conviction may often not be reached. ⃰ Max Planck Institute for Research on Collective Goods, Bonn",
"title": ""
},
{
"docid": "b49698c3df4e432285448103cda7f2dd",
"text": "Acoustic emission (AE)-signal-based techniques have recently been attracting researchers' attention to rotational machine health monitoring and diagnostics due to the advantages of the AE signals over the extensively used vibration signals. Unlike vibration-based methods, the AE-based techniques are in their infant stage of development. From the perspective of machine health monitoring and fault detection, developing an AE-based methodology is important. In this paper, a methodology for rotational machine health monitoring and fault detection using empirical mode decomposition (EMD)-based AE feature quantification is presented. The methodology incorporates a threshold-based denoising technique into EMD to increase the signal-to-noise ratio of the AE bursts. Multiple features are extracted from the denoised signals and then fused into a single compressed AE feature. The compressed AE features are then used for fault detection based on a statistical method. A gear fault detection case study is conducted on a notional split-torque gearbox using AE signals to demonstrate the effectiveness of the methodology. A fault detection performance comparison using the compressed AE features with the existing EMD-based AE features reported in the literature is also conducted.",
"title": ""
},
{
"docid": "7e08ddffc3a04c6dac886e14b7e93907",
"text": "The paper introduces a penalized matrix estimation procedure aiming at solutions which are sparse and low-rank at the same time. Such structures arise in the context of social networks or protein interactions where underlying graphs have adjacency matrices which are block-diagonal in the appropriate basis. We introduce a convex mixed penalty which involves `1-norm and trace norm simultaneously. We obtain an oracle inequality which indicates how the two effects interact according to the nature of the target matrix. We bound generalization error in the link prediction problem. We also develop proximal descent strategies to solve the optimization problem efficiently and evaluate performance on synthetic and real data sets.",
"title": ""
},
{
"docid": "43184dfe77050618402900bc309203d5",
"text": "A prototype of Air Gap RLSA has been designed and simulated using hybrid air gap and FR4 dielectric material. The 28% wide bandwidth has been recorded through this approach. A 12.35dBi directive gain also recorded from the simulation. The 13.3 degree beamwidth of the radiation pattern is sufficient for high directional application. Since the proposed application was for Point to Point Link, this study concluded the Air Gap RLSA is a new candidate for this application.",
"title": ""
},
{
"docid": "2488c17b39dd3904e2f17448a8519817",
"text": "Young healthy participants spontaneously use different strategies in a virtual radial maze, an adaptation of a task typically used with rodents. Functional magnetic resonance imaging confirmed previously that people who used spatial memory strategies showed increased activity in the hippocampus, whereas response strategies were associated with activity in the caudate nucleus. Here, voxel based morphometry was used to identify brain regions covarying with the navigational strategies used by individuals. Results showed that spatial learners had significantly more gray matter in the hippocampus and less gray matter in the caudate nucleus compared with response learners. Furthermore, the gray matter in the hippocampus was negatively correlated to the gray matter in the caudate nucleus, suggesting a competitive interaction between these two brain areas. In a second analysis, the gray matter of regions known to be anatomically connected to the hippocampus, such as the amygdala, parahippocampal, perirhinal, entorhinal and orbitofrontal cortices were shown to covary with gray matter in the hippocampus. Because low gray matter in the hippocampus is a risk factor for Alzheimer's disease, these results have important implications for intervention programs that aim at functional recovery in these brain areas. In addition, these data suggest that spatial strategies may provide protective effects against degeneration of the hippocampus that occurs with normal aging.",
"title": ""
},
{
"docid": "570eca9884edb7e4a03ed95763be20aa",
"text": "Gene expression is a fundamentally stochastic process, with randomness in transcription and translation leading to cell-to-cell variations in mRNA and protein levels. This variation appears in organisms ranging from microbes to metazoans, and its characteristics depend both on the biophysical parameters governing gene expression and on gene network structure. Stochastic gene expression has important consequences for cellular function, being beneficial in some contexts and harmful in others. These situations include the stress response, metabolism, development, the cell cycle, circadian rhythms, and aging.",
"title": ""
},
{
"docid": "b23d7f18a7abcaa6d3984ef7ca0609e0",
"text": "FFT algorithm is the popular software design for spectrum analyzer, but doesnpsilat work well for parallel hardware system due to complex calculation and huge memory requirement. Observing the key components of a spectrum analyzer are the intensities for respective frequencies, we propose a Goertzel algorithm to directly extract the intensity factors for respective frequency components in the input signal. Goertzel algorithm dispenses with the memory for z-1 and z-2 processing, and only needs two multipliers and three adders for real number calculation. In this paper, we present the spectrum extraction algorithm and implement a spectrum extractor with high speed and low area consumption in a FPGA (field programmable gate array) chip. It proves the feasibility of implementing a handheld concurrent multi-channel real-time spectrum analysis IP into a low gate counts and low power consumption CPLD (complex programmable logic device) chip.",
"title": ""
},
{
"docid": "1a4d07d9a48668f7fa3bcf301c25f7f2",
"text": "A novel low-loss planar dielectric waveguide is proposed. It is based on a high-permittivity dielectric slab parallel to a metal ground. The guiding channel is limited at the sides by a number of air holes which are lowering the effective permittivity. A mode with the electric field primarily parallel to the ground plane is used, similar to the E11x mode of an insulated image guide. A rather thick gap layer between the ground and the high-permittivity slab makes this mode to show the highest effective permittivity. The paper discusses the mode dispersion behaviour and presents measured characteristics of a power divider circuit operating at a frequency of about 8 GHz. Low leakage of about 14% is observed at the discontinuities forming the power divider. Using a compact dipole antenna structure, excitation efficiency of more than 90% is obtained.",
"title": ""
},
{
"docid": "3fc2ec702c66501de0eea9f5f0cac511",
"text": "Emotional eating is a change in consumption of food in response to emotional stimuli, and has been linked in negative physical and psychological outcomes. Observers have noticed over the years a correlation between emotions, mood and food choice, in ways that vary from strong and overt to subtle and subconscious. Specific moods such as anger, fear, sadness and joy have been found to affect eating responses and eating itself can play a role in influencing one’s emotions. With such an obvious link between emotions and eating behavior, the research over the years continues to delve further into the phenomenon. This includes investigating individuals of different weight categories, as well as children, adolescents and parenting styles. EXPLORING THE ASSOCIATION BETWEEN EMOTIONS AND EATING BEHAVIOR v",
"title": ""
}
] | scidocsrr |
6f0a0c26eb5e6e645d04f6a23421dedc | VANet security challenges and solutions: A survey | [
{
"docid": "a84143b7aa2d42f3297d81a036dc0f5e",
"text": "Vehicular Ad hoc Networks (VANETs) have emerged recently as one of the most attractive topics for researchers and automotive industries due to their tremendous potential to improve traffic safety, efficiency and other added services. However, VANETs are themselves vulnerable against attacks that can directly lead to the corruption of networks and then possibly provoke big losses of time, money, and even lives. This paper presents a survey of VANETs attacks and solutions in carefully considering other similar works as well as updating new attacks and categorizing them into different classes.",
"title": ""
}
] | [
{
"docid": "20b7dfaa400433b6697393d4e265d78d",
"text": "Security Operation Centers (SOCs) are being operated by universities, government agencies, and corporations to defend their enterprise networks in general and in particular to identify malicious behaviors in both networks and hosts. The success of a SOC depends on having the right tools, processes and, most importantly, efficient and effective analysts. One of the worrying issues in recent times has been the consistently high burnout rates of security analysts in SOCs. Burnout results in analysts making poor judgments when analyzing security events as well as frequent personnel turnovers. In spite of high awareness of this problem, little has been known so far about the factors leading to burnout. Various coping strategies employed by SOC management such as career progression do not seem to address the problem but rather deal only with the symptoms. In short, burnout is a manifestation of one or more underlying issues in SOCs that are as of yet unknown. In this work we performed an anthropological study of a corporate SOC over a period of six months and identified concrete factors contributing to the burnout phenomenon. We use Grounded Theory to analyze our fieldwork data and propose a model that explains the burnout phenomenon. Our model indicates that burnout is a human capital management problem resulting from the cyclic interaction of a number of human, technical, and managerial factors. Specifically, we identified multiple vicious cycles connecting the factors affecting the morale of the analysts. In this paper we provide detailed descriptions of the various vicious cycles and suggest ways to turn these cycles into virtuous ones. We further validated our results on the fieldnotes from a SOC at a higher education institution. The proposed model is able to successfully capture and explain the burnout symptoms in this other SOC as well. Copyright is held by the author/owner. Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee. Symposium on Usable Privacy and Security (SOUPS) 2015, July 22–24, 2015, Ottawa, Canada.",
"title": ""
},
{
"docid": "4abceedb1f6c735a8bc91bc811ce4438",
"text": "The study of school bullying has recently assumed an international dimension, but is faced with difficulties in finding terms in different languages to correspond to the English word bullying. To investigate the meanings given to various terms, a set of 25 stick-figure cartoons was devised, covering a range of social situations between peers. These cartoons were shown to samples of 8- and 14-year-old pupils (N = 1,245; n = 604 at 8 years, n = 641 at 14 years) in schools in 14 different countries, who judged whether various native terms cognate to bullying, applied to them. Terms from 10 Indo-European languages and three Asian languages were sampled. Multidimensional scaling showed that 8-year-olds primarily discriminated nonaggressive and aggressive cartoon situations; however, 14-year-olds discriminated fighting from physical bullying, and also discriminated verbal bullying and social exclusion. Gender differences were less appreciable than age differences. Based on the 14-year-old data, profiles of 67 words were then constructed across the five major cartoon clusters. The main types of terms used fell into six groups: bullying (of all kinds), verbal plus physical bullying, solely verbal bullying, social exclusion, solely physical aggression, and mainly physical aggression. The findings are discussed in relation to developmental trends in how children understand bullying, the inferences that can be made from cross-national studies, and the design of such studies.",
"title": ""
},
{
"docid": "ba3e4fb74d1912e95d05a01cbf92e3c9",
"text": "The Collaborative Filtering is the most successful algorithm in the recommender systems' field. A recommender system is an intelligent system can help users to come across interesting items. It uses data mining and information filtering techniques. The collaborative filtering creates suggestions for users based on their neighbors' preferences. But it suffers from its poor accuracy and scalability. This paper considers the users are m (m is the number of users) points in n dimensional space (n is the number of items) and represents an approach based on user clustering to produce a recommendation for active user by a new method. It uses k-means clustering algorithm to categorize users based on their interests. Then it uses a new method called voting algorithm to develop a recommendation. We evaluate the traditional collaborative filtering and the new one to compare them. Our results show the proposed algorithm is more accurate than the traditional one, besides it is less time consuming than it.",
"title": ""
},
{
"docid": "c55c339eb53de3a385df7d831cb4f24b",
"text": "Massive Open Online Courses (MOOCs) have gained tremendous popularity in the last few years. Thanks to MOOCs, millions of learners from all over the world have taken thousands of high-quality courses for free. Putting together an excellent MOOC ecosystem is a multidisciplinary endeavour that requires contributions from many different fields. Artificial intelligence (AI) and data mining (DM) are two such fields that have played a significant role in making MOOCs what they are today. By exploiting the vast amount of data generated by learners engaging in MOOCs, DM improves our understanding of the MOOC ecosystem and enables MOOC practitioners to deliver better courses. Similarly, AI, supported by DM, can greatly improve student experience and learning outcomes. In this survey paper, we first review the state-of-the-art artificial intelligence and data mining research applied to MOOCs, emphasising the use of AI and DM tools and techniques to improve student engagement, learning outcomes, and our understanding of the MOOC ecosystem. We then offer an overview of key trends and important research to carry out in the fields of AI and DM so that MOOCs can reach their full potential.",
"title": ""
},
{
"docid": "bab606f99e64c7fd5ce3c04376fbd632",
"text": "Diagnostic reasoning is a key component of many professions. To improve students’ diagnostic reasoning skills, educational psychologists analyse and give feedback on epistemic activities used by these students while diagnosing, in particular, hypothesis generation, evidence generation, evidence evaluation, and drawing conclusions. However, this manual analysis is highly time-consuming. We aim to enable the large-scale adoption of diagnostic reasoning analysis and feedback by automating the epistemic activity identification. We create the first corpus for this task, comprising diagnostic reasoning selfexplanations of students from two domains annotated with epistemic activities. Based on insights from the corpus creation and the task’s characteristics, we discuss three challenges for the automatic identification of epistemic activities using AI methods: the correct identification of epistemic activity spans, the reliable distinction of similar epistemic activities, and the detection of overlapping epistemic activities. We propose a separate performance metric for each challenge and thus provide an evaluation framework for future research. Indeed, our evaluation of various state-of-the-art recurrent neural network architectures reveals that current techniques fail to address some of these challenges.",
"title": ""
},
{
"docid": "6f8cc4d648f223840ca67550f1a3b6dd",
"text": "Information interaction system plays an important role in establishing a real-time and high-efficient traffic management platform in Intelligent Transportation System (ITS) applications. However, the present transmission technology still exists some defects in satisfying with the real-time performance of users data demand in Vehicle-to-Vehicle (V2V) communication. In order to solve this problem, this paper puts forward a novel Node Operating System (NDOS) scheme to realize the real-time data exchange between vehicles with wireless communication chips of mobile devices, and creates a distributed information interaction system for the interoperability between devices from various manufacturers. In addition, optimized data forwarding scheme is discussed for NDOS to achieve better transmission property and channel resource utilization. Experiments have been carried out in Network Simulator 2 (NS2) evaluation environment, and the results suggest that the scheme can receive higher transmission efficiency and validity than existing communication skills.",
"title": ""
},
{
"docid": "349f85e6ffd66d6a1dd9d9c6925d00bc",
"text": "Wearable computers have the potential to act as intelligent agents in everyday life and assist the user in a variety of tasks, using context to determine how to act. Location is the most common form of context used by these agents to determine the user’s task. However, another potential use of location context is the creation of a predictive model of the user’s future movements. We present a system that automatically clusters GPS data taken over an extended period of time into meaningful locations at multiple scales. These locations are then incorporated into a Markov model that can be consulted for use with a variety of applications in both single–user and collaborative scenarios.",
"title": ""
},
{
"docid": "7395053055da53b32adf2b28dba6de2d",
"text": "The Discovery of Human Herpesvirus 6 (HHV-6) Initially designated HBLV, for human B-lymphotropic virus, HHV-6 was isolated fortuitously in 1986 from interleukin 2stimulated peripheral blood mononuclear cells (PBMCs) of patients with AIDS or lymphoproliferative disorders (1). The PBMC cultures exhibited an unusual cytopathic effect characterized by enlarged balloonlike cells. The causative agent was identified as a herpesvirus by electron microscopy and lack of crosshybridization to a number of human herpesviruses (2). The GS strain is the prototype of the first isolates. Two additional isolates of lymphotropic human herpesviruses, U1102 and Gambian, genetically similar to HBLV, were obtained 1 year later from PBMCs of African AIDS patients. All of the isolates could grow in T cells (CEM, H9, Jurkat), in monocytes (HL60, U937), in glial cells (HED), as well as in B-cell lines (Raji, RAMOS, L4, WHPT) (3,4). A new variant, Z29, subsequently shown to differ in restriction endonuclease pattern from GS-like strains, was isolated from PBMCs of patients with AIDS (5). The cells supporting virus growth were characterized as CD4+ T lymphocytes (6). The designation HHV-6 was proposed 1 year after discovery of the first isolate to comply with the rules established by the International Committee on Taxonomy of Viruses (7). More than 100 additional HHV-6 strains have been isolated from PBMCs of children with subitum or febrile syndromes (8), from cell-free saliva of healthy or HIV-infected patients (9,10), from PBMCs of patients with chronic fatigue syndrome (CFS) (11), and from PBMCs of healthy adultsthese PBMCs were cultivated for human herpesvirus 7 (HHV-7) isolation (12).",
"title": ""
},
{
"docid": "01d441a277e9f9cbf6af40d0d526d44f",
"text": "On-orbit fabrication of spacecraft components can enable space programs to escape the volumetric limitations of launch shrouds and create systems with extremely large apertures and very long baselines in order to deliver higher resolution, higher bandwidth, and higher SNR data. This paper will present results of efforts to investigated the value proposition and technical feasibility of adapting several of the many rapidly-evolving additive manufacturing and robotics technologies to the purpose of enabling space systems to fabricate and integrate significant parts of themselves on-orbit. We will first discuss several case studies for the value proposition for on-orbit fabrication of space structures, including one for a starshade designed to enhance the capabilities for optical imaging of exoplanets by the proposed New World Observer mission, and a second for a long-baseline phased array radar system. We will then summarize recent work adapting and evolving additive manufacturing techniques and robotic assembly technologies to enable automated on-orbit fabrication of large, complex, three-dimensional structures such as trusses, antenna reflectors, and shrouds.",
"title": ""
},
{
"docid": "7f47434e413230faf04849cf43a845fa",
"text": "Although surgical resection remains the gold standard for treatment of liver cancer, there is a growing need for alternative therapies. Microwave ablation (MWA) is an experimental procedure that has shown great promise for the treatment of unresectable tumors and exhibits many advantages over other alternatives to resection, such as radiofrequency ablation and cryoablation. However, the antennas used to deliver microwave power largely govern the effectiveness of MWA. Research has focused on coaxial-based interstitial antennas that can be classified as one of three types (dipole, slot, or monopole). Choked versions of these antennas have also been developed, which can produce localized power deposition in tissue and are ideal for the treatment of deepseated hepatic tumors.",
"title": ""
},
{
"docid": "9cebd0ff0e218d742e44ebe05fb2e394",
"text": "Studies supporting the notion that physical activity and exercise can help alleviate the negative impact of age on the body and the mind abound. This literature review provides an overview of important findings in this fast growing research domain. Results from cross-sectional, longitudinal, and intervention studies with healthy older adults, frail patients, and persons suffering from mild cognitive impairment and dementia are reviewed and discussed. Together these finding suggest that physical exercise is a promising nonpharmaceutical intervention to prevent age-related cognitive decline and neurodegenerative diseases.",
"title": ""
},
{
"docid": "0965f1390233e71da72fbc8f37394add",
"text": "Brain extraction or whole brain segmentation is an important first step in many of the neuroimage analysis pipelines. The accuracy and the robustness of brain extraction, therefore, are crucial for the accuracy of the entire brain analysis process. The state-of-the-art brain extraction techniques rely heavily on the accuracy of alignment or registration between brain atlases and query brain anatomy, and/or make assumptions about the image geometry, and therefore have limited success when these assumptions do not hold or image registration fails. With the aim of designing an accurate, learning-based, geometry-independent, and registration-free brain extraction tool, in this paper, we present a technique based on an auto-context convolutional neural network (CNN), in which intrinsic local and global image features are learned through 2-D patches of different window sizes. We consider two different architectures: 1) a voxelwise approach based on three parallel 2-D convolutional pathways for three different directions (axial, coronal, and sagittal) that implicitly learn 3-D image information without the need for computationally expensive 3-D convolutions and 2) a fully convolutional network based on the U-net architecture. Posterior probability maps generated by the networks are used iteratively as context information along with the original image patches to learn the local shape and connectedness of the brain to extract it from non-brain tissue. The brain extraction results we have obtained from our CNNs are superior to the recently reported results in the literature on two publicly available benchmark data sets, namely, LPBA40 and OASIS, in which we obtained the Dice overlap coefficients of 97.73% and 97.62%, respectively. Significant improvement was achieved via our auto-context algorithm. Furthermore, we evaluated the performance of our algorithm in the challenging problem of extracting arbitrarily oriented fetal brains in reconstructed fetal brain magnetic resonance imaging (MRI) data sets. In this application, our voxelwise auto-context CNN performed much better than the other methods (Dice coefficient: 95.97%), where the other methods performed poorly due to the non-standard orientation and geometry of the fetal brain in MRI. Through training, our method can provide accurate brain extraction in challenging applications. This, in turn, may reduce the problems associated with image registration in segmentation tasks.",
"title": ""
},
{
"docid": "3c98c5bd1d9a6916ce5f6257b16c8701",
"text": "As financial time series are inherently noisy and non-stationary, it is regarded as one of the most challenging applications of time series forecasting. Due to the advantages of generalization capability in obtaining a unique solution, support vector regression (SVR) has also been successfully applied in financial time series forecasting. In the modeling of financial time series using SVR, one of the key problems is the inherent high noise. Thus, detecting and removing the noise are important but difficult tasks when building an SVR forecasting model. To alleviate the influence of noise, a two-stage modeling approach using independent component analysis (ICA) and support vector regression is proposed in financial time series forecasting. ICA is a novel statistical signal processing technique that was originally proposed to find the latent source signals from observed mixture signals without having any prior knowledge of the mixing mechanism. The proposed approach first uses ICA to the forecasting variables for generating the independent components (ICs). After identifying and removing the ICs containing the noise, the rest of the ICs are then used to reconstruct the forecasting variables which contain less noise and served as the input variables of the SVR forecasting model. In order to evaluate the performance of the proposed approach, the Nikkei 225 opening index and TAIEX closing index are used as illustrative examples. Experimental results show that the proposed model outperforms the SVR model with non-filtered forecasting variables and a random walk model.",
"title": ""
},
{
"docid": "1f26cc778ae481c8c72413f721926e57",
"text": "As improved versions of the successive cancellation (SC) decoding algorithm, the successive cancellation list (SCL) decoding and the successive cancellation stack (SCS) decoding are used to improve the finite-length performance of polar codes. In this paper, unified descriptions of the SC, SCL, and SCS decoding algorithms are given as path search procedures on the code tree of polar codes. Combining the principles of SCL and SCS, a new decoding algorithm called the successive cancellation hybrid (SCH) is proposed. This proposed algorithm can provide a flexible configuration when the time and space complexities are limited. Furthermore, a pruning technique is also proposed to lower the complexity by reducing unnecessary path searching operations. Performance and complexity analysis based on simulations shows that under proper configurations, all the three improved successive cancellation (ISC) decoding algorithms can approach the performance of the maximum likelihood (ML) decoding but with acceptable complexity. With the help of the proposed pruning technique, the time and space complexities of ISC decoders can be significantly reduced and be made very close to those of the SC decoder in the high signal-to-noise ratio regime.",
"title": ""
},
{
"docid": "7c9be363cf760d03aab0b6bffd764676",
"text": "Many children and youth in rural communities spend significant portions of their lives on school buses. This paper reviews the limited empirical research on the school bus experience, presents some new exploratory data, and offers some suggestions for future research on the impact of riding the school bus on children and youth.",
"title": ""
},
{
"docid": "082b1c341435ce93cfab869475ed32bd",
"text": "Given a graph where vertices are partitioned into k terminals and non-terminals, the goal is to compress the graph (i.e., reduce the number of non-terminals) using minor operations while preserving terminal distances approximately. The distortion of a compressed graph is the maximum multiplicative blow-up of distances between all pairs of terminals. We study the trade-off between the number of non-terminals and the distortion. This problem generalizes the Steiner Point Removal (SPR) problem, in which all non-terminals must be removed. We introduce a novel black-box reduction to convert any lower bound on distortion for the SPR problem into a super-linear lower bound on the number of non-terminals, with the same distortion, for our problem. This allows us to show that there exist graphs such that every minor with distortion less than 2 / 2.5 / 3 must have Ω(k2) / Ω(k5/4) / Ω(k6/5) non-terminals, plus more trade-offs in between. The black-box reduction has an interesting consequence: if the tight lower bound on distortion for the SPR problem is super-constant, then allowing any O(k) non-terminals will not help improving the lower bound to a constant. We also build on the existing results on spanners, distance oracles and connected 0-extensions to show a number of upper bounds for general graphs, planar graphs, graphs that exclude a fixed minor and bounded treewidth graphs. Among others, we show that any graph admits a minor with O(log k) distortion and O(k2) non-terminals, and any planar graph admits a minor with 1 + ε distortion and Õ((k/ε)2) non-terminals. 1998 ACM Subject Classification G.2.2 Graph Theory",
"title": ""
},
{
"docid": "cd9632f63fc5e3acf0ebb1039048f671",
"text": "The authors completed an 8-week practice placement at Thrive’s garden project in Battersea Park, London, as part of their occupational therapy degree programme. Thrive is a UK charity using social and therapeutic horticulture (STH) to enable disabled people to make positive changes to their own lives (Thrive 2008). STH is an emerging therapeutic movement, using horticulture-related activities to promote the health and wellbeing of disabled and vulnerable people (Sempik et al 2005, Fieldhouse and Sempik 2007). Within Battersea Park, Thrive has a main garden with available indoor facilities and two satellite gardens. All these gardens are publicly accessible. Thrive Battersea’s service users include people with learning disabilities, mental health challenges and physical disabilities. Thrive’s group facilitators (referred to as therapists) lead regular gardening groups, aiming to enable individual performance within the group and being mindful of health conditions and circumstances. The groups have three types of participant: Thrive’s therapists, service users (known as gardeners) and volunteers. The volunteers help Thrive’s therapists and gardeners to perform STH activities. The gardening groups comprise participants from various age groups and abilities. Thrive Battersea provides ongoing contact between the gardeners, volunteers and therapists. Integrating service users and non-service users is a method of tackling negative attitudes to disability and also promoting social inclusion (Sayce 2000). Thrive Battersea is an example of a ‘role-emerging’ practice placement, which is based outside either local authorities or the National Health Service (NHS) and does not have an on-site occupational therapist (College of Occupational Therapists 2006). The connection of occupational therapy theory to practice is essential on any placement (Alsop 2006). The roleemerging nature of this placement placed additional reflective onus on the authors to identify the links between theory and practice. The authors observed how Thrive’s gardeners connected to the spaces they worked and to the people they worked with. A sense of individual Gardening and belonging: reflections on how social and therapeutic horticulture may facilitate health, wellbeing and inclusion",
"title": ""
},
{
"docid": "35404fbbf92e7a995cdd6de044f2ec0d",
"text": "The ball on plate system is the extension of traditional ball on beam balancing problem in control theory. In this paper the implementation of a proportional-integral-derivative controller (PID controller) to balance a ball on a plate has been demonstrated. To increase the system response time and accuracy multiple controllers are piped through a simple custom serial protocol to boost the processing power, and overall performance. A single HD camera module is used as a sensor to detect the ball's position and two RC servo motors are used to tilt the plate to balance the ball. The result shows that by implementing multiple PUs (Processing Units) redundancy and high resolution can be achieved in real-time control systems.",
"title": ""
}
] | scidocsrr |
03c192db794d741241a84ccd46c5ba9b | Learning time-series shapelets | [
{
"docid": "8609f49cc78acc1ba25e83c8e68040a6",
"text": "Time series shapelets are small, local patterns in a time series that are highly predictive of a class and are thus very useful features for building classifiers and for certain visualization and summarization tasks. While shapelets were introduced only recently, they have already seen significant adoption and extension in the community. Despite their immense potential as a data mining primitive, there are two important limitations of shapelets. First, their expressiveness is limited to simple binary presence/absence questions. Second, even though shapelets are computed offline, the time taken to compute them is significant. In this work, we address the latter problem by introducing a novel algorithm that finds shapelets in less time than current methods by an order of magnitude. Our algorithm is based on intelligent caching and reuse of computations, and the admissible pruning of the search space. Because our algorithm is so fast, it creates an opportunity to consider more expressive shapelet queries. In particular, we show for the first time an augmented shapelet representation that distinguishes the data based on conjunctions or disjunctions of shapelets. We call our novel representation Logical-Shapelets. We demonstrate the efficiency of our approach on the classic benchmark datasets used for these problems, and show several case studies where logical shapelets significantly outperform the original shapelet representation and other time series classification techniques. We demonstrate the utility of our ideas in domains as diverse as gesture recognition, robotics, and biometrics.",
"title": ""
}
] | [
{
"docid": "058515182c568c8df202542f28c15203",
"text": "Plant diseases have turned into a dilemma as it can cause significant reduction in both quality and quantity of agricultural products. Automatic detection of plant diseases is an essential research topic as it may prove benefits in monitoring large fields of crops, and thus automatically detect the symptoms of diseases as soon as they appear on plant leaves. The proposed system is a software solution for automatic detection and classification of plant leaf diseases. The developed processing scheme consists of four main steps, first a color transformation structure for the input RGB image is created, then the green pixels are masked and removed using specific threshold value followed by segmentation process, the texture statistics are computed for the useful segments, finally the extracted features are passed through the classifier. The proposed algorithm’s efficiency can successfully detect and classify the examined diseases with an accuracy of 94%. Experimental results on a database of about 500 plant leaves confirm the robustness of the proposed approach.",
"title": ""
},
{
"docid": "9d7a441731e9d0c62dd452ccb3d19f7b",
"text": " In many countries, especially in under developed and developing countries proper health care service is a major concern. The health centers are far and even the medical personnel are deficient when compared to the requirement of the people. For this reason, health services for people who are unhealthy and need health monitoring on regular basis is like impossible. This makes the health monitoring of healthy people left far more behind. In order for citizens not to be deprived of the primary care it is always desirable to implement some system to solve this issue. The application of Internet of Things (IoT) is wide and has been implemented in various areas like security, intelligent transport system, smart cities, smart factories and health. This paper focuses on the application of IoT in health care system and proposes a novel architecture of making use of an IoT concept under fog computing. The proposed architecture can be used to acknowledge the underlying problem of deficient clinic-centric health system and change it to smart patientcentric health system.",
"title": ""
},
{
"docid": "472946ba2e62d3d8a0a42c7e908bf18f",
"text": "BACKGROUND\nAntidepressants, aiming at monoaminergic neurotransmission, exhibit delayed onset of action, limited efficacy, and poor compliance. Glutamatergic neurotransmission is involved in depression. However, it is unclear whether enhancement of the N-methyl-D-aspartate (NMDA) subtype glutamate receptor can be a treatment for depression.\n\n\nMETHODS\nWe studied sarcosine, a glycine transporter-I inhibitor that potentiates NMDA function, in animal models and in depressed patients. We investigated its effects in forced swim test, tail suspension test, elevated plus maze test, novelty-suppressed feeding test, and chronic unpredictable stress test in rats and conducted a 6-week randomized, double-blinded, citalopram-controlled trial in 40 patients with major depressive disorder. Clinical efficacy and side effects were assessed biweekly, with the main outcomes of Hamilton Depression Rating Scale, Global Assessment of Function, and remission rate. The time course of response and dropout rates was also compared.\n\n\nRESULTS\nSarcosine decreased immobility in the forced swim test and tail suspension test, reduced the latency to feed in the novelty-suppressed feeding test, and reversed behavioral deficits caused by chronic unpredictable stress test, which are characteristics for an antidepressant. In the clinical study, sarcosine substantially improved scores of Hamilton Depression Rating Scale, Clinical Global Impression, and Global Assessment of Function more than citalopram treatment. Sarcosine-treated patients were much more likely and quicker to remit and less likely to drop out. Sarcosine was well tolerated without significant side effects.\n\n\nCONCLUSIONS\nOur preliminary findings suggest that enhancing NMDA function can improve depression-like behaviors in rodent models and in human depression. Establishment of glycine transporter-I inhibition as a novel treatment for depression waits for confirmation by further proof-of-principle studies.",
"title": ""
},
{
"docid": "f48712851095fa3b33898c38ebcfaa95",
"text": "Most existing image-based crop disease recognition algorithms rely on extracting various kinds of features from leaf images of diseased plants. They have a common limitation as the features selected for discriminating leaf images are usually treated as equally important in the classification process. We propose a novel cucumber disease recognition approach which consists of three pipelined procedures: segmenting diseased leaf images by K-means clustering, extracting shape and color features from lesion information, and classifying diseased leaf images using sparse representation (SR). A major advantage of this approach is that the classification in the SR space is able to effectively reduce the computation cost and improve the recognition performance. We perform a comparison with four other feature extraction based methods using a leaf image dataset on cucumber diseases. The proposed approach is shown to be effective in recognizing seven major cucumber diseases with an overall recognition rate of 85.7%, higher than those of the other methods. 2017 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "cf4509b8d2b458f608a7e72165cdf22b",
"text": "Nowadays, blockchain is becoming a synonym for distributed ledger technology. However, blockchain is only one of the specializations in the field and is currently well-covered in existing literature, but mostly from a cryptographic point of view. Besides blockchain technology, a new paradigm is gaining momentum: directed acyclic graphs. The contribution presented in this paper is twofold. Firstly, the paper analyzes distributed ledger technology with an emphasis on the features relevant to distributed systems. Secondly, the paper analyses the usage of directed acyclic graph paradigm in the context of distributed ledgers, and compares it with the blockchain-based solutions. The two paradigms are compared using representative implementations: Bitcoin, Ethereum and Nano. We examine representative solutions in terms of the applied data structures for maintaining the ledger, consensus mechanisms, transaction confirmation confidence, ledger size, and scalability.",
"title": ""
},
{
"docid": "0c7512ac95d72436e31b9b05199eefdd",
"text": "Usable security has unique usability challenges bec ause the need for security often means that standard human-comput er-in eraction approaches cannot be directly applied. An important usability goal for authentication systems is to support users in s electing better passwords, thus increasing security by expanding th e effective password space. In click-based graphical passwords, poorly chosen passwords lead to the emergence of hotspots – portions of the image where users are more likely to select cli ck-points, allowing attackers to mount more successful diction ary attacks. We use persuasion to influence user choice in click -based graphical passwords, encouraging users to select mo re random, and hence more secure, click-points. Our approach i s to introduce persuasion to the Cued Click-Points graphical passw ord scheme (Chiasson, van Oorschot, Biddle, 2007) . Our resulting scheme significantly reduces hotspots while still maintain ing its usability.",
"title": ""
},
{
"docid": "b912b32d9f1f4e7a5067450b98870a71",
"text": "As of May 2013, 56 percent of American adults had a smartphone, and most of them used it to access the Internet. One-third of smartphone users report that their phone is the primary way they go online. Just as the Internet changed retailing in the late 1990s, many argue that the transition to mobile, sometimes referred to as “Web 3.0,” will have a similarly disruptive effect (Brynjolfsson et al. 2013). In this paper, we aim to document some early effects of how mobile devices might change Internet and retail commerce. We present three main findings based on an analysis of eBay’s mobile shopping application and core Internet platform. First, and not surprisingly, the early adopters of mobile e-commerce applications appear",
"title": ""
},
{
"docid": "dc98ddb6033ca1066f9b0ba5347a3d0c",
"text": "Modern ab initio methods have rapidly increased our understanding of solid state materials properties, chemical reactions, and the quantum interactions between atoms. However, poor scaling often renders direct ab initio calculations intractable for large or complex systems. There are two obvious avenues through which to remedy this problem: (i) develop new, less expensive methods to calculate system properties, or (ii) make existing methods faster. This paper describes an open source framework designed to pursue both of these avenues. PROPhet (short for PROPerty Prophet) utilizes machine learning techniques to find complex, non-linear mappings between sets of material or system properties. The result is a single code capable of learning analytical potentials, non-linear density functionals, and other structure-property or property-property relationships. These capabilities enable highly accurate mesoscopic simulations, facilitate computation of expensive properties, and enable the development of predictive models for systematic materials design and optimization. This work explores the coupling of machine learning to ab initio methods through means both familiar (e.g., the creation of various potentials and energy functionals) and less familiar (e.g., the creation of density functionals for arbitrary properties), serving both to demonstrate PROPhet’s ability to create exciting post-processing analysis tools and to open the door to improving ab initio methods themselves with these powerful machine learning techniques.",
"title": ""
},
{
"docid": "5700ba2411f9b4e4ed59c8c5839dc87d",
"text": "Radiomics applies machine learning algorithms to quantitative imaging data to characterise the tumour phenotype and predict clinical outcome. For the development of radiomics risk models, a variety of different algorithms is available and it is not clear which one gives optimal results. Therefore, we assessed the performance of 11 machine learning algorithms combined with 12 feature selection methods by the concordance index (C-Index), to predict loco-regional tumour control (LRC) and overall survival for patients with head and neck squamous cell carcinoma. The considered algorithms are able to deal with continuous time-to-event survival data. Feature selection and model building were performed on a multicentre cohort (213 patients) and validated using an independent cohort (80 patients). We found several combinations of machine learning algorithms and feature selection methods which achieve similar results, e.g., MSR-RF: C-Index = 0.71 and BT-COX: C-Index = 0.70 in combination with Spearman feature selection. Using the best performing models, patients were stratified into groups of low and high risk of recurrence. Significant differences in LRC were obtained between both groups on the validation cohort. Based on the presented analysis, we identified a subset of algorithms which should be considered in future radiomics studies to develop stable and clinically relevant predictive models for time-to-event endpoints.",
"title": ""
},
{
"docid": "365a236fee3cfda081d7d8ab2b31d4a2",
"text": "Defining software requirements is a complex and difficult process, which often leads to costly project failures. Requirements emerge from a collaborative and interactive negotiation process that involves heterogeneous stakeholders (people involved in an elicitation process such as users, analysts, developers, and customers). Practical experience shows that prioritizing requirements is not as straightforward task as the literature suggests. A process for prioritizing requirements must not only be simple and fast, but it must obtain trustworthy results. The objective of this paper is to provide a classification framework to characterize prioritization proposals. We highlight differences among eleven selected approaches by emphasizing their most important features.",
"title": ""
},
{
"docid": "12cd45e8832650d620695d4f5680148f",
"text": "OBJECTIVE\nCurrent systems to evaluate outcomes from tissue-engineered cartilage (TEC) are sub-optimal. The main purpose of our study was to demonstrate the use of second harmonic generation (SHG) microscopy as a novel quantitative approach to assess collagen deposition in laboratory made cartilage constructs.\n\n\nMETHODS\nScaffold-free cartilage constructs were obtained by condensation of in vitro expanded Hoffa's fat pad derived stromal cells (HFPSCs), incubated in the presence or absence of chondrogenic growth factors (GF) during a period of 21 d. Cartilage-like features in constructs were assessed by Alcian blue staining, transmission electron microscopy (TEM), SHG and two-photon excited fluorescence microscopy. A new scoring system, using second harmonic generation microscopy (SHGM) index for collagen density and distribution, was adapted to the existing \"Bern score\" in order to evaluate in vitro TEC.\n\n\nRESULTS\nSpheroids with GF gave a relative high Bern score value due to appropriate cell morphology, cell density, tissue-like features and proteoglycan content, whereas spheroids without GF did not. However, both TEM and SHGM revealed striking differences between the collagen framework in the spheroids and native cartilage. Spheroids required a four-fold increase in laser power to visualize the collagen matrix by SHGM compared to native cartilage. Additionally, collagen distribution, determined as the area of tissue generating SHG signal, was higher in spheroids with GF than without GF, but lower than in native cartilage.\n\n\nCONCLUSION\nSHG represents a reliable quantitative approach to assess collagen deposition in laboratory engineered cartilage, and may be applied to improve currently established scoring systems.",
"title": ""
},
{
"docid": "85a541f5d83b3de1695a5c994a2be21f",
"text": "1Department of Occupational Therapy, Faculty of Rehabilitation, Tehran University of Medical Sciences, Tehran, 2Department of Epidemiology and Biostatistics, Faculty of Health, Isfahan University of Medical Sciences, Isfahan, 3Department of Paediatrics, Faculty of Medicine, Baqiyatollah University of Medical Sciences, and 4Department of Physiotherapy, Faculty of Rehabilitation, Tehran University of Medical Sciences, Tehran, Iran. Reprint requests and correspondence to: Dr. Leila Dehghan, Department of Occupational Therapy, Faculty of Rehabilitation, Tehran University of Medical Sciences, Piche Shemiran, Tehran, Iran. E-mail: ldehghan@tums.ac.ir EFFECT OF THE BOBATH TECHNIQUE, CONDUCTIVE EDUCATION AND EDUCATION TO PARENTS IN ACTIVITIES OF DAILY LIVING IN CHILDREN WITH CEREBRAL PALSY IN IRAN",
"title": ""
},
{
"docid": "65cae0002bcff888d6514aa2d375da40",
"text": "We study the problem of finding efficiently computable non-degenerate multilinear maps from G1 to G2, where G1 and G2 are groups of the same prime order, and where computing discrete logarithms in G1 is hard. We present several applications to cryptography, explore directions for building such maps, and give some reasons to believe that finding examples with n > 2",
"title": ""
},
{
"docid": "8ec871d495cf8d796654015896e2dcd2",
"text": "Artificial intelligence research is ushering in a new era of sophisticated, mass-market transportation technology. While computers can already fly a passenger jet better than a trained human pilot, people are still faced with the dangerous yet tedious task of driving automobiles. Intelligent Transportation Systems (ITS) is the field that focuses on integrating information technology with vehicles and transportation infrastructure to make transportation safer, cheaper, and more efficient. Recent advances in ITS point to a future in which vehicles themselves handle the vast majority of the driving task. Once autonomous vehicles become popular, autonomous interactions amongst multiple vehicles will be possible. Current methods of vehicle coordination, which are all designed to work with human drivers, will be outdated. The bottleneck for roadway efficiency will no longer be the drivers, but rather the mechanism by which those drivers’ actions are coordinated. While open-road driving is a well-studied and more-or-less-solved problem, urban traffic scenarios, especially intersections, are much more challenging. We believe current methods for controlling traffic, specifically at intersections, will not be able to take advantage of the increased sensitivity and precision of autonomous vehicles as compared to human drivers. In this article, we suggest an alternative mechanism for coordinating the movement of autonomous vehicles through intersections. Drivers and intersections in this mechanism are treated as autonomous agents in a multiagent system. In this multiagent system, intersections use a new reservation-based approach built around a detailed communication protocol, which we also present. We demonstrate in simulation that our new mechanism has the potential to significantly outperform current intersection control technology—traffic lights and stop signs. Because our mechanism can emulate a traffic light or stop sign, it subsumes the most popular current methods of intersection control. This article also presents two extensions to the mechanism. The first extension allows the system to control human-driven vehicles in addition to autonomous vehicles. The second gives priority to emergency vehicles without significant cost to civilian vehicles. The mechanism, including both extensions, is implemented and tested in simulation, and we present experimental results that strongly attest to the efficacy of this approach.",
"title": ""
},
{
"docid": "9af703a47d382926698958fba88c1e1a",
"text": "Nowadays, the use of agile software development methods like Scrum is common in industry and academia. Considering the current attacking landscape, it is clear that developing secure software should be a main concern in all software development projects. In traditional software projects, security issues require detailed planning in an initial planning phase, typically resulting in a detailed security analysis (e.g., threat and risk analysis), a security architecture, and instructions for security implementation (e.g., specification of key sizes and cryptographic algorithms to use). Agile software development methods like Scrum are known for reducing the initial planning phases (e.g., sprint 0 in Scrum) and for focusing more on producing running code. Scrum is also known for allowing fast adaption of the emerging software to changes of customer wishes. For security, this means that it is likely that there are no detailed security architecture or security implementation instructions from the start of the project. It also means that a lot of design decisions will be made during the runtime of the project. Hence, to address security in Scrum, it is necessary to consider security issues throughout the whole software development process. Secure Scrum is a variation of the Scrum framework with special focus on the development of secure software throughout the whole software development process. It puts emphasis on implementation of security related issues without the need of changing the underlying Scrum process or influencing team dynamics. Secure Scrum allows even non-security experts to spot security issues, to implement security features, and to verify implementations. A field test of Secure Scrum shows that the security level of software developed using Secure Scrum is higher then the security level of software developed using standard Scrum.",
"title": ""
},
{
"docid": "ef8be5104f9bc4a0f4353ed236b6afb8",
"text": "State-of-the-art human pose estimation methods are based on heat map representation. In spite of the good performance, the representation has a few issues in nature, such as non-differentiable postprocessing and quantization error. This work shows that a simple integral operation relates and unifies the heat map representation and joint regression, thus avoiding the above issues. It is differentiable, efficient, and compatible with any heat map based methods. Its effectiveness is convincingly validated via comprehensive ablation experiments under various settings, specifically on 3D pose estimation, for the first time.",
"title": ""
},
{
"docid": "5807ace0e7e4e9a67c46f29a3f2e70e3",
"text": "In this work we present a pedestrian navigation system for indoor environments based on the dead reckoning positioning method, 2D barcodes, and data from accelerometers and magnetometers. All the sensing and computing technologies of our solution are available in common smart phones. The need to create indoor navigation systems arises from the inaccessibility of the classic navigation systems, such as GPS, in indoor environments.",
"title": ""
},
{
"docid": "8a708ec1187ecb2fe9fa929b46208b34",
"text": "This paper proposes a new face verification method that uses multiple deep convolutional neural networks (DCNNs) and a deep ensemble, that extracts two types of low dimensional but discriminative and high-level abstracted features from each DCNN, then combines them as a descriptor for face verification. Our DCNNs are built from stacked multi-scale convolutional layer blocks to present multi-scale abstraction. To train our DCNNs, we use different resolutions of triplets that consist of reference images, positive images, and negative images, and triplet-based loss function that maximize the ratio of distances between negative pairs and positive pairs and minimize the absolute distances between positive face images. A deep ensemble is generated from features extracted by each DCNN, and used as a descriptor to train the joint Bayesian learning and its transfer learning method. On the LFW, although we use only 198,018 images and only four different types of networks, the proposed method with the joint Bayesian learning and its transfer learning method achieved 98.33% accuracy. In addition to further increase the accuracy, we combine the proposed method and high dimensional LBP based joint Bayesian method, and achieved 99.08% accuracy on the LFW. Therefore, the proposed method helps to improve the accuracy of face verification when training data is insufficient to train DCNNs.",
"title": ""
},
{
"docid": "97838cc3eb7b31d49db6134f8fc81c84",
"text": "We study the problem of semi-supervised question answering—-utilizing unlabeled text to boost the performance of question answering models. We propose a novel training framework, the Generative Domain-Adaptive Nets. In this framework, we train a generative model to generate questions based on the unlabeled text, and combine model-generated questions with human-generated questions for training question answering models. We develop novel domain adaptation algorithms, based on reinforcement learning, to alleviate the discrepancy between the modelgenerated data distribution and the humangenerated data distribution. Experiments show that our proposed framework obtains substantial improvement from unlabeled text.",
"title": ""
},
{
"docid": "0765510720f450736135efd797097450",
"text": "In this paper we discuss the re-orientation of human-computer interaction as an aesthetic field. We argue that mainstream approaches lack of general openness and ability to assess experience aspects of interaction, but that this can indeed be remedied. We introduce the concept of interface criticism as a way to turn the conceptual re-orientation into handles for practical design, and we present and discuss an interface criticism guide.",
"title": ""
}
] | scidocsrr |
0a9850db7c80e1ec31309807d1b7b512 | Monocular Visual-Inertial SLAM-Based Collision Avoidance Strategy for Fail-Safe UAV Using Fuzzy Logic Controllers - Comparison of Two Cross-Entropy Optimization Approaches | [
{
"docid": "b0d91cac5497879ea87bdf9034f3fd6d",
"text": "This paper presents an open-source indoor navigation system for quadrotor micro aerial vehicles(MAVs), implemented in the ROS framework. The system requires a minimal set of sensors including a planar laser range-finder and an inertial measurement unit. We address the issues of autonomous control, state estimation, path-planning, and teleoperation, and provide interfaces that allow the system to seamlessly integrate with existing ROS navigation tools for 2D SLAM and 3D mapping. All components run in real time onboard the MAV, with state estimation and control operating at 1 kHz. A major focus in our work is modularity and abstraction, allowing the system to be both flexible and hardware-independent. All the software and hardware components which we have developed, as well as documentation and test data, are available online.",
"title": ""
}
] | [
{
"docid": "b5ab4c11feee31195fdbec034b4c99d9",
"text": "Abstract Traditionally, firewalls and access control have been the most important components used in order to secure servers, hosts and computer networks. Today, intrusion detection systems (IDSs) are gaining attention and the usage of these systems is increasing. This thesis covers commercial IDSs and the future direction of these systems. A model and taxonomy for IDSs and the technologies behind intrusion detection is presented. Today, many problems exist that cripple the usage of intrusion detection systems. The decreasing confidence in the alerts generated by IDSs is directly related to serious problems like false positives. By studying IDS technologies and analyzing interviews conducted with security departments at Swedish banks, this thesis identifies the major problems within IDSs today. The identified problems, together with recent IDS research reports published at the RAID 2002 symposium, are used to recommend the future direction of commercial intrusion detection systems. Intrusion Detection Systems – Technologies, Weaknesses and Trends",
"title": ""
},
{
"docid": "77d616dc746e74db02215dcf2fdb6141",
"text": "It is almost a quarter of a century since the launch in 1968 of NASA's Pioneer 9 spacecraft on the first mission into deep-space that relied on coding to enhance communications on the critical downlink channel. [The channel code used was a binary convolutional code that was decoded with sequential decoding--we will have much to say about this code in the sequel.] The success of this channel coding system had repercussions that extended far beyond NASA's space program. It is no exaggeration to say that the Pioneer 9 mission provided communications engineers with the first incontrovertible demonstration of the practical utility of channel coding techniques and thereby paved the way for the successful application of coding to many other channels.",
"title": ""
},
{
"docid": "b4d7a8b6b24c85af9f62105194087535",
"text": "New technologies provide expanded opportunities for interaction design. The growing number of possible ways to interact, in turn, creates a new responsibility for designers: Besides the product's visual aesthetics, one has to make choices about the aesthetics of interaction. This issue recently gained interest in Human-Computer Interaction (HCI) research. Based on a review of 19 approaches, we provide an overview of today's state of the art. We focused on approaches that feature \"qualities\", \"dimensions\" or \"parameters\" to describe interaction. Those fell into two broad categories. One group of approaches dealt with detailed spatio-temporal attributes of interaction sequences (i.e., action-reaction) on a sensomotoric level (i.e., form). The other group addressed the feelings and meanings an interaction is enveloped in rather than the interaction itself (i.e., experience). Surprisingly, only two approaches addressed both levels simultaneously, making the explicit link between form and experience. We discuss these findings and its implications for future theory building.",
"title": ""
},
{
"docid": "cf9c23f046ca788d3e8927246568098b",
"text": "This study examined psychological well-being and coping in parents of children with ASD and parents of typically developing children. 73 parents of children with ASD and 63 parents of typically developing children completed a survey. Parents of children with ASD reported significantly more parenting stress symptoms (i.e., negative parental self-views, lower satisfaction with parent-child bond, and experiences of difficult child behaviors), more depression symptoms, and more frequent use of Active Avoidance coping, than parents of typically developing children. Parents of children with ASD did not differ significantly in psychological well-being and coping when compared as according to child's diagnosis. Study results reinforced the importance of addressing well-being and coping needs of parents of children with ASD.",
"title": ""
},
{
"docid": "11e220528f9d4b6a51cdb63268934586",
"text": "The function of DIRCM (directed infrared countermeasures) jamming is to cause the missile to miss its intended target by disturbing the seeker tracking process. The DIRCM jamming uses the pulsing flashes of infrared (IR) energy and its frequency, phase and intensity have the influence on the missile guidance system. In this paper, we analyze the DIRCM jamming effect of the spin-scan reticle seeker. The simulation results show that the jamming effect is greatly influenced by frequency, phase and intensity of the jammer signal.",
"title": ""
},
{
"docid": "322fd3b0c6c833bac9598b510dc40b98",
"text": "Quality assessment is an indispensable technique in a large body of media applications, i.e., photo retargeting, scenery rendering, and video summarization. In this paper, a fully automatic framework is proposed to mimic how humans subjectively perceive media quality. The key is a locality-preserved sparse encoding algorithm that accurately discovers human gaze shifting paths from each image or video clip. In particular, we first extract local image descriptors from each image/video, and subsequently project them into the so-called perceptual space. Then, a nonnegative matrix factorization (NMF) algorithm is proposed that represents each graphlet by a linear and sparse combination of the basis ones. Since each graphlet is visually/semantically similar to its neighbors, a locality-preserved constraint is encoded into the NMF algorithm. Mathematically, the saliency of each graphlet is quantified by the norm of its sparse codes. Afterward, we sequentially link them into a path to simulate human gaze allocation. Finally, a probabilistic quality model is learned based on such paths extracted from a collection of photos/videos, which are marked as high quality ones via multiple Flickr users. Comprehensive experiments have demonstrated that: 1) our quality model outperforms many of its competitors significantly, and 2) the learned paths are on average 89.5% consistent with real human gaze shifting paths.",
"title": ""
},
{
"docid": "e0f202362b9c51d92f268261a96bc11e",
"text": "Accelerated gradient methods play a central role in optimization, achieving optimal rates in many settings. Although many generalizations and extensions of Nesterov's original acceleration method have been proposed, it is not yet clear what is the natural scope of the acceleration concept. In this paper, we study accelerated methods from a continuous-time perspective. We show that there is a Lagrangian functional that we call the Bregman Lagrangian, which generates a large class of accelerated methods in continuous time, including (but not limited to) accelerated gradient descent, its non-Euclidean extension, and accelerated higher-order gradient methods. We show that the continuous-time limit of all of these methods corresponds to traveling the same curve in spacetime at different speeds. From this perspective, Nesterov's technique and many of its generalizations can be viewed as a systematic way to go from the continuous-time curves generated by the Bregman Lagrangian to a family of discrete-time accelerated algorithms.",
"title": ""
},
{
"docid": "2720f2aa50ddfc9150d6c2718f4433d3",
"text": "This paper describes InP/InGaAs double heterojunction bipolar transistor (HBT) technology that uses SiN/SiO2 sidewall spacers. This technology enables the formation of ledge passivation and narrow base metals by i-line lithography. With this process, HBTs with various emitter sizes and emitter-base (EB) spacings can be fabricated on the same wafer. The impact of the emitter size and EB spacing on the current gain and high-frequency characteristics is investigated. The reduction of the current gain is <;5% even though the emitter width decreases from 0.5 to 0.25 μm. A high current gain of over 40 is maintained even for a 0.25-μm emitter HBT. The HBTs with emitter widths ranging from 0.25 to 0.5 μm also provide peak ft of over 430 GHz. On the other hand, peak fmax greatly increases from 330 to 464 GHz with decreasing emitter width from 0.5 to 0.25 μm. These results indicate that the 0.25-μm emitter HBT with the ledge passivaiton exhibits balanced high-frequency performance (ft = 452 GHz and fmax = 464 GHz), while maintaining a current gain of over 40.",
"title": ""
},
{
"docid": "e59f4a08d0c7c789a5d83e7d7dc9ec3a",
"text": "In this paper, we present a new approach for audio tampering detection based on microphone classification. The underlying algorithm is based on a blind channel estimation, specifically designed for recordings from mobile devices. It is applied to detect a specific type of tampering, i.e., to detect whether footprints from more than one microphone exist within a given content item. As will be shown, the proposed method achieves an accuracy above 95% for AAC, MP3 and PCM-encoded recordings.",
"title": ""
},
{
"docid": "2512c057299a86d3e461a15b67377944",
"text": "Compressive sensing (CS) is an alternative to Shan-non/Nyquist sampling for the acquisition of sparse or compressible signals. Instead of taking periodic samples, CS measures inner products with M random vectors, where M is much smaller than the number of Nyquist-rate samples. The implications of CS are promising for many applications and enable the design of new kinds of analog-to-digital converters, imaging systems, and sensor networks. In this paper, we propose and study a wideband compressive radio receiver (WCRR) architecture that can efficiently acquire and track FM and other narrowband signals that live within a wide frequency bandwidth. The receiver operates below the Nyquist rate and has much lower complexity than either a traditional sampling system or CS recovery system. Our methods differ from most standard approaches to the problem of CS recovery in that we do not assume that the signals of interest are confined to a discrete set of frequencies, and we do not rely on traditional recovery methods such as l1-minimization. Instead, we develop a simple detection system that identifies the support of the narrowband FM signals and then applies compressive filtering techniques based on discrete prolate spheroidal sequences to cancel interference and isolate the signals. Lastly, a compressive phase-locked loop (PLL) directly recovers the FM message signals.",
"title": ""
},
{
"docid": "eec7a9a6859e641c3cc0ade73583ef5c",
"text": "We propose an Apache Spark-based scale-up server architecture using Docker container-based partitioning method to improve performance scalability. The performance scalability problem of Apache Spark-based scale-up servers is due to garbage collection(GC) and remote memory access overheads when the servers are equipped with significant number of cores and Non-Uniform Memory Access(NUMA). The proposed method minimizes the problems using Docker container-based architecture effectively partitioning the original scale-up server into small logical servers. Our evaluation study based on benchmark programs revealed that the partitioning method showed performance improvement by ranging from 1.1x through 1.7x on a 120 core scale-up system. Our proof-of-concept scale-up server architecture provides the basis towards complete and practical design of partitioning-based scale-up servers showing performance scalability.",
"title": ""
},
{
"docid": "b7957cc83988e0be2da64f6d9837419c",
"text": "Description: A revision of the #1 text in the Human Computer Interaction field, Interaction Design, the third edition is an ideal resource for learning the interdisciplinary skills needed for interaction design, human-computer interaction, information design, web design and ubiquitous computing. The authors are acknowledged leaders and educators in their field, with a strong global reputation. They bring depth of scope to the subject in this new edition, encompassing the latest technologies and devices including social networking, Web 2.0 and mobile devices. The third edition also adds, develops and updates cases, examples and questions to bring the book in line with the latest in Human Computer Interaction. Interaction Design offers a cross-disciplinary, practical and process-oriented approach to Human Computer Interaction, showing not just what principles ought to apply to Interaction Design, but crucially how they can be applied. The book focuses on how to design interactive products that enhance and extend the way people communicate, interact and work. Motivating examples are included to illustrate both technical, but also social and ethical issues, making the book approachable and adaptable for both Computer Science and non-Computer Science users. Interviews with key HCI luminaries are included and provide an insight into current and future trends.",
"title": ""
},
{
"docid": "3d0103c34fcc6a65ad56c85a9fe10bad",
"text": "This paper approaches the problem of finding correspondences between images in which there are large changes in viewpoint, scale and illumination. Recent work has shown that scale-space ‘interest points’ may be found with good repeatability in spite of such changes. Furthermore, the high entropy of the surrounding image regions means that local descriptors are highly discriminative for matching. For descriptors at interest points to be robustly matched between images, they must be as far as possible invariant to the imaging process. In this work we introduce a family of features which use groups of interest points to form geometrically invariant descriptors of image regions. Feature descriptors are formed by resampling the image relative to canonical frames defined by the points. In addition to robust matching, a key advantage of this approach is that each match implies a hypothesis of the local 2D (projective) transformation. This allows us to immediately reject most of the false matches using a Hough transform. We reject remaining outliers using RANSAC and the epipolar constraint. Results show that dense feature matching can be achieved in a few seconds of computation on 1GHz Pentium III machines.",
"title": ""
},
{
"docid": "b96e2dba118d89942990337df26c7b20",
"text": "This paper introduces a high-speed all-hardware scale-invariant feature transform (SIFT) architecture with parallel and pipeline technology for real-time extraction of image features. The task-level parallel and pipeline structure are exploited between the hardware blocks, and the data-level parallel and pipeline architecture are exploited inside each block. Two identical random access memories are adopted with ping-pong operation to execute the key point detection module and the descriptor generation module in task-level parallelism. With speeding up the key point detection module of SIFT, the descriptor generation module has become the bottleneck of the system's performance; therefore, this paper proposes an optimized descriptor generation algorithm. A novel window-dividing method is proposed with square subregions arranged in 16 directions, and the descriptors are generated by reordering the histogram instead of window rotation. Therefore, the main orientation detection block and descriptor generation block run in parallel instead of interactively. With the optimized algorithm cooperating with pipeline structure inside each block, we not only improve the parallelism of the algorithm, but also avoid floating data calculation to save hardware consumption. Thus, the descriptor generation module leads the speed almost 15 times faster than a recent solution. The proposed system was implemented on field programmable gate array and the overall time to extract SIFT features for an image having 512×512 pixels is only 6.55 ms (sufficient for real-time applications), and the number of feature points can reach up to 2900.",
"title": ""
},
{
"docid": "11962ec2381422cfac77ad543b519545",
"text": "In high dimensions, most machine learning methods are brittle to even a small fraction of structured outliers. To address this, we introduce a new meta-algorithm that can take in a base learner such as least squares or stochastic gradient descent, and harden the learner to be resistant to outliers. Our method, Sever, possesses strong theoretical guarantees yet is also highly scalable—beyond running the base learner itself, it only requires computing the top singular vector of a certain n×d matrix. We apply Sever on a drug design dataset and a spam classification dataset, and find that in both cases it has substantially greater robustness than several baselines. On the spam dataset, with 1% corruptions, we achieved 7.4% test error, compared to 13.4%− 20.5% for the baselines, and 3% error on the uncorrupted dataset. Similarly, on the drug design dataset, with 10% corruptions, we achieved 1.42 mean-squared error test error, compared to 1.51-2.33 for the baselines, and 1.23 error on the uncorrupted dataset.",
"title": ""
},
{
"docid": "8f750438e7d78873fd33174d2e347ea5",
"text": "This paper discusses the possibility of recognizing and predicting user activities in the IoT (Internet of Things) based smart environment. The activity recognition is usually done through two steps: activity pattern clustering and activity type decision. Although many related works have been suggested, they had some limited performance because they focused only on one part between the two steps. This paper tries to find the best combination of a pattern clustering method and an activity decision algorithm among various existing works. For the first step, in order to classify so varied and complex user activities, we use a relevant and efficient unsupervised learning method called the K-pattern clustering algorithm. In the second step, the training of smart environment for recognizing and predicting user activities inside his/her personal space is done by utilizing the artificial neural network based on the Allen's temporal relations. The experimental results show that our combined method provides the higher recognition accuracy for various activities, as compared with other data mining classification algorithms. Furthermore, it is more appropriate for a dynamic environment like an IoT based smart home.",
"title": ""
},
{
"docid": "827493ff47cff1defaeafff2ef180dce",
"text": "We present a static analysis algorithm for detecting security vulnerabilities in PHP, a popular server-side scripting language for building web applications. Our analysis employs a novel three-tier architecture to capture information at decreasing levels of granularity at the intrablock, intraprocedural, and interprocedural level. This architecture enables us to handle dynamic features unique to scripting languages such as dynamic typing and code inclusion, which have not been adequately addressed by previous techniques. We demonstrate the effectiveness of our approach by running our tool on six popular open source PHP code bases and finding 105 previously unknown security vulnerabilities, most of which we believe are remotely exploitable.",
"title": ""
},
{
"docid": "d1b6007cfb2f8d6227817ab482758bc5",
"text": "Patient Health Monitoring is the one of the field that is rapidly growing very fast nowadays with the advancement of technologies many researchers have come with differentdesigns for patient health monitoring systems as per the technological development. With the widespread of internet, Internet of things is among of the emerged field recently in which many have been able to incorporate it into different applications. In this paper we introduce the system called Iot based patient health monitoring system using LabVIEW and Wireless Sensor Network (WSN).The system will be able to take patients physiological parameters and transmit it wirelessly via Xbees, displays the sensor data onLabVIEW and publish on webserver to enable other health care givers from far distance to visualize, control and monitor continuously via internet connectivity.",
"title": ""
},
{
"docid": "a3cd3ec70b5d794173db36cb9a219403",
"text": "We consider the problem of grasping novel objects in cluttered environments. If a full 3-d model of the scene were available, one could use the model to estimate the stability and robustness of different grasps (formalized as form/force-closure, etc); in practice, however, a robot facing a novel object will usually be able to perceive only the front (visible) faces of the object. In this paper, we propose an approach to grasping that estimates the stability of different grasps, given only noisy estimates of the shape of visible portions of an object, such as that obtained from a depth sensor. By combining this with a kinematic description of a robot arm and hand, our algorithm is able to compute a specific positioning of the robot’s fingers so as to grasp an object. We test our algorithm on two robots (with very different arms/manipulators, including one with a multi-fingered hand). We report results on the task of grasping objects of significantly different shapes and appearances than ones in the training set, both in highly cluttered and in uncluttered environments. We also apply our algorithm to the problem of unloading items from a dishwasher. Introduction We consider the problem of grasping novel objects, in the presence of significant amounts of clutter. A key challenge in this setting is that a full 3-d model of the scene is typically not available. Instead, a robot’s depth sensors can usually estimate only the shape of the visible portions of the scene. In this paper, we propose an algorithm that, given such partial models of the scene, selects a grasp—that is, a configuration of the robot’s arm and fingers—to try to pick up an object. If a full 3-d model (including the occluded portions of a scene) were available, then methods such as formand forceclosure (Mason and Salisbury 1985; Bicchi and Kumar 2000; Pollard 2004) and other grasp quality metrics (Pelossof et al. 2004; Hsiao, Kaelbling, and Lozano-Perez 2007; Ciocarlie, Goldfeder, and Allen 2007) can be used to try to find a good grasp. However, given only the point cloud returned by stereo vision or other depth sensors, a straightforward application of these ideas is impossible, since we do not have a model of the occluded portions of the scene. Copyright c © 2008, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Figure 1: Image of an environment (left) and the 3-d pointcloud (right) returned by the Swissranger depth sensor. In detail, we will consider a robot that uses a camera, together with a depth sensor, to perceive a scene. The depth sensor returns a “point cloud,” corresponding to 3-d locations that it has found on the front unoccluded surfaces of the objects. (See Fig. 1.) Such point clouds are typically noisy (because of small errors in the depth estimates); but more importantly, they are also incomplete. 1 This work builds on Saxena et al. (2006a; 2006b; 2007; 2008) which applied supervised learning to identify visual properties that indicate good grasps, given a 2-d image of the scene. However, their algorithm only chose a 3-d “grasp point”—that is, the 3-d position (and 3-d orientation; Saxena et al. 2007) of the center of the end-effector. Thus, it did not generalize well to more complex arms and hands, such as to multi-fingered hands where one has to not only choose the 3d position (and orientation) of the hand, but also address the high dof problem of choosing the positions of all the fingers. 
Our approach begins by computing a number of features of grasp quality, using both 2-d image and the 3-d point cloud features. For example, the 3-d data is used to compute a number of grasp quality metrics, such as the degree to which the fingers are exerting forces normal to the surfaces of the object, and the degree to which they enclose the object. Using such features, we then apply a supervised learning algorithm to estimate the degree to which different configurations of the full arm and fingers reflect good grasps. We test our algorithm on two robots, on a variety of objects of shapes very different from ones in the training set, including a ski boot, a coil of wire, a game controller, and others.",
"title": ""
}
] | scidocsrr |
fd78759cc271df5f6e213c2a3a9b006a | A Light-Weight Solution to Preservation of Access Pattern Privacy in Un-trusted Clouds | [
{
"docid": "70cc8c058105b905eebdf941ca2d3f2e",
"text": "Cloud computing is an emerging computing paradigm in which resources of the computing infrastructure are provided as services over the Internet. As promising as it is, this paradigm also brings forth many new challenges for data security and access control when users outsource sensitive data for sharing on cloud servers, which are not within the same trusted domain as data owners. To keep sensitive user data confidential against untrusted servers, existing solutions usually apply cryptographic methods by disclosing data decryption keys only to authorized users. However, in doing so, these solutions inevitably introduce a heavy computation overhead on the data owner for key distribution and data management when fine-grained data access control is desired, and thus do not scale well. The problem of simultaneously achieving fine-grainedness, scalability, and data confidentiality of access control actually still remains unresolved. This paper addresses this challenging open issue by, on one hand, defining and enforcing access policies based on data attributes, and, on the other hand, allowing the data owner to delegate most of the computation tasks involved in fine-grained data access control to untrusted cloud servers without disclosing the underlying data contents. We achieve this goal by exploiting and uniquely combining techniques of attribute-based encryption (ABE), proxy re-encryption, and lazy re-encryption. Our proposed scheme also has salient properties of user access privilege confidentiality and user secret key accountability. Extensive analysis shows that our proposed scheme is highly efficient and provably secure under existing security models.",
"title": ""
}
] | [
{
"docid": "364f9c36bef260cc938d04ff3b4f4c67",
"text": "We propose a scalable, efficient and accurate approach to retrieve 3D models for objects in the wild. Our contribution is twofold. We first present a 3D pose estimation approach for object categories which significantly outperforms the state-of-the-art on Pascal3D+. Second, we use the estimated pose as a prior to retrieve 3D models which accurately represent the geometry of objects in RGB images. For this purpose, we render depth images from 3D models under our predicted pose and match learned image descriptors of RGB images against those of rendered depth images using a CNN-based multi-view metric learning approach. In this way, we are the first to report quantitative results for 3D model retrieval on Pascal3D+, where our method chooses the same models as human annotators for 50% of the validation images on average. In addition, we show that our method, which was trained purely on Pascal3D+, retrieves rich and accurate 3D models from ShapeNet given RGB images of objects in the wild.",
"title": ""
},
{
"docid": "b4978b2fbefc79fba6e69ad8fd55ebf9",
"text": "This paper proposes an approach based on Least Squares Suppo rt Vect r Machines (LS-SVMs) for solving second order parti al differential equations (PDEs) with variable coe fficients. Contrary to most existing techniques, the proposed m thod provides a closed form approximate solution. The optimal representat ion of the solution is obtained in the primal-dual setting. T he model is built by incorporating the initial /boundary conditions as constraints of an optimization prob lem. The developed method is well suited for problems involving singular, variable and const a t coefficients as well as problems with irregular geometrical domai ns. Numerical results for linear and nonlinear PDEs demonstrat e he efficiency of the proposed method over existing methods.",
"title": ""
},
{
"docid": "26cc29177040461634929eb1fa13395d",
"text": "In this paper, we first characterize distributed real-time systems by the following two properties that have to be supported: best eflorl and leas2 suffering. Then, we propose a distributed real-time object model DRO which complies these properties. Based on the DRO model, we design an object oriented programming language DROL: an extension of C++ with the capa.bility of describing distributed real-time systems. The most eminent feature of DROL is that users can describe on meta level the semantics of message communications as a communication protocol with sending and receiving primitives. With this feature, we can construct a flexible distributed real-time system satisfying specifications which include timing constraints. We implement a runtime system of DROL on the ARTS kernel, and evaluate the efficiency of the prototype implementation as well as confirm the high expressive power of the language.",
"title": ""
},
{
"docid": "e181f73c36c1d8c9463ef34da29d9e03",
"text": "This paper examines prospects and limitations of citation studies in the humanities. We begin by presenting an overview of bibliometric analysis, noting several barriers to applying this method in the humanities. Following that, we present an experimental tool for extracting and classifying citation contexts in humanities journal articles. This tool reports the bibliographic information about each reference, as well as three features about its context(s): frequency, locationin-document, and polarity. We found that extraction was highly successful (above 85%) for three of the four journals, and statistics for the three citation figures were broadly consistent with previous research. We conclude by noting several limitations of the sentiment classifier and suggesting future areas for refinement. .................................................................................................................................................................................",
"title": ""
},
{
"docid": "62d7490c530808eb7158f601292a55a1",
"text": "Together with an explosive growth of the mobile applications and emerging of cloud computing concept, mobile cloud computing (MCC) has been introduced to be a potential technology for mobile services. MCC integrates the cloud computing into the mobile environment and overcomes obstacles related to the performance (e.g., battery life, storage, and bandwidth), environment (e.g., heterogeneity, scalability, and availability), and security (e.g., reliability and privacy) discussed in mobile computing. This paper gives a survey of MCC, which helps general readers have an overview of the MCC including the definition, architecture, and applications. The issues, existing solutions, and approaches are presented. In addition, the future research directions of MCC are discussed. Copyright © 2011 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "3840043afe85979eb901ad05b5b8952f",
"text": "Cross media retrieval systems have received increasing interest in recent years. Due to the semantic gap between low-level features and high-level semantic concepts of multimedia data, many researchers have explored joint-model techniques in cross media retrieval systems. Previous joint-model approaches usually focus on two traditional ways to design cross media retrieval systems: (a) fusing features from different media data; (b) learning different models for different media data and fusing their outputs. However, the process of fusing features or outputs will lose both low- and high-level abstraction information of media data. Hence, both ways do not really reveal the semantic correlations among the heterogeneous multimedia data. In this paper, we introduce a novel method for the cross media retrieval task, named Parallel Field Alignment Retrieval (PFAR), which integrates a manifold alignment framework from the perspective of vector fields. Instead of fusing original features or outputs, we consider the cross media retrieval as a manifold alignment problem using parallel fields. The proposed manifold alignment algorithm can effectively preserve the metric of data manifolds, model heterogeneous media data and project their relationship into intermediate latent semantic spaces during the process of manifold alignment. After the alignment, the semantic correlations are also determined. In this way, the cross media retrieval task can be resolved by the determined semantic correlations. Comprehensive experimental results have demonstrated the effectiveness of our approach.",
"title": ""
},
{
"docid": "52a7343d4b2c5cf8312b0ea4bb68098b",
"text": "Blockchain technologies can enable decentralized and trustful features for the Internet of Things (IoT). Although, existing blockchain based solutions to provide data integrity verification for semi-trusted data storages (e.g. cloud providers) cannot respect the time determinism required by Cyber-Physical Systems (CPS). Additionally, they cannot be applied to resource-constrained IoT devices. We propose an architecture that can take advantage of blockchain features to allow further integrity verification of data produced by IoT devices even in the realm of CPS. Our architecture is divided into three levels, each of them responsible for tasks compatible with their resources capabilities. The first level, composed of sensors, actuators, and gateways, introduces the concept of Proof-of-Trust (PoT), an energy-efficient, time-deterministic and secure communication based on the Trustful Space-Time Protocol (TSTP). Upper levels are responsible for data persistence and integrity verification in the Cloud. The work also comprises a performance evaluation of the critical path of data to demonstrate that the architecture respect time-bounded operations demanded by the sense-decide-actuate cycle of CPSs. The additional delay of 5.894us added by our architecture is negligible for a typical TSTP with IEEE 802.15.4 radios which has communication latencies in the order of hundreds of milisseconds in each hop.",
"title": ""
},
{
"docid": "d1525fdab295a16d5610210e80fb8104",
"text": "The analysis of big data requires powerful, scalable, and accurate data analytics techniques that the traditional data mining and machine learning do not have as a whole. Therefore, new data analytics frameworks are needed to deal with the big data challenges such as volumes, velocity, veracity, variety of the data. Distributed data mining constitutes a promising approach for big data sets, as they are usually produced in distributed locations, and processing them on their local sites will reduce significantly the response times, communications, etc. In this paper, we propose to study the performance of a distributed clustering, called Dynamic Distributed Clustering (DDC). DDC has the ability to remotely generate clusters and then aggregate them using an efficient aggregation algorithm. The technique is developed for spatial datasets. We evaluated the DDC using two types of communications (synchronous and asynchronous), and tested using various load distributions. The experimental results show that the approach has super-linear speed-up, scales up very well, and can take advantage of the recent programming models, such as MapReduce model, as its results are not affected by the types of communications.",
"title": ""
},
{
"docid": "3dff0dd3f6518f7bc7d8ea6e3a3b23e6",
"text": "We present a Deep Cuboid Detector which takes a consumer-quality RGB image of a cluttered scene and localizes all 3D cuboids (box-like objects). Contrary to classical approaches which fit a 3D model from low-level cues like corners, edges, and vanishing points, we propose an end-to-end deep learning system to detect cuboids across many semantic categories (e.g., ovens, shipping boxes, and furniture). We localize cuboids with a 2D bounding box, and simultaneously localize the cuboid’s corners, effectively producing a 3D interpretation of box-like objects. We refine keypoints by pooling convolutional features iteratively, improving the baseline method significantly. Our deep learning cuboid detector is trained in an end-to-end fashion and is suitable for real-time applications in augmented reality (AR) and robotics.",
"title": ""
},
{
"docid": "c52d31c7ae39d1a7df04140e920a26d2",
"text": "In the past half-decade, Amazon Mechanical Turk has radically changed the way many scholars do research. The availability of a massive, distributed, anonymous crowd of individuals willing to perform general human-intelligence micro-tasks for micro-payments is a valuable resource for researchers and practitioners. This paper addresses the challenges of obtaining quality annotations for subjective judgment oriented tasks of varying difficulty. We design and conduct a large, controlled experiment (N=68,000) to measure the efficacy of selected strategies for obtaining high quality data annotations from non-experts. Our results point to the advantages of person-oriented strategies over process-oriented strategies. Specifically, we find that screening workers for requisite cognitive aptitudes and providing training in qualitative coding techniques is quite effective, significantly outperforming control and baseline conditions. Interestingly, such strategies can improve coder annotation accuracy above and beyond common benchmark strategies such as Bayesian Truth Serum (BTS).",
"title": ""
},
{
"docid": "c19863ef5fa4979f288763837e887a7c",
"text": "Decentralized cryptocurrencies have pushed deployments of distributed consensus to more stringent environments than ever before. Most existing protocols rely on proofs-of-work which require expensive computational puzzles to enforce, imprecisely speaking, “one vote per unit of computation”. The enormous amount of energy wasted by these protocols has been a topic of central debate, and well-known cryptocurrencies have announced it a top priority to alternative paradigms. Among the proposed alternative solutions, proofs-of-stake protocols have been of particular interest, where roughly speaking, the idea is to enforce “one vote per unit of stake”. Although the community have rushed to propose numerous candidates for proofs-of-stake, no existing protocol has offered formal proofs of security, which we believe to be a critical, indispensible ingredient of a distributed consensus protocol, particularly one that is to underly a high-value cryptocurrency system. In this work, we seek to address the following basic questions: • What kind of functionalities and robustness requirements should a consensus candidate offer to be suitable in a proof-of-stake application? • Can we design a provably secure protocol that satisfies these requirements? To the best of our knowledge, we are the first to formally articulate a set of requirements for consensus candidates for proofs-of-stake. We argue that any consensus protocol satisfying these properties can be used for proofs-of-stake, as long as money does not switch hands too quickly. Moreover, we provide the first consensus candidate that provably satisfies the desired robustness properties.",
"title": ""
},
{
"docid": "4aeb208dd40ee7fdd7837b7faabfbdcf",
"text": "The growing recognition of precision medicine by clinicians, health systems, and the pharmaceutical industry, as well as by patients and policymakers,1 reflects the emergence of a field that is accelerating rapidly and will leave a major imprint on the practice of medicine. In this article, we summarize the forces accelerating precision medicine, the challenges to its implementation, and the implications for clinical practice.",
"title": ""
},
{
"docid": "4df7522303220444651f85b38b1a120f",
"text": "An efficient and novel technique is developed for detecting and localizing corners of planar curves. This paper discusses the gradient feature distribution of planar curves and constructs gradient correlation matrices (GCMs) over the region of support (ROS) of these planar curves. It is shown that the eigenstructure and determinant of the GCMs encode the geometric features of these curves, such as curvature features and the dominant points. The determinant of the GCMs is shown to have a strong corner response, and is used as a ‘‘cornerness’’ measure of planar curves. A comprehensive performance evaluation of the proposed detector is performed, using the ACU and localization error criteria. Experimental results demonstrate that the GCM detector has a strong corner position response, along with a high detection rate and good localization performance. & 2009 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "550e19033cb00938aed89eb3cce50a76",
"text": "This paper presents a high gain wide band 2×2 microstrip array antenna. The microstrip array antenna (MSA) is fabricated on inexpensive FR4 substrate and placed 1mm above ground plane to improve the bandwidth and efficiency of the antenna. A reactive impedance surface (RIS) consisting of 13×13 array of 4 mm square patches with inter-element spacing of 1 mm is fabricated on the bottom side of FR4 substrate. RIS reduces the coupling between the ground plane and MSA array and therefore increases the efficiency of antenna. It enhances the bandwidth and gain of the antenna. RIS also helps in reduction of SLL and cross polarization. This MSA array with RIS is place in a Fabry Perot cavity (FPC) resonator to enhance the gain of the antenna. 2×2 and 4×4 array of square parasitic patches are fed by MSA array fabricated on a FR4 superstrate which forms the partially reflecting surface of FPC. The FR4 superstrate layer is supported with help of dielectric rods at the edges with air at about λ0/2 from ground plane. A microstrip feed line network is designed and the printed MSA array is fed by a 50 Ω coaxial probe. The VSWR is <; 2 is obtained over 5.725-6.4 GHz, which covers 5.725-5.875 GHz ISM WLAN frequency band and 5.9-6.4 GHz satellite uplink C band. The antenna gain increases from 12 dB to 15.8 dB as 4×4 square parasitic patches are fabricated on superstrate layer. The gain variation is less than 2 dB over the entire band. The antenna structure provides SLL and cross polarization less than -2ο dB, front to back lobe ratio higher than 20 dB and more than 70 % antenna efficiency. A prototype structure is realized and tested. The measured results satisfy with the simulation results. The antenna can be a suitable candidate for access point, satellite communication, mobile base station antenna and terrestrial communication system.",
"title": ""
},
{
"docid": "9051f952259ddd4393e9d14dbac6fe6a",
"text": "Doubly fed induction generators (DFIGs) are widely used in variable-speed wind turbines. Despite the well-accepted performance of DFIGs, these generators are highly sensible to grid faults. Hence, the presence of grid faults must be considered in the design of any control system to be deployed on DFIGs. Sliding mode control (SMC) is a useful alternative for electric machinery control since SMC offers fast dynamic response and less sensitivity to parameter variations and disturbances. Additionally, the natural outputs of SMC are discontinuous signals allowing direct switching of power electronic devices. In this paper, a grid-voltage-oriented SMC is proposed and tested under low voltage grid faults. Unlike other nonmodulated techniques such as direct torque control, there is not a necessity of modifying the controller structure for withstanding low depth voltage dips. For stator natural flux cancelation, the torque and reactive power references are modified to inject a demagnetizing current. Simulation results demonstrate the demagnetization of the natural flux component as well as a robust tracking control under balanced and unbalanced voltage dips.",
"title": ""
},
{
"docid": "bed29a89354c1dfcebbdde38d1addd1d",
"text": "Eosinophilic skin diseases, commonly termed as eosinophilic dermatoses, refer to a broad spectrum of skin diseases characterized by eosinophil infiltration and/or degranulation in skin lesions, with or without blood eosinophilia. The majority of eosinophilic dermatoses lie in the allergy-related group, including allergic drug eruption, urticaria, allergic contact dermatitis, atopic dermatitis, and eczema. Parasitic infestations, arthropod bites, and autoimmune blistering skin diseases such as bullous pemphigoid, are also common. Besides these, there are several rare types of eosinophilic dermatoses with unknown origin, in which eosinophil infiltration is a central component and affects specific tissue layers or adnexal structures of the skin, such as the dermis, subcutaneous fat, fascia, follicles, and cutaneous vessels. Some typical examples are eosinophilic cellulitis, granuloma faciale, eosinophilic pustular folliculitis, recurrent cutaneous eosinophilic vasculitis, and eosinophilic fasciitis. Although tissue eosinophilia is a common feature shared by these disorders, their clinical and pathological properties differ dramatically. Among these rare entities, eosinophilic pustular folliculitis may be associated with human immunodeficiency virus (HIV) infection or malignancies, and some other diseases, like eosinophilic fasciitis and eosinophilic cellulitis, may be associated with an underlying hematological disorder, while others are considered idiopathic. However, for most of these rare eosinophilic dermatoses, the causes and the pathogenic mechanisms remain largely unknown, and systemic, high-quality clinical investigations are needed for advances in better strategies for clinical diagnosis and treatment. Here, we present a comprehensive review on the etiology, pathogenesis, clinical features, and management of these rare entities, with an emphasis on recent advances and current consensus.",
"title": ""
},
{
"docid": "94d6182c7bf77d179e59247d04573bcd",
"text": "Flash memory cells typically undergo a few thousand Program/Erase (P/E) cycles before they wear out. However, the programming strategy of flash devices and process variations cause some flash cells to wear out significantly faster than others. This paper studies this variability on two commercial devices, acknowledges its unavoidability, figures out how to identify the weakest cells, and introduces a wear unbalancing technique that let the strongest cells relieve the weak ones in order to lengthen the overall lifetime of the device. Our technique periodically skips or relieves the weakest pages whenever a flash block is programmed. Relieving the weakest pages can lead to a lifetime extension of up to 60% for a negligible memory and storage overhead, while minimally affecting (sometimes improving) the write performance. Future technology nodes will bring larger variance to page endurance, increasing the need for techniques similar to the one proposed in this work.",
"title": ""
},
{
"docid": "3de480136e0fd3e122e63870bc49ebdb",
"text": "22FDX™ is the industry's first FDSOI technology architected to meet the requirements of emerging mobile, Internet-of-Things (IoT), and RF applications. This platform achieves the power and performance efficiency of a 16/14nm FinFET technology in a cost effective, planar device architecture that can be implemented with ∼30% fewer masks. Performance comes from a second generation FDSOI transistor, which produces nFET (pFET) drive currents of 910μΑ/μm (856μΑ/μm) at 0.8 V and 100nA/μm Ioff. For ultra-low power applications, it offers low-voltage operation down to 0.4V V<inf>min</inf> for 8T logic libraries, as well as 0.62V and 0.52V V<inf>min</inf> for high-density and high-current bitcells, ultra-low leakage devices approaching 1pA/μm I<inf>off</inf>, and body-biasing to actively trade-off power and performance. Superior RF/Analog characteristics to FinFET are achieved including high f<inf>T</inf>/f<inf>MAx</inf> of 375GHz/290GHz and 260GHz/250GHz for nFET and pFET, respectively. The high f<inf>MAx</inf> extends the capabilities to 5G and milli-meter wave (>24GHz) RF applications.",
"title": ""
}
] | scidocsrr |
ec9c10e81b972a103b15041f17c2c8e9 | Individual Tree Delineation in Windbreaks Using Airborne-Laser-Scanning Data and Unmanned Aerial Vehicle Stereo Images | [
{
"docid": "a0c37bb6608f51f7095d6e5392f3c2f9",
"text": "The main study objective was to develop robust processing and analysis techniques to facilitate the use of small-footprint lidar data for estimating plot-level tree height by measuring individual trees identifiable on the three-dimensional lidar surface. Lidar processing techniques included data fusion with multispectral optical data and local filtering with both square and circular windows of variable size. The lidar system used for this study produced an average footprint of 0.65 m and an average distance between laser shots of 0.7 m. The lidar data set was acquired over deciduous and coniferous stands with settings typical of the southeastern United States. The lidar-derived tree measurements were used with regression models and cross-validation to estimate tree height on 0.017-ha plots. For the pine plots, lidar measurements explained 97 percent of the variance associated with the mean height of dominant trees. For deciduous plots, regression models explained 79 percent of the mean height variance for dominant trees. Filtering for local maximum with circular windows gave better fitting models for pines, while for deciduous trees, filtering with square windows provided a slightly better model fit. Using lidar and optical data fusion to differentiate between forest types provided better results for estimating average plot height for pines. Estimating tree height for deciduous plots gave superior results without calibrating the search window size based on forest type. Introduction Laser scanner systems currently available have experienced a remarkable evolution, driven by advances in the remote sensing and surveying industry. Lidar sensors offer impressive performance that challange physical barriers in the optical and electronic domain by offering a high density of points at scanning frequencies of 50,000 pulses/second, multiple echoes per laser pulse, intensity measurements for the returning signal, and centimeter accuracy for horizontal and vertical positioning. Given a high density of points, processing algorithms can identify single trees or groups of trees in order to extract various measurements on their three-dimensional representation (e.g., Hyyppä and Inkinen, 2002). Seeing the Trees in the Forest: Using Lidar and Multispectral Data Fusion with Local Filtering and Variable Window Size for Estimating Tree Height Sorin C. Popescu and Randolph H. Wynne The foundations of lidar forest measurements lie with the photogrammetric techniques developed to assess tree height, volume, and biomass. Lidar characteristics, such as high sampling intensity, extensive areal coverage, ability to penetrate beneath the top layer of the canopy, precise geolocation, and accurate ranging measurements, make airborne laser systems useful for directly assessing vegetation characteristics. Early lidar studies had been used to estimate forest vegetation characteristics, such as percent canopy cover, biomass (Nelson et al., 1984; Nelson et al., 1988a; Nelson et al., 1988b; Nelson et al., 1997), and gross-merchantable timber volume (Maclean and Krabill, 1986). 
Research efforts investigated the estimation of forest stand characteristics with scanning lasers that provided lidar data with either relatively large laser footprints, i.e., 5 to 25 m (Harding et al., 1994; Lefsky et al., 1997; Weishampel et al., 1997; Blair et al., 1999; Lefsky et al., 1999; Means et al., 1999) or small footprints, but with only one laser return (Næsset, 1997a; Næsset, 1997b; Magnussen and Boudewyn, 1998; Magnussen et al., 1999; Hyyppä et al., 2001). A small-footprint lidar with the potential to record the entire time-varying distribution of returned pulse energy or waveform was used by Nilsson (1996) for measuring tree heights and stand volume. As more systems operate with high performance, research efforts for forestry applications of lidar have become very intense and resulted in a series of studies that proved that lidar technology is well suited for providing estimates of forest biophysical parameters. Needs for timely and accurate estimates of forest biophysical parameters have arisen in response to increased demands on forest inventory and analysis. The height of a forest stand is a crucial forest inventory attribute for calculating timber volume, site potential, and silvicultural treatment scheduling. Measuring of stand height by current manual photogrammetric or field survey techniques is time consuming and rather expensive. Tree heights have been derived from scanning lidar data sets and have been compared with ground-based canopy height measurements (Næsset, 1997a; Næsset, 1997b; Magnussen and Boudewyn, 1998; Magnussen et al., 1999; Næsset and Bjerknes, 2001; Næsset and Økland, 2002; Persson et al., 2002; Popescu, 2002; Popescu et al., 2002; Holmgren et al., 2003; McCombs et al., 2003). Despite the intense research efforts, practical applications of P H OTO G R A M M E T R I C E N G I N E E R I N G & R E M OT E S E N S I N G May 2004 5 8 9 Department of Forestry, Virginia Tech, 319 Cheatham Hall (0324), Blacksburg, VA 24061 (wynne@vt.edu). S.C. Popescu is presently with the Spatial Sciences Laboratory, Department of Forest Science, Texas A&M University, 1500 Research Parkway, Suite B223, College Station, TX 778452120 (s-popescu@tamu.edu). Photogrammetric Engineering & Remote Sensing Vol. 70, No. 5, May 2004, pp. 589–604. 0099-1112/04/7005–0589/$3.00/0 © 2004 American Society for Photogrammetry and Remote Sensing 02-099.qxd 4/5/04 10:44 PM Page 589",
"title": ""
}
] | [
{
"docid": "24d77eb4ea6ecaa44e652216866ab8c8",
"text": "In the development of smart cities across the world VANET plays a vital role for optimized route between source and destination. The VANETs is based on infra-structure less network. It facilitates vehicles to give information about safety through vehicle to vehicle communication (V2V) or vehicle to infrastructure communication (V2I). In VANETs wireless communication between vehicles so attackers violate authenticity, confidentiality and privacy properties which further effect security. The VANET technology is encircled with security challenges these days. This paper presents overview on VANETs architecture, a related survey on VANET with major concern of the security issues. Further, prevention measures of those issues, and comparative analysis is done. From the survey, found out that encryption and authentication plays an important role in VANETS also some research direction defined for future work.",
"title": ""
},
{
"docid": "faf25bfda6d078195b15f5a36a32673a",
"text": "In high performance VLSI circuits, the power consumption is mainly related to signal transition, charging and discharging of parasitic capacitance in transistor during switching activity. Adiabatic switching is a reversible logic to conserve energy instead of dissipating power reuses it. In this paper, low power multipliers and compressor are designed using adiabatic logic. Compressors are the basic components in many applications like partial product summation in multipliers. The Vedic multiplier is designed using the compressor and the power result is analysed. The designs are implemented and the power results are obtained using TANNER EDA 12.0 tool. This paper presents a novel scheme for analysis of low power multipliers using adiabatic logic in inverter and in the compressor. The scheme is optimized for low power as well as high speed implementation over reported scheme. K e y w o r d s : A d i a b a t i c l o g i c , C o m p r e s s o r , M u l t i p l i e r s .",
"title": ""
},
{
"docid": "adf69030a68ed3bf6fc4d008c50ac5b5",
"text": "Many patients with low back and/or pelvic girdle pain feel relief after application of a pelvic belt. External compression might unload painful ligaments and joints, but the exact mechanical effect on pelvic structures, especially in (active) upright position, is still unknown. In the present study, a static three-dimensional (3-D) pelvic model was used to simulate compression at the level of anterior superior iliac spine and the greater trochanter. The model optimised forces in 100 muscles, 8 ligaments and 8 joints in upright trunk, pelvis and upper legs using a criterion of minimising maximum muscle stress. Initially, abdominal muscles, sacrotuberal ligaments and vertical sacroiliac joints (SIJ) shear forces mainly balanced a trunk weight of 500N in upright position. Application of 50N medial compression force at the anterior superior iliac spine (equivalent to 25N belt tension force) deactivated some dorsal hip muscles and reduced the maximum muscle stress by 37%. Increasing the compression up to 100N reduced the vertical SIJ shear force by 10% and increased SIJ compression force with 52%. Shifting the medial compression force of 100N in steps of 10N to the greater trochanter did not change the muscle activation pattern but further increased SIJ compression force by 40% compared to coxal compression. Moreover, the passive ligament forces were distributed over the sacrotuberal, the sacrospinal and the posterior ligaments. The findings support the cause-related designing of new pelvic belts to unload painful pelvic ligaments or muscles in upright posture.",
"title": ""
},
{
"docid": "d3ec3eeb5e56bdf862f12fe0d9ffe71c",
"text": "This paper will communicate preliminary findings from applied research exploring how to ensure that serious games are cost effective and engaging components of future training solutions. The applied research is part of a multimillion pound program for the Department of Trade and Industry, and involves a partnership between UK industry and academia to determine how bespoke serious games should be used to best satisfy learning needs in a range of contexts. The main objective of this project is to produce a minimum of three serious games prototypes for clients from different sectors (e.g., military, medical and business) each prototype addressing a learning need or learning outcome that helps solve a priority business problem or fulfill a specific training need. This paper will describe a development process that aims to encompass learner specifics and targeted learning outcomes in order to ensure that the serious game is successful. A framework for describing game-based learning scenarios is introduced, and an approach to the analysis that effectively profiles the learner within the learner group with respect to game-based learning is outlined. The proposed solution also takes account of relevant findings from serious games research on particular learner groups that might support the selection and specification of a game. A case study on infection control will be used to show how this approach to the analysis is being applied for a healthcare issue.",
"title": ""
},
{
"docid": "9e5cd32f56abf7ff9d98847970394236",
"text": "This paper presents the results of a detailed study of the singular configurations of 3planar parallel mechanisms with three identical legs. Only prismatic and revolute jo are considered. From the point of view of singularity analysis, there are ten diffe architectures. All of them are examined in a compact and systematic manner using p screw theory. The nature of each possible singular configuration is discussed an singularity loci for a constant orientation of the mobile platform are obtained. For so architectures, simplified designs with easy to determine singularities are identified. @DOI: 10.1115/1.1582878 #",
"title": ""
},
{
"docid": "b269bb721ca2a75fd6291295493b7af8",
"text": "This publication contains reprint articles for which IEEE does not hold copyright. Full text is not available on IEEE Xplore for these articles.",
"title": ""
},
{
"docid": "8d8e7c9777f02c6a4a131f21a66ee870",
"text": "Teaching agile practices is becoming a priority in Software engineering curricula as a result of the increasing use of agile methods (AMs) such as Scrum in the software industry. Limitations in time, scope, and facilities within academic contexts hinder students’ hands-on experience in the use of professional AMs. To enhance students’ exposure to Scrum, we have developed Virtual Scrum, an educational virtual world that simulates a Scrum-based team room through virtual elements such as blackboards, a Web browser, document viewers, charts, and a calendar. A preliminary version of Virtual Scrum was tested with a group of 45 students running a capstone project with and without Virtual Scrum support. Students’ feedback showed that Virtual Scrum is a viable and effective tool to implement the different elements in a Scrum team room and to perform activities throughout the Scrum process. 2013 Wiley Periodicals, Inc. Comput Appl Eng Educ 23:147–156, 2015; View this article online at wileyonlinelibrary.com/journal/cae; DOI 10.1002/cae.21588",
"title": ""
},
{
"docid": "68470cd075d9c475b5ff93578ff7e86d",
"text": "Beyond understanding what is being discussed, human communication requires an awareness of what someone is feeling. One challenge for dialogue agents is being able to recognize feelings in the conversation partner and reply accordingly, a key communicative skill that is trivial for humans. Research in this area is made difficult by the paucity of large-scale publicly available datasets both for emotion and relevant dialogues. This work proposes a new task for empathetic dialogue generation and EMPATHETICDIALOGUES, a dataset of 25k conversations grounded in emotional contexts to facilitate training and evaluating dialogue systems. Our experiments indicate that models explicitly leveraging emotion predictions from previous utterances are perceived to be more empathetic by human evaluators, while improving on other metrics as well (e.g. perceived relevance of responses, BLEU scores).",
"title": ""
},
{
"docid": "0b50ec58f82b7ac4ad50eb90425b3aea",
"text": "OBJECTIVES\nThe study aimed (1) to examine if there are equivalent results in terms of union, alignment and elbow functionally comparing single- to dual-column plating of AO/OTA 13A2 and A3 distal humeral fractures and (2) if there are more implant-related complications in patients managed with bicolumnar plating compared to single-column plate fixation.\n\n\nDESIGN\nThis was a multi-centred retrospective comparative study.\n\n\nSETTING\nThe study was conducted at two academic level 1 trauma centres.\n\n\nPATIENTS/PARTICIPANTS\nA total of 105 patients were identified to have surgical management of extra-articular distal humeral fractures Arbeitsgemeinschaft für Osteosynthesefragen/Orthopaedic Trauma Association (AO/OTA) 13A2 and AO/OTA 13A3).\n\n\nINTERVENTION\nPatients were treated with traditional dual-column plating or a single-column posterolateral small-fragment pre-contoured locking plate used as a neutralisation device with at least five screws in the short distal segment.\n\n\nMAIN OUTCOME MEASUREMENTS\nThe patients' elbow functionality was assessed in terms of range of motion, union and alignment. In addition, the rate of complications between the groups including radial nerve palsy, implant-related complications (painful prominence and/or ulnar nerve neuritis) and elbow stiffness were compared.\n\n\nRESULTS\nPatients treated with single-column plating had similar union rates and alignment. However, single-column plating resulted in a significantly better range of motion with less complications.\n\n\nCONCLUSIONS\nThe current study suggests that exposure/instrumentation of only the lateral column is a reliable and preferred technique. This technique allows for comparable union rates and alignment with increased elbow functionality and decreased number of complications.",
"title": ""
},
{
"docid": "0db28b5ec56259c8f92f6cc04d4c2601",
"text": "The application of neuroscience to marketing, and in particular to the consumer psychology of brands, has gained popularity over the past decade in the academic and the corporate world. In this paper, we provide an overview of the current and previous research in this area and explainwhy researchers and practitioners alike are excited about applying neuroscience to the consumer psychology of brands. We identify critical issues of past research and discuss how to address these issues in future research. We conclude with our vision of the future potential of research at the intersection of neuroscience and consumer psychology. © 2011 Society for Consumer Psychology. Published by Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "9da6883a9fe700aeb84208efbf0a56a3",
"text": "With the increasing demand for more energy efficient buildings, the construction industry is faced with the challenge to ensure that the energy efficiency predicted during the design is realised once a building is in use. There is, however, significant evidence to suggest that buildings are not performing as well as expected and initiatives such as PROBE and CarbonBuzz aim to illustrate the extent of this so called „Performance Gap‟. This paper discusses the underlying causes of discrepancies between detailed energy modelling predictions and in-use performance of occupied buildings (after the twelve month liability period). Many of the causal factors relate to the use of unrealistic input parameters regarding occupancy behaviour and facilities management in building energy models. In turn, this is associated with the lack of feedback to designers once a building has been constructed and occupied. This paper aims to demonstrate how knowledge acquired from Post-Occupancy Evaluation (POE) can be used to produce more accurate energy performance models. A case study focused specifically on lighting, small power and catering equipment in a high density office building is presented. Results show that by combining monitored data with predictive energy modelling, it was possible to increase the accuracy of the model to within 3% of actual electricity consumption values. Future work will seek to use detailed POE data to develop a set of evidence based benchmarks for energy consumption in office buildings. It is envisioned that these benchmarks will inform designers on the impact of occupancy and management on the actual energy consumption of buildings. Moreover, it should enable the use of more realistic input parameters in energy models, bringing the predicted figures closer to reality.",
"title": ""
},
{
"docid": "ced98c32f887001d40e783ab7b294e1a",
"text": "This paper proposes a two-layer High Dynamic Range (HDR) coding scheme using a new tone mapping. Our tone mapping method transforms an HDR image onto a Low Dynamic Range (LDR) image by using a base map that is a smoothed version of the HDR luminance. In our scheme, the HDR image can be reconstructed from the tone mapped LDR image. Our method makes use of this property to realize a two-layer HDR coding by encoding both of the tone mapped LDR image and the base map. This paper validates its effectiveness of our approach through some experiments.",
"title": ""
},
{
"docid": "f1fe8a9d2e4886f040b494d76bc4bb78",
"text": "The benefits of enhanced condition monitoring in the asset management of the electricity transmission infrastructure are increasingly being exploited by the grid operators. Adding more sensors helps to track the plant health more accurately. However, the installation or operating costs of any additional sensors could outweigh the benefits they bring due to the requirement for new cabling or battery maintenance. Energy harvesting devices are therefore being proposed to power a new generation of wireless sensors. The harvesting devices could enable the sensors to be maintenance free over their lifetime and substantially reduce the cost of installing and operating a condition monitoring system.",
"title": ""
},
{
"docid": "02d518721f8ab3c4b2abb854c9111267",
"text": "BACKGROUND\nDue to the excessive and pathologic effects of depression and anxiety, it is important to identify the role of protective factors, such as effective coping and social support. This study examined the associations between perceived social support and coping styles with depression and anxiety levels.\n\n\nMATERIALS AND METHODS\nThis cross sectional study was part of the Study on the Epidemiology of Psychological, Alimentary Health and Nutrition project. A total 4658 individuals aged ≥20 years was selected by cluster random sampling. Subjects completed questionnaires, which were used to describe perceived social support, coping styles, depression and anxiety. t-test, Chi-square test, pearson's correlation and Logistic regression analysis were used in data analyses.\n\n\nRESULTS\nThe results of Logistic regression analysis showed after adjusting demographic characteristics for odd ratio of anxiety, active copings such as positive re-interpretation and growth with odds ratios; 95% confidence interval: 0.82 (0.76, 0.89), problem engagement (0.92 [0.87, 0.97]), acceptance (0.82 [0.74, 0.92]) and also among perceived social supports, family (0.77 [0.71, 0.84]) and others (0.84 [0.76, 0.91]) were protective. In addition to, for odd ratio of depression, active copings such as positive re-interpretation and growth (0.74 [0.69, 0.79]), problem engagement (0.89 [0.86, 0.93]), and support seeking (0.96 [0.93, 0.99]) and all of social support types (family [0.75 (0.70, 0.80)], friends [0.90 (0.85, 0.95)] and others [0.80 (0.75, 0.86)]) were protective. Avoidance was risk factor for both of anxiety (1.19 [1.12, 1.27]) and depression (1.22 [1.16, 1.29]).\n\n\nCONCLUSION\nThis study shows active coping styles and perceived social supports particularly positive re-interpretation and family social support are protective factors for depression and anxiety.",
"title": ""
},
{
"docid": "eb8087d0f30945d45a0deb02b7f7bb53",
"text": "The use of teams, especially virtual teams, is growing significantly in corporations, branches of the government and nonprofit organizations. However, despite this prevalence, little is understood in terms of how to best train these teams for optimal performance. Team training is commonly cited as a factor for increasing team performance, yet, team training is often applied in a haphazard and brash manner, if it is even applied at all. Therefore, this paper attempts to identify the flow of a training model for virtual teams. Rooted in transactive memory systems, this theoretical model combines the science of encoding, storing and retrieving information with the science of team training.",
"title": ""
},
{
"docid": "210e9bc5f2312ca49438e6209ecac62e",
"text": "Image classification has become one of the main tasks in the field of computer vision technologies. In this context, a recent algorithm called CapsNet that implements an approach based on activity vectors and dynamic routing between capsules may overcome some of the limitations of the current state of the art artificial neural networks (ANN) classifiers, such as convolutional neural networks (CNN). In this paper, we evaluated the performance of the CapsNet algorithm in comparison with three well-known classifiers (Fisherfaces, LeNet, and ResNet). We tested the classification accuracy on four datasets with a different number of instances and classes, including images of faces, traffic signs, and everyday objects. The evaluation results show that even for simple architectures, training the CapsNet algorithm requires significant computational resources and its classification performance falls below the average accuracy values of the other three classifiers. However, we argue that CapsNet seems to be a promising new technique for image classification, and further experiments using more robust computation resources and refined CapsNet architectures may produce better outcomes.",
"title": ""
},
{
"docid": "19ea89fc23e7c4d564e4a164cfc4947a",
"text": "OBJECTIVES\nThe purpose of this study was to evaluate the proximity of the mandibular molar apex to the buccal bone surface in order to provide anatomic information for apical surgery.\n\n\nMATERIALS AND METHODS\nCone-beam computed tomography (CBCT) images of 127 mandibular first molars and 153 mandibular second molars were analyzed from 160 patients' records. The distance was measured from the buccal bone surface to the root apex and the apical 3.0 mm on the cross-sectional view of CBCT.\n\n\nRESULTS\nThe second molar apex and apical 3 mm were located significantly deeper relative to the buccal bone surface compared with the first molar (p < 0.01). For the mandibular second molars, the distance from the buccal bone surface to the root apex was significantly shorter in patients over 70 years of age (p < 0.05). Furthermore, this distance was significantly shorter when the first molar was missing compared to nonmissing cases (p < 0.05). For the mandibular first molars, the distance to the distal root apex of one distal-rooted tooth was significantly greater than the distance to the disto-buccal root apex (p < 0.01). In mandibular second molar, the distance to the apex of C-shaped roots was significantly greater than the distance to the mesial root apex of non-C-shaped roots (p < 0.01).\n\n\nCONCLUSIONS\nFor apical surgery in mandibular molars, the distance from the buccal bone surface to the apex and apical 3 mm is significantly affected by the location, patient age, an adjacent missing anterior tooth, and root configuration.",
"title": ""
},
{
"docid": "941df83e65700bc2e5ee7226b96e4f54",
"text": "This paper presents design and analysis of a three phase induction motor drive using IGBT‟s at the inverter power stage with volts hertz control (V/F) in closed loop using dsPIC30F2010 as a controller. It is a 16 bit high-performance digital signal controller (DSC). DSC is a single chip embedded controller that integrates the controller attributes of a microcontroller with the computation and throughput capabilities of a DSP in a single core. A 1HP, 3-phase, 415V, 50Hz induction motor is used as load for the inverter. Digital Storage Oscilloscope Textronix TDS2024B is used to record and analyze the various waveforms. The experimental results for V/F control of 3Phase induction motor using dsPIC30F2010 chip clearly shows constant volts per hertz and stable inverter line to line output voltage. Keywords--DSC, constant volts per hertz, PWM inverter, ACIM.",
"title": ""
},
{
"docid": "91f36db08fdc766d5dc86007dc7a02ad",
"text": "In the last few years communication technology has been improved, which increase the need of secure data communication. For this, many researchers have exerted much of their time and efforts in an attempt to find suitable ways for data hiding. There is a technique used for hiding the important information imperceptibly, which is Steganography. Steganography is the art of hiding information in such a way that prevents the detection of hidden messages. The process of using steganography in conjunction with cryptography, called as Dual Steganography. This paper tries to elucidate the basic concepts of steganography, its various types and techniques, and dual steganography. There is also some of research works done in steganography field in past few years.",
"title": ""
}
] | scidocsrr |
8887ddb4d570631146afc215538570ef | Adaptive Algorithms for Acoustic Echo Cancellation: A Review | [
{
"docid": "0991b582ad9fcc495eb534ebffe3b5f8",
"text": "A computationally cheap extension from single-microphone acoustic echo cancellation (AEC) to multi-microphone AEC is presented for the case of a single loudspeaker. It employs the idea of common-acoustical-pole and zero modeling of room transfer functions (RTFs). The RTF models used for multi-microphone AEC share a fixed common denominator polynomial, which is calculated off-line by means of a multi-channel warped linear prediction. By using the common denominator polynomial as a prefilter, only the numerator polynomial has to be estimated recursively for each microphone, hence adapting to changes in the RTFs. This approach allows to decrease the number of numerator coefficients by one order of magnitude for each microphone compared with all-zero modeling. In a first configuration, the prefiltering is done on the adaptive filter signal, hence achieving a pole-zero model of the RTF in the AEC. In a second configuration, the (inverse) prefiltering is done on the loudspeaker signal, hence achieving a dereverberation effect, in addition to AEC, on the microphone signals.",
"title": ""
}
] | [
{
"docid": "9003a12f984d2bf2fd84984a994770f0",
"text": "Sulfated polysaccharides and their lower molecular weight oligosaccharide derivatives from marine macroalgae have been shown to possess a variety of biological activities. The present paper will review the recent progress in research on the structural chemistry and the bioactivities of these marine algal biomaterials. In particular, it will provide an update on the structural chemistry of the major sulfated polysaccharides synthesized by seaweeds including the galactans (e.g., agarans and carrageenans), ulvans, and fucans. It will then review the recent findings on the anticoagulant/antithrombotic, antiviral, immuno-inflammatory, antilipidemic and antioxidant activities of sulfated polysaccharides and their potential for therapeutic application.",
"title": ""
},
{
"docid": "282ace724b3c9a2e8b051499ba5e4bfe",
"text": "Fog computing, being an extension to cloud computing has addressed some issues found in cloud computing by providing additional features, such as location awareness, low latency, mobility support, and so on. Its unique features have also opened a way toward security challenges, which need to be focused for making it bug-free for the users. This paper is basically focusing on overcoming the security issues encountered during the data outsourcing from fog client to fog node. We have added Shibboleth also known as security and cross domain access control protocol between fog client and fog node for improved and secure communication between the fog client and fog node. Furthermore to prove whether Shibboleth meets the security requirement needed to provide the secure outsourcing. We have also formally verified the protocol against basic security properties using high level Petri net.",
"title": ""
},
{
"docid": "bc1d4ce838971d6a04d5bf61f6c3f2d8",
"text": "This paper presents a novel network slicing management and orchestration architectural framework. A brief description of business scenarios and potential customers of network slicing is provided, illustrating the need for ordering network services with very different requirements. Based on specific customer goals (of ordering and building an end-to-end network slice instance) and other requirements gathered from industry and standardization associations, a solution is proposed enabling the automation of end-to-end network slice management and orchestration in multiple resource domains. This architecture distinguishes between two main design time and runtime components: Network Slice Design and Multi-Domain Orchestrator, belonging to different competence service areas with different players in these domains, and proposes the required interfaces and data structures between these components.",
"title": ""
},
{
"docid": "fff85feeef18f7fa99819711e47e2d39",
"text": "This paper presents a robotic vehicle that can be operated by the voice commands given from the user. Here, we use the speech recognition system for giving &processing voice commands. The speech recognition system use an I.C called HM2007, which can store and recognize up to 20 voice commands. The R.F transmitter and receiver are used here, for the wireless transmission purpose. The micro controller used is AT89S52, to give the instructions to the robot for its operation. This robotic car can be able to avoid vehicle collision , obstacle collision and it is very secure and more accurate. Physically disabled persons can use these robotic cars and they can be used in many industries and for many applications Keywords—SpeechRecognitionSystem,AT89S52 micro controller, R. F. Transmitter and Receiver.",
"title": ""
},
{
"docid": "13fed0d1099638f536c5a950e3d54074",
"text": "Notes: (1) These questions require thought, but do not require long answers. Please be as concise as possible. (2) If you have a question about this homework, we encourage you to post your question on our Piazza forum, at https://piazza.com/stanford/autumn2016/cs229. (3) If you missed the first lecture or are unfamiliar with the collaboration or honor code policy, please read the policy on Handout #1 (available from the course website) before starting work. (4) For problems that require programming, please include in your submission a printout of your code (with comments) and any figures that you are asked to plot. If you are skipping a question, please include it on your PDF/photo, but leave the question blank and tag it appropriately on Gradescope. This includes extra credit problems. If you are scanning your document by cellphone, please check the Piazza forum for recommended cellphone scanning apps and best practices. 1. [23 points] Uniform convergence You are hired by CNN to help design the sampling procedure for making their electoral predictions for the next presidential election in the (fictitious) country of Elbania. The country of Elbania is organized into states, and there are only two candidates running in this election: One from the Elbanian Democratic party, and another from the Labor Party of Elbania. The plan for making our electorial predictions is as follows: We'll sample m voters from each state, and ask whether they're voting democrat. We'll then publish, for each state, the estimated fraction of democrat voters. In this problem, we'll work out how many voters we need to sample in order to ensure that we get good predictions with high probability. One reasonable goal might be to set m large enough that, with high probability, we obtain uniformly accurate estimates of the fraction of democrat voters in every state. But this might require surveying very many people, which would be prohibitively expensive. So, we're instead going to demand only a slightly lower degree of accuracy. Specifically, we'll say that our prediction for a state is \" highly inaccurate \" if the estimated fraction of democrat voters differs from the actual fraction of democrat voters within that state by more than a tolerance factor γ. CNN knows that their viewers will tolerate some small number of states' estimates being highly inaccurate; however, their credibility would be damaged if they reported highly inaccurate estimates for too many states. So, rather than …",
"title": ""
},
{
"docid": "34be7f7bef24df9c51ee43d360a462c5",
"text": "Rasterization hardware provides interactive frame rates for rendering dynamic scenes, but lacks the ability of ray tracing required for efficient global illumination simulation. Existing ray tracing based methods yield high quality renderings but are far too slow for interactive use. We present a new parallel global illumination algorithm that perfectly scales, has minimal preprocessing and communication overhead, applies highly efficient sampling techniques based on randomized quasi-Monte Carlo integration, and benefits from a fast parallel ray tracing implementation by shooting coherent groups of rays. Thus a performance is achieved that allows for applying arbitrary changes to the scene, while simulating global illumination including shadows from area light sources, indirect illumination, specular effects, and caustics at interactive frame rates. Ceasing interaction rapidly provides high quality renderings.",
"title": ""
},
{
"docid": "6b48f3791d5af0c6bea607360b6ebb9e",
"text": "Despite recent progress in computer vision, fine-grained interpretation of satellite images remains challenging because of a lack of labeled training data. To overcome this limitation, we propose using Wikipedia as a previously untapped source of rich, georeferenced textual information with global coverage. We construct a novel large-scale, multi-modal dataset by pairing geo-referenced Wikipedia articles with satellite imagery of their corresponding locations. To prove the efficacy of this dataset, we focus on the African continent and train a deep network to classify images based on labels extracted from articles. We then fine-tune the model on a humanannotated dataset and demonstrate that this weak form of supervision can drastically reduce the quantity of humanannotated labels and time required for downstream tasks.",
"title": ""
},
{
"docid": "1c075aac5462cf6c6251d6c9c1a679c0",
"text": "Why You Can’t Find a Taxi in the Rain and Other Labor Supply Lessons from Cab Drivers In a seminal paper, Camerer, Babcock, Loewenstein, and Thaler (1997) find that the wage elasticity of daily hours of work New York City (NYC) taxi drivers is negative and conclude that their labor supply behavior is consistent with target earning (having reference dependent preferences). I replicate and extend the CBLT analysis using data from all trips taken in all taxi cabs in NYC for the five years from 2009-2013. Using the model of expectations-based reference points of Koszegi and Rabin (2006), I distinguish between anticipated and unanticipated daily wage variation and present evidence that only a small fraction of wage variation (about 1/8) is unanticipated so that reference dependence (which is relevant only in response to unanticipated variation) can, at best, play a limited role in determining labor supply. The overall pattern in my data is clear: drivers tend to respond positively to unanticipated as well as anticipated increases in earnings opportunities. This is consistent with the neoclassical optimizing model of labor supply and does not support the reference dependent preferences model. I explore heterogeneity across drivers in their labor supply elasticities and consider whether new drivers differ from more experienced drivers in their behavior. I find substantial heterogeneity across drivers in their elasticities, but the estimated elasticities are generally positive and only rarely substantially negative. I also find that new drivers with smaller elasticities are more likely to exit the industry while drivers who remain learn quickly to be better optimizers (have positive labor supply elasticities that grow with experience). JEL Classification: J22, D01, D03",
"title": ""
},
{
"docid": "d59a2c1673d093584c5f19212d6ba520",
"text": "Introduction and Motivation Today, a majority of data is fundamentally distributed in nature. Data for almost any task is collected over a broad area, and streams in at a much greater rate than ever before. In particular, advances in sensor technology and miniaturization have led to the concept of the sensor network: a (typically wireless) collection of sensing devices collecting detailed data about their surroundings. A fundamental question arises: how to query and monitor this rich new source of data? Similar scenarios emerge within the context of monitoring more traditional, wired networks, and in other emerging models such as P2P networks and grid-based computing. The prevailing paradigm in database systems has been understanding management of centralized data: how to organize, index, access, and query data that is held centrally on a single machine or a small number of closely linked machines. In these distributed scenarios, the axiom is overturned: now, data typically streams into remote sites at high rates. Here, it is not feasible to collect the data in one place: the volume of data collection is too high, and the capacity for data communication relatively low. For example, in battery-powered wireless sensor networks, the main drain on battery life is communication, which is orders of magnitude more expensive than computation or sensing. This establishes a fundamental concept for distributed stream monitoring: if we can perform more computational work within the network to reduce the communication needed, then we can significantly improve the value of our network, by increasing its useful life and extending the range of computation possible over the network. We consider two broad classes of approaches to such in-network query processing, by analogy to query types in traditional DBMSs. In the one shot model, a query is issued by a user at some site, and must be answered based on the current state of data in the network. We identify several possible approaches to this problem. For simple queries, partial computation of the result over a tree can reduce the data transferred significantly. For “holistic” queries, such as medians, count distinct and so on, clever composable summaries give a compact way to accurately approximate query answers. Lastly, careful modeling of correlations between measurements and other trends in the data can further reduce the number of sensors probed. In the continuous model, a query is placed by a user which re-",
"title": ""
},
{
"docid": "022a2f42669fdb337cfb4646fed9eb09",
"text": "A mobile agent with the task to classify its sensor pattern has to cope with ambiguous information. Active recognition of three-dimensional objects involves the observer in a search for discriminative evidence, e.g., by change of its viewpoint. This paper defines the recognition process as a sequential decision problem with the objective to disambiguate initial object hypotheses. Reinforcement learning provides then an efficient method to autonomously develop near-optimal decision strategies in terms of sensorimotor mappings. The proposed system learns object models from visual appearance and uses a radial basis function (RBF) network for a probabilistic interpretation of the two-dimensional views. The information gain in fusing successive object hypotheses provides a utility measure to reinforce actions leading to discriminative viewpoints. The system is verified in experiments with 16 objects and two degrees of freedom in sensor motion. Crucial improvements in performance are gained using the learned in contrast to random camera placements. © 2000 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "62e2ebbd0b32106f578e71b7494ea321",
"text": "The goal of text categorization is to classify documents into a certain number of predefined categories. The previous works in this area have used a large number of labeled training doculnents for supervised learning. One problem is that it is difficult to create the labeled training documents. While it is easy to collect the unlabeled documents, it is not so easy to manually categorize them for creating traiuing documents. In this paper, we propose an unsupervised learning method to overcome these difficulties. The proposed lnethod divides the documents into sentences, and categorizes each sentence using keyword lists of each category and sentence simihuity measure. And then, it uses the categorized sentences for refining. The proposed method shows a similar degree of performance, compared with the traditional supervised learning inethods. Therefore, this method can be used in areas where low-cost text categorization is needed. It also can be used for creating training documents.",
"title": ""
},
{
"docid": "4c0c6373c40bd42417fa2890fc80986b",
"text": "Regularized inversion methods for image reconstruction are used widely due to their tractability and their ability to combine complex physical sensor models with useful regularity criteria. Such methods were used in the recently developed Plug-and-Play prior method, which provides a framework to use advanced denoising algorithms as regularizers in inversion. However, the need to formulate regularized inversion as the solution to an optimization problem severely limits both the expressiveness of possible regularity conditions and the variety of provably convergent Plug-and-Play denoising operators. In this paper, we introduce the concept of consensus equilibrium (CE), which generalizes regularized inversion to include a much wider variety of regularity operators without the need for an optimization formulation. Consensus equilibrium is based on the solution of a set of equilibrium equations that balance data fit and regularity. In this framework, the problem of MAP estimation in regularized inversion is replaced by the problem of solving these equilibrium equations, which can be approached in multiple ways, including as a fixed point problem that generalizes the ADMM approach used in the Plug-and-Play method. We present the Douglas-Rachford (DR) algorithm for computing the CE solution as a fixed point and prove the convergence of this algorithm under conditions that include denoising operators that do not arise from optimization problems and that may not be nonexpansive. We give several examples to illustrate the idea of consensus equilibrium and the convergence properties of the DR algorithm and demonstrate this method on a sparse interpolation problem using electron microscopy data.",
"title": ""
},
{
"docid": "b4284204ae7d9ef39091a651583b3450",
"text": "Embedding learning, a.k.a. representation learning, has been shown to be able to model large-scale semantic knowledge graphs. A key concept is a mapping of the knowledge graph to a tensor representation whose entries are predicted by models using latent representations of generalized entities. Latent variable models are well suited to deal with the high dimensionality and sparsity of typical knowledge graphs. In recent publications the embedding models were extended to also consider temporal evolutions, temporal patterns and subsymbolic representations. In this paper we map embedding models, which were developed purely as solutions to technical problems for modelling temporal knowledge graphs, to various cognitive memory functions, in particular to semantic and concept memory, episodic memory, sensory memory, short-term memory, and working memory. We discuss learning, query answering, the path from sensory input to semantic decoding, and relationships between episodic memory and semantic memory. We introduce a number of hypotheses on human memory that can be derived from the developed mathematical models. There are three main hypotheses. The first one is that semantic memory is described as triples and that episodic memory is described as triples in time. A second main hypothesis is that generalized entities have unique latent representations which are shared across memory functions and that are the basis for prediction, decision support and other functionalities executed by working memory. A third main hypothesis is that the latent representation for a time t, which summarizes all sensory information available at time t, is the basis for episodic memory. The proposed model includes both a recall of previous memories and the mental imagery of future events and sensory impressions.",
"title": ""
},
{
"docid": "5a8729b6b08e79e7c27ddf779b0a5267",
"text": "Electric solid propellants are an attractive option for space propulsion because they are ignited by applied electric power only. In this work, the behavior of pulsed microthruster devices utilizing such a material is investigated. These devices are similar in function and operation to the pulsed plasma thruster, which typically uses Teflon as propellant. A Faraday probe, Langmuir triple probe, residual gas analyzer, pendulum thrust stand and high speed camera are utilized as diagnostic devices. These thrusters are made in batches, of which a few devices were tested experimentally in vacuum environments. Results indicate a plume electron temperature of about 1.7 eV, with an electron density between 10 and 10 cm. According to thermal equilibrium and adiabatic expansion calculations, these relatively hot electrons are mixed with ~2000 K neutral and ion species, forming a non-equilibrium gas. From time-of-flight analysis, this gas mixture plume has an effective velocity of 1500-1650 m/s on centerline. The ablated mass of this plume is 215 μg on average, of which an estimated 0.3% is ionized species while 45±11% is ablated at negligible relative speed. This late-time ablation occurs on a time scale three times that of the 0.5 ms pulse discharge, and does not contribute to the measured 0.21 mN-s impulse per pulse. Similar values have previously been measured in pulsed plasma thrusters. These observations indicate the electric solid propellant material in this configuration behaves similar to Teflon in an electrothermal pulsed plasma",
"title": ""
},
{
"docid": "4d4a413931365904cd460249448f3bf4",
"text": "For gaining proficiency in physical human-robot interaction (pHRI), it is crucial for engineering students to be provided with the opportunity to physically interact with and gain hands-on experience on design and control of force-feedback robotic devices. We present a single degree of freedom educational robot that features series elastic actuation and relies on closed loop force control to achieve the desired level of safety and transparency during physical interactions. The proposed device complements the existing impedance-type Haptic Paddle designs by demonstrating the challenges involved in the synergistic design and control of admittance-type devices. We present integration of this device into pHRI education, by providing guidelines for the use of the device to allow students to experience the performance trade-offs inherent in force control systems, due to the non-collocation between the force sensor and the actuator. These exercises enable students to modify the mechanical design in addition to the controllers, by assigning different levels of stiffness values to the compliant element, and characterize the effects of these design choices on the closed-loop force control performance of the device. We also report initial evaluations of the efficacy of the device for",
"title": ""
},
{
"docid": "41defd4d4926625cdb617e8482bf3177",
"text": "Common perception regards the nucleus as a densely packed object with higher refractive index (RI) and mass density than the surrounding cytoplasm. Here, the volume of isolated nuclei is systematically varied by electrostatic and osmotic conditions as well as drug treatments that modify chromatin conformation. The refractive index and dry mass of isolated nuclei is derived from quantitative phase measurements using digital holographic microscopy (DHM). Surprisingly, the cell nucleus is found to have a lower RI and mass density than the cytoplasm in four different cell lines and throughout the cell cycle. This result has important implications for conceptualizing light tissue interactions as well as biological processes in cells.",
"title": ""
},
{
"docid": "102077708fb1623c44c3b23d02387dd4",
"text": "Machine leaning apps require heavy computations, especially with the use of the deep neural network (DNN), so an embedded device with limited hardware cannot run the apps by itself. One solution for this problem is to offload DNN computations from the client to a nearby edge server. Existing approaches to DNN offloading with edge servers either specialize the edge server for fixed, specific apps, or customize the edge server for diverse apps, yet after migrating a large VM image that contains the client's back-end software system. In this paper, we propose a new and simple approach to offload DNN computations in the context of web apps. We migrate the current execution state of a web app from the client to the edge server just before executing a DNN computation, so that the edge server can execute the DNN computation with its powerful hardware. Then, we migrate the new execution state from the edge server to the client so that the client can continue to execute the app. We can save the execution state of the web app in the form of another web app called the snapshot, which immensely simplifies saving and restoring the execution state with a small overhead. We can offload any DNN app to any generic edge server, equipped with a browser and our offloading system. We address some issues related to offloading DNN apps such as how to send the DNN model and how to improve the privacy of user data. We also discuss how to install our offloading system on the edge server on demand. Our experiment with real DNN-based web apps shows that snapshot-based offloading achieves a promising performance result, comparable to running the app entirely on the server.",
"title": ""
},
{
"docid": "ee31719bce1b770e5347b7aa3189d94a",
"text": "Signature-based intrusion detection systems use a set of attack descriptions to analyze event streams, looking for evidence of malicious behavior. If the signatures are expressed in a well-defined language, it is possible to analyze the attack signatures and automatically generate events or series of events that conform to the attack descriptions. This approach has been used in tools whose goal is to force intrusion detection systems to generate a large number of detection alerts. The resulting “alert storm” is used to desensitize intrusion detection system administrators and hide attacks in the event stream. We apply a similar technique to perform testing of intrusion detection systems. Signatures from one intrusion detection system are used as input to an event stream generator that produces randomized synthetic events that match the input signatures. The resulting event stream is then fed to a number of different intrusion detection systems and the results are analyzed. This paper presents the general testing approach and describes the first prototype of a tool, called Mucus, that automatically generates network traffic using the signatures of the Snort network-based intrusion detection system. The paper describes preliminary cross-testing experiments with both an open-source and a commercial tool and reports the results. An evasion attack that was discovered as a result of analyzing the test results is also presented.",
"title": ""
},
{
"docid": "0222814440107fe89c13a790a6a3833e",
"text": "This paper presents a third method of generation and detection of a single-sideband signal. The method is basically different from either the conventional filter or phasing method in that no sharp cutoff filters or wide-band 90° phase-difference networks are needed. This system is especially suited to keeping the signal energy confined to the desired bandwidth. Any unwanted sideband occupies the same band as the desired sideband, and the unwanted sideband in the usual sense is not present.",
"title": ""
},
{
"docid": "e91c18f5509e05471d20d4e28e03b014",
"text": "This paper describes the design of a broadside circularly polarized uniform circular array based on curved planar inverted F-antenna elements. Circular polarization (CP) is obtained by exploiting the sequential rotation technique and implementing it with a series feed network. The proposed structure is first introduced, and some geometrical considerations are derived. Second, the array radiation body is designed taking into account the mutual coupling among antenna elements. Third, the series feed network usually employed for four-antenna element arrays is analyzed and extended to three and more than four antennas exploiting the special case of equal power distribution. The array is designed with three-, four-, five-, and six-antenna elements, and dimensions, impedance bandwidth (defined for <inline-formula> <tex-math notation=\"LaTeX\">$S_{11}\\leq -10$ </tex-math></inline-formula> dB), axial ratio (AR) bandwidth (<inline-formula> <tex-math notation=\"LaTeX\">$\\text {AR}\\leq 3$ </tex-math></inline-formula> dB), gain, beamwidth, front-to-back ratio, and cross-polarization level are compared. Arrays with three and five elements are also prototyped to benchmark the numerical analysis results, finding good correspondence.",
"title": ""
}
] | scidocsrr |
fd6aaf8004e09273035614855bae2869 | Combining Words and Speech Prosody for Automatic Topic Segmentation | [
{
"docid": "7e9dbc7f1c3855972dbe014e2223424c",
"text": "Speech disfluencies (filled pauses, repe titions, repairs, and false starts) are pervasive in spontaneous speech. The ab ility to detect and correct disfluencies automatically is important for effective natural language understanding, as well as to improve speech models in general. Previous approaches to disfluency detection have relied heavily on lexical information, which makes them less applicable when word recognition is unreliable. We have developed a disfluency detection method using decision tree classifiers that use only local and automatically extracted prosodic features. Because the model doesn’t rely on lexical information, it is widely applicable even when word recognition is unreliable. The model performed significantly better than chance at detecting four disfluency types. It also outperformed a language model in the detection of false starts, given the correct transcription. Combining the prosody model with a specialized language model improved accuracy over either model alone for the detection of false starts. Results suggest that a prosody-only model can aid the automatic detection of disfluencies in spontaneous speech.",
"title": ""
},
{
"docid": "e1315cfdc9c1a33b7b871c130f34d6ce",
"text": "TextTiling is a technique for subdividing texts into multi-paragraph units that represent passages, or subtopics. The discourse cues for identifying major subtopic shifts are patterns of lexical co-occurrence and distribution. The algorithm is fully implemented and is shown to produce segmentation that corresponds well to human judgments of the subtopic boundaries of 12 texts. Multi-paragraph subtopic segmentation should be useful for many text analysis tasks, including information retrieval and summarization.",
"title": ""
}
] | [
{
"docid": "61fd52ce6d91dcde173ee65e80167814",
"text": "We present a simple nearest-neighbor (NN) approach that synthesizes highfrequency photorealistic images from an “incomplete” signal such as a lowresolution image, a surface normal map, or edges. Current state-of-the-art deep generative models designed for such conditional image synthesis lack two important things: (1) they are unable to generate a large set of diverse outputs, due to the mode collapse problem. (2) they are not interpretable, making it difficult to control the synthesized output. We demonstrate that NN approaches potentially address such limitations, but suffer in accuracy on small datasets. We design a simple pipeline that combines the best of both worlds: the first stage uses a convolutional neural network (CNN) to map the input to a (overly-smoothed) image, and the second stage uses a pixel-wise nearest neighbor method to map the smoothed output to multiple high-quality, high-frequency outputs in a controllable manner. Importantly, pixel-wise matching allows our method to compose novel high-frequency content by cutting-and-pasting pixels from different training exemplars. We demonstrate our approach for various input modalities, and for various domains ranging from human faces, pets, shoes, and handbags. 12x12 Input (x8) Our Approach (a) Low-Resolution to High-Resolution Surface Normal Map Our Approach (b) Normals-to-RGB Edges Our Approach (c) Edges-to-RGB (d) Edges-to-RGB (Multiple Outputs) (e) Normals-to-RGB (Multiple Outputs) (d) Edges-to-Shoes (Multiple Outputs) (e) Edges-to-Handbags (Multiple Outputs) Figure 1: Our approach generates photorealistic output for various “incomplete” signals such as a low resolution image, a surface normal map, and edges/boundaries for human faces, cats, dogs, shoes, and handbags. Importantly, our approach can easily generate multiple outputs for a given input which was not possible in previous approaches (Isola et al., 2016) due to mode-collapse problem. Best viewed in electronic format.",
"title": ""
},
{
"docid": "1c8a3500d9fbd7e6c10dfffc06157d74",
"text": "The issue of privacy protection in video surveillance has drawn a lot of interest lately. However, thorough performance analysis and validation is still lacking, especially regarding the fulfillment of privacy-related requirements. In this paper, we put forward a framework to assess the capacity of privacy protection solutions to hide distinguishing facial information and to conceal identity. We then conduct rigorous experiments to evaluate the performance of face recognition algorithms applied to images altered by privacy protection techniques. Results show the ineffectiveness of naïve privacy protection techniques such as pixelization and blur. Conversely, they demonstrate the effectiveness of more sophisticated scrambling techniques to foil face recognition.",
"title": ""
},
{
"docid": "fd9411cfa035139010be0935d9e52865",
"text": "This paper presents a robotic manipulation system capable of autonomously positioning a multi-segment soft fluidic elastomer robot in three dimensions. Specifically, we present an extremely soft robotic manipulator morphology that is composed entirely from low durometer elastomer, powered by pressurized air, and designed to be both modular and durable. To understand the deformation of a single arm segment, we develop and experimentally validate a static deformation model. Then, to kinematically model the multi-segment manipulator, we use a piece-wise constant curvature assumption consistent with more traditional continuum manipulators. In addition, we define a complete fabrication process for this new manipulator and use this process to make multiple functional prototypes. In order to power the robot’s spatial actuation, a high capacity fluidic drive cylinder array is implemented, providing continuously variable, closed-circuit gas delivery. Next, using real-time data from a vision system, we develop a processing and control algorithm that generates realizable kinematic curvature trajectories and controls the manipulator’s configuration along these trajectories. Lastly, we experimentally demonstrate new capabilities offered by this soft fluidic elastomer manipulation system such as entering and advancing through confined three-dimensional environments as well as conforming to goal shape-configurations within a sagittal plane under closed-loop control.",
"title": ""
},
{
"docid": "7317ba76ddba2933cdf01d8284fd687e",
"text": "In most of the cases, scientists depend on previous literature which is relevant to their research fields for developing new ideas. However, it is not wise, nor possible, to track all existed publications because the volume of literature collection grows extremely fast. Therefore, researchers generally follow, or cite merely a small proportion of publications which they are interested in. For such a large collection, it is rather interesting to forecast which kind of literature is more likely to attract scientists' response. In this paper, we use the citations as a measurement for the popularity among researchers and study the interesting problem of Citation Count Prediction (CCP) to examine the characteristics for popularity. Estimation of possible popularity is of great significance and is quite challenging. We have utilized several features of fundamental characteristics for those papers that are highly cited and have predicted the popularity degree of each literature in the future. We have implemented a system which takes a series of features of a particular publication as input and produces as output the estimated citation counts of that article after a given time period. We consider several regression models to formulate the learning process and evaluate their performance based on the coefficient of determination (R-square). Experimental results on a real-large data set show that the best predictive model achieves a mean average predictive performance of 0.740 measured in R-square, which significantly outperforms several alternative algorithms.",
"title": ""
},
{
"docid": "09d1fa9a1f9af3e9560030502be1d976",
"text": "Academic Center for Computing and Media Studies, Kyoto University Graduate School of Informatics, Kyoto University Yoshidahonmachi, Sakyo-ku, Kyoto, Japan forest@i.kyoto-u.ac.jp, maeta@ar.media.kyoto-u.ac.jp, yamakata@dl.kuis.kyoto-u.ac.jp, sasada@ar.media.kyoto-u.ac.jp Abstract In this paper, we present our attempt at annotating procedural texts with a flow graph as a representation of understanding. The domain we focus on is cooking recipe. The flow graphs are directed acyclic graphs with a special root node corresponding to the final dish. The vertex labels are recipe named entities, such as foods, tools, cooking actions, etc. The arc labels denote relationships among them. We converted 266 Japanese recipe texts into flow graphs manually. 200 recipes are randomly selected from a web site and 66 are of the same dish. We detail the annotation framework and report some statistics on our corpus. The most typical usage of our corpus may be automatic conversion from texts to flow graphs which can be seen as an entire understanding of procedural texts. With our corpus, one can also try word segmentation, named entity recognition, predicate-argument structure analysis, and coreference resolution.",
"title": ""
},
{
"docid": "0a3feaa346f4fd6bfc0bbda6ba92efc6",
"text": "We present Magic Finger, a small device worn on the fingertip, which supports always-available input. Magic Finger inverts the typical relationship between the finger and an interactive surface: with Magic Finger, we instrument the user's finger itself, rather than the surface it is touching. Magic Finger senses touch through an optical mouse sensor, enabling any surface to act as a touch screen. Magic Finger also senses texture through a micro RGB camera, allowing contextual actions to be carried out based on the particular surface being touched. A technical evaluation shows that Magic Finger can accurately sense 22 textures with an accuracy of 98.9%. We explore the interaction design space enabled by Magic Finger, and implement a number of novel interaction techniques that leverage its unique capabilities.",
"title": ""
},
{
"docid": "a7beddd461e9eba954e947d5c71debe8",
"text": "This paper presents an approach to the problem of paraphrase identification in English and Indian languages using Convolutional Neural Network (CNN) and Recurrent Neural Network (RNN). Traditional machine learning approaches used features that involved using resources such as POS taggers, dependency parsers, etc. for English. The lack of similar resources for Indian languages has been a deterrent to the advancement of paraphrase detection task in Indian languages. Deep learning helps in overcoming the shortcomings of traditional machine Learning techniques. In this paper, three approaches have been proposed, a simple CNN that uses word embeddings as input, a CNN that uses WordNet scores as input and RNN based approach with both LSTM and bi-directional LSTM.",
"title": ""
},
{
"docid": "8767787aaa4590acda7812411135c168",
"text": "Automatic annotation of images is one of the fundamental problems in computer vision applications. With the increasing amount of freely available images, it is quite possible that the training data used to learn a classifier has different distribution from the data which is used for testing. This results in degradation of the classifier performance and highlights the problem known as domain adaptation. Framework for domain adaptation typically requires a classification model which can utilize several classifiers by combining their results to get the desired accuracy. This work proposes depth-based and iterative depth-based fusion methods which are basically rank-based fusion methods and utilize rank of the predicted labels from different classifiers. Two frameworks are also proposed for domain adaptation. The first framework uses traditional machine learning algorithms, while the other works with metric learning as well as transfer learning algorithm. Motivated from ImageCLEF’s 2014 domain adaptation task, these frameworks with the proposed fusion methods are validated and verified by conducting experiments on the images from five domains having varied distributions. Bing, Caltech, ImageNet, and PASCAL are used as source domains and the target domain is SUN. Twelve object categories are chosen from these domains. The experimental results show the performance improvement not only over the baseline system, but also over the winner of the ImageCLEF’s 2014 domain adaptation challenge.",
"title": ""
},
{
"docid": "ba75caedb1c9e65f14c2764157682bdf",
"text": "Data augmentation is usually adopted to increase the amount of training data, prevent overfitting and improve the performance of deep models. However, in practice, the effect of regular data augmentation, such as random image crop, is limited since it might introduce much uncontrolled background noise. In this paper, we propose WeaklySupervised Data Augmentation Network (WS-DAN) to explore the potential of data augmentation. Specifically, for each training image, we first generate attention maps to represent the object’s discriminative parts by weakly supervised Learning. Next, we randomly choose one attention map to augment this image, including attention crop and attention drop. Weakly-supervised data augmentation network improves the classification accuracy in two folds. On the one hand, images can be seen better since multiple object parts can be activated. On the other hand, attention regions provide spatial information of objects, which can make images be looked closer to further improve the performance. Comprehensive experiments in common fine-grained visual classification datasets show that our method surpasses the state-of-the-art methods by a large margin, which demonstrated the effectiveness of the proposed method.",
"title": ""
},
{
"docid": "64306a76b61bbc754e124da7f61a4fbe",
"text": "For over 50 years, electron beams have been an important modality for providing an accurate dose of radiation to superficial cancers and disease and for limiting the dose to underlying normal tissues and structures. This review looks at many of the important contributions of physics and dosimetry to the development and utilization of electron beam therapy, including electron treatment machines, dose specification and calibration, dose measurement, electron transport calculations, treatment and treatment-planning tools, and clinical utilization, including special procedures. Also, future changes in the practice of electron therapy resulting from challenges to its utilization and from potential future technology are discussed.",
"title": ""
},
{
"docid": "7550ec8917588a6adb629e3d1beabd76",
"text": "This paper describes the algorithm for deriving the total column ozone from spectral radiances and irradiances measured by the Ozone Monitoring Instrument (OMI) on the Earth Observing System Aura satellite. The algorithm is based on the differential optical absorption spectroscopy technique. The main characteristics of the algorithm as well as an error analysis are described. The algorithm has been successfully applied to the first available OMI data. First comparisons with ground-based instruments are very encouraging and clearly show the potential of the method.",
"title": ""
},
{
"docid": "848dd074e4615ea5ecb164c96fac6c63",
"text": "A simultaneous analytical method for etizolam and its main metabolites (alpha-hydroxyetizolam and 8-hydroxyetizolam) in whole blood was developed using solid-phase extraction, TMS derivatization and ion trap gas chromatography tandem mass spectrometry (GC-MS/MS). Separation of etizolam, TMS derivatives of alpha-hydroxyetizolam and 8-hydroxyetizolam and fludiazepam as internal standard was performed within about 17 min. The inter-day precision evaluated at the concentration of 50 ng/mL etizolam, alpha-hydroxyetizolam and 8-hydroxyetizolam was evaluated 8.6, 6.4 and 8.0% respectively. Linearity occurred over the range in 5-50 ng/mL. This method is satisfactory for clinical and forensic purposes. This method was applied to two unnatural death cases suspected to involve etizolam. Etizolam and its two metabolites were detected in these cases.",
"title": ""
},
{
"docid": "0ef4cf0b46b43670a3d9554aba6e2d89",
"text": "lthough banks’ lending activities draw the attention of supervisors, lawmakers, researchers, and the press, a very substantial and growing portion of the industry’s total revenue is received in the form of fee income. The amount of fee, or noninterest, income earned by the banking sector suggests that the significance of payments services has been understated or overlooked. A lack of good information about the payments area may partly explain the failure to gauge the size of this business line correctly. In reports to supervisory agencies, banking organizations provide data relating primarily to their safety and soundness. By the design of the reports, banks transmit information on profitability, capital, and the size and condition of the loan portfolio. Limited information can be extracted from regulatory reports on individual business lines; in fact, these reports imply that banks receive just 7 percent of their net revenue from payments services. A narrow definition of payments, or transactions, services may also contribute to a poor appreciation of this banking function. While checking accounts are universally recognized as a payments service, credit cards, corporate trust accounts, and securities processing should also be treated as parts of a bank’s payments business. The common but limited definition of the payments area reflects the tight focus of banking research on lending and deposit taking. In theoretical studies, economists explain the prominence of commercial banks in the financial sector in terms of these two functions. First, by developing their skills in screening applicants, monitoring borrowers, and obtaining repayment, commercial banks became the dominant lender to relatively small-sized borrowers. Second, because investors demand protection against the risk that they may need liquidity earlier than anticipated, bank deposits are a special and highly useful financial instrument. While insightful, neither rationale explains why A",
"title": ""
},
{
"docid": "8ef51eeb7705a1369103a36f60268414",
"text": "Cloud computing is a new way of delivering computing resources, not a new technology. Computing services ranging from data storage and processing to software, such as email handling, are now available instantly, commitment-free and on-demand. Since we are in a time of belt-tightening, this new economic model for computing has found fertile ground and is seeing massive global investment. According to IDC’s analysis, the worldwide forecast for cloud services in 2009 will be in the order of $17.4bn. The estimation for 2013 amounts to $44.2bn, with the European market ranging from €971m in 2008 to €6,005m in 2013 .",
"title": ""
},
{
"docid": "0b024671e04090051292b5e76a4690ae",
"text": "The brain has evolved in this multisensory context to perceive the world in an integrated fashion. Although there are good reasons to be skeptical of the influence of cognition on perception, here we argue that the study of sensory substitution devices might reveal that perception and cognition are not necessarily distinct, but rather continuous aspects of our information processing capacities.",
"title": ""
},
{
"docid": "f7ce06365e2c74ccbf8dcc04277cfb9d",
"text": "In this paper, we propose an enhanced method for detecting light blobs (LBs) for intelligent headlight control (IHC). The main function of the IHC system is to automatically convert high-beam headlights to low beam when vehicles are found in the vicinity. Thus, to implement the IHC, it is necessary to detect preceding or oncoming vehicles. Generally, this process of detecting vehicles is done by detecting LBs in the images. Previous works regarding LB detection can largely be categorized into two approaches by the image type they use: low-exposure (LE) images or autoexposure (AE) images. While they each have their own strengths and weaknesses, the proposed method combines them by integrating the use of the partial region of the AE image confined by the lane detection information and the LE image. Consequently, the proposed method detects headlights at various distances and taillights at close distances using LE images while handling taillights at distant locations by exploiting the confined AE images. This approach enhances the performance of detecting the distant LBs while maintaining low false detections.",
"title": ""
},
{
"docid": "1a747f8474841b6b99184487994ad6a2",
"text": "This paper discusses the effects of multivariate correlation analysis on the DDoS detection and proposes an example, a covariance analysis model for detecting SYN flooding attacks. The simulation results show that this method is highly accurate in detecting malicious network traffic in DDoS attacks of different intensities. This method can effectively differentiate between normal and attack traffic. Indeed, this method can detect even very subtle attacks only slightly different from the normal behaviors. The linear complexity of the method makes its real time detection practical. The covariance model in this paper to some extent verifies the effectiveness of multivariate correlation analysis for DDoS detection. Some open issues still exist in this model for further research.",
"title": ""
},
{
"docid": "a329c114a101a7968b67c3cd179b27f6",
"text": "The detection of text lines, as a first processing step, is critical in all text recognition systems. State-of-the-art methods to locate lines of text are based on handcrafted heuristics fine-tuned by the image processing community's experience. They succeed under certain constraints; for instance the background has to be roughly uniform. We propose to use more “agnostic” Machine Learning-based approaches to address text line location. The main motivation is to be able to process either damaged documents, or flows of documents with a high variety of layouts and other characteristics. A new method is presented in this work, inspired by the latest generation of optical models used for text recognition, namely Recurrent Neural Networks. As these models are sequential, a column of text lines in our application plays here the same role as a line of characters in more traditional text recognition settings. A key advantage of the proposed method over other data-driven approaches is that compiling a training dataset does not require labeling line boundaries: only the number of lines are required for each paragraph. Experimental results show that our approach gives similar or better results than traditional handcrafted approaches, with little engineering efforts and less hyper-parameter tuning.",
"title": ""
},
{
"docid": "919dc4727575e2ce0419d31b03ddfbf3",
"text": "In wireless ad hoc networks, although defense strategies such as intrusion detection systems (IDSs) can be deployed at each mobile node, significant constraints are imposed in terms of the energy expenditure of such systems. In this paper, we propose a game theoretic framework to analyze the interactions between pairs of attacking/defending nodes using a Bayesian formulation. We study the achievable Nash equilibrium for the attacker/defender game in both static and dynamic scenarios. The dynamic Bayesian game is a more realistic model, since it allows the defender to consistently update his belief on his opponent's maliciousness as the game evolves. A new Bayesian hybrid detection approach is suggested for the defender, in which a lightweight monitoring system is used to estimate his opponent's actions, and a heavyweight monitoring system acts as a last resort of defense. We show that the dynamic game produces energy-efficient monitoring strategies for the defender, while improving the overall hybrid detection power.",
"title": ""
},
{
"docid": "39ab78b58f6ace0fc29f18a1c4ed8ebc",
"text": "We survey recent developments in the design of large-capacity content-addressable memory (CAM). A CAM is a memory that implements the lookup-table function in a single clock cycle using dedicated comparison circuitry. CAMs are especially popular in network routers for packet forwarding and packet classification, but they are also beneficial in a variety of other applications that require high-speed table lookup. The main CAM-design challenge is to reduce power consumption associated with the large amount of parallel active circuitry, without sacrificing speed or memory density. In this paper, we review CAM-design techniques at the circuit level and at the architectural level. At the circuit level, we review low-power matchline sensing techniques and searchline driving approaches. At the architectural level we review three methods for reducing power consumption.",
"title": ""
}
] | scidocsrr |
21e0f18f34267496c1b1f96dcdc63e8b | Review of Image Processing Technique for Glaucoma Detection | [
{
"docid": "830a585529981bd5b61ac5af3055d933",
"text": "Automatic retinal image analysis is emerging as an important screening tool for early detection of eye diseases. Glaucoma is one of the most common causes of blindness. The manual examination of optic disk (OD) is a standard procedure used for detecting glaucoma. In this paper, we present an automatic OD parameterization technique based on segmented OD and cup regions obtained from monocular retinal images. A novel OD segmentation method is proposed which integrates the local image information around each point of interest in multidimensional feature space to provide robustness against variations found in and around the OD region. We also propose a novel cup segmentation method which is based on anatomical evidence such as vessel bends at the cup boundary, considered relevant by glaucoma experts. Bends in a vessel are robustly detected using a region of support concept, which automatically selects the right scale for analysis. A multi-stage strategy is employed to derive a reliable subset of vessel bends called r-bends followed by a local spline fitting to derive the desired cup boundary. The method has been evaluated on 138 images comprising 33 normal and 105 glaucomatous images against three glaucoma experts. The obtained segmentation results show consistency in handling various geometric and photometric variations found across the dataset. The estimation error of the method for vertical cup-to-disk diameter ratio is 0.09/0.08 (mean/standard deviation) while for cup-to-disk area ratio it is 0.12/0.10. Overall, the obtained qualitative and quantitative results show effectiveness in both segmentation and subsequent OD parameterization for glaucoma assessment.",
"title": ""
},
{
"docid": "8a9b118ba8e3546ef70670ea45e8988f",
"text": "The retinal fundus photograph is widely used in the diagnosis and treatment of various eye diseases such as diabetic retinopathy and glaucoma. Medical image analysis and processing has great significance in the field of medicine, especially in non-invasive treatment and clinical study. Normally fundus images are manually graded by specially trained clinicians in a time-consuming and resource-intensive process. A computer-aided fundus image analysis could provide an immediate detection and characterisation of retinal features prior to specialist inspection. This paper describes a novel method to automatically localise one such feature: the optic disk. The proposed method consists of two steps: in the first step, a circular region of interest is found by first isolating the brightest area in the image by means of morphological processing, and in the second step, the Hough transform is used to detect the main circular feature (corresponding to the optical disk) within the positive horizontal gradient image within this region of interest. Initial results on a database of fundus images show that the proposed method is effective and favourable in relation to comparable techniques.",
"title": ""
}
] | [
{
"docid": "60ea2144687d867bb4f6b21e792a8441",
"text": "Stochastic gradient descent is a simple approach to find the local minima of a cost function whose evaluations are corrupted by noise. In this paper, we develop a procedure extending stochastic gradient descent algorithms to the case where the function is defined on a Riemannian manifold. We prove that, as in the Euclidian case, the gradient descent algorithm converges to a critical point of the cost function. The algorithm has numerous potential applications, and is illustrated here by four examples. In particular a novel gossip algorithm on the set of covariance matrices is derived and tested numerically.",
"title": ""
},
{
"docid": "a5ac7aa3606ebb683d4d9de5dcd89856",
"text": "Advanced persistent threats (APTs) pose a significant risk to nearly every infrastructure. Due to the sophistication of these attacks, they are able to bypass existing security systems and largely infiltrate the target network. The prevention and detection of APT campaigns is also challenging, because of the fact that the attackers constantly change and evolve their advanced techniques and methods to stay undetected. In this paper we analyze 22 different APT reports and give an overview of the used techniques and methods. The analysis is focused on the three main phases of APT campaigns that allow to identify the relevant characteristics of such attacks. For each phase we describe the most commonly used techniques and methods. Through this analysis we could reveal different relevant characteristics of APT campaigns, for example that the usage of 0-day exploit is not common for APT attacks. Furthermore, the analysis shows that the dumping of credentials is a relevant step in the lateral movement phase for most APT campaigns. Based on the identified characteristics, we also propose concrete prevention and detection approaches that make it possible to identify crucial malicious activities that are performed during APT campaigns.",
"title": ""
},
{
"docid": "df02dafb455e2b68035cf8c150e28a0a",
"text": "Blueberry, raspberry and strawberry may have evolved strategies for survival due to the different soil conditions available in their natural environment. Since this might be reflected in their response to rhizosphere pH and N form supplied, investigations were carried out in order to compare effects of nitrate and ammonium nutrition (the latter at two different pH regimes) on growth, CO2 gas exchange, and on the activity of key enzymes of the nitrogen metabolism of these plant species. Highbush blueberry (Vaccinium corymbosum L. cv. 13–16–A), raspberry (Rubus idaeus L. cv. Zeva II) and strawberry (Fragaria × ananassa Duch. cv. Senga Sengana) were grown in 10 L black polyethylene pots in quartz sand with and without 1% CaCO3 (w: v), respectively. Nutrient solutions supplied contained nitrate (6 mM) or ammonium (6 mM) as the sole nitrogen source. Compared with strawberries fed with nitrate nitrogen, supply of ammonium nitrogen caused a decrease in net photosynthesis and dry matter production when plants were grown in quartz sand without added CaCO3. In contrast, net photosynthesis and dry matter production increased in blueberries fed with ammonium nitrogen, while dry matter production of raspberries was not affected by the N form supplied. In quartz sand with CaCO3, ammonium nutrition caused less deleterious effects on strawberries, and net photosynthesis in raspberries increased as compared to plants grown in quartz sand without CaCO3 addition. Activity of nitrate reductase (NR) was low in blueberries and could only be detected in the roots of plants supplied with nitrate nitrogen. In contrast, NR activity was high in leaves, but low in roots of raspberry and strawberry plants. Ammonium nutrition caused a decrease in NR level in leaves. Activity of glutamine synthetase (GS) was high in leaves but lower in roots of blueberry, raspberry and strawberry plants. The GS level was not significantly affected by the nitrogen source supplied. The effects of nitrate or ammonium nitrogen on net photosynthesis, growth, and activity of enzymes in blueberry, raspberry and strawberry cultivars appear to reflect their different adaptability to soil pH and N form due to the conditions of their natural environment.",
"title": ""
},
{
"docid": "fc77be5db198932d6cb34e334a4cdb4b",
"text": "This thesis investigates how data mining algorithms can be used to predict Bodily Injury Liability Insurance claim payments based on the characteristics of the insured customer’s vehicle. The algorithms are tested on real data provided by the organizer of the competition. The data present a number of challenges such as high dimensionality, heterogeneity and missing variables. The problem is addressed using a combination of regression, dimensionality reduction, and classification techniques. Questa tesi si propone di esaminare come alcune tecniche di data mining possano essere usate per predirre l’ammontare dei danni che un’ assicurazione dovrebbe risarcire alle persone lesionate a partire dalle caratteristiche del veicolo del cliente assicurato. I dati utilizzati sono reali e la loro analisi presenta diversi ostacoli dati dalle loro grandi dimensioni, dalla loro eterogeneitá e da alcune variabili mancanti. ll problema é stato affrontato utilizzando una combinazione di tecniche di regressione, di riduzione di dimensionalitá e di classificazione.",
"title": ""
},
{
"docid": "cb00e564a81ace6b75e776f1fe41fb8f",
"text": "INDIVIDUAL PROCESSES IN INTERGROUP BEHAVIOR ................................ 3 From Individual to Group Impressions ...................................................................... 3 GROUP MEMBERSHIP AND INTERGROUP BEHAVIOR .................................. 7 The Scope and Range of Ethnocentrism .................................................................... 8 The Development of Ethnocentrism .......................................................................... 9 Intergroup Conflict and Competition ........................................................................ 12 Interpersonal and intergroup behavior ........................................................................ 13 Intergroup conflict and group cohesion ........................................................................ 15 Power and status in intergroup behavior ...................................................................... 16 Social Categorization and Intergroup Behavior ........................................................ 20 Social categorization: cognitions, values, and groups ...................................................... 20 Social categorization a d intergroup discrimination ...................................................... 23 Social identity and social comparison .......................................................................... 24 THE REDUCTION FINTERGROUP DISCRIMINATION ................................ 27 Intergroup Cooperation and Superordinate Goals \" 28 Intergroup Contact. .... ................................................................................................ 28 Multigroup Membership and \"lndividualizat~’on\" of the Outgroup .......................... 29 SUMMARY .................................................................................................................... 30",
"title": ""
},
{
"docid": "5bbd4675eb1b408895f29340c3cd074a",
"text": "We performed underground real-time tests to obtain alpha particle-induced soft error rates (α-SER) with high accuracies for SRAMs with 180 nm – 90 nm technologies and studied the scaling trend of α-SERs. In order to estimate the maximum permissive rate of alpha emission from package resin, the α-SER was compared to the neutron-induced soft error rate (n-SER) obtained from accelerated tests. We found that as devices are scaled down, the α-SER increased while the n-SER slightly decreased, and that the α-SER could be greater than the n-SER in 90 nm technology even when the ultra-low-alpha (ULA) grade, with the alpha emission rate ≫ 1 × 10<sup>−3</sup> cm<sup>−2</sup>h<sup>−1</sup>, was used for package resin. We also performed computer simulations to estimate scaling trends of both α-SER and n-SER up to 45 nm technologies, and noticed that the α-SER decreased from 65 nm technology while the n-SER increased from 45 nm technology due to direct ionization from the protons generated in the n + Si nuclear reaction.",
"title": ""
},
{
"docid": "1a8df1f14f66c0ff09679ea5bbfc2c36",
"text": "Making strategic decision on new manufacturing technology investments is difficult. New technologies are usually costly, affected by numerous factors, and the potential benefits are often hard to justify prior to implementation. Traditionally, decisions are made based upon intuition and past experience, sometimes with the support of multicriteria decision support tools. However, these approaches do not retain and reuse knowledge, thus managers are not able to make effective use of their knowledge and experience of previously completed projects to help with the prioritisation of future projects. In this paper, a hybrid intelligent system integrating case-based reasoning (CBR) and the fuzzy ARTMAP (FAM) neural network model is proposed to support managers in making timely and optimal manufacturing technology investment decisions. The system comprises a case library that holds the details of past technology investment projects. Each project proposal is characterised by a set of features determined by human experts. The FAM network is then employed to match the features of a new proposal with those from historical cases. Similar cases are retrieved and adapted, and information on these cases can be utilised as an input to prioritisation of new projects. A case study is conducted to illustrate the applicability and effectiveness of the approach, with the results presented and analysed. Implications of the proposed approach are discussed, and suggestions for further work are outlined. r 2005 Published by Elsevier B.V.",
"title": ""
},
{
"docid": "4f2e9ff72d6e273877a978600e6fbd40",
"text": "Fraud isn't new, but in the eyes of many experts, phishing and crimeware threaten to topple society's overall stability because they erode trust in its underlying computational infrastructure. Most people agree that phishing and crimeware must be fought, but to do so effectively, we must fully understand both types of threat; that starts by quantifying how and when people fall for deceit. In this article, we look closer at how to perform fraud experiments. Researchers typically use three approaches to quantify fraud: surveys, in-lab experiments, and naturalistic experiments.",
"title": ""
},
{
"docid": "926db14af35f9682c28a64e855fb76e5",
"text": "This paper reports about the development of a Named Entity Recognition (NER) system for Bengali using the statistical Conditional Random Fields (CRFs). The system makes use of the different contextual information of the words along with the variety of features that are helpful in predicting the various named entity (NE) classes. A portion of the partially NE tagged Bengali news corpus, developed from the archive of a leading Bengali newspaper available in the web, has been used to develop the system. The training set consists of 150K words and has been manually annotated with a NE tagset of seventeen tags. Experimental results of the 10-fold cross validation test show the effectiveness of the proposed CRF based NER system with an overall average Recall, Precision and F-Score values of 93.8%, 87.8% and 90.7%, respectively.",
"title": ""
},
{
"docid": "c88c4097b0cf90031bbf3778d25bb87a",
"text": "In this paper we introduce a new data set consisting of user comments posted to the website of a German-language Austrian newspaper. Professional forum moderators have annotated 11,773 posts according to seven categories they considered crucial for the efficient moderation of online discussions in the context of news articles. In addition to this taxonomy and annotated posts, the data set contains one million unlabeled posts. Our experimental results using six methods establish a first baseline for predicting these categories. The data and our code are available for research purposes from https://ofai.github.io/million-post-corpus.",
"title": ""
},
{
"docid": "df4883ac490f3a27b2dbc310867a3534",
"text": "We present OpenLambda, a new, open-source platform for building next-generation web services and applications in the burgeoning model of serverless computation. We describe the key aspects of serverless computation, and present numerous research challenges that must be addressed in the design and implementation of such systems. We also include a brief study of current web applications, so as to better motivate some aspects of serverless application construction.",
"title": ""
},
{
"docid": "b13d4d5253a116153778d0f343bf76d7",
"text": "OBJECTIVES\nThe purpose of this study was to investigate the effect of dynamic soft tissue mobilisation (STM) on hamstring flexibility in healthy male subjects.\n\n\nMETHODS\nForty five males volunteered to participate in a randomised, controlled single blind design study. Volunteers were randomised to either control, classic STM, or dynamic STM intervention. The control group was positioned prone for 5 min. The classic STM group received standard STM techniques performed in a neutral prone position for 5 min. The dynamic STM group received all elements of classic STM followed by distal to proximal longitudinal strokes performed during passive, active, and eccentric loading of the hamstring. Only specific areas of tissue tightness were treated during the dynamic phase. Hamstring flexibility was quantified as hip flexion angle (HFA) which was the difference between the total range of straight leg raise and the range of pelvic rotation. Pre- and post-testing was conducted for the subjects in each group. A one-way ANCOVA followed by pairwise post-hoc comparisons was used to determine whether change in HFA differed between groups. The alpha level was set at 0.05.\n\n\nRESULTS\nIncrease in hamstring flexibility was significantly greater in the dynamic STM group than either the control or classic STM groups with mean (standard deviation) increase in degrees in the HFA measures of 4.7 (4.8), -0.04 (4.8), and 1.3 (3.8), respectively.\n\n\nCONCLUSIONS\nDynamic soft tissue mobilisation (STM) significantly increased hamstring flexibility in healthy male subjects.",
"title": ""
},
{
"docid": "7d0105cace2150b0e76ef4b5585772ad",
"text": "Peer-to-peer (P2P) accommodation rentals continue to grow at a phenomenal rate. Examining how this business model affects the competitive landscape of accommodation services is of strategic importance to hotels and tourism destinations. This study explores the competitive edge of P2P accommodation in comparison to hotels by extracting key content and themes from online reviews to explain the key service attributes sought by guests. The results from text analytics using terminology extraction and word co-occurrence networks indicate that even though guests expect similar core services such as clean rooms and comfortable beds, different attributes support the competitive advantage of hotels and P2P rentals. While conveniences offered by hotels are unparalleled by P2P accommodation, the latter appeal to consumers driven by experiential and social motivations. Managerial implications for hotels and P2P accommodation",
"title": ""
},
{
"docid": "7808ed17e6e7fa189e6b33922573af56",
"text": "The communication needs of Earth observation satellites is steadily increasing. Within a few years, the data rate of such satellites will exceed 1 Gbps, the angular resolution of sensors will be less than 1 μrad, and the memory size of onboard data recorders will be beyond 1 Tbytes. Compared to radio frequency links, optical communications in space offer various advantages such as smaller and lighter equipment, higher data rates, limited risk of interference with other communications systems, and the effective use of frequency resources. This paper describes and compares the major features of radio and optical frequency communications systems in space and predicts the needs of future satellite communications.",
"title": ""
},
{
"docid": "2e6af4ea3a375f67ce5df110a31aeb85",
"text": "Controlled power system separation, which separates the transmission system into islands in a controlled manner, is considered the final resort against a blackout under severe disturbances, e.g., cascading events. Three critical problems of controlled separation are where and when to separate and what to do after separation, which are rarely studied together. They are addressed in this paper by a proposed unified controlled separation scheme based on synchrophasors. The scheme decouples the three problems by partitioning them into sub-problems handled strategically in three time stages: the Offline Analysis stage determines elementary generator groups, optimizes potential separation points in between, and designs post-separation control strategies; the Online Monitoring stage predicts separation boundaries by modal analysis on synchrophasor data; the Real-time Control stage calculates a synchrophasor-based separation risk index for each boundary to predict the time to perform separation. The proposed scheme is demonstrated on a 179-bus power system by case studies.",
"title": ""
},
{
"docid": "0c5ebaaf0fd85312428b5d6b7479bfb6",
"text": "BACKGROUND\nPovidone-iodine solution is an antiseptic that is used worldwide as surgical paint and is considered to have a low irritant potential. Post-surgical severe irritant dermatitis has been described after the misuse of this antiseptic in the surgical setting.\n\n\nMETHODS\nBetween January 2011 and June 2013, 27 consecutive patients with post-surgical contact dermatitis localized outside of the surgical incision area were evaluated. Thirteen patients were also available for patch testing.\n\n\nRESULTS\nAll patients developed dermatitis the day after the surgical procedure. Povidone-iodine solution was the only liquid in contact with the skin of our patients. Most typical lesions were distributed in a double lumbar parallel pattern, but they were also found in a random pattern or in areas where a protective pad or an occlusive medical device was glued to the skin. The patch test results with povidone-iodine were negative.\n\n\nCONCLUSIONS\nPovidone-iodine-induced post-surgical dermatitis may be a severe complication after prolonged surgical procedures. As stated in the literature and based on the observation that povidone-iodine-induced contact irritant dermatitis occurred in areas of pooling or occlusion, we speculate that povidone-iodine together with occlusion were the causes of the dermatitis epidemic that occurred in our surgical setting. Povidone-iodine dermatitis is a problem that is easily preventable through the implementation of minimal routine changes to adequately dry the solution in contact with the skin.",
"title": ""
},
{
"docid": "8822138c493df786296c02315bea5802",
"text": "Photodefinable Polyimides (PI) and polybenz-oxazoles (PBO) which have been widely used for various electronic applications such as buffer coating, interlayer dielectric and protection layer usually need high temperature cure condition over 300 °C to complete the cyclization and achieve good film properties. In addition, PI and PBO are also utilized recently for re-distribution layer of wafer level package. In this application, lower temperature curability is strongly required in order to prevent the thermal damage of the semi-conductor device and the other packaging material. Then, to meet this requirement, we focused on pre-cyclized polyimide with phenolic hydroxyl groups since this polymer showed the good solubility to aqueous TMAH and there was no need to apply high temperature cure condition. As a result of our study, the positive-tone photodefinable material could be obtained by using DNQ and combination of epoxy cross-linker enabled to enhance the chemical and PCT resistance of the cured film made even at 170 °C. Furthermore, the adhesion to copper was improved probably due to secondary hydroxyl groups which were generated from reacted epoxide groups. In this report, we introduce our concept of novel photodefinable positive-tone polyimide for low temperature cure.",
"title": ""
},
{
"docid": "4abdc5883ccd6b4b218ce2d86da0784d",
"text": "Crowd-based events, such as football matches, are considered generators of crime. Criminological research on the influence of football matches has consistently uncovered differences in spatial crime patterns, particularly in the areas around stadia. At the same time, social media data mining research on football matches shows a high volume of data created during football events. This study seeks to build on these two research streams by exploring the spatial relationship between crime events and nearby Twitter activity around a football stadium, and estimating the possible influence of tweets for explaining the presence or absence of crime in the area around a football stadium on match days. Aggregated hourly crime data and geotagged tweets for the same area around the stadium are analysed using exploratory and inferential methods. Spatial clustering, spatial statistics, text mining as well as a hurdle negative binomial logistic regression for spatiotemporal explanations are utilized in our analysis. Findings indicate a statistically significant spatial relationship between three crime types (criminal damage, theft and handling, and violence against the person) and tweet patterns, and that such a relationship can be used to explain future incidents of crime.",
"title": ""
},
{
"docid": "2acc2ab831aa2bc7ebe7047223ba1a30",
"text": "The seemingly unshakeable accuracy of Moore's law - which states that the speed of computers; as measured by the number of transistors that can be placed on a single chip, will double every year or two - has been credited with being the engine of the electronics revolution, and is regarded as the premier example of a self-fulfilling prophecy and technological trajectory in both the academic and popular press. Although many factors have kept Moore's law as an industry benchmark, it is the entry of foreign competition that seems to have played a critical role in maintaining the pace of Moore's law in the early VLSI transition. Many different kinds of chips used many competing logic families. DRAMs and microprocessors became critical to the semiconductor industry, yet were unknown during the original formulation of Moore's law",
"title": ""
},
{
"docid": "c273cdd1dc3e1ab52fa48d033d0c3dd4",
"text": "This paper discusses concurrent design and analysis of the first 8.5 kV electrostatic discharge (ESD) protected single-pole ten-throw (SP10T) transmit/receive (T/R) switch for quad-band (0.85/0.9/1.8/1.9 GHz) GSM and multiple-band WCDMA smartphones. Implemented in a 0.18 μm SOI CMOS, this SP10T employs a series-shunt topology for the time-division duplex (TDD) transmitting (Tx) and receiving (Rx), and frequency-division duplex (FDD) transmitting/receiving (TRx) branches to handle the high GSM transmitter power. The measured P0.1 dB, insertion loss and Tx-Rx isolation in the lower/upper bands are 36.4/34.2 dBm, 0.48/0.81 dB and 43/40 dB, respectively, comparable to commercial products with no/little ESD protection in high-cost SOS and GaAs technologies. Feed-forward capacitor (FFC) and AC-floating bias techniques are used to further improve the linearity. An ESD-switch co-design technique is developed that enables simultaneous whole-chip design optimization for both ESD protection and SP10T circuits.",
"title": ""
}
] | scidocsrr |
525f62c1cada29f217b073884f2e88a4 | Aliasing Detection and Reduction in Plenoptic Imaging | [
{
"docid": "59c83aa2f97662c168316f1a4525fd4d",
"text": "Light field microscopy is a new technique for high-speed volumetric imaging of weakly scattering or fluorescent specimens. It employs an array of microlenses to trade off spatial resolution against angular resolution, thereby allowing a 4-D light field to be captured using a single photographic exposure without the need for scanning. The recorded light field can then be used to computationally reconstruct a full volume. In this paper, we present an optical model for light field microscopy based on wave optics, instead of previously reported ray optics models. We also present a 3-D deconvolution method for light field microscopy that is able to reconstruct volumes at higher spatial resolution, and with better optical sectioning, than previously reported. To accomplish this, we take advantage of the dense spatio-angular sampling provided by a microlens array at axial positions away from the native object plane. This dense sampling permits us to decode aliasing present in the light field to reconstruct high-frequency information. We formulate our method as an inverse problem for reconstructing the 3-D volume, which we solve using a GPU-accelerated iterative algorithm. Theoretical limits on the depth-dependent lateral resolution of the reconstructed volumes are derived. We show that these limits are in good agreement with experimental results on a standard USAF 1951 resolution target. Finally, we present 3-D reconstructions of pollen grains that demonstrate the improvements in fidelity made possible by our method.",
"title": ""
}
] | [
{
"docid": "7ea55980a5cd5fce415a24170b027d38",
"text": "We propose a mathematical model to assess the effects of irradiated (or transgenic) male insects introduction in a previously infested region. The release of sterile male insects aims to displace gradually the natural (wild) insect from the habitat. We discuss the suitability of this release technique when applied to peri-domestically adapted Aedes aegypti mosquitoes which are transmissors of Yellow Fever and Dengue disease.",
"title": ""
},
{
"docid": "24c6f0454bad7506a600483434914be0",
"text": "Query answers from on-line databases can easily be corrupted by hackers or malicious database publishers. Thus it is important to provide mechanisms which allow clients to trust the results from on-line queries. Authentic publication allows untrusted publishers to answer securely queries from clients on behalf of trusted off-line data owners. Publishers validate answers using hard-to-forge verification objects VOs), which clients can check efficiently. This approach provides greater scalability, by making it easy to add more publishers, and better security, since on-line publishers do not need to be trusted. To make authentic publication attractive, it is important for the VOs to be small, efficient to compute, and efficient to verify. This has lead researchers to develop independently several different schemes for efficient VO computation based on specific data structures. Our goal is to develop a unifying framework for these disparate results, leading to a generalized security result. In this paper we characterize a broad class of data structures which we call Search DAGs, and we develop a generalized algorithm for the construction of VOs for Search DAGs. We prove that the VOs thus constructed are secure, and that they are efficient to compute and verify. We demonstrate how this approach easily captures existing work on simple structures such as binary trees, multi-dimensional range trees, tries, and skip lists. Once these are shown to be Search DAGs, the requisite security and efficiency results immediately follow from our general theorems. Going further, we also use Search DAGs to produce and prove the security of authenticated versions of two complex data models for efficient multi-dimensional range searches. This allows efficient VOs to be computed (size O(log N + T)) for typical one- and two-dimensional range queries, where the query answer is of size T and the database is of size N. We also show I/O-efficient schemes to construct the VOs. For a system with disk blocks of size B, we answer one-dimensional and three-sided range queries and compute the VOs with O(logB N + T/B) I/O operations using linear size data structures.",
"title": ""
},
{
"docid": "1c058d6a648b2190500340f762eeff78",
"text": "An ever-increasing number of computer vision and image/video processing challenges are being approached using deep convolutional neural networks, obtaining state-of-the-art results in object recognition and detection, semantic segmentation, action recognition, optical flow, and super resolution. Hardware acceleration of these algorithms is essential to adopt these improvements in embedded and mobile computer vision systems. We present a new architecture, design, and implementation, as well as the first reported silicon measurements of such an accelerator, outperforming previous work in terms of power, area, and I/O efficiency. The manufactured device provides up to 196 GOp/s on 3.09 $\\text {mm}^{2}$ of silicon in UMC 65-nm technology and can achieve a power efficiency of 803 GOp/s/W. The massively reduced bandwidth requirements make it the first architecture scalable to TOp/s performance.",
"title": ""
},
{
"docid": "8de530a30b8352e36b72f3436f47ffb2",
"text": "This paper presents a Bayesian optimization method with exponential convergencewithout the need of auxiliary optimization and without the δ-cover sampling. Most Bayesian optimization methods require auxiliary optimization: an additional non-convex global optimization problem, which can be time-consuming and hard to implement in practice. Also, the existing Bayesian optimization method with exponential convergence [ 1] requires access to the δ-cover sampling, which was considered to be impractical [ 1, 2]. Our approach eliminates both requirements and achieves an exponential convergence rate.",
"title": ""
},
{
"docid": "4f5b76f7954779bf48da0ecf458d093f",
"text": "A probabilistic framework is presented that enables image registration, tissue classification, and bias correction to be combined within the same generative model. A derivation of a log-likelihood objective function for the unified model is provided. The model is based on a mixture of Gaussians and is extended to incorporate a smooth intensity variation and nonlinear registration with tissue probability maps. A strategy for optimising the model parameters is described, along with the requisite partial derivatives of the objective function.",
"title": ""
},
{
"docid": "b96836da7518ceccace39347f06067c6",
"text": "A number of visual question answering approaches have been proposed recently, aiming at understanding the visual scenes by answering the natural language questions. While the image question answering has drawn significant attention, video question answering is largely unexplored. Video-QA is different from Image-QA since the information and the events are scattered among multiple frames. In order to better utilize the temporal structure of the videos and the phrasal structures of the answers, we propose two mechanisms: the re-watching and the re-reading mechanisms and combine them into the forgettable-watcher model. Then we propose a TGIF-QA dataset for video question answering with the help of automatic question generation. Finally, we evaluate the models on our dataset. The experimental results show the effectiveness of our proposed models.",
"title": ""
},
{
"docid": "e264903ee2759f638dcd60a715cbb994",
"text": "Bioinspired hardware holds the promise of low-energy, intelligent, and highly adaptable computing systems. Applications span from automatic classification for big data management, through unmanned vehicle control, to control for biomedical prosthesis. However, one of the major challenges of fabricating bioinspired hardware is building ultrahigh-density networks out of complex processing units interlinked by tunable connections. Nanometer-scale devices exploiting spin electronics (or spintronics) can be a key technology in this context. In particular, magnetic tunnel junctions (MTJs) are well suited for this purpose because of their multiple tunable functionalities. One such functionality, nonvolatile memory, can provide massive embedded memory in unconventional circuits, thus escaping the von-Neumann bottleneck arising when memory and processors are located separately. Other features of spintronic devices that could be beneficial for bioinspired computing include tunable fast nonlinear dynamics, controlled stochasticity, and the ability of single devices to change functions in different operating conditions. Large networks of interacting spintronic nanodevices can have their interactions tuned to induce complex dynamics such as synchronization, chaos, soliton diffusion, phase transitions, criticality, and convergence to multiple metastable states. A number of groups have recently proposed bioinspired architectures that include one or several types of spintronic nanodevices. In this paper, we show how spintronics can be used for bioinspired computing. We review the different approaches that have been proposed, the recent advances in this direction, and the challenges toward fully integrated spintronics complementary metal-oxide-semiconductor (CMOS) bioinspired hardware.",
"title": ""
},
{
"docid": "a41bb1fe5670cc865bf540b34848f45f",
"text": "The general idea of discovering knowledge in large amounts of data is both appealing and intuitive. Typically we focus our attention on learning algorithms, which provide the core capability of generalizing from large numbers of small, very specific facts to useful high-level rules; these learning techniques seem to hold the most excitement and perhaps the most substantive scientific content in the knowledge discovery in databases (KDD) enterprise. However, when we engage in real-world discovery tasks, we find that they can be extremely complex, and that induction of rules is only one small part of the overall process. While others have written overviews of \"the concept of KDD, and even provided block diagrams for \"knowledge discovery systems,\" no one has begun to identify all of the building blocks in a realistic KDD process. This is what we attempt to do here. Besides bringing into the discussion several parts of the process that have received inadequate attention in the KDD community, a careful elucidation of the steps in a realistic knowledge discovery process can provide a framework for comparison of different technologies and tools that are almost impossible to compare without a clean model.",
"title": ""
},
{
"docid": "ceebc0d380be2b2f5e76da5f9f006530",
"text": "This paper addresses the issue of motion estimation on image sequences. The standard motion equation used to compute the apparent motion of image irradiance patterns is an invariance brightness based hypothesis called the optical flow constraint. Other equations can be used, in particular the extended optical flow constraint, which is a variant of the optical flow constraint, inspired by the fluid mechanic mass conservation principle. In this paper, we propose a physical interpretation of this extended optical flow equation and a new model unifying the optical flow and the extended optical flow constraints. We present results obtained for synthetic and meteorological images.",
"title": ""
},
{
"docid": "af0328c3a271859d31c0e3993db7105e",
"text": "The increasing bandwidth demand in data centers and telecommunication infrastructures had prompted new electrical interface standards capable of operating up to 56Gb/s per-lane. The CEI-56G-VSR-PAM4 standard [1] defines PAM-4 signaling at 56Gb/s targeting chip-to-module interconnect. Figure 6.3.1 shows the measured S21 of a channel resembling such interconnects and the corresponding single-pulse response after TX-FIR and RX CTLE. Although the S21 is merely ∼10dB at 14GHz, the single-pulse response exhibits significant reflections from impedance discontinuities, mainly between package and PCB traces. These reflections are detrimental to PAM-4 signaling and cannot be equalized effectively by RX CTLE and/or a few taps of TX feed-forward equalization. This paper presents the design of a PAM-4 receiver using 10-tap direct decision-feedback equalization (DFE) targeting such VSR channels.",
"title": ""
},
{
"docid": "d8d95a9bccc8234fd444e14c96a4cfa5",
"text": "This paper presents a highly integrated, high performance four channel linear transimpedance amplifier (TIA) RFIC with a footprint of 2mmx3.5mm towards next generation 100G/400G miniaturized coherent receivers. A TIA of such form may become indispensable as the size, complexity and cost of receivers continue to reduce. The design has been realized in a 130nm SiGe BiCMOS process for a low cost, high performance solution towards long- haul/metro applications. The TIA is capable of providing control functions either digitally through an on-chip 4-wire serial-peripheral interface (SPI) or in analog mode. Analog mode is provided as an alternative control for real-time control and monitoring. To provide high input dynamic range, a variable gain control block is integrated for each channel, which can be used in automatic or manual mode. The TIA has a differential input, differential output configuration that exhibits state-of-the-art THD of <;0.9% up to 500mVpp output voltage swing for input currents up to 2mApp and high isolation > 40dB between adjacent channels. A high transimpedance gain (Zt) up to ~7KΩ with a large dynamic range up to 37dB and variable bandwidth up to 34GHz together with low average input noise density of 20pA/√Hz has been achieved. To the authors' knowledge, these metrics combined with diverse functionality and high integration have not been exhibited so far. This paper intends to report a state-of-the-art high-baud rate TIA and provide insight into possibilities for further integration.",
"title": ""
},
{
"docid": "2bfd884e92a26d017a7854be3dfb02e8",
"text": "The tasks in fine-grained opinion mining can be regarded as either a token-level sequence labeling problem or as a semantic compositional task. We propose a general class of discriminative models based on recurrent neural networks (RNNs) and word embeddings that can be successfully applied to such tasks without any taskspecific feature engineering effort. Our experimental results on the task of opinion target identification show that RNNs, without using any hand-crafted features, outperform feature-rich CRF-based models. Our framework is flexible, allows us to incorporate other linguistic features, and achieves results that rival the top performing systems in SemEval-2014.",
"title": ""
},
{
"docid": "4cf2c80fe55f2b41816f23895b64a29c",
"text": "Visual question answering is fundamentally compositional in nature—a question like where is the dog? shares substructure with questions like what color is the dog? and where is the cat? This paper seeks to simultaneously exploit the representational capacity of deep networks and the compositional linguistic structure of questions. We describe a procedure for constructing and learning neural module networks, which compose collections of jointly-trained neural “modules” into deep networks for question answering. Our approach decomposes questions into their linguistic substructures, and uses these structures to dynamically instantiate modular networks (with reusable components for recognizing dogs, classifying colors, etc.). The resulting compound networks are jointly trained. We evaluate our approach on two challenging datasets for visual question answering, achieving state-of-the-art results on both the VQA natural image dataset and a new dataset of complex questions about abstract shapes.",
"title": ""
},
{
"docid": "73015dbfed8e1ed03965779a93e14190",
"text": "The DataMiningGrid system has been designed to meet the requirements of modern and distributed data mining scenarios. Based on the Globus Toolkit and other open technology and standards, the DataMiningGrid system provides tools and services facilitating the grid-enabling of data mining applications without any intervention on the application side. Critical features of the system include flexibility, extensibility, scalability, efficiency, conceptual simplicity and ease of use. The system has been developed and evaluated on the basis of a diverse set of use cases from different sectors in science and technology. The DataMiningGrid software is freely available under Apache License 2.0. c © 2007 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "b29e611c608a824009cf4ffea8892aa9",
"text": "The purpose of this study was to analyze characteristics of individuals working in the profession of neuropsychology in Latin America in order to understand their background, professional training, current work situation, assessment and diagnostic procedures used, rehabilitation techniques employed, population targeted, teaching responsibilities, and research activities. A total of 808 professionals working in neuropsychology from 17 countries in Latin America completed an online survey between July 2013 and January 2014. The majority of participants were female and the mean age was 36.76 years (range 21-74 years). The majority of professionals working in neuropsychology in Latin America have a background in psychology, with some additional specialized training and supervised clinical practice. Over half work in private practice, universities, or private clinics and are quite satisfied with their work. Those who identify themselves as clinicians primarily work with individuals with learning problems, ADHD, mental retardation, TBI, dementia, and stroke. The majority respondents cite the top barrier in the use of neuropsychological instruments to be the lack of normative data for their countries. The top perceived barriers to the field include: lack of academic training programs, lack of clinical training opportunities, lack of willingness to collaborate between professionals, and lack of access to neuropsychological instruments. There is a need in Latin America to increase regulation, improve graduate curriculums, enhance existing clinical training, develop professional certification programs, validate existing neuropsychological tests, and create new, culturally-relevant instruments.",
"title": ""
},
{
"docid": "3dee885a896e9864ff06b546d64f6df1",
"text": "BACKGROUND\nThe 12-item Short Form Health Survey (SF-12) as a shorter alternative of the SF-36 is largely used in health outcomes surveys. The aim of this study was to validate the SF-12 in Iran.\n\n\nMETHODS\nA random sample of the general population aged 15 years and over living in Tehran, Iran completed the SF-12. Reliability was estimated using internal consistency and validity was assessed using known groups comparison and convergent validity. In addition, the factor structure of the questionnaire was extracted by performing both exploratory factor analysis (EFA) and confirmatory factor analysis (CFA).\n\n\nRESULTS\nIn all, 5587 individuals were studied (2721 male and 2866 female). The mean age and formal education of the respondents were 35.1 (SD = 15.4) and 10.2 (SD = 4.4) years respectively. The results showed satisfactory internal consistency for both summary measures, that are the Physical Component Summary (PCS) and the Mental Component Summary (MCS); Cronbach's alpha for PCS-12 and MCS-12 was 0.73 and 0.72, respectively. Known-groups comparison showed that the SF-12 discriminated well between men and women and those who differed in age and educational status (P < 0.001). In addition, correlations between the SF-12 scales and single items showed that the physical functioning, role physical, bodily pain and general health subscales correlated higher with the PCS-12 score, while the vitality, social functioning, role emotional and mental health subscales more correlated with the MCS-12 score lending support to its good convergent validity. Finally the principal component analysis indicated a two-factor structure (physical and mental health) that jointly accounted for 57.8% of the variance. The confirmatory factory analysis also indicated a good fit to the data for the two-latent structure (physical and mental health).\n\n\nCONCLUSION\nIn general the findings suggest that the SF-12 is a reliable and valid measure of health related quality of life among Iranian population. However, further studies are needed to establish stronger psychometric properties for this alternative form of the SF-36 Health Survey in Iran.",
"title": ""
},
{
"docid": "d55b50d30542099f8f55cfeb1aafd4dc",
"text": "Many avian species persist in human-dominated landscapes; however, little is known about the demographic consequences of urbanization in these populations. Given that urban habitats introduce novel benefits (e.g., anthropogenic resources) and pressures (e.g., mortality risks), conflicting mechanisms have been hypothesized to drive the dynamics of urban bird populations. Top-down processes such as predation predict reduced survivorship in suburban and urban habitats, whereas bottom-up processes, such as increased resource availability, predict peak survival in suburban habitats. In this study, we use mark–recapture data of seven focal species encountered between 2000 and 2012 to test hypotheses about the processes that regulate avian survival along an urbanization gradient in greater Washington, D.C., USA. American Robin, Gray Catbird, Northern Cardinal, and Song Sparrow exhibited peak survival at intermediate and upper portions of the rural-to-urban gradient; this pattern supports the hypothesis that bottom-up processes (e.g., resource availability) can drive patterns of avian survival in some species. In contrast, Carolina Chickadee showed no response and Carolina and House Wren showed a slightly negative response to urban land cover. These contrasting results underscore the need for comparative studies documenting the mechanisms that drive demography and how those factors differentially affect urban adapted and urban avoiding species.",
"title": ""
},
{
"docid": "2ed57c4430810b2b72a64f2315bf1160",
"text": "This study was an attempt to identify the interlingual strategies employed to translate English subtitles into Persian and to determine their frequency, as well. Contrary to many countries, subtitling is a new field in Iran. The study, a corpus-based, comparative, descriptive, non-judgmental analysis of an English-Persian parallel corpus, comprised English audio scripts of five movies of different genres, with Persian subtitles. The study’s theoretical framework was based on Gottlieb’s (1992) classification of subtitling translation strategies. The results indicated that all Gottlieb’s proposed strategies were applicable to the corpus with some degree of variation of distribution among different film genres. The most frequently used strategy was “transfer” at 54.06%; the least frequently used strategies were “transcription” and “decimation” both at 0.81%. It was concluded that the film genre plays a crucial role in using different strategies.",
"title": ""
},
{
"docid": "fe383fbca6d67d968807fb3b23489ad1",
"text": "In this project, we attempt to apply machine-learning algorithms to predict Bitcoin price. For the first phase of our investigation, we aimed to understand and better identify daily trends in the Bitcoin market while gaining insight into optimal features surrounding Bitcoin price. Our data set consists of over 25 features relating to the Bitcoin price and payment network over the course of five years, recorded daily. Using this information we were able to predict the sign of the daily price change with an accuracy of 98.7%. For the second phase of our investigation, we focused on the Bitcoin price data alone and leveraged data at 10-minute and 10-second interval timepoints, as we saw an opportunity to evaluate price predictions at varying levels of granularity and noisiness. By predicting the sign of the future change in price, we are modeling the price prediction problem as a binomial classification task, experimenting with a custom algorithm that leverages both random forests and generalized linear models. These results had 50-55% accuracy in predicting the sign of future price change using 10 minute time intervals.",
"title": ""
},
{
"docid": "149de84d7cbc9ea891b4b1297957ade7",
"text": "Deep convolutional neural networks (CNNs) have had a major impact in most areas of image understanding, including object category detection. In object detection, methods such as R-CNN have obtained excellent results by integrating CNNs with region proposal generation algorithms such as selective search. In this paper, we investigate the role of proposal generation in CNN-based detectors in order to determine whether it is a necessary modelling component, carrying essential geometric information not contained in the CNN, or whether it is merely a way of accelerating detection. We do so by designing and evaluating a detector that uses a trivial region generation scheme, constant for each image. Combined with SPP, this results in an excellent and fast detector that does not require to process an image with algorithms other than the CNN itself. We also streamline and simplify the training of CNN-based detectors by integrating several learning steps in a single algorithm, as well as by proposing a number of improvements that accelerate detection.",
"title": ""
}
] | scidocsrr |
2f421be3d10cc8988a5c134cf0852ec9 | Semantically Decomposing the Latent Spaces of Generative Adversarial Networks | [
{
"docid": "fdf1b2f49540d5d815f2d052f2570afe",
"text": "It has been recently shown that Generative Adversarial Networks (GANs) can produce synthetic images of exceptional visual fidelity. In this work, we propose the first GAN-based method for automatic face aging. Contrary to previous works employing GANs for altering of facial attributes, we make a particular emphasize on preserving the original person's identity in the aged version of his/her face. To this end, we introduce a novel approach for “Identity-Preserving” optimization of GAN's latent vectors. The objective evaluation of the resulting aged and rejuvenated face images by the state-of-the-art face recognition and age estimation solutions demonstrate the high potential of the proposed method.",
"title": ""
},
{
"docid": "7c799fdfde40289ba4e0ce549f02a5ad",
"text": "In this paper, we design a benchmark task and provide the associated datasets for recognizing face images and link them to corresponding entity keys in a knowledge base. More specifically, we propose a benchmark task to recognize one million celebrities from their face images, by using all the possibly collected face images of this individual on the web as training data. The rich information provided by the knowledge base helps to conduct disambiguation and improve the recognition accuracy, and contributes to various real-world applications, such as image captioning and news video analysis. Associated with this task, we design and provide concrete measurement set, evaluation protocol, as well as training data. We also present in details our experiment setup and report promising baseline results. Our benchmark task could lead to one of the largest classification problems in computer vision. To the best of our knowledge, our training dataset, which contains 10M images in version 1, is the largest publicly available one in the world.",
"title": ""
}
] | [
{
"docid": "21bd6f42c74930c8e9876ff4f5ef1ee2",
"text": "Dynamic channel allocation (DCA) is the key technology to efficiently utilize the spectrum resources and decrease the co-channel interference for multibeam satellite systems. Most works allocate the channel on the basis of the beam traffic load or the user terminal distribution of the current moment. These greedy-like algorithms neglect the intrinsic temporal correlation among the sequential channel allocation decisions, resulting in the spectrum resources underutilization. To solve this problem, a novel deep reinforcement learning (DRL)-based DCA (DRL-DCA) algorithm is proposed. Specifically, the DCA optimization problem, which aims at minimizing the service blocking probability, is formulated in the multibeam satellite systems. Due to the temporal correlation property, the DCA optimization problem is modeled as the Markov decision process (MDP) which is the dominant analytical approach in DRL. In modeled MDP, the system state is reformulated into an image-like fashion, and then, convolutional neural network is used to extract useful features. Simulation results show that the DRL-DCA algorithm can decrease the blocking probability and improve the carried traffic and spectrum efficiency compared with other channel allocation algorithms.",
"title": ""
},
{
"docid": "3614bf0a54290ea80a2d6f061e830c91",
"text": "0749-5978/$ see front matter 2008 Elsevier Inc. A doi:10.1016/j.obhdp.2008.04.002 * Corresponding author. Fax: +1 407 823 3725. E-mail address: dmayer@bus.ucf.edu (D.M. Mayer) This research examines the relationships between top management and supervisory ethical leadership and group-level outcomes (e.g., deviance, OCB) and suggests that ethical leadership flows from one organizational level to the next. Drawing on social learning theory [Bandura, A. (1977). Social learning theory. Englewood Cliffs, NJ: Prentice-Hall.; Bandura, A. (1986). Social foundations of thought and action. Englewood Cliffs, NJ: Prentice-Hall.] and social exchange theory [Blau, p. (1964). Exchange and power in social life. New York: John Wiley.], the results support our theoretical model using a sample of 904 employees and 195 managers in 195 departments. We find a direct negative relationship between both top management and supervisory ethical leadership and group-level deviance, and a positive relationship with group-level OCB. Finally, consistent with the proposed trickle-down model, the effects of top management ethical leadership on group-level deviance and OCB are mediated by supervisory ethical leadership. 2008 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "39861e2759b709883f3d37a65d13834b",
"text": "BACKGROUND\nDeveloping countries account for 99 percent of maternal deaths annually. While increasing service availability and maintaining acceptable quality standards, it is important to assess maternal satisfaction with care in order to make it more responsive and culturally acceptable, ultimately leading to enhanced utilization and improved outcomes. At a time when global efforts to reduce maternal mortality have been stepped up, maternal satisfaction and its determinants also need to be addressed by developing country governments. This review seeks to identify determinants of women's satisfaction with maternity care in developing countries.\n\n\nMETHODS\nThe review followed the methodology of systematic reviews. Public health and social science databases were searched. English articles covering antenatal, intrapartum or postpartum care, for either home or institutional deliveries, reporting maternal satisfaction from developing countries (World Bank list) were included, with no year limit. Out of 154 shortlisted abstracts, 54 were included and 100 excluded. Studies were extracted onto structured formats and analyzed using the narrative synthesis approach.\n\n\nRESULTS\nDeterminants of maternal satisfaction covered all dimensions of care across structure, process and outcome. Structural elements included good physical environment, cleanliness, and availability of adequate human resources, medicines and supplies. Process determinants included interpersonal behavior, privacy, promptness, cognitive care, perceived provider competency and emotional support. Outcome related determinants were health status of the mother and newborn. Access, cost, socio-economic status and reproductive history also influenced perceived maternal satisfaction. Process of care dominated the determinants of maternal satisfaction in developing countries. Interpersonal behavior was the most widely reported determinant, with the largest body of evidence generated around provider behavior in terms of courtesy and non-abuse. Other aspects of interpersonal behavior included therapeutic communication, staff confidence and competence and encouragement to laboring women.\n\n\nCONCLUSIONS\nQuality improvement efforts in developing countries could focus on strengthening the process of care. Special attention is needed to improve interpersonal behavior, as evidence from the review points to the importance women attach to being treated respectfully, irrespective of socio-cultural or economic context. Further research on maternal satisfaction is required on home deliveries and relative strength of various determinants in influencing maternal satisfaction.",
"title": ""
},
{
"docid": "a2df7bbce7247125ef18a17d7dbb2166",
"text": "Few studies have evaluated the effectiveness of cyberbullying prevention/intervention programs. The goals of the present study were to develop a Theory of Reasoned Action (TRA)-based video program to increase cyberbullying knowledge (1) and empathy toward cyberbullying victims (2), reduce favorable attitudes toward cyberbullying (3), decrease positive injunctive (4) and descriptive norms about cyberbullying (5), and reduce cyberbullying intentions (6) and cyberbullying behavior (7). One hundred sixty-seven college students were randomly assigned to an online video cyberbullying prevention program or an assessment-only control group. Immediately following the program, attitudes and injunctive norms for all four types of cyberbullying behavior (i.e., unwanted contact, malice, deception, and public humiliation), descriptive norms for malice and public humiliation, empathy toward victims of malice and deception, and cyberbullying knowledge significantly improved in the experimental group. At one-month follow-up, malice and public humiliation behavior, favorable attitudes toward unwanted contact, deception, and public humiliation, and injunctive norms for public humiliation were significantly lower in the experimental than the control group. Cyberbullying knowledge was significantly higher in the experimental than the control group. These findings demonstrate a brief cyberbullying video is capable of improving, at one-month follow-up, cyberbullying knowledge, cyberbullying perpetration behavior, and TRA constructs known to predict cyberbullying perpetration. Considering the low cost and ease with which a video-based prevention/intervention program can be delivered, this type of approach should be considered to reduce cyberbullying.",
"title": ""
},
{
"docid": "6d0259e1c4047964bdba90dc1ecb0a68",
"text": "In order to further understand what physiological characteristics make a human hand irreplaceable for many dexterous tasks, it is necessary to develop artificial joints that are anatomically correct while sharing similar dynamic features. In this paper, we address the problem of designing a two degree of freedom metacarpophalangeal (MCP) joint of an index finger. The artificial MCP joint is composed of a ball joint, crocheted ligaments, and a silicon rubber sleeve which as a whole provides the functions required of a human finger joint. We quantitatively validate the efficacy of the artificial joint by comparing its dynamic characteristics with that of two human subjects' index fingers by analyzing their impulse response with linear regression. Design parameters of the artificial joint are varied to highlight their effect on the joint's dynamics. A modified, second-order model is fit which accounts for non-linear stiffness and damping, and a higher order model is considered. Good fits are observed both in the human (R2 = 0.97) and the artificial joint of the index finger (R2 = 0.95). Parameter estimates of stiffness and damping for the artificial joint are found to be similar to those in the literature, indicating our new joint is a good approximation for an index finger's MCP joint.",
"title": ""
},
{
"docid": "80aa839635765902dc7631d8f9a6934c",
"text": "3D volumetric object generation/prediction from single 2D image is a quite challenging but meaningful task in 3D visual computing. In this paper, we propose a novel neural network architecture, named \"3DensiNet\", which uses density heat-map as an intermediate supervision tool for 2D-to-3D transformation. Specifically, we firstly present a 2D density heat-map to 3D volumetric object encoding-decoding network, which outperforms classical 3D autoencoder. Then we show that using 2D image to predict its density heat-map via a 2D to 2D encoding-decoding network is feasible. In addition, we leverage adversarial loss to fine tune our network, which improves the generated/predicted 3D voxel objects to be more similar to the ground truth voxel object. Experimental results on 3D volumetric prediction from 2D images demonstrates superior performance of 3DensiNet over other state-of-the-art techniques in handling 3D volumetric object generation/prediction from single 2D image.",
"title": ""
},
{
"docid": "c38a6685895c23620afb6570be4c646b",
"text": "Today, artificial neural networks (ANNs) are widely used in a variety of applications, including speech recognition, face detection, disease diagnosis, etc. And as the emerging field of ANNs, Long Short-Term Memory (LSTM) is a recurrent neural network (RNN) which contains complex computational logic. To achieve high accuracy, researchers always build large-scale LSTM networks which are time-consuming and power-consuming. In this paper, we present a hardware accelerator for the LSTM neural network layer based on FPGA Zedboard and use pipeline methods to parallelize the forward computing process. We also implement a sparse LSTM hidden layer, which consumes fewer storage resources than the dense network. Our accelerator is power-efficient and has a higher speed than ARM Cortex-A9 processor.",
"title": ""
},
{
"docid": "009d79972bd748d7cf5206bb188aba00",
"text": "Quasi-Newton methods are widely used in practise for convex loss minimization problems. These methods exhibit good empirical performanc e o a wide variety of tasks and enjoy super-linear convergence to the optimal s olution. For largescale learning problems, stochastic Quasi-Newton methods ave been recently proposed. However, these typically only achieve sub-linea r convergence rates and have not been shown to consistently perform well in practice s nce noisy Hessian approximations can exacerbate the effect of high-variance stochastic gradient estimates. In this work we propose V ITE, a novel stochastic Quasi-Newton algorithm that uses an existing first-order technique to reduce this va r ance. Without exploiting the specific form of the approximate Hessian, we show that V ITE reaches the optimum at a geometric rate with a constant step-size when de aling with smooth strongly convex functions. Empirically, we demonstrate im provements over existing stochastic Quasi-Newton and variance reduced stochast i gradient methods.",
"title": ""
},
{
"docid": "405a1e8badfb85dcd1d5cc9b4a0026d2",
"text": "It is of great practical importance to improve yield and quality of vegetables in soilless cultures. This study investigated the effects of iron-nutrition management on yield and quality of hydroponic-cultivated spinach (Spinacia oleracea L.). The results showed that mild Fe-deficient treatment (1 μM FeEDTA) yielded a greater biomass of edible parts than Fe-omitted treatment (0 μM FeEDTA) or Fe-sufficient treatments (10 and 50 μM FeEDTA). Conversely, mild Fe-deficient treatment had the lowest nitrate concentration in the edible parts out of all the Fe treatments. Interestingly, all the concentrations of soluble sugar, soluble protein and ascorbate in mild Fe-deficient treatments were higher than Fe-sufficient treatments. In addition, both phenolic concentration and DPPH scavenging activity in mild Fe-deficient treatments were comparable with those in Fe-sufficient treatments, but were higher than those in Fe-omitted treatments. Therefore, we concluded that using a mild Fe-deficient nutrition solution to cultivate spinach not only would increase yield, but also would improve quality.",
"title": ""
},
{
"docid": "e5d474fc8c0d2c97cc798eda4f9c52dd",
"text": "Gesture typing is an efficient input method for phones and tablets using continuous traces created by a pointed object (e.g., finger or stylus). Translating such continuous gestures into textual input is a challenging task as gesture inputs exhibit many features found in speech and handwriting such as high variability, co-articulation and elision. In this work, we address these challenges with a hybrid approach, combining a variant of recurrent networks, namely Long Short Term Memories [1] with conventional Finite State Transducer decoding [2]. Results using our approach show considerable improvement relative to a baseline shape-matching-based system, amounting to 4% and 22% absolute improvement respectively for small and large lexicon decoding on real datasets and 2% on a synthetic large scale dataset.",
"title": ""
},
{
"docid": "eab86ab18bd47e883b184dcd85f366cd",
"text": "We study corporate bond default rates using an extensive new data set spanning the 1866–2008 period. We find that the corporate bond market has repeatedly suffered clustered default events much worse than those experienced during the Great Depression. For example, during the railroad crisis of 1873–1875, total defaults amounted to 36% of the par value of the entire corporate bond market. Using a regime-switching model, we examine the extent to which default rates can be forecast by financial and macroeconomic variables. We find that stock returns, stock return volatility, and changes in GDP are strong predictors of default rates. Surprisingly, however, credit spreads are not. Over the long term, credit spreads are roughly twice as large as default losses, resulting in an average credit risk premium of about 80 basis points. We also find that credit spreads do not adjust in response to realized default rates. & 2011 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "2cb298a8fc8102d61964a884c20e7d78",
"text": "In this paper, the concept of data mining was summarized and its significance towards its methodologies was illustrated. The data mining based on Neural Network and Genetic Algorithm is researched in detail and the key technology and ways to achieve the data mining on Neural Network and Genetic Algorithm are also surveyed. This paper also conducts a formal review of the area of rule extraction from ANN and GA.",
"title": ""
},
{
"docid": "ec673efa5f837ba4c997ee7ccd845ce1",
"text": "Deep Neural Networks (DNNs) are hierarchical nonlinear architectures that have been widely used in artificial intelligence applications. However, these models are vulnerable to adversarial perturbations which add changes slightly and are crafted explicitly to fool the model. Such attacks will cause the neural network to completely change its classification of data. Although various defense strategies have been proposed, existing defense methods have two limitations. First, the discovery success rate is not very high. Second, existing methods depend on the output of a particular layer in a specific learning structure. In this paper, we propose a powerful method for adversarial samples using Large Margin Cosine Estimate(LMCE). By iteratively calculating the large-margin cosine uncertainty estimates between the model predictions, the results can be regarded as a novel measurement of model uncertainty estimation and is available to detect adversarial samples by training using a simple machine learning algorithm. Comparing it with the way in which adversar- ial samples are generated, it is confirmed that this measurement can better distinguish hostile disturbances. We modeled deep neural network attacks and established defense mechanisms against various types of adversarial attacks. Classifier gets better performance than the baseline model. The approach is validated on a series of standard datasets including MNIST and CIFAR −10, outperforming previous ensemble method with strong statistical significance. Experiments indicate that our approach generalizes better across different architectures and attacks.",
"title": ""
},
{
"docid": "8fa34eb8d0ab6b1248a98936ddad7c5c",
"text": "Planning with temporally extended goals and uncontrollable events has recently been introduced as a formal model for system reconfiguration problems. An important application is to automatically reconfigure a real-life system in such a way that its subsequent internal evolution is consistent with a temporal goal formula. In this paper we introduce an incremental search algorithm and a search-guidance heuristic, two generic planning enhancements. An initial problem is decomposed into a series of subproblems, providing two main ways of speeding up a search. Firstly, a subproblem focuses on a part of the initial goal. Secondly, a notion of action relevance allows to explore with higher priority actions that are heuristically considered to be more relevant to the subproblem at hand. Even though our techniques are more generally applicable, we restrict our attention to planning with temporally extended goals and uncontrollable events. Our ideas are implemented on top of a successful previous system that performs online learning to better guide planning and to safely avoid potentially expensive searches. In experiments, the system speed performance is further improved by a convincing margin.",
"title": ""
},
{
"docid": "8c54780de6c8d8c3fa71b31015ad044e",
"text": "Integrins are cell surface receptors for extracellular matrix proteins and play a key role in cell survival, proliferation, migration and gene expression. Integrin signaling has been shown to be deregulated in several types of cancer, including prostate cancer. This review is focused on integrin signaling pathways known to be deregulated in prostate cancer and known to promote prostate cancer progression.",
"title": ""
},
{
"docid": "3510bcd9d52729766e2abe2111f8be95",
"text": "Metaphors are common elements of language that allow us to creatively stretch the limits of word meaning. However, metaphors vary in their degree of novelty, which determines whether people must create new meanings on-line or retrieve previously known metaphorical meanings from memory. Such variations affect the degree to which general cognitive capacities such as executive control are required for successful comprehension. We investigated whether individual differences in executive control relate to metaphor processing using eye movement measures of reading. Thirty-nine participants read sentences including metaphors or idioms, another form of figurative language that is more likely to rely on meaning retrieval. They also completed the AX-CPT, a domain-general executive control task. In Experiment 1, we examined sentences containing metaphorical or literal uses of verbs, presented with or without prior context. In Experiment 2, we examined sentences containing idioms or literal phrases for the same participants to determine whether the link to executive control was qualitatively similar or different to Experiment 1. When metaphors were low familiar, all people read verbs used as metaphors more slowly than verbs used literally (this difference was smaller for high familiar metaphors). Executive control capacity modulated this pattern in that high executive control readers spent more time reading verbs when a prior context forced a particular interpretation (metaphorical or literal), and they had faster total metaphor reading times when there was a prior context. Interestingly, executive control did not relate to idiom processing for the same readers. Here, all readers had faster total reading times for high familiar idioms than literal phrases. Thus, executive control relates to metaphor but not idiom processing for these readers, and for the particular metaphor and idiom reading manipulations presented.",
"title": ""
},
{
"docid": "1d6b58df486d618341cea965724a7da9",
"text": "The focus on human capital as a driver of economic growth for developing countries has led to undue attention on school attainment. Developing countries have made considerable progress in closing the gap with developed countries in terms of school attainment, but recent research has underscored the importance of cognitive skills for economic growth. This result shifts attention to issues of school quality, and there developing countries have been much less successful in closing the gaps with developed countries. Without improving school quality, developing countries will find it difficult to improve their long run economic performance. JEL Classification: I2, O4, H4 Highlights: ! ! Improvements in long run growth are closely related to the level of cognitive skills of the population. ! ! Development policy has inappropriately emphasized school attainment as opposed to educational achievement, or cognitive skills. ! ! Developing countries, while improving in school attainment, have not improved in quality terms. ! ! School policy in developing countries should consider enhancing both basic and advanced skills.",
"title": ""
},
{
"docid": "8ae12d8ef6e58cb1ac376eb8c11cd15a",
"text": "This paper surveys recent technical research on the problems of privacy and security for radio frequency identification (RFID). RFID tags are small, wireless devices that help identify objects and people. Thanks to dropping cost, they are likely to proliferate into the billions in the next several years-and eventually into the trillions. RFID tags track objects in supply chains, and are working their way into the pockets, belongings, and even the bodies of consumers. This survey examines approaches proposed by scientists for privacy protection and integrity assurance in RFID systems, and treats the social and technical context of their work. While geared toward the nonspecialist, the survey may also serve as a reference for specialist readers.",
"title": ""
},
{
"docid": "c84d41e54b12cca847135dfc2e9e13f8",
"text": "PURPOSE\nBaseline restraint prevalence for surgical step-down unit was 5.08%, and for surgical intensive care unit, it was 25.93%, greater than the National Database of Nursing Quality Indicators (NDNQI) mean. Project goal was sustained restraint reduction below the NDNQI mean and maintaining patient safety.\n\n\nBACKGROUND/RATIONALE\nSoft wrist restraints are utilized for falls reduction and preventing device removal but are not universally effective and may put patients at risk of injury. Decreasing use of restrictive devices enhances patient safety and decreases risk of injury.\n\n\nDESCRIPTION\nPhase 1 consisted of advanced practice nurse-facilitated restraint rounds on each restrained patient including multidisciplinary assessment and critical thinking with bedside clinicians including reevaluation for treatable causes of agitation and restraint indications. Phase 2 evaluated less restrictive mitts, padded belts, and elbow splint devices. Following a 4-month trial, phase 3 expanded the restraint initiative including critical care requiring education and collaboration among advanced practice nurses, physician team members, and nurse champions.\n\n\nEVALUATION AND OUTCOMES\nPhase 1 decreased surgical step-down unit restraint prevalence from 5.08% to 3.57%. Phase 2 decreased restraint prevalence from 3.57% to 1.67%, less than the NDNQI mean. Phase 3 expansion in surgical intensive care units resulted in wrist restraint prevalence from 18.19% to 7.12% within the first year, maintained less than the NDNQI benchmarks while preserving patient safety.\n\n\nINTERPRETATION/CONCLUSION\nThe initiative produced sustained reduction in acute/critical care well below the NDNQI mean without corresponding increase in patient medical device removal.\n\n\nIMPLICATIONS\nBy managing causes of agitation, need for restraints is decreased, protecting patients from injury and increasing patient satisfaction. Follow-up research may explore patient experiences with and without restrictive device use.",
"title": ""
},
{
"docid": "57e71550633cdb4a37d3fa270f0ad3a7",
"text": "Classifiers based on sparse representations have recently been shown to provide excellent results in many visual recognition and classification tasks. However, the high cost of computing sparse representations at test time is a major obstacle that limits the applicability of these methods in large-scale problems, or in scenarios where computational power is restricted. We consider in this paper a simple yet efficient alternative to sparse coding for feature extraction. We study a classification scheme that applies the soft-thresholding nonlinear mapping in a dictionary, followed by a linear classifier. A novel supervised dictionary learning algorithm tailored for this low complexity classification architecture is proposed. The dictionary learning problem, which jointly learns the dictionary and linear classifier, is cast as a difference of convex (DC) program and solved efficiently with an iterative DC solver. We conduct experiments on several datasets, and show that our learning algorithm that leverages the structure of the classification problem outperforms generic learning procedures. Our simple classifier based on soft-thresholding also competes with the recent sparse coding classifiers, when the dictionary is learned appropriately. The adopted classification scheme further requires less computational time at the testing stage, compared to other classifiers. The proposed scheme shows the potential of the adequately trained soft-thresholding mapping for classification and paves the way towards the development of very efficient classification methods for vision problems.",
"title": ""
}
] | scidocsrr |
e36b448d2407944c4f7bccb9bd28f791 | Criterion-Related Validity of Sit-and-Reach Tests for Estimating Hamstring and Lumbar Extensibility: a Meta-Analysis. | [
{
"docid": "b51fcfa32dbcdcbcc49f1635b44601ed",
"text": "An adjusted rank correlation test is proposed as a technique for identifying publication bias in a meta-analysis, and its operating characteristics are evaluated via simulations. The test statistic is a direct statistical analogue of the popular \"funnel-graph.\" The number of component studies in the meta-analysis, the nature of the selection mechanism, the range of variances of the effect size estimates, and the true underlying effect size are all observed to be influential in determining the power of the test. The test is fairly powerful for large meta-analyses with 75 component studies, but has only moderate power for meta-analyses with 25 component studies. However, in many of the configurations in which there is low power, there is also relatively little bias in the summary effect size estimate. Nonetheless, the test must be interpreted with caution in small meta-analyses. In particular, bias cannot be ruled out if the test is not significant. The proposed technique has potential utility as an exploratory tool for meta-analysts, as a formal procedure to complement the funnel-graph.",
"title": ""
}
] | [
{
"docid": "ff1f503123ce012b478a3772fa9568b5",
"text": "Cementoblastoma is a rare odontogenic tumor that has distinct clinical and radiographical features normally suggesting the correct diagnosis. The clinicians and oral pathologists must have in mind several possible differential diagnoses that can lead to a misdiagnosed lesion, especially when unusual clinical features are present. A 21-year-old male presented with dull pain in lower jaw on right side. The clinical inspection of the region was non-contributory to the diagnosis but the lesion could be appreciated on palpation. A swelling was felt in the alveolar region of mandibular premolar-molar on right side. Radiographic examination was suggestive of benign cementoblastoma and the tumor was removed surgically along with tooth. The diagnosis was confirmed by histopathologic study. Although this neoplasm is rare, the dental practitioner should be aware of the clinical, radiographical and histopathological features that will lead to its early diagnosis and treatment.",
"title": ""
},
{
"docid": "3732f96144d7f28c88670dd63aff63a1",
"text": "The problem of defining and classifying power system stability has been addressed by several previous CIGRE and IEEE Task Force reports. These earlier efforts, however, do not completely reflect current industry needs, experiences and understanding. In particular, the definitions are not precise and the classifications do not encompass all practical instability scenarios. This report developed by a Task Force, set up jointly by the CIGRE Study Committee 38 and the IEEE Power System Dynamic Performance Committee, addresses the issue of stability definition and classification in power systems from a fundamental viewpoint and closely examines the practical ramifications. The report aims to define power system stability more precisely, provide a systematic basis for its classification, and discuss linkages to related issues such as power system reliability and security.",
"title": ""
},
{
"docid": "876dd0a985f00bb8145e016cc8593a84",
"text": "This paper presents how to synthesize a texture in a procedural way that preserves the features of the input exemplar. The exemplar is analyzed in both spatial and frequency domains to be decomposed into feature and non-feature parts. Then, the non-feature parts are reproduced as a procedural noise, whereas the features are independently synthesized. They are combined to output a non-repetitive texture that also preserves the exemplar’s features. The proposed method allows the user to control the extent of extracted features and also enables a texture to edited quite effectively.",
"title": ""
},
{
"docid": "8016e80e506dcbae5c85fdabf1304719",
"text": "We introduce globally normalized convolutional neural networks for joint entity classification and relation extraction. In particular, we propose a way to utilize a linear-chain conditional random field output layer for predicting entity types and relations between entities at the same time. Our experiments show that global normalization outperforms a locally normalized softmax layer on a benchmark dataset.",
"title": ""
},
{
"docid": "c8d4fad2d3f5c7c2402ca60bb4f6dcca",
"text": "The Pix2pix [17] and CycleGAN [40] losses have vastly improved the qualitative and quantitative visual quality of results in image-to-image translation tasks. We extend this framework by exploring approximately invertible architectures which are well suited to these losses. These architectures are approximately invertible by design and thus partially satisfy cycle-consistency before training even begins. Furthermore, since invertible architectures have constant memory complexity in depth, these models can be built arbitrarily deep. We are able to demonstrate superior quantitative output on the Cityscapes and Maps datasets at near constant memory budget.",
"title": ""
},
{
"docid": "aa83aa0a030e14449504ad77dd498b90",
"text": "An organization has to make the right decisions in time depending on demand information to enhance the commercial competitive advantage in a constantly fluctuating business environment. Therefore, estimating the demand quantity for the next period most likely appears to be crucial. This work presents a comparative forecasting methodology regarding to uncertain customer demands in a multi-level supply chain (SC) structure via neural techniques. The objective of the paper is to propose a new forecasting mechanism which is modeled by artificial intelligence approaches including the comparison of both artificial neural networks and adaptive network-based fuzzy inference system techniques to manage the fuzzy demand with incomplete information. The effectiveness of the proposed approach to the demand forecasting issue is demonstrated using real-world data from a company which is active in durable consumer goods industry in Istanbul, Turkey. Crown Copyright 2008 Published by Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "cd877197b06304b379d5caf9b5b89d30",
"text": "Research is now required on factors influencing adults' sedentary behaviors, and effective approaches to behavioral-change intervention must be identified. The strategies for influencing sedentary behavior will need to be informed by evidence on the most important modifiable behavioral determinants. However, much of the available evidence relevant to understanding the determinants of sedentary behaviors is from cross-sectional studies, which are limited in that they identify only behavioral \"correlates.\" As is the case for physical activity, a behavior- and context-specific approach is needed to understand the multiple determinants operating in the different settings within which these behaviors are most prevalent. To this end, an ecologic model of sedentary behaviors is described, highlighting the behavior settings construct. The behaviors and contexts of primary concern are TV viewing and other screen-focused behaviors in domestic environments, prolonged sitting in the workplace, and time spent sitting in automobiles. Research is needed to clarify the multiple levels of determinants of prolonged sitting time, which are likely to operate in distinct ways in these different contexts. Controlled trials on the feasibility and efficacy of interventions to reduce and break up sedentary behaviors among adults in domestic, workplace, and transportation environments are particularly required. It would be informative for the field to have evidence on the outcomes of \"natural experiments,\" such as the introduction of nonseated working options in occupational environments or new transportation infrastructure in communities.",
"title": ""
},
{
"docid": "6b70a42b41de6831604e14904f682b69",
"text": "A large proportion of the Indian population is excluded from basic banking services. Just one in two Indians has access to a savings bank account and just one in seven Indians has access to bank credit (Business Standard, June 28 2013). There are merely 684 million savings bank accounts in the country with a population of 1.2 billion. Branch per 100,000 adult ratio in India stands at 747 compared to 1,065 for Brazil and 2,063 for Malaysia (World Bank Financial Access Report 2010). As more people, especially the poor, gain access to financial services, they will be able to save better and get access to funding in a more structured manner. This will reduce income inequality, help the poor up the ladder, and contribute to economic development. There is a need for transactions and savings accounts for the under-served in the population. Mobile banking has been evolved in last couple of years with the help of Mobile penetration, which has shown phenomenal growth in rural areas of India. The rural subscription increased from 398.68 million at the end of December 2014 to 404.16 million at the end of January 2015, said in a statement by the Telecom Regulatory Authority of India. Banks in India are already investing in mobile technology and security from last couple of years. They are adding value in services such as developing smartphone apps, mobile wallets and educating consumers about the benefits of using the mobile banking resulting in adoption of mobile banking faster among consumers as compared to internet banking.\n The objective of this study is:\n 1. To understand the scope of mobile banking to reach unbanked population in India.\n 2. To analyze the learnings of M-PESA and Payments Bank Opportunity.\n 3. To evaluate the upcoming challenges for the payments bank success in India.",
"title": ""
},
{
"docid": "a67f7593ea049be1e2785108b6181f7d",
"text": "This paper describes torque characteristics of the interior permanent magnet synchronous motor (IPMSM) using the inexpensive ferrite magnets. IPMSM model used in this study has the spoke and the axial type magnets in the rotor, and torque characteristics are analyzed by the three-dimensional finite element method (3D-FEM). As a result, torque characteristics can be improved by using both the spoke type magnets and the axial type magnets in the rotor.",
"title": ""
},
{
"docid": "4247314290ffa50098775e2bbc41b002",
"text": "Heterogeneous integration enables the construction of silicon (Si) photonic systems, which are fully integrated with a range of passive and active elements including lasers and detectors. Numerous advancements in recent years have shown that heterogeneous Si platforms can be extended beyond near-infrared telecommunication wavelengths to the mid-infrared (MIR) (2–20 μm) regime. These wavelengths hold potential for an extensive range of sensing applications and the necessary components for fully integrated heterogeneous MIR Si photonic technologies have now been demonstrated. However, due to the broad wavelength range and the diverse assortment of MIR technologies, the optimal platform for each specific application is unclear. Here, we overview Si photonic waveguide platforms and lasers at the MIR, including quantum cascade lasers on Si. We also discuss progress toward building an integrated multispectral source, which can be constructed by wavelength beam combining the outputs from multiple lasers with arrayed waveguide gratings and duplexing adiabatic couplers.",
"title": ""
},
{
"docid": "af6c98814dbd1301b16afb562c524842",
"text": "Online anomaly detection (AD) is an important technique for monitoring wireless sensor networks (WSNs), which protects WSNs from cyberattacks and random faults. As a scalable and parameter-free unsupervised AD technique, k-nearest neighbor (kNN) algorithm has attracted a lot of attention for its applications in computer networks and WSNs. However, the nature of lazy-learning makes the kNN-based AD schemes difficult to be used in an online manner, especially when communication cost is constrained. In this paper, a new kNN-based AD scheme based on hypergrid intuition is proposed for WSN applications to overcome the lazy-learning problem. Through redefining anomaly from a hypersphere detection region (DR) to a hypercube DR, the computational complexity is reduced significantly. At the same time, an attached coefficient is used to convert a hypergrid structure into a positive coordinate space in order to retain the redundancy for online update and tailor for bit operation. In addition, distributed computing is taken into account, and position of the hypercube is encoded by a few bits only using the bit operation. As a result, the new scheme is able to work successfully in any environment without human interventions. Finally, the experiments with a real WSN data set demonstrate that the proposed scheme is effective and robust.",
"title": ""
},
{
"docid": "d114f37ccb079106a728ad8fe1461919",
"text": "This paper describes a stochastic hill climbing algorithm named SHCLVND to optimize arbitrary vectorial < n ! < functions. It needs less parameters. It uses normal (Gaussian) distributions to represent probabilities which are used for generating more and more better argument vectors. The-parameters of the normal distributions are changed by a kind of Hebbian learning. Kvasnicka et al. KPP95] used algorithm Stochastic Hill Climbing with Learning (HCwL) to optimize a highly multimodal vectorial function on real numbers. We have tested proposed algorithm by optimizations of the same and a similar function and show the results in comparison to HCwL. In opposite to it algorithm SHCLVND desribed here works directly on vectors of numbers instead their bit-vector representations and uses normal distributions instead of numbers to represent probabilities. 1 Overview In Section 2 we give an introduction with the way to the algorithm. Then we describe it exactly in Section 3. There is also given a compact notation in pseudo PASCAL-code, see Section 3.4. After that we give an example: we optimize highly multimodal functions with the proposed algorithm and give some visualisations of the progress in Section 4. In Section 5 there are a short summary and some ideas for future works. At last in Section 6 we give some hints for practical use of the algorithm. 2 Introduction This paper describes a hill climbing algorithm to optimize vectorial functions on real numbers. 2.1 Motivation Flexible algorithms for optimizing any vectorial function are interesting if there is no or only a very diicult mathematical solution known, e.g. parameter adjustments to optimize with respect to some relevant property the recalling behavior of a (trained) neuronal net HKP91, Roj93], or the resulting image of some image-processing lter.",
"title": ""
},
{
"docid": "f93ee5c9de994fa07e7c3c1fe6e336d1",
"text": "Sleep bruxism (SB) is characterized by repetitive and coordinated mandible movements and non-functional teeth contacts during sleep time. Although the etiology of SB is controversial, the literature converges on its multifactorial origin. Occlusal factors, smoking, alcoholism, drug usage, stress, and anxiety have been described as SB trigger factors. Recent studies on this topic discussed the role of neurotransmitters on the development of SB. Thus, the purpose of this study was to detect and quantify the urinary levels of catecholamines, specifically of adrenaline, noradrenaline and dopamine, in subjects with SB and in control individuals. Urine from individuals with SB (n = 20) and without SB (n = 20) was subjected to liquid chromatography. The catecholamine data were compared by Mann–Whitney’s test (p ≤ 0.05). Our analysis showed higher levels of catecholamines in subjects with SB (adrenaline = 111.4 µg/24 h; noradrenaline = 261,5 µg/24 h; dopamine = 479.5 µg/24 h) than in control subjects (adrenaline = 35,0 µg/24 h; noradrenaline = 148,7 µg/24 h; dopamine = 201,7 µg/24 h). Statistical differences were found for the three catecholamines tested. It was concluded that individuals with SB have higher levels of urinary catecholamines.",
"title": ""
},
{
"docid": "e0b7efd5d3bba071ada037fc5b05a622",
"text": "Social exclusion can thwart people's powerful need for social belonging. Whereas prior studies have focused primarily on how social exclusion influences complex and cognitively downstream social outcomes (e.g., memory, overt social judgments and behavior), the current research examined basic, early-in-the-cognitive-stream consequences of exclusion. Across 4 experiments, the threat of exclusion increased selective attention to smiling faces, reflecting an attunement to signs of social acceptance. Compared with nonexcluded participants, participants who experienced the threat of exclusion were faster to identify smiling faces within a \"crowd\" of discrepant faces (Experiment 1), fixated more of their attention on smiling faces in eye-tracking tasks (Experiments 2 and 3), and were slower to disengage their attention from smiling faces in a visual cueing experiment (Experiment 4). These attentional attunements were specific to positive, social targets. Excluded participants did not show heightened attention to faces conveying social disapproval or to positive nonsocial images. The threat of social exclusion motivates people to connect with sources of acceptance, which is manifested not only in \"downstream\" choices and behaviors but also at the level of basic, early-stage perceptual processing.",
"title": ""
},
{
"docid": "aad2d6385cb8c698a521caea00fe56d2",
"text": "With respect to the \" influence on the development and practice of science and engineering in the 20th century \" , Krylov space methods are considered as one of the ten most important classes of numerical methods [1]. Large sparse linear systems of equations or large sparse matrix eigenvalue problems appear in most applications of scientific computing. Sparsity means that most elements of the matrix involved are zero. In particular, discretization of PDEs with the finite element method (FEM) or with the finite difference method (FDM) leads to such problems. In case the original problem is nonlinear, linearization by Newton's method or a Newton-type method leads again to a linear problem. We will treat here systems of equations only, but many of the numerical methods for large eigenvalue problems are based on similar ideas as the related solvers for equations. Sparse linear systems of equations can be solved by either so-called sparse direct solvers, which are clever variations of Gauss elimination, or by iterative methods. In the last thirty years, sparse direct solvers have been tuned to perfection: on the one hand by finding strategies for permuting equations and unknowns to guarantee a stable LU decomposition and small fill-in in the triangular factors, and on the other hand by organizing the computation so that optimal use is made of the hardware, which nowadays often consists of parallel computers whose architecture favors block operations with data that are locally stored or cached. The iterative methods that are today applied for solving large-scale linear systems are mostly preconditioned Krylov (sub)space solvers. Classical methods that do not belong to this class, like the successive overrelaxation (SOR) method, are no longer competitive. However, some of the classical matrix splittings, e.g. the one of SSOR (the symmetric version of SOR), are still used for preconditioning. Multigrid is in theory a very effective iterative method, but normally it is now applied as an inner iteration with a Krylov space solver as outer iteration; then, it can also be considered as a preconditioner. In the past, Krylov space solvers were referred to also by other names such as semi-iterative methods and polynomial acceleration methods. Some",
"title": ""
},
{
"docid": "7a56ca5ad5483aef5b886836c24bbb3b",
"text": "Recent extensions to the standard Difference-of-Gaussians (DoG) edge detection operator have rendered it less susceptible to noise and increased its aesthetic appeal for stylistic depiction applications. Despite these advances, the technical subtleties and stylistic potential of the DoG operator are often overlooked. This paper reviews the DoG operator, including recent improvements, and offers many new results spanning a variety of styles, including pencil-shading, pastel, hatching, and binary black-and-white images. Additionally, we demonstrate a range of subtle artistic effects, such as ghosting, speed-lines, negative edges, indication, and abstraction, and we explain how all of these are obtained without, or only with slight modifications to an extended DoG formulation. In all cases, the visual quality achieved by the extended DoG operator is comparable to or better than those of systems dedicated to a single style.",
"title": ""
},
{
"docid": "d93dbf04604d9e60a554f39b0f7e3122",
"text": "BACKGROUND\nThe World Health Organization (WHO) estimates that 1.9 million deaths worldwide are attributable to physical inactivity and at least 2.6 million deaths are a result of being overweight or obese. In addition, WHO estimates that physical inactivity causes 10% to 16% of cases each of breast cancer, colon, and rectal cancers as well as type 2 diabetes, and 22% of coronary heart disease and the burden of these and other chronic diseases has rapidly increased in recent decades.\n\n\nOBJECTIVES\nThe purpose of this systematic review was to summarize the evidence of the effectiveness of school-based interventions in promoting physical activity and fitness in children and adolescents.\n\n\nSEARCH METHODS\nThe search strategy included searching several databases to October 2011. In addition, reference lists of included articles and background papers were reviewed for potentially relevant studies, as well as references from relevant Cochrane reviews. Primary authors of included studies were contacted as needed for additional information.\n\n\nSELECTION CRITERIA\nTo be included, the intervention had to be relevant to public health practice (focused on health promotion activities), not conducted by physicians, implemented, facilitated, or promoted by staff in local public health units, implemented in a school setting and aimed at increasing physical activity, included all school-attending children, and be implemented for a minimum of 12 weeks. In addition, the review was limited to randomized controlled trials and those that reported on outcomes for children and adolescents (aged 6 to 18 years). Primary outcomes included: rates of moderate to vigorous physical activity during the school day, time engaged in moderate to vigorous physical activity during the school day, and time spent watching television. Secondary outcomes related to physical health status measures including: systolic and diastolic blood pressure, blood cholesterol, body mass index (BMI), maximal oxygen uptake (VO2max), and pulse rate.\n\n\nDATA COLLECTION AND ANALYSIS\nStandardized tools were used by two independent reviewers to assess each study for relevance and for data extraction. In addition, each study was assessed for risk of bias as specified in the Cochrane Handbook for Systematic Reviews of Interventions. Where discrepancies existed, discussion occurred until consensus was reached. The results were summarized narratively due to wide variations in the populations, interventions evaluated, and outcomes measured.\n\n\nMAIN RESULTS\nIn the original review, 13,841 records were identified and screened, 302 studies were assessed for eligibility, and 26 studies were included in the review. There was some evidence that school-based physical activity interventions had a positive impact on four of the nine outcome measures. Specifically positive effects were observed for duration of physical activity, television viewing, VO2 max, and blood cholesterol. Generally, school-based interventions had little effect on physical activity rates, systolic and diastolic blood pressure, BMI, and pulse rate. At a minimum, a combination of printed educational materials and changes to the school curriculum that promote physical activity resulted in positive effects.In this update, given the addition of three new inclusion criteria (randomized design, all school-attending children invited to participate, minimum 12-week intervention) 12 of the original 26 studies were excluded. 
In addition, studies published between July 2007 and October 2011 evaluating the effectiveness of school-based physical interventions were identified and if relevant included. In total an additional 2378 titles were screened of which 285 unique studies were deemed potentially relevant. Of those 30 met all relevance criteria and have been included in this update. This update includes 44 studies and represents complete data for 36,593 study participants. Duration of interventions ranged from 12 weeks to six years.Generally, the majority of studies included in this update, despite being randomized controlled trials, are, at a minimum, at moderate risk of bias. The results therefore must be interpreted with caution. Few changes in outcomes were observed in this update with the exception of blood cholesterol and physical activity rates. For example blood cholesterol was no longer positively impacted upon by school-based physical activity interventions. However, there was some evidence to suggest that school-based physical activity interventions led to an improvement in the proportion of children who engaged in moderate to vigorous physical activity during school hours (odds ratio (OR) 2.74, 95% confidence interval (CI), 2.01 to 3.75). Improvements in physical activity rates were not observed in the original review. Children and adolescents exposed to the intervention also spent more time engaged in moderate to vigorous physical activity (with results across studies ranging from five to 45 min more), spent less time watching television (results range from five to 60 min less per day), and had improved VO2max (results across studies ranged from 1.6 to 3.7 mL/kg per min). However, the overall conclusions of this update do not differ significantly from those reported in the original review.\n\n\nAUTHORS' CONCLUSIONS\nThe evidence suggests the ongoing implementation of school-based physical activity interventions at this time, given the positive effects on behavior and one physical health status measure. However, given these studies are at a minimum of moderate risk of bias, and the magnitude of effect is generally small, these results should be interpreted cautiously. Additional research on the long-term impact of these interventions is needed.",
"title": ""
},
{
"docid": "88fb71e503e0d0af7515dd8489061e25",
"text": "The recent boom in the Internet of Things (IoT) will turn Smart Cities and Smart Homes (SH) from hype to reality. SH is the major building block for Smart Cities and have long been a dream for decades, hobbyists in the late 1970smade Home Automation (HA) possible when personal computers started invading home spaces. While SH can share most of the IoT technologies, there are unique characteristics that make SH special. From the result of a recent research survey on SH and IoT technologies, this paper defines the major requirements for building SH. Seven unique requirement recommendations are defined and classified according to the specific quality of the SH building blocks. © 2016 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "e61e7d5ade8946c74d288d75aca93961",
"text": "The ill-posed nature of the MEG (or related EEG) source localization problem requires the incorporation of prior assumptions when choosing an appropriate solution out of an infinite set of candidates. Bayesian approaches are useful in this capacity because they allow these assumptions to be explicitly quantified using postulated prior distributions. However, the means by which these priors are chosen, as well as the estimation and inference procedures that are subsequently adopted to affect localization, have led to a daunting array of algorithms with seemingly very different properties and assumptions. From the vantage point of a simple Gaussian scale mixture model with flexible covariance components, this paper analyzes and extends several broad categories of Bayesian inference directly applicable to source localization including empirical Bayesian approaches, standard MAP estimation, and multiple variational Bayesian (VB) approximations. Theoretical properties related to convergence, global and local minima, and localization bias are analyzed and fast algorithms are derived that improve upon existing methods. This perspective leads to explicit connections between many established algorithms and suggests natural extensions for handling unknown dipole orientations, extended source configurations, correlated sources, temporal smoothness, and computational expediency. Specific imaging methods elucidated under this paradigm include the weighted minimum l(2)-norm, FOCUSS, minimum current estimation, VESTAL, sLORETA, restricted maximum likelihood, covariance component estimation, beamforming, variational Bayes, the Laplace approximation, and automatic relevance determination, as well as many others. Perhaps surprisingly, all of these methods can be formulated as particular cases of covariance component estimation using different concave regularization terms and optimization rules, making general theoretical analyses and algorithmic extensions/improvements particularly relevant.",
"title": ""
},
{
"docid": "16f5686c1675d0cf2025cf812247ab45",
"text": "This paper presents the system analysis and implementation of a soft switching Sepic-Cuk converter to achieve zero voltage switching (ZVS). In the proposed converter, the Sepic and Cuk topologies are combined together in the output side. The features of the proposed converter are to reduce the circuit components (share the power components in the transformer primary side) and to share the load current. Active snubber is connected in parallel with the primary side of transformer to release the energy stored in the leakage inductor of transformer and to limit the peak voltage stress of switching devices when the main switch is turned off. The active snubber can achieve ZVS turn-on for power switches. Experimental results, taken from a laboratory prototype rated at 300W, are presented to verify the effectiveness of the proposed converter. I. Introduction Modern",
"title": ""
}
] | scidocsrr |
b735a5acf90500cf0a0a049380468b19 | Bunny Ear Combline Antennas for Compact Wide-Band Dual-Polarized Aperture Array | [
{
"docid": "c0600c577850c8286f816396ead9649f",
"text": "A parameter study of dual-polarized tapered slot antenna (TSA) arrays shows the key features that affect the wide-band and widescan performance of these arrays. The overall performance can be optimized by judiciously choosing a combination of parameters. In particular, it is found that smaller circular slot cavities terminating the bilateral slotline improve the performance near the low end of the operating band, especially when scanning in the -plane. The opening rate of the tapered slotline mainly determines the mid-band performance and it is possible to choose an opening rate to obtain balanced overall performance in the mid-band. Longer tapered slotline is shown to increase the bandwidth, especially in the lower end of the operating band. Finally, it is shown that the -plane anomalies are affected by the array element spacing. A design example demonstrates that the results from the parameter study can be used to design a dual-polarized TSA array with about 4.5 : 1 bandwidth for a scan volume of not less than = 45 from broadside in all planes.",
"title": ""
}
] | [
{
"docid": "74c7ffaf4064218920f503a31a0f97b0",
"text": "In this paper, we present a new method for the control of soft robots with elastic behavior, piloted by several actuators. The central contribution of this work is the use of the Finite Element Method (FEM), computed in real-time, in the control algorithm. The FEM based simulation computes the nonlinear deformations of the robots at interactive rates. The model is completed by Lagrange multipliers at the actuation zones and at the end-effector position. A reduced compliance matrix is built in order to deal with the necessary inversion of the model. Then, an iterative algorithm uses this compliance matrix to find the contribution of the actuators (force and/or position) that will deform the structure so that the terminal end of the robot follows a given position. Additional constraints, like rigid or deformable obstacles, or the internal characteristics of the actuators are integrated in the control algorithm. We illustrate our method using simulated examples of both serial and parallel structures and we validate it on a real 3D soft robot made of silicone.",
"title": ""
},
{
"docid": "002a86f6e0611a7b705a166e05ef3988",
"text": "Due to a wide range of potential applications, research on mobile commerce has received a lot of interests from both of the industry and academia. Among them, one of the active topic areas is the mining and prediction of users' mobile commerce behaviors such as their movements and purchase transactions. In this paper, we propose a novel framework, called Mobile Commerce Explorer (MCE), for mining and prediction of mobile users' movements and purchase transactions under the context of mobile commerce. The MCE framework consists of three major components: 1) Similarity Inference Model (SIM) for measuring the similarities among stores and items, which are two basic mobile commerce entities considered in this paper; 2) Personal Mobile Commerce Pattern Mine (PMCP-Mine) algorithm for efficient discovery of mobile users' Personal Mobile Commerce Patterns (PMCPs); and 3) Mobile Commerce Behavior Predictor (MCBP) for prediction of possible mobile user behaviors. To our best knowledge, this is the first work that facilitates mining and prediction of mobile users' commerce behaviors in order to recommend stores and items previously unknown to a user. We perform an extensive experimental evaluation by simulation and show that our proposals produce excellent results.",
"title": ""
},
{
"docid": "b10c2eb2d074054721959ce5b1a35dbc",
"text": "With the coming of the era of big data, it is most urgent to establish the knowledge computational engine for the purpose of discovering implicit and valuable knowledge from the huge, rapidly dynamic, and complex network data. In this paper, we first survey the mainstream knowledge computational engines from four aspects and point out their deficiency. To cover these shortages, we propose the open knowledge network (OpenKN), which is a self-adaptive and evolutionable knowledge computational engine for network big data. To the best of our knowledge, this is the first work of designing the end-to-end and holistic knowledge processing pipeline in regard with the network big data. Moreover, to capture the evolutionable computing capability of OpenKN, we present the evolutionable knowledge network for knowledge representation. A case study demonstrates the effectiveness of the evolutionable computing of OpenKN.",
"title": ""
},
{
"docid": "3f86b345cc6b566957f8480bd89a4b59",
"text": "The concept of ecosystems services has become an important model for linking the functioning of ecosystems to human welfare benefits. Understanding this link is critical in decision-making contexts. While there have been several attempts to come up with a classification scheme for ecosystem services, there has not been an agreed upon, meaningful and consistent definition for ecosystem services. In this paper we offer a definition of ecosystem services that is likely to be operational for ecosystem service research and several classification schemes. We argue that any attempt at classifying ecosystem services should be based on both the characteristics of interest and a decisioncontext. Because of this there is not one classification scheme that will be adequate for the many context in which ecosystem service research may be utilized. We discuss several examples of how classification schemes will be a function of both ecosystem and ecosystem service characteristics and the decision-making context.",
"title": ""
},
{
"docid": "365a402b992bf06ab50d0ea2f591f74e",
"text": "In this paper, the ability to determine the wellness of an elderly living alone in a smart home using a lowcost, robust, flexible and data driven intelligent system is presented. A framework integrating temporal and spatial contextual information for determining the wellness of an elderly has been modeled. A novel behavior detection process based on the observed sensor data in performing essential daily activities has been designed and developed. The developed prototype is used to forecast the behavior and wellness of the elderly by monitoring the daily usages of appliances in a smart home. Wellness models are tested at various elderly houses, and the experimental results are encouraging. The wellness models are updated based on the time series analysis. & 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "5f77218388ee927565a993a8e8c48ef3",
"text": "The paper presents an idea of Lexical Platform proposed as a means for a lightweight integration of various lexical resources into one complex (from the perspective of non-technical users). All LRs will be represented as software web components implementing a minimal set of predefined programming interfaces providing functionality for querying and generating simple common presentation format. A common data format for the resources will not be required. Users will be able to search, browse and navigate via resources on the basis of anchor elements of a limited set of types. Lexical resources linked to the platform via components will preserve their identity.",
"title": ""
},
{
"docid": "dba24c6bf3e04fc6d8b99a64b66cb464",
"text": "Recommender systems have to serve in online environments which can be highly non-stationary.1. Traditional recommender algorithmsmay periodically rebuild their models, but they cannot adjust to quick changes in trends caused by timely information. In our experiments, we observe that even a simple, but online trained recommender model can perform significantly better than its batch version. We investigate online learning based recommender algorithms that can efficiently handle non-stationary data sets. We evaluate our models over seven publicly available data sets. Our experiments are available as an open source project2.",
"title": ""
},
{
"docid": "3849284adb68f41831434afbf23be9ed",
"text": "Automatic estrus detection techniques in dairy cows have been present by different traits. Pedometers and accelerators are the most common sensor equipment. Most of the detection methods are associated with the supervised classification technique, which the training set becomes a crucial reference. The training set obtained by visual observation is subjective and time consuming. Another limitation of this approach is that it usually does not consider the factors affecting successful alerts, such as the discriminative figure, activity type of cows, the location and direction of the sensor node placed on the neck collar of a cow. This paper presents a novel estrus detection method that uses k-means clustering algorithm to create the training set online for each cow. And the training set is finally used to build an activity classification model by SVM. The activity index counted by the classification results in each sampling period can measure cow’s activity variation for assessing the onset of estrus. The experimental results indicate that the peak of estrus time are higher than that of non-estrus time at least twice in the activity index curve, and it can enhance the sensitivity and significantly reduce the error rate.",
"title": ""
},
{
"docid": "feb57c831158e03530d59725ae23af00",
"text": "Multitask learning has been applied successfully to a range of tasks, mostly morphosyntactic. However, little is known on when MTL works and whether there are data characteristics that help to determine the success of MTL. In this paper we evaluate a range of semantic sequence labeling tasks in a MTL setup. We examine different auxiliary task configurations, amongst which a novel setup, and correlate their impact to data-dependent conditions. Our results show that MTL is not always effective, because significant improvements are obtained only for 1 out of 5 tasks. When successful, auxiliary tasks with compact and more uniform label distributions are preferable.",
"title": ""
},
{
"docid": "2125930409d54f6770f03a76f5ecdc59",
"text": "Why do certain combinations of words such as “disadvantageous peace” or “metal to the petal” appeal to our minds as interesting expressions with a sense of creativity, while other phrases such as “quiet teenager”, or “geometrical base” not as much? We present statistical explorations to understand the characteristics of lexical compositions that give rise to the perception of being original, interesting, and at times even artistic. We first examine various correlates of perceived creativity based on information theoretic measures and the connotation of words, then present experiments based on supervised learning that give us further insights on how different aspects of lexical composition collectively contribute to the perceived creativity.",
"title": ""
},
{
"docid": "7b0e63115a7d085a180e047ae1ab2139",
"text": "We describe a set of tools for retail analytics based on a combination of video understanding and transaction-log. Tools are provided for loss prevention (returns fraud and cashier fraud), store operations (customer counting) and merchandising (display effectiveness). Results are presented on returns fraud and customer counting.",
"title": ""
},
{
"docid": "9bdddbd6b3619aa4c23566eea33b4ff7",
"text": "This was a prospective controlled study to compare the beneficial effects of office microlaparoscopic ovarian drilling (OMLOD) under augmented local anesthesia, as a new modality treatment option, compared to those following ovarian drilling with the conventional traditional 10-mm laparoscope (laparoscopic ovarian drilling, LOD) under general anesthesia. The study included 60 anovulatory women with polycystic ovary syndrome (PCOS) who underwent OMLOD (study group) and 60 anovulatory PCOS women, in whom conventional LOD using 10-mm laparoscope under general anesthesia was performed (comparison group). Transvaginal ultrasound scan and blood sampling to measure the serum concentrations of LH, FSH, testosterone and androstenedione were performed before and after the procedure. Intraoperative and postoperative pain scores in candidate women were evaluated during the office microlaparoscopic procedure, in addition to the number of candidates who needed extra analgesia. Women undergoing OMLOD showed good intraoperative and postoperative pain scores. The number of patients discharged within 2 h after the office procedure was significantly higher, without the need for postoperative analgesia in most patients. The LH:FSH ratio, mean serum concentrations of LH and testosterone and free androgen index decreased significantly after both OMLOD and LOD. The mean ovarian volume decreased significantly (P < 0.05) a year after both OMLOD and LOD. There were no significant differences in those results after both procedures. Intra- and postoperatively augmented local anesthesia allows outpatient bilateral ovarian drilling by microlaparoscopy without general anesthesia. The high pregnancy rate, the simplicity of the method and the faster discharge time offer a new option for patients with PCOS who are resistant to clomiphene citrate. Moreover, ovarian drilling could be performed simultaneously during the routine diagnostic microlaparoscopy and integrated into the fertility workup of these patients.",
"title": ""
},
{
"docid": "3d52248b140f516b82abc452336fa40c",
"text": "Requirements engineering is a creative process in which stakeholders and designers work together to create ideas for new systems that are eventually expressed as requirements. This paper describes RESCUE, a scenario-driven requirements engineering process that includes workshops that integrate creativity techniques with different types of use case and system context modelling. It reports a case study in which RESCUE creativity workshops were used to discover stakeholder and system requirements for DMAN, a future air traffic management system for managing departures from major European airports. The workshop was successful in that it provided new and important outputs for subsequent requirements processes. The paper describes the workshop structure and wider RESCUE process, important results and key lessons learned.",
"title": ""
},
{
"docid": "9539b057f14a48cec48468cb97a4a9c1",
"text": "Fuzzy-match repair (FMR), which combines a human-generated translation memory (TM) with the flexibility of machine translation (MT), is one way of using MT to augment resources available to translators. We evaluate rule-based, phrase-based, and neural MT systems as black-box sources of bilingual information for FMR. We show that FMR success varies based on both the quality of the MT system and the type of MT system being used.",
"title": ""
},
{
"docid": "9a5ef746c96a82311e3ebe8a3476a5f4",
"text": "A magnetic-tip steerable needle is presented with application to aiding deep brain stimulation electrode placement. The magnetic needle is 1.3mm in diameter at the tip with a 0.7mm diameter shaft, which is selected to match the size of a deep brain stimulation electrode. The tip orientation is controlled by applying torques to the embedded neodymium-iron-boron permanent magnets with a clinically-sized magnetic-manipulation system. The prototype design is capable of following trajectories under human-in-the-loop control with minimum bend radii of 100mm without inducing tissue damage and down to 30mm if some tissue damage is tolerable. The device can be retracted and redirected to reach multiple targets with a single insertion point.",
"title": ""
},
{
"docid": "62501a588824f70daaf4c2dbc49223da",
"text": "ORB-SLAM2 is one of the better-known open source SLAM implementations available. However, the dependence of visual features causes it to fail in featureless environments. With the present work, we propose a new technique to improve visual odometry results given by ORB-SLAM2 using a tightly Sensor Fusion approach to integrate camera and odometer data. In this work, we use odometer readings to improve the tracking results by adding graph constraints between frames and introduce a new method for preventing the tracking loss. We test our method using three different datasets, and show an improvement in the estimated trajectory, allowing a continuous tracking without losses.",
"title": ""
},
{
"docid": "125b3b5ad3855bfb3206793657661e7d",
"text": "Dependency parsers are among the most crucial tools in natural language processing as they have many important applications in downstream tasks such as information retrieval, machine translation and knowledge acquisition. We introduce the Yara Parser, a fast and accurate open-source dependency parser based on the arc-eager algorithm and beam search. It achieves an unlabeled accuracy of 93.32 on the standard WSJ test set which ranks it among the top dependency parsers. At its fastest, Yara can parse about 4000 sentences per second when in greedy mode (1 beam). When optimizing for accuracy (using 64 beams and Brown cluster features), Yara can parse 45 sentences per second. The parser can be trained on any syntactic dependency treebank and different options are provided in order to make it more flexible and tunable for specific tasks. It is released with the Apache version 2.0 license and can be used for both commercial and academic purposes. The parser can be found at https: //github.com/yahoo/YaraParser.",
"title": ""
},
{
"docid": "dd47b07c8233fe069b5d6999da3af0b2",
"text": "Many students play (computer) games in their leisure time, thus acquiring skills which can easily be utilized when it comes to teaching more sophisticated knowledge. Nevertheless many educators today are wasting this opportunity. Some have evaluated gaming scenarios and methods for teaching students and have created the term “gamification”. This paper describes the history of this new term and explains the possible impact on teaching. It will take well-researched facts into consideration to discuss the potential of games. Moreover, scenarios will be illustrated and evaluated for educators to adopt and use on their own.",
"title": ""
},
{
"docid": "7a720c34f461728bab4905716f925ace",
"text": "We introduce the concept of Graspable User Interfaces that allow direct control of electronic or virtual objects through physical handles for control. These physical artifacts, which we call \"bricks,\" are essentially new input devices that can be tightly coupled or \"attached\" to virtual objects for manipulation or for expressing action (e.g., to set parameters or for initiating processes). Our bricks operate on top of a large horizontal display surface known as the \"ActiveDesk.\" We present four stages in the development of Graspable UIs: (1) a series of exploratory studies on hand gestures and grasping; (2) interaction simulations using mock-ups and rapid prototyping tools; (3) a working prototype and sample application called GraspDraw; and (4) the initial integrating of the Graspable UI concepts into a commercial application. Finally, we conclude by presenting a design space for Bricks which lay the foundation for further exploring and developing Graspable User Interfaces.",
"title": ""
},
{
"docid": "b7e42b4dbcd34d57c25c184f72ed413e",
"text": "How smart can a micron-sized bag of chemicals be? How can an artificial or real cell make inferences about its environment? From which kinds of probability distributions can chemical reaction networks sample? We begin tackling these questions by showing four ways in which a stochastic chemical reaction network can implement a Boltzmann machine, a stochastic neural network model that can generate a wide range of probability distributions and compute conditional probabilities. The resulting models, and the associated theorems, provide a road map for constructing chemical reaction networks that exploit their native stochasticity as a computational resource. Finally, to show the potential of our models, we simulate a chemical Boltzmann machine to classify and generate MNIST digits in-silico.",
"title": ""
}
] | scidocsrr |
37c29a17b493e1ce267ec285962f06c3 | ChronoStream: Elastic stateful stream computation in the cloud | [
{
"docid": "7add673c4f72e6a7586109ac3bdab2ec",
"text": "Bigtable is a distributed storage system for managing structured data that is designed to scale to a very large size: petabytes of data across thousands of commodity servers. Many projects at Google store data in Bigtable, including web indexing, Google Earth, and Google Finance. These applications place very different demands on Bigtable, both in terms of data size (from URLs to web pages to satellite imagery) and latency requirements (from backend bulk processing to real-time data serving). Despite these varied demands, Bigtable has successfully provided a flexible, high-performance solution for all of these Google products. In this article, we describe the simple data model provided by Bigtable, which gives clients dynamic control over data layout and format, and we describe the design and implementation of Bigtable.",
"title": ""
},
{
"docid": "60e06e3eebafa9070eecf1ab1e9654f8",
"text": "In most enterprises, databases are deployed on dedicated database servers. Often, these servers are underutilized much of the time. For example, in traces from almost 200 production servers from different organizations, we see an average CPU utilization of less than 4%. This unused capacity can be potentially harnessed to consolidate multiple databases on fewer machines, reducing hardware and operational costs. Virtual machine (VM) technology is one popular way to approach this problem. However, as we demonstrate in this paper, VMs fail to adequately support database consolidation, because databases place a unique and challenging set of demands on hardware resources, which are not well-suited to the assumptions made by VM-based consolidation.\n Instead, our system for database consolidation, named Kairos, uses novel techniques to measure the hardware requirements of database workloads, as well as models to predict the combined resource utilization of those workloads. We formalize the consolidation problem as a non-linear optimization program, aiming to minimize the number of servers and balance load, while achieving near-zero performance degradation. We compare Kairos against virtual machines, showing up to a factor of 12× higher throughput on a TPC-C-like benchmark. We also tested the effectiveness of our approach on real-world data collected from production servers at Wikia.com, Wikipedia, Second Life, and MIT CSAIL, showing absolute consolidation ratios ranging between 5.5:1 and 17:1.",
"title": ""
}
] | [
{
"docid": "8d3c4598b7d6be5894a1098bea3ed81a",
"text": "Retrieval enhances long-term retention. However, reactivation of a memory also renders it susceptible to modifications as shown by studies on memory reconsolidation. The present study explored whether retrieval diminishes or enhances subsequent retroactive interference (RI) and intrusions. Participants learned a list of objects. Two days later, they were either asked to recall the objects, given a subtle reminder, or were not reminded of the first learning session. Then, participants learned a second list of objects or performed a distractor task. After another two days, retention of List 1 was tested. Although retrieval enhanced List 1 memory, learning a second list impaired memory in all conditions. This shows that testing did not protect memory from RI. While a subtle reminder before List 2 learning caused List 2 items to later intrude into List 1 recall, very few such intrusions were observed in the testing and the no reminder conditions. The findings are discussed in reference to the reconsolidation account and the testing effect literature, and implications for educational practice are outlined. © 2015 Elsevier Inc. All rights reserved. Retrieval practice or testing is one of the most powerful memory enhancers. Testing that follows shortly after learning benefits long-term retention more than studying the to-be-remembered material again (Roediger & Karpicke, 2006a, 2006b). This effect has been shown using a variety of materials and paradigms, such as text passages (e.g., Roediger & Karpicke, 2006a), paired associates (Allen, Mahler, & Estes, 1969), general knowledge questions (McDaniel & Fisher, 1991), and word and picture lists (e.g., McDaniel & Masson, 1985; Wheeler & Roediger, 1992; Wheeler, Ewers, & Buonanno, 2003). Testing effects have been observed in traditional lab as well as educational settings (Grimaldi & Karpicke, 2015; Larsen, Butler, & Roediger, 2008; McDaniel, Anderson, Derbish, & Morrisette, 2007). Testing not only improves long-term retention, it also enhances subsequent encoding (Pastötter, Schicker, Niedernhuber, & Bäuml, 2011), protects memories from the buildup of proactive interference (PI; Nunes & Weinstein, 2012; Wahlheim, 2014), and reduces the probability that the tested items intrude into subsequently studied lists (Szpunar, McDermott, & Roediger, 2008; Weinstein, McDermott, & Szpunar, 2011). The reduced PI and intrusion rates are assumed to reflect enhanced list discriminability or improved within-list organization. Enhanced list discriminability in turn helps participants distinguish different sets or sources of information and allows them to circumscribe the search set during retrieval to the relevant list (e.g., Congleton & Rajaram, 2012; Halamish & Bjork, 2011; Szpunar et al., 2008). ∗ Correspondence to: Department of Psychology, Lehigh University, 17 Memorial Drive East, Bethlehem, PA 18015, USA. E-mail address: hupbach@lehigh.edu http://dx.doi.org/10.1016/j.lmot.2015.01.004 0023-9690/© 2015 Elsevier Inc. All rights reserved. 24 A. Hupbach / Learning and Motivation 49 (2015) 23–30 If testing increases list discriminability, then it should also protect the tested list(s) from RI and intrusions from material that is encoded after retrieval practice. 
However, testing also necessarily reactivates a memory, and according to the reconsolidation account reactivation re-introduces plasticity into the memory trace, making it especially vulnerable to modifications (e.g., Dudai, 2004; Nader, Schafe, & LeDoux, 2000; for a recent review, see e.g., Hupbach, Gomez, & Nadel, 2013). Increased vulnerability to modification would suggest increased rather than reduced RI and intrusions. The few studies addressing this issue have yielded mixed results, with some suggesting that retrieval practice diminishes RI (Halamish & Bjork, 2011; Potts & Shanks, 2012), and others showing that retrieval practice can exacerbate the potential negative effects of post-retrieval learning (e.g., Chan & LaPaglia, 2013; Chan, Thomas, & Bulevich, 2009; Walker, Brakefield, Hobson, & Stickgold, 2003). Chan and colleagues (Chan & Langley, 2011; Chan et al., 2009; Thomas, Bulevich, & Chan, 2010) assessed the effects of testing on suggestibility in a misinformation paradigm. After watching a television episode, participants answered cuedrecall questions about it (retrieval practice) or performed an unrelated distractor task. Then, all participants read a narrative, which summarized the video but also contained some misleading information. A final cued-recall test revealed that participants in the retrieval practice condition recalled more misleading details and fewer correct details than participants in the distractor condition; that is, retrieval increased the misinformation effect (retrieval-enhanced suggestibility, RES). Chan et al. (2009) discuss two mechanisms that can explain this finding. First, since testing can potentiate subsequent new learning (e.g., Izawa, 1967; Tulving & Watkins, 1974), initial testing might have improved encoding of the misinformation. Indeed, when a modified final test was used, which encouraged the recall of both the correct information and the misinformation, participants in the retrieval practice condition recalled more misinformation than participants in the distractor condition (Chan et al., 2009). Second, retrieval might have rendered the memory more susceptible to interference by misinformation, an explanation that is in line with the reconsolidation account. Indeed, Chan and LaPaglia (2013) found reduced recognition of the correct information when retrieval preceded the presentation of misinformation (cf. Walker et al., 2003 for a similar effect in procedural memory). In contrast to Chan and colleagues’ findings, a study by Potts and Shanks (2012) suggests that testing protects memories from the negative influences of post-retrieval encoding of related material. Potts and Shanks asked participants to learn English–Swahili word pairs (List 1, A–B). One day later, one group of participants took a cued recall test of List 1 (testing condition) immediately before learning English–Finnish word pairs with the same English cues as were used in List 1 (List 2, A–C). Additionally, several control groups were implemented: one group was tested on List 1 without learning a second list, one group learned List 2 without prior retrieval practice, and one group did not participate in this session at all. On the third day, all participants took a final cued-recall test of List 1. Although retrieval practice per se did not enhance List 1 memory (i.e., no testing effect in the groups that did not learn List 2), it protected memory from RI (see Halamish & Bjork, 2011 for a similar result in a one-session study). 
Crucial for assessing the reconsolidation account is the comparison between the groups that learned List 2 either after List 1 recall or without prior List 1 recall. Contrary to the predictions derived from the reconsolidation account, final List 1 recall was enhanced when retrieval of List 1 preceded learning of List 2.1 While this clearly shows that testing counteracts RI, it would be premature to conclude that testing prevented the disruption of memory reconsolidation, because (a) retrieval practice without List 2 learning led to minimal forgetting between Day 2 and 3, while retrieval practice followed by List 2 learning led to significant memory decline, and (b) a reactivation condition that is independent from retrieval practice is missing. One could argue that repeating the cue words in List 2 likely reactivated memory for the original associations. It has been shown that the strength of reactivation (Detre, Natarajan, Gershman, & Norman, 2013) and the specific reminder structure (Forcato, Argibay, Pedreira, & Maldonado, 2009) determine whether or not a memory will be affected by post-reactivation procedures. The current study re-evaluates the question of how testing affects RI and intrusions. It uses a reconsolidation paradigm (Hupbach, Gomez, Hardt, & Nadel, 2007; Hupbach, Hardt, Gomez, & Nadel, 2008; Hupbach, Gomez, & Nadel, 2009; Hupbach, Gomez, & Nadel, 2011) to assess how testing in comparison to other reactivation procedures affects declarative memory. This paradigm will allow for a direct evaluation of the hypotheses that testing makes declarative memories vulnerable to interference, or that testing protects memories from the potential negative effects of subsequently learned material, as suggested by the list-separation hypothesis (e.g., Congleton & Rajaram, 2012; Halamish & Bjork, 2011; Szpunar et al., 2008). This question has important practical implications. For instance, when students test their memory while preparing for an exam, will such testing increase or reduce interference and intrusions from information that is learned afterwards?",
"title": ""
},
{
"docid": "02a3b3034bb6c58eee37b462236a9e7d",
"text": "Short Message Service (SMS) is a text messaging service component of phone, web, or mobile communication systems, using standardized communications protocols that allow the exchange of short text messages between fixed line or mobile phone devices. Security of SMS’s is still an open challenging task. Various Cryptographic algorithms have been applied to secure the mobile SMS. The success of any cryptography technique depends on various factors like complexity, time, memory requirement, cost etc. In this paper we survey the most common and widely used SMS Encryption techniques. Each has its own advantages and disadvantages. Recent trends on Cryptography on android message applications have also been discussed. The latest cryptographic algorithm is based on lookup table and dynamic key which is easy to implement and to use and improve the efficiency. In this paper, an improvement in lookup table and dynamic algorithm is proposed. Rather than using the Static Lookup Table, Dynamic Lookup Table may be used which will improve the overall efficiency. KeywordsSMS, AES, DES, Blowfish, RSA, 3DES, LZW.",
"title": ""
},
{
"docid": "2b7d91c38a140628199cbdbee65c008a",
"text": "Edges in man-made environments, grouped according to vanishing point directions, provide single-view constraints that have been exploited before as a precursor to both scene understanding and camera calibration. A Bayesian approach to edge grouping was proposed in the \"Manhattan World\" paper by Coughlan and Yuille, where they assume the existence of three mutually orthogonal vanishing directions in the scene. We extend the thread of work spawned by Coughlan and Yuille in several significant ways. We propose to use the expectation maximization (EM) algorithm to perform the search over all continuous parameters that influence the location of the vanishing points in a scene. Because EM behaves well in high-dimensional spaces, our method can optimize over many more parameters than the exhaustive and stochastic algorithms used previously for this task. Among other things, this lets us optimize over multiple groups of orthogonal vanishing directions, each of which induces one additional degree of freedom. EM is also well suited to recursive estimation of the kind needed for image sequences and/or in mobile robotics. We present experimental results on images of \"Atlanta worlds\", complex urban scenes with multiple orthogonal edge-groups, that validate our approach. We also show results for continuous relative orientation estimation on a mobile robot.",
"title": ""
},
{
"docid": "91dd10428713ab2bbf1d07bf543cd2da",
"text": "Recent findings showed that users on Facebook tend to select information that adhere to their system of beliefs and to form polarized groups - i.e., echo chambers. Such a tendency dominates information cascades and might affect public debates on social relevant issues. In this work we explore the structural evolution of communities of interest by accounting for users emotions and engagement. Focusing on the Facebook pages reporting on scientific and conspiracy content, we characterize the evolution of the size of the two communities by fitting daily resolution data with three growth models - i.e. the Gompertz model, the Logistic model, and the Log-logistic model. Although all the models appropriately describe the data structure, the Logistic one shows the best fit. Then, we explore the interplay between emotional state and engagement of users in the group dynamics. Our findings show that communities' emotional behavior is affected by the users' involvement inside the echo chamber. Indeed, to an higher involvement corresponds a more negative approach. Moreover, we observe that, on average, more active users show a faster shift towards the negativity than less active ones.",
"title": ""
},
{
"docid": "325772543e172b1a5bd08d20092b1069",
"text": "Despite considerable research on passwords, empirical studies of password strength have been limited by lack of access to plaintext passwords, small data sets, and password sets specifically collected for a research study or from low-value accounts. Properties of passwords used for high-value accounts thus remain poorly understood.\n We fill this gap by studying the single-sign-on passwords used by over 25,000 faculty, staff, and students at a research university with a complex password policy. Key aspects of our contributions rest on our (indirect) access to plaintext passwords. We describe our data collection methodology, particularly the many precautions we took to minimize risks to users. We then analyze how guessable the collected passwords would be during an offline attack by subjecting them to a state-of-the-art password cracking algorithm. We discover significant correlations between a number of demographic and behavioral factors and password strength. For example, we find that users associated with the computer science school make passwords more than 1.5 times as strong as those of users associated with the business school. while users associated with computer science make strong ones. In addition, we find that stronger passwords are correlated with a higher rate of errors entering them.\n We also compare the guessability and other characteristics of the passwords we analyzed to sets previously collected in controlled experiments or leaked from low-value accounts. We find more consistent similarities between the university passwords and passwords collected for research studies under similar composition policies than we do between the university passwords and subsets of passwords leaked from low-value accounts that happen to comply with the same policies.",
"title": ""
},
{
"docid": "d22f3bbb7af0ce2a221a17a12381de25",
"text": "Ambient occlusion is a technique that computes the amount of light reaching a point on a diffuse surface based on its directly visible occluders. It gives perceptual clues of depth, curvature, and spatial proximity and thus is important for realistic rendering. Traditionally, ambient occlusion is calculated by integrating the visibility function over the normal-oriented hemisphere around any given surface point. In this paper we show this hemisphere can be partitioned into two regions by a horizon line defined by the surface in a local neighborhood of such point. We introduce an image-space algorithm for finding an approximation of this horizon and, furthermore, we provide an analytical closed form solution for the occlusion below the horizon, while the rest of the occlusion is computed by sampling based on a distribution to improve the convergence. The proposed ambient occlusion algorithm operates on the depth buffer of the scene being rendered and the associated per-pixel normal buffer. It can be implemented on graphics hardware in a pixel shader, independently of the scene geometry. We introduce heuristics to reduce artifacts due to the incompleteness of the input data and we include parameters to make the algorithm easy to customize for quality or performance purposes. We show that our technique can render high-quality ambient occlusion at interactive frame rates on current GPUs. CR Categories: I.3.3 [Computer Graphics]: Picture/Image Generation—; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism—;",
"title": ""
},
{
"docid": "95a36969ad22c9ad42639cf0e4a824d6",
"text": "We consider the problems of kinematic and dynamic constraints, with actuator saturation and wheel slippage avoidance, for motion planning of a holonomic three-wheeled omni-directional robot. That is, the motion planner must not demand more velocity and acceleration at each time instant than the robot can provide. A new coupled non-linear dynamics model is derived. The novel concepts of Velocity and Acceleration Cones are proposed for determining the kinematic and dynamic constraints. The Velocity Cone is based on kinematics; we propose two Acceleration Cones, one for avoiding actuator saturation and the other for avoiding wheel slippage. The wheel slippage Acceleration Cone was found to dominate. In practical motion, all commanded velocities and accelerations from the motion planner must lie within these cones for successful motion. Case studies, simulations, and experimental validations are presented for our dynamic model and controller, plus the Velocity and Acceleration Cones.",
"title": ""
},
{
"docid": "0bcff493580d763dbc1dd85421546201",
"text": "The development of powerful imaging tools, editing images for changing their data content is becoming a mark to undertake. Tempering image contents by adding, removing, or copying/moving without leaving a trace or unable to be discovered by the investigation is an issue in the computer forensic world. The protection of information shared on the Internet like images and any other con?dential information is very signi?cant. Nowadays, forensic image investigation tools and techniques objective is to reveal the tempering strategies and restore the firm belief in the reliability of digital media. This paper investigates the challenges of detecting steganography in computer forensics. Open source tools were used to analyze these challenges. The experimental investigation focuses on using steganography applications that use same algorithms to hide information exclusively within an image. The research finding denotes that, if a certain steganography tool A is used to hide some information within a picture, and then tool B which uses the same procedure would not be able to recover the embedded image.",
"title": ""
},
{
"docid": "3763da6b72ee0a010f3803a901c9eeb2",
"text": "As NAND flash memory manufacturers scale down to smaller process technology nodes and store more bits per cell, reliability and endurance of flash memory reduce. Wear-leveling and error correction coding can improve both reliability and endurance, but finding effective algorithms requires a strong understanding of flash memory error patterns. To enable such understanding, we have designed and implemented a framework for fast and accurate characterization of flash memory throughout its lifetime. This paper examines the complex flash errors that occur at 30-40nm flash technologies. We demonstrate distinct error patterns, such as cycle-dependency, location-dependency and value-dependency, for various types of flash operations. We analyze the discovered error patterns and explain why they exist from a circuit and device standpoint. Our hope is that the understanding developed from this characterization serves as a building block for new error tolerance algorithms for flash memory.",
"title": ""
},
{
"docid": "363a465d626fec38555563722ae92bb1",
"text": "A novel reverse-conducting insulated-gate bipolar transistor (RC-IGBT) featuring an oxide trench placed between the n-collector and the p-collector and a floating p-region (p-float) sandwiched between the n-drift and n-collector is proposed. First, the new structure introduces a high-resistance collector short resistor at low current density, which leads to the suppression of the snapback effect. Second, the collector short resistance can be adjusted by varying the p-float length without increasing the collector cell length. Third, the p-float layer also acts as the base of the n-collector/p-float/n-drift transistor which can be activated and offers a low-resistance current path at high current densities, which contributes to the low on-state voltage of the integrated freewheeling diode and the fast turnoff. As simulations show, the proposed RC-IGBT shows snapback-free output characteristics and faster turnoff compared with the conventional RC-IGBT.",
"title": ""
},
{
"docid": "0f2023682deaf2eb70c7becd8b3375dd",
"text": "Generating answer with natural language sentence is very important in real-world question answering systems, which needs to obtain a right answer as well as a coherent natural response. In this paper, we propose an end-to-end question answering system called COREQA in sequence-to-sequence learning, which incorporates copying and retrieving mechanisms to generate natural answers within an encoder-decoder framework. Specifically, in COREQA, the semantic units (words, phrases and entities) in a natural answer are dynamically predicted from the vocabulary, copied from the given question and/or retrieved from the corresponding knowledge base jointly. Our empirical study on both synthetic and realworld datasets demonstrates the efficiency of COREQA, which is able to generate correct, coherent and natural answers for knowledge inquired questions.",
"title": ""
},
{
"docid": "9dab38b961f4be434c95ca6696ba52bd",
"text": "The widespread use and increasing capabilities of mobiles devices are making them a viable platform for offering mobile services. However, the increasing resource demands of mobile services and the inherent constraints of mobile devices limit the quality and type of functionality that can be offered, preventing mobile devices from exploiting their full potential as reliable service providers. Computation offloading offers mobile devices the opportunity to transfer resource-intensive computations to more resourcefulcomputing infrastructures. We present a framework for cloud-assisted mobile service provisioning to assist mobile devices in delivering reliable services. The framework supports dynamic offloading based on the resource status of mobile systems and current network conditions, while satisfying the user-defined energy constraints. It also enables the mobile provider to delegate the cloud infrastructure to forward the service response directly to the user when no further processing is required by the provider. Performance evaluation shows up to 6x latency improvement for computation-intensive services that do not require large data transfer. Experiments show that the operation of the cloud-assisted service provisioning framework does not pose significant overhead on mobile resources, yet it offers robust and efficient computation offloading.",
"title": ""
},
{
"docid": "82bfc1bc10247a23f45e30481db82245",
"text": "The performance of automatic speech recognition (ASR) has improved tremendously due to the application of deep neural networks (DNNs). Despite this progress, building a new ASR system remains a challenging task, requiring various resources, multiple training stages and significant expertise. This paper presents our Eesen framework which drastically simplifies the existing pipeline to build state-of-the-art ASR systems. Acoustic modeling in Eesen involves learning a single recurrent neural network (RNN) predicting context-independent targets (phonemes or characters). To remove the need for pre-generated frame labels, we adopt the connectionist temporal classification (CTC) objective function to infer the alignments between speech and label sequences. A distinctive feature of Eesen is a generalized decoding approach based on weighted finite-state transducers (WFSTs), which enables the efficient incorporation of lexicons and language models into CTC decoding. Experiments show that compared with the standard hybrid DNN systems, Eesen achieves comparable word error rates (WERs), while at the same time speeding up decoding significantly.",
"title": ""
},
{
"docid": "028eb05afad2183bdf695b4268c438ed",
"text": "OBJECTIVE\nChoosing an appropriate method for regression analyses of cost data is problematic because it must focus on population means while taking into account the typically skewed distribution of the data. In this paper we illustrate the use of generalised linear models for regression analysis of cost data.\n\n\nMETHODS\nWe consider generalised linear models with either an identity link function (providing additive covariate effects) or log link function (providing multiplicative effects), and with gaussian (normal), overdispersed poisson, gamma, or inverse gaussian distributions. These are applied to estimate the treatment effects in two randomised trials adjusted for baseline covariates. Criteria for choosing an appropriate model are presented.\n\n\nRESULTS\nIn both examples considered, the gaussian model fits poorly and other distributions are to be preferred. When there are variables of prognostic importance in the model, using different distributions can materially affect the estimates obtained; it may also be possible to discriminate between additive and multiplicative covariate effects.\n\n\nCONCLUSIONS\nGeneralised linear models are attractive for the regression of cost data because they provide parametric methods of analysis where a variety of non-normal distributions can be specified and the way covariates act can be altered. Unlike the use of data transformation in ordinary least-squares regression, generalised linear models make inferences about the mean cost directly.",
"title": ""
},
{
"docid": "324e67e78d8786448106b25871c91ed6",
"text": "Interpretation of image contents is one of the objectives in computer vision specifically in image processing. In this era it has received much awareness of researchers. In image interpretation the partition of the image into object and background is a severe step. Segmentation separates an image into its component regions or objects. Image segmentation t needs to segment the object from the background to read the image properly and identify the content of the image carefully. In this context, edge detection is a fundamental tool for image segmentation. In this paper an attempt is made to study the performance of most commonly used edge detection techniques for image segmentation and also the comparison of these techniques is carried out with an experiment by using MATLAB software.",
"title": ""
},
{
"docid": "79ff4bd891538a0d1b5a002d531257f2",
"text": "Reverse conducting IGBTs are fabricated in a large productive volume for soft switching applications, such as inductive heaters, microwave ovens or lamp ballast, since several years. To satisfy the requirements of hard switching applications, such as inverters in refrigerators, air conditioners or general purpose drives, the reverse recovery behavior of the integrated diode has to be optimized. Two promising concepts for such an optimization are based on a reduction of the charge- carrier lifetime or the anti-latch p+ implantation dose. It is shown that a combination of both concepts will lead to a device with a good reverse recovery behavior, low forward and reverse voltage drop and excellent over current turn- off capability of a trench field-stop IGBT.",
"title": ""
},
{
"docid": "389a8e74f6573bd5e71b7c725ec3a4a7",
"text": "Paucity of large curated hand-labeled training data forms a major bottleneck in the deployment of machine learning models in computer vision and other fields. Recent work (Data Programming) has shown how distant supervision signals in the form of labeling functions can be used to obtain labels for given data in near-constant time. In this work, we present Adversarial Data Programming (ADP), which presents an adversarial methodology to generate data as well as a curated aggregated label, given a set of weak labeling functions. We validated our method on the MNIST, Fashion MNIST, CIFAR 10 and SVHN datasets, and it outperformed many state-of-the-art models. We conducted extensive experiments to study its usefulness, as well as showed how the proposed ADP framework can be used for transfer learning as well as multi-task learning, where data from two domains are generated simultaneously using the framework along with the label information. Our future work will involve understanding the theoretical implications of this new framework from a game-theoretic perspective, as well as explore the performance of the method on more complex datasets.",
"title": ""
},
{
"docid": "ae85cf24c079ff446b76f0ba81146369",
"text": "Subgraph Isomorphism is a fundamental problem in graph data processing. Most existing subgraph isomorphism algorithms are based on a backtracking framework which computes the solutions by incrementally matching all query vertices to candidate data vertices. However, we observe that extensive duplicate computation exists in these algorithms, and such duplicate computation can be avoided by exploiting relationships between data vertices. Motivated by this, we propose a novel approach, BoostIso, to reduce duplicate computation. Our extensive experiments with real datasets show that, after integrating our approach, most existing subgraph isomorphism algorithms can be speeded up significantly, especially for some graphs with intensive vertex relationships, where the improvement can be up to several orders of magnitude.",
"title": ""
}
] | scidocsrr |
e8872a10a902f508cb71148612dc6224 | Bucket Elimination: A Unifying Framework for Reasoning | [
{
"docid": "34b3c5ee3ea466c23f5c7662f5ce5b33",
"text": "A hstruct -The concept of a super value node is developed to estend the theor? of influence diagrams to allow dynamic programming to be performed within this graphical modeling framework. The operations necessa? to exploit the presence of these nodes and efficiently analyze the models are developed. The key result is that by reprewnting value function separability in the structure of the graph of the influence diagram. formulation is simplified and operations on the model can take advantage of the wparability. Froni the decision analysis perspective. this allows simple exploitation of separabilih in the value function of a decision problem which can significantly reduce memory and computation requirements. Importantly. this allows algorithms to be designed to solve influence diagrams that automatically recognize the opportunih for applying dynamic programming. From the decision processes perspective, influence diagrams with super value nodes allow efficient formulation and solution of nonstandard decision process structures. They a h allow the exploitation of conditional independence between state variables. Examples are provided that demonstrate these advantages.",
"title": ""
}
] | [
{
"docid": "8923cd83f3283ef27fca8dd0ecf2a08f",
"text": "This paper investigates when users create profiles in different social networks, whether they are redundant expressions of the same persona, or they are adapted to each platform. Using the personal webpages of 116,998 users on About.me, we identify and extract matched user profiles on several major social networks including Facebook, Twitter, LinkedIn, and Instagram. We find evidence for distinct site-specific norms, such as differences in the language used in the text of the profile self-description, and the kind of picture used as profile image. By learning a model that robustly identifies the platform given a user’s profile image (0.657–0.829 AUC) or self-description (0.608–0.847 AUC), we confirm that users do adapt their behaviour to individual platforms in an identifiable and learnable manner. However, different genders and age groups adapt their behaviour differently from each other, and these differences are, in general, consistent across different platforms. We show that differences in social profile construction correspond to differences in how formal or informal",
"title": ""
},
{
"docid": "79c2623b0e1b51a216fffbc6bbecd9ec",
"text": "Visual notations form an integral part of the language of software engineering (SE). Yet historically, SE researchers and notation designers have ignored or undervalued issues of visual representation. In evaluating and comparing notations, details of visual syntax are rarely discussed. In designing notations, the majority of effort is spent on semantics, with graphical conventions largely an afterthought. Typically, no design rationale, scientific or otherwise, is provided for visual representation choices. While SE has developed mature methods for evaluating and designing semantics, it lacks equivalent methods for visual syntax. This paper defines a set of principles for designing cognitively effective visual notations: ones that are optimized for human communication and problem solving. Together these form a design theory, called the Physics of Notations as it focuses on the physical (perceptual) properties of notations rather than their logical (semantic) properties. The principles were synthesized from theory and empirical evidence from a wide range of fields and rest on an explicit theory of how visual notations communicate. They can be used to evaluate, compare, and improve existing visual notations as well as to construct new ones. The paper identifies serious design flaws in some of the leading SE notations, together with practical suggestions for improving them. It also showcases some examples of visual notation design excellence from SE and other fields.",
"title": ""
},
{
"docid": "a6d3a8fcf10ee1fed6e3a933987db365",
"text": "This interdisciplinary conference explores exoticism, understood as a highly contested discourse on cultural difference as well as an alluring form of alterity that promotes a sense of cosmopolitan connectivity. Presentations and discussions will revolve around the question how the collapsed distances of globalisation and the transnational flows of media and people have transformed exoticism, which is no longer exclusively the projection of Orientalist fantasies of the Other from one centre, the West, but which emanates from multiple localities and is multidirectional in perspective.",
"title": ""
},
{
"docid": "dfbf284e97000e884281e4f25e7b615e",
"text": "Due to its popularity and open-source nature, Android is the mobile platform that has been targeted the most by malware that aim to steal personal information or to control the users' devices. More specifically, mobile botnets are malware that allow an attacker to remotely control the victims' devices through different channels like HTTP, thus creating malicious networks of bots. In this paper, we show how it is possible to effectively group mobile botnets families by analyzing the HTTP traffic they generate. To do so, we create malware clusters by looking at specific statistical information that are related to the HTTP traffic. This approach also allows us to extract signatures with which it is possible to precisely detect new malware that belong to the clustered families. Contrarily to x86 malware, we show that using fine-grained HTTP structural features do not increase detection performances. Finally, we point out how the HTTP information flow among mobile bots contains more information when compared to the one generated by desktop ones, allowing for a more precise detection of mobile threats.",
"title": ""
},
{
"docid": "610629d3891c10442fe5065e07d33736",
"text": "We investigate in this paper deep learning (DL) solutions for prediction of driver's cognitive states (drowsy or alert) using EEG data. We discussed the novel channel-wise convolutional neural network (CCNN) and CCNN-R which is a CCNN variation that uses Restricted Boltzmann Machine in order to replace the convolutional filter. We also consider bagging classifiers based on DL hidden units as an alternative to the conventional DL solutions. To test the performance of the proposed methods, a large EEG dataset from 3 studies of driver's fatigue that includes 70 sessions from 37 subjects is assembled. All proposed methods are tested on both raw EEG and Independent Component Analysis (ICA)-transformed data for cross-session predictions. The results show that CCNN and CCNN-R outperform deep neural networks (DNN) and convolutional neural networks (CNN) as well as other non-DL algorithms and DL with raw EEG inputs achieves better performance than ICA features.",
"title": ""
},
{
"docid": "2c8dc61a5dbdfcf8f086a5e6a0d920c1",
"text": "This work achieves a two-and-a-half-dimensional (2.5D) wafer-level radio frequency (RF) energy harvesting rectenna module with a compact size and high power conversion efficiency (PCE) that integrates a 2.45 GHz antenna in an integrated passive device (IPD) and a rectifier in a tsmcTM 0.18 μm CMOS process. The proposed rectifier provides a master-slave voltage doubling full-wave topology which can reach relatively high PCE by means of a relatively simple circuitry. The IPD antenna was stacked on top of the CMOS rectifier. The rectenna (including an antenna and rectifier) achieves an output voltage of 1.2 V and PCE of 47 % when the operation frequency is 2.45 GHz, with −12 dBm input power. The peak efficiency of the circuit is 83 % with −4 dBm input power. The die size of the RF harvesting module is less than 1 cm2. The performance of this module makes it possible to energy mobile device and it is also very suitable for wearable and implantable wireless sensor networks (WSN).",
"title": ""
},
{
"docid": "a82a658a8200285cf5a6eab8035a3fce",
"text": "This paper examines the magnitude of informational problems associated with the implementation and interpretation of simple monetary policy rules. Using Taylor’s rule as an example, I demonstrate that real-time policy recommendations differ considerably from those obtained with ex post revised data. Further, estimated policy reaction functions based on ex post revised data provide misleading descriptions of historical policy and obscure the behavior suggested by information available to the Federal Reserve in real time. These results indicate that reliance on the information actually available to policy makers in real time is essential for the analysis of monetary policy rules. (JEL E52, E58)",
"title": ""
},
{
"docid": "01b35a491b36f9c90f37237ef3975e33",
"text": "Wide bandgap semiconductors show superior material properties enabling potential power device operation at higher temperatures, voltages, and switching speeds than current Si technology. As a result, a new generation of power devices is being developed for power converter applications in which traditional Si power devices show limited operation. The use of these new power semiconductor devices will allow both an important improvement in the performance of existing power converters and the development of new power converters, accounting for an increase in the efficiency of the electric energy transformations and a more rational use of the electric energy. At present, SiC and GaN are the more promising semiconductor materials for these new power devices as a consequence of their outstanding properties, commercial availability of starting material, and maturity of their technological processes. This paper presents a review of recent progresses in the development of SiC- and GaN-based power semiconductor devices together with an overall view of the state of the art of this new device generation.",
"title": ""
},
{
"docid": "a055a3799dbf1f1cf1c389262a882d65",
"text": "This paper constitutes a first study of the Particle Swarm Optimization (PSO) method in Multiobjective Optimization (MO) problems. The ability of PSO to detect Pareto Optimal points and capture the shape of the Pareto Front is studied through experiments on well-known non-trivial test functions. The Weighted Aggregation technique with fixed or adaptive weights is considered. Furthermore, critical aspects of the VEGA approach for Multiobjective Optimization using Genetic Algorithms are adapted to the PSO framework in order to develop a multi-swarm PSO that can cope effectively with MO problems. Conclusions are derived and ideas for further research are proposed.",
"title": ""
},
{
"docid": "18c9eb47a76d2320f3d42bcf0129d5fe",
"text": "In his article Open Problems in the Philosophy of Information (Metaphilosophy 2004, 35 (4)), Luciano Floridi presented a Philosophy of Information research programme in the form of eighteen open problems, covering the following fundamental areas: information definition, information semantics, intelligence/cognition, informational universe/nature and values/ethics. We revisit Floridi’s programme, highlighting some of the major advances, commenting on unsolved problems and rendering the new landscape of the Philosophy of Information (PI) emerging at present. As we analyze the progress of PI we try to situate Floridi’s programme in the context of scientific and technological development that have been made last ten years. We emphasize that Philosophy of Information is a huge and vibrant research field, with its origins dating before Open Problems, and its domains extending even outside their scope. In this paper, we have been able only to sketch some of the developments during the past ten years. Our hope is that, even if fragmentary, this review may serve as a contribution to the effort of understanding the present state of the art and the paths of development of Philosophy of Information as seen through the lens of Open Problems.",
"title": ""
},
{
"docid": "009d79972bd748d7cf5206bb188aba00",
"text": "Quasi-Newton methods are widely used in practise for convex loss minimization problems. These methods exhibit good empirical performanc e o a wide variety of tasks and enjoy super-linear convergence to the optimal s olution. For largescale learning problems, stochastic Quasi-Newton methods ave been recently proposed. However, these typically only achieve sub-linea r convergence rates and have not been shown to consistently perform well in practice s nce noisy Hessian approximations can exacerbate the effect of high-variance stochastic gradient estimates. In this work we propose V ITE, a novel stochastic Quasi-Newton algorithm that uses an existing first-order technique to reduce this va r ance. Without exploiting the specific form of the approximate Hessian, we show that V ITE reaches the optimum at a geometric rate with a constant step-size when de aling with smooth strongly convex functions. Empirically, we demonstrate im provements over existing stochastic Quasi-Newton and variance reduced stochast i gradient methods.",
"title": ""
},
{
"docid": "ed3b4ace00c68e9ad2abe6d4dbdadfcb",
"text": "With decreasing costs of high-quality surveillance systems, human activity detection and tracking has become increasingly practical. Accordingly, automated systems have been designed for numerous detection tasks, but the task of detecting illegally parked vehicles has been left largely to the human operators of surveillance systems. We propose a methodology for detecting this event in real time by applying a novel image projection that reduces the dimensionality of the data and, thus, reduces the computational complexity of the segmentation and tracking processes. After event detection, we invert the transformation to recover the original appearance of the vehicle and to allow for further processing that may require 2-D data. We evaluate the performance of our algorithm using the i-LIDS vehicle detection challenge datasets as well as videos we have taken ourselves. These videos test the algorithm in a variety of outdoor conditions, including nighttime video and instances of sudden changes in weather.",
"title": ""
},
{
"docid": "d2d3c47010566662eeaa2df01c768d5f",
"text": "To be rational is to be able to reason. Thirty years ago psychologists believed that human reasoning depended on formal rules of inference akin to those of a logical calculus. This hypothesis ran into difficulties, which led to an alternative view: reasoning depends on envisaging the possibilities consistent with the starting point--a perception of the world, a set of assertions, a memory, or some mixture of them. We construct mental models of each distinct possibility and derive a conclusion from them. The theory predicts systematic errors in our reasoning, and the evidence corroborates this prediction. Yet, our ability to use counterexamples to refute invalid inferences provides a foundation for rationality. On this account, reasoning is a simulation of the world fleshed out with our knowledge, not a formal rearrangement of the logical skeletons of sentences.",
"title": ""
},
{
"docid": "2ce90f045706cf98f3a0d624828b99b8",
"text": "A promising class of generative models maps points from a simple distribution to a complex distribution through an invertible neural network. Likelihood-based training of these models requires restricting their architectures to allow cheap computation of Jacobian determinants. Alternatively, the Jacobian trace can be used if the transformation is specified by an ordinary differential equation. In this paper, we use Hutchinson’s trace estimator to give a scalable unbiased estimate of the log-density. The result is a continuous-time invertible generative model with unbiased density estimation and one-pass sampling, while allowing unrestricted neural network architectures. We demonstrate our approach on high-dimensional density estimation, image generation, and variational inference, achieving the state-of-the-art among exact likelihood methods with efficient sampling.",
"title": ""
},
{
"docid": "b46498351a95cbb9ce21b34b58eb3d94",
"text": "Under normal circumstances, mammalian adult skeletal muscle is a stable tissue with very little turnover of nuclei. However, upon injury, skeletal muscle has the remarkable ability to initiate a rapid and extensive repair process preventing the loss of muscle mass. Skeletal muscle repair is a highly synchronized process involving the activation of various cellular responses. The initial phase of muscle repair is characterized by necrosis of the damaged tissue and activation of an inflammatory response. This phase is rapidly followed by activation of myogenic cells to proliferate, differentiate, and fuse leading to new myofiber formation and reconstitution of a functional contractile apparatus. Activation of adult muscle satellite cells is a key element in this process. Muscle satellite cell activation resembles embryonic myogenesis in several ways including the de novo induction of the myogenic regulatory factors. Signaling factors released during the regenerating process have been identified, but their functions remain to be fully defined. In addition, recent evidence supports the possible contribution of adult stem cells in the muscle regeneration process. In particular, bone marrow-derived and muscle-derived stem cells contribute to new myofiber formation and to the satellite cell pool after injury.",
"title": ""
},
{
"docid": "e21f4c327c0006196fde4cf53ed710a7",
"text": "To focus the efforts of security experts, the goals of this empirical study are to analyze which security vulnerabilities can be discovered by code review, identify characteristics of vulnerable code changes, and identify characteristics of developers likely to introduce vulnerabilities. Using a three-stage manual and automated process, we analyzed 267,046 code review requests from 10 open source projects and identified 413 Vulnerable Code Changes (VCC). Some key results include: (1) code review can identify common types of vulnerabilities; (2) while more experienced contributors authored the majority of the VCCs, the less experienced contributors' changes were 1.8 to 24 times more likely to be vulnerable; (3) the likelihood of a vulnerability increases with the number of lines changed, and (4) modified files are more likely to contain vulnerabilities than new files. Knowing which code changes are more prone to contain vulnerabilities may allow a security expert to concentrate on a smaller subset of submitted code changes. Moreover, we recommend that projects should: (a) create or adapt secure coding guidelines, (b) create a dedicated security review team, (c) ensure detailed comments during review to help knowledge dissemination, and (d) encourage developers to make small, incremental changes rather than large changes.",
"title": ""
},
{
"docid": "f77495366909b9713463bebf2b4ff2fc",
"text": "This paper introduces a fully deep learning approach to monocular SLAM, which can perform simultaneous localization using a neural network for learning visual odometry (L-VO) and dense 3D mapping. Dense 2D flow and a depth image are generated from monocular images by sub-networks, which are then used by a 3D flow associated layer in the L-VO network to generate dense 3D flow. Given this 3D flow, the dual-stream L-VO network can then predict the 6DOF relative pose and furthermore reconstruct the vehicle trajectory. In order to learn the correlation between motion directions, the Bivariate Gaussian modeling is employed in the loss function. The L-VO network achieves an overall performance of 2.68 % for average translational error and 0.0143°/m for average rotational error on the KITTI odometry benchmark. Moreover, the learned depth is leveraged to generate a dense 3D map. As a result, an entire visual SLAM system, that is, learning monocular odometry combined with dense 3D mapping, is achieved.",
"title": ""
},
{
"docid": "e7ba504d2d9a80c0a10bfa4830a1fc54",
"text": "BACKGROUND\nGlobal and regional prevalence estimates for blindness and vision impairment are important for the development of public health policies. We aimed to provide global estimates, trends, and projections of global blindness and vision impairment.\n\n\nMETHODS\nWe did a systematic review and meta-analysis of population-based datasets relevant to global vision impairment and blindness that were published between 1980 and 2015. We fitted hierarchical models to estimate the prevalence (by age, country, and sex), in 2015, of mild visual impairment (presenting visual acuity worse than 6/12 to 6/18 inclusive), moderate to severe visual impairment (presenting visual acuity worse than 6/18 to 3/60 inclusive), blindness (presenting visual acuity worse than 3/60), and functional presbyopia (defined as presenting near vision worse than N6 or N8 at 40 cm when best-corrected distance visual acuity was better than 6/12).\n\n\nFINDINGS\nGlobally, of the 7·33 billion people alive in 2015, an estimated 36·0 million (80% uncertainty interval [UI] 12·9-65·4) were blind (crude prevalence 0·48%; 80% UI 0·17-0·87; 56% female), 216·6 million (80% UI 98·5-359·1) people had moderate to severe visual impairment (2·95%, 80% UI 1·34-4·89; 55% female), and 188·5 million (80% UI 64·5-350·2) had mild visual impairment (2·57%, 80% UI 0·88-4·77; 54% female). Functional presbyopia affected an estimated 1094·7 million (80% UI 581·1-1686·5) people aged 35 years and older, with 666·7 million (80% UI 364·9-997·6) being aged 50 years or older. The estimated number of blind people increased by 17·6%, from 30·6 million (80% UI 9·9-57·3) in 1990 to 36·0 million (80% UI 12·9-65·4) in 2015. This change was attributable to three factors, namely an increase because of population growth (38·4%), population ageing after accounting for population growth (34·6%), and reduction in age-specific prevalence (-36·7%). The number of people with moderate and severe visual impairment also increased, from 159·9 million (80% UI 68·3-270·0) in 1990 to 216·6 million (80% UI 98·5-359·1) in 2015.\n\n\nINTERPRETATION\nThere is an ongoing reduction in the age-standardised prevalence of blindness and visual impairment, yet the growth and ageing of the world's population is causing a substantial increase in number of people affected. These observations, plus a very large contribution from uncorrected presbyopia, highlight the need to scale up vision impairment alleviation efforts at all levels.\n\n\nFUNDING\nBrien Holden Vision Institute.",
"title": ""
},
{
"docid": "18288c42186b7fec24a5884454e69989",
"text": "This article addresses the problem of multichannel audio source separation. We propose a framework where deep neural networks (DNNs) are used to model the source spectra and combined with the classical multichannel Gaussian model to exploit the spatial information. The parameters are estimated in an iterative expectation-maximization (EM) fashion and used to derive a multichannel Wiener filter. We present an extensive experimental study to show the impact of different design choices on the performance of the proposed technique. We consider different cost functions for the training of DNNs, namely the probabilistically motivated Itakura-Saito divergence, and also Kullback-Leibler, Cauchy, mean squared error, and phase-sensitive cost functions. We also study the number of EM iterations and the use of multiple DNNs, where each DNN aims to improve the spectra estimated by the preceding EM iteration. Finally, we present its application to a speech enhancement problem. The experimental results show the benefit of the proposed multichannel approach over a single-channel DNN-based approach and the conventional multichannel nonnegative matrix factorization-based iterative EM algorithm.",
"title": ""
},
{
"docid": "f438c1b133441cd46039922c8a7d5a7d",
"text": "This paper addresses the problem of formally verifying desirable properties of neural networks, i.e., obtaining provable guarantees that neural networks satisfy specifications relating their inputs and outputs (robustness to bounded norm adversarial perturbations, for example). Most previous work on this topic was limited in its applicability by the size of the network, network architecture and the complexity of properties to be verified. In contrast, our framework applies to a general class of activation functions and specifications on neural network inputs and outputs. We formulate verification as an optimization problem (seeking to find the largest violation of the specification) and solve a Lagrangian relaxation of the optimization problem to obtain an upper bound on the worst case violation of the specification being verified. Our approach is anytime i.e. it can be stopped at any time and a valid bound on the maximum violation can be obtained. We develop specialized verification algorithms with provable tightness guarantees under special assumptions and demonstrate the practical significance of our general verification approach on a variety of verification tasks.",
"title": ""
}
] | scidocsrr |
486882875a939d90011d3c367eed9e06 | An Exhaustive DPLL Algorithm for Model Counting | [
{
"docid": "3a0ce7b1e1b1e599954a467cd780ec4f",
"text": "Probabilistic logic programs are logic programs in which some of the facts are annotated with probabilities. This paper investigates how classical inference and learning tasks known from the graphical model community can be tackled for probabilistic logic programs. Several such tasks such as computing the marginals given evidence and learning from (partial) interpretations have not really been addressed for probabilistic logic programs before. The first contribution of this paper is a suite of efficient algorithms for various inference tasks. It is based on a conversion of the program and the queries and evidence to a weighted Boolean formula. This allows us to reduce the inference tasks to well-studied tasks such as weighted model counting, which can be solved using state-of-the-art methods known from the graphical model and knowledge compilation literature. The second contribution is an algorithm for parameter estimation in the learning from interpretations setting. The algorithm employs Expectation Maximization, and is built on top of the developed inference algorithms. The proposed approach is experimentally evaluated. The results show that the inference algorithms improve upon the state-of-the-art in probabilistic logic programming and that it is indeed possible to learn the parameters of a probabilistic logic program from interpretations.",
"title": ""
}
] | [
{
"docid": "3e5fd66795e92999aacf6e39cc668aed",
"text": "A couple of popular methods are presented with their benefits and drawbacks. Commonly used methods are using wrapped phase and impulse response. With real time FFT analysis, magnitude and time domain can be analyzed simultaneously. Filtered impulse response and Cepstrum analysis are helpful tools when the spectral content differs and make it hard to analyse the impulse response. To make a successful time alignment the measurements must be anechoic. Methods such as multiple time windowing and averaging in frequency domain are presented. Group-delay and wavelets analysis are used to evaluate the measurements.",
"title": ""
},
{
"docid": "c29f8aeed7f7ccfe3687d300da310c25",
"text": "Global investment in ICT to improve teaching and learning in schools have been initiated by many governments. Despite all these investments on ICT infrastructure, equipments and professional development to improve education in many countries, ICT adoption and integration in teaching and learning have been limited. This article reviews personal, institutional and technological factors that encourage teachers’ use of computer technology in teaching and learning processes. Also teacher-level, school-level and system-level factors that prevent teachers from ICT use are reviewed. These barriers include lack of teacher ICT skills; lack of teacher confidence; lack of pedagogical teacher training; l lack of suitable educational software; limited access to ICT; rigid structure of traditional education systems; restrictive curricula, etc. The article concluded that knowing the extent to which these barriers affect individuals and institutions may help in taking a decision on how to tackle them.",
"title": ""
},
{
"docid": "8d6a33661e281516433df5caa1f35c3a",
"text": "The main contribution of this work is the comparison of three user modeling strategies based on job titles, educational fields and skills in LinkedIn profiles, for personalized MOOC recommendations in a cold start situation. Results show that the skill-based user modeling strategy performs best, followed by the job- and edu-based strategies.",
"title": ""
},
{
"docid": "98c4f94eb35489d452cbd16c817e2bec",
"text": "Many defect prediction techniques are proposed to improve software reliability. Change classification predicts defects at the change level, where a change is the modifications to one file in a commit. In this paper, we conduct the first study of applying change classification in practice.\n We identify two issues in the prediction process, both of which contribute to the low prediction performance. First, the data are imbalanced---there are much fewer buggy changes than clean changes. Second, the commonly used cross-validation approach is inappropriate for evaluating the performance of change classification. To address these challenges, we apply and adapt online change classification, resampling, and updatable classification techniques to improve the classification performance.\n We perform the improved change classification techniques on one proprietary and six open source projects. Our results show that these techniques improve the precision of change classification by 12.2-89.5% or 6.4--34.8 percentage points (pp.) on the seven projects. In addition, we integrate change classification in the development process of the proprietary project. We have learned the following lessons: 1) new solutions are needed to convince developers to use and believe prediction results, and prediction results need to be actionable, 2) new and improved classification algorithms are needed to explain the prediction results, and insensible and unactionable explanations need to be filtered or refined, and 3) new techniques are needed to improve the relatively low precision.",
"title": ""
},
{
"docid": "a0306096725c0d4b6bdd648bfa396f13",
"text": "Graph coloring—also known as vertex coloring—considers the problem of assigning colors to the nodes of a graph such that adjacent nodes do not share the same color. The optimization version of the problem concerns the minimization of the number of colors used. In this paper we deal with the problem of finding valid graphs colorings in a distributed way, that is, by means of an algorithm that only uses local information for deciding the color of the nodes. The algorithm proposed in this paper is inspired by the calling behavior of Japanese tree frogs. Male frogs use their calls to attract females. Interestingly, groups of males that are located near each other desynchronize their calls. This is because female frogs are only able to correctly localize male frogs when their calls are not too close in time. The proposed algorithm makes use of this desynchronization behavior for the assignment of different colors to neighboring nodes. We experimentally show that our algorithm is very competitive with the current state of the art, using different sets of problem instances and comparing to one of the most competitive algorithms from the literature.",
"title": ""
},
{
"docid": "f296b374b635de4f4c6fc9c6f415bf3e",
"text": "People increasingly use the Internet for obtaining information regarding diseases, diagnoses and available treatments. Currently, many online health portals already provide non-personalized health information in the form of articles. However, it can be challenging to find information relevant to one's condition, interpret this in context, and understand the medical terms and relationships. Recommender Systems (RS) already help these systems perform precise information filtering. In this short paper, we look one step ahead and show the progress made towards RS helping users find personalized, complex medical interventions or support them with preventive healthcare measures. We identify key challenges that need to be addressed for RS to offer the kind of decision support needed in high-risk domains like healthcare.",
"title": ""
},
{
"docid": "1af3be5ed92448095c8a82738e003855",
"text": "OBJECTIVE\nThe aim of this review is to identify, critically evaluate, and summarize the laughter literature across a number of fields related to medicine and health care to assess to what extent laughter health-related benefits are currently supported by empirical evidence.\n\n\nDATA SOURCES AND STUDY SELECTION\nA comprehensive laughter literature search was performed. A thorough search of the gray literature was also undertaken. A list of inclusion and exclusion criteria was identified.\n\n\nDATA EXTRACTION\nIt was necessary to distinguish between humor and laughter to assess health-related outcomes elicited by laughter only.\n\n\nDATA SYNTHESIS\nThematic analysis was applied to summarize laughter health-related outcomes, relationships, and general robustness.\n\n\nCONCLUSIONS\nLaughter has shown physiological, psychological, social, spiritual, and quality-of-life benefits. Adverse effects are very limited, and laughter is practically lacking in contraindications. Therapeutic efficacy of laughter is mainly derived from spontaneous laughter (triggered by external stimuli or positive emotions) and self-induced laughter (triggered by oneself at will), both occurring with or without humor. The brain is not able to distinguish between these types; therefore, it is assumed that similar benefits may be achieved with one or the other. Although there is not enough data to demonstrate that laughter is an all-around healing agent, this review concludes that there exists sufficient evidence to suggest that laughter has some positive, quantifiable effects on certain aspects of health. In this era of evidence-based medicine, it would be appropriate for laughter to be used as a complementary/alternative medicine in the prevention and treatment of illnesses, although further well-designed research is warranted.",
"title": ""
},
{
"docid": "e35f6f4e7b6589e992ceeccb4d25c9f1",
"text": "One of the key success factors of lending organizations in general and banks in particular is the assessment of borrower credit worthiness in advance during the credit evaluation process. Credit scoring models have been applied by many researchers to improve the process of assessing credit worthiness by differentiating between prospective loans on the basis of the likelihood of repayment. Thus, credit scoring is a very typical Data Mining (DM) classification problem. Many traditional statistical and modern computational intelligence techniques have been presented in the literature to tackle this problem. The main objective of this paper is to describe an experiment of building suitable Credit Scoring Models (CSMs) for the Sudanese banks. Two commonly discussed data mining classification techniques are chosen in this paper namely: Decision Tree (DT) and Artificial Neural Networks (ANN). In addition Genetic Algorithms (GA) and Principal Component Analysis (PCA) are also applied as feature selection techniques. In addition to a Sudanese credit dataset, German credit dataset is also used to evaluate these techniques. The results reveal that ANN models outperform DT models in most cases. Using GA as a feature selection is more effective than PCA technique. The highest accuracy of German data set (80.67%) and Sudanese credit scoring models (69.74%) are achieved by a hybrid GA-ANN model. Although DT and its hybrid models (PCA-DT, GA-DT) are outperformed by ANN and its hybrid models (PCA-ANN, GA-ANN) in most cases, they produced interpretable loan granting decisions.",
"title": ""
},
{
"docid": "b01c62a4593254df75c1e390487982fa",
"text": "This paper addresses the question \"why and how is it that we say the same thing differently to different people, or even to the same person in different circumstances?\" We vary the content and form of our text in order to convey more information than is contained in the literal meanings of our words. This information expresses the speaker's interpersonal goals toward the hearer and, in general, his or her perception of the pragmatic aspects of the conversation. This paper discusses two insights that arise when one studies this question: the existence of a level of organization that mediates between communicative goals and generator decisions, and the interleaved planningrealization regime and associated monitoring required for generation. To illustrate these ideas, a computer program is described which contains plans and strategies to produce stylistically appropriate texts from a single representation under various settings that model pragmatic circumstances.",
"title": ""
},
{
"docid": "a1fffeaf5f28fe5795ba207ae926d32b",
"text": "This paper presents mathematical models, design and experimental validation, and calibration of a model-based diagnostic algorithm for an electric-power generation and storage automotive system, including a battery and an alternator with a rectifier and a voltage regulator. Mathematical models of these subsystems are derived, based on the physics of processes involved as characterized by time-varying nonlinear ordinary differential equations. The diagnostic problem focuses on detection and isolation of a specific set of alternator faults, including belt slipping, rectifier fault, and voltage regulator fault. The proposed diagnostic approach is based on the generation of residuals obtained using system models and comparing predicted and measured value of selected variables, including alternator output current, field voltage, and battery voltage. An equivalent input-output alternator model, which is used in the diagnostic scheme, is also formulated and parameterized. The test bench used for calibration of thresholds of the diagnostic algorithm and overall validation process are discussed. The effectiveness of the fault diagnosis algorithm and threshold selection is experimentally demonstrated.",
"title": ""
},
{
"docid": "9f9910c9b51c6da269dd2eb0279bb6a1",
"text": "The distribution between sediments and water plays a key role in the food-chain transfer of hydrophobic organic chemicals. Current models and assessment methods of sediment-water distribution predominantly rely on chemical equilibrium partitioning despite several observations reporting an \"enrichment\" of chemical concentrations in suspended sediments. In this study we propose and derive a fugacity based model of chemical magnification due to organic carbon decomposition throughout the process of sediment diagenesis. We compare the behavior of the model to observations of bottom sediment-water, suspended sediments-water, and plankton-water distribution coefficients of a range of hydrophobic organic chemicals in five Great Lakes. We observe that (i) sediment-water distribution coefficients of organic chemicals between bottom sediments and water and between suspended sediments and water are considerably greaterthan expected from chemical partitioning and that the degree sediment-water disequilibrium appears to follow a relationship with the depth of the lake; (ii) concentrations increase from plankton to suspended sediments to bottom sediments and follow an inverse ratherthan a proportional relationship with the organic carbon content and (iii) the degree of disequilibrium between bottom sediment and water, suspended sediments and water, and plankton and water increases when the octanol-water partition coefficient K(ow) drops. We demonstrate that these observations can be explained by a proposed organic carbon mineralization model. Our findings imply that sediment-water distribution is not solely a chemical partitioning process but is to a large degree controlled by lake specific organic carbon mineralization processes.",
"title": ""
},
{
"docid": "82edffdadaee9ac0a5b11eb686e109a1",
"text": "This paper highlights different security threats and vulnerabilities that is being challenged in smart-grid utilizing Distributed Network Protocol (DNP3) as a real time communication protocol. Experimentally, we will demonstrate two scenarios of attacks, unsolicited message attack and data set injection. The experiments were run on a computer virtual environment and then simulated in DETER testbed platform. The use of intrusion detection system will be necessary to identify attackers targeting different part of the smart grid infrastructure. Therefore, mitigation techniques will be used to ensure a healthy check of the network and we will propose the use of host-based intrusion detection agent at each Intelligent Electronic Device (IED) for the purpose of detecting the intrusion and mitigating it. Performing attacks, attack detection, prevention and counter measures will be our primary goal to achieve in this research paper.",
"title": ""
},
{
"docid": "a9e26514ffc78c1018e00c63296b9584",
"text": "When labeled examples are limited and difficult to obtain, transfer learning employs knowledge from a source domain to improve learning accuracy in the target domain. However, the assumption made by existing approaches, that the marginal and conditional probabilities are directly related between source and target domains, has limited applicability in either the original space or its linear transformations. To solve this problem, we propose an adaptive kernel approach that maps the marginal distribution of target-domain and source-domain data into a common kernel space, and utilize a sample selection strategy to draw conditional probabilities between the two domains closer. We formally show that under the kernel-mapping space, the difference in distributions between the two domains is bounded; and the prediction error of the proposed approach can also be bounded. Experimental results demonstrate that the proposed method outperforms both traditional inductive classifiers and the state-of-the-art boosting-based transfer algorithms on most domains, including text categorization and web page ratings. In particular, it can achieve around 10% higher accuracy than other approaches for the text categorization problem. The source code and datasets are available from the authors.",
"title": ""
},
{
"docid": "ed3ce0f0ae0a89fad2242bd2c61217ba",
"text": "We present MegaMIMO, a joint multi-user beamforming system that enables independent access points (APs) to beamform their signals, and communicate with their clients on the same channel as if they were one large MIMO transmitter. The key enabling technology behind MegaMIMO is a new low-overhead technique for synchronizing the phase of multiple transmitters in a distributed manner. The design allows a wireless LAN to scale its throughput by continually adding more APs on the same channel. MegaMIMO is implemented and tested with both software radio clients and off-the-shelf 802.11n cards, and evaluated in a dense congested deployment resembling a conference room. Results from a 10-AP software-radio testbed show a linear increase in network throughput with a median gain of 8.1 to 9.4×. Our results also demonstrate that MegaMIMO’s joint multi-user beamforming can provide throughput gains with unmodified 802.11n cards.",
"title": ""
},
{
"docid": "a2aa3c023f2cf2363bac0b97b3e1e65c",
"text": "Digital data collected for forensics analysis often contain valuable information about the suspects’ social networks. However, most collected records are in the form of unstructured textual data, such as e-mails, chat messages, and text documents. An investigator often has to manually extract the useful information from the text and then enter the important pieces into a structured database for further investigation by using various criminal network analysis tools. Obviously, this information extraction process is tedious and errorprone. Moreover, the quality of the analysis varies by the experience and expertise of the investigator. In this paper, we propose a systematic method to discover criminal networks from a collection of text documents obtained from a suspect’s machine, extract useful information for investigation, and then visualize the suspect’s criminal network. Furthermore, we present a hypothesis generation approach to identify potential indirect relationships among the members in the identified networks. We evaluated the effectiveness and performance of the method on a real-life cybercrimine case and some other datasets. The proposed method, together with the implemented software tool, has received positive feedback from the digital forensics team of a law enforcement unit in Canada. a 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "81459e136452983861ac154f2013dc70",
"text": "Semantic segmentation has been widely investigated for its important role in computer vision. However, some challenges still exist. The first challenge is how to perceive semantic regions with various attributes, which can result in unbalanced distribution of training samples. Another challenge is accurate semantic boundary determination. In this paper, a contour-aware network for semantic segmentation via adaptive depth is proposed which particularly exploits the power of adaptive-depth neural network and contouraware neural network on pixel-level semantic segmentation. Specifically, an adaptive-depth model, which can adaptively determine the feedback and forward procedure of neural network, is constructed. Moreover, a contour-aware neural network is respectively built to enhance the coherence and the localization accuracy of semantic regions. By formulating the contour information and coarse semantic segmentation results in a unified manner, global inference is proposed to obtain the final segmentation results. Three contributions are claimed: (1) semantic segmentation via adaptive depth neural network; (2) contouraware neural network for semantic segmentation; and (3) global inference for final decision. Experiments on three popular datasets are conducted and experimental results have verified the superiority of the proposed method compared with the state-of-the-art methods. © 2018 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "528812aa635d6b9f0b65cc784fb256e1",
"text": "Pointing tasks are commonly studied in HCI research, for example to evaluate and compare different interaction techniques or devices. A recent line of work has modelled user-specific touch behaviour with machine learning methods to reveal spatial targeting error patterns across the screen. These models can also be applied to improve accuracy of touchscreens and keyboards, and to recognise users and hand postures. However, no implementation of these techniques has been made publicly available yet, hindering broader use in research and practical deployments. Therefore, this paper presents a toolkit which implements such touch models for data analysis (Python), mobile applications (Java/Android), and the web (JavaScript). We demonstrate several applications, including hand posture recognition, on touch targeting data collected in a study with 24 participants. We consider different target types and hand postures, changing behaviour over time, and the influence of hand sizes.",
"title": ""
},
{
"docid": "e63a5af56d8b20c9e3eac658940413ce",
"text": "OBJECTIVE\nThis study examined the effects of various backpack loads on elementary schoolchildren's posture and postural compensations as demonstrated by a change in forward head position.\n\n\nSUBJECTS\nA convenience sample of 11 schoolchildren, aged 8-11 years participated.\n\n\nMETHODS\nSagittal digital photographs were taken of each subject standing without a backpack, and then with the loaded backpack before and after walking 6 minutes (6MWT) at free walking speed. This was repeated over three consecutive weeks using backpacks containing randomly assigned weights of 10%, 15%, or 20% body weight of each respective subject. The craniovertebral angle (CVA) was measured using digitizing software, recorded and analyzed.\n\n\nRESULTS\nSubjects demonstrated immediate and statistically significant changes in CVA, indicating increased forward head positions upon donning the backpacks containing 15% and 20% body weight. Following the 6MWT, the CVA demonstrated further statistically significant changes for all backpack loads indicating increased forward head postures. For the 15 & 20%BW conditions, more than 50% of the subjects reported discomfort after walking, with the neck as the primary location of reported pain.\n\n\nCONCLUSIONS\nBackpack loads carried by schoolchildren should be limited to 10% body weight due to increased forward head positions and subjective complaints at 15% and 20% body weight loads.",
"title": ""
},
{
"docid": "bb6314a8e6ec728d09aa37bfffe5c835",
"text": "In recent years, Convolutional Neural Network (CNN) has been extensively applied in the field of computer vision, which has also made remarkable achievements. However, the CNN models are computation-intensive and memory-consuming, which hinders the deployment of CNN-based methods on resource-limited embedded platforms. Therefore, this paper gives insight into low numerical precision Convolutional Neural Networks. At first, an image classification CNN model is quantized into 8-bit dynamic fixed-point with no more than 1% accuracy drop and then the method of conducting inference on low-cost ARM processor has been proposed. Experimental results verified the effectiveness of this method. Besides, our proof-of-concept prototype implementation can obtain a frame rate of 4.54fps when running on single Cortex-A72 core under 1.8GHz working frequency and 6.48 watts of gross power consumption.",
"title": ""
}
] | scidocsrr |
9e0a74c262f81b9db1581f0c44a1ba3e | Motivation and cognitive control: from behavior to neural mechanism. | [
{
"docid": "bb160a96afd23e45501ccc7e08ca3a54",
"text": "Individuals may be motivated to limit their use of self-control resources, especially when they have depleted some of that resource. Expecting to need self-control strength in the future should heighten the motivation to conserve strength. In 4 experiments, it was found that depleted participants who anticipated exerting self-control in the future performed more poorly in an intervening test of self-control than participants who were not depleted, and more poorly than those who did not expect to exert self-control in the future. Conversely, those who conserved strength performed better on tasks that they conserved the strength for as compared with those who did not conserve. The underlying economic or conservation of resource model sheds some light on the operation of self-control strength.",
"title": ""
},
{
"docid": "ca9a7a1f7be7d494f6c0e3e4bb408a95",
"text": "An enduring and richly elaborated dichotomy in cognitive neuroscience is that of reflective versus reflexive decision making and choice. Other literatures refer to the two ends of what is likely to be a spectrum with terms such as goal-directed versus habitual, model-based versus model-free or prospective versus retrospective. One of the most rigorous traditions of experimental work in the field started with studies in rodents and graduated via human versions and enrichments of those experiments to a current state in which new paradigms are probing and challenging the very heart of the distinction. We review four generations of work in this tradition and provide pointers to the forefront of the field's fifth generation.",
"title": ""
},
{
"docid": "58eebe0e55f038fea268b6a7a6960939",
"text": "The classic answer to what makes a decision good concerns outcomes. A good decision has high outcome benefits (it is worthwhile) and low outcome costs (it is worth it). I propose that, independent of outcomes or value from worth, people experience a regulatory fit when they use goal pursuit means that fit their regulatory orientation, and this regulatory fit increases the value of what they are doing. The following postulates of this value from fit proposal are examined: (a) People will be more inclined toward goal means that have higher regulatory fit, (b) people's motivation during goal pursuit will be stronger when regulatory fit is higher, (c) people's (prospective) feelings about a choice they might make will be more positive for a desirable choice and more negative for an undesirable choice when regulatory fit is higher, (d) people's (retrospective) evaluations of past decisions or goal pursuits will be more positive when regulatory fit was higher, and (e) people will assign higher value to an object that was chosen with higher regulatory fit. Studies testing each of these postulates support the value-from-fit proposal. How value from fit can enhance or diminish the value of goal pursuits and the quality of life itself is discussed.",
"title": ""
}
] | [
{
"docid": "44672e9dc60639488800ad4ae952f272",
"text": "The GPS technology and new forms of urban geography have changed the paradigm for mobile services. As such, the abundant availability of GPS traces has enabled new ways of doing taxi business. Indeed, recent efforts have been made on developing mobile recommender systems for taxi drivers using Taxi GPS traces. These systems can recommend a sequence of pick-up points for the purpose of maximizing the probability of identifying a customer with the shortest driving distance. However, in the real world, the income of taxi drivers is strongly correlated with the effective driving hours. In other words, it is more critical for taxi drivers to know the actual driving routes to minimize the driving time before finding a customer. To this end, in this paper, we propose to develop a cost-effective recommender system for taxi drivers. The design goal is to maximize their profits when following the recommended routes for finding passengers. Specifically, we first design a net profit objective function for evaluating the potential profits of the driving routes. Then, we develop a graph representation of road networks by mining the historical taxi GPS traces and provide a Brute-Force strategy to generate optimal driving route for recommendation. However, a critical challenge along this line is the high computational cost of the graph based approach. Therefore, we develop a novel recursion strategy based on the special form of the net profit function for searching optimal candidate routes efficiently. Particularly, instead of recommending a sequence of pick-up points and letting the driver decide how to get to those points, our recommender system is capable of providing an entire driving route, and the drivers are able to find a customer for the largest potential profit by following the recommendations. This makes our recommender system more practical and profitable than other existing recommender systems. Finally, we carry out extensive experiments on a real-world data set collected from the San Francisco Bay area and the experimental results clearly validate the effectiveness of the proposed recommender system.",
"title": ""
},
{
"docid": "e9f9e36d9b5194f1ebad9eda51d193ac",
"text": "In unattended and hostile environments, node compromise can become a disastrous threat to wireless sensor networks and introduce uncertainty in the aggregation results. A compromised node often tends to completely reveal its secrets to the adversary which in turn renders purely cryptography-based approaches vulnerable. How to secure the information aggregation process against compromised-node attacks and quantify the uncertainty existing in the aggregation results has become an important research issue. In this paper, we address this problem by proposing a trust based framework, which is rooted in sound statistics and some other distinct and yet closely coupled techniques. The trustworthiness (reputation) of each individual sensor node is evaluated by using an information theoretic concept, Kullback-Leibler (KL) distance, to identify the compromised nodes through an unsupervised learning algorithm. Upon aggregating, an opinion, a metric of the degree of belief, is generated to represent the uncertainty in the aggregation result. As the result is being disseminated and assembled through the routes to the sink, this opinion will be propagated and regulated by Josang's belief model. Following this model, the uncertainty within the data and aggregation results can be effectively quantified throughout the network. Simulation results demonstrate that our trust based framework provides a powerful mechanism for detecting compromised nodes and reasoning about the uncertainty in the network. It further can purge false data to accomplish robust aggregation in the presence of multiple compromised nodes",
"title": ""
},
{
"docid": "b2ff879a41647b978118aacbcf9a2108",
"text": "In this paper we present two new variable step size (VSS) methods for adaptive filters. These VSS methods are so effective, they eliminate the need for a separate double-talk detection algorithm in echo cancellation applications. The key feature of both approaches is the introduction of a new near-end signal energy estimator (NESEE) that provides accurate and computationally efficient estimates even during double-talk and echo path change events. The first VSS algorithm applies the NESEE to the recently proposed Nonparametric VSS NLMS (NPVSS-NLMS) algorithm. The resulting algorithm has excellent convergence characteristics with an intrinsic immunity to double-talk. The second approach is somewhat more ad hoc. It is composed of a combination of an efficient echo path change detector and the NESEE. This VSS method also has excellent convergence, double talk immunity, and computational efficiency. Simulations demonstrate the efficacy of both proposed algorithms.",
"title": ""
},
{
"docid": "ef089e236b937e8410c70c251dfbe923",
"text": "the fast development of Graphics Processing Unit (GPU) leads to the popularity of General-purpose usage of GPU (GPGPU). So far, most modern computers are CPU-GPGPU heterogeneous architecture and CPU is used as host processor. In this work, we promote a multithread file chunking prototype system, which is able to exploit the hardware organization of the CPU-GPGPU heterogeneous computer and determine which device should be used to chunk the file to accelerate the content based file chunking operation of deduplication. We built rules for the system to choose which device should be used to chunk file and also found the optimal choice of other related parameters of both CPU and GPGPU subsystem like segment size and block dimension. This prototype was implemented and tested. The result of using GTX460(336 cores) and Intel i5 (four cores) shows that this system can increase the chunking speed 63% compared to using GPGPU alone and 80% compared to using CPU alone.",
"title": ""
},
{
"docid": "54f5af4ced366eeebccc973a081497e2",
"text": "Visual quality of color images is an important aspect in various applications of digital image processing and multimedia. A large number of visual quality metrics (indices) has been proposed recently. In order to assess their reliability, several databases of color images with various sets of distortions have been exploited. Here we present a new database called TID2013 that contains a larger number of images. Compared to its predecessor TID2008, seven new types and one more level of distortions are included. The need for considering these new types of distortions is briefly described. Besides, preliminary results of experiments with a large number of volunteers for determining the mean opinion score (MOS) are presented. Spearman and Kendall rank order correlation factors between MOS and a set of popular metrics are calculated and presented. Their analysis shows that adequateness of the existing metrics is worth improving. Special attention is to be paid to accounting for color information and observers focus of attention to locally active areas in images.",
"title": ""
},
{
"docid": "299f17dca15e2eab1692e82869fc2f6d",
"text": "the \"dark figure\" of crime—that is, about occurrences that by some criteria are called crime yet that are not registered in the statistics of whatever agency was the source of the data being used. Contending arguments arose about the dark figure between the \"realists\" who emphasized the virtues of completeness with which data represent the \"real crime\" that takes place and the \"institutionalists\" who emphasize that crime can have valid meaning only in terms of organized, legitimate social responses to it. This paper examines these arguments in the context of police and survey statistics as measures of crime in a population. It concludes that in exploring the dark figure of crime, the primary question is not how much of it",
"title": ""
},
{
"docid": "6ac602c39220d42d063be7b79b1faa97",
"text": "Inferring human gaze from low-resolution eye images is still a challenging task despite its practical importance in many application scenarios. This paper presents a learning-by-synthesis approach to accurate image-based gaze estimation that is person- and head pose-independent. Unlike existing appearance-based methods that assume person-specific training data, we use a large amount of cross-subject training data to train a 3D gaze estimator. We collect the largest and fully calibrated multi-view gaze dataset and perform a 3D reconstruction in order to generate dense training data of eye images. By using the synthesized dataset to learn a random regression forest, we show that our method outperforms existing methods that use low-resolution eye images.",
"title": ""
},
{
"docid": "c18aad29529e40220bc519472be10988",
"text": "Informative and discriminative feature descriptors play a fundamental role in deformable shape analysis. For example, they have been successfully employed in correspondence, registration, and retrieval tasks. In recent years, significant attention has been devoted to descriptors obtained from the spectral decomposition of the Laplace-Beltrami operator associated with the shape. Notable examples in this family are the heat kernel signature (HKS) and the recently introduced wave kernel signature (WKS). The Laplacian-based descriptors achieve state-of-the-art performance in numerous shape analysis tasks; they are computationally efficient, isometry-invariant by construction, and can gracefully cope with a variety of transformations. In this paper, we formulate a generic family of parametric spectral descriptors. We argue that to be optimized for a specific task, the descriptor should take into account the statistics of the corpus of shapes to which it is applied (the \"signal\") and those of the class of transformations to which it is made insensitive (the \"noise\"). While such statistics are hard to model axiomatically, they can be learned from examples. Following the spirit of the Wiener filter in signal processing, we show a learning scheme for the construction of optimized spectral descriptors and relate it to Mahalanobis metric learning. The superiority of the proposed approach in generating correspondences is demonstrated on synthetic and scanned human figures. We also show that the learned descriptors are robust enough to be learned on synthetic data and transferred successfully to scanned shapes.",
"title": ""
},
{
"docid": "0e2eae0e320b68806aa4180fab00fd34",
"text": "This study presents a facile and green route for the synthesis of (La0.95Eu0.05)2O2S red phosphors of controllable morphologies, with the sulfate-type layered hydroxides of Ln2(OH)4SO4·2H2O (Ln = La and Eu) as a new type of precursor. The technique takes advantage of the fact that the precursor has had the exact Ln:S molar ratio of the targeted phosphor, thus saving the hazardous sulfurization reagents indispensable to traditional synthesis. Controlled hydrothermal processing at 120 °C yielded phase-pure Ln2(OH)4SO4·2H2O crystallites in the form of either nanoplates or microprisms, which can both be converted into Ln2O2S phosphor via a Ln2O2SO4 intermediate upon annealing in flowing H2 at a minimum temperature of ∼ 700 °C. The nanoplates collapse into relatively rounded Ln2O2S particles while the microprisms retain well their initial morphologies at 1 200 °C, thus yielding two types of red phosphors. Photoluminescence excitation (PLE) studies found two distinct charge transfer (CT) excitation bands of O2- → Eu3+ at ∼ 270 nm and S2- → Eu3+ at ∼ 340 nm for the Ln2O2S phosphors, with the latter being stronger and both significantly stronger than the intrinsic intra-f transitions of Eu3+. The two types of phosphors share high similarities in the positions of PLE/PL (photoluminescence) bands and both show the strongest red emission at 627 nm (5D0 → 7F2 transition of Eu3+) under S2- → Eu3+ CT excitation at 340 nm. The PLE/PL intensities show clear dependence on particle morphology and calcination temperature, which were investigated in detail. Fluorescence decay analysis reveals that the 627 nm red emission has a lifetime of ∼ 0.5 ms for both types of the phosphors.",
"title": ""
},
{
"docid": "5d2230e6d7f560576231f52209703595",
"text": "This paper presents a twofold tunable planar hairpin filter to simultaneously control center frequency and bandwidth. Tunability is achieved by using functional thick film layers of the ferroelectric material Barium-Strontium-Titanate (BST). The center frequency of the filter is adjusted by varactors which are loading the hairpin resonators. Coupling varactors between the hairpin resonators enable the control of the bandwidth. The proposed filter structure is designed for a center frequency range from 650 MHz to 920 MHz and a bandwidth between 25 MHz and 85 MHz. This covers the specifications of the lower GSM bands. The functionality of the design is experimentally validated and confirmed by simulation results.",
"title": ""
},
{
"docid": "1cb2ffad7243e3e0b5c16fae12c7ee49",
"text": "OBJECTIVE\nTo determine if inadequate approaches to randomized controlled trial design and execution are associated with evidence of bias in estimating treatment effects.\n\n\nDESIGN\nAn observational study in which we assessed the methodological quality of 250 controlled trials from 33 meta-analyses and then analyzed, using multiple logistic regression models, the associations between those assessments and estimated treatment effects.\n\n\nDATA SOURCES\nMeta-analyses from the Cochrane Pregnancy and Childbirth Database.\n\n\nMAIN OUTCOME MEASURES\nThe associations between estimates of treatment effects and inadequate allocation concealment, exclusions after randomization, and lack of double-blinding.\n\n\nRESULTS\nCompared with trials in which authors reported adequately concealed treatment allocation, trials in which concealment was either inadequate or unclear (did not report or incompletely reported a concealment approach) yielded larger estimates of treatment effects (P < .001). Odds ratios were exaggerated by 41% for inadequately concealed trials and by 30% for unclearly concealed trials (adjusted for other aspects of quality). Trials in which participants had been excluded after randomization did not yield larger estimates of effects, but that lack of association may be due to incomplete reporting. Trials that were not double-blind also yielded larger estimates of effects (P = .01), with odds ratios being exaggerated by 17%.\n\n\nCONCLUSIONS\nThis study provides empirical evidence that inadequate methodological approaches in controlled trials, particularly those representing poor allocation concealment, are associated with bias. Readers of trial reports should be wary of these pitfalls, and investigators must improve their design, execution, and reporting of trials.",
"title": ""
},
{
"docid": "e8bbbc1864090b0246735868faa0e11f",
"text": "A pre-trained deep convolutional neural network (DCNN) is the feed-forward computation perspective which is widely used for the embedded vision systems. In the DCNN, the 2D convolutional operation occupies more than 90% of the computation time. Since the 2D convolutional operation performs massive multiply-accumulation (MAC) operations, conventional realizations could not implement a fully parallel DCNN. The RNS decomposes an integer into a tuple of L integers by residues of moduli set. Since no pair of modulus have a common factor with any other, the conventional RNS decomposes the MAC unit into circuits with different sizes. It means that the RNS could not utilize resources of an FPGA with uniform size. In this paper, we propose the nested RNS (NRNS), which recursively decompose the RNS. It can decompose the MAC unit into circuits with small sizes. In the DCNN using the NRNS, a 48-bit MAC unit is decomposed into 4-bit ones realized by look-up tables of the FPGA. In the system, we also use binary to NRNS converters and NRNS to binary converters. The binary to NRNS converter is realized by on-chip BRAMs, while the NRNS to binary one is realized by DSP blocks and BRAMs. Thus, a balanced usage of FPGA resources leads to a high clock frequency with less hardware. The ImageNet DCNN using the NRNS is implemented on a Xilinx Virtex VC707 evaluation board. As for the performance per area GOPS (Giga operations per second) per a slice, the proposed one is 5.86 times better than the existing best realization.",
"title": ""
},
{
"docid": "982406008800456eaa147e6155963683",
"text": "[1] This study investigates how drought‐induced change in semiarid grassland community affected runoff and sediment yield in a small watershed in southeast Arizona, USA. Three distinct periods in ecosystem composition and associated runoff and sediment yield were identified according to dominant species: native bunchgrass (1974–2005), forbs (2006), and the invasive grass, Eragrostis lehmanniana (2007–2009). Precipitation, runoff, and sediment yield for each period were analyzed and compared at watershed and plot scales. Average watershed annual sediment yield was 0.16 t ha yr. Despite similarities in precipitation characteristics, decline in plant canopy cover during the transition period of 2006 caused watershed sediment yield to increase 23‐fold to 1.64 t ha yr comparing with preceding period under native bunchgrasses (0.06 t ha yr) or succeeding period under E. lehmanniana (0.06 t ha yr). In contrast, measurements on small runoff plots on the hillslopes of the same watershed showed a significant increase in sediment discharge that continued after E. lehmanniana replaced native grasses. Together, these findings suggest alteration in plant community increased sediment yield but that hydrological responses to this event differ at watershed and plot scales, highlighting the geomorphological controls at the watershed scale that determine sediment transport efficiency and storage. Resolving these scalar issues will help identify critical landform features needed to preserve watershed integrity under changing climate conditions.",
"title": ""
},
{
"docid": "2271085513d9239225c9bfb2f6b155b1",
"text": "Information Security has become an important issue in data communication. Encryption algorithms have come up as a solution and play an important role in information security system. On other side, those algorithms consume a significant amount of computing resources such as CPU time, memory and battery power. Therefore it is essential to measure the performance of encryption algorithms. In this work, three encryption algorithms namely DES, AES and Blowfish are analyzed by considering certain performance metrics such as execution time, memory required for implementation and throughput. Based on the experiments, it has been concluded that the Blowfish is the best performing algorithm among the algorithms chosen for implementation.",
"title": ""
},
{
"docid": "309a8f69647fae26a39305cdf0115ad0",
"text": "Three-dimensional synthetic aperture radar (SAR) image formation provides the scene reflectivity estimation along azimuth, range, and elevation coordinates. It is based on multipass SAR data obtained usually by nonuniformly spaced acquisition orbits. A common 3-D SAR focusing approach is Fourier-based SAR tomography, but this technique brings about image quality problems because of the low number of acquisitions and their not regular spacing. Moreover, attained resolution in elevation is limited by the overall acquisitions baseline extent. In this paper, a novel 3-D SAR data imaging based on Compressive Sampling theory is presented. It is shown that since the image to be focused has usually a sparse representation along the elevation direction (i.e., only few scatterers with different elevation are present in the same range-azimuth resolution cell), it suffices to have a small number of measurements to construct the 3-D image. Furthermore, the method allows super-resolution imaging, overcoming the limitation imposed by the overall baseline span. Tomographic imaging is performed by solving an optimization problem which enforces sparsity through ℓ1-norm minimization. Numerical results on simulated and real data validate the method and have been compared with the truncated singular value decomposition technique.",
"title": ""
},
{
"docid": "da8a41e844c519842de524d791527ace",
"text": "Advances in NLP techniques have led to a great demand for tagging and analysis of the sentiments from unstructured natural language data over the last few years. A typical approach to sentiment analysis is to start with a lexicon of positive and negative words and phrases. In these lexicons, entries are tagged with their prior out of context polarity. Unfortunately all efforts found in literature deal mostly with English texts. In this squib, we propose a computational technique of generating an equivalent SentiWordNet (Bengali) from publicly available English Sentiment lexicons and English-Bengali bilingual dictionary. The target language for the present task is Bengali, though the methodology could be replicated for any new language. There are two main lexical resources widely used in English for Sentiment analysis: SentiWordNet (Esuli et. al., 2006) and Subjectivity Word List (Wilson et. al., 2005). SentiWordNet is an automatically constructed lexical resource for English which assigns a positivity score and a negativity score to each WordNet synset. The subjectivity lexicon was compiled from manually developed resources augmented with entries learned from corpora. The entries in the Subjectivity lexicon have been labelled for part of speech (POS) as well as either strong or weak subjective tag depending on reliability of the subjective nature of the entry.",
"title": ""
},
{
"docid": "86889526d71a853cb2055040c4f987d4",
"text": "Traceability underlies many important software and systems engineering activities, such as change impact analysis and regression testing. Despite important research advances, as in the automated creation and maintenance of trace links, traceability implementation and use is still not pervasive in industry. A community of traceability researchers and practitioners has been collaborating to understand the hurdles to making traceability ubiquitous. Over a series of years, workshops have been held to elicit and enhance research challenges and related tasks to address these shortcomings. A continuing discussion of the community has resulted in the research roadmap of this paper. We present a brief view of the state of the art in traceability, the grand challenge for traceability and future directions for the field.",
"title": ""
},
{
"docid": "18c507d6624f153cb1b7beaf503b0d54",
"text": "The critical period hypothesis for language acquisition (CP) proposes that the outcome of language acquisition is not uniform over the lifespan but rather is best during early childhood. The CP hypothesis was originally proposed for spoken language but recent research has shown that it applies equally to sign language. This paper summarizes a series of experiments designed to investigate whether and how the CP affects the outcome of sign language acquisition. The results show that the CP has robust effects on the development of sign language comprehension. Effects are found at all levels of linguistic structure (phonology, morphology and syntax, the lexicon and semantics) and are greater for first as compared to second language acquisition. In addition, CP effects have been found on all measures of language comprehension examined to date, namely, working memory, narrative comprehension, sentence memory and interpretation, and on-line grammatical processing. The nature of these effects with respect to a model of language comprehension is discussed.",
"title": ""
},
{
"docid": "25d60ca2cbbb49cf025de9c97923ec3e",
"text": "We studied the thermophoretic motion of wrinkles formed in substrate-supported graphene sheets by nonequilibrium molecular dynamics simulations. We found that a single wrinkle moves along applied temperature gradient with a constant acceleration that is linearly proportional to temperature deviation between the heating and cooling sides of the graphene sheet. Like a solitary wave, the atoms of the single wrinkle drift upwards and downwards, which prompts the wrinkle to move forwards. The driving force for such thermophoretic movement can be mainly attributed to a lower free energy of the wrinkle back root when it is transformed from the front root. We establish a motion equation to describe the soliton-like thermophoresis of a single graphene wrinkle based on the Korteweg-de Vries equation. Similar motions are also observed for wrinkles formed in a Cu-supported graphene sheet. These findings provide an energy conversion mechanism by using graphene wrinkle thermophoresis.",
"title": ""
},
{
"docid": "e90b3e8e42e213aab85f10ab325aec06",
"text": "In the strategic human resource management (SHRM) field three approaches have dominated, namely, the universal or best-practice, best-fit or contingency and resourcebased view (RBV). This study investigates evidence for the simultaneous or mixed adoption of these approaches by eight case study firms in the international hotel industry. Findings suggest there is considerable evidence of the combined use of the first two approaches but that the SHRM RBV approach was difficult to achieve by all companies. Overall, gaining differentiation through SHRM practices was found to be challenging due to specific industry forces. The study identifies that where companies derive some competitive advantage from their human resources and HRM practices they have closely aligned their managers’ expertise with their corporate market entry mode expertise and developed some distinctive, complex and integrated HRM interventions, which have a mutually reinforcing effect.",
"title": ""
}
] | scidocsrr |
ad039ffae4d42ba98915c60f27c3ed0c | Adaptive Stochastic Gradient Descent Optimisation for Image Registration | [
{
"docid": "607797e37b056dab866d175767343353",
"text": "We propose a new method for the intermodal registration of images using a criterion known as mutual information. Our main contribution is an optimizer that we specifically designed for this criterion. We show that this new optimizer is well adapted to a multiresolution approach because it typically converges in fewer criterion evaluations than other optimizers. We have built a multiresolution image pyramid, along with an interpolation process, an optimizer, and the criterion itself, around the unifying concept of spline-processing. This ensures coherence in the way we model data and yields good performance. We have tested our approach in a variety of experimental conditions and report excellent results. We claim an accuracy of about a hundredth of a pixel under ideal conditions. We are also robust since the accuracy is still about a tenth of a pixel under very noisy conditions. In addition, a blind evaluation of our results compares very favorably to the work of several other researchers.",
"title": ""
},
{
"docid": "990c8e69811a8ebafd6e8c797b36349d",
"text": "Segmentation of pulmonary X-ray computed tomography (CT) images is a precursor to most pulmonary image analysis applications. This paper presents a fully automatic method for identifying the lungs in three-dimensional (3-D) pulmonary X-ray CT images. The method has three main steps. First, the lung region is extracted from the CT images by gray-level thresholding. Then, the left and right lungs are separated by identifying the anterior and posterior junctions by dynamic programming. Finally, a sequence of morphological operations is used to smooth the irregular boundary along the mediastinum in order to obtain results consistent with these obtained by manual analysis, in which only the most central pulmonary arteries are excluded from the lung region. The method has been tested by processing 3-D CT data sets from eight normal subjects, each imaged three times at biweekly intervals with lungs at 90% vital capacity. The authors present results by comparing their automatic method to manually traced borders from two image analysts. Averaged over all volumes, the root mean square difference between the computer and human analysis is 0.8 pixels (0.54 mm). The mean intrasubject change in tissue content over the three scans was 2.75%/spl plusmn/2.29% (mean/spl plusmn/standard deviation).",
"title": ""
}
] | [
{
"docid": "f333bc03686cf85aee0a65d4a81e8b34",
"text": "A large portion of data mining and analytic services use modern machine learning techniques, such as deep learning. The state-of-the-art results by deep learning come at the price of an intensive use of computing resources. The leading frameworks (e.g., TensorFlow) are executed on GPUs or on high-end servers in datacenters. On the other end, there is a proliferation of personal devices with possibly free CPU cycles; this can enable services to run in users' homes, embedding machine learning operations. In this paper, we ask the following question: Is distributed deep learning computation on WAN connected devices feasible, in spite of the traffic caused by learning tasks? We show that such a setup rises some important challenges, most notably the ingress traffic that the servers hosting the up-to-date model have to sustain. In order to reduce this stress, we propose AdaComp, a novel algorithm for compressing worker updates to the model on the server. Applicable to stochastic gradient descent based approaches, it combines efficient gradient selection and learning rate modulation. We then experiment and measure the impact of compression, device heterogeneity and reliability on the accuracy of learned models, with an emulator platform that embeds TensorFlow into Linux containers. We report a reduction of the total amount of data sent by workers to the server by two order of magnitude (e.g., 191-fold reduction for a convolutional network on the MNIST dataset), when compared to a standard asynchronous stochastic gradient descent, while preserving model accuracy.",
"title": ""
},
{
"docid": "26282a6d69b021755e5b02f8798bdcb9",
"text": "Recently, extensive research efforts have been dedicated to view-based methods for 3-D object retrieval due to the highly discriminative property of multiviews for 3-D object representation. However, most of state-of-the-art approaches highly depend on their own camera array settings for capturing views of 3-D objects. In order to move toward a general framework for 3-D object retrieval without the limitation of camera array restriction, a camera constraint-free view-based (CCFV) 3-D object retrieval algorithm is proposed in this paper. In this framework, each object is represented by a free set of views, which means that these views can be captured from any direction without camera constraint. For each query object, we first cluster all query views to generate the view clusters, which are then used to build the query models. For a more accurate 3-D object comparison, a positive matching model and a negative matching model are individually trained using positive and negative matched samples, respectively. The CCFV model is generated on the basis of the query Gaussian models by combining the positive matching model and the negative matching model. The CCFV removes the constraint of static camera array settings for view capturing and can be applied to any view-based 3-D object database. We conduct experiments on the National Taiwan University 3-D model database and the ETH 3-D object database. Experimental results show that the proposed scheme can achieve better performance than state-of-the-art methods.",
"title": ""
},
{
"docid": "6db6dccccbdcf77068ae4270a1d6b408",
"text": "In many engineering disciplines, abstract models are used to describe systems on a high level of abstraction. On this abstract level, it is often easier to gain insights about that system that is being described. When models of a system change – for example because the system itself has changed – any analyses based on these models have to be invalidated and thus have to be reevaluated again in order for the results to stay meaningful. In many cases, the time to get updated analysis results is critical. However, as most often only small parts of the model change, large parts of this reevaluation could be saved by using previous results but such an incremental execution is barely done in practice as it is non-trivial and error-prone. The approach of implicit incrementalization o ers a solution by deriving an incremental evaluation strategy implicitly from a batch speci cation of the analysis. This works by deducing a dynamic dependency graph that allows to only reevaluate those parts of an analysis that are a ected by a given model change. Thus advantages of an incremental execution can be gained without changes to the code that would potentially degrade its understandability. However, current approaches to implicit incremental computation only support narrow classes of analysis, are restricted to an incremental derivation at instruction level or require an explicit state management. In addition, changes are only propagated sequentially, meanwhile modern multi-core architectures would allow parallel change propagation. Even with such improvements, it is unclear whether incremental execution in fact brings advantages as changes may easily cause butter y e ects, making a reuse of previous analysis results pointless (i.e. ine cient). This thesis deals with the problems of implicit incremental model analyses by proposing multiple approaches that mostly can be combined. Further, the",
"title": ""
},
{
"docid": "8250046c31b18d9c4996e8f285949e1f",
"text": "This article models the detection and prediction of managerial fraud in the financial statements of Tunisian banks. The methodology used consist of examining a battery of financial ratios used by the Federal Deposit Insurance Corporation (FDIC) as indicators of the financial situation of a bank. We test the predictive power of these ratios using logistic regression. The results show that we can detect managerial fraud in the financial statements of Tunisian banks using performance ratios three years before its occurrence with a classification rate of 71.1%. JEL: M41, M42, C23, C25, G21",
"title": ""
},
{
"docid": "112f10eb825a484850561afa7c23e71f",
"text": "We describe an image based rendering approach that generalizes many current image based rendering algorithms, including light field rendering and view-dependent texture mapping. In particular, it allows for lumigraph-style rendering from a set of input cameras in arbitrary configurations (i.e., not restricted to a plane or to any specific manifold). In the case of regular and planar input camera positions, our algorithm reduces to a typical lumigraph approach. When presented with fewer cameras and good approximate geometry, our algorithm behaves like view-dependent texture mapping. The algorithm achieves this flexibility because it is designed to meet a set of specific goals that we describe. We demonstrate this flexibility with a variety of examples.",
"title": ""
},
{
"docid": "aad742025eba642c23533d34337a6255",
"text": "Obtaining good probability estimates is imperative for many applications. The increased uncertainty and typically asymmetric costs surrounding rare events increase this need. Experts (and classification systems) often rely on probabilities to inform decisions. However, we demonstrate that class probability estimates obtained via supervised learning in imbalanced scenarios systematically underestimate the probabilities for minority class instances, despite ostensibly good overall calibration. To our knowledge, this problem has not previously been explored. We propose a new metric, the stratified Brier score, to capture class-specific calibration, analogous to the per-class metrics widely used to assess the discriminative performance of classifiers in imbalanced scenarios. We propose a simple, effective method to mitigate the bias of probability estimates for imbalanced data that bags estimators independently calibrated over balanced bootstrap samples. This approach drastically improves performance on the minority instances without greatly affecting overall calibration. We extend our previous work in this direction by providing ample additional empirical evidence for the utility of this strategy, using both support vector machines and boosted decision trees as base learners. Finally, we show that additional uncertainty can be exploited via a Bayesian approach by considering posterior distributions over bagged probability estimates.",
"title": ""
},
{
"docid": "1fadb803baf3593fef6628d841532a9b",
"text": "Three studies examined the impact of sexual-aggressive song lyrics on aggressive thoughts, emotions, and behavior toward the same and the opposite sex. In Study 1, the authors directly manipulated whether male or female participants listened to misogynous or neutral song lyrics and measured actual aggressive behavior. Male participants who were exposed to misogynous song lyrics administered more hot chili sauce to a female than to a male confederate. Study 2 shed some light on the underlying psychological processes: Male participants who heard misogynous song lyrics recalled more negative attributes of women and reported more feelings of vengeance than when they heard neutral song lyrics. In addition, men-hating song lyrics had a similar effect on aggression-related responses of female participants toward men. Finally, Study 3 replicated the findings of the previous two studies with an alternative measure of aggressive behavior as well as a more subtle measure of aggressive cognitions. The results are discussed in the framework of the General Aggression Model.",
"title": ""
},
{
"docid": "dc2f4cbd2c18e4f893750a0a1a40002b",
"text": "A microstrip half-grid array antenna (HGA) based on low temperature co-fired ceramic (LTCC) technology is presented in this paper. The antenna is designed for the 77-81 GHz radar frequency band and uses a high permittivity material (εr = 7.3). The traditional single-grid array antenna (SGA) uses two radiating elements in the H-plane. For applications using digital beam forming, the focusing of an SGA in the scanning plane (H-plane) limits the field of view (FoV) of the radar system and the width of the SGA enlarges the minimal spacing between the adjacent channels. To overcome this, an array antenna using only half of the grid as radiating element was designed. As feeding network, a laminated waveguide with a vertically arranged power divider was adopted. For comparison, both an SGA and an HGA were fabricated. The measured results show: using an HGA, an HPBW increment in the H-plane can be achieved and their beam patterns in the E-plane remain similar. This compact LTCC antenna is suitable for radar application with a large FoV requirement.",
"title": ""
},
{
"docid": "bef119e43fcc9f2f0b50fdf521026680",
"text": "Automatic image annotation (AIA), a highly popular topic in the field of information retrieval research, has experienced significant progress within the last decade. Yet, the lack of a standardized evaluation platform tailored to the needs of AIA, has hindered effective evaluation of its methods, especially for region-based AIA. Therefore in this paper, we introduce the segmented and annotated IAPR TC-12 benchmark; an extended resource for the evaluation of AIA methods as well as the analysis of their impact on multimedia information retrieval. We describe the methodology adopted for the manual segmentation and annotation of images, and present statistics for the extended collection. The extended collection is publicly available and can be used to evaluate a variety of tasks in addition to image annotation. We also propose a soft measure for the evaluation of annotation performance and identify future research areas in which this extended test collection is likely to make a contribution. 2009 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "07425e53be0f6314d52e3b4de4d1b601",
"text": "Delay discounting was investigated in opioid-dependent and non-drug-using control participants. The latter participants were matched to the former on age, gender, education, and IQ. Participants in both groups chose between hypothetical monetary rewards available either immediately or after a delay. Delayed rewards were $1,000, and the immediate-reward amount was adjusted until choices reflected indifference. This procedure was repeated at each of 7 delays (1 week to 25 years). Opioid-dependent participants were given a second series of choices between immediate and delayed heroin, using the same procedures (i.e., the amount of delayed heroin was that which could be purchased with $1,000). Opioid-dependent participants discounted delayed monetary rewards significantly more than did non-drug-using participants. Furthermore opioid-dependent participants discounted delayed heroin significantly more than delayed money.",
"title": ""
},
{
"docid": "c049f188b31bbc482e16d22a8061abfa",
"text": "SDN deployments rely on switches that come from various vendors and differ in terms of performance and available features. Understanding these differences and performance characteristics is essential for ensuring successful deployments. In this paper we measure, report, and explain the performance characteristics of flow table updates in three hardware OpenFlow switches. Our results can help controller developers to make their programs efficient. Further, we also highlight differences between the OpenFlow specification and its implementations, that if ignored, pose a serious threat to network security and correctness.",
"title": ""
},
{
"docid": "136a2f401b3af00f0f79b991ab65658f",
"text": "Usage of online social business networks like LinkedIn and XING have become commonplace in today’s workplace. This research addresses the question of what factors drive the intention to use online social business networks. Theoretical frame of the study is the Technology Acceptance Model (TAM) and its extensions, most importantly the TAM2 model. Data has been collected via a Web Survey among users of LinkedIn and XING from January to April 2010. Of 541 initial responders 321 finished the questionnaire. Operationalization was tested using confirmatory factor analyses and causal hypotheses were evaluated by means of structural equation modeling. Core result is that the TAM2 model generally holds in the case of online social business network usage behavior, explaining 73% of the observed usage intention. This intention is most importantly driven by perceived usefulness, attitude towards usage and social norm, with the latter effecting both directly and indirectly over perceived usefulness. However, perceived ease of use has—contrary to hypothesis—no direct effect on the attitude towards usage of online social business networks. Social norm has a strong indirect influence via perceived usefulness on attitude and intention, creating a network effect for peer users. The results of this research provide implications for online social business network design and marketing. Customers seem to evaluate ease of use as an integral part of the usefulness of such a service which leads to a situation where it cannot be dealt with separately by a service provider. Furthermore, the strong direct impact of social norm implies application of viral and peerto-peer marketing techniques while it’s also strong indirect effect implies the presence of a network effect which stabilizes the ecosystem of online social business service vendors.",
"title": ""
},
{
"docid": "cd058902ed470efc022c328765a40b34",
"text": "Secure signal authentication is arguably one of the most challenging problems in the Internet of Things (IoT), due to the large-scale nature of the system and its susceptibility to man-in-the-middle and data-injection attacks. In this paper, a novel watermarking algorithm is proposed for dynamic authentication of IoT signals to detect cyber-attacks. The proposed watermarking algorithm, based on a deep learning long short-term memory structure, enables the IoT devices (IoTDs) to extract a set of stochastic features from their generated signal and dynamically watermark these features into the signal. This method enables the IoT gateway, which collects signals from the IoTDs, to effectively authenticate the reliability of the signals. Moreover, in massive IoT scenarios, since the gateway cannot authenticate all of the IoTDs simultaneously due to computational limitations, a game-theoretic framework is proposed to improve the gateway’s decision making process by predicting vulnerable IoTDs. The mixed-strategy Nash equilibrium (MSNE) for this game is derived, and the uniqueness of the expected utility at the equilibrium is proven. In the massive IoT system, due to the large set of available actions for the gateway, the MSNE is shown to be analytically challenging to derive, and thus, a learning algorithm that converges to the MSNE is proposed. Moreover, in order to handle incomplete information scenarios, in which the gateway cannot access the state of the unauthenticated IoTDs, a deep reinforcement learning algorithm is proposed to dynamically predict the state of unauthenticated IoTDs and allow the gateway to decide on which IoTDs to authenticate. Simulation results show that with an attack detection delay of under 1 s, the messages can be transmitted from IoTDs with an almost 100% reliability. The results also show that by optimally predicting the set of vulnerable IoTDs, the proposed deep reinforcement learning algorithm reduces the number of compromised IoTDs by up to 30%, compared to an equal probability baseline.",
"title": ""
},
{
"docid": "4f40700ccdc1b6a8a306389f1d7ea107",
"text": "Skin cancer exists in different forms like Melanoma, Basal and Squamous cell Carcinoma among which Melanoma is the most dangerous and unpredictable. In this paper, we implement an image processing technique for the detection of Melanoma Skin Cancer using the software MATLAB which is easy for implementation as well as detection of Melanoma skin cancer. The input to the system is the skin lesion image. This image proceeds with the image pre-processing methods such as conversion of RGB image to Grayscale image, noise removal and so on. Further Otsu thresholding is used to segment the images followed by feature extraction that includes parameters like Asymmetry, Border Irregularity, Color and Diameter (ABCD) and then Total Dermatoscopy Score (TDS) is calculated. The calculation of TDS determines the presence of Melanoma skin cancer by classifying it as benign, suspicious or highly suspicious skin lesion.",
"title": ""
},
{
"docid": "7f479783ccab6c705bc1d76533f0b1c6",
"text": "The purpose of this research, computerized hotel management system with Satellite Motel Ilorin, Nigeria as the case study is to understand and make use of the computer to solve some of the problems which are usually encountered during manual operations of the hotel management. Finding an accommodation or a hotel after having reached a particular destination is quite time consuming as well as expensive. Here comes the importance of online hotel booking facility. Online hotel booking is one of the latest techniques in the arena of internet that allows travelers to book a hotel located anywhere in the world and that too according to your tastes and preferences. In other words, online hotel booking is one of the awesome facilities of the internet. Booking a hotel online is not only fast as well as convenient but also very cheap. Nowadays, many of the hotel providers have their sites on the web, which in turn allows the users to visit these sites and view the facilities and amenities offered by each of them. So, the proposed computerized of an online hotel management system is set to find a more convenient, well organized, faster, reliable and accurate means of processing the current manual system of the hotel for both near and far customer.",
"title": ""
},
{
"docid": "cdaa99f010b20906fee87d8de08e1106",
"text": "We propose a novel hierarchical clustering algorithm for data-sets in which only pairwise distances between the points are provided. The classical Hungarian method is an efficient algorithm for solving the problem of minimal-weight cycle cover. We utilize the Hungarian method as the basic building block of our clustering algorithm. The disjoint cycles, produced by the Hungarian method, are viewed as a partition of the data-set. The clustering algorithm is formed by hierarchical merging. The proposed algorithm can handle data that is arranged in non-convex sets. The number of the clusters is automatically found as part of the clustering process. We report an improved performance of our algorithm in a variety of examples and compare it to the spectral clustering algorithm.",
"title": ""
},
{
"docid": "c7106bb2ec2c41979ebacdba7dd55217",
"text": "Till recently, the application of the detailed combustion chemistry approach as a predictive tool for engine modeling has been a sort of a ”taboo” motivated by different reasons, but, mainly, by an exaggerated rigor to the chemistry/turbulence interaction modeling. The situation has drastically changed only recently, when STAR-CD and Reaction Design declared in the Newsletter of Compuatational Dynamics (2000/1) the aim to combine multi-dimensional flow solver with the detailed chemistry analysis based on CHEMKIN and SURFACE CHEMKIN packages. Relying on their future developments, we present here the methodology based on the KIVA code. The basic novelty of the proposed methodology is the coupling of a generalized partially stirred reactor, PaSR, model with a high efficiency numerics based on a sparse matrix algebra technique to treat detailed oxidation kinetics of hydrocarbon fuels assuming that chemical processes proceed in two successive steps: the reaction act follows after the micro-mixing resolved on a sub-grid scale. In a completed form, the technique represents detailed chemistry extension of the classic EDCturbulent combustion model. The model application is illustrated by results of numerical simulation of spray combustion and emission formation in the Volvo D12C DI Diesel engine. The results of the 3-D engine modeling on a sector mesh are in reasonable agreement with video data obtained using an endoscopic technique. INTRODUCTION As pollutant emission regulations are becoming more stringent, it turns increasingly more difficult to reconcile emission requirements with the engine economy and thermal efficiency. Soot formation in DI Diesel engines is the key environmental problem whose solution will define the future of these engines: will they survive or are they doomed to disappear? To achieve the design goals, the understanding of the salient features of spray combustion and emission formation processes is required. Diesel spray combustion is nonstationary, three-dimensional, multi-phase process that proAddress all correspondence to this author. ceeds in a high-pressure and high-temperature environment. Recent attempts to develop a ”conceptual model” of diesel spray combustion, see Dec (1997), represent it as a relatively well organized process in which events take place in a logical sequence as the fuel evolves along the jet, undergoing the various stages: spray atomization, droplet ballistics and evaporation, reactant mixing, (macroand micromixing), and, finally, heat release and emissions formation. This opens new perspectives for the modeling based on realization of idealized patterns well confirmed by optical diagnostics data. The success of engine CFD simulations depends on submodels of the physical processes incorporated into the main solver. The KIVA-3v computer code developed by Amsden (1993, July 1997) has been selected for the reason that the code source is available, thus, representing an ideal platform for modification, validation and evaluation. For Diesel engine applications, the KIVA codes solve the conservation equations for evaporating fuel sprays coupled with the three-dimensional turbulent fluid dynamics of compressible, multicomponent, reactive gases in engine cylinders with arbitrary shaped piston geometries. 
The code treats in different ways ”fast” chemical reactions, which are assumed to be in equilibrium, and ”slow” reactions proceeding kinetically, albeit the general trimolecular processes with different third body efficiencies are not incorporated in the mechanism. The turbulent combustion is realized in the form of Magnussen-Hjertager approach not accounting for chemistry/turbulence interaction. This is why the chemical routines in the original code were replaced with our specialized sub-models. The code fuel library has been also updated using modern property data compiled in Daubert and Danner (1989-1994). The detailed mechanism integrating the n-heptane oxidation chemistry with the kinetics of aromatics (up to four aromatic rings) formation for rich acetylene flames developed by Wang and Frenklach (1997) consisting of 117 species and 602 reactions has been validated in conventional kinetic analysis, and a reduced mechanism (60 species, including soot forming agents and N2O and NOx species, 237 reactions) has been incorporated into the KIVA-3v code. This extends capabilities of the code to predict spray combustion of hydrocarbon fuels with particulate emission.",
"title": ""
},
{
"docid": "bde03a5d90507314ce5f034b9b764417",
"text": "Autonomous household robots are supposed to accomplish complex tasks like cleaning the dishes which involve both navigation and manipulation within the environment. For navigation, spatial information is mostly sufficient, but manipulation tasks raise the demand for deeper knowledge about objects, such as their types, their functions, or the way how they can be used. We present KNOWROB-MAP, a system for building environment models for robots by combining spatial information about objects in the environment with encyclopedic knowledge about the types and properties of objects, with common-sense knowledge describing what the objects can be used for, and with knowledge derived from observations of human activities by learning statistical relational models. In this paper, we describe the concept and implementation of KNOWROB-MAP and present several examples demonstrating the range of information the system can provide to autonomous robots.",
"title": ""
},
{
"docid": "85693811a951a191d573adfe434e9b18",
"text": "Diagnosing problems in data centers has always been a challenging problem due to their complexity and heterogeneity. Among recent proposals for addressing this challenge, one promising approach leverages provenance, which provides the fundamental functionality that is needed for performing fault diagnosis and debugging—a way to track direct and indirect causal relationships between system states and their changes. This information is valuable, since it permits system operators to tie observed symptoms of a faults to their potential root causes. However, capturing provenance in a data center is challenging because, at high data rates, it would impose a substantial cost. In this paper, we introduce techniques that can help with this: We show how to reduce the cost of maintaining provenance by leveraging structural similarities for compression, and by offloading expensive but highly parallel operations to hardware. We also discuss our progress towards transforming provenance into compact actionable diagnostic decisions to repair problems caused by misconfigurations and program bugs.",
"title": ""
},
{
"docid": "be5b0dd659434e77ce47034a51fd2767",
"text": "Current obstacles in the study of social media marketing include dealing with massive data and real-time updates have motivated to contribute solutions that can be adopted for viral marketing. Since information diffusion and social networks are the core of viral marketing, this article aims to investigate the constellation of diffusion methods for viral marketing. Studies on diffusion methods for viral marketing have applied different computational methods, but a systematic investigation of these methods has limited. Most of the literature have focused on achieving objectives such as influence maximization or community detection. Therefore, this article aims to conduct an in-depth review of works related to diffusion for viral marketing. Viral marketing has applied to business-to-consumer transactions but has seen limited adoption in business-to-business transactions. The literature review reveals a lack of new diffusion methods, especially in dynamic and large-scale networks. It also offers insights into applying various mining methods for viral marketing. It discusses some of the challenges, limitations, and future research directions of information diffusion for viral marketing. The article also introduces a viral marketing information diffusion model. The proposed model attempts to solve the dynamicity and large-scale data of social networks by adopting incremental clustering and a stochastic differential equation for business-to-business transactions. Keywords—information diffusion; viral marketing; social media marketing; social networks",
"title": ""
}
] | scidocsrr |
aab4d0acc19c2e8c86480233f7bc7d40 | Unmanned aerial vehicle smart device ground control station cyber security threat model | [
{
"docid": "3d78d929b1e11b918119abba4ef8348d",
"text": "Recent developments in mobile technologies have produced a new kind of device, a programmable mobile phone, the smartphone. Generally, smartphone users can program any application which is customized for needs. Furthermore, they can share these applications in online market. Therefore, smartphone and its application are now most popular keywords in mobile technology. However, to provide these customized services, smartphone needs more private information and this can cause security vulnerabilities. Therefore, in this work, we analyze security of smartphone based on its environments and describe countermeasures.",
"title": ""
}
] | [
{
"docid": "f6472cbb2beb8f36a3473759951a1cfa",
"text": "Hair highlighting procedures are very common throughout the world. While rarely reported, potential adverse events to such procedures include allergic and irritant contact dermatitis, thermal burns, and chemical burns. Herein, we report two cases of female adolescents who underwent a hair highlighting procedure at local salons and sustained a chemical burn to the scalp. The burn etiology, clinical and histologic features, the expected sequelae, and a review of the literature are described.",
"title": ""
},
{
"docid": "e28feb56ebc33a54d13452a2ea3a49f7",
"text": "Ping Yan, Hsinchun Chen, and Daniel Zeng Department of Management Information Systems University of Arizona, Tucson, Arizona pyan@email.arizona.edu; {hchen, zeng}@eller.arizona.edu",
"title": ""
},
{
"docid": "83cfa05fc29b4eb4eb7b954ba53498f5",
"text": "Smartphones, the devices we carry everywhere with us, are being heavily tracked and have undoubtedly become a major threat to our privacy. As “Tracking the trackers” has become a necessity, various static and dynamic analysis tools have been developed in the past. However, today, we still lack suitable tools to detect, measure and compare the ongoing tracking across mobile OSs. To this end, we propose MobileAppScrutinator, based on a simple yet efficient dynamic analysis approach, that works on both Android and iOS (the two most popular OSs today). To demonstrate the current trend in tracking, we select 140 most representative Apps available on both Android and iOS AppStores and test them with MobileAppScrutinator. In fact, choosing the same set of apps on both Android and iOS also enables us to compare the ongoing tracking on these two OSs. Finally, we also discuss the effectiveness of privacy safeguards available on Android and iOS. We show that neither Android nor iOS privacy safeguards in their present state are completely satisfying.",
"title": ""
},
{
"docid": "d43dc521d3f0f17ccd4840d6081dcbfe",
"text": "In Vehicular Ad hoc NETworks (VANETs), authentication is a crucial security service for both inter-vehicle and vehicle-roadside communications. On the other hand, vehicles have to be protected from the misuse of their private data and the attacks on their privacy, as well as to be capable of being investigated for accidents or liabilities from non-repudiation. In this paper, we investigate the authentication issues with privacy preservation and non-repudiation in VANETs. We propose a novel framework with preservation and repudiation (ACPN) for VANETs. In ACPN, we introduce the public-key cryptography (PKC) to the pseudonym generation, which ensures legitimate third parties to achieve the non-repudiation of vehicles by obtaining vehicles' real IDs. The self-generated PKCbased pseudonyms are also used as identifiers instead of vehicle IDs for the privacy-preserving authentication, while the update of the pseudonyms depends on vehicular demands. The existing ID-based signature (IBS) scheme and the ID-based online/offline signature (IBOOS) scheme are used, for the authentication between the road side units (RSUs) and vehicles, and the authentication among vehicles, respectively. Authentication, privacy preservation, non-repudiation and other objectives of ACPN have been analyzed for VANETs. Typical performance evaluation has been conducted using efficient IBS and IBOOS schemes. We show that the proposed ACPN is feasible and adequate to be used efficiently in the VANET environment.",
"title": ""
},
{
"docid": "613f0bf05fb9467facd2e58b70d2b09e",
"text": "The gold standard for improving sensory, motor and or cognitive abilities is long-term training and practicing. Recent work, however, suggests that intensive training may not be necessary. Improved performance can be effectively acquired by a complementary approach in which the learning occurs in response to mere exposure to repetitive sensory stimulation. Such training-independent sensory learning (TISL), which has been intensively studied in the somatosensory system, induces in humans lasting changes in perception and neural processing, without any explicit task training. It has been suggested that the effectiveness of this form of learning stems from the fact that the stimulation protocols used are optimized to alter synaptic transmission and efficacy. TISL provides novel ways to investigate in humans the relation between learning processes and underlying cellular and molecular mechanisms, and to explore alternative strategies for intervention and therapy.",
"title": ""
},
{
"docid": "f93b332ba576d1095ba33e976db5cab0",
"text": "Recent publications have argued that the welfare state is an important determinant of population health, and that social democracy in office and higher levels of health expenditure promote health progress. In the period 1950-2000, Greece, Portugal, and Spain were the poorest market economies in Europe, with a fragmented system of welfare provision, and many years of military or authoritarian right-wing regimes. In contrast, the five Nordic countries were the richest market economies in Europe, governed mostly by center or center-left coalitions often including the social democratic parties, and having a generous and universal welfare state. In spite of the socioeconomic and political differences, and a large gap between the five Nordic and the three southern nations in levels of health in 1950, population health indicators converged among these eight countries. Mean decadal gains in longevity of Portugal and Spain between 1950 and 2000 were almost three times greater than gains in Denmark, and about twice as great as those in Iceland, Norway and Sweden during the same period. All this raises serious doubts regarding the hypothesis that the political regime, the political party in office, the level of health care spending, and the type of welfare state exert major influences on population health. Either these factors are not major determinants of mortality decline, or their impact on population health in Nordic countries was more than offset by other health-promoting factors present in Southern Europe.",
"title": ""
},
{
"docid": "e7ce1d8ecab61d0a414223426e114a46",
"text": "Sentence ordering is a general and critical task for natural language generation applications. Previous works have focused on improving its performance in an external, downstream task, such as multi-document summarization. Given its importance, we propose to study it as an isolated task. We collect a large corpus of academic texts, and derive a data driven approach to learn pairwise ordering of sentences, and validate the efficacy with extensive experiments. Source codes1 and dataset2 of this paper will be made publicly available.",
"title": ""
},
{
"docid": "6140255e69aa292bf8c97c9ef200def7",
"text": "Food production requires application of fertilizers containing phosphorus, nitrogen and potassium on agricultural fields in order to sustain crop yields. However modern agriculture is dependent on phosphorus derived from phosphate rock, which is a non-renewable resource and current global reserves may be depleted in 50–100 years. While phosphorus demand is projected to increase, the expected global peak in phosphorus production is predicted to occur around 2030. The exact timing of peak phosphorus production might be disputed, however it is widely acknowledged within the fertilizer industry that the quality of remaining phosphate rock is decreasing and production costs are increasing. Yet future access to phosphorus receives little or no international attention. This paper puts forward the case for including long-term phosphorus scarcity on the priority agenda for global food security. Opportunities for recovering phosphorus and reducing demand are also addressed together with institutional challenges. 2009 Published by Elsevier Ltd.",
"title": ""
},
{
"docid": "3b9e33ca0f2e479c58e3290f5c3ee2d5",
"text": "BACKGROUND\nCardiac complications due to iron overload are the most common cause of death in patients with thalassemia major. The aim of this study was to compare iron chelation effects of deferoxamine, deferasirox, and combination of deferoxamine and deferiprone on cardiac and liver iron load measured by T2* MRI.\n\n\nMETHODS\nIn this study, 108 patients with thalassemia major aged over 10 years who had iron overload in cardiac T2* MRI were studied in terms of iron chelators efficacy on the reduction of myocardial siderosis. The first group received deferoxamine, the second group only deferasirox, and the third group, a combination of deferoxamine and deferiprone. Myocardial iron was measured at baseline and 12 months later through T2* MRI technique.\n\n\nRESULTS\nThe three groups were similar in terms of age, gender, ferritin level, and mean myocardial T2* at baseline. In the deferoxamine group, myocardial T2* was increased from 12.0±4.1 ms at baseline to 13.5±8.4 ms at 12 months (p=0.10). Significant improvement was observed in myocardial T2* of the deferasirox group (p<0.001). In the combined treatment group, myocardial T2* was significantly increased (p<0.001). These differences among the three groups were not significant at the 12 months. A significant improvement was observed in liver T2* at 12 months compared to baseline in the deferasirox and the combination group.\n\n\nCONCLUSION\nIn comparison to deferoxamine monotherapy, combination therapy and deferasirox monotherapy have a significant impact on reducing iron overload and improvement of myocardial and liver T2* MRI.",
"title": ""
},
{
"docid": "18a317b8470b4006ccea0e436f54cfcd",
"text": "Device-to-device communications enable two proximity users to transmit signal directly without going through the base station. It can increase network spectral efficiency and energy efficiency, reduce transmission delay, offload traffic for the BS, and alleviate congestion in the cellular core networks. However, many technical challenges need to be addressed for D2D communications to harvest the potential benefits, including device discovery and D2D session setup, D2D resource allocation to guarantee QoS, D2D MIMO transmission, as well as D2D-aided BS deployment in heterogeneous networks. In this article, the basic concepts of D2D communications are first introduced, and then existing fundamental works on D2D communications are discussed. In addition, some potential research topics and challenges are also identified.",
"title": ""
},
{
"docid": "08bef09a01414bafcbc778fea85a7c0a",
"text": "The use.of energy-minimizing curves, known as “snakes,” to extract features of interest in images has been introduced by Kass, Witkhr & Terzopoulos (Znt. J. Comput. Vision 1, 1987,321-331). We present a model of deformation which solves some of the problems encountered with the original method. The external forces that push the curve to the edges are modified to give more stable results. The original snake, when it is not close enough to contours, is not attracted by them and straightens to a line. Our model makes the curve behave like a balloon which is inflated by an additional force. The initial curve need no longer be close to the solution to converge. The curve passes over weak edges and is stopped only if the edge is strong. We give examples of extracting a ventricle in medical images. We have also made a first step toward 3D object reconstruction, by tracking the extracted contour on a series of successive cross sections.",
"title": ""
},
{
"docid": "71734f09f053ede7b565047a55cca132",
"text": "Researchers have paid considerable attention to natural user interfaces, especially sensing gestures and touches upon an un-instrumented surface from an overhead camera. We present a system that combines depth sensing from a Microsoft Kinect and temperature sensing from a thermal imaging camera to infer a variety of gestures and touches for controlling a natural user interface. The system, coined Dante, is capable of (1) inferring multiple touch points from multiple users (92.6% accuracy), (2) detecting and classifying each user using their depth and thermal footprint (87.7% accuracy), and (3) detecting touches on objects placed upon the table top (91.7% accuracy). The system can also classify the pressure of chording motions. The system is real time, with an average processing delay of 40 ms.",
"title": ""
},
{
"docid": "6dd1df4e520f5858d48db9860efb63a7",
"text": "This paper proposes single-phase direct pulsewidth modulation (PWM) buck-, boost-, and buck-boost-type ac-ac converters. The proposed converters are implemented with a series-connected freewheeling diode and MOSFET pair, which allows to minimize the switching and conduction losses of the semiconductor devices and resolves the reverse-recovery problem of body diode of MOSFET. The proposed converters are highly reliable because they can solve the shoot-through and dead-time problems of traditional ac-ac converters without voltage/current sensing module, lossy resistor-capacitor (RC) snubbers, or bulky coupled inductors. In addition, they can achieve high obtainable voltage gain and also produce output voltage waveforms of good quality because they do not use lossy snubbers. Unlike the recently developed switching cell (SC) ac-ac converters, the proposed ac-ac converters have no circulating current and do not require bulky coupled inductors; therefore, the total losses, current stresses, and magnetic volume are reduced and efficiency is improved. Detailed analysis and experimental results are provided to validate the novelty and merit of the proposed converters.",
"title": ""
},
{
"docid": "67fd6424fc1aebe250b0fbf638a196b7",
"text": "The World Health Organization's Ottawa Charter for Health Promotion has been influential in guiding the development of 'settings' based health promotion. Over the past decade, settings such as schools have flourished and there has been a considerable amount of academic literature produced, including theoretical papers, descriptive studies and evaluations. However, despite its central importance, the health-promoting general practice has received little attention. This paper discusses: the significance of this setting for health promotion; how a health promoting general practice can be created; effective health promotion approaches; the nursing contribution; and some challenges that need to be resolved. In order to become a health promoting general practice, the staff must undertake a commitment to fulfil the following conditions: create a healthy working environment; integrate health promotion into practice activities; and establish alliances with other relevant institutions and groups within the community. The health promoting general practice is the gold standard for health promotion. Settings that have developed have had the support of local, national and European networks. Similar assistance and advocacy will be needed in general practice. This paper recommends that a series of rigorously evaluated, high-quality pilot sites need to be established to identify and address potential difficulties, and to ensure that this innovative approach yields tangible health benefits for local communities. It also suggests that government support is critical to the future development of health promoting general practices. This will be needed both directly and in relation to the capacity and resourcing of public health in general.",
"title": ""
},
{
"docid": "e81d3f48d7213720f489f52852cfbfa3",
"text": "HE BRITISH ROCK GROUP Radiohead has carved out a unique place in the post-millennial rock milieu by tempering their highly experimental idiolect with structures more commonly heard in Top Forty rock styles. 1 In what I describe as a Goldilocks principle, much of their music after OK Computer (1997) inhabits a space between banal convention and sheer experimentation—a dichotomy which I have elsewhere dubbed the 'Spears–Stockhausen Continuum.' 2 In the timbral domain, the band often introduces sounds rather foreign to rock music such as the ondes Martenot and highly processed lead vocals within textures otherwise dominated by guitar, bass, and drums (e.g., 'The National Anthem,' 2000), and song forms that begin with paradigmatic verse–chorus structures often end with new material instead of a recapitulated chorus (e.g., 'All I Need,' 2007). In this T",
"title": ""
},
{
"docid": "ebe5630a0fb36452e2c9e94a53ef073a",
"text": "Imperforate hymen is uncommon, occurring in 0.1 % of newborn females. Non-syndromic familial occurrence of imperforate hymen is extremely rare and has been reported only three times in the English literature. The authors describe two cases in a family across two generations, one presenting with chronic cyclical abdominal pain and the other acutely. There were no other significant reproductive or systemic abnormalities in either case. Imperforate hymen occurs mostly in a sporadic manner, although rare familial cases do occur. Both the recessive and the dominant modes of transmission have been suggested. However, no genetic markers or mutations have been proven as etiological factors. Evaluating all female relatives of the affected patients at an early age can lead to early diagnosis and treatment in an asymptomatic case.",
"title": ""
},
{
"docid": "abbb210122d470215c5a1d0420d9db06",
"text": "Ensemble clustering, also known as consensus clustering, is emerging as a promising solution for multi-source and/or heterogeneous data clustering. The co-association matrix based method, which redefines the ensemble clustering problem as a classical graph partition problem, is a landmark method in this area. Nevertheless, the relatively high time and space complexity preclude it from real-life large-scale data clustering. We therefore propose SEC, an efficient Spectral Ensemble Clustering method based on co-association matrix. We show that SEC has theoretical equivalence to weighted K-means clustering and results in vastly reduced algorithmic complexity. We then derive the latent consensus function of SEC, which to our best knowledge is among the first to bridge co-association matrix based method to the methods with explicit object functions. The robustness and generalizability of SEC are then investigated to prove the superiority of SEC in theory. We finally extend SEC to meet the challenge rising from incomplete basic partitions, based on which a scheme for big data clustering can be formed. Experimental results on various real-world data sets demonstrate that SEC is an effective and efficient competitor to some state-of-the-art ensemble clustering methods and is also suitable for big data clustering.",
"title": ""
},
{
"docid": "3d8be6d4478154bc711d9cf241e7edb5",
"text": "The use of multimedia technology to teach language in its authentic cultural context represents a double challenge for language learners and teachers. On the one hand, the computer gives learners access to authentic video footage and other cultural materials that can help them get a sense of the sociocultural context in which the language is used. On the other hand, CD-ROM multimedia textualizes this context in ways that need to be \"read\" and interpreted. Learners are thus faced with the double task of (a) observing and choosing culturally relevant features of the context and (b) putting linguistic features in relation to other features to arrive at some understanding of language in use. This paper analyzes the interaction of text and context in a multimedia Quechua language program, and makes suggestions for teaching foreign languages through multimedia technology.",
"title": ""
},
{
"docid": "95296a02831a1f8fb50288503bea75ad",
"text": "The Residual Network (ResNet), proposed in He et al. (2015a), utilized shortcut connections to significantly reduce the difficulty of training, which resulted in great performance boosts in terms of both training and generalization error. It was empirically observed in He et al. (2015a) that stacking more layers of residual blocks with shortcut 2 results in smaller training error, while it is not true for shortcut of length 1 or 3. We provide a theoretical explanation for the uniqueness of shortcut 2. We show that with or without nonlinearities, by adding shortcuts that have depth two, the condition number of the Hessian of the loss function at the zero initial point is depth-invariant, which makes training very deep models no more difficult than shallow ones. Shortcuts of higher depth result in an extremely flat (high-order) stationary point initially, from which the optimization algorithm is hard to escape. The shortcut 1, however, is essentially equivalent to no shortcuts, which has a condition number exploding to infinity as the number of layers grows. We further argue that as the number of layers tends to infinity, it suffices to only look at the loss function at the zero initial point. Extensive experiments are provided accompanying our theoretical results. We show that initializing the network to small weights with shortcut 2 achieves significantly better results than random Gaussian (Xavier) initialization, orthogonal initialization, and shortcuts of deeper depth, from various perspectives ranging from final loss, learning dynamics and stability, to the behavior of the Hessian along the learning process.",
"title": ""
},
{
"docid": "13beac4518bcbce5c0d68eb63e754474",
"text": "Alternating direction methods are a common tool for general mathematical programming and optimization. These methods have become particularly important in the field of variational image processing, which frequently requires the minimization of non-differentiable objectives. This paper considers accelerated (i.e., fast) variants of two common alternating direction methods: the Alternating Direction Method of Multipliers (ADMM) and the Alternating Minimization Algorithm (AMA). The proposed acceleration is of the form first proposed by Nesterov for gradient descent methods. In the case that the objective function is strongly convex, global convergence bounds are provided for both classical and accelerated variants of the methods. Numerical examples are presented to demonstrate the superior performance of the fast methods for a wide variety of problems.",
"title": ""
}
] | scidocsrr |
0c8107a1605a54c2e1f35f31ca34932a | Learning Disentangled Multimodal Representations for the Fashion Domain | [
{
"docid": "e77dc44a5b42d513bdbf4972d62a74f9",
"text": "Clothing recognition is an extremely challenging problem due to wide variation in clothing item appearance, layering, and style. In this paper, we tackle the clothing parsing problem using a retrieval based approach. For a query image, we find similar styles from a large database of tagged fashion images and use these examples to parse the query. Our approach combines parsing from: pre-trained global clothing models, local clothing models learned on the fly from retrieved examples, and transferred parse masks (paper doll item transfer) from retrieved examples. Experimental evaluation shows that our approach significantly outperforms state of the art in parsing accuracy.",
"title": ""
},
{
"docid": "88033862d9fac08702977f1232c91f3a",
"text": "Topic modeling based on latent Dirichlet allocation (LDA) has been a framework of choice to deal with multimodal data, such as in image annotation tasks. Another popular approach to model the multimodal data is through deep neural networks, such as the deep Boltzmann machine (DBM). Recently, a new type of topic model called the Document Neural Autoregressive Distribution Estimator (DocNADE) was proposed and demonstrated state-of-the-art performance for text document modeling. In this work, we show how to successfully apply and extend this model to multimodal data, such as simultaneous image classification and annotation. First, we propose SupDocNADE, a supervised extension of DocNADE, that increases the discriminative power of the learned hidden topic features and show how to employ it to learn a joint representation from image visual words, annotation words and class label information. We test our model on the LabelMe and UIUC-Sports data sets and show that it compares favorably to other topic models. Second, we propose a deep extension of our model and provide an efficient way of training the deep model. Experimental results show that our deep model outperforms its shallow version and reaches state-of-the-art performance on the Multimedia Information Retrieval (MIR) Flickr data set.",
"title": ""
},
{
"docid": "6af09f57f2fcced0117dca9051917a0d",
"text": "We present a novel per-dimension learning rate method for gradient descent called ADADELTA. The method dynamically adapts over time using only first order information and has minimal computational overhead beyond vanilla stochastic gradient descent. The method requires no manual tuning of a learning rate and appears robust to noisy gradient information, different model architecture choices, various data modalities and selection of hyperparameters. We show promising results compared to other methods on the MNIST digit classification task using a single machine and on a large scale voice dataset in a distributed cluster environment.",
"title": ""
},
{
"docid": "26884c49c5ada3fc80dbc2f2d1e5660b",
"text": "We introduce a complete pipeline for recognizing and classifying people’s clothing in natural scenes. This has several interesting applications, including e-commerce, event and activity recognition, online advertising, etc. The stages of the pipeline combine a number of state-of-the-art building blocks such as upper body detectors, various feature channels and visual attributes. The core of our method consists of a multi-class learner based on a Random Forest that uses strong discriminative learners as decision nodes. To make the pipeline as automatic as possible we also integrate automatically crawled training data from the web in the learning process. Typically, multi-class learning benefits from more labeled data. Because the crawled data may be noisy and contain images unrelated to our task, we extend Random Forests to be capable of transfer learning from different domains. For evaluation, we define 15 clothing classes and introduce a benchmark data set for the clothing classification task consisting of over 80, 000 images, which we make publicly available. We report experimental results, where our classifier outperforms an SVM baseline with 41.38 % vs 35.07 % average accuracy on challenging benchmark data.",
"title": ""
},
{
"docid": "527d7c091cfc63c8e9d36afdd6b7bdfe",
"text": "Previous work on Recursive Neural Networks (RNNs) shows that these models can produce compositional feature vectors for accurately representing and classifying sentences or images. However, the sentence vectors of previous models cannot accurately represent visually grounded meaning. We introduce the DT-RNN model which uses dependency trees to embed sentences into a vector space in order to retrieve images that are described by those sentences. Unlike previous RNN-based models which use constituency trees, DT-RNNs naturally focus on the action and agents in a sentence. They are better able to abstract from the details of word order and syntactic expression. DT-RNNs outperform other recursive and recurrent neural networks, kernelized CCA and a bag-of-words baseline on the tasks of finding an image that fits a sentence description and vice versa. They also give more similar representations to sentences that describe the same image.",
"title": ""
}
] | [
{
"docid": "01ebd4b68fb94fc5defaff25c2d294b0",
"text": "High data rate E-band (71 GHz- 76 GHz, 81 GHz - 86 GHz, 92 GHz - 95 GHz) communication systems will benefit from power amplifiers that are more than twice as powerful than commercially available GaAs pHEMT MMICs. We report development of three stage GaN MMIC power amplifiers for E-band radio applications that produce 500 mW of saturated output power in CW mode and have > 12 dB of associated power gain. The output power density from 300 mum output gate width GaN MMICs is seven times higher than the power density of commercially available GaAs pHEMT MMICs in this frequency range.",
"title": ""
},
{
"docid": "c96fc4b6f28c1832c6e150dc62101f5e",
"text": "BACKGROUND AND OBJECTIVES\nNerve blocks and radiofrequency neurotomy of the nerves supplying the cervical zygapophyseal joints are validated tools for diagnosis and treatment of chronic neck pain, respectively. Unlike fluoroscopy, ultrasound may allow visualization of the target nerves, thereby potentially improving diagnostic accuracy and therapeutic efficacy of the procedures. The aims of this exploratory study were to determine the ultrasound visibility of the target nerves in chronic neck pain patients and to describe the variability of their course in relation to the fluoroscopically used bony landmarks.\n\n\nMETHODS\nFifty patients with chronic neck pain were studied. Sonographic visibility of the nerves and the bony target of fluoroscopically guided blocks were determined. The craniocaudal distance between the nerves and their corresponding fluoroscopic targets was measured.\n\n\nRESULTS\nSuccessful visualization of the nerves varied from 96% for the third occipital nerve to 84% for the medial branch of C6. The great exception was the medial branch of C7, which was visualized in 32%. The bony targets could be identified in all patients, with exception of C7, which was identified in 92%. The craniocaudal distance of each nerve to the corresponding bony target varied, the upper limit of the range being 2.2 mm at C4, the lower limit 1.0 mm at C7.\n\n\nCONCLUSIONS\nThe medial branches and their relation to the fluoroscopically used bony targets were mostly visualized by ultrasound, with the exception of the medial branch of C7 and, to a lesser extent, the bony target of C7. The nerve location may be distant from the fluoroscope's target. These findings justify further studies to investigate the validity of ultrasound guided blocks for invasive diagnosis/treatment of cervical zygapophyseal joint pain.",
"title": ""
},
{
"docid": "dcacbed90f45b76e9d40c427e16e89d6",
"text": "High torque density and low torque ripple are crucial for traction applications, which allow electrified powertrains to perform properly during start-up, acceleration, and cruising. High-quality anisotropic magnetic materials such as cold-rolled grain-oriented electrical steels can be used for achieving higher efficiency, torque density, and compactness in synchronous reluctance motors equipped with transverse laminated rotors. However, the rotor cylindrical geometry makes utilization of these materials with pole numbers higher than two more difficult. From a reduced torque ripple viewpoint, particular attention to the rotor slot pitch angle design can lead to improvements. This paper presents an innovative rotor lamination design and assembly using cold-rolled grain-oriented electrical steel to achieve higher torque density along with an algorithm for rotor slot pitch angle design for reduced torque ripple. The design methods and prototyping process are discussed, finite-element analyses and experimental examinations are carried out, and the results are compared to verify and validate the proposed methods.",
"title": ""
},
{
"docid": "95efc564448b3ec74842d047f94cb779",
"text": "Over the past 25 years or so there has been much interest in the use of digital pre-distortion (DPD) techniques for the linearization of RF and microwave power amplifiers. In this paper, we describe the important system and hardware requirements for the four main subsystems found in the DPD linearized transmitter: RF/analog, data converters, digital signal processing, and the DPD architecture and algorithms, and illustrate how the overall DPD system architecture is influenced by the design choices that may be made in each of these subsystems. We shall also consider the challenges presented to future applications of DPD systems for wireless communications, such as higher operating frequencies, wider signal bandwidths, greater spectral efficiency signals, resulting in higher peak-to-average power ratios, multiband and multimode operation, lower power consumption requirements, faster adaption, and how these affect the system design choices.",
"title": ""
},
{
"docid": "c3f25271d25590bf76b36fee4043d227",
"text": "Over the past few decades, application of artificial neural networks (ANN) to time-series forecasting (TSF) has been growing rapidly due to several unique features of ANN models. However, to date, a consistent ANN performance over different studies has not been achieved. Many factors contribute to the inconsistency in the performance of neural network models. One such factor is that ANN modeling involves determining a large number of design parameters, and the current design practice is essentially heuristic and ad hoc, this does not exploit the full potential of neural networks. Systematic ANN modeling processes and strategies for TSF are, therefore, greatly needed. Motivated by this need, this paper attempts to develop an automatic ANN modeling scheme. It is based on the generalized regression neural network (GRNN), a special type of neural network. By taking advantage of several GRNN properties (i.e., a single design parameter and fast learning) and by incorporating several design strategies (e.g., fusing multiple GRNNs), we have been able to make the proposed modeling scheme to be effective for modeling large-scale business time series. The initial model was entered into the NN3 time-series competition. It was awarded the best prediction on the reduced dataset among approximately 60 different models submitted by scholars worldwide.",
"title": ""
},
{
"docid": "24e2efc78dc8ffd57f25744ac7532807",
"text": "In this paper, we address the problem of outdoor, appearance-based topological localization, particularly over long periods of time where seasonal changes alter the appearance of the environment. We investigate a straight-forward method that relies on local image features to compare single image pairs. We first look into which of the dominating image feature algorithms, SIFT or the more recent SURF, that is most suitable for this task. We then fine-tune our localization algorithm in terms of accuracy, and also introduce the epipolar constraint to further improve the result. The final localization algorithm is applied on multiple data sets, each consisting of a large number of panoramic images, which have been acquired over a period of nine months with large seasonal changes. The final localization rate in the single-image matching, cross-seasonal case is between 80 to 95%.",
"title": ""
},
{
"docid": "1b7efa9ffda9aa23187ae7028ea5d966",
"text": "Tools for clinical assessment and escalation of observation and treatment are insufficiently established in the newborn population. We aimed to provide an overview over early warning- and track and trigger systems for newborn infants and performed a nonsystematic review based on a search in Medline and Cinahl until November 2015. Search terms included 'infant, newborn', 'early warning score', and 'track and trigger'. Experts in the field were contacted for identification of unpublished systems. Outcome measures included reference values for physiological parameters including respiratory rate and heart rate, and ways of quantifying the extent of deviations from the reference. Only four neonatal early warning scores were published in full detail, and one system for infants with cardiac disease was considered as having a more general applicability. Temperature, respiratory rate, heart rate, SpO2, capillary refill time, and level of consciousness were parameters commonly included, but the definition and quantification of 'abnormal' varied slightly. The available scoring systems were designed for term and near-term infants in postpartum wards, not neonatal intensive care units. In conclusion, there is a limited availability of neonatal early warning scores. Scoring systems for high-risk neonates in neonatal intensive care units and preterm infants were not identified.",
"title": ""
},
{
"docid": "db09043f9491381140febff04b2bb212",
"text": "In this book Professor Holmes discusses some of the evidence relating to one of the most baffling problems yet recognized by biologists-the factors involved in the regulation of growth and form. A wide range of possible influences, from enzymes to cellular competition, is considered. Numerous experiments and theories are described, with or without bibliographic citation. There is a list of references for each chapter and an index. The subject from a scientific standpoint is an exceedingly difficult one, for the reason that very little indeed is understood regarding such phenomena as differentiation. It follows that the problem offers fine opportunities for intellectual jousting by mechanists and vitalists, that hypotheses and theories must often be the weapons of choice, and philosophy the armor. Professor Holmes gives us a good seat from which to watch the combats, explains clearly what is going on, and occasionally slips away to enter the lists himself. This stereoscopic atlas of anatomy was designed as an aid in teaching neuro-anatomy for beginning medical students and as a review for physicians taking Board examinations in Psychiatry and Neurology. Each plate consists of a pair of stereoscopic photographs and a labelled diagram of the important parts seen in the photograph. Perhaps in this day of scarcity of materials, particularly of human brains hardened for dissection, photographs of this kind conceivably can be used as a substitute. Successive stages of dissection are presented in such a fashion that, used in conjunction with the dissecting manual, a student should be able to identify most of the important components of the nervous system without much outside help. The area covered is limited to the gross features of the brain and brain stem and perhaps necessarily does not deal with any of the microscopic structure. So much more can be learned from the dissection of the actual brain that it is doubtful if this atlas would be useful except where brains are not available. A good deal of effort has been spent on the preparation of this atlas, with moderately successful results.",
"title": ""
},
{
"docid": "353bbc5e68ec1d53b3cd0f7c352ee699",
"text": "• A submitted manuscript is the author's version of the article upon submission and before peer-review. There can be important differences between the submitted version and the official published version of record. People interested in the research are advised to contact the author for the final version of the publication, or visit the DOI to the publisher's website. • The final author version and the galley proof are versions of the publication after peer review. • The final published version features the final layout of the paper including the volume, issue and page numbers.",
"title": ""
},
{
"docid": "68abef37fe49bb675d7a2ce22f7bf3a7",
"text": "Objective: The case for exercise and health has primarily been made on its impact on diseases such coronary heart disease, obesity and diabetes. However, there is a very high cost attributed to mental disorders and illness and in the last 15 years there has been increasing research into the role of exercise a) in the treatment of mental health, and b) in improving mental well-being in the general population. There are now several hundred studies and over 30 narrative or meta-analytic reviews of research in this field. These have summarised the potential for exercise as a therapy for clinical or subclinical depression or anxiety, and the use of physical activity as a means of upgrading life quality through enhanced self-esteem, improved mood states, reduced state and trait anxiety, resilience to stress, or improved sleep. The purpose of this paper is to a) provide an updated view of this literature within the context of public health promotion and b) investigate evidence for physical activity and dietary interactions affecting mental well-being. Design: Narrative review and summary. Conclusions: Sufficient evidence now exists for the effectiveness of exercise in the treatment of clinical depression. Additionally, exercise has a moderate reducing effect on state and trait anxiety and can improve physical self-perceptions and in some cases global self-esteem. Also there is now good evidence that aerobic and resistance exercise enhances mood states, and weaker evidence that exercise can improve cognitive function (primarily assessed by reaction time) in older adults. Conversely, there is little evidence to suggest that exercise addiction is identifiable in no more than a very small percentage of exercisers. Together, this body of research suggests that moderate regular exercise should be considered as a viable means of treating depression and anxiety and improving mental well-being in the general public.",
"title": ""
},
{
"docid": "0eaee4f37754d0137de78cf1b4d8d950",
"text": "Outlier detection is an important task in data mining with numerous applications, including credit card fraud detection, video surveillance, etc. Outlier detection has been widely focused and studied in recent years. The concept about outlier factor of object is extended to the case of cluster. Although many outlier detection algorithms have been proposed, most of them face the top-n problem, i.e., it is difficult to know how many points in a database are outliers. In this paper we propose a novel outlier cluster detection algorithm called ROCF based on the concept of mutual neighbor graph and on the idea that the size of outlier clusters is usually much smaller than the normal clusters. ROCF can automatically figure out the outlier rate of a database and effectively detect the outliers and outlier clusters without top-n parameter. The formal analysis and experiments show that this method can achieve good performance in outlier detection. © 2017 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "23d6c80c6cb04dcfd6f29037548d2f05",
"text": "In recent years, the chaos based cryptographic algorithms have suggested some new and efficient ways to develop secure image encryption techniques. In this communication,wepropose a newapproach for image encryption basedon chaotic logisticmaps in order tomeet the requirements of the secure image transfer. In the proposed image encryption scheme, an external secret key of 80-bit and two chaotic logistic maps are employed. The initial conditions for the both logistic maps are derived using the external secret key by providing different weightage to all its bits. Further, in the proposed encryption process, eight different types of operations are used to encrypt the pixels of an image and which one of them will be used for a particular pixel is decided by the outcome of the logistic map. To make the cipher more robust against any attack, the secret key is modified after encrypting each block of sixteen pixels of the image. The results of several experimental, statistical analysis and key sensitivity tests show that the proposed image encryption scheme provides an efficient and secure way for real-time image encryption and transmission. q 2006 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "5aeb8a7daa383259340ac7e27113f783",
"text": "This paper reports on the design, implementation and characterization of wafer-level packaging technology for a wide range of microelectromechanical system (MEMS) devices. The encapsulation technique is based on thermal decomposition of a sacrificial polymer through a polymer overcoat to form a released thin-film organic membrane with scalable height on top of the active part of the MEMS. Hermiticity and vacuum operation are obtained by thin-film deposition of a metal such as chromium, aluminum or gold. The thickness of the overcoat can be optimized according to the size of the device and differential pressure to package a wide variety of MEMS such as resonators, accelerometers and gyroscopes. The key performance metrics of several batches of packaged devices do not degrade as a result of residues from the sacrificial polymer. A Q factor of 5000 at a resonant frequency of 2.5 MHz for the packaged resonator, and a static sensitivity of 2 pF g−1 for the packaged accelerometer were obtained. Cavities as small as 0.000 15 mm3 for the resonator and as large as 1 mm3 for the accelerometer have been made by this method. (Some figures in this article are in colour only in the electronic version)",
"title": ""
},
{
"docid": "217e3b6bc1ed6a1ef8860efff285f4ab",
"text": "Currently, salvage is considered as an effective way for protecting ecosystems of inland water from toxin-producing algal blooms. Yet, the magnitude of algal blooms, which is the essential information required for dispatching salvage boats, cannot be estimated accurately with low cost in real time. In this paper, a data-driven soft sensor is proposed for algal blooms monitoring, which estimates the magnitude of algal blooms using data collected by inexpensive water quality sensors as input. The modeling of the soft sensor consists of two steps: 1) magnitude calculation and 2) regression model training. In the first step, we propose an active learning strategy to construct high-accuracy image classification model with ~50 % less labeled data. Based on this model, we design a new algorithm that recognizes algal blooms and calculates the magnitude using water surface pictures. In the second step, we propose to use Gaussian process to train the regression model that maps the multiparameter water quality sensor data to the calculated magnitude of algal blooms and learn the parameters of the model automatically from the training data. We conduct extensive experiments to evaluate our modeling method, AlgaeSense, based on over 200 000 heterogeneous sensor data records collected in four months from our field-deployed sensor system. The results indicate that the soft sensor can accurately estimate the magnitude of algal blooms in real time using data collected by just three kinds of inexpensive water quality sensors.",
"title": ""
},
{
"docid": "bfe5c10940d4cccfb071598ed04020ac",
"text": "BACKGROUND\nKnowledge about quality of life and sexual health in patients with genital psoriasis is limited.\n\n\nOBJECTIVES\nWe studied quality of life and sexual function in a large group of patients with genital psoriasis by means of validated questionnaires. In addition, we evaluated whether sufficient attention is given by healthcare professionals to sexual problems in patients with psoriasis, as perceived by the patients.\n\n\nMETHODS\nA self-administered questionnaire was sent to 1579 members of the Dutch Psoriasis Association. Sociodemographic patient characteristics, medical data and scores of several validated questionnaires regarding quality of life (Dermatology Life Quality Index) and sexual health (Sexual Quality of Life Questionnaire for use in Men, International Index of Erectile Function, Female Sexual Distress Scale and Female Sexual Function Index) were collected and analysed.\n\n\nRESULTS\nThis study (n = 487) shows that psoriasis has a detrimental effect on quality of life and sexual health. Patients with genital lesions reported even significantly worse quality of life than patients without genital lesions (mean ± SD quality of life scores 8·5 ± 6·5 vs. 5·5 ± 4·6, respectively, P < 0·0001). Sexual distress and dysfunction are particularly prominent in women (reported by 37·7% and 48·7% of the female patients, respectively). Sexual distress is especially high when genital skin is affected (mean ± SD sexual distress score in patients with genital lesions 16·1 ± 12·1 vs. 10·1 ± 9·7 in patients without genital lesions, P = 0·001). The attention given to possible sexual problems in the psoriasis population by healthcare professionals is perceived as insufficient by patients.\n\n\nCONCLUSIONS\nIn addition to quality of life, sexual health is diminished in a considerable number of patients with psoriasis and particularly women with genital lesions have on average high levels of sexual distress. We underscore the need for physicians to pay attention to the impact of psoriasis on psychosocial and sexual health when treating patients for this skin disease.",
"title": ""
},
{
"docid": "ad57044935e65f144a5d718844672b2c",
"text": "DeLone and McLean’s (1992) model of information systems success has received much attention amongst researchers. This study provides the first empirical test of the entire DeLone and McLean model in the user developed application domain. Overall, the model was not supported by the data. Of the nine hypothesised relationships tested four were found to be significant and the remainder not significant. The model provided strong support for the relationships between perceived system quality and user satisfaction, perceived information quality and user satisfaction, user satisfaction and intended use, and user satisfaction and perceived individual impact.",
"title": ""
},
{
"docid": "01165a990d16000ac28b0796e462147a",
"text": "Esthesioneuroblastoma is a rare malignant tumor of sinonasal origin. These tumors typically present with unilateral nasal obstruction and epistaxis, and diagnosis is confirmed on biopsy. Over the past 15 years, significant advances have been made in endoscopic technology and techniques that have made this tumor amenable to expanded endonasal resection. There is growing evidence supporting the feasibility of safe and effective resection of esthesioneuroblastoma via an expanded endonasal approach. This article outlines a technique for endoscopic resection of esthesioneuroblastoma and reviews the current literature on esthesioneuroblastoma with emphasis on outcomes after endoscopic resection of these malignant tumors.",
"title": ""
},
{
"docid": "b958af84a3f977ea4c3efd854bd7de48",
"text": "This paper presents the novel development of an embedded system that aims at digital TV content recommendation based on descriptive metadata collected from versatile sources. The described system comprises a user profiling subsystem identifying user preferences and a user agent subsystem performing content rating. TV content items are ranked using a combined multimodal approach integrating classification-based and keyword-based similarity predictions so that a user is presented with a limited subset of relevant content. Observable user behaviors are discussed as instrumental in user profiling and a formula is provided for implicitly estimating the degree of user appreciation of content. A new relation-based similarity measure is suggested to improve categorized content rating precision. Experimental results show that our system can recommend desired content to users with significant amount of accuracy.",
"title": ""
},
{
"docid": "26d1678ccbd8f1453dccc4fa2eacd3aa",
"text": "A standard assumption in machine learning is the exchangeability of data, which is equivalent to assuming that the examples are generated from the same probability distribution independently. This paper is devoted to testing the assumption of exchangeability on-line: the examples arrive one by one, and after receiving each example we would like to have a valid measure of the degree to which the assumption of exchangeability has been falsified. Such measures are provided by exchangeability martingales. We extend known techniques for constructing exchangeability martingales and show that our new method is competitive with the martingales introduced before. Finally we investigate the performance of our testing method on two benchmark datasets, USPS and Statlog Satellite data; for the former, the known techniques give satisfactory results, but for the latter our new more flexible method becomes necessary.",
"title": ""
},
{
"docid": "a1f930147ad3c3ef48b6352e83d645d0",
"text": "Database applications such as online transaction processing (OLTP) and decision support systems (DSS) constitute the largest and fastest-growing segment of the market for multiprocessor servers. However, most current system designs have been optimized to perform well on scientific and engineering workloads. Given the radically different behavior of database workloads (especially OLTP), it is important to re-evaluate key system design decisions in the context of this important class of applications.This paper examines the behavior of database workloads on shared-memory multiprocessors with aggressive out-of-order processors, and considers simple optimizations that can provide further performance improvements. Our study is based on detailed simulations of the Oracle commercial database engine. The results show that the combination of out-of-order execution and multiple instruction issue is indeed effective in improving performance of database workloads, providing gains of 1.5 and 2.6 times over an in-order single-issue processor for OLTP and DSS, respectively. In addition, speculative techniques enable optimized implementations of memory consistency models that significantly improve the performance of stricter consistency models, bringing the performance to within 10--15% of the performance of more relaxed models.The second part of our study focuses on the more challenging OLTP workload. We show that an instruction stream buffer is effective in reducing the remaining instruction stalls in OLTP, providing a 17% reduction in execution time (approaching a perfect instruction cache to within 15%). Furthermore, our characterization shows that a large fraction of the data communication misses in OLTP exhibit migratory behavior; our preliminary results show that software prefetch and writeback/flush hints can be used for this data to further reduce execution time by 12%.",
"title": ""
}
] | scidocsrr |
6c14f5de0d77beb9c1bd28221866539c | Generation and Analysis of a Large-Scale Urban Vehicular Mobility Dataset | [
{
"docid": "bac9584a31e42129fb7a5fe2640f5725",
"text": "During the last few years, continuous progresses in wireless communications have opened new research fields in computer networking, aimed at extending data networks connectivity to environments where wired solutions are impracticable. Among these, vehicular communication is attracting growing attention from both academia and industry, owing to the amount and importance of the related applications, ranging from road safety to traffic control and up to mobile entertainment. Vehicular Ad-hoc Networks (VANETs) are self-organized networks built up from moving vehicles, and are part of the broader class of Mobile Ad-hoc Networks (MANETs). Owing to their peculiar characteristics, VANETs require the definition of specific networking techniques, whose feasibility and performance are usually tested by means of simulation. One of the main challenges posed by VANETs simulations is the faithful characterization of vehicular mobility at both the macroscopic and microscopic levels, leading to realistic non-uniform distributions of cars and velocity, and unique connectivity dynamics. However, freely distributed tools which are commonly used for academic studies only consider limited vehicular mobility issues, while they pay little or no attention to vehicular traffic generation and its interaction with its motion constraints counterpart. Such a simplistic approach can easily raise doubts on the confidence of derived VANETs simulation results. In this paper we present VanetMobiSim, a freely available generator of realistic vehicular movement traces for networks simulators. The traces generated by VanetMobiSim are validated first by illustrating how the interaction between featured motion constraints and traffic generator models is able to reproduce typical phenomena of vehicular traffic. Then, the traces are formally validated against those obtained by TSIS-CORSIM, a benchmark traffic simulator in transportation research. This makes VanetMobiSim one of the few vehicular mobility simulator fully validated and freely available to the vehicular networks research community.",
"title": ""
}
] | [
{
"docid": "9091df6080e8cd531bd6a883810d7445",
"text": "Despite major scientific, medical and technological advances over the last few decades, a cure for cancer remains elusive. The disease initiation is complex, and including initiation and avascular growth, onset of hypoxia and acidosis due to accumulation of cells beyond normal physiological conditions, inducement of angiogenesis from the surrounding vasculature, tumour vascularization and further growth, and invasion of surrounding tissue and metastasis. Although the focus historically has been to study these events through experimental and clinical observations, mathematical modelling and simulation that enable analysis at multiple time and spatial scales have also complemented these efforts. Here, we provide an overview of this multiscale modelling focusing on the growth phase of tumours and bypassing the initial stage of tumourigenesis. While we briefly review discrete modelling, our focus is on the continuum approach. We limit the scope further by considering models of tumour progression that do not distinguish tumour cells by their age. We also do not consider immune system interactions nor do we describe models of therapy. We do discuss hybrid-modelling frameworks, where the tumour tissue is modelled using both discrete (cell-scale) and continuum (tumour-scale) elements, thus connecting the micrometre to the centimetre tumour scale. We review recent examples that incorporate experimental data into model parameters. We show that recent mathematical modelling predicts that transport limitations of cell nutrients, oxygen and growth factors may result in cell death that leads to morphological instability, providing a mechanism for invasion via tumour fingering and fragmentation. These conditions induce selection pressure for cell survivability, and may lead to additional genetic mutations. Mathematical modelling further shows that parameters that control the tumour mass shape also control its ability to invade. Thus, tumour morphology may serve as a predictor of invasiveness and treatment prognosis.",
"title": ""
},
{
"docid": "60abc52c4953a01d7964b63dde2d8935",
"text": "This article proposes a security authentication process that is well-suited for Vehicular Ad-hoc Networks (VANET). As compared to current Public Key Infrastructure (PKI) proposals for VANET authentication, the scheme is significantly more efficient with regard to bandwidth and computation. The scheme uses time as the creator of asymmetric knowledge. A sender creates a long chain of keys. Each key is used for only a short period of time to sign messages. When a key expires, it is publicly revealed, and then never again used. (The sender subsequently uses the next key in its chain to sign future messages.) Upon receiving a revealed key, recipients authenticate previously received messages. The root of a sender’s keychain is given in a certificate signed by an authority. This article describes several possible certificate exchange methods. It also addresses privacy issues in VANET, specifically the tension between anonymity and the ability to revoke certificates.",
"title": ""
},
{
"docid": "800fd3b3b6dfd21838006e643ba92a0d",
"text": "The primary goals in use of half-bridge LLC series-resonant converter (LLC-SRC) are high efficiency, low noise, and wide-range regulation. A voltage-clamped drive circuit for simultaneously driving both primary and secondary switches is proposed to achieve synchronous rectification (SR) at switching frequency higher than the dominant resonant frequency. No high/low-side driver circuit for half-bridge switches of LLC-SRC is required and less circuit complexity is achieved. The SR mode LLC-SRC developed for reducing output rectification losses is described along with steady-state analysis, gate drive strategy, and its experiments. Design consideration is described thoroughly so as to build up a reference for design and realization. A design example of 240W SR LLC-SRC is examined and an average efficiency as high as 95% at full load is achieved. All performances verified by simulation and experiment are close to the theoretical predictions.",
"title": ""
},
{
"docid": "8fcc1b7e4602649f66817c4c50e10b3d",
"text": "Conventional wisdom suggests that praising a child as a whole or praising his or her traits is beneficial. Two studies tested the hypothesis that both criticism and praise that conveyed person or trait judgments could send a message of contingent worth and undermine subsequent coping. In Study 1, 67 children (ages 5-6 years) role-played tasks involving a setback and received 1 of 3 forms of criticism after each task: person, outcome, or process criticism. In Study 2, 64 children role-played successful tasks and received either person, outcome, or process praise. In both studies, self-assessments, affect, and persistence were measured on a subsequent task involving a setback. Results indicated that children displayed significantly more \"helpless\" responses (including self-blame) on all dependent measures after person criticism or praise than after process criticism or praise. Thus person feedback, even when positive, can create vulnerability and a sense of contingent self-worth.",
"title": ""
},
{
"docid": "18aa13e95a26f0cb82d257c3913b6203",
"text": "The ability to learn from incrementally arriving data is essential for any life-long learning system. However, standard deep neural networks forget the knowledge about the old tasks, a phenomenon called catastrophic forgetting, when trained on incrementally arriving data. We discuss the biases in current Generative Adversarial Networks (GAN) based approaches that learn the classifier by knowledge distillation from previously trained classifiers. These biases cause the trained classifier to perform poorly. We propose an approach to remove these biases by distilling knowledge from the classifier of AC-GAN. Experiments on MNIST and CIFAR10 show that this method is comparable to current state of the art rehearsal based approaches. The code for this paper is available at this link.",
"title": ""
},
{
"docid": "8a9680ae0d35a1c53773ccf7dcef4df7",
"text": "Support Vector Machines SVMs have proven to be highly e ective for learning many real world datasets but have failed to establish them selves as common machine learning tools This is partly due to the fact that they are not easy to implement and their standard imple mentation requires the use of optimization packages In this paper we present simple iterative algorithms for training support vector ma chines which are easy to implement and guaranteed to converge to the optimal solution Furthermore we provide a technique for automati cally nding the kernel parameter and best learning rate Extensive experiments with real datasets are provided showing that these al gorithms compare well with standard implementations of SVMs in terms of generalisation accuracy and computational cost while being signi cantly simpler to implement",
"title": ""
},
{
"docid": "72cc9333577fb255c97f137c5d19fd54",
"text": "The purpose of this study was to provide insight on attitudes towards Facebook advertising. In order to figure out the attitudes towards Facebook advertising, a snowball survey was executed among Facebook users by spreading a link to the survey. This study was quantitative study but the results of the study were interpreted in qualitative way. This research was executed with the help of factor analysis and cluster analysis, after which Chisquare test was used. This research expected that the result of the survey would lead in to two different groups with negative and positive attitudes. Factor analysis was used to find relations between variables that the survey data generated. The factor analysis resulted in 12 factors that were put in a cluster analysis to find different kinds of groups. Surprisingly the cluster analysis enabled the finding of three groups with different interests and different attitudes towards Facebook advertising. These clusters were analyzed and compared. One group was clearly negative, tending to block and avoid advertisements. Second group was with more neutral attitude towards advertising, and more carefree internet using. They did not have blocking software in use and they like to participate in activities more often. The third group had positive attitude towards advertising. The result of this study can be used to help companies better plan their Facebook advertising according to groups. It also reminds about the complexity of people and their attitudes; not everything suits everybody.",
"title": ""
},
{
"docid": "dbc468368059e6b676c8ece22b040328",
"text": "In medical diagnoses and treatments, e.g., endoscopy, dosage transition monitoring, it is often desirable to wirelessly track an object that moves through the human GI tract. In this paper, we propose a magnetic localization and orientation system for such applications. This system uses a small magnet enclosed in the object to serve as excitation source, so it does not require the connection wire and power supply for the excitation signal. When the magnet moves, it establishes a static magnetic field around, whose intensity is related to the magnet's position and orientation. With the magnetic sensors, the magnetic intensities in some predetermined spatial positions can be detected, and the magnet's position and orientation parameters can be computed based on an appropriate algorithm. Here, we propose a real-time tracking system developed by a cubic magnetic sensor array made of Honeywell 3-axis magnetic sensors, HMC1043. Using some efficient software modules and calibration methods, the system can achieve satisfactory tracking accuracy if the cubic sensor array has enough number of 3-axis magnetic sensors. The experimental results show that the average localization error is 1.8 mm.",
"title": ""
},
{
"docid": "7cf625ce06d335d7758c868514b4c635",
"text": "Jeffrey's rule of conditioning has been proposed in order to revise a probability measure by another probability function. We generalize it within the framework of the models based on belief functions. We show that several forms of Jeffrey's conditionings can be defined that correspond to the geometrical rule of conditioning and to Dempster's rule of conditioning, respectively. 1. Jeffrey's rule in probability theory. In probability theory conditioning on an event . is classically obtained by the application of Bayes' rule. Let (Q, � , P) be a probability space where P(A) is the probability of the event Ae � where� is a Boolean algebra defined on a finite2 set n. P(A) quantified the degree of belief or the objective probability, depending on the interpretation given to the probability measure, that a particular arbitrary element m of n which is not a priori located in any of the sets of� belongs to a particular set Ae�. Suppose it is known that m belongs to Be� and P(B)>O. The probability measure P must be updated into PB that quantifies the same event as previously but after taking in due consideration the know ledge that me B. PB is obtained by Bayes' rule of conditioning: This rule can be obtained by requiring that: 81: VBE�. PB(B) = 1 82: VBe�, VX,Ye� such that X.Y�B. and PJ3(X) _ P(X) PB(Y)P(Y) PB(Y) = 0 ifP(Y)>O",
"title": ""
},
{
"docid": "23e5d6ab308be70276468b988213d8f5",
"text": "Compared with traditional image classification, fine-grained visual categorization is a more challenging task, because it targets to classify objects belonging to the same species, e.g., classify hundreds of birds or cars. In the past several years, researchers have made many achievements on this topic. However, most of them are heavily dependent on the artificial annotations, e.g., bounding boxes, part annotations, and so on. The requirement of artificial annotations largely hinders the scalability and application. Motivated to release such dependence, this paper proposes a robust and discriminative visual description named Automated Bi-level Description (AutoBD). “Bi-level” denotes two complementary part-level and object-level visual descriptions, respectively. AutoBD is “automated,” because it only requires the image-level labels of training images and does not need any annotations for testing images. Compared with the part annotations labeled by the human, the image-level labels can be easily acquired, which thus makes AutoBD suitable for large-scale visual categorization. Specifically, the part-level description is extracted by identifying the local region saliently representing the visual distinctiveness. The object-level description is extracted from object bounding boxes generated with a co-localization algorithm. Although only using the image-level labels, AutoBD outperforms the recent studies on two public benchmark, i.e., classification accuracy achieves 81.6% on CUB-200–2011 and 88.9% on Car-196, respectively. On the large-scale Birdsnap data set, AutoBD achieves the accuracy of 68%, which is currently the best performance to the best of our knowledge.",
"title": ""
},
{
"docid": "1f1158ad55dc8a494d9350c5a5aab2f2",
"text": "Individuals display a mathematics disability when their performance on standardized calculation tests or on numerical reasoning tasks is comparatively low, given their age, education and intellectual reasoning ability. Low performance due to cerebral trauma is called acquired dyscalculia. Mathematical learning difficulties with similar features but without evidence of cerebral trauma are referred to as developmental dyscalculia. This review identifies types of developmental dyscalculia, the neuropsychological processes that are linked with them and procedures for identifying dyscalculia. The concept of dyslexia is one with which professionals working in the areas of special education, learning disabilities are reasonably familiar. The concept of dyscalculia, on the other hand, is less well known. This article describes this condition and examines its implications for understanding mathematics learning disabilities. Individuals display a mathematics disability when their performance on standardized calculation tests or on numerical reasoning tasks is significantly depressed, given their age, education and intellectual reasoning ability ( Mental Disorders IV (DSM IV)). When this loss of ability to calculate is due to cerebral trauma, the condition is called acalculia or acquired dyscalculia. Mathematical learning difficulties that share features with acquired dyscalculia but without evidence of cerebral trauma are referred to as developmental dyscalculia (Hughes, Kolstad & Briggs, 1994). The focus of this review is on developmental dyscalculia (DD). Students who show DD have difficulty recalling number facts and completing numerical calculations. They also show chronic difficulties with numerical processing skills such recognizing number symbols, writing numbers or naming written numerals and applying procedures correctly (Gordon, 1992). They may have low self efficacy and selective attentional difficulties (Gross Tsur, Auerbach, Manor & Shalev, 1996). Not all students who display low mathematics achievement have DD. Mathematics underachievement can be due to a range of causes, for example, lack of motivation or interest in learning mathematics, low self efficacy, high anxiety, inappropriate earlier teaching or poor school attendance. It can also be due to generalised poor learning capacity, immature general ability, severe language disorders or sensory processing. Underachievement due to DD has a neuropsychological foundation. The students lack particular cognitive or information processing strategies necessary for acquiring and using arithmetic knowledge. They can learn successfully in most contexts and have relevant general language and sensory processing. They also have access to a curriculum from which their peers learn successfully. It is also necessary to clarify the relationship between DD and reading disabilities. Some aspects of both literacy and arithmetic learning draw on the same cognitive processes. Both, for example, 1 This article was published in Australian Journal of Learning Disabilities, 2003 8, (4).",
"title": ""
},
{
"docid": "a541260619ab3026451fab57d11ee276",
"text": "A dead mammal (i.e. cadaver) is a high quality resource (narrow carbon:nitrogen ratio, high water content) that releases an intense, localised pulse of carbon and nutrients into the soil upon decomposition. Despite the fact that as much as 5,000 kg of cadaver can be introduced to a square kilometre of terrestrial ecosystem each year, cadaver decomposition remains a neglected microsere. Here we review the processes associated with the introduction of cadaver-derived carbon and nutrients into soil from forensic and ecological settings to show that cadaver decomposition can have a greater, albeit localised, effect on belowground ecology than plant and faecal resources. Cadaveric materials are rapidly introduced to belowground floral and faunal communities, which results in the formation of a highly concentrated island of fertility, or cadaver decomposition island (CDI). CDIs are associated with increased soil microbial biomass, microbial activity (C mineralisation) and nematode abundance. Each CDI is an ephemeral natural disturbance that, in addition to releasing energy and nutrients to the wider ecosystem, acts as a hub by receiving these materials in the form of dead insects, exuvia and puparia, faecal matter (from scavengers, grazers and predators) and feathers (from avian scavengers and predators). As such, CDIs contribute to landscape heterogeneity. Furthermore, CDIs are a specialised habitat for a number of flies, beetles and pioneer vegetation, which enhances biodiversity in terrestrial ecosystems.",
"title": ""
},
{
"docid": "a47ef65411bb58481643b49e88b31e34",
"text": "An approach to fault-tolerant design is described in which a computing system S and an algorithm A to be executed by S are both defined by graphs whose nodes represent computing facilities. A is executable by S if A is isomorphic to a subgraph of S.A k-fault is the removal of k nodes (facilities) from S.S is a k-fault tolerant (k-FT) realization of A if A can be executed by S with any k-fault present in S. The problem of designing optimal k-FT systems is considered where A is equated to a 0-FT system. Techniques are described for designing optimal k-FT realizations of single-loop systems; these techniques are related to results in Hamiltonian graph theory. The design of optimal k-FT realizations of certain types of tree systems is also examined. The advantages and disadvantages of the graph model are discussed.",
"title": ""
},
{
"docid": "30e287e44e66e887ad5d689657e019c3",
"text": "OBJECTIVE\nThe purpose of this study was to determine whether the Sensory Profile discriminates between children with and without autism and which items on the profile best discriminate between these groups.\n\n\nMETHOD\nParents of 32 children with autism aged 3 to 13 years and of 64 children without autism aged 3 to 10 years completed the Sensory Profile. A descriptive analysis of the data set of children with autism identified the distribution of responses on each item. A multivariate analysis of covariance (MANCOVA) of each category of the Sensory Profile identified possible differences among subjects without autism, with mild or moderate autism, and with severe autism. Follow-up univariate analyses were conducted for any category that yielded a significant result on the MANCOVA:\n\n\nRESULTS\nEight-four of 99 items (85%) on the Sensory Profile differentiated the sensory processing skills of subjects with autism from those without autism. There were no group differences between subjects with mild or moderate autism and subjects with severe autism.\n\n\nCONCLUSION\nThe Sensory Profile can provide information about the sensory processing skills of children with autism to assist occupational therapists in assessing and planning intervention for these children.",
"title": ""
},
{
"docid": "ba1368e4acc52395a8e9c5d479d4fe8f",
"text": "This talk will present an overview of our recent research on distributional reinforcement learning. Our starting point is our recent ICML paper, in which we argued for the fundamental importance of the value distribution: the distribution of random returns received by a reinforcement learning agent. This is in contrast to the common approach, which models the expectation of this return, or value. Back then, we were able to design a new algorithm that learns the value distribution through a TD-like bootstrap process and achieved state-of-the-art performance on games from the Arcade Learning Environment (ALE). However, this left open the question as to why the distributional approach should perform better at all. We’ve since delved deeper into what makes distributional RL work: first by improving the original using quantile regression, which directly minimizes the Wasserstein metric; and second by unearthing surprising connections between the original C51 algorithm and the distant cousin of the Wasserstein metric, the Cramer distance.",
"title": ""
},
{
"docid": "96f42b3a653964cffa15d9b3bebf0086",
"text": "The brain processes information through many layers of neurons. This deep architecture is representationally powerful1,2,3,4, but it complicates learning by making it hard to identify the responsible neurons when a mistake is made1,5. In machine learning, the backpropagation algorithm1 assigns blame to a neuron by computing exactly how it contributed to an error. To do this, it multiplies error signals by matrices consisting of all the synaptic weights on the neuron’s axon and farther downstream. This operation requires a precisely choreographed transport of synaptic weight information, which is thought to be impossible in the brain1,6,7,8,9,10,11,12,13,14. Here we present a surprisingly simple algorithm for deep learning, which assigns blame by multiplying error signals by random synaptic weights. We show that a network can learn to extract useful information from signals sent through these random feedback connections. In essence, the network learns to learn. We demonstrate that this new mechanism performs as quickly and accurately as backpropagation on a variety of problems and describe the principles which underlie its function. Our demonstration provides a plausible basis for how a neuron can be adapted using error signals generated at distal locations in the brain, and thus dispels long-held assumptions about the algorithmic constraints on learning in neural circuits. 1 ar X iv :1 41 1. 02 47 v1 [ qbi o. N C ] 2 N ov 2 01 4 Networks in the brain compute via many layers of interconnected neurons15,16. To work properly neurons must adjust their synapses so that the network’s outputs are appropriate for its tasks. A longstanding mystery is how upstream synapses (e.g. the synapse between α and β in Fig. 1a) are adjusted on the basis of downstream errors (e.g. e in Fig. 1a). In artificial intelligence this problem is solved by an algorithm called backpropagation of error1. Backprop works well in real-world applications17,18,19, and networks trained with it can account for cell response properties in some areas of cortex20,21. But it is biologically implausible because it requires that neurons send each other precise information about large numbers of synaptic weights — i.e. it needs weight transport1,6,7,8,12,14,22 (Fig. 1a, b). Specifically, backprop multiplies error signals e by the matrix W T , the transpose of the forward synaptic connections, W (Fig. 1b). This implies that feedback is computed using knowledge of all the synaptic weights W in the forward path. For this reason, current theories of biological learning have turned to simpler schemes such as reinforcement learning23, and “shallow” mechanisms which use errors to adjust only the final layer of a network4,11. But reinforcement learning, which delivers the same reward signal to each neuron, is slow and scales poorly with network size5,13,24. And shallow mechanisms waste the representational power of deep networks3,4,25. Here we describe a new deep-learning algorithm that is as fast and accurate as backprop, but much simpler, avoiding all transport of synaptic weight information. This makes it a mechanism the brain could easily exploit. It is based on three insights: (i) The feedback weights need not be exactly W T . In fact, any matrix B will suffice, so long as on average,",
"title": ""
},
{
"docid": "240c6c87a15890e3255c1c35bacbe534",
"text": "Lidar based 3D object detection is inevitable for autonomous driving, because it directly links to environmental understanding and therefore builds the base for prediction and motion planning. The capacity of inferencing highly sparse 3D data in real-time is an ill-posed problem for lots of other application areas besides automated vehicles, e.g. augmented reality, personal robotics or industrial automation. We introduce Complex-YOLO, a state of the art real-time 3D object detection network on point clouds only. In this work, we describe a network that expands YOLOv2, a fast 2D standard object detector for RGB images, by a specific complex regression strategy to estimate multi-class 3D boxes in Cartesian space. Thus, we propose a specific Euler-RegionProposal Network (E-RPN) to estimate the pose of the object by adding an imaginary and a real fraction to the regression network. This ends up in a closed complex space and avoids singularities, which occur by single angle estimations. The E-RPN supports to generalize well during training. Our experiments on the KITTI benchmark suite show that we outperform current leading methods for 3D object detection specifically in terms of efficiency. We achieve state of the art results for cars, pedestrians and cyclists by being more than five times faster than the fastest competitor. Further, our model is capable of estimating all eight KITTIclasses, including Vans, Trucks or sitting pedestrians simultaneously with high accuracy.",
"title": ""
},
{
"docid": "06614a4d74d2d059944b9487f2966ff4",
"text": "In web search, relevance ranking of popular pages is relatively easy, because of the inclusion of strong signals such as anchor text and search log data. In contrast, with less popular pages, relevance ranking becomes very challenging due to a lack of information. In this paper the former is referred to as head pages, and the latter tail pages. We address the challenge by learning a model that can extract search-focused key n-grams from web pages, and using the key n-grams for searches of the pages, particularly, the tail pages. To the best of our knowledge, this problem has not been previously studied. Our approach has four characteristics. First, key n-grams are search-focused in the sense that they are defined as those which can compose \"good queries\" for searching the page. Second, key n-grams are learned in a relative sense using learning to rank techniques. Third, key n-grams are learned using search log data, such that the characteristics of key n-grams in the search log data, particularly in the heads; can be applied to the other data, particularly to the tails. Fourth, the extracted key n-grams are used as features of the relevance ranking model also trained with learning to rank techniques. Experiments validate the effectiveness of the proposed approach with large-scale web search datasets. The results show that our approach can significantly improve relevance ranking performance on both heads and tails; and particularly tails, compared with baseline approaches. Characteristics of our approach have also been fully investigated through comprehensive experiments.",
"title": ""
},
{
"docid": "89eaafb816877a6c4139c30aea0ac8d8",
"text": "We have developed several digital heritage interfaces that utilize Web3D, virtual and augmented reality technologies for visualizing digital heritage in an interactive manner through the use of several different input devices. We propose in this paper an integration of these technologies to provide a novel multimodal mixed reality interface that facilitates the implementation of more interesting digital heritage exhibitions. With such exhibitions participants can switch dynamically between virtual web-based environments to indoor augmented reality environments as well as make use of various multimodal interaction techniques to better explore heritage information in the virtual museum. The museum visitor can potentially experience their digital heritage in the physical sense in the museum, then explore further through the web, visualize this heritage in the round (3D on the web), take that 3D artifact into the augmented reality domain (the real world) and explore it further using various multimodal interfaces.",
"title": ""
},
{
"docid": "e14420212ec11882cc71a57fd68cbb08",
"text": "Organizational ambidexterity refers to the ability of an organization to both explore and exploit—to compete in mature technologies and markets where efficiency, control, and incremental improvement are prized and to also compete in new technologies and markets where flexibility, autonomy, and experimentation are needed. In the past 15 years there has been an explosion of interest and research on this topic. We briefly review the current state of the research, highlighting what we know and don’t know about the topic. We close with a point of view on promising areas for ongoing research.",
"title": ""
}
] | scidocsrr |
f2c52f60de08600aae9f2233ce3bf930 | Artificial Intelligence-Assisted Online Social Therapy for Youth Mental Health | [
{
"docid": "dde075f427d729d028d6d382670f8346",
"text": "Using social media Web sites is among the most common activity of today's children and adolescents. Any Web site that allows social interaction is considered a social media site, including social networking sites such as Facebook, MySpace, and Twitter; gaming sites and virtual worlds such as Club Penguin, Second Life, and the Sims; video sites such as YouTube; and blogs. Such sites offer today's youth a portal for entertainment and communication and have grown exponentially in recent years. For this reason, it is important that parents become aware of the nature of social media sites, given that not all of them are healthy environments for children and adolescents. Pediatricians are in a unique position to help families understand these sites and to encourage healthy use and urge parents to monitor for potential problems with cyberbullying, \"Facebook depression,\" sexting, and exposure to inappropriate content.",
"title": ""
}
] | [
{
"docid": "65c4d3f99a066c235bb5d946934bee05",
"text": "This paper describes a new Augmented Reality (AR) system called HoloLens developed by Microsoft, and the interaction model for supporting collaboration in this space with other users. Whereas traditional AR collaboration is between two or more head-mounted displays (HMD) users, we describe collaboration between a single HMD user and others who join the space by hitching on the view of the HMD user. The remote companions participate remotely through Skype-enabled devices such as tablets or PC's. The interaction is novel in the use of a 3D space with digital objects where the interaction by remote parties can be achieved asynchronously and reflected back to the primary user. We describe additional collaboration scenarios possible with this arrangement.",
"title": ""
},
{
"docid": "8cc42ad71caac7605648166f9049df8e",
"text": "This section considers the application of eye movements to user interfaces—both for analyzing interfaces, measuring usability, and gaining insight into human performance—and as an actual control medium within a human-computer dialogue. The two areas have generally been reported separately; but this book seeks to tie them together. For usability analysis, the user’s eye movements while using the system are recorded and later analyzed retrospectively, but the eye movements do not affect the interface in real time. As a direct control medium, the eye movements are obtained and used in real time as an input to the user-computer dialogue. They might be the sole input, typically for disabled users or hands-busy applications, or they might be used as one of several inputs, combining with mouse, keyboard, sensors, or other devices.",
"title": ""
},
{
"docid": "ae3ccd3698a5b96243a223d41ee4ece4",
"text": "In this paper, we introduce a new approach to image retrieval. This new approach takes the best from two worlds, combines image features (content) and words from collateral text (context) into one semantic space. Our approach uses Latent Semantic Indexing, a method that uses co-occurrence statistics to uncover hidden semantics. This paper shows how this method, that has proven successful in both monolingual and cross lingual text retrieval, can be used for multi-modal and cross-modal information retrieval. Experiments with an on-line newspaper archive show that Latent Semantic Indexing can outperform both content based and context based approaches and that it is a promising approach for indexing visual and multi-modal data.",
"title": ""
},
{
"docid": "4ce758b707071e9bd72e81215938d4cf",
"text": "We describe a pipelined optical-flow processing system that works as a virtual motion sensor. It is based on a field-programmable gate array (FPGA) device enabling the easy change of configuring parameters to adapt the sensor to different speeds, light conditions and other environmental factors. We refer to it as a \"virtual sensor\" because it consists of a conventional camera as front-end supported by an FPGA processing device, which embeds the frame grabber, optical-flow algorithm implementation, output module, and some configuration and storage circuitry. To the best of our knowledge, this is the first study that presents a fully stand-alone optical-flow processing system to include measurements of the platform performance in terms of accuracy and speed.",
"title": ""
},
{
"docid": "4b1c1194a9292adf76452eda03f7f67f",
"text": "Fin-type field-effect transistors (FinFETs) are promising substitutes for bulk CMOS at the nanoscale. FinFETs are double-gate devices. The two gates of a FinFET can either be shorted for higher perfomance or independently controlled for lower leakage or reduced transistor count. This gives rise to a rich design space. This chapter provides an introduction to various interesting FinFET logic design styles, novel circuit designs, and layout considerations.",
"title": ""
},
{
"docid": "114627ba443111d4010969df51f93824",
"text": "We propose a method for explaining regression models and their predictions for individual instances. The method successfully reveals how individual features influence the model and can be used with any type of regression model in a uniform way. We used different types of models and data sets to demonstrate that the method is a useful tool for explaining, comparing, and identifying errors in regression models.",
"title": ""
},
{
"docid": "4a5d4db892145324597bd8d6b98c009f",
"text": "Advances in wireless communication technologies, such as wearable and implantable biosensors, along with recent developments in the embedded computing area are enabling the design, development, and implementation of body area networks. This class of networks is paving the way for the deployment of innovative healthcare monitoring applications. In the past few years, much of the research in the area of body area networks has focused on issues related to wireless sensor designs, sensor miniaturization, low-power sensor circuitry, signal processing, and communications protocols. In this paper, we present an overview of body area networks, and a discussion of BAN communications types and their related issues. We provide a detailed investigation of sensor devices, physical layer, data link layer, and radio technology aspects of BAN research. We also present a taxonomy of BAN projects that have been introduced/proposed to date. Finally, we highlight some of the design challenges and open issues that still need to be addressed to make BANs truly ubiquitous for a wide range of applications. M. Chen · S. Gonzalez · H. Cao · V. C. M. Leung Department of Electrical and Computer Engineering, The University of British Columbia, Vancouver, BC, Canada M. Chen School of Computer Science and Engineering, Seoul National University, Seoul, South Korea A. Vasilakos (B) Department of Computer and Telecommunications Engineering, University of Western Macedonia, Macedonia, Greece e-mail: vasilako@ath.forthnet.gr",
"title": ""
},
{
"docid": "beba751220fc4f8df7be8d8e546150d0",
"text": "Theoretical analysis and implementation of autonomous staircase detection and stair climbing algorithms on a novel rescue mobile robot are presented in this paper. The main goals are to find the staircase during navigation and to implement a fast, safe and smooth autonomous stair climbing algorithm. Silver is used here as the experimental platform. This tracked mobile robot is a tele-operative rescue mobile robot with great capabilities in climbing obstacles in destructed areas. Its performance has been demonstrated in rescue robot league of international RoboCup competitions. A fuzzy controller is applied to direct the robot during stair climbing. Controller inputs are generated by processing the range data from two LASER range finders which scan the environment one horizontally and the other vertically. The experimental results of stair detection algorithm and stair climbing controller are demonstrated at the end.",
"title": ""
},
{
"docid": "6ec38db3aa02deb595e832de9fa8db96",
"text": "Electroactive polymer (EAP) actuators are electrically responsive materials that have several characteristics in common with natural muscles. Thus, they are being studied as 'artificial muscles' for a variety of biomimetic motion applications. EAP materials are commonly classified into two major families: ionic EAPs, activated by an electrically induced transport of ions and/or solvent, and electronic EAPs, activated by electrostatic forces. Although several EAP materials and their properties have been known for many decades, they have found very limited applications. Such a trend has changed recently as a result of an effective synergy of at least three main factors: key scientific breakthroughs being achieved in some of the existing EAP technologies; unprecedented electromechanical properties being discovered in materials previously developed for different purposes; and higher concentration of efforts for industrial exploitation. As an outcome, after several years of basic research, today the EAP field is just starting to undergo transition from academia into commercialization, with significant investments from large companies. This paper presents a brief overview on the full range of EAP actuator types and the most significant areas of interest for applications. It is hoped that this overview can instruct the reader on how EAPs can enable bioinspired motion systems.",
"title": ""
},
{
"docid": "0382ad43b6d31a347d9826194a7261ce",
"text": "In this paper, we present a representation for three-dimensional geometric animation sequences. Different from standard key-frame techniques, this approach is based on the determination of principal animation components and decouples the animation from the underlying geometry. The new representation supports progressive animation compression with spatial, as well as temporal, level-of-detail and high compression ratios. The distinction of animation and geometry allows for mapping animations onto other objects.",
"title": ""
},
{
"docid": "7971ac5a8abaefc2ebc814624b5c8546",
"text": "Multibody structure from motion (SfM) is the extension of classical SfM to dynamic scenes with multiple rigidly moving objects. Recent research has unveiled some of the mathematical foundations of the problem, but a practical algorithm which can handle realistic sequences is still missing. In this paper, we discuss the requirements for such an algorithm, highlight theoretical issues and practical problems, and describe how a static structure-from-motion framework needs to be extended to handle real dynamic scenes. Theoretical issues include different situations in which the number of independently moving scene objects changes: Moving objects can enter or leave the field of view, merge into the static background (e.g., when a car is parked), or split off from the background and start moving independently. Practical issues arise due to small freely moving foreground objects with few and short feature tracks. We argue that all of these difficulties need to be handled online as structure-from-motion estimation progresses, and present an exemplary solution using the framework of probabilistic model-scoring.",
"title": ""
},
{
"docid": "fb1959ff402e790d175639fb8ebc1d6d",
"text": "Programmable thermostats offer large potential energy savings without sacrificing comfort, but only if setback schedules are defined correctly. We present the concept of a self-programming thermostat that automatically creates an optimal setback schedule by sensing the occupancy statistics of a home. The system monitors occupancy using simple sensors in the home, similar to those already used in typical security systems, and the user defines the desired balance between energy and comfort using a single, intuitive knob. Our preliminary results show that this approach can reduce heating and cooling demand by up to 15% over the default setback schedule recommended by EnergyStar.",
"title": ""
},
{
"docid": "48d778934127343947b494fe51f56a33",
"text": "In this paper, we present a simple method for animating natural phenomena such as erosion, sedimentation, and acidic corrosion. We discretize the appropriate physical or chemical equations using finite differences, and we use the results to modify the shape of a solid body. We remove mass from an object by treating its surface as a level set and advecting it inward, and we deposit the chemical and physical byproducts into simulated fluid. Similarly, our technique deposits sediment onto a surface by advecting the level set outward. Our idea can be used for off-line high quality animations as well as interactive applications such as games, and we demonstrate both in this paper.",
"title": ""
},
{
"docid": "4a6c7b68ea23f910f0edc35f4542e5cb",
"text": "Microgrids have been proposed in order to handle the impacts of Distributed Generators (DGs) and make conventional grids suitable for large scale deployments of distributed generation. However, the introduction of microgrids brings some challenges. Protection of a microgrid and its entities is one of them. Due to the existence of generators at all levels of the distribution system and two distinct operating modes, i.e. Grid Connected and Islanded modes, the fault currents in a system vary substantially. Consequently, the traditional fixed current relay protection schemes need to be improved. This paper presents a conceptual design of a microgrid protection system which utilizes extensive communication to monitor the microgrid and update relay fault currents according to the variations in the system. The proposed system is designed so that it can respond to dynamic changes in the system such as connection/disconnection of DGs.",
"title": ""
},
{
"docid": "66720892b48188c10d05937367dbd25e",
"text": "In wireless sensor network (WSN) [1], energy efficiency is one of the very important issues. Protocols in WSNs are broadly classified as Hierarchical, Flat and Location Based routing protocols. Hierarchical routing is used to perform efficient routing in WSN. Here we concentrate on Hierarchical Routing protocols, different types of Hierarchical routing protocols, and PEGASIS (Power-Efficient Gathering in Sensor Information Systems) [2, 3] based routing",
"title": ""
},
{
"docid": "c3c15cc4edc816e53d1a8c19472ad203",
"text": "Among different Business Process Management strategies and methodologies, one common feature is to capture existing processes and representing the new processes adequately. Business Process Modelling (BPM) plays a crucial role on such an effort. This paper proposes a “to-be” inbound logistics business processes model using BPMN 2.0 standard specifying the structure and behaviour of the system within the SME environment. The generic framework of inbound logistics model consists of one main high-level module-based system named Order System comprising of four main sub-systems of the Order core, Procure, Auction, and Purchase systems. The system modelingis elaborately discussed to provide a business analytical perspective from various activities in inbound logistics system. Since the main purpose of the paper is to map out the functionality and behaviour of Logistics system requirements, employing the model is of a great necessity on the future applications at system development such as in the data modelling effort. Moreover, employing BPMN 2.0 method and providing explanatory techniques as a nifty guideline and framework to assist the business process practitioners, analysts and managers at identical systems.",
"title": ""
},
{
"docid": "de48850e635e5a15f8574a0022cbb1e5",
"text": "People use various social media for different purposes. The information on an individual site is often incomplete. When sources of complementary information are integrated, a better profile of a user can be built to improve online services such as verifying online information. To integrate these sources of information, it is necessary to identify individuals across social media sites. This paper aims to address the cross-media user identification problem. We introduce a methodology (MOBIUS) for finding a mapping among identities of individuals across social media sites. It consists of three key components: the first component identifies users' unique behavioral patterns that lead to information redundancies across sites; the second component constructs features that exploit information redundancies due to these behavioral patterns; and the third component employs machine learning for effective user identification. We formally define the cross-media user identification problem and show that MOBIUS is effective in identifying users across social media sites. This study paves the way for analysis and mining across social media sites, and facilitates the creation of novel online services across sites.",
"title": ""
},
{
"docid": "32fa965f20be0ae72c32ef7f096b32d4",
"text": "We systematically explored a spectrum of normalization algorithms related to Batch Normalization (BN) and propose a generalized formulation that simultaneously solves two major limitations of BN: (1) online learning and (2) recurrent learning. Our proposal is simpler and more biologically-plausible. Unlike previous approaches, our technique can be applied out of the box to all learning scenarios (e.g., online learning, batch learning, fully-connected, convolutional, feedforward, recurrent and mixed — recurrent and convolutional) and compare favorably with existing approaches. We also propose Lp Normalization for normalizing by different orders of statistical moments. In particular, L1 normalization is well-performing, simple to implement, fast to compute, more biologically-plausible and thus ideal for GPU or hardware implementations. This work was supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF 1231216. 1 ar X iv :1 61 0. 06 16 0v 1 [ cs .L G ] 1 9 O ct 2 01 6 Approach FF & FC FF & Conv Rec & FC Rec & Conv Online Learning Small Batch All Combined Original Batch Normalization(BN) 3 3 7 7 7 Suboptimal 7 Time-specific BN 3 3 Limited Limited 7 Suboptimal 7 Layer Normalization 3 7* 3 7* 3 3 7* Streaming Normalization 3 3 3 3 3 3 3 Table 1: An overview of normalization techiques for different tasks. 3: works well. 7: does not work well. FF: Feedforward. Rec: Recurrent. FC: Fully-connected. Conv: convolutional. Limited: time-specific BN requires recording normalization statistics for each timestep and thus may not generalize to novel sequence length. *Layer normalization does not fail on these tasks but perform significantly worse than the best approaches.",
"title": ""
},
{
"docid": "f178c362aac13afaf0229b83a8f5ace0",
"text": "Around the world, Rotating Savings and Credit Associations (ROSCAs) are a prevalent saving mechanism in markets with low financial inclusion ratios. ROSCAs, which rely on social networks, facilitate credit and financing needs for individuals and small businesses. Despite their benefits, informality in ROSCAs leads to problems driven by disagreements and frauds. This further necessitates ROSCA participants’ dependency on social capital. To overcome these problems, to build on ROSCA participants’ financial proclivities, and to enhance access and efficiency of ROSCAs, we explore opportunities to digitize ROSCAs in Pakistan by building a digital platform for collection and distribution of ROSCA funds. Digital ROSCAs have the potential to mitigate issues with safety and privacy of ROSCA money, frauds and defaults in ROSCAs, and record keeping, including payment history. In this context, we illustrate features of a digital ROSCA and examine aspects of gender, social capital, literacy, and religion as they relate to digital ROSCAs.",
"title": ""
},
{
"docid": "bf1dd3cf77750fe5e994fd6c192ba1be",
"text": "Increasingly manufacturers of smartphone devices are utilising a diverse range of sensors. This innovation has enabled developers to accurately determine a user's current context. In recent years there has also been a renewed requirement to use more types of context and reduce the current over-reliance on location as a context. Location based systems have enjoyed great success and this context is very important for mobile devices. However, using additional context data such as weather, time, social media sentiment and user preferences can provide a more accurate model of the user's current context. One area that has been significantly improved by the increased use of context in mobile applications is tourism. Traditionally tour guide applications rely heavily on location and essentially ignore other types of context. This has led to problems of inappropriate suggestions, due to inadequate content filtering and tourists experiencing information overload. These problems can be mitigated if appropriate personalisation and content filtering is performed. The intelligent decision making that this paper proposes with regard to the development of the VISIT [17] system, is a hybrid based recommendation approach made up of collaborative filtering, content based recommendation and demographic profiling. Intelligent reasoning will then be performed as part of this hybrid system to determine the weight/importance of each different context type.",
"title": ""
}
] | scidocsrr |
2a6609f28ccd04f9de7c4e9b02837b33 | A Tale of Two Kernels: Towards Ending Kernel Hardening Wars with Split Kernel | [
{
"docid": "7c05ef9ac0123a99dd5d47c585be391c",
"text": "Memory access bugs, including buffer overflows and uses of freed heap memory, remain a serious problem for programming languages like C and C++. Many memory error detectors exist, but most of them are either slow or detect a limited set of bugs, or both. This paper presents AddressSanitizer, a new memory error detector. Our tool finds out-of-bounds accesses to heap, stack, and global objects, as well as use-after-free bugs. It employs a specialized memory allocator and code instrumentation that is simple enough to be implemented in any compiler, binary translation system, or even in hardware. AddressSanitizer achieves efficiency without sacrificing comprehensiveness. Its average slowdown is just 73% yet it accurately detects bugs at the point of occurrence. It has found over 300 previously unknown bugs in the Chromium browser and many bugs in other software.",
"title": ""
},
{
"docid": "16186ff81d241ecaea28dcf5e78eb106",
"text": "Different kinds of people use computers now than several decades ago, but operating systems have not fully kept pace with this change. It is true that we have point-and-click GUIs now instead of command line interfaces, but the expectation of the average user is different from what it used to be, because the user is different. Thirty or 40 years ago, when operating systems began to solidify into their current form, almost all computer users were programmers, scientists, engineers, or similar professionals doing heavy-duty computation, and they cared a great deal about speed. Few teenagers and even fewer grandmothers spent hours a day behind their terminal. Early users expected the computer to crash often; reboots came as naturally as waiting for the neighborhood TV repairman to come replace the picture tube on their home TVs. All that has changed and operating systems need to change with the times.",
"title": ""
}
] | [
{
"docid": "3475d98ae13c4bab3424103f009f3fb1",
"text": "According to a small, lightweight, low-cost high performance inertial Measurement Units(IMU), an effective calibration method is implemented to evaluate the performance of Micro-Electro-Mechanical Systems(MEMS) sensors suffering from various errors to get acceptable navigation results. A prototype development board based on FPGA, dual core processor's configuration for INS/GPS integrated navigation system is designed for experimental testing. The significant error sources of IMU such as bias, scale factor, and misalignment are estimated in virtue of static tests, rate tests, thermal tests. Moreover, an effective intelligent calibration method combining with Kalman Filter is proposed to estimate parameters and compensate errors. The proposed approach has been developed and its efficiency is demonstrated by various experimental scenarios with real MEMS data.",
"title": ""
},
{
"docid": "41c317b0e275592ea9009f3035d11a64",
"text": "We introduce a distribution based model to learn bilingual word embeddings from monolingual data. It is simple, effective and does not require any parallel data or any seed lexicon. We take advantage of the fact that word embeddings are usually in form of dense real-valued lowdimensional vector and therefore the distribution of them can be accurately estimated. A novel cross-lingual learning objective is proposed which directly matches the distributions of word embeddings in one language with that in the other language. During the joint learning process, we dynamically estimate the distributions of word embeddings in two languages respectively and minimize the dissimilarity between them through standard back propagation algorithm. Our learned bilingual word embeddings allow to group each word and its translations together in the shared vector space. We demonstrate the utility of the learned embeddings on the task of finding word-to-word translations from monolingual corpora. Our model achieved encouraging performance on data in both related languages and substantially different languages.",
"title": ""
},
{
"docid": "363cc184a6cae8b7a81744676e339a80",
"text": "Dismissing-avoidant adults are characterized by expressing relatively low levels of attachment-related distress. However, it is unclear whether this reflects a relative absence of covert distress or an attempt to conceal covert distress. Two experiments were conducted to distinguish between these competing explanations. In Experiment 1, participants were instructed to suppression resulted in a decrease in the accessibility of abandonment-related thoughts for dismissing-avoidant adults. Experiment 2 demonstrated that attempts to suppress the attachment system resulted in decreases in physiological arousal for dismissing-avoidant adults. These experiments indicate that dismissing-avoidant adults are capable of suppressing the latent activation of their attachment system and are not simply concealing latent distress. The discussion focuses on development, cognitive, and social factors that may promote detachment.",
"title": ""
},
{
"docid": "329ab44195e7c20e696e5d7edc8b65a8",
"text": "In this work, we consider challenges relating to security for Industrial Control Systems (ICS) in the context of ICS security education and research targeted both to academia and industry. We propose to address those challenges through gamified attack training and countermeasure evaluation. We tested our proposed ICS security gamification idea in the context of the (to the best of our knowledge) first Capture-The-Flag (CTF) event targeted to ICS security called SWaT Security Showdown (S3). Six teams acted as attackers in a security competition leveraging an ICS testbed, with several academic defense systems attempting to detect the ongoing attacks. The event was conducted in two phases. The online phase (a jeopardy-style CTF) served as a training session. The live phase was structured as an attack-defense CTF. We acted as judges and we assigned points to the attacker teams according to a scoring system that we developed internally based on multiple factors, including realistic attacker models. We conclude the paper with an evaluation and discussion of the S3, including statistics derived from the data collected in each phase of S3.",
"title": ""
},
{
"docid": "6825c5294da2dfe7a26b6ac89ba8f515",
"text": "Restoring natural walking for amputees has been increasingly investigated because of demographic evolution, leading to increased number of amputations, and increasing demand for independence. The energetic disadvantages of passive pros-theses are clear, and active prostheses are limited in autonomy. This paper presents the simulation, design and development of an actuated knee-ankle prosthesis based on a variable stiffness actuator with energy transfer from the knee to the ankle. This approach allows a good approximation of the joint torques and the kinematics of the human gait cycle while maintaining compliant joints and reducing energy consumption during level walking. This first prototype consists of a passive knee and an active ankle, which are energetically coupled to reduce the power consumption.",
"title": ""
},
{
"docid": "fed23432144a6929c4f3442b10157771",
"text": "Knowledge has widely been acknowledged as one of the most important factors for corporate competitiveness, and we have witnessed an explosion of IS/IT solutions claiming to provide support for knowledge management (KM). A relevant question to ask, though, is how systems and technology intended for information such as the intranet can be able to assist in the managing of knowledge. To understand this, we must examine the relationship between information and knowledge. Building on Polanyi’s theories, I argue that all knowledge is tacit, and what can be articulated and made tangible outside the human mind is merely information. However, information and knowledge affect one another. By adopting a multi-perspective of the intranet where information, awareness, and communication are all considered, this interaction can best be supported and the intranet can become a useful and people-inclusive KM environment. 1. From philosophy to IT Ever since the ancient Greek period, philosophers have discussed what knowledge is. Early thinkers such as Plato and Aristotle where followed by Hobbes and Locke, Kant and Hegel, and into the 20th century by the likes of Wittgenstein, Popper, and Kuhn, to name but a few of the more prominent western philosophers. In recent years, we have witnessed a booming interest in knowledge also from other disciplines; organisation theorists, information system developers, and economists have all been swept away by the knowledge management avalanche. It seems, though, that the interest is particularly strong within the IS/IT community, where new opportunities to develop computer systems are welcomed. A plausible question to ask then is how knowledge relates to information technology (IT). Can IT at all be used to handle 0-7695-1435-9/02 $ knowledge, and if so, what sort of knowledge? What sorts of knowledge are there? What is knowledge? It seems we have little choice but to return to these eternal questions, but belonging to the IS/IT community, we should not approach knowledge from a philosophical perspective. As observed by Alavi and Leidner, the knowledge-based theory of the firm was never built on a universal truth of what knowledge really is but on a pragmatic interest in being able to manage organisational knowledge [2]. The discussion in this paper shall therefore be aimed at addressing knowledge from an IS/IT perspective, trying to answer two overarching questions: “What does the relationship between information and knowledge look like?” and “What role does an intranet have in this relationship?” The purpose is to critically review the contemporary KM literature in order to clarify the relationships between information and knowledge that commonly and implicitly are assumed within the IS/IT community. Epistemologically, this paper shall address the difference between tacit and explicit knowledge by accounting for some of the views more commonly found in the KM literature. Some of these views shall also be questioned, and the prevailing assump tion that tacit and explicit are two forms of knowledge shall be criticised by returning to Polanyi’s original work. My interest in the tacit side of knowledge, i.e. the aspects of knowledge that is omnipresent, taken for granted, and affecting our understanding without us being aware of it, has strongly influenced the content of this paper. Ontologywise, knowledge may be seen to exist on different levels, i.e. individual, group, organisation and inter-organisational [23]. 
Here, my primary interest is on the group and organisational levels. However, these two levels are obviously made up of individuals and we are thus bound to examine the personal aspects of knowledge as well, though be it from a macro perspective. 17.00 (c) 2002 IEEE 1 Proceedings of the 35th Hawaii International Conference on System Sciences 2002 2. Opposite traditions – and a middle way? When examining the knowledge literature, two separate tracks can be identified: the commodity view and the community view [35]. The commodity view of or the objective approach to knowledge as some absolute and universal truth has since long been the dominating view within science. Rooted in the positivism of the mid-19th century, the commodity view is still especially strong in the natural sciences. Disciples of this tradition understand knowledge as an artefact that can be handled in discrete units and that people may possess. Knowledge is a thing for which we can gain evidence, and knowledge as such is separated from the knower [33]. Metaphors such as drilling, mining, and harvesting are used to describe how knowledge is being managed. There is also another tradition that can be labelled the community view or the constructivist approach. This tradition can be traced back to Locke and Hume but is in its modern form rooted in the critique of the established quantitative approach to science that emerged primarily amongst social scientists during the 1960’s, and resulted in the publication of books by Garfinkel, Bourdieu, Habermas, Berger and Luckmann, and Glaser and Strauss. These authors argued that reality (and hence also knowledge) should be understood as socially constructed. According to this tradition, it is impossible to define knowledge universally; it can only be defined in practice, in the activities of and interactions between individuals. Thus, some understand knowledge to be universal and context-independent while others conceive it as situated and based on individual experiences. Maybe it is a little bit Author(s) Data Informa",
"title": ""
},
{
"docid": "85c4c0ffb224606af6bc3af5411d31ca",
"text": "Sequence-to-sequence models with attention have been successful for a variety of NLP problems, but their speed does not scale well for tasks with long source sequences such as document summarization. We propose a novel coarse-to-fine attention model that hierarchically reads a document, using coarse attention to select top-level chunks of text and fine attention to read the words of the chosen chunks. While the computation for training standard attention models scales linearly with source sequence length, our method scales with the number of top-level chunks and can handle much longer sequences. Empirically, we find that while coarse-tofine attention models lag behind state-ofthe-art baselines, our method achieves the desired behavior of sparsely attending to subsets of the document for generation.",
"title": ""
},
{
"docid": "404fce3f101d0a1d22bc9afdf854b1e0",
"text": "The intimate connection between the brain and the heart was enunciated by Claude Bernard over 150 years ago. In our neurovisceral integration model we have tried to build on this pioneering work. In the present paper we further elaborate our model. Specifically we review recent neuroanatomical studies that implicate inhibitory GABAergic pathways from the prefrontal cortex to the amygdala and additional inhibitory pathways between the amygdala and the sympathetic and parasympathetic medullary output neurons that modulate heart rate and thus heart rate variability. We propose that the default response to uncertainty is the threat response and may be related to the well known negativity bias. We next review the evidence on the role of vagally mediated heart rate variability (HRV) in the regulation of physiological, affective, and cognitive processes. Low HRV is a risk factor for pathophysiology and psychopathology. Finally we review recent work on the genetics of HRV and suggest that low HRV may be an endophenotype for a broad range of dysfunctions.",
"title": ""
},
{
"docid": "6ce3156307df03190737ee7c0ae24c75",
"text": "Current methods for knowledge graph (KG) representation learning focus solely on the structure of the KG and do not exploit any kind of external information, such as visual and linguistic information corresponding to the KG entities. In this paper, we propose a multimodal translation-based approach that defines the energy of a KG triple as the sum of sub-energy functions that leverage both multimodal (visual and linguistic) and structural KG representations. Next, a ranking-based loss is minimized using a simple neural network architecture. Moreover, we introduce a new large-scale dataset for multimodal KG representation learning. We compared the performance of our approach to other baselines on two standard tasks, namely knowledge graph completion and triple classification, using our as well as the WN9-IMG dataset.1 The results demonstrate that our approach outperforms all baselines on both tasks and datasets.",
"title": ""
},
{
"docid": "f153ee3853f40018ed0ae8b289b1efcf",
"text": "In this paper, the common mode (CM) EMI noise characteristic of three popular topologies of resonant converter (LLC, CLL and LCL) is analyzed. The comparison of their EMI performance is provided. A state-of-art LLC resonant converter with matrix transformer is used as an example to further illustrate the CM noise problem of resonant converters. The CM noise model of LLC resonant converter is provided. A novel method of shielding is provided for matrix transformer to reduce common mode noise. The CM noise of LLC converter has a significantly reduction with shielding. The loss of shielding is analyzed by finite element analysis (FEA) tool. Then the method to reduce the loss of shielding is discussed. There is very little efficiency sacrifice for LLC converter with shielding according to the experiment result.",
"title": ""
},
{
"docid": "308622daf5f4005045f3d002f5251f8c",
"text": "The design of multiple human activity recognition applications in areas such as healthcare, sports and safety relies on wearable sensor technologies. However, when making decisions based on the data acquired by such sensors in practical situations, several factors related to sensor data alignment, data losses, and noise, among other experimental constraints, deteriorate data quality and model accuracy. To tackle these issues, this paper presents a data-driven iterative learning framework to classify human locomotion activities such as walk, stand, lie, and sit, extracted from the Opportunity dataset. Data acquired by twelve 3-axial acceleration sensors and seven inertial measurement units are initially de-noised using a two-stage consecutive filtering approach combining a band-pass Finite Impulse Response (FIR) and a wavelet filter. A series of statistical parameters are extracted from the kinematical features, including the principal components and singular value decomposition of roll, pitch, yaw and the norm of the axial components. The novel interactive learning procedure is then applied in order to minimize the number of samples required to classify human locomotion activities. Only those samples that are most distant from the centroids of data clusters, according to a measure presented in the paper, are selected as candidates for the training dataset. The newly built dataset is then used to train an SVM multi-class classifier. The latter will produce the lowest prediction error. The proposed learning framework ensures a high level of robustness to variations in the quality of input data, while only using a much lower number of training samples and therefore a much shorter training time, which is an important consideration given the large size of the dataset.",
"title": ""
},
{
"docid": "9d2f569d1105bdac64071541eb01c591",
"text": "1. Outline the principles of the diagnostic tests used to confirm brain death. . 2. The patient has been certified brain dead and her relatives agree with her previously stated wishes to donate her organs for transplantation. Outline the supportive measures which should be instituted to maintain this patient’s organs in an optimal state for subsequent transplantation of the heart, lungs, liver and kidneys.",
"title": ""
},
{
"docid": "01a649c8115810c8318e572742d9bd00",
"text": "In this effort we propose a data-driven learning framework for reduced order modeling of fluid dynamics. Designing accurate and efficient reduced order models for nonlinear fluid dynamic problems is challenging for many practical engineering applications. Classical projection-based model reduction methods generate reduced systems by projecting full-order differential operators into low-dimensional subspaces. However, these techniques usually lead to severe instabilities in the presence of highly nonlinear dynamics, which dramatically deteriorates the accuracy of the reduced-order models. In contrast, our new framework exploits linear multistep networks, based on implicit Adams-Moulton schemes, to construct the reduced system. The advantage is that the method optimally approximates the full order model in the low-dimensional space with a given supervised learning task. Moreover, our approach is non-intrusive, such that it can be applied to other complex nonlinear dynamical systems with sophisticated legacy codes. We demonstrate the performance of our method through the numerical simulation of a twodimensional flow past a circular cylinder with Reynolds number Re = 100. The results reveal that the new data-driven model is significantly more accurate than standard projectionbased approaches.",
"title": ""
},
{
"docid": "1f20204533ade658723cc56b429d5792",
"text": "ILQUA first participated in TREC QA main task in 2003. This year we have made modifications to the system by removing some components with poor performance and enhanced the system with new methods and new components. The newly built ILQUA is an IE-driven QA system. To answer “Factoid” and “List” questions, we apply our answer extraction methods on NE-tagged passages. The answer extraction methods adopted here are surface text pattern matching, n-gram proximity search and syntactic dependency matching. Surface text pattern matching has been applied in some previous TREC QA systems. However, the patterns used in ILQUA are automatically generated by a supervised learning system and represented in a format of regular expressions which can handle up to 4 question terms. N-gram proximity search and syntactic dependency matching are two steps of one component. N-grams of question terms are matched around every named entity in the candidate passages and a list of named entities are generated as answer candidate. These named entities go through a multi-level syntactic dependency matching until a final answer is generated. To answer “Other” questions, we parse the answer sentences of “Other” questions in 2004 main task and built syntactic patterns combined with semantic features. These patterns are applied to the parsed candidate sentences to extract answers of “Other” questions. The evaluation results showed ILQUA has reached an accuracy of 30.9% for factoid questions. ILQUA is an IE-driven QA system without any pre-compiled knowledge base of facts and it doesn’t get reference from any other external search engine such as Google. The disadvantage of an IE-driven QA system is that there are some types of questions that can’t be answered because the answer in the passages can’t be tagged as appropriate NE types. Figure 1 shows the diagram of the ILQUA architecture.",
"title": ""
},
{
"docid": "73333ad599c6bbe353e46d7fd4f51768",
"text": "The past 60 years have seen huge advances in many of the scientific, technological and managerial factors that should tend to raise the efficiency of commercial drug research and development (R&D). Yet the number of new drugs approved per billion US dollars spent on R&D has halved roughly every 9 years since 1950, falling around 80-fold in inflation-adjusted terms. There have been many proposed solutions to the problem of declining R&D efficiency. However, their apparent lack of impact so far and the contrast between improving inputs and declining output in terms of the number of new drugs make it sensible to ask whether the underlying problems have been correctly diagnosed. Here, we discuss four factors that we consider to be primary causes, which we call the 'better than the Beatles' problem; the 'cautious regulator' problem; the 'throw money at it' tendency; and the 'basic research–brute force' bias. Our aim is to provoke a more systematic analysis of the causes of the decline in R&D efficiency.",
"title": ""
},
{
"docid": "0c9bbeaa783b2d6270c735f004ecc47f",
"text": "This paper pulls together existing theory and evidence to assess whether international financial liberalization, by improving the functioning of domestic financial markets and banks, accelerates economic growth. The analysis suggests that the answer is yes. First, liberalizing restrictions on international portfolio flows tends to enhance stock market liquidity. In turn, enhanced stock market liquidity accelerates economic growth primarily by boosting productivity growth. Second, allowing greater foreign bank presence tends to enhance the efficiency of the domestic banking system. In turn, better-developed banks spur economic growth primarily by accelerating productivity growth. Thus, international financial integration can promote economic development by encouraging improvements in the domestic financial system. *Levine: Finance Department, Carlson School of Management, University of Minnesota, 321 19 Avenue South, Minneapolis, MN 55455. Tel: 612-624-9551, Fax: 612-626-1335, E-mail: rlevine@csom.umn.edu. I thank, without implicating, Maria Carkovic and two anonymous referees for very helpful comments. JEL Classification Numbers: F3, G2, O4 Abbreviations: GDP, TFP Number of Figures: 0 Number of Tables: 2 Date: September 5, 2000 Address of Contact Author: Ross Levine, Finance Department, Carlson School of Management, University of Minnesota, 321 19 Avenue South, Minneapolis, MN 55455. Tel: 612-624-9551, Fax: 612-626-1335, E-mail: rlevine@csom.umn.edu.",
"title": ""
},
{
"docid": "f4edb4f6bc0d0e9b31242cf860f6692d",
"text": "Search on the web is a delay process and it can be hard task especially for beginners when they attempt to use a keyword query language. Beginner (inexpert) searchers commonly attempt to find information with ambiguous queries. These ambiguous queries make the search engine returns irrelevant results. This work aims to get more relevant pages to query through query reformulation and expanding search space. The proposed system has three basic parts WordNet, Google search engine and Genetic Algorithm. Every part has a special task. The system uses WordNet to remove ambiguity from queries by displaying the meaning of every keyword in user query and selecting the proper meaning for keywords. The system obtains synonym for every keyword from WordNet and generates query list. Genetic algorithm is used to create generation for every query in query list. Every query in system is navigated using Google search engine to obtain results from group of documents on the Web. The system has been tested on number of ambiguous queries and it has obtained more relevant URL to user query especially when the query has one keyword. The results are promising and therefore open further research directions.",
"title": ""
},
{
"docid": "29d2a613f7da6b99e35eb890d590f4ca",
"text": "Recent work has focused on generating synthetic imagery and augmenting real imagery to increase the size and variability of training data for learning visual tasks in urban scenes. This includes increasing the occurrence of occlusions or varying environmental and weather effects. However, few have addressed modeling the variation in the sensor domain. Unfortunately, varying sensor effects can degrade performance and generalizability of results for visual tasks trained on human annotated datasets. This paper proposes an efficient, automated physicallybased augmentation pipeline to vary sensor effects – specifically, chromatic aberration, blur, exposure, noise, and color cast – across both real and synthetic imagery. In particular, this paper illustrates that augmenting training datasets with the proposed pipeline improves the robustness and generalizability of object detection on a variety of benchmark vehicle datasets.",
"title": ""
},
{
"docid": "5873204bba0bd16262274d4961d3d5f9",
"text": "The analysis of the adaptive behaviour of many different kinds of systems such as humans, animals and machines, requires more general ways of assessing their cognitive abilities. This need is strengthened by increasingly more tasks being analysed for and completed by a wider diversity of systems, including swarms and hybrids. The notion of universal test has recently emerged in the context of machine intelligence evaluation as a way to define and use the same cognitive test for a variety of systems, using some principled tasks and adapting the interface to each particular subject. However, how far can universal tests be taken? This paper analyses this question in terms of subjects, environments, space-time resolution, rewards and interfaces. This leads to a number of findings, insights and caveats, according to several levels where universal tests may be progressively more difficult to conceive, implement and administer. One of the most significant contributions is given by the realisation that more universal tests are defined as maximisations of less universal tests for a variety of configurations. This means that universal tests must be necessarily adaptive.",
"title": ""
}
] | scidocsrr |
dfe84773f14b5e5f43ff495ec2509a45 | Dense visual SLAM | [
{
"docid": "75c2b1565c61136bf014d5e67eb52daf",
"text": "This paper describes a system for dense depth estimation for multiple images in real-time. The algorithm runs almost entirely on standard graphics hardware, leaving the main CPU free for other tasks as image capture, compression and storage during scene capture. We follow a plain-sweep approach extended by truncated SSD scores, shiftable windows and best camera selection. We do not need specialized hardware and exploit the computational power of freely programmable PC graphics hardware. Dense depth maps are computed with up to 20 fps.",
"title": ""
}
] | [
{
"docid": "f3641cacf284444ac45f0e085c7214bf",
"text": "Recognition that the entire central nervous system (CNS) is highly plastic, and that it changes continually throughout life, is a relatively new development. Until very recently, neuroscience has been dominated by the belief that the nervous system is hardwired and changes at only a few selected sites and by only a few mechanisms. Thus, it is particularly remarkable that Sir John Eccles, almost from the start of his long career nearly 80 years ago, focused repeatedly and productively on plasticity of many different kinds and in many different locations. He began with muscles, exploring their developmental plasticity and the functional effects of the level of motor unit activity and of cross-reinnervation. He moved into the spinal cord to study the effects of axotomy on motoneuron properties and the immediate and persistent functional effects of repetitive afferent stimulation. In work that combined these two areas, Eccles explored the influences of motoneurons and their muscle fibers on one another. He studied extensively simple spinal reflexes, especially stretch reflexes, exploring plasticity in these reflex pathways during development and in response to experimental manipulations of activity and innervation. In subsequent decades, Eccles focused on plasticity at central synapses in hippocampus, cerebellum, and neocortex. His endeavors extended from the plasticity associated with CNS lesions to the mechanisms responsible for the most complex and as yet mysterious products of neuronal plasticity, the substrates underlying learning and memory. At multiple levels, Eccles' work anticipated and helped shape present-day hypotheses and experiments. He provided novel observations that introduced new problems, and he produced insights that continue to be the foundation of ongoing basic and clinical research. This article reviews Eccles' experimental and theoretical contributions and their relationships to current endeavors and concepts. It emphasizes aspects of his contributions that are less well known at present and yet are directly relevant to contemporary issues.",
"title": ""
},
{
"docid": "7ae8e724297985e0531f90b3da3424f4",
"text": "This study examined the extent to which previous experience with duration in first language (L1) vowel distinctions affects the use of duration when perceiving vowels in a second language (L2). Native speakers of Greek (where duration is not used to differentiate vowels) and Japanese (where vowels are distinguished by duration) first identified and rated the eleven English monophthongs, embedded in /bVb/ and /bVp/ contexts, in terms of their L1 categories and then carried out discrimination tests on those English vowels. The results demonstrated that both L2 groups were sensitive to durational cues when perceiving the English vowels. However, listeners were found to temporally assimilate L2 vowels to L1 category/categories. Temporal information was available in discrimination only when the listeners' L1 duration category/categories did not interfere with the target duration categories and hence the use of duration in such cases cannot be attributed to its perceptual salience as has been proposed.",
"title": ""
},
{
"docid": "7a6a1bf378f5bdfc6c373dc55cf0dabd",
"text": "In this paper, we propose and study an Asynchronous parallel Greedy Coordinate Descent (Asy-GCD) algorithm for minimizing a smooth function with bounded constraints. At each iteration, workers asynchronously conduct greedy coordinate descent updates on a block of variables. In the first part of the paper, we analyze the theoretical behavior of Asy-GCD and prove a linear convergence rate. In the second part, we develop an efficient kernel SVM solver based on Asy-GCD in the shared memory multi-core setting. Since our algorithm is fully asynchronous—each core does not need to idle and wait for the other cores—the resulting algorithm enjoys good speedup and outperforms existing multi-core kernel SVM solvers including asynchronous stochastic coordinate descent and multi-core LIBSVM.",
"title": ""
},
{
"docid": "d2421e2458f6f2ce55cb9664542a7ea8",
"text": "Sensor webs consisting of nodes with limited battery power and wireless communications are deployed to collect useful information from the field. Gathering sensed information in an energy efficient manner is critical to operating the sensor network for a long period of time. In [12], a data collection problem is defined where, in a round of communication, each sensor node has a packet to be sent to the distant base station. There is some fixed amount of energy cost in the electronics when transmitting or receiving a packet and a variable cost when transmitting a packet which depends on the distance of transmission. If each node transmits its sensed data directly to the base station, then it will deplete its power quickly. The LEACH protocol presented in [12] is an elegant solution where clusters are formed to fuse data before transmitting to the base station. By randomizing the cluster-heads chosen to transmit to the base station, LEACH achieves a factor of 8 improvement compared to direct transmissions, as measured in terms of when nodes die. An improved version of LEACH, called LEACH-C, is presented in [14], where the central base station performs the clustering to improve energy efficiency. In this paper, we present an improved scheme, called PEGASIS (Power-Efficient GAthering in Sensor Information Systems), which is a near-optimal chain-based protocol that minimizes energy. In PEGASIS, each node communicates only with a close neighbor and takes turns transmitting to the base station, thus reducing the amount of energy spent per round. Simulation results show that PEGASIS performs better than LEACH by about 100 to 200 percent when 1 percent, 25 percent, 50 percent, and 100 percent of nodes die for different network sizes and topologies. For many applications, in addition to minimizing energy, it is also important to consider the delay incurred in gathering sensed data. We capture this with the energy delay metric and present schemes that attempt to balance the energy and delay cost for data gathering from sensor networks. Since most of the delay factor is in the transmission time, we measure delay in terms of number of transmissions to accomplish a round of data gathering. Therefore, delay can be reduced by allowing simultaneous transmissions when possible in the network. With CDMA capable sensor nodes [11], simultaneous data transmissions are possible with little interference. In this paper, we present two new schemes to minimize energy delay using CDMA and non-CDMA sensor nodes. If the goal is to minimize only the delay cost, then a binary combining scheme can be used to accomplish this task in about logN units of delay with parallel communications and incurring a slight increase in energy cost. With CDMA capable sensor nodes, a chain-based binary scheme performs best in terms of energy delay. If the sensor nodes are not CDMA capable, then parallel communications are possible only among spatially separated nodes and a chain-based 3-level hierarchy scheme performs well. We compared the performance of direct, LEACH, and our schemes with respect to energy delay using extensive simulations for different network sizes. Results show that our schemes perform 80 or more times better than the direct scheme and also outperform the LEACH protocol.",
"title": ""
},
{
"docid": "182bb07fb7dbbaf17b6c7a084f1c4fb2",
"text": "Network Functions Virtualization (NFV) is an upcoming paradigm where network functionality is virtualized and split up into multiple building blocks that can be chained together to provide the required functionality. This approach increases network flexibility and scalability as these building blocks can be allocated and reallocated at runtime depending on demand. The success of this approach depends on the existence and performance of algorithms that determine where, and how these building blocks are instantiated. In this paper, we present and evaluate a formal model for resource allocation of virtualized network functions within NFV environments, a problem we refer to as Virtual Network Function Placement (VNF-P). We focus on a hybrid scenario where part of the services may be provided by dedicated physical hardware, and where part of the services are provided using virtualized service instances. We evaluate the VNF-P model using a small service provider scenario and two types of service chains, and evaluate its execution speed. We find that the algorithms finish in 16 seconds or less for a small service provider scenario, making it feasible to react quickly to changing demand.",
"title": ""
},
{
"docid": "91c9dcfd3428fb79afd8d99722c95b69",
"text": "In this article we describe results of our research on the disambiguation of user queries using ontologies for categorization. We present an approach to cluster search results by using classes or “Sense Folders” ~prototype categories! derived from the concepts of an assigned ontology, in our case WordNet. Using the semantic relations provided from such a resource, we can assign categories to prior, not annotated documents. The disambiguation of query terms in documents with respect to a user-specific ontology is an important issue in order to improve the retrieval performance for the user. Furthermore, we show that a clustering process can enhance the semantic classification of documents, and we discuss how this clustering process can be further enhanced using only the most descriptive classes of the ontology. © 2006 Wiley Periodicals, Inc.",
"title": ""
},
{
"docid": "526a2bcf3b9a32a27fe5f7dd431cd231",
"text": "This study examined the effects of a waitlist policy for state psychiatric hospitals on length of stay and time to readmission using data from North Carolina for 2004–2010. Cox proportional hazards models tested the hypothesis that patients were discharged “quicker-but-sicker” post-waitlist, as hospitals struggled to manage admission delays and quickly admit waitlisted patients. Results refute this hypothesis, indicating that waitlists were associated with increased length of stay and time to readmission. Further research is needed to evaluate patients’ clinical outcomes directly and to examine the impact of state hospital waitlists in other areas, such as state hospital case mix, local emergency departments, and outpatient mental health agencies.",
"title": ""
},
{
"docid": "125e3793333d94347ec53bea19b4dd56",
"text": "Minutiae are important features in the fingerprints matching. The effective of minutiae extraction depends greatly on the results of fingerprint enhancement. This paper proposes a novel fingerprint enhancement method for direct gray scale extracting minutiae based on combining Gabor filters with the Adaptive Modified Finite Radon Transform (AMFRAT) filters. First, the proposed method uses Gabor filters as band-pass filters for deleting the noise and clarifying ridges. Next, AMFRAT filters are applied for connecting broken ridges together, filling the created holes and clarifying linear symmetry of ridges quickly. AMFRAT is the MFRAT filter, the window size of which is adaptively adjusted according to the coherence values. The small window size is for high curvature ridge areas (small coherence value), and vice versa. As the result, the ridges are the linear symmetry areas, and more suitable for direct gray scale minutiae extraction. Finally, linear symmetry filter is only used for locating minutiae in an inverse model, as \"lack of linear symmetry\" occurs at minutiae points. Experimental results on FVC2004 databases DB4 (set A) shows that the proposed method is capable of improving the goodness index (GI).",
"title": ""
},
{
"docid": "3d1093e183b4e9c656e5dd20efe5a311",
"text": "In the past, tactile displays were of one of two kinds: they were either shape displays, or relied on distributed vibrotactile stimulation. A tactile display device is described in this paper which is distinguished by the fact that it relies exclusively on lateral skin stretch stimulation. It is constructed from an array of 64 closely packed piezoelectric actuators connected to a membrane. The deformations of this membrane cause an array of 112 skin contactors to create programmable lateral stress fields in the skin of the finger pad. Some preliminary observations are reported with respect to the sensations that this kind of display can produce. INTRODUCTION Tactile displays are devices used to provide subjects with the sensation of touching objects directly with the skin. Previously reported tactile displays portray distributed tactile stimulation as a one of two possibilities [1]. One class of displays, termed “shape displays”, typically consists of devices having a dense array of skin contactors which can move orthogonally to the surface of the skin in an attempt to display the shape of objects via its spatially sampled approximation. There exist numerous examples of such displays, for recent designs see [2; 3; 4; 5]. In the interest of brevity, the distinction between “pressure displays” and shape displays is not made here. However, an important distinction with regard to the focus of this paper must be made between displays intended to cause no slip between the contactors and the skin and those intended for the opposite case.1 Displays which are intended to be used without slip can be mounted on a carrier device [6; 2]. 1Braille displays can be found in this later category. Another class of displays takes advantage of vibrotactile stimulation. With this technique, an array of tactilly active sites stimulates the skin using an array of contactors vibrating at a fixed frequency. This frequency is selected to maximize the loudness of the sensation (200–300 Hz). Tactile images are associated, not to the quasi-static depth of indentation, but the amplitude of the vibration [7].2 Figure 1. Typical Tactile Display. Shape displays control the rising movement of the contactors (resp. the force applied to). In a vibrotactile display, the contactors oscillate at a fixed frequency. Devices intended to be used as general purpose tactile displays cause stimulation by independently and simultaneously activated skin contactors according to patterns that depend both on space and on time. Such patterns may be thought of as “tactile images”, but because of the rapid adaptation of the skin mechanoreceptors, the images should more accurately be described as “tactile movies”. It is also accepted that the separation between these contactors needs to be of the order of one millimeter so that the resulting percept fuse into one single continuous image. In addition, when contactors apply vibratory signals to the skin at a frequency, which may range from a few Hertz to a few kiloHertz, a perception is derived which may be described 2The Optacon device is a well known example [8]. Proceedings of the Haptic Interfaces for Virtual Environment and Teleoperator Systems Symposium, ASME International Mechanical Engineering Congress & Exposition 2000, Orlando, Florida, USA . pp. 1309-1314",
"title": ""
},
{
"docid": "dc2c952b5864a167c19b34be6db52389",
"text": "Data mining is popularly used to combat frauds because of its effectiveness. It is a well-defined procedure that takes data as input and produces models or patterns as output. Neural network, a data mining technique was used in this study. The design of the neural network (NN) architecture for the credit card detection system was based on unsupervised method, which was applied to the transactions data to generate four clusters of low, high, risky and high-risk clusters. The self-organizing map neural network (SOMNN) technique was used for solving the problem of carrying out optimal classification of each transaction into its associated group, since a prior output is unknown. The receiver-operating curve (ROC) for credit card fraud (CCF) detection watch detected over 95% of fraud cases without causing false alarms unlike other statistical models and the two-stage clusters. This shows that the performance of CCF detection watch is in agreement with other detection software, but performs better.",
"title": ""
},
{
"docid": "aef85d4f84b56e1355c5a0d7e3354e2e",
"text": "Algorithms based on trust regions have been shown to be robust methods for unconstrained optimization problems. All existing methods, either based on the dogleg strategy or Hebden-More iterations, require solution of system of linear equations. In large scale optimization this may be prohibitively expensive. It is shown in this paper that an approximate solution of the trust region problem may be found by the preconditioned conjugate gradient method. This may be regarded as a generalized dogleg technique where we asymptotically take the inexact quasi-Newton step. We also show that we have the same convergence properties as existing methods based on the dogleg strategy using an approximate Hessian.",
"title": ""
},
{
"docid": "1368a00839a5dd1edc7dbaced35e56f1",
"text": "Nowadays, transfer of the health care from ambulance to patient's home needs higher demand on patient's mobility, comfort and acceptance of the system. Therefore, the goal of this study is to proof the concept of a system which is ultra-wearable, less constraining and more suitable for long term measurements than conventional ECG monitoring systems which use conductive electrolytic gels for low impedance electrical contact with skin. The developed system is based on isolated capacitive coupled electrodes without any galvanic contact to patient's body and does not require the common right leg electrode. Measurements performed under real conditions show that it is possible to acquire well known ECG waveforms without the common electrode when the patient is sitting and even during walking. Results of the validation process demonstrate that the system performance is comparable to the conventional ECG system while the wearability is increased.",
"title": ""
},
{
"docid": "e0b253fc2216e7985ccf9d5631e827f5",
"text": "Solitons, nonlinear self-trapped wavepackets, have been extensively studied in many and diverse branches of physics such as optics, plasmas, condensed matter physics, fluid mechanics, particle physics and even astrophysics. Interestingly, over the past two decades, the field of solitons and related nonlinear phenomena has been substantially advanced and enriched by research and discoveries in nonlinear optics. While optical solitons have been vigorously investigated in both spatial and temporal domains, it is now fair to say that much soliton research has been mainly driven by the work on optical spatial solitons. This is partly due to the fact that although temporal solitons as realized in fiber optic systems are fundamentally one-dimensional entities, the high dimensionality associated with their spatial counterparts has opened up altogether new scientific possibilities in soliton research. Another reason is related to the response time of the nonlinearity. Unlike temporal optical solitons, spatial solitons have been realized by employing a variety of noninstantaneous nonlinearities, ranging from the nonlinearities in photorefractive materials and liquid crystals to the nonlinearities mediated by the thermal effect, thermophoresis and the gradient force in colloidal suspensions. Such a diversity of nonlinear effects has given rise to numerous soliton phenomena that could otherwise not be envisioned, because for decades scientists were of the mindset that solitons must strictly be the exact solutions of the cubic nonlinear Schrödinger equation as established for ideal Kerr nonlinear media. As such, the discoveries of optical spatial solitons in different systems and associated new phenomena have stimulated broad interest in soliton research. In particular, the study of incoherent solitons and discrete spatial solitons in optical periodic media not only led to advances in our understanding of fundamental processes in nonlinear optics and photonics, but also had a very important impact on a variety of other disciplines in nonlinear science. In this paper, we provide a brief overview of optical spatial solitons. This review will cover a variety of issues pertaining to self-trapped waves supported by different types of nonlinearities, as well as various families of spatial solitons such as optical lattice solitons and surface solitons. Recent developments in the area of optical spatial solitons, such as 3D light bullets, subwavelength solitons, self-trapping in soft condensed matter and spatial solitons in systems with parity-time symmetry will also be discussed briefly.",
"title": ""
},
{
"docid": "c57cbe432fdab3f415d2c923bea905ff",
"text": "Through Web-based consumer opinion platforms (e.g., epinions.com), the Internet enables customers to share their opinions on, and experiences with, goods and services with a multitude of other consumers; that is, to engage in electronic wordof-mouth (eWOM) communication. Drawing on findings from research on virtual communities and traditional word-of-mouth literature, a typology for motives of consumer online articulation is © 2004 Wiley Periodicals, Inc. and Direct Marketing Educational Foundation, Inc.",
"title": ""
},
{
"docid": "5dcc5026f959b202240befbe56857ac4",
"text": "When a meta-analysis on results from experimental studies is conducted, differences in the study design must be taken into consideration. A method for combining results across independent-groups and repeated measures designs is described, and the conditions under which such an analysis is appropriate are discussed. Combining results across designs requires that (a) all effect sizes be transformed into a common metric, (b) effect sizes from each design estimate the same treatment effect, and (c) meta-analysis procedures use design-specific estimates of sampling variance to reflect the precision of the effect size estimates.",
"title": ""
},
{
"docid": "5ed9fde132f44ff2f2354b5d9f5b14ab",
"text": "An issue in microfabrication of the fluidic channels in glass/poly (dimethyl siloxane) (PDMS) is the absence of a well-defined study of the bonding strength between the surfaces making up these channels. Although most of the research papers mention the use of oxygen plasma for developing chemical (siloxane) bonds between the participating surfaces, yet they only define a certain set of parameters, tailored to a specific setup. An important requirement of all the microfluidics/biosensors industry is the development of a general regime, which defines a systematic method of gauging the bond strength between the participating surfaces in advance by correlation to a common parameter. This enhances the reliability of the devices and also gives a structured approach to its future large-scale manufacturing. In this paper, we explore the possibility of the existence of a common scale, which can be used to gauge bond strength between various surfaces. We find that the changes in wettability of surfaces owing to various levels of plasma exposure can be a useful parameter to gauge the bond strength. We obtained a good correlation between contact angle of deionized water (a direct measure of wettability) on the PDMS and glass surfaces based on various dosages or oxygen plasma treatment. The exposure was done first in an inductively coupled high-density (ICP) plasma system and then in plasma enhanced chemical vapor deposition (PECVD) system. This was followed by the measurement of bond strength by use or the standardized blister test.",
"title": ""
},
{
"docid": "d83ecee8e5f59ee8e6a603c65f952c22",
"text": "PredPatt is a pattern-based framework for predicate-argument extraction. While it works across languages and provides a well-formed syntax-semantics interface for NLP tasks, a large-scale and reproducible evaluation has been lacking, which prevents comparisons between PredPatt and other related systems, and inhibits the updates of the patterns in PredPatt. In this work, we improve and evaluate PredPatt by introducing a large set of high-quality annotations converted from PropBank, which can also be used as a benchmark for other predicate-argument extraction systems. We compare PredPatt with other prominent systems and shows that PredPatt achieves the best precision and recall.",
"title": ""
},
{
"docid": "f2b13b98556a57b0d9486d628409892a",
"text": "Emerging Complex Event Processing (CEP) applications in cyber physical systems like Smart Power Grids present novel challenges for end-to-end analysis over events, flowing from heterogeneous information sources to persistent knowledge repositories. CEP for these applications must support two distinctive features – easy specification patterns over diverse information streams, and integrated pattern detection over realtime and historical events. Existing work on CEP has been limited to relational query patterns, and engines that match events arriving after the query has been registered. We propose SCEPter, a semantic complex event processing framework which uniformly processes queries over continuous and archived events. SCEPteris built around an existing CEP engine with innovative support for semantic event pattern specification and allows their seamless detection over past, present and future events. Specifically, we describe a unified semantic query model that can operate over data flowing through event streams to event repositories. Compile-time and runtime semantic patterns are distinguished and addressed separately for efficiency. Query rewriting is examined and analyzed in the context of temporal boundaries that exist between event streams and their repository to avoid duplicate or missing results. The design and prototype implementation of SCEPterare analyzed using latency and throughput metrics for scenarios from the Smart Grid domain.",
"title": ""
},
{
"docid": "7f81e1d6a6955cec178c1c811810322b",
"text": "The MATLAB toolbox YALMIP is introduced. It is described how YALMIP can be used to model and solve optimization problems typically occurring in systems and control theory. In this paper, free MATLAB toolbox YALMIP, developed initially to model SDPs and solve these by interfacing eternal solvers. The toolbox makes development of optimization problems in general, and control oriented SDP problems in particular, extremely simple. In fact, learning 3 YALMIP commands is enough for most users to model and solve the optimization problems",
"title": ""
}
] | scidocsrr |
500ba7fc08e0f640a5e601be8a24768b | Stigmergy as a universal coordination mechanism I: Definition and components | [
{
"docid": "5dfd057e7abc9eda57d031fc0f922505",
"text": "Collective behaviour is often characterised by the so-called “coordination paradox” : Looking at individual ants, for example, they do not seem to cooperate or communicate explicitly, but nevertheless at the social level cooperative behaviour, such as nest building, emerges, apparently without any central coordination. In the case of social insects such emergent coordination has been explained by the theory of stigmergy, which describes how individuals can effect the behaviour of others (and their own) through artefacts, i.e. the product of their own activity (e.g., building material in the ants’ case). Artefacts clearly also play a strong role in human collective behaviour, which has been emphasised, for example, by proponents of activity theory and distributed cognition. However, the relation between theories of situated/social cognition and theories of social insect behaviour has so far received relatively li ttle attention in the cognitive science literature. This paper aims to take a step in this direction by comparing three theoretical frameworks for the study of cognition in the context of agent-environment interaction (activity theory, situated action, and distributed cognition) to each other and to the theory of stigmergy as a possible minimal common ground. The comparison focuses on what each of the four theories has to say about the role/nature of (a) the agents involved in collective behaviour, (b) their environment, (c) the collective activities addressed, and (d) the role that artefacts play in the interaction between agents and their environments, and in particular in the coordination",
"title": ""
}
] | [
{
"docid": "cefa0a3c3a80fa0a170538abdb3f7e46",
"text": "This tutorial introduces the basics of emerging nonvolatile memory (NVM) technologies including spin-transfer-torque magnetic random access memory (STTMRAM), phase-change random access memory (PCRAM), and resistive random access memory (RRAM). Emerging NVM cell characteristics are summarized, and device-level engineering trends are discussed. Emerging NVM array architectures are introduced, including the one-transistor-one-resistor (1T1R) array and the cross-point array with selectors. Design challenges such as scaling the write current and minimizing the sneak path current in cross-point array are analyzed. Recent progress on megabit-to gigabit-level prototype chip demonstrations is summarized. Finally, the prospective applications of emerging NVM are discussed, ranging from the last-level cache to the storage-class memory in the memory hierarchy. Topics of three-dimensional (3D) integration and radiation-hard NVM are discussed. Novel applications beyond the conventional memory applications are also surveyed, including physical unclonable function for hardware security, reconfigurable routing switch for field-programmable gate array (FPGA), logic-in-memory and nonvolatile cache/register/flip-flop for nonvolatile processor, and synaptic device for neuro-inspired computing.",
"title": ""
},
{
"docid": "f6a149131a816989ae246a6de0c50dbc",
"text": "In this paper a comparison of outlier detection algorithms is presented, we present an overview on outlier detection methods and experimental results of six implemented methods. We applied these methods for the prediction of stellar populations parameters as well as on machine learning benchmark data, inserting artificial noise and outliers. We used kernel principal component analysis in order to reduce the dimensionality of the spectral data. Experiments on noisy and noiseless data were performed.",
"title": ""
},
{
"docid": "a38e863016bfcead5fd9af46365d4d5c",
"text": "Social networks generate a large amount of text content over time because of continuous interaction between participants. The mining of such social streams is more challenging than traditional text streams, because of the presence of both text content and implicit network structure within the stream. The problem of event detection is also closely related to clustering, because the events can only be inferred from aggregate trend changes in the stream. In this paper, we will study the two related problems of clustering and event detection in social streams. We will study both the supervised and unsupervised case for the event detection problem. We present experimental results illustrating the effectiveness of incorporating network structure in event discovery over purely content-based",
"title": ""
},
{
"docid": "5f5828952aa0a0a95e348a0c0b2296fb",
"text": "Indoor positioning has grasped great attention in recent years. A number of efforts have been exerted to achieve high positioning accuracy. However, there exists no technology that proves its efficacy in various situations. In this paper, we propose a novel positioning method based on fusing trilateration and dead reckoning. We employ Kalman filtering as a position fusion algorithm. Moreover, we adopt an Android device with Bluetooth Low Energy modules as the communication platform to avoid excessive energy consumption and to improve the stability of the received signal strength. To further improve the positioning accuracy, we take the environmental context information into account while generating the position fixes. Extensive experiments in a testbed are conducted to examine the performance of three approaches: trilateration, dead reckoning and the fusion method. Additionally, the influence of the knowledge of the environmental context is also examined. Finally, our proposed fusion method outperforms both trilateration and dead reckoning in terms of accuracy: experimental results show that the Kalman-based fusion, for our settings, achieves a positioning accuracy of less than one meter.",
"title": ""
},
{
"docid": "bed080cd023291a70eb88467240c81b6",
"text": "As new data products of research increasingly become the product or output of complex processes, the lineage of the resulting products takes on greater importance as a description of the processes that contributed to the result. Without adequate description of data products, their reuse is lessened. The act of instrumenting an application for provenance capture is burdensome, however. This paper explores the option of deriving provenance from existing log files, an approach that reduces the instrumentation task substantially but raises questions about sifting through huge amounts of information for what may or may not be complete provenance. In this paper we study the tradeoff of ease of capture and provenance completeness, and show that under some circumstances capture through logs can result in high quality provenance.",
"title": ""
},
{
"docid": "38d86817d68a8047fa19ae5948b1c056",
"text": "The crossbar array architecture with resistive synaptic devices is attractive for on-chip implementation of weighted sum and weight update in the neuro-inspired learning algorithms. This paper discusses the design challenges on scaling up the array size due to non-ideal device properties and array parasitics. Circuit-level mitigation strategies have been proposed to minimize the learning accuracy loss in a large array. This paper also discusses the peripheral circuits design considerations for the neuro-inspired architecture. Finally, a circuit-level macro simulator is developed to explore the design trade-offs and evaluate the overhead of the proposed mitigation strategies as well as project the scaling trend of the neuro-inspired architecture.",
"title": ""
},
{
"docid": "910fdcf9e9af05b5d1cb70a9c88e4143",
"text": "We propose NEURAL ENQUIRER — a neural network architecture for answering natural language (NL) questions given a knowledge base (KB) table. Unlike previous work on end-to-end training of semantic parsers, NEURAL ENQUIRER is fully “neuralized”: it gives distributed representations of queries and KB tables, and executes queries through a series of differentiable operations. The model can be trained with gradient descent using both endto-end and step-by-step supervision. During training the representations of queries and the KB table are jointly optimized with the query execution logic. Our experiments show that the model can learn to execute complex NL queries on KB tables with rich structures.",
"title": ""
},
{
"docid": "93fcbdfe59015b67955246927d67a620",
"text": "The Emotion Recognition in the Wild (EmotiW) Challenge has been held for three years. Previous winner teams primarily focus on designing specific deep neural networks or fusing diverse hand-crafted and deep convolutional features. They all neglect to explore the significance of the latent relations among changing features resulted from facial muscle motions. In this paper, we study this recognition challenge from the perspective of analyzing the relations among expression-specific facial features in an explicit manner. Our method has three key components. First, we propose a pair-wise learning strategy to automatically seek a set of facial image patches which are important for discriminating two particular emotion categories. We found these learnt local patches are in part consistent with the locations of expression-specific Action Units (AUs), thus the features extracted from such kind of facial patches are named AU-aware facial features. Second, in each pair-wise task, we use an undirected graph structure, which takes learnt facial patches as individual vertices, to encode feature relations between any two learnt facial patches. Finally, a robust emotion representation is constructed by concatenating all task-specific graph-structured facial feature relations sequentially. Extensive experiments on the EmotiW 2015 Challenge testify the efficacy of the proposed approach. Without using additional data, our final submissions achieved competitive results on both sub-challenges including the image based static facial expression recognition (we got 55.38% recognition accuracy outperforming the baseline 39.13% with a margin of 16.25%) and the audio-video based emotion recognition (we got 53.80% recognition accuracy outperforming the baseline 39.33% and the 2014 winner team's final result 50.37% with the margins of 14.47% and 3.43%, respectively).",
"title": ""
},
{
"docid": "0f29172ecf0ed3dfd775c3fa43db4127",
"text": "Reusing software through copying and pasting is a continuous plague in software development despite the fact that it creates serious maintenance problems. Various techniques have been proposed to find duplicated redundant code (also known as software clones). A recent study has compared these techniques and shown that token-based clone detection based on suffix trees is extremely fast but yields clone candidates that are often no syntactic units. Current techniques based on abstract syntax trees-on the other hand-find syntactic clones but are considerably less efficient. This paper describes how we can make use of suffix trees to find clones in abstract syntax trees. This new approach is able to find syntactic clones in linear time and space. The paper reports the results of several large case studies in which we empirically compare the new technique to other techniques using the Bellon benchmark for clone detectors",
"title": ""
},
{
"docid": "84e47d33a895afd0fab28784c112d8f4",
"text": "Hybrid analog/digital precoding is a promising technique to reduce the hardware cost of radio-frequency components compared with the conventional full-digital precoding approach in millimeter-wave multiple-input multiple output systems. However, the large antenna dimensions of the hybrid precoder design makes it difficult to acquire an optimal full-digital precoder. Moreover, it also requires matrix inversion, which leads to high complexity in the hybrid precoder design. In this paper, we propose a low-complexity optimal full-digital precoder acquisition algorithm, named beamspace singular value decomposition (SVD) that saves power for the base station and user equipment. We exploit reduced-dimension beamspace channel state information (CSI) given by compressive sensing (CS) based channel estimators. Then, we propose a CS-assisted beamspace hybrid precoding (CS-BHP) algorithm that leverages CS-based CSI. Simulation results show that the proposed beamspace-SVD reduces complexity by 99.4% compared with an optimal full-digital precoder acquisition using full-dimension SVD. Furthermore, the proposed CS-BHP reduces the complexity of the state-of-the-art approach by 99.6% and has less than 5% performance loss compared with an optimal full-digital precoder.",
"title": ""
},
{
"docid": "18ada6a64572d11cf186e4497fd81f43",
"text": "The task of ranking is crucial in information retrieval. With the advent of the Big Data age, new challenges have arisen for the field. Deep neural architectures are capable of learning complex functions, and capture the underlying representation of the data more effectively. In this work, ranking is reduced to a classification problem and deep neural architectures are used for this task. A dynamic, pointwise approach is used to learn a ranking function, which outperforms the existing ranking algorithms. We introduce three architectures for the task, our primary objective being to identify architectures which produce good results, and to provide intuitions behind their usefulness. The inputs to the models are hand-crafted features provided in the datasets. The outputs are relevance levels. Further, we also explore the idea as to whether the semantic grouping of handcrafted features aids deep learning models in our task.",
"title": ""
},
{
"docid": "cf8b7c330ae26f1839682ebf0610dbc8",
"text": "Motivation\nBest performing named entity recognition (NER) methods for biomedical literature are based on hand-crafted features or task-specific rules, which are costly to produce and difficult to generalize to other corpora. End-to-end neural networks achieve state-of-the-art performance without hand-crafted features and task-specific knowledge in non-biomedical NER tasks. However, in the biomedical domain, using the same architecture does not yield competitive performance compared with conventional machine learning models.\n\n\nResults\nWe propose a novel end-to-end deep learning approach for biomedical NER tasks that leverages the local contexts based on n-gram character and word embeddings via Convolutional Neural Network (CNN). We call this approach GRAM-CNN. To automatically label a word, this method uses the local information around a word. Therefore, the GRAM-CNN method does not require any specific knowledge or feature engineering and can be theoretically applied to a wide range of existing NER problems. The GRAM-CNN approach was evaluated on three well-known biomedical datasets containing different BioNER entities. It obtained an F1-score of 87.26% on the Biocreative II dataset, 87.26% on the NCBI dataset and 72.57% on the JNLPBA dataset. Those results put GRAM-CNN in the lead of the biological NER methods. To the best of our knowledge, we are the first to apply CNN based structures to BioNER problems.\n\n\nAvailability and implementation\nThe GRAM-CNN source code, datasets and pre-trained model are available online at: https://github.com/valdersoul/GRAM-CNN.\n\n\nContact\nandyli@ece.ufl.edu or aconesa@ufl.edu.\n\n\nSupplementary information\nSupplementary data are available at Bioinformatics online.",
"title": ""
},
{
"docid": "779ca56cf734a3b187095424c79ae554",
"text": "Web crawlers are automated tools that browse the web to retrieve and analyze information. Although crawlers are useful tools that help users to find content on the web, they may also be malicious. Unfortunately, unauthorized (malicious) crawlers are increasingly becoming a threat for service providers because they typically collect information that attackers can abuse for spamming, phishing, or targeted attacks. In particular, social networking sites are frequent targets of malicious crawling, and there were recent cases of scraped data sold on the black market and used for blackmailing. In this paper, we introduce PUBCRAWL, a novel approach for the detection and containment of crawlers. Our detection is based on the observation that crawler traffic significantly differs from user traffic, even when many users are hidden behind a single proxy. Moreover, we present the first technique for crawler campaign attribution that discovers synchronized traffic coming from multiple hosts. Finally, we introduce a containment strategy that leverages our detection results to efficiently block crawlers while minimizing the impact on legitimate users. Our experimental results in a large, wellknown social networking site (receiving tens of millions of requests per day) demonstrate that PUBCRAWL can distinguish between crawlers and users with high accuracy. We have completed our technology transfer, and the social networking site is currently running PUBCRAWL in production.",
"title": ""
},
{
"docid": "fcca051539729b005271e4f96563538d",
"text": "!is paper presents a novel methodological approach of how to design, conduct and analyse robot-assisted play. !is approach is inspired by non-directive play therapy. !e experimenter participates in the experiments, but the child remains the main leader for play. Besides, beyond inspiration from non-directive play therapy, this approach enables the experimenter to regulate the interaction under speci\"c conditions in order to guide the child or ask her questions about reasoning or a#ect related to the robot. !is approach has been tested in a longterm study with six children with autism in a school setting. An autonomous robot with zoomorphic, dog-like appearance was used in the studies. !e children’s progress was analyzed according to three dimensions, namely, Play, Reasoning and A#ect. Results from the case-study evaluations have shown the capability of the method to meet each child’s needs and abilities. Children who mainly played solitarily progressively experienced basic imitation games with the experimenter. Children who proactively played socially progressively experienced higher levels of play and constructed more reasoning related to the robot. !ey also expressed some interest in the robot, including, on occasion, a#ect.",
"title": ""
},
{
"docid": "9379523ea300bd07d0e26242f692948a",
"text": "There has been a growing interest in recent years in the poten tial use of product differentiation (through eco-type labelling) as a means of promoting and rewarding the sustainable management and exploitation of fish stocks. This interest is marked by the growing literature on the topic, exploring both the concept and the key issues associated with it. It reflects a frustration among certain groups with the supply-side measures currently employed in fisheries management, which on their own have proven insufficient to counter the negative incentive structures characterising open-a ccess fisheries. The potential encapsulated by product differentiation has, however, yet to be tested in the market place. One of the debates that continues to accompany the concept is the nature and extent of the response of consumers to the introduction of labelled seafood products. Though differentiated seafood products are starting to come onto the market, we are still essentially dealing with a hypothetical market situation in terms of analysing consumer behaviour. Moving the debate from theoretical extrapolation to one of empirical evidence, this paper presents the preliminary empirical results of a study undertaken in the UK. The study aimed, amongst other things, to evaluate whether UK consumers are prepared to pay a premium for seafood products that are differentiated on the grounds that the fish is either of (a) high quality or (b) comes from a sustainably managed fishery. It also aimed to establish whether the quantity of fish products purchased would change. The results are presented in this paper.",
"title": ""
},
{
"docid": "b5c8263dd499088ded04c589b5da1d9f",
"text": "User interfaces and information systems have become increasingly social in recent years, aimed at supporting the decentralized, cooperative production and use of content. A theory that predicts the impact of interface and interaction designs on such factors as participation rates and knowledge discovery is likely to be useful. This paper reviews a variety of observed phenomena in social information foraging and sketches a framework extending Information Foraging Theory towards making predictions about the effects of diversity, interference, and cost-of-effort on performance time, participation rates, and utility of discoveries.",
"title": ""
},
{
"docid": "cce107dc268b2388e301f64718de1463",
"text": "The training of convolutional neural networks for image recognition usually requires large image datasets to produce favorable results. Those large datasets can be acquired by web crawlers that accumulate images based on keywords. Due to the nature of data in the web, these image sets display a broad variation of qualities across the contained items. In this work, a filtering approach for noisy datasets is proposed, utilizing a smaller trusted dataset. Hereby a convolutional neural network is trained on the trusted dataset and then used to construct a filtered subset from the noisy datasets. The methods described in this paper were applied to plant image classification and the created models have been submitted to the PlantCLEF 2017 competition.",
"title": ""
},
{
"docid": "e4f26f4ed55e51fb2a9a55fd0f04ccc0",
"text": "Nowadays, the Web has revolutionized our vision as to how deliver courses in a radically transformed and enhanced way. Boosted by Cloud computing, the use of the Web in education has revealed new challenges and looks forward to new aspirations such as MOOCs (Massive Open Online Courses) as a technology-led revolution ushering in a new generation of learning environments. Expected to deliver effective education strategies, pedagogies and practices, which lead to student success, the massive open online courses, considered as the “linux of education”, are increasingly developed by elite US institutions such MIT, Harvard and Stanford by supplying open/distance learning for large online community without paying any fees, MOOCs have the potential to enable free university-level education on an enormous scale. Nevertheless, a concern often is raised about MOOCs is that a very small proportion of learners complete the course while thousands enrol for courses. In this paper, we present LASyM, a learning analytics system for massive open online courses. The system is a Hadoop based one whose main objective is to assure Learning Analytics for MOOCs’ communities as a mean to help them investigate massive raw data, generated by MOOC platforms around learning outcomes and assessments, and reveal any useful information to be used in designing learning-optimized MOOCs. To evaluate the effectiveness of the proposed system we developed a method to identify, with low latency, online learners more likely to drop out. Keywords—Cloud Computing; MOOCs; Hadoop; Learning",
"title": ""
},
{
"docid": "84a01029714dfef5d14bc4e2be78921e",
"text": "Integrating frequent pattern mining with interactive visualization for temporal event sequence analysis poses many interesting research questions and challenges. We review and reflect on some of these challenges based on our experiences working on event sequence data from two domains: web analytics and application logs. These challenges can be organized using a three-stage framework: pattern mining, pattern pruning and interactive visualization.",
"title": ""
},
{
"docid": "2e0262fce0a7ba51bd5ccf9e1397b0ca",
"text": "We present a topology detection method combining smart meter sensor information and sparse line measurements. The problem is formulated as a spanning tree identification problem over a graph given partial nodal and edge power flow information. In the deterministic case of known nodal power consumption and edge power flow we provide sensor placement criterion which guarantees correct identification of all spanning trees. We then present a detection method which is polynomial in complexity to the size of the graph. In the stochastic case where loads are given by forecasts derived from delayed smart meter data, we provide a combinatorial complexity MAP detector and a polynomial complexity approximate MAP detector which is shown to work near optimum in all numerical cases.",
"title": ""
}
] | scidocsrr |
050a1209dcbfe63ab99f79b3cff59762 | What is Market News ? | [
{
"docid": "6a23c39da8a17858964040a06aa30a80",
"text": "Psychological research indicates that people have a cognitive bias that leads them to misinterpret new information as supporting previously held hypotheses. We show in a simple model that such conrmatory bias induces overcondence: given any probabilistic assessment by an agent that one of two hypotheses is true, the appropriate beliefs would deem it less likely to be true. Indeed, the hypothesis that the agent believes in may be more likely to be wrong than right. We also show that the agent may come to believe with near certainty in a false hypothesis despite receiving an innite amount of information.",
"title": ""
}
] | [
{
"docid": "6103a365705a6083e40bb0ca27f6ca78",
"text": "Confirmation bias, as the term is typically used in the psychological literature, connotes the seeking or interpreting of evidence in ways that are partial to existing beliefs, expectations, or a hypothesis in hand. The author reviews evidence of such a bias in a variety of guises and gives examples of its operation in several practical contexts. Possible explanations are considered, and the question of its utility or disutility is discussed.",
"title": ""
},
{
"docid": "45f75c8d642be90e45abff69b4c6fbcf",
"text": "We describe a method for identifying the speakers of quoted speech in natural-language textual stories. We have assembled a corpus of more than 3,000 quotations, whose speakers (if any) are manually identified, from a collection of 19th and 20th century literature by six authors. Using rule-based and statistical learning, our method identifies candidate characters, determines their genders, and attributes each quote to the most likely speaker. We divide the quotes into syntactic classes in order to leverage common discourse patterns, which enable rapid attribution for many quotes. We apply learning algorithms to the remainder and achieve an overall accuracy of 83%.",
"title": ""
},
{
"docid": "f6249304dbd2b275a70b2b12faeb4712",
"text": "This paper describes a system, built and refined over the past five years, that automatically analyzes student programs assigned in a computer organization course. The system tests a student's program, then e-mails immediate feedback to the student to assist and encourage the student to continue testing, debugging, and optimizing his or her program. The automated feedback system improves the students' learning experience by allowing and encouraging them to improve their program iteratively until it is correct. The system has also made it possible to add challenging parts to each project, such as optimization and testing, and it has enabled students to meet these challenges. Finally, the system has reduced the grading load of University of Michigan's large classes significantly and helped the instructors handle the rapidly increasing enrollments of the 1990s. Initial experience with the feedback system showed that students depended too heavily on the feedback system as a substitute for their own testing. This problem was addressed by requiring students to submit a comprehensive test suite along with their program and by applying automated feedback techniques to help students learn how to write good test suites. Quantitative iterative feedback has proven to be extremely helpful in teaching students specific concepts about computer organization and general concepts on computer programming and testing.",
"title": ""
},
{
"docid": "8b3ad3d48da22c529e65c26447265372",
"text": "It is demonstrated that neural networks can be used effectively for the identification and control of nonlinear dynamical systems. The emphasis is on models for both identification and control. Static and dynamic backpropagation methods for the adjustment of parameters are discussed. In the models that are introduced, multilayer and recurrent networks are interconnected in novel configurations, and hence there is a real need to study them in a unified fashion. Simulation results reveal that the identification and adaptive control schemes suggested are practically feasible. Basic concepts and definitions are introduced throughout, and theoretical questions that have to be addressed are also described.",
"title": ""
},
{
"docid": "d622cf283f27a32b2846a304c0359c5f",
"text": "Reliable verification of image quality of retinal screening images is a prerequisite for the development of automatic screening systems for diabetic retinopathy. A system is presented that can automatically determine whether the quality of a retinal screening image is sufficient for automatic analysis. The system is based on the assumption that an image of sufficient quality should contain particular image structures according to a certain pre-defined distribution. We cluster filterbank response vectors to obtain a compact representation of the image structures found within an image. Using this compact representation together with raw histograms of the R, G, and B color planes, a statistical classifier is trained to distinguish normal from low quality images. The presented system does not require any previous segmentation of the image in contrast with previous work. The system was evaluated on a large, representative set of 1000 images obtained in a screening program. The proposed method, using different feature sets and classifiers, was compared with the ratings of a second human observer. The best system, based on a Support Vector Machine, has performance close to optimal with an area under the ROC curve of 0.9968.",
"title": ""
},
{
"docid": "0b0791d64f67b4df8441215a6c6cd116",
"text": "The offset voltage of the dynamic latched comparator is analyzed in detail, and the dynamic latched comparator design is optimized for the minimal offset voltage based on the analysis in this paper. As a result, 1-sigma offset voltage was reduced from 12.5mV to 6.5mV at the cost of 9% increase of the power dissipation (152µW from 136µW). Using a digitally controlled capacitive offset calibration technique, the offset voltage of the comparator is further reduced from 6.50mV to 1.10mV at 1-sigma at the operating clock frequency of 3 GHz and it consumes 54µW/GHz after the calibration.",
"title": ""
},
{
"docid": "73905bf74f0f66c7a02aeeb9ab231d7b",
"text": "This paper presents an anthropomorphic robot hand called the Gifu hand II, which has a thumb and four fingers, all the joints of which are driven by servomotors built into the fingers and the palm. The thumb has four joints with four-degrees-of-freedom (DOF); the other fingers have four joints with 3-DOF; and two axes of the joints near the palm cross orthogonally at one point, as is the case in the human hand. The Gifu hand II can be equipped with six-axes force sensor at each fingertip and a developed distributed tactile sensor with 624 detecting points on its surface. The design concepts and the specifications of the Gifu hand II, the basic characteristics of the tactile sensor, and the pressure distributions at the time of object grasping are described and discussed herein. Our results demonstrate that the Gifu hand II has a high potential to perform dexterous object manipulations like the human hand.",
"title": ""
},
{
"docid": "9dc9b5bad3422a6f1c7f33ccb25fdead",
"text": "We present a named entity recognition (NER) system for extracting product attributes and values from listing titles. Information extraction from short listing titles present a unique challenge, with the lack of informative context and grammatical structure. In this work, we combine supervised NER with bootstrapping to expand the seed list, and output normalized results. Focusing on listings from eBay’s clothing and shoes categories, our bootstrapped NER system is able to identify new brands corresponding to spelling variants and typographical errors of the known brands, as well as identifying novel brands. Among the top 300 new brands predicted, our system achieves 90.33% precision. To output normalized attribute values, we explore several string comparison algorithms and found n-gram substring matching to work well in practice.",
"title": ""
},
{
"docid": "3d811c193d489f347119bc911006e2cd",
"text": "The performance of massive multiple input multiple output systems may be limited by inter-cell pilot contamination (PC) unless appropriate PC mitigation or avoidance schemes are employed. In this paper we develop techniques based on existing long term evolution (LTE) measurements - open loop power control (OLPC) and pilot sequence reuse schemes, that avoid PC within a group of cells. We compare the performance of simple least-squares channel estimator with the higher-complexity minimum mean square error estimator, and evaluate the performance of the recently proposed coordinated pilot allocation (CPA) technique (which is appropriate in cooperative systems). The performance measures of interest include the normalized mean square error of channel estimation, the downlink signal-to-interference-plus-noise and spectral efficiency when employing maximum ratio transmission or zero forcing precoding at the base station. We find that for terminals moving at vehicular speeds, PC can be effectively mitigated in an operation and maintenance node using both the OLPC and the pilot reuse schemes. Additionally, greedy CPA provides performance gains only for a fraction of terminals, at the cost of degradation for the rest of the terminals and higher complexity. These results indicate that in practice, PC may be effectively mitigated without the need for second-order channel statistics or inter-cell cooperation.",
"title": ""
},
{
"docid": "ac1d1bf198a178cb5655768392c3d224",
"text": "-This paper discusses the two major query evaluation strategies used in large text retrieval systems and analyzes the performance of these strategies. We then discuss several optimization techniques that can be used to reduce evaluation costs and present simulation results to compare the performance of these optimization techniques when evaluating natural language queries with a collection of full text legal materials.",
"title": ""
},
{
"docid": "f795ba59b0c2c81953b94ac981ee0b57",
"text": "The Digital Beamforming Synthetic Aperture Radar (DBSAR) is a state-of-the-art L-band radar that employs advanced radar technology and a customized data acquisition and real-time processor in order to enable multimode measurement techniques in a single radar platform. DBSAR serves as a test bed for the development, implementation, and testing of digital beamforming radar techniques applicable to Earth science and planetary measurements. DBSAR flew its first field campaign on board the National Aeronautics and Space Administration P3 aircraft in October 2008, demonstrating enabling techniques for scatterometry, synthetic aperture, and altimetry.",
"title": ""
},
{
"docid": "fc97e17c5c9e1ea43570d799ac1ecd1f",
"text": "OBJECTIVE\nTo determine the clinical course in dogs with aural cholesteatoma.\n\n\nSTUDY DESIGN\nCase series.\n\n\nANIMALS\nDogs (n=20) with aural cholesteatoma.\n\n\nMETHODS\nCase review (1998-2007).\n\n\nRESULTS\nTwenty dogs were identified. Clinical signs other than those of chronic otitis externa included head tilt (6 dogs), unilateral facial palsy (4), pain on opening or inability to open the mouth (4), and ataxia (3). Computed tomography (CT) was performed in 19 dogs, abnormalities included osteoproliferation (13 dogs), lysis of the bulla (12), expansion of the bulla (11), bone lysis in the squamous or petrosal portion of the temporal bone (4) and enlargement of associated lymph nodes (7). Nineteen dogs had total ear canal ablation-lateral bulla osteotomy or ventral bulla osteotomy with the intent to cure; 9 dogs had no further signs of middle ear disease whereas 10 had persistent or recurrent clinical signs. Risk factors for recurrence after surgery were inability to open the mouth or neurologic signs on admission and lysis of any portion of the temporal bone on CT imaging. Dogs admitted with neurologic signs or inability to open the mouth had a median survival of 16 months.\n\n\nCONCLUSIONS\nEarly surgical treatment of aural cholesteatoma may be curative. Recurrence after surgery is associated with advanced disease, typically indicated by inability to open the jaw, neurologic disease, or bone lysis on CT imaging.\n\n\nCLINICAL RELEVANCE\nPresence of aural cholesteatoma may affect the prognosis for successful surgical treatment of middle ear disease.",
"title": ""
},
{
"docid": "e0ec89c103aedb1d04fbc5892df288a8",
"text": "This paper compares the computational performances of four model order reduction methods applied to large-scale electric power RLC networks transfer functions with many resonant peaks. Two of these methods require the state-space or descriptor model of the system, while the third requires only its frequency response data. The fourth method is proposed in this paper, being a combination of two of the previous methods. The methods were assessed for their ability to reduce eight test systems, either of the single-input single-output (SISO) or multiple-input multiple-output (MIMO) type. The results indicate that the reduced models obtained, of much smaller dimension, reproduce the dynamic behaviors of the original test systems over an ample range of frequencies with high accuracy.",
"title": ""
},
{
"docid": "4e9438fede70ff0aa1c87cdcd64f0bac",
"text": "This paper presents a novel formulation for detecting objects with articulated rigid bodies from high-resolution monitoring images, particularly engineering vehicles. There are many pixels in high-resolution monitoring images, and most of them represent the background. Our method first detects object patches from monitoring images using a coarse detection process. In this phase, we build a descriptor based on histograms of oriented gradient, which contain color frequency information. Then we use a linear support vector machine to rapidly detect many image patches that may contain object parts, with a low false negative rate and a high false positive rate. In the second phase, we apply a refinement classification to determine the patches that actually contain objects. In this stage, we increase the size of the image patches so that they include the complete object using models of the object parts. Then an accelerated and improved salient mask is used to improve the performance of the dense scale-invariant feature transform descriptor. The detection process returns the absolute position of positive objects in the original images. We have applied our methods to three datasets to demonstrate their effectiveness.",
"title": ""
},
{
"docid": "c60693035f0f99528a741fe5e3d88219",
"text": "Transmit array design is more challenging for dual-band operation than for single band, due to the independent 360° phase wrapping jumps needed at each band when large electrical length compensation is involved. This happens when aiming at large gains, typically above 25 dBi with beam scanning and $F/D \\le 1$ . No such designs have been reported in the literature. A general method is presented here to reduce the complexity of dual-band transmit array design, valid for arbitrarily large phase error compensation and any band ratio, using a finite number of different unit cells. The procedure is demonstrated for two offset transmit array implementations operating in circular polarization at 20 GHz(Rx) and 30 GHz(Tx) for Ka-band satellite-on-the-move terminals with mechanical beam-steering. An appropriate set of 30 dual-band unit cells is developed with transmission coefficient greater than −0.9 dB. The full-size transmit array is characterized by full-wave simulation enabling elevation beam scanning over 0°–50° with gains reaching 26 dBi at 20 GHz and 29 dBi at 30 GHz. A smaller prototype was fabricated and measured, showing a measured gain of 24 dBi at 20 GHz and 27 dBi at 30 GHz. In both cases, the beam pointing direction is coincident over the two frequency bands, and thus confirming the proposed design procedure.",
"title": ""
},
{
"docid": "4f50f9ed932635614d0f4facbaa80992",
"text": "In this paper we propose an overview of the recent academic literature devoted to the applications of Hawkes processes in finance. Hawkes processes constitute a particular class of multivariate point processes that has become very popular in empirical high frequency finance this last decade. After a reminder of the main definitions and properties that characterize Hawkes processes, we review their main empirical applications to address many different problems in high frequency finance. Because of their great flexibility and versatility, we show that they have been successfully involved in issues as diverse as estimating the volatility at the level of transaction data, estimating the market stability, accounting for systemic risk contagion, devising optimal execution strategies or capturing the dynamics of the full order book.",
"title": ""
},
{
"docid": "0ac7db546c11b9d18897ceeb2e5be70f",
"text": "A backstepping approach is proposed in this paper to cope with the failure of a quadrotor propeller. The presented methodology supposes to turn off also the motor which is opposite to the broken one. In this way, a birotor configuration with fixed propellers is achieved. The birotor is controlled to follow a planned emergency landing trajectory. Theory shows that the birotor can reach any point in the Cartesian space losing the possibility to control the yaw angle. Simulation tests are employed to validate the proposed controller design.",
"title": ""
},
{
"docid": "88fb71e503e0d0af7515dd8489061e25",
"text": "The recent boom in the Internet of Things (IoT) will turn Smart Cities and Smart Homes (SH) from hype to reality. SH is the major building block for Smart Cities and have long been a dream for decades, hobbyists in the late 1970smade Home Automation (HA) possible when personal computers started invading home spaces. While SH can share most of the IoT technologies, there are unique characteristics that make SH special. From the result of a recent research survey on SH and IoT technologies, this paper defines the major requirements for building SH. Seven unique requirement recommendations are defined and classified according to the specific quality of the SH building blocks. © 2016 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "49610c4b28f85faaa333b4845443e121",
"text": "The variety of wound types has resulted in a wide range of wound dressings with new products frequently introduced to target different aspects of the wound healing process. The ideal dressing should achieve rapid healing at reasonable cost with minimal inconvenience to the patient. This article offers a review of the common wound management dressings and emerging technologies for achieving improved wound healing. It also reviews many of the dressings and novel polymers used for the delivery of drugs to acute, chronic and other types of wound. These include hydrocolloids, alginates, hydrogels, polyurethane, collagen, chitosan, pectin and hyaluronic acid. There is also a brief section on the use of biological polymers as tissue engineered scaffolds and skin grafts. Pharmacological agents such as antibiotics, vitamins, minerals, growth factors and other wound healing accelerators that take active part in the healing process are discussed. Direct delivery of these agents to the wound site is desirable, particularly when systemic delivery could cause organ damage due to toxicological concerns associated with the preferred agents. This review concerns the requirement for formulations with improved properties for effective and accurate delivery of the required therapeutic agents. General formulation approaches towards achieving optimum physical properties and controlled delivery characteristics for an active wound healing dosage form are also considered briefly.",
"title": ""
}
] | scidocsrr |
b049d9544a7cee820b8df4f4b4fe1adc | Compact CPW-Fed Tri-Band Printed Antenna With Meandering Split-Ring Slot for WLAN/WiMAX Applications | [
{
"docid": "237a88ea092d56c6511bb84604e6a7c7",
"text": "A simple, low-cost, and compact printed dual-band fork-shaped monopole antenna for Bluetooth and ultrawideband (UWB) applications is proposed. Dual-band operation covering 2.4-2.484 GHz (Bluetooth) and 3.1-10.6 GHz (UWB) frequency bands are obtained by using a fork-shaped radiating patch and a rectangular ground patch. The proposed antenna is fed by a 50-Ω microstrip line and fabricated on a low-cost FR4 substrate having dimensions 42 (<i>L</i><sub>sub</sub>) × 24 (<i>W</i><sub>sub</sub>) × 1.6 (<i>H</i>) mm<sup>3</sup>. The antenna structure is fabricated and tested. Measured <i>S</i><sub>11</sub> is ≤ -10 dB over 2.3-2.5 and 3.1-12 GHz. The antenna shows acceptable gain flatness with nearly omnidirectional radiation patterns over both Bluetooth and UWB bands.",
"title": ""
},
{
"docid": "7bc8be5766eeb11b15ea0aa1d91f4969",
"text": "A coplanar waveguide (CPW)-fed planar monopole antenna with triple-band operation for WiMAX and WLAN applications is presented. The antenna, which occupies a small size of 25(L) × 25(W) × 0.8(H) mm3, is simply composed of a pentagonal radiating patch with two bent slots. By carefully selecting the positions and lengths of these slots, good dual stopband rejection characteristic of the antenna can be obtained so that three operating bands covering 2.14-2.85, 3.29-4.08, and 5.02-6.09 GHz can be achieved. The measured results also demonstrate that the proposed antenna has good omnidirectional radiation patterns with appreciable gain across the operating bands and is thus suitable to be integrated within the portable devices for WiMAX/WLAN applications.",
"title": ""
}
] | [
{
"docid": "0b6f3498022abdf0407221faba72dcf1",
"text": "A broadband coplanar waveguide (CPW) to coplanar strip (CPS) transmission line transition directly integrated with an RF microelectromechanical systems reconfigurable multiband antenna is presented in this paper. This transition design exhibits very good performance up to 55 GHz, and uses a minimum number of dissimilar transmission line sections and wire bonds, achieving a low-loss and low-cost balancing solution to feed planar antenna designs. The transition design methodology that was followed is described and measurement results are presented.",
"title": ""
},
{
"docid": "c31dddbca92e13e84e08cca310329151",
"text": "For the first time, automated Hex solvers have surpassed humans in their ability to solve Hex positions: they can now solve many 9×9 Hex openings. We summarize the methods that attained this milestone, and examine the future of Hex solvers.",
"title": ""
},
{
"docid": "65ed76ddd6f7fd0aea717d2e2643dd16",
"text": "In semi-supervised learning, a number of labeled examples are usually required for training an initial weakly useful predictor which is in turn used for exploiting the unlabeled examples. However, in many real-world applications there may exist very few labeled training examples, which makes the weakly useful predictor difficult to generate, and therefore these semisupervised learning methods cannot be applied. This paper proposes a method working under a two-view setting. By taking advantages of the correlations between the views using canonical component analysis, the proposed method can perform semi-supervised learning with only one labeled training example. Experiments and an application to content-based image retrieval validate the effectiveness of the proposed method.",
"title": ""
},
{
"docid": "26b67fe7ee89c941d313187672b1d514",
"text": "Since permanent magnet linear synchronous motor (PMLSM) has a bright future in electromagnetic launch (EML), moving-magnet PMLSM with multisegment primary is a potential choice. To overcome the end effect in the junctions of armature units, three different ring windings are proposed for the multisegment primary of PMLSM: slotted ring windings, slotless ring windings, and quasi-sinusoidal ring windings. They are designed for various demands of EML, regarding the load levels and force fluctuations. Auxiliary iron yokes are designed to reduce the mover weights, and also help restrain the end effect. PMLSM with slotted ring windings has a higher thrust for heavy load EML. PMLSM with slotless ring windings eliminates the cogging effect, while PMLSM with quasi-sinusoidal ring windings has very low thrust ripple; they aim to launch the light aircraft and run smooth. Structure designs of these motors are introduced; motor models and parameter optimizations are accomplished by finite-element method (FEM). Then, performance advantages of the proposed motors are investigated by comparisons of common PMLSMs. At last, the prototypes are manufactured and tested to validate the feasibilities of ring winding motors with auxiliary iron yokes. The results prove that the proposed motors can effectively satisfy the requirements of EML.",
"title": ""
},
{
"docid": "1336b193e4884a024f21a384b265eac6",
"text": "In this proposal, we introduce Bayesian Abductive Logic Programs (BALP), a probabilistic logic that adapts Bayesian Logic Programs (BLPs) for abductive reasoning. Like BLPs, BALPs also combine first-order logic and Bayes nets. However, unlike BLPs, which use deduction to construct Bayes nets, BALPs employ logical abduction. As a result, BALPs are more suited for problems like plan/activity recognition that require abductive reasoning. In order to demonstrate the efficacy of BALPs, we apply it to two abductive reasoning tasks – plan recognition and natural language understanding.",
"title": ""
},
{
"docid": "529929af902100d25e08fe00d17e8c1a",
"text": "Engagement is the holy grail of learning whether it is in a classroom setting or an online learning platform. Studies have shown that engagement of the student while learning can benefit students as well as the teacher if the engagement level of the student is known. It is difficult to keep track of the engagement of each student in a face-to-face learning happening in a large classroom. It is even more difficult in an online learning platform where, the user is accessing the material at different instances. Automatic analysis of the engagement of students can help to better understand the state of the student in a classroom setting as well as online learning platforms and is more scalable. In this paper we propose a framework that uses Temporal Convolutional Network (TCN) to understand the intensity of engagement of students attending video material from Massive Open Online Courses (MOOCs). The input to the TCN network is the statistical features computed on 10 second segments of the video from the gaze, head pose and action unit intensities available in OpenFace library. The ability of the TCN architecture to capture long term dependencies gives it the ability to outperform other sequential models like LSTMs. On the given test set in the EmotiW 2018 sub challenge-\"Engagement in the Wild\", the proposed approach with Dilated-TCN achieved an average mean square error of 0.079.",
"title": ""
},
{
"docid": "ee61181cb9625868526eb608db0c58b4",
"text": "The primary focus of machine learning has traditionally been on learning from data assumed to be sufficient and representative of the underlying fixed, yet unknown, distribution. Such restrictions on the problem domain paved the way for development of elegant algorithms with theoretically provable performance guarantees. As is often the case, however, real-world problems rarely fit neatly into such restricted models. For instance class distributions are often skewed, resulting in the “class imbalance” problem. Data drawn from non-stationary distributions is also common in real-world applications, resulting in the “concept drift” or “non-stationary learning” problem which is often associated with streaming data scenarios. Recently, these problems have independently experienced increased research attention, however, the combined problem of addressing all of the above mentioned issues has enjoyed relatively little research. If the ultimate goal of intelligent machine learning algorithms is to be able to address a wide spectrum of real-world scenarios, then the need for a general framework for learning from, and adapting to, a non-stationary environment that may introduce imbalanced data can be hardly overstated. In this paper, we first present an overview of each of these challenging areas, followed by a comprehensive review of recent research for developing such a general framework.",
"title": ""
},
{
"docid": "54a1257346f9a1ead514bb8077b0e7ca",
"text": "Recent years has witnessed growing interest in hyperspectral image (HSI) processing. In practice, however, HSIs always suffer from huge data size and mass of redundant information, which hinder their application in many cases. HSI compression is a straightforward way of relieving these problems. However, most of the conventional image encoding algorithms mainly focus on the spatial dimensions, and they need not consider the redundancy in the spectral dimension. In this paper, we propose a novel HSI compression and reconstruction algorithm via patch-based low-rank tensor decomposition (PLTD). Instead of processing the HSI separately by spectral channel or by pixel, we represent each local patch of the HSI as a third-order tensor. Then, the similar tensor patches are grouped by clustering to form a fourth-order tensor per cluster. Since the grouped tensor is assumed to be redundant, each cluster can be approximately decomposed to a coefficient tensor and three dictionary matrices, which leads to a low-rank tensor representation of both the spatial and spectral modes. The reconstructed HSI can then be simply obtained by the product of the coefficient tensor and dictionary matrices per cluster. In this way, the proposed PLTD algorithm simultaneously removes the redundancy in both the spatial and spectral domains in a unified framework. The extensive experimental results on various public HSI datasets demonstrate that the proposed method outperforms the traditional image compression approaches and other tensor-based methods.",
"title": ""
},
{
"docid": "5785108e48e62ce2758a7b18559a697e",
"text": "The objective of this article is to create a better understanding of the intersection of the academic fields of entrepreneurship and strategic management, based on an aggregation of the extant literature in these two fields. The article structures and synthesizes the existing scholarly works in the two fields, thereby generating new knowledge. The results can be used to further enhance fruitful integration of these two overlapping but separate academic fields. The article attempts to integrate the two fields by first identifying apparent interrelations, and then by concentrating in more detail on some important intersections, including strategic management in small and medium-sized enterprises and start-ups, acknowledging the central role of the entrepreneur. The content and process sides of strategic management are discussed as well as their important connecting link, the business plan. To conclude, implications and future research directions for the two fields are proposed.",
"title": ""
},
{
"docid": "efde28bc545de68dbb44f85b198d85ff",
"text": "Blockchain technology is regarded as highly disruptive, but there is a lack of formalization and standardization of terminology. Not only because there are several (sometimes propriety) implementation platforms, but also because the academic literature so far is predominantly written from either a purely technical or an economic application perspective. The result of the confusion is an offspring of blockchain solutions, types, roadmaps and interpretations. For blockchain to be accepted as a technology standard in established industries, it is pivotal that ordinary internet users and business executives have a basic yet fundamental understanding of the workings and impact of blockchain. This conceptual paper provides a theoretical contribution and guidance on what blockchain actually is by taking an ontological approach. Enterprise Ontology is used to make a clear distinction between the datalogical, infological and essential level of blockchain transactions and smart contracts.",
"title": ""
},
{
"docid": "5275184686a8453a1922cec7a236b66d",
"text": "Children’s sense of relatedness is vital to their academic motivation from 3rd to 6th grade. Children’s (n 641) reports of relatedness predicted changes in classroom engagement over the school year and contributed over and above the effects of perceived control. Regression and cumulative risk analyses revealed that relatedness to parents, teachers, and peers each uniquely contributed to students’ engagement, especially emotional engagement. Girls reported higher relatedness than boys, but relatedness to teachers was a more salient predictor of engagement for boys. Feelings of relatedness to teachers dropped from 5th to 6th grade, but the effects of relatedness on engagement were stronger for 6th graders. Discussion examines theoretical, empirical, and practical implications of relatedness as a key predictor of children’s academic motivation and performance.",
"title": ""
},
{
"docid": "c75967795041ef900236d71328dd7936",
"text": "In order to investigate the strategies used to plan and control multijoint arm trajectories, two-degrees-of-freedom arm movements performed by normal adult humans were recorded. Only the shoulder and elbow joints were active. When a subject was told simply to move his hand from one visual target to another, the path of the hand was roughly straight, and the hand speed profile of their straight trajectories was bell-shaped. When the subject was required to produce curved hand trajectories, the path usually had a segmented appearance, as if the subject was trying to approximate a curve with low curvature elements. Hand speed profiles associated with curved trajectories contained speed valleys or inflections which were temporally associated with the local maxima in the trajectory curvature. The mean duration of curved movements was longer than the mean for straight movements. These results are discussed in terms of trajectory control theories which have originated in the fields of mechanical manipulator control and biological motor control. Three explanations for the results are offered.",
"title": ""
},
{
"docid": "06c4281aad5e95cac1f4525cbb90e5c7",
"text": "Offering training programs to their employees is one of the necessary tasks that managers must comply with. Training is done mainly to provide upto-date knowledge or to convey to staff the objectives, history, corporate name, functions of the organization’s areas, processes, laws, norms or policies that must be fulfilled. Although there are a lot of methods, models or tools that are useful for this purpose, many companies face with some common problems like employee’s motivation and high costs in terms of money and time. In an effort to solve this problem, new trends have emerged in the last few years, in particular strategies related to games, such as serious games and gamification, whose success has been demonstrated by numerous researchers. According to the above, we present a systematic literature review of the different approaches that have used games or their elements, using the procedure suggested by Cooper, on this matter, ending with about the positive and negative findings.",
"title": ""
},
{
"docid": "24d55c65807e4a90fb0dffb23fc2f7bc",
"text": "This paper presents a comprehensive study of deep correlation features on image style classification. Inspired by that, correlation between feature maps can effectively describe image texture, and we design various correlations and transform them into style vectors, and investigate classification performance brought by different variants. In addition to intralayer correlation, interlayer correlation is proposed as well, and its effectiveness is verified. After showing the effectiveness of deep correlation features, we further propose a learning framework to automatically learn correlations between feature maps. Through extensive experiments on image style classification and artist classification, we demonstrate that the proposed learnt deep correlation features outperform several variants of convolutional neural network features by a large margin, and achieve the state-of-the-art performance.",
"title": ""
},
{
"docid": "283d3f1ff0ca4f9c0a2a6f4beb1f7771",
"text": "As a proof-of-concept for the vision “SSD as SQL Engine” (SaS in short), we demonstrate that SQLite [4], a popular mobile database engine, in its entirety can run inside a real SSD development platform. By turning storage device into database engine, SaS allows applications to directly interact with full SQL database server running inside storage device. In SaS, the SQL language itself, not the traditional dummy block interface, will be provided as new interface between applications and storage device. In addition, since SaS plays the role of the uni ed platform of database computing node and storage node, the host and the storage need not be segregated any more as separate physical computing components.",
"title": ""
},
{
"docid": "62d39d41523bca97939fa6a2cf736b55",
"text": "We consider criteria for variational representations of non-Gaussian latent variables, and derive variational EM algorithms in general form. We establish a general equivalence among convex bounding methods, evidence based methods, and ensemble learning/Variational Bayes methods, which has previously been demonstrated only for particular cases.",
"title": ""
},
{
"docid": "328aad76b94b34bf49719b98ae391cfe",
"text": "We discuss methods for statistically analyzing the output from stochastic discrete-event or Monte Carlo simulations. Terminating and steady-state simulations are considered.",
"title": ""
},
{
"docid": "a9a22c9c57e9ba8c3deefbea689258d5",
"text": "Functional neuroimaging studies have shown that romantic love and maternal love are mediated by regions specific to each, as well as overlapping regions in the brain's reward system. Nothing is known yet regarding the neural underpinnings of unconditional love. The main goal of this functional magnetic resonance imaging study was to identify the brain regions supporting this form of love. Participants were scanned during a control condition and an experimental condition. In the control condition, participants were instructed to simply look at a series of pictures depicting individuals with intellectual disabilities. In the experimental condition, participants were instructed to feel unconditional love towards the individuals depicted in a series of similar pictures. Significant loci of activation were found, in the experimental condition compared with the control condition, in the middle insula, superior parietal lobule, right periaqueductal gray, right globus pallidus (medial), right caudate nucleus (dorsal head), left ventral tegmental area and left rostro-dorsal anterior cingulate cortex. These results suggest that unconditional love is mediated by a distinct neural network relative to that mediating other emotions. This network contains cerebral structures known to be involved in romantic love or maternal love. Some of these structures represent key components of the brain's reward system.",
"title": ""
},
{
"docid": "c5f749c36b3d8af93c96bee59f78efe5",
"text": "INTRODUCTION\nMolecular diagnostics is a key component of laboratory medicine. Here, the authors review key triggers of ever-increasing automation in nucleic acid amplification testing (NAAT) with a focus on specific automated Polymerase Chain Reaction (PCR) testing and platforms such as the recently launched cobas® 6800 and cobas® 8800 Systems. The benefits of such automation for different stakeholders including patients, clinicians, laboratory personnel, hospital administrators, payers, and manufacturers are described. Areas Covered: The authors describe how molecular diagnostics has achieved total laboratory automation over time, rivaling clinical chemistry to significantly improve testing efficiency. Finally, the authors discuss how advances in automation decrease the development time for new tests enabling clinicians to more readily provide test results. Expert Commentary: The advancements described enable complete diagnostic solutions whereby specific test results can be combined with relevant patient data sets to allow healthcare providers to deliver comprehensive clinical recommendations in multiple fields ranging from infectious disease to outbreak management and blood safety solutions.",
"title": ""
},
{
"docid": "87eb54a981fca96475b73b3dfa99b224",
"text": "Cost-Sensitive Learning is a type of learning in data mining that takes the misclassification costs (and possibly other types of cost) into consideration. The goal of this type of learning is to minimize the total cost. The key difference between cost-sensitive learning and cost-insensitive learning is that cost-sensitive learning treats the different misclassifications differently. Costinsensitive learning does not take the misclassification costs into consideration. The goal of this type of learning is to pursue a high accuracy of classifying examples into a set of known classes.",
"title": ""
}
] | scidocsrr |