query_id: string (length 32)
query: string (length 5 to 5.38k)
positive_passages: list (1 to 23 items)
negative_passages: list (9 to 100 items)
subset: string (7 classes)
62b8ef39d2ec05c9aee2b4445c1e5c4e
A Large-Displacement 3-DOF Flexure Parallel Mechanism with Decoupled Kinematics Structure
[ { "docid": "f7f90e224c71091cc3e6356ab1ec0ea5", "text": "A new two-degrees-of-freedom (2-DOF) compliant parallel micromanipulator (CPM) utilizing flexure joints has been proposed for two-dimensional (2-D) nanomanipulation in this paper. The system is developed by a careful design and proper selection of electrical and mechanical components. Based upon the developed PRB model, both the position and velocity kinematic modelings have been performed in details, and the CPM's workspace area is determined analytically in view of the physical constraints imposed by pizeo-actuators and flexure hinges. Moreover, in order to achieve a maximum workspace subjected to the given dexterity indices, kinematic optimization of the design parameters has been carried out, which leads to a manipulator satisfying the requirement of this work. Simulation results reveal that the designed CPM can perform a high dexterous manipulation within its workspace.", "title": "" } ]
[ { "docid": "816575ea7f7903784abba96180190ea3", "text": "The decision tree output of Quinlan's ID3 algorithm is one of its major weaknesses. Not only can it be incomprehensible and difficult to manipulate, but its use in expert systems frequently demands irrelevant information to be supplied. This report argues that the problem lies in the induction algorithm itself and can only be remedied by radically altering the underlying strategy. It describes a new algorithm, PRISM which, although based on ID3, uses a different induction strategy to induce rules which are modular, thus avoiding many of the problems associated with decision trees.", "title": "" }, { "docid": "59daeea2c602a1b1d64bae95185f9505", "text": "Traumatic brain injury (TBI) triggers endoplasmic reticulum (ER) stress and impairs autophagic clearance of damaged organelles and toxic macromolecules. In this study, we investigated the effects of the post-TBI administration of docosahexaenoic acid (DHA) on improving hippocampal autophagy flux and cognitive functions of rats. TBI was induced by cortical contusion injury in Sprague–Dawley rats, which received DHA (16 mg/kg in DMSO, intraperitoneal administration) or vehicle DMSO (1 ml/kg) with an initial dose within 15 min after the injury, followed by a daily dose for 3 or 7 days. First, RT-qPCR reveals that TBI induced a significant elevation in expression of autophagy-related genes in the hippocampus, including SQSTM1/p62 (sequestosome 1), lysosomal-associated membrane proteins 1 and 2 (Lamp1 and Lamp2), and cathepsin D (Ctsd). Upregulation of the corresponding autophagy-related proteins was detected by immunoblotting and immunostaining. In contrast, the DHA-treated rats did not exhibit the TBI-induced autophagy biogenesis and showed restored CTSD protein expression and activity. T2-weighted images and diffusion tensor imaging (DTI) of ex vivo brains showed that DHA reduced both gray matter and white matter damages in cortical and hippocampal tissues. DHA-treated animals performed better than the vehicle control group on the Morris water maze test. Taken together, these findings suggest that TBI triggers sustained stimulation of autophagy biogenesis, autophagy flux, and lysosomal functions in the hippocampus. Swift post-injury DHA administration restores hippocampal lysosomal biogenesis and function, demonstrating its therapeutic potential.", "title": "" }, { "docid": "3732f96144d7f28c88670dd63aff63a1", "text": "The problem of defining and classifying power system stability has been addressed by several previous CIGRE and IEEE Task Force reports. These earlier efforts, however, do not completely reflect current industry needs, experiences and understanding. In particular, the definitions are not precise and the classifications do not encompass all practical instability scenarios. This report developed by a Task Force, set up jointly by the CIGRE Study Committee 38 and the IEEE Power System Dynamic Performance Committee, addresses the issue of stability definition and classification in power systems from a fundamental viewpoint and closely examines the practical ramifications. The report aims to define power system stability more precisely, provide a systematic basis for its classification, and discuss linkages to related issues such as power system reliability and security.", "title": "" }, { "docid": "50d0b1e141bcea869352c9b96b0b2ad5", "text": "In this paper we present the features of a Question/Answering (Q/A) system that had unparalleled performance in the TREC-9 evaluations. 
We explain the accuracy of our system through the unique characteristics of its architecture: (1) usage of a wide-coverage answer type taxonomy; (2) repeated passage retrieval; (3) lexico-semantic feedback loops; (4) extraction of the answers based on machine learning techniques; and (5) answer caching. Experimental results show the effects of each feature on the overall performance of the Q/A system and lead to general conclusions about Q/A from large text collections.", "title": "" }, { "docid": "b9400c6d317f60dc324877d3a739fd17", "text": "The present article presents a tutorial on how to estimate and interpret various effect sizes. The 5th edition of the Publication Manual of the American Psychological Association (2001) described the failure to report effect sizes as a “defect” (p. 5), and 23 journals have published author guidelines requiring effect size reporting. Although dozens of effect size statistics have been available for some time, many researchers were trained at a time when effect sizes were not emphasized, or perhaps even taught. Consequently, some readers may appreciate a review of how to estimate and interpret various effect sizes. In addition to the tutorial, the authors recommend effect size interpretations that emphasize direct and explicit comparisons of effects in a new study with those reported in the prior related literature, with a focus on evaluating result replicability.", "title": "" }, { "docid": "d1c2936521b0a3270163ea4d9123e4da", "text": "Large-scale instance-level image retrieval aims at retrieving specific instances of objects or scenes. Simultaneously retrieving multiple objects in a test image adds to the difficulty of the problem, especially if the objects are visually similar. This paper presents an efficient approach for per-exemplar multi-label image classification, which targets the recognition and localization of products in retail store images. We achieve runtime efficiency through the use of discriminative random forests, deformable dense pixel matching and genetic algorithm optimization. Cross-dataset recognition is performed, where our training images are taken in ideal conditions with only one single training image per product label, while the evaluation set is taken using a mobile phone in real-life scenarios in completely different conditions. In addition, we provide a large novel dataset and labeling tools for products image search, to motivate further research efforts on multi-label retail products image classification. The proposed approach achieves promising results in terms of both accuracy and runtime efficiency on 680 annotated images of our dataset, and 885 test images of GroZi-120 dataset. We make our dataset of 8350 different product images and the 680 test images from retail stores with complete annotations available to the wider community.", "title": "" }, { "docid": "5db123f7b584b268f908186c67d3edcb", "text": "From the point of view of a programmer, the robopsychology is a synonym for the activity is done by developers to implement their machine learning applications. This robopsychological approach raises some fundamental theoretical questions of machine learning. Our discussion of these questions is constrained to Turing machines. Alan Turing had given an algorithm (aka the Turing Machine) to describe algorithms. If it has been applied to describe itself then this brings us to Turing’s notion of the universal machine. In the present paper, we investigate algorithms to write algorithms. 
From a pedagogy point of view, this way of writing programs can be considered as a combination of learning by listening and learning by doing due to it is based on applying agent technology and machine learning. As the main result we introduce the problem of learning and then we show that it cannot easily be handled in reality therefore it is reasonable to use machine learning algorithm for learning Turing machines.", "title": "" }, { "docid": "fc3aeb32f617f7a186d41d56b559a2aa", "text": "Existing neural relation extraction (NRE) models rely on distant supervision and suffer from wrong labeling problems. In this paper, we propose a novel adversarial training mechanism over instances for relation extraction to alleviate the noise issue. As compared with previous denoising methods, our proposed method can better discriminate those informative instances from noisy ones. Our method is also efficient and flexible to be applied to various NRE architectures. As shown in the experiments on a large-scale benchmark dataset in relation extraction, our denoising method can effectively filter out noisy instances and achieve significant improvements as compared with the state-of-theart models.", "title": "" }, { "docid": "66d5101d55595754add37e9e50952056", "text": "The cognitive neural prosthetic (CNP) is a very versatile method for assisting paralyzed patients and patients with amputations. The CNP records the cognitive state of the subject, rather than signals strictly related to motor execution or sensation. We review a number of high-level cortical signals and their application for CNPs, including intention, motor imagery, decision making, forward estimation, executive function, attention, learning, and multi-effector movement planning. CNPs are defined by the cognitive function they extract, not the cortical region from which the signals are recorded. However, some cortical areas may be better than others for particular applications. Signals can also be extracted in parallel from multiple cortical areas using multiple implants, which in many circumstances can increase the range of applications of CNPs. The CNP approach relies on scientific understanding of the neural processes involved in cognition, and many of the decoding algorithms it uses also have parallels to underlying neural circuit functions. 169 A nn u. R ev . P sy ch ol . 2 01 0. 61 :1 69 -1 90 . D ow nl oa de d fr om a rj ou rn al s. an nu al re vi ew s. or g by C al if or ni a In st itu te o f T ec hn ol og y on 0 1/ 03 /1 0. F or p er so na l u se o nl y. ANRV398-PS61-07 ARI 17 November 2009 19:51 Cognitive neural prosthetics (CNPs): instruments that consist of an array of electrodes, a decoding algorithm, and an external device controlled by the processed cognitive signal Decoding algorithms: computer algorithms that interpret neural signals for the purposes of understanding their function or for providing control signals to machines", "title": "" }, { "docid": "b43c4d5d97120963a3ea84a01d029819", "text": "Research into the translation of the output of automatic speech recognition (ASR) systems is hindered by the dearth of datasets developed for that explicit purpose. For SpanishEnglish translation, in particular, most parallel data available exists only in vastly different domains and registers. 
In order to support research on cross-lingual speech applications, we introduce the Fisher and Callhome Spanish-English Speech Translation Corpus, supplementing existing LDC audio and transcripts with (a) ASR 1-best, lattice, and oracle output produced by the Kaldi recognition system and (b) English translations obtained on Amazon's Mechanical Turk. The result is a four-way parallel dataset of Spanish audio, transcriptions, ASR lattices, and English translations of approximately 38 hours of speech, with defined training, development, and held-out test sets. We conduct baseline machine translation experiments using models trained on the provided training data, and validate the dataset by corroborating a number of known results in the field, including the utility of in-domain (information, conversational) training data, increased performance translating lattices (instead of recognizer 1-best output), and the relationship between word error rate and BLEU score.", "title": "" }, { "docid": "1b347401820c826db444cc3580bde210", "text": "Utilization of Natural Fibers in Plastic Composites: Problems and Opportunities Roger M. Rowell, Anand R. Sanadi, Daniel F. Caulfield and Rodney E. Jacobson Forest Products Laboratory, USDA, One Gifford Pinchot Drive, Madison, WI 53705 Department of Forestry, 1630 Linden Drive, University of Wisconsin, WI 53706 recycled. Results suggest that agro-based fibers are a viable alternative to inorganic/material based reinforcing fibers in commodity fiber-thermoplastic composite materials as long as the right processing conditions are used and for applications where higher water absorption may not be so critical. These renewable fibers have low densities and high specific properties, and their non-abrasive nature permits a high volume of filling in the composite. Kenaf fibers, for example, have excellent specific properties and have potential to be outstanding reinforcing fillers in plastics. In our experiments, several types of natural fibers were blended with polypropylene (PP) and then injection molded, with the fiber weight fractions varying to 60%. A compatibilizer or a coupling agent was used to improve the interaction and adhesion between the non-polar matrix and the polar lignocellulosic fibers. The specific tensile and flexural moduli of a 50% by weight (39% by volume) kenaf-PP composite compare favorably with 40% by weight of glass fiber (19% by volume)-PP injection molded composites. Furthermore, preliminary results suggest that natural fiber-PP composites can be reground and", "title": "" }, { "docid": "701ddde2a7ff66c6767a2978ce7293f2", "text": "Epigenetics is the study of heritable changes in gene expression that do not involve changes to the underlying DNA sequence, i.e. a change in phenotype not caused by a change in genotype. At least three main factors seem responsible for epigenetic change, including DNA methylation, histone modification and non-coding RNA, each one sharing the same property of affecting the dynamics of the chromatin structure by acting on nucleosome positions. A nucleosome is a DNA-histone complex, where around 150 base pairs of double-stranded DNA are wrapped. The role of nucleosomes is to pack the DNA into the nucleus of eukaryote cells, to form the chromatin. Nucleosome positioning plays an important role in gene regulation, and several studies show that distinct DNA sequence features have been identified to be associated with nucleosome presence. 
Starting from this suggestion, the identification of nucleosomes on a genomic scale has been successfully performed by DNA sequence feature representation and classical supervised classification methods such as Support Vector Machines, Logistic regression and so on. Taking into consideration the successful application of deep neural networks to several challenging classification problems, in this paper we want to study how deep learning networks can help in the identification of nucleosomes.", "title": "" }, { "docid": "e4ce5d47a095fcdadbe5c16bb90445d4", "text": "Artificial neural network (ANN) has been widely applied in flood forecasting and achieved good results. However, it still cannot go beyond one or two hidden layers because of the problematic non-convex optimization. This paper proposes a deep learning approach that integrates stacked autoencoders (SAE) and back propagation neural networks (BPNN) for the prediction of stream flow, which simultaneously takes advantage of the powerful feature representation capability of SAE and the superior predicting capacity of BPNN. To further improve the non-linearity simulation capability, we first classify all the data into several categories by K-means clustering. Then, multiple SAE-BP modules are adopted to simulate their corresponding categories of data. The proposed approach is compared with the support-vector-machine (SVM) model, the BP neural network model, the RBF neural network model and the extreme learning machine (ELM) model. The experimental results show that the SAE-BP integrated algorithm performs much better than the other benchmarks.", "title": "" }, { "docid": "348f9c689c579cf07085b6e263c53ff5", "text": "Over recent years, interest has been growing in Bitcoin, an innovation which has the potential to play an important role in e-commerce and beyond. The aim of our paper is to provide a comprehensive empirical study of the payment and investment features of Bitcoin and their implications for the conduct of e-commerce. Since network externality theory suggests that the value of a network and its take-up are interlinked, we investigate both adoption and price formation. We discover that Bitcoin returns are driven primarily by its popularity, the sentiment expressed in newspaper reports on the cryptocurrency, and the total number of transactions. The paper also reports on the first global survey of merchants who have adopted this technology and models the share of sales paid for with this alternative currency, using both ordinary and Tobit regressions. Our analysis examines how country, customer and company-specific characteristics interact with the proportion of sales attributed to Bitcoin. We find that company features, use of other payment methods, customers' knowledge about Bitcoin, as well as the size of both the official and unofficial economy are significant determinants. The results presented allow a better understanding of the practical and theoretical ramifications of this innovation.", "title": "" }, { "docid": "1c079b53b0967144a183f65a16e10158", "text": "Android has provided dynamic code loading (DCL) since API level one. DCL allows an app developer to load additional code at runtime. DCL raises numerous challenges with regards to security and accountability analysis of apps. 
While previous studies have investigated DCL on Android, in this paper we formulate and answer three critical questions that are missing from previous studies: (1) Where does the loaded code come from (remotely fetched or locally packaged), and who is the responsible entity to invoke its functionality? (2) In what ways is DCL utilized to harden mobile apps, specifically, application obfuscation? (3) What are the security risks and implications that can be found from DCL in off-the-shelf apps? We design and implement DYDROID, a system which uses both dynamic and static analysis to analyze dynamically loaded code. Dynamic analysis is used to automatically exercise apps, capture DCL behavior, and intercept the loaded code. Static analysis is used to investigate malicious behavior and privacy leakage in that dynamically loaded code. We have used DYDROID to analyze over 46K apps with little manual intervention, allowing us to conduct a large-scale measurement to investigate five aspects of DCL, such as source identification, malware detection, vulnerability analysis, obfuscation analysis, and privacy tracking analysis. We have several interesting findings. (1) 27 apps are found to violate the content policy of Google Play by executing code downloaded from remote servers. (2) We determine the distribution, pros/cons, and implications of several common obfuscation methods, including DEX encryption/loading. (3) DCL’s stealthiness enables it to be a channel to deploy malware, and we find 87 apps loading malicious binaries which are not detected by existing antivirus tools. (4) We found 14 apps that are vulnerable to code injection attacks due to dynamically loading code which is writable by other apps. (5) DCL is mainly used by third-party SDKs, meaning that app developers may not know what sort of sensitive functionality is injected into their apps.", "title": "" }, { "docid": "f5658fe48ecc31e72fbfbcb12f843a44", "text": "PURPOSE OF REVIEW\nThe current review discusses the integration of guideline and evidence-based palliative care into heart failure end-of-life (EOL) care.\n\n\nRECENT FINDINGS\nNorth American and European heart failure societies recommend the integration of palliative care into heart failure programs. Advance care planning, shared decision-making, routine measurement of symptoms and quality of life and specialist palliative care at heart failure EOL are identified as key components to an effective heart failure palliative care program. There is limited evidence to support the effectiveness of the individual elements. However, results from the palliative care in heart failure trial suggest an integrated heart failure palliative care program can significantly improve quality of life for heart failure patients at EOL.\n\n\nSUMMARY\nIntegration of a palliative approach to heart failure EOL care helps to ensure patients receive the care that is congruent with their values, wishes and preferences. Specialist palliative care referrals are limited to those who are truly at heart failure EOL.", "title": "" }, { "docid": "c88f3c3b6bf8ad80b20216caf1a7cad6", "text": "This study examined the effects of heavy resistance training on physiological acute exercise-induced fatigue (5 × 10 RM leg press) changes after two loading protocols with the same relative intensity (%) (5 × 10 RMRel) and the same absolute load (kg) (5 × 10 RMAbs) as in pretraining in men (n = 12). 
Exercise-induced neuromuscular (maximal strength and muscle power output), acute cytokine and hormonal adaptations (i.e., total and free testosterone, cortisol, growth hormone (GH), insulin-like growth factor-1 (IGF-1), IGF binding protein-3 (IGFBP-3), interleukin-1 receptor antagonist (IL-1ra), IL-1β, IL-6, and IL-10 and metabolic responses (i.e., blood lactate) were measured before and after exercise. The resistance training induced similar acute responses in serum cortisol concentration but increased responses in anabolic hormones of FT and GH, as well as inflammation-responsive cytokine IL-6 and the anti-inflammatory cytokine IL-10, when the same relative load was used. This response was balanced by a higher release of pro-inflammatory cytokines IL-1β and cytokine inhibitors (IL-1ra) when both the same relative and absolute load was used after training. This enhanced hormonal and cytokine response to strength exercise at a given relative exercise intensity after strength training occurred with greater accumulated fatigue and metabolic demand (i.e., blood lactate accumulation). The magnitude of metabolic demand or the fatigue experienced during the resistance exercise session influences the hormonal and cytokine response patterns. Similar relative intensities may elicit not only higher exercise-induced fatigue but also an increased acute hormonal and cytokine response during the initial phase of a resistance training period.", "title": "" }, { "docid": "f4535d47191caaa1e830e5d8fae6e1ba", "text": "Automated Lymph Node (LN) detection is an important clinical diagnostic task but very challenging due to the low contrast of surrounding structures in Computed Tomography (CT) and to their varying sizes, poses, shapes and sparsely distributed locations. State-of-the-art studies show the performance range of 52.9% sensitivity at 3.1 false-positives per volume (FP/vol.), or 60.9% at 6.1 FP/vol. for mediastinal LN, by one-shot boosting on 3D HAAR features. In this paper, we first operate a preliminary candidate generation stage, towards -100% sensitivity at the cost of high FP levels (-40 per patient), to harvest volumes of interest (VOI). Our 2.5D approach consequently decomposes any 3D VOI by resampling 2D reformatted orthogonal views N times, via scale, random translations, and rotations with respect to the VOI centroid coordinates. These random views are then used to train a deep Convolutional Neural Network (CNN) classifier. In testing, the CNN is employed to assign LN probabilities for all N random views that can be simply averaged (as a set) to compute the final classification probability per VOI. We validate the approach on two datasets: 90 CT volumes with 388 mediastinal LNs and 86 patients with 595 abdominal LNs. We achieve sensitivities of 70%/83% at 3 FP/vol. and 84%/90% at 6 FP/vol. in mediastinum and abdomen respectively, which drastically improves over the previous state-of-the-art work.", "title": "" }, { "docid": "da9a6e165744245fd19ab788790c37c9", "text": "Worldwide medicinal use of cannabis is rapidly escalating, despite limited evidence of its efficacy from preclinical and clinical studies. Here we show that cannabidiol (CBD) effectively reduced seizures and autistic-like social deficits in a well-validated mouse genetic model of Dravet syndrome (DS), a severe childhood epilepsy disorder caused by loss-of-function mutations in the brain voltage-gated sodium channel NaV1.1. 
The duration and severity of thermally induced seizures and the frequency of spontaneous seizures were substantially decreased. Treatment with lower doses of CBD also improved autistic-like social interaction deficits in DS mice. Phenotypic rescue was associated with restoration of the excitability of inhibitory interneurons in the hippocampal dentate gyrus, an important area for seizure propagation. Reduced excitability of dentate granule neurons in response to strong depolarizing stimuli was also observed. The beneficial effects of CBD on inhibitory neurotransmission were mimicked and occluded by an antagonist of GPR55, suggesting that therapeutic effects of CBD are mediated through this lipid-activated G protein-coupled receptor. Our results provide critical preclinical evidence supporting treatment of epilepsy and autistic-like behaviors linked to DS with CBD. We also introduce antagonism of GPR55 as a potential therapeutic approach by illustrating its beneficial effects in DS mice. Our study provides essential preclinical evidence needed to build a sound scientific basis for increased medicinal use of CBD.", "title": "" }, { "docid": "d6cb714b47b056e1aea8ef0682f4ae51", "text": "Artificial neural networks are being used with increasing frequency for high dimensional problems of regression or classification. This article provides a tutorial overview of neural networks, focusing on back propagation networks as a method for approximating nonlinear multivariable functions. We explain, from a statistician's vantage point, why neural networks might be attractive and how they compare to other modern regression techniques.", "title": "" } ]
scidocsrr
f90906dea9c0ba01edc93f425e6c9b1d
Deep Learning for Automated Quality Assessment of Color Fundus Images in Diabetic Retinopathy Screening
[ { "docid": "d622cf283f27a32b2846a304c0359c5f", "text": "Reliable verification of image quality of retinal screening images is a prerequisite for the development of automatic screening systems for diabetic retinopathy. A system is presented that can automatically determine whether the quality of a retinal screening image is sufficient for automatic analysis. The system is based on the assumption that an image of sufficient quality should contain particular image structures according to a certain pre-defined distribution. We cluster filterbank response vectors to obtain a compact representation of the image structures found within an image. Using this compact representation together with raw histograms of the R, G, and B color planes, a statistical classifier is trained to distinguish normal from low quality images. The presented system does not require any previous segmentation of the image in contrast with previous work. The system was evaluated on a large, representative set of 1000 images obtained in a screening program. The proposed method, using different feature sets and classifiers, was compared with the ratings of a second human observer. The best system, based on a Support Vector Machine, has performance close to optimal with an area under the ROC curve of 0.9968.", "title": "" } ]
[ { "docid": "bcdf411d631f822e15a0b78396dc55e7", "text": "Exercise-induced ST-segment elevation was correlated with myocardial perfusion abnormalities and coronary artery obstruction in 35 patients. Ten patients (group 1) developed exercise ST elevation in leads without Q waves on the resting ECG. The site of ST elevation corresponded to both a reversible perfusion defect and a severely obstructed coronary artery. Associated ST-segment depression in other leads occurred in seven patients, but only one had a second perfusion defect at the site of ST depression. In three of the 10 patients, abnormal left ventricular wall motion at the site of exercise-induced ST elevation was demonstrated by ventriculography. Twenty-five patients (group 2) developed exercise ST elevation in leads with Q waves on the resting ECG. The site ofST elevation corresponded to severe coronary artery stenosis and a thallium perfusion defect that persisted on the 4-hour scan (constant in 12 patients, decreased in 13). Associated ST depression in other leads occurred in 11 patients and eight (73%) had a second perfusion defect at the site of ST depression. In all 25 patients with previous transmural infarction, abnormal left ventricular wall motion at the site of the Q waves was shown by ventriculography. In patients without previous myocardial infarction, the site of exercise-induced ST-segment elevation indicates the site of severe transient myocardial ischemia, and associated ST depression is usually reciprocal. In patients with Q waves on the resting ECG, exercise ST elevation way be due to peri-infarctional ischemia, abnormal ventricular wall motion or both. Exercise ST-segment depression may be due to a second area of myocardial ischemia rather than being reciprocal to ST elevation.", "title": "" }, { "docid": "65500c886a91a58ac95365c1e8539902", "text": "This introductory overview tutorial on social network analysis (SNA) demonstrates through theory and practical case studies applications to research, particularly on social media, digital interaction and behavior records. NodeXL provides an entry point for non-programmers to access the concepts and core methods of SNA and allows anyone who can make a pie chart to now build, analyze and visualize complex networks.", "title": "" }, { "docid": "488c52d028d18227f456cb3383784d05", "text": "For smart grid execution, one of the most important requirements is fast, precise, and efficient synchronized measurements, which are possible by phasor measurement unit (PMU). To achieve fully observable network with the least number of PMUs, optimal placement of PMU (OPP) is crucial. In trying to achieve OPP, priority may be given at critical buses, generator buses, or buses that are meant for future extension. Also, different applications will have to be kept in view while prioritizing PMU placement. Hence, OPP with multiple solutions (MSs) can offer better flexibility for different placement strategies as it can meet the best solution based on the requirements. To provide MSs, an effective exponential binary particle swarm optimization (EBPSO) algorithm is developed. In this algorithm, a nonlinear inertia-weight-coefficient is used to improve the searching capability. To incorporate previous position of particle, two innovative mathematical equations that can update particle's position are formulated. For quick and reliable convergence, two useful filtration techniques that can facilitate MSs are applied. Single mutation operator is conditionally applied to avoid stagnation. 
The EBPSO algorithm is so developed that it can provide MSs for various practical contingencies, such as single PMU outage and single line outage for different systems.", "title": "" }, { "docid": "ec332042fb49c5628ea2398e185bb369", "text": "This paper describes a least squares (LS) channel estimation scheme for multiple-input multiple-output (MIMO) orthogonal frequency division multiplexing (OFDM) systems based on pilot tones. We first compute the mean square error (MSE) of the LS channel estimate. We then derive optimal pilot sequences and optimal placement of the pilot tones with respect to this MSE. It is shown that the optimal pilot sequences are equipowered, equispaced, and phase shift orthogonal. To reduce the training overhead, an LS channel estimation scheme over multiple OFDM symbols is also discussed. Moreover, to enhance channel estimation, a recursive LS (RLS) algorithm is proposed, for which we derive the optimal forgetting or tracking factor. This factor is found to be a function of both the noise variance and the channel Doppler spread. Through simulations, it is shown that the optimal pilot sequences derived in this paper outperform both the orthogonal and random pilot sequences. It is also shown that a considerable gain in signal-to-noise ratio (SNR) can be obtained by using the RLS algorithm, especially in slowly time-varying channels.", "title": "" }, { "docid": "0e672586c4be2e07c3e794ed1bb3443d", "text": "In this thesis, the multi-category dataset has been incorporated with the robust feature descriptor using the scale invariant feature transform (SIFT), SURF and FREAK along with the multi-category enabled support vector machine (mSVM). The multi-category support vector machine (mSVM) has been designed with the iterative phases to make it able to work with the multi-category dataset. The mSVM represents the training samples of main class as the primary class in every iterative phase and all other training samples are categorized as the secondary class for the support vector machine classification. The proposed model is made capable of working with the variations in the indoor scene image dataset, which are noticed in the form of the color, texture, light, image orientation, occlusion and color illuminations. Several experiments have been conducted over the proposed model for the performance evaluation of the indoor scene recognition system in the proposed model. The results of the proposed model have been obtained in the form of the various performance parameters of statistical errors, precision, recall, F1-measure and overall accuracy. The proposed model has clearly outperformed the existing models in the terms of the overall accuracy. The proposed model improvement has been recorded higher than ten percent for all of the evaluated parameters against the existing models based upon SURF, FREAK, etc.", "title": "" }, { "docid": "cbcb20173f4e012253c51020932e75a6", "text": "We investigate methods for combining multiple selfsupervised tasks—i.e., supervised tasks where data can be collected without manual labeling—in order to train a single visual representation. First, we provide an apples-toapples comparison of four different self-supervised tasks using the very deep ResNet-101 architecture. We then combine tasks to jointly train a network. We also explore lasso regularization to encourage the network to factorize the information in its representation, and methods for “harmonizing” network inputs in order to learn a more unified representation. 
We evaluate all methods on ImageNet classification, PASCAL VOC detection, and NYU depth prediction. Our results show that deeper networks work better, and that combining tasks—even via a na¨ýve multihead architecture—always improves performance. Our best joint network nearly matches the PASCAL performance of a model pre-trained on ImageNet classification, and matches the ImageNet network on NYU depth prediction.", "title": "" }, { "docid": "019ee0840b91f97a3acc3411edadcade", "text": "Despite the many solutions proposed by industry and the research community to address phishing attacks, this problem continues to cause enormous damage. Because of our inability to deter phishing attacks, the research community needs to develop new approaches to anti-phishing solutions. Most of today's anti-phishing technologies focus on automatically detecting and preventing phishing attacks. While automation makes anti-phishing tools user-friendly, automation also makes them suffer from false positives, false negatives, and various practical hurdles. As a result, attackers often find simple ways to escape automatic detection.\n This paper presents iTrustPage - an anti-phishing tool that does not rely completely on automation to detect phishing. Instead, iTrustPage relies on user input and external repositories of information to prevent users from filling out phishing Web forms. With iTrustPage, users help to decide whether or not a Web page is legitimate. Because iTrustPage is user-assisted, iTrustPage avoids the false positives and the false negatives associated with automatic phishing detection. We implemented iTrustPage as a downloadable extension to FireFox. After being featured on the Mozilla website for FireFox extensions, iTrustPage was downloaded by more than 5,000 users in a two week period. We present an analysis of our tool's effectiveness and ease of use based on our examination of usage logs collected from the 2,050 users who used iTrustPage for more than two weeks. Based on these logs, we find that iTrustPage disrupts users on fewer than 2% of the pages they visit, and the number of disruptions decreases over time.", "title": "" }, { "docid": "3b45dbcb526574cc77f3a099b5a97cd9", "text": "In this paper, we exploit a new multi-country historical dataset on public (government) debt to search for a systemic relationship between high public debt levels, growth and inflation. Our main result is that whereas the link between growth and debt seems relatively weak at “normal” debt levels, median growth rates for countries with public debt over roughly 90 percent of GDP are about one percent lower than otherwise; average (mean) growth rates are several percent lower. Surprisingly, the relationship between public debt and growth is remarkably similar across emerging markets and advanced economies. This is not the case for inflation. We find no systematic relationship between high debt levels and inflation for advanced economies as a group (albeit with individual country exceptions including the United States). By contrast, in emerging market countries, high public debt levels coincide with higher inflation. Our topic would seem to be a timely one. Public debt has been soaring in the wake of the recent global financial maelstrom, especially in the epicenter countries. This should not be surprising, given the experience of earlier severe financial crises. 
Outsized deficits and epic bank bailouts may be useful in fighting a downturn, but what is the long-run macroeconomic impact,", "title": "" }, { "docid": "9cb13d599da25991d11d276aaa76a005", "text": "We propose a quasi real-time method for discrimination of ventricular ectopic beats from both supraventricular and paced beats in the electrocardiogram (ECG). The heartbeat waveforms were evaluated within a fixed-length window around the fiducial points (100 ms before, 450 ms after). Our algorithm was designed to operate with minimal expert intervention and we define that the operator is required only to initially select up to three ‘normal’ heartbeats (the most frequently seen supraventricular or paced complexes). These were named original QRS templates and their copies were substituted continuously throughout the ECG analysis to capture slight variations in the heartbeat waveforms of the patient’s sustained rhythm. The method is based on matching of the evaluated heartbeat with the QRS templates by a complex set of ECG descriptors, including maximal cross-correlation, area difference and frequency spectrum difference. Temporal features were added by analyzing the R-R intervals. The classification criteria were trained by statistical assessment of the ECG descriptors calculated for all heartbeats in MIT-BIH Supraventricular Arrhythmia Database. The performance of the classifiers was tested on the independent MIT-BIH Arrhythmia Database. The achieved unbiased accuracy is represented by sensitivity of 98.4% and specificity of 98.86%, both being competitive to other published studies. The provided computationally efficient techniques enable the fast post-recording analysis of lengthy Holter-monitor ECG recordings, as well as they can serve as a quasi real-time detection method embedded into surface ECG monitors.", "title": "" }, { "docid": "ba2a9451fa1f794c7a819acaa9bc5d82", "text": "In this paper we briefly address DLR’s (German Aerospace Center) background in space robotics by hand of corresponding milestone projects including systems on the International Space Station. We then discuss the key technologies needed for the development of an artificial “robonaut” generation with mechatronic ultra-lightweight arms and multifingered hands. The third arm generation is nearly finished now, approaching the limits of what is technologically achievable today with respect to light weight and power losses. In a similar way DLR’s second generation of artificial four-fingered hands was a big step towards higher reliability, manipulability and overall", "title": "" }, { "docid": "8a6955ee53b9920a7c192143557ddf44", "text": "C utaneous metastases rarely develop in patients having cancer with solid tumors. The reported incidence of cutaneous metastases from a known primary malignancy ranges from 0.6% to 9%, usually appearing 2 to 3 years after the initial diagnosis.1-11 Skin metastases may represent the first sign of extranodal disease in 7.6% of patients with a primary oncologic diagnosis.1 Cutaneous metastases may also be the first sign of recurrent disease after treatment, with 75% of patients also having visceral metastases.2 Infrequently, cutaneous metastases may be seen as the primary manifestation of an undiagnosed malignancy.12 Prompt recognition of such tumors can be of great significance, affecting prognosis and management. 
The initial presentation of cutaneous metastases is frequently subtle and may be overlooked without proper index of suspicion, appearing as multiple or single nodules, plaques, and ulcers, in decreasing order of frequency. Commonly, a painless, mobile, erythematous papule is initially noted, which may enlarge to an inflammatory nodule over time.8 Such lesions may be misdiagnosed as cysts, lipomas, fibromas, or appendageal tumors. Clinical features of cutaneous metastases rarely provide information regarding the primary tumor, although the location of the tumor may be helpful because cutaneous metastases typically manifest in the same geographic region as the initial cancer. The most common primary tumors seen with cutaneous metastases are melanoma, breast, and squamous cell carcinoma of the head and neck.1 Cutaneous metastases are often firm, because of dermal or lymphatic involvement, or erythematous. These features may help rule out some nonvascular entities in the differential diagnosis (eg, cysts and fibromas). The presence of pigment most commonly correlates with cutaneous metastases from melanoma. Given the limited body of knowledge regarding distinct clinical findings, we sought to better elucidate the dermoscopic patterns of cutaneous metastases, with the goal of using this diagnostic tool to help identify these lesions. We describe 20 outpatients with biopsy-proven cutaneous metastases secondary to various underlying primary malignancies. Their clinical presentation is reviewed, emphasizing the dermoscopic findings, as well as the histopathologic correlation.", "title": "" }, { "docid": "855a8cfdd9d01cd65fe32d18b9be4fdf", "text": "Interest in business intelligence and analytics education has begun to attract IS scholars’ attention. In order to discover new research questions, there is a need for conducting a literature review of extant studies on BI&A education. This study identified 44 research papers through using Google Scholar related to BI&A education. This research contributes to the field of BI&A education by (a) categorizing the existing studies on BI&A education into the key five research foci, and (b) identifying the research gaps and providing the guide for future BI&A and IS research.", "title": "" }, { "docid": "f0532446a19fb2fa28a7a01cddca7e37", "text": "The use of rumble strips on roads can provide drivers lane departure warning (LDW). However, rumble strips require an infrastructure and do not exist on a majority of roadways. Therefore, it is very desirable to have an effective in-vehicle LDW system to detect when the driver is in danger of departing the road and then triggers an alarm to warn the driver early enough to take corrective action. This paper presents the development of an image-based LDW system using the Lucas-Kanade (L-K) optical flow and the Hough transform methods. Our approach integrates both techniques to establish an operation algorithm to determine whether a warning signal should be issued based on the status of the vehicle deviating from its heading lane. The L-K optical flow tracking is used when the lane boundaries cannot be detected, while the lane detection technique is used when they become available. Even though both techniques are used in the system, only one method is activated at any given time because each technique has its own advantages and also disadvantages. The developed LDW system was road tested on several rural highways and also one section of the interstate I35 freeway. 
Overall, the system operates correctly as expected with a false alarm occurred only roughly about 1.18% of the operation time. This paper presents the system implementation together with our findings. Key-Words: Lane departure warning, Lucas-Kanade optical flow, Hough transform.", "title": "" }, { "docid": "65b64f338b0126151a5e8dbcd4a9cf33", "text": "This free executive summary is provided by the National Academies as part of our mission to educate the world on issues of science, engineering, and health. If you are interested in reading the full book, please visit us online at http://www.nap.edu/catalog/9728.html . You may browse and search the full, authoritative version for free; you may also purchase a print or electronic version of the book. If you have questions or just want more information about the books published by the National Academies Press, please contact our customer service department toll-free at 888-624-8373.", "title": "" }, { "docid": "502cae1daa2459ed0f826ed3e20c44e4", "text": "Recurrent neural networks (RNNs) have drawn interest from machine learning researchers because of their effectiveness at preserving past inputs for time-varying data processing tasks. To understand the success and limitations of RNNs, it is critical that we advance our analysis of their fundamental memory properties. We focus on echo state networks (ESNs), which are RNNs with simple memoryless nodes and random connectivity. In most existing analyses, the short-term memory (STM) capacity results conclude that the ESN network size must scale linearly with the input size for unstructured inputs. The main contribution of this paper is to provide general results characterizing the STM capacity for linear ESNs with multidimensional input streams when the inputs have common low-dimensional structure: sparsity in a basis or significant statistical dependence between inputs. In both cases, we show that the number of nodes in the network must scale linearly with the information rate and poly-logarithmically with the input dimension. The analysis relies on advanced applications of random matrix theory and results in explicit non-asymptotic bounds on the recovery error. Taken together, this analysis provides a significant step forward in our understanding of the STM properties in RNNs.", "title": "" }, { "docid": "53d07bc7229500295741491aea15f63a", "text": "Unhealthy lifestyle behaviour is driving an increase in the burden of chronic non-communicable diseases worldwide. Recent evidence suggests that poor diet and a lack of exercise contribute to the genesis and course of depression. While studies examining dietary improvement as a treatment strategy in depression are lacking, epidemiological evidence clearly points to diet quality being of importance to the risk of depression. Exercise has been shown to be an effective treatment strategy for depression, but this is not reflected in treatment guidelines, and increased physical activity is not routinely encouraged when managing depression in clinical practice. Recommendations regarding dietary improvement, increases in physical activity and smoking cessation should be routinely given to patients with depression. Specialised and detailed advice may not be necessary. 
Recommendations should focus on following national guidelines for healthy eating and physical activity.", "title": "" }, { "docid": "3d2e82a0353d0b2803a579c413403338", "text": "In 1994, nutritional facts panels became mandatory for processed foods to improve consumer access to nutritional information and to promote healthy food choices. Recent applied work is reviewed here in terms of how consumers value and respond to nutritional labels. We first summarize the health and nutritional links found in the literature and frame this discussion in terms of the obesity policy debate. Second, we discuss several approaches that have been used to empirically investigate consumer responses to nutritional labels: (a) surveys, (b) nonexperimental approaches utilizing revealed preferences, and (c) experiment-based approaches. We conclude with a discussion and suggest avenues of future research. INTRODUCTION How the provision of nutritional information affects consumers’ food choices and whether consumers value nutritional information are particularly pertinent questions in a country where obesity is pervasive. Firms typically have more information about the quality of their products than do consumers, creating a situation of asymmetric information. It is prohibitively costly for most consumers to acquire nutritional information independently of firms. Firms can use this information to signal their quality and to receive quality premiums. However, firms that sell less nutritious products prefer to omit nutritional information. In this market setting, firms may not have an incentive to fully reveal their product quality, may try to highlight certain attributes in their advertising claims while shrouding others (Gabaix & Laibson 2006), or may provide information in a less salient fashion (Chetty et al. 2007). Mandatory nutritional labeling can fill this void of information provision by correcting asymmetric information and transforming an experience-good or a credence-good characteristic into search-good characteristics (Caswell & Mojduszka 1996). Golan et al. (2000) argue that the effectiveness of food labeling depends on firms’ incentives for information provision, government information requirements, and the role of third-party entities in standardizing and certifying the accuracy of the information. Yet nutritional information is valuable only if consumers use it in some fashion. Early advances in consumer choice theory, such as market goods possessing desirable characteristics (Lancaster 1966) or market goods used in conjunction with time to produce desirable commodities (Becker 1965), set the theoretical foundation for studying how market prices, household characteristics, incomes, nutrient content, and taste considerations interact with and influence consumer choice. LaFrance (1983) develops a theoretical framework and estimates the marginal value of nutrient versus taste parameters in an analytical approach that imposes a sufficient degree of restrictions to generality to be empirically feasible. Real or perceived tradeoffs between nutritional and taste or pleasure considerations imply that consumers will not necessarily make healthier choices. Reduced search costs mean that consumers can more easily make choices that maximize their utility. 
Foster & Just (1989) provide a framework in which to analyze the effect of information on consumer choice and welfare in this context. They argue that when consumers are uncertain about product quality, the provision of information can help to better align choices with consumer preferences. However, consumers may not use nutritional labels because consumers still require time and effort to process the information. Reading a nutritional facts panel (NFP), for instance, necessitates that the consumer remove the product from the shelf and turn the product to read the nutritional information on the back or side. In addition, consumers often have difficulty evaluating the information provided on the NFP or how to relate it to a healthy diet. Berning et al. (2008) present a simple model of demand for nutritional information. The consumer chooses to consume goods and information to maximize utility subject to budget and time constraints, which include time to acquire and to process nutritional information. Consumers who have strong preferences for nutritional content will acquire more nutritional information. Alternatively, other consumers may derive more utility from appearance or taste. Following Becker & Murphy (1993), Berning et al. show that nutritional information may act as a complement to the consumption of products with unknown nutritional quality, similar to the way advertisements complement advertised goods. From a policy perspective, the rise in the U.S. obesity rate coupled with the asymmetry of information have resulted in changes in the regulatory environment. The U.S. Food and Drug Administration (FDA) is currently considering a change to the format and content of nutritional labels, originally implemented in 1994 to promote increased label use. Consumers’ general understanding of the link between food consumption and health, and widespread interest in the provision of nutritional information on food labels, is documented in the existing literature (e.g., Williams 2005, Grunert & Wills 2007). Yet only approximately half of consumers claim to use NFPs when making food purchasing decisions (Blitstein & Evans 2006). Moreover, self-reported consumer use of nutritional labels has declined from 1995 to 2006, with the largest decline for younger age groups (20–29 years) and less educated consumers (Todd & Variyam 2008). This decline supports research findings that consumers prefer for short front label claims over the NFP’s lengthy back label explanations (e.g., Levy & Fein 1998, Wansink et al. 2004, Williams 2005, Grunert & Wills 2007). Furthermore, regulatory rules and enforcement policies may have induced firms to move away from reinforcing nutritional claims through advertising (e.g., Ippolito & Pappalardo 2002). Finally, critical media coverage of regulatory challenges (e.g., Nestle 2000) may have contributed to decreased labeling usage over time. Excellent review papers on this topic preceded and inspired this present review (e.g., Baltas 2001, Williams 2005, Drichoutis et al. 2006). In particular, Drichoutis et al. 
(2006) reviews the nutritional labeling literature and addresses specific issues regarding the determinants of label use, the debate on mandatory labeling, label formats preferred by consumers, and the effect of nutritional label use on purchase and dietary behavior. The current review article updates and complements these earlier reviews by focusing on recent work and highlighting major contributions in applied analyses on how consumers value, utilize, and respond to nutritional labels. We first cover the health and nutritional aspects of consumer food choices found in the literature to frame the discussion on nutritional labels in the context of the recent debate on obesity prevention policies. Second, we discuss the different empirical approaches that are utilized to investigate consumers’ response to and valuation of nutritional labels, classifying existing work into three categories according to the empirical strategy and data sources. First, we present findings based on consumer surveys and stated consumer responses to labels. The second set of articles reviewed utilizes nonexperimental data and focuses on estimating consumer valuation of labels on the basis of revealed preferences. Here, the empirical strategy is structural, using hedonic methods, structural demand analyses, or discrete choice models and allowing for estimation of consumers’ willingness to pay (WTP) for nutritional information. The last set of empirical contributions discussed is based on experimental data, differentiating market-level and natural experiments from laboratory evidence. These studies employ mainly reduced-form approaches. Finally, we conclude with a discussion of avenues for future research. CONSUMER FOOD DEMAND, NUTRITIONAL LABELS, AND OBESITY PREVENTION The U.S. Department of Health and Public Services declared the reduction of obesity rates to less than 15% to be one of the national health objectives for 2010, yet in 2009 no state met these targets, with only two states reporting obesity rates less than 20% (CDC 2010). Researchers have studied and identified many contributing factors, such as the decreasing relative price of caloriedense food (Chou et al. 2004) and marketing practices that took advantage of behavioral reactions to food (Smith 2004). Other researchers argue that an increased prevalence of fast food (Cutler et al. 2003) and increased portion sizes in restaurants and at home (Wansink & van Ittersum 2007) may be the driving factors of increased food consumption. In addition, food psychologists have focused on changes in the eating environment, pointing to distractions such as television, books, conversation with others, or preoccupation with work as leading to increased food intake (Wansink 2004). Although each of these factors potentially contributes to the obesity epidemic, they do not necessarily mean that consumers wi", "title": "" }, { "docid": "57bac865d79700350e3b1f2fe9f7a2f7", "text": "This paper presents a novel neural machine translation model which jointly learns translation and source-side latent graph representations of sentences. 
Unlike existing pipelined approaches using syntactic parsers, our end-to-end model learns a latent graph parser as part of the encoder of an attention-based neural machine translation model, and thus the parser is optimized according to the translation objective. In experiments, we first show that our model compares favorably with state-of-the-art sequential and pipelined syntax-based NMT models. We also show that the performance of our model can be further improved by pretraining it with a small amount of treebank annotations. Our final ensemble model significantly outperforms the previous best models on the standard English-to-Japanese translation dataset.", "title": "" } ]
scidocsrr
2a3d81dcfe9827429ff879c5242e12e5
Vital Sign Monitoring Through the Back Using an UWB Impulse Radar With Body Coupled Antennas
[ { "docid": "c70e11160c90bd67caa2294c499be711", "text": "The vital sign monitoring through Impulse Radio Ultra-Wide Band (IR-UWB) radar provides continuous assessment of a patient's respiration and heart rates in a non-invasive manner. In this paper, IR UWB radar is used for monitoring respiration and the human heart rate. The breathing and heart rate frequencies are extracted from the signal reflected from the human body. A Kalman filter is applied to reduce the measurement noise from the vital signal. An algorithm is presented to separate the heart rate signal from the breathing harmonics. An auto-correlation based technique is applied for detecting random body movements (RBM) during the measurement process. Experiments were performed in different scenarios in order to show the validity of the algorithm. The vital signs were estimated for the signal reflected from the chest, as well as from the back side of the body in different experiments. The results from both scenarios are compared for respiration and heartbeat estimation accuracy.", "title": "" } ]
[ { "docid": "1d7035cc5b85e13be6ff932d39740904", "text": "This paper investigates an application of mobile sensing: detection of potholes on roads. We describe a system and an associated algorithm to monitor the pothole conditions on the road. This system, that we call the Pothole Detection System, uses Accelerometer Sensor of Android smartphone for detection of potholes and GPS for plotting the location of potholes on Google Maps. Using a simple machine-learning approach, we show that we are able to identify the potholes from accelerometer data. The pothole detection algorithm detects the potholes in real-time. A runtime graph has been shown with the help of a charting software library ‘AChartEngine’. Accelerometer data and pothole data can be mailed to any email address in the form of a ‘.csv’ file. While designing the pothole detection algorithm we have assumed some threshold values on x-axis and z-axis. These threshold values are justified using a neural network technique which confirms an accuracy of 90%-95%. The neural network has been implemented using a machine learning framework available for Android called ‘Encog’. We evaluate our system on the outputs obtained using two, three and four wheelers. Keywords— Machine Learning, Context, Android, Neural Networks, Pothole, Sensor", "title": "" }, { "docid": "c55cf6c871a681cad112cb9c664a1928", "text": "Splitting of the behavioural activity phase has been found in nocturnal rodents with suprachiasmatic nucleus (SCN) coupling disorder. A similar phenomenon was observed in the sleep phase in the diurnal human discussed here, suggesting that there are so-called evening and morning oscillators in the SCN of humans. The present case suffered from bipolar disorder refractory to various treatments, and various circadian rhythm sleep disorders, such as delayed sleep phase, polyphasic sleep, separation of the sleep bout resembling splitting and circabidian rhythm (48 h), were found during prolonged depressive episodes with hypersomnia. Separation of sleep into evening and morning components and delayed sleep-offset (24.69-h cycle) developed when lowering and stopping the dose of aripiprazole (APZ). However, resumption of APZ improved these symptoms in 2 weeks, accompanied by improvement in the patient's depressive state. Administration of APZ may improve various circadian rhythm sleep disorders, as well as improve and prevent manic-depressive episodes, via augmentation of coupling in the SCN network.", "title": "" }, { "docid": "c83456247c28dd7824e9611f3c59167d", "text": "In this paper, we present a carry skip adder (CSKA) structure that has a higher speed yet lower energy consumption compared with the conventional one. The speed enhancement is achieved by applying concatenation and incrementation schemes to improve the efficiency of the conventional CSKA (Conv-CSKA) structure. In addition, instead of utilizing multiplexer logic, the proposed structure makes use of AND-OR-Invert (AOI) and OR-AND-Invert (OAI) compound gates for the skip logic. The structure may be realized with both fixed stage size and variable stage size styles, wherein the latter further improves the speed and energy parameters of the adder. Finally, a hybrid variable latency extension of the proposed structure, which lowers the power consumption without considerably impacting the speed, is presented. This extension utilizes a modified parallel structure for increasing the slack time, and hence, enabling further voltage reduction. 
The proposed structures are assessed by comparing their speed, power, and energy parameters with those of other adders using a 45-nm static CMOS technology for a wide range of supply voltages. The results that are obtained using HSPICE simulations reveal, on average, 44% and 38% improvements in the delay and energy, respectively, compared with those of the Conv-CSKA. In addition, the power-delay product was the lowest among the structures considered in this paper, while its energy-delay product was almost the same as that of the Kogge-Stone parallel prefix adder with considerably smaller area and power consumption. Simulations on the proposed hybrid variable latency CSKA reveal reduction in the power consumption compared with the latest works in this field while having a reasonably high speed.", "title": "" }, { "docid": "19443768282cf17805e70ac83288d303", "text": "Interactive narrative is a form of storytelling in which users affect a dramatic storyline through actions by assuming the role of characters in a virtual world. This extended abstract outlines the SCHEHERAZADE-IF system, which uses crowdsourcing and artificial intelligence to automatically construct text-based interactive narrative experiences.", "title": "" }, { "docid": "cace842a0c5507ae447e5009fb160592", "text": "UNLABELLED\nDue to the localized surface plasmon (LSP) effect induced by Ag nanoparticles inside black silicon, the optical absorption of black silicon is enhanced dramatically in near-infrared range (1,100 to 2,500 nm). The black silicon with Ag nanoparticles shows much higher absorption than black silicon fabricated by chemical etching or reactive ion etching over ultraviolet to near-infrared (UV-VIS-NIR, 250 to 2,500 nm). The maximum absorption even increased up to 93.6% in the NIR range (820 to 2,500 nm). The high absorption in NIR range makes LSP-enhanced black silicon a potential material used for NIR-sensitive optoelectronic device.\n\n\nPACS\n78.67.Bf; 78.30.Fs; 78.40.-q; 42.70.Gi.", "title": "" }, { "docid": "7db4066e2e6faabe0dfd815cd5b1d66e", "text": "The observed poor quality of graduates of some Nigerian Universities in recent times has been partly traced to inadequacies of the National University Admission Examination System. In this study an Artificial Neural Network (ANN) model, for predicting the likely performance of a candidate being considered for admission into the university was developed and tested. Various factors that may likely influence the performance of a student were identified. Such factors as ordinary level subjects’ scores and subjects’ combination, matriculation examination scores, age on admission, parental background, types and location of secondary school attended and gender, among others, were then used as input variables for the ANN model. A model based on the Multilayer Perceptron Topology was developed and trained using data spanning five generations of graduates from an Engineering Department of University of Ibadan, Nigeria’s first University. Test data evaluation shows that the ANN model is able to correctly predict the performance of more than 70% of prospective students. (", "title": "" }, { "docid": "f7d023abf0f651177497ae38d8494efc", "text": "Developing Question Answering systems has been one of the important research issues because it requires insights from a variety of disciplines, including, Artificial Intelligence, Information Retrieval, Information Extraction, Natural Language Processing, and Psychology. 
In this paper we realize a formal model for a lightweight semantic–based open domain yes/no Arabic question answering system based on paragraph retrieval (with variable length). We propose a constrained semantic representation. Using an explicit unification framework based on semantic similarities and query expansion (synonyms and antonyms). This frequently improves the precision of the system. Employing the passage retrieval system achieves a better precision by retrieving more paragraphs that contain relevant answers to the question; It significantly reduces the amount of text to be processed by the system.", "title": "" }, { "docid": "db5157c6682f281fb0f8ad1285646042", "text": "There are currently very few practical methods for assessin g the quality of resources or the reliability of other entities in the o nline environment. This makes it difficult to make decisions about which resources ca n be relied upon and which entities it is safe to interact with. Trust and repu tation systems are aimed at solving this problem by enabling service consumers to eliably assess the quality of services and the reliability of entities befo r they decide to use a particular service or to interact with or depend on a given en tity. Such systems should also allow serious service providers and online play ers to correctly represent the reliability of themselves and the quality of thei r s rvices. In the case of reputation systems, the basic idea is to let parties rate e ch other, for example after the completion of a transaction, and use the aggreg ated ratings about a given party to derive its reputation score. In the case of tru st systems, the basic idea is to analyse and combine paths and networks of trust rel ationships in order to derive measures of trustworthiness of specific nodes. Rep utation scores and trust measures can assist other parties in deciding whether or not to transact with a given party in the future, and whether it is safe to depend on a given resource or entity. This represents an incentive for good behaviour and for offering reliable resources, which thereby tends to have a positive effect on t he quality of online markets and communities. This chapter describes the backgr ound, current status and future trend of online trust and reputation systems.", "title": "" }, { "docid": "b9a1883e48cc1651d887124a2dee3831", "text": "It is known that local filtering-based edge preserving smoothing techniques suffer from halo artifacts. In this paper, a weighted guided image filter (WGIF) is introduced by incorporating an edge-aware weighting into an existing guided image filter (GIF) to address the problem. The WGIF inherits advantages of both global and local smoothing filters in the sense that: 1) the complexity of the WGIF is O(N) for an image with N pixels, which is same as the GIF and 2) the WGIF can avoid halo artifacts like the existing global smoothing filters. The WGIF is applied for single image detail enhancement, single image haze removal, and fusion of differently exposed images. Experimental results show that the resultant algorithms produce images with better visual quality and at the same time halo artifacts can be reduced/avoided from appearing in the final images with negligible increment on running times.", "title": "" }, { "docid": "2de8df231b5af77cfd141e26fb7a3ace", "text": "A significant challenge for the practical application of reinforcement learning in the real world is the need to specify an oracle reward function that correctly defines a task. 
Inverse reinforcement learning (IRL) seeks to avoid this challenge by instead inferring a reward function from expert behavior. While appealing, it can be impractically expensive to collect datasets of demonstrations that cover the variation common in the real world (e.g. opening any type of door). Thus in practice, IRL must commonly be performed with only a limited set of demonstrations where it can be exceedingly difficult to unambiguously recover a reward function. In this work, we exploit the insight that demonstrations from other tasks can be used to constrain the set of possible reward functions by learning a “prior” that is specifically optimized for the ability to infer expressive reward functions from limited numbers of demonstrations. We demonstrate that our method can efficiently recover rewards from images for novel tasks and provide intuition as to how our approach is analogous to learning a prior.", "title": "" }, { "docid": "e2c2cdb5245b73b7511c434c4901fff8", "text": "Adversarial machine learning in the context of image processing and related applications has received a large amount of attention. However, adversarial machine learning, especially adversarial deep learning, in the context of malware detection has received much less attention despite its apparent importance. In this paper, we present a framework for enhancing the robustness of Deep Neural Networks (DNNs) against adversarial malware samples, dubbed Hashing Transformation Deep Neural Networks (HashTran-DNN). The core idea is to use hash functions with a certain locality-preserving property to transform samples to enhance the robustness of DNNs in malware classification. The framework further uses a Denoising Auto-Encoder (DAE) regularizer to reconstruct the hash representations of samples, making the resulting DNN classifiers capable of attaining the locality information in the latent space. We experiment with two concrete instantiations of the HashTranDNN framework to classify Android malware. Experimental results show that four known attacks can render standard DNNs useless in classifying Android malware, that known defenses can at most defend three of the four attacks, and that HashTran-DNN can effectively defend against all of the four attacks.", "title": "" }, { "docid": "5cc1058a0c88ff15e2992a4d83fdbe3f", "text": "The paper presents a finite-element method-based design and analysis of interior permanent magnet synchronous motor with flux barriers (IPMSMFB). Various parameters of IPMSMFB rotor structure were taken into account at determination of a suitable rotor construction. On the basis of FEM analysis the rotor of IPMSMFB with three-flux barriers was built. Output torque capability and flux weakening performance of IPMSMFB were compared with performances of conventional interior permanent magnet synchronous motor (IPMSM), having the same rotor geometrical dimensions and the same stator construction. The predicted performance of conventional IPMSM and IPMSMFB was confirmed with the measurements over a wide-speed range of constant output power operation.", "title": "" }, { "docid": "af19c558ac6b5b286bc89634a1f05e26", "text": "The SIGIR 2016 workshop on Neural Information Retrieval (Neu-IR) took place on 21 July, 2016 in Pisa. 
The goal of the Neu-IR (pronounced \"New IR\") workshop was to serve as a forum for academic and industrial researchers, working at the intersection of information retrieval (IR) and machine learning, to present new work and early results, compare notes on neural network toolkits, share best practices, and discuss the main challenges facing this line of research. In total, 19 papers were presented, including oral and poster presentations. The workshop program also included a session on invited \"lightning talks\" to encourage participants to share personal insights and negative results with the community. The workshop was well-attended with more than 120 registrations.", "title": "" }, { "docid": "39a394f6c7f42f3a5e1451b0337584ed", "text": "Surveys throughout the world have shown consistently that persons over 65 are far less likely to be victims of crime than younger age groups. However, many elderly people are unduly fearful about crime which has an adverse effect on their quality of life. This Trends and Issues puts this matter into perspective, but also discusses the more covert phenomena of abuse and neglect of the elderly. Our senior citizens have earned the right to live in dignity and without fear: the community as a whole should contribute to this process. Duncan Chappell Director", "title": "" }, { "docid": "42f176b03faacad53ccef0b7573afdc4", "text": "Acquired upper extremity amputations beyond the finger can have substantial physical, psychological, social, and economic consequences for the patient. The hand surgeon is one of a team of specialists in the care of these patients, but the surgeon plays a critical role in the surgical management of these wounds. The execution of a successful amputation at each level of the limb allows maximum use of the residual extremity, with or without a prosthesis, and minimizes the known complications of these injuries. This article reviews current surgical options in performing and managing upper extremity amputations proximal to the finger.", "title": "" }, { "docid": "7347c844cdc0b7e4b365dafcdc9f720c", "text": "Recommender systems are widely used in online applications since they enable personalized service to the users. The underlying collaborative filtering techniques work on user’s data which are mostly privacy sensitive and can be misused by the service provider. To protect the privacy of the users, we propose to encrypt the privacy sensitive data and generate recommendations by processing them under encryption. With this approach, the service provider learns no information on any user’s preferences or the recommendations made. The proposed method is based on homomorphic encryption schemes and secure multiparty computation (MPC) techniques. The overhead of working in the encrypted domain is minimized by packing data as shown in the complexity analysis.", "title": "" }, { "docid": "545f41e1c94a3198e75801da4c39b0da", "text": "When attempting to improve the performance of a deep learning system, there are more or less three approaches one can take: the first is to improve the structure of the model, perhaps adding another layer, switching from simple recurrent units to LSTM cells [4], or–in the realm of NLP–taking advantage of syntactic parses (e.g. 
as in [13, et seq.]); another approach is to improve the initialization of the model, guaranteeing that the early-stage gradients have certain beneficial properties [3], or building in large amounts of sparsity [6], or taking advantage of principles of linear algebra [15]; the final approach is to try a more powerful learning algorithm, such as including a decaying sum over the previous gradients in the update [12], by dividing each parameter update by the L2 norm of the previous updates for that parameter [2], or even by foregoing first-order algorithms for more powerful but more computationally costly second order algorithms [9]. This paper has as its goal the third option—improving the quality of the final solution by using a faster, more powerful learning algorithm.", "title": "" }, { "docid": "8c80129507b138d1254e39acfa9300fc", "text": "Motivation\nText mining has become an important tool for biomedical research. The most fundamental text-mining task is the recognition of biomedical named entities (NER), such as genes, chemicals and diseases. Current NER methods rely on pre-defined features which try to capture the specific surface properties of entity types, properties of the typical local context, background knowledge, and linguistic information. State-of-the-art tools are entity-specific, as dictionaries and empirically optimal feature sets differ between entity types, which makes their development costly. Furthermore, features are often optimized for a specific gold standard corpus, which makes extrapolation of quality measures difficult.\n\n\nResults\nWe show that a completely generic method based on deep learning and statistical word embeddings [called long short-term memory network-conditional random field (LSTM-CRF)] outperforms state-of-the-art entity-specific NER tools, and often by a large margin. To this end, we compared the performance of LSTM-CRF on 33 data sets covering five different entity classes with that of best-of-class NER tools and an entity-agnostic CRF implementation. On average, F1-score of LSTM-CRF is 5% above that of the baselines, mostly due to a sharp increase in recall.\n\n\nAvailability and implementation\nThe source code for LSTM-CRF is available at https://github.com/glample/tagger and the links to the corpora are available at https://corposaurus.github.io/corpora/ .\n\n\nContact\nhabibima@informatik.hu-berlin.de.", "title": "" }, { "docid": "55eb5594f05319c157d71361880f1983", "text": "Following the growing share of wind energy in electric power systems, several wind power forecasting techniques have been reported in the literature in recent years. In this paper, a wind power forecasting strategy composed of a feature selection component and a forecasting engine is proposed. The feature selection component applies an irrelevancy filter and a redundancy filter to the set of candidate inputs. The forecasting engine includes a new enhanced particle swarm optimization component and a hybrid neural network. The proposed wind power forecasting strategy is applied to real-life data from wind power producers in Alberta, Canada and Oklahoma, U.S. The presented numerical results demonstrate the efficiency of the proposed strategy, compared to some other existing wind power forecasting methods.", "title": "" }, { "docid": "d7538c23aa43edce6cfde8f2125fd3bb", "text": "We propose a holographic-laser-drawing volumetric display using a computer-generated hologram displayed on a liquid crystal spatial light modulator and multilayer fluorescent screen. 
The holographic-laser-drawing technique has enabled three things: (i) increasing the number of voxels of the volumetric graphics per unit time; (ii) increasing the total input energy to the volumetric display, because the maximum energy incident at a point in the multilayer fluorescent screen is limited by the damage threshold; (iii) controlling the size, shape and spatial position of voxels. In this paper, we demonstrated (i) and (ii). The multilayer fluorescent screen was newly developed to display colored voxels. The thin layer construction of the multilayer fluorescent screen minimized the axial length of the voxels. A two-color volumetric display with blue-green voxels and red voxels was demonstrated.", "title": "" } ]
scidocsrr
1cf79b316f5fa8001a961a72d59179b6
Beyond the Prince: Race and Gender Role Portrayal in
[ { "docid": "b4dcc5c36c86f9b1fef32839d3a1484d", "text": "The popular Disney Princess line includes nine films (e.g., Snow White, Beauty and the Beast) and over 25,000 marketable products. Gender role depictions of the prince and princess characters were examined with a focus on their behavioral characteristics and climactic outcomes in the films. Results suggest that the prince and princess characters differ in their portrayal of traditionally masculine and feminine characteristics, these gender role portrayals are complex, and trends towards egalitarian gender roles are not linear over time. Content coding analyses demonstrate that all of the movies portray some stereotypical representations of gender, including the most recent film, The Princess and the Frog. Although both the male and female roles have changed over time in the Disney Princess line, the male characters exhibit more androgyny throughout and less change in their gender role portrayals.", "title": "" } ]
[ { "docid": "c4183c8b08da8d502d84a650d804cac8", "text": "A three-phase current source gate turn-off (GTO) thyristor rectifier is described with a high power factor, low line current distortion, and a simple main circuit. It adopts pulse-width modulation (PWM) control techniques obtained by analyzing the PWM patterns of three-phase current source rectifiers/inverters, and it uses a method of generating such patterns. In addition, by using an optimum set-up of the circuit constants, the GTO switching frequency is reduced to 500 Hz. This rectifier is suitable for large power conversion, because it can reduce GTO switching loss and its snubber loss.<<ETX>>", "title": "" }, { "docid": "e066f0670583195b9ad2f3c888af1dd2", "text": "Deep learning has received much attention as of the most powerful approaches for multimodal representation learning in recent years. An ideal model for multimodal data can reason about missing modalities using the available ones, and usually provides more information when multiple modalities are being considered. All the previous deep models contain separate modality-specific networks and find a shared representation on top of those networks. Therefore, they only consider high level interactions between modalities to find a joint representation for them. In this paper, we propose a multimodal deep learning framework (MDLCW) that exploits the cross weights between representation of modalities, and try to gradually learn interactions of the modalities in a deep network manner (from low to high level interactions). Moreover, we theoretically show that considering these interactions provide more intra-modality information, and introduce a multi-stage pre-training method that is based on the properties of multi-modal data. In the proposed framework, as opposed to the existing deep methods for multi-modal data, we try to reconstruct the representation of each modality at a given level, with representation of other modalities in the previous layer. Extensive experimental results show that the proposed model outperforms state-of-the-art information retrieval methods for both image and text queries on the PASCAL-sentence and SUN-Attribute databases.", "title": "" }, { "docid": "932813bc4a6ccbb81c9a9698b96f3694", "text": "The fast growing deep learning technologies have become the main solution of many machine learning problems for medical image analysis. Deep convolution neural networks (CNNs), as one of the most important branch of the deep learning family, have been widely investigated for various computer-aided diagnosis tasks including long-term problems and continuously emerging new problems. Image contour detection is a fundamental but challenging task that has been studied for more than four decades. Recently, we have witnessed the significantly improved performance of contour detection thanks to the development of CNNs. Beyond purusing performance in existing natural image benchmarks, contour detection plays a particularly important role in medical image analysis. Segmenting various objects from radiology images or pathology images requires accurate detection of contours. However, some problems, such as discontinuity and shape constraints, are insufficiently studied in CNNs. It is necessary to clarify the challenges to encourage further exploration. The performance of CNN based contour detection relies on the state-of-the-art CNN architectures. Careful investigation of their design principles and motivations is critical and beneficial to contour detection. 
In this paper, we first review recent development of medical image contour detection and point out the current confronting challenges and problems. We discuss the development of general CNNs and their applications in image contours (or edges) detection. We compare those methods in detail, clarify their strengthens and weaknesses. Then we review their recent applications in medical image analysis and point out limitations, with the goal to light some potential directions in medical image analysis. We expect the paper to cover comprehensive technical ingredients of advanced CNNs to enrich the study in the medical image domain. 1E-mail: zizhaozhang@ufl.edu Preprint submitted to arXiv August 26, 2018 ar X iv :1 70 8. 07 28 1v 1 [ cs .C V ] 2 4 A ug 2 01 7", "title": "" }, { "docid": "2b68a925b9056e150a67d794b993e7c7", "text": "The rise and development of O2O e-commerce has brought new opportunities for the enterprise, and also proposed the new challenge to the traditional electronic commerce. The formation process of customer loyalty of O2O e-commerce environment is a complex psychological process. This paper will combine the characteristics of O2O e-commerce, customer's consumer psychology and consumer behavior characteristics to build customer loyalty formation mechanism model which based on the theory of reasoned action model. The related factors of the model including the customer perceived value, customer satisfaction, customer trust and customer switching costs. By exploring the factors affecting customer’ loyalty of O2O e-commerce can provide reference and basis for enterprises to develop e-commerce and better for O2O e-commerce enterprises to develop marketing strategy and enhance customer loyalty. At the end of this paper will also put forward some targeted suggestions for O2O e-commerce enterprises.", "title": "" }, { "docid": "d4f47babcd5840a3f2b5614244835c94", "text": "This paper presents new in-line pseudoelliptic bandpass filters with nonresonating nodes. Microwave bandpass filters based on dual- and triple-mode cavities are introduced. In each case, the transmission zeros (TZs) are individually generated and controlled by dedicated resonators. Dual- and triple-mode cavities are kept homogeneous and contain no coupling or tuning elements. A third-order filter with a TZ extracted at its center is designed by cascading two dual-mode cavities. A direct design technique of this filter is introduced and shown to produce accurate initial designs for narrow-band cases. A six-pole filter is designed by cascading two triple-mode cavities. Measured results are presented to demonstrate the validity of this novel approach.", "title": "" }, { "docid": "78d33d767f9eb15ef79a6d016ffcfb3a", "text": "Healthcare scientific applications, such as body area network, require of deploying hundreds of interconnected sensors to monitor the health status of a host. One of the biggest challenges is the streaming data collected by all those sensors, which needs to be processed in real time. Follow-up data analysis would normally involve moving the collected big data to a cloud data center for status reporting and record tracking purpose. Therefore, an efficient cloud platform with very elastic scaling capacity is needed to support such kind of real time streaming data applications. The current cloud platform either lacks of such a module to process streaming data, or scales in regard to coarse-grained compute nodes. In this paper, we propose a task-level adaptive MapReduce framework. 
This framework extends the generic MapReduce architecture by designing each Map and Reduce task as a consistent running loop daemon. The beauty of this new framework is the scaling capability being designed at the Map and Task level, rather than being scaled from the compute-node level. This strategy is capable of not only scaling up and down in real time, but also leading to effective use of compute resources in cloud data center. As a first step towards implementing this framework in real cloud, we developed a simulator that captures workload strength, and provisions the amount of Map and Reduce tasks just in need and in real time. To further enhance the framework, we applied two streaming data workload prediction methods, smoothing and Kalman filter, to estimate the unknown workload characteristics. We see 63.1% performance improvement by using the Kalman filter method to predict the workload. We also use real streaming data workload trace to test the framework. Experimental results show that this framework schedules the Map and Reduce tasks very efficiently, as the streaming data changes its arrival rate. © 2014 Elsevier B.V. All rights reserved. ∗ Corresponding author at: Kavli Institute for Astrophysics and Space Research, Massachusetts Institute of Technology, Cambridge, MA 02139, USA. Tel.: +1", "title": "" }, { "docid": "f8e6f97f5c797d490e2490dad676f62a", "text": "Both patients and clinicians may incorrectly diagnose vulvovaginitis symptoms. Patients often self-treat with over-the-counter antifungals or home remedies, although they are unable to distinguish among the possible causes of their symptoms. Telephone triage practices and time constraints on office visits may also hamper effective diagnosis. This review is a guide to distinguish potential causes of vulvovaginal symptoms. The first section describes both common and uncommon conditions associated with vulvovaginitis, including infectious vulvovaginitis, allergic contact dermatitis, systemic dermatoses, rare autoimmune diseases, and neuropathic vulvar pain syndromes. The focus is on the clinical presentation, specifically 1) the absence or presence and characteristics of vaginal discharge; 2) the nature of sensory symptoms (itch and/or pain, localized or generalized, provoked, intermittent, or chronic); and 3) the absence or presence of mucocutaneous changes, including the types of lesions observed and the affected tissue. Additionally, this review describes how such features of the clinical presentation can help identify various causes of vulvovaginitis.", "title": "" }, { "docid": "9152c55c35305bcaf56bc586e87f1575", "text": "Information practices that use personal, financial, and health-related information are governed by US laws and regulations to prevent unauthorized use and disclosure. To ensure compliance under the law, the security and privacy requirements of relevant software systems must properly be aligned with these regulations. However, these regulations describe stakeholder rules, called rights and obligations, in complex and sometimes ambiguous legal language. These \"rules\" are often precursors to software requirements that must undergo considerable refinement and analysis before they become implementable. To support the software engineering effort to derive security requirements from regulations, we present a methodology for directly extracting access rights and obligations from regulation texts. 
The methodology provides statement-level coverage for an entire regulatory document to consistently identify and infer six types of data access constraints, handle complex cross references, resolve ambiguities, and assign required priorities between access rights and obligations to avoid unlawful information disclosures. We present results from applying this methodology to the entire regulation text of the US Health Insurance Portability and Accountability Act (HIPAA) Privacy Rule.", "title": "" }, { "docid": "56fc185890f9bbf391e2617e0967e736", "text": "Automated Facial Expression Recognition has remained a challenging and interesting problem in computer vision. The recognition of facial expressions is difficult problem for machine learning techniques, since people can vary significantly in the way they show their expressions. Deep learning is a new area of research within machine learning method which can classify images of human faces into emotion categories using Deep Neural Networks (DNN). Convolutional neural networks (CNN) have been widely used to overcome the difficulties in facial expression classification. In this paper, we present a new architecture network based on CNN for facial expressions recognition. We fine tuned our architecture with Visual Geometry Group model (VGG) to improve results. To evaluate our architecture we tested it with many largely public databases (CK+, MUG, and RAFD). Obtained results show that the CNN approach is very effective in image expression recognition on many public databases which achieve an improvements in facial expression analysis.", "title": "" }, { "docid": "26b592326edeac03578d8b52ce33f2e2", "text": "This paper proposes a model of information aesthetics in the context of information visualization. It addresses the need to acknowledge a recently emerging number of visualization projects that combine information visualization techniques with principles of creative design. The proposed model contributes to a better understanding of information aesthetics as a potentially independent research field within visualization that specifically focuses on the experience of aesthetics, dataset interpretation and interaction. The proposed model is based on analysing existing visualization techniques by their interpretative intent and data mapping inspiration. It reveals information aesthetics as the conceptual link between information visualization and visualization art, and includes the fields of social and ambient visualization. This model is unique in its focus on aesthetics as the artistic influence on the technical implementation and intended purpose of a visualization technique, rather than subjective aesthetic judgments of the visualization outcome. This research provides a framework for understanding aesthetics in visualization, and allows for new design guidelines and reviewing criteria.", "title": "" }, { "docid": "89c3f876494506aceeb9b9ccf0da0ff1", "text": "With the prevalence of accessible depth sensors, dynamic human body skeletons have attracted much attention as a robust modality for action recognition. Previous methods model skeletons based on RNN or CNN, which has limited expressive power for irregular joints. In this paper, we represent skeletons naturally on graphs and propose a generalized graph convolutional neural networks (GGCN) for skeleton-based action recognition, aiming to capture space-time variation via spectral graph theory. 
In particular, we construct a generalized graph over consecutive frames, where each joint is not only connected to its neighboring joints in the same frame strongly or weakly, but also linked with relevant joints in the previous and subsequent frames. The generalized graphs are then fed into GGCN along with the coordinate matrix of the skeleton sequence for feature learning, where we deploy high-order and fast Chebyshev approximation of spectral graph convolution in the network. Experiments show that we achieve the state-of-the-art performance on the widely used NTU RGB+D, UT-Kinect and SYSU 3D datasets.", "title": "" }, { "docid": "fa240a48947a43b9130ee7f48c3ad463", "text": "Content distribution on today's Internet operates primarily in two modes: server-based and peer-to-peer (P2P). To leverage the advantages of both modes while circumventing their key limitations, a third mode: peer-to-server/peer (P2SP) has emerged in recent years. Although P2SP can provide efficient hybrid server-P2P content distribution, P2SP generally works in a closed manner by only utilizing its private owned servers to accelerate its private organized peer swarms. Consequently, P2SP still has its limitations in both content abundance and server bandwidth. To this end, the fourth mode (or says a generalized mode of P2SP) has appeared as \"open-P2SP\" that integrates various third-party servers, contents, and data transfer protocols all over the Internet into a large, open, and federated P2SP platform. In this paper, based on a large-scale commercial open-P2SP system named \"QQXuanfeng\" , we investigate the key challenging problems, practical designs and real-world performances of open-P2SP. Such \"white-box\" study of open-P2SP provides solid experiences and helpful heuristics to the designers of similar systems.", "title": "" }, { "docid": "82edffdadaee9ac0a5b11eb686e109a1", "text": "This paper highlights different security threats and vulnerabilities that is being challenged in smart-grid utilizing Distributed Network Protocol (DNP3) as a real time communication protocol. Experimentally, we will demonstrate two scenarios of attacks, unsolicited message attack and data set injection. The experiments were run on a computer virtual environment and then simulated in DETER testbed platform. The use of intrusion detection system will be necessary to identify attackers targeting different part of the smart grid infrastructure. Therefore, mitigation techniques will be used to ensure a healthy check of the network and we will propose the use of host-based intrusion detection agent at each Intelligent Electronic Device (IED) for the purpose of detecting the intrusion and mitigating it. Performing attacks, attack detection, prevention and counter measures will be our primary goal to achieve in this research paper.", "title": "" }, { "docid": "aa5d6e57350c2c1082091c62b6a941e8", "text": "MEC is an emerging paradigm that provides computing, storage, and networking resources within the edge of the mobile RAN. MEC servers are deployed on a generic computing platform within the RAN, and allow for delay-sensitive and context-aware applications to be executed in close proximity to end users. This paradigm alleviates the backhaul and core network and is crucial for enabling low-latency, high-bandwidth, and agile mobile services. 
This article envisions a real-time, context-aware collaboration framework that lies at the edge of the RAN, comprising MEC servers and mobile devices, and amalgamates the heterogeneous resources at the edge. Specifically, we introduce and study three representative use cases ranging from mobile edge orchestration, collaborative caching and processing, and multi-layer interference cancellation. We demonstrate the promising benefits of the proposed approaches in facilitating the evolution to 5G networks. Finally, we discuss the key technical challenges and open research issues that need to be addressed in order to efficiently integrate MEC into the 5G ecosystem.", "title": "" }, { "docid": "74aaf19d143d86b52c09e726a70a2ac0", "text": "This paper presents simulation and experimental investigation results of steerable integrated lens antennas (ILAs) operating in the 60 GHz frequency band. The feed array of the ILAs is comprised by four switched aperture coupled microstrip antenna (ACMA) elements that allows steering between four different antenna main beam directions in one plane. The dielectric lenses of the designed ILAs are extended hemispherical quartz (ε = 3.8) lenses with the radiuses of 7.5 and 12.5 mm. The extension lengths of the lenses are selected through the electromagnetic optimization in order to achieve the maximum ILAs directivities and also the minimum directivity degradations of the outer antenna elements in the feed array (± 3 mm displacement) relatively to the inner ones (± 1 mm displacement). Simulated maximum directivities of the boresight beam of the designed ILAs are 19.8 dBi and 23.8 dBi that are sufficient for the steerable antennas for the millimeter-wave WLAN/WPAN communication systems. The feed ACMA array together with the waveguide to microstrip transition dedicated for experimental investigations is fabricated on high frequency and low cost Rogers 4003C substrate. Single Pole Double Through (SPDT) switches from Hittite are used in order to steer the ILA prototypes main beam directions. The experimental results of the fabricated electronically steerable quartz ILA prototypes prove the simulation results and show ±35° and ±22° angle sector coverage for the lenses with the 7.5 and 12.5 mm radiuses respectively.", "title": "" }, { "docid": "508ce0c5126540ad7f46b8f375c50df8", "text": "Sex differences in children’s toy preferences are thought by many to arise from gender socialization. However, evidence from patients with endocrine disorders suggests that biological factors during early development (e.g., levels of androgens) are influential. In this study, we found that vervet monkeys (Cercopithecus aethiops sabaeus) show sex differences in toy preferences similar to those documented previously in children. The percent of contact time with toys typically preferred by boys (a car and a ball) was greater in male vervets (n = 33) than in female vervets (n = 30) (P < .05), whereas the percent of contact time with toys typically preferred by girls (a doll and a pot) was greater in female vervets than in male vervets (P < .01). In contrast, contact time with toys preferred equally by boys and girls (a picture book and a stuffed dog) was comparable in male and female vervets. The results suggest that sexually differentiated object preferences arose early in human evolution, prior to the emergence of a distinct hominid lineage. 
This implies that sexually dimorphic preferences for features (e.g., color, shape, movement) may have evolved from differential selection pressures based on the different behavioral roles of males and females, and that evolved object feature preferences may contribute to present day sexually dimorphic toy preferences in children. D 2002 Elsevier Science Inc. All rights reserved.", "title": "" }, { "docid": "f21b0f519f4bf46cb61b2dc2861014df", "text": "Player experience is difficult to evaluate and report, especially using quantitative methodologies in addition to observations and interviews. One step towards tying quantitative physiological measures of player arousal to player experience reports are Biometric Storyboards (BioSt). They can visualise meaningful relationships between a player's physiological changes and game events. This paper evaluates the usefulness of BioSt to the game industry. We presented the Biometric Storyboards technique to six game developers and interviewed them about the advantages and disadvantages of this technique.", "title": "" }, { "docid": "2cc1afe86873bb7d83e919d25fbd5954", "text": "Cellular Automata (CA) have attracted growing attention in urban simulation because their capability in spatial modelling is not fully developed in GIS. This paper discusses how cellular automata (CA) can be extended and integrated with GIS to help planners to search for better urban forms for sustainable development. The cellular automata model is built within a grid-GIS system to facilitate easy access to GIS databases for constructing the constraints. The essence of the model is that constraint space is used to regulate cellular space. Local, regional and global constraints play important roles in a€ ecting modelling results. In addition, ‘grey’ cells are deŽ ned to represent the degrees or percentages of urban land development during the iterations of modelling for more accurate results. The model can be easily controlled by the parameter k using a power transformation function for calculating the constraint scores. It can be used as a useful planning tool to test the e€ ects of di€ erent urban development scenarios. 1. Cellular automata and GIS for urban simulation Cellular automata (CA) were developed by Ulam in the 1940s and soon used by Von Neumann to investigate the logical nature of self-reproducible systems (White and Engelen 1993). A CA system usually consists of four elements—cells, states, neighbourhoods and rules. Cells are the smallest units which must manifest some adjacency or proximity. The state of a cell can change according to transition rules which are deŽ ned in terms of neighbourhood functions. The notion of neighbourhood is central to the CA paradigm (Couclelis 1997), but the deŽ nition of neighbourhood is rather relaxed. CA are cell-based methods that can model two-dimensional space. Because of this underlying feature, it does not take long for geographers to apply CA to simulate land use change, urban development and other changes of geographical phenomena. CA have become especially, useful as a tool for modelling urban spatial dynamics and encouraging results have been documented (Deadman et al. 1993, Batty and Xie 1994a, Batty and Xie 1997, White and Engelen 1997). The advantages are that the future trajectory of urban morphology can be shown virtually during the simulation processes. 
The rapid development of GIS helps to foster the application of CA in urban simulation. Some research indicates that cell-based GIS may indeed serve as a useful tool for implementing cellular automata models for the purposes of geographical analysis (Itami 1994). Although current GIS are not designed for fast iterative computation, cellular automata can still be used by creating batch files that contain iterative command sequences. While linking cellular automata to GIS can overcome some of the limitations of current GIS (White and Engelen 1997), CA can benefit from the useful information provided by GIS in defining transition rules. The data realism requirement of CA can be best satisfied with the aid of GIS (Couclelis 1997). Space no longer needs to be uniform since the spatial difference equations can be easily developed in the context of GIS (Batty and Xie 1994b). Most current GIS techniques have limitations in modelling changes in the landscape over time, but the integration of CA and GIS has demonstrated considerable potential (Itami 1988, Deadman et al. 1993). The limitations of contemporary GIS include its poor ability to handle dynamic spatial models, poor performance for many operations, and poor handling of the temporal dimension (Park and Wagner 1997). In coupling GIS with CA, CA can serve as an analytical engine to provide a flexible framework for the programming and running of dynamic spatial models. 2. Constrained CA for the planning of sustainable urban development Interest in sustainable urban development has increased rapidly in recent years. Unfortunately, the concept of sustainable urban development is debatable because unique definitions and scopes do not exist (Haughton and Hunter 1994). However, this concept is very important to our society in dealing with its increasingly pressing resource and environmental problems. As more nations are implementing this concept in their development plans, it has created important impacts on national policies and urban planning. The concern over sustainable urban development will continue to grow, especially in the developing countries which are undergoing rapid urbanization. A useful way to clarify its ambiguity is to set up some working definitions. Some specific and narrow definitions do exist for special circumstances but there are no commonly accepted definitions. The working definitions can help to eliminate ambiguities and find out solutions and better alternatives to existing development patterns. The conversion of agricultural land into urban land uses in the urbanization processes has become a serious issue for sustainable urban development in the developing countries. Take China as an example: it cannot afford to lose a significant amount of its valuable agricultural land because it has a huge growing population to feed. Unfortunately, in recent years, a large amount of such land has been unnecessarily lost, and the forms of existing urban development cannot help to sustain its further development (Yeh and Li 1997, Yeh and Li 1998). The complete depletion of agricultural land resources would not be far away in some fast growing areas if such development trends continued. 
The main issue of sustainable urban development is to search for better urban forms that can help to sustain development, especially the minimization of unnecessary agricultural land loss. Four operational criteria for sustainable urban forms can be used: (1) not to convert too much agricultural land at the early stages of development; (2) to decide the amount of land consumption based on available land resources and population growth; (3) to guide urban development to sites which are less important for food production; and (4) to maintain compact development patterns. The objective of this research is to develop an operational CA model for sustainable urban development. A number of advantages have been identified in the application of CA in urban simulation (Wolfram 1984, Itami 1988). Cellular automata are seen not only as a framework for dynamic spatial modelling but as a paradigm for thinking about complex spatial-temporal phenomena and an experimental laboratory for testing ideas (Itami 1994). Formally, standard cellular automata may be generalised as follows: St+1 = f (St, N) (1) where S is a set of all possible states of the cellular automata, N is a neighbourhood of all cells providing input values for the function f, and f is a transition function that defines the change of the state from t to t+1. Standard cellular automata apply a ‘bottom-up’ approach. The approach argues that local rules can create complex patterns by running the models in iterations. It is central to the idea that cities should work from particular to general, and that they should seek to understand the small scale in order to understand the large (Batty and Xie 1994a). It is amazing to see that real urban systems can be modelled based on microscopic behaviour, which may be the CA model’s most useful advantage. However, the ‘top-down’ critique nevertheless needs to be taken seriously. An example is that central governments have the power to control overall land development patterns and the amount of land consumption. With the implementation of sustainable elements into cellular automata, a new paradigm for thinking about urban planning emerges. It is possible to embed some constraints in the transition rules of cellular automata so that urban growth can be rationalised according to a set of pre-defined sustainable criteria. However, such experiments are very limited since many researchers just focus on the simulation of possible urban evolution and the understanding of growth mechanisms using CA techniques. The constrained cellular automata should be able to provide much better alternatives to actual development patterns. A good example is to produce a ‘compact’ urban form using CA models. The need for sustainable cities is readily apparent in recent years. A particular issue is to seek the most suitable form for sustainable urban development. The growing spread of urban areas accelerating at an alarming rate in the last few decades reflects the dramatic pressure of human development on nature. The steady rise in urban areas and decline in agricultural land have led to the worsening of food production and other environmental problems. Urban development towards a compact form has been proposed as a means to alleviate the increasingly intensified land use conflicts. The morphology of a city is an important feature in the ‘compact city theory’ (Jenks et al. 1996). 
Evidence indicates a strong link between urban form and sustainable development, although it is not simple and straightforward. Compact urban form can be a major means in guiding urban development to sustainability, especially in reducing the negative effects of the present dispersed development in Western cities. However, one of the frequent problems in the compact city debate is the lack of proper tools to ensure successful implementation of the compact city because of its complexity (Burton et al. 1996). This study demonstrates that the constrained CA can be used to model compact cities and sustainable urban forms based on local, regional and global constraints. 3. Suitability and constraints for sustainable urban forms using CA In this constrained CA model, there are three important aspects of sustainable urban forms that need to be considered: compact patterns, land q
scidocsrr
e095a3f8b3f574aa8111915f4094dc1a
Securing Embedded User Interfaces: Android and Beyond
[ { "docid": "b7758121f5c24dd87e6c5fd795140066", "text": "Conflicts between security and usability goals can be avoided by considering the goals together throughout an iterative design process. A successful design involves addressing users' expectations and inferring authorization based on their acts of designation.", "title": "" } ]
[ { "docid": "3f5083aca7cb8952ba5bf421cb34fab6", "text": "Thyroid gland is butterfly shaped organ which consists of two cone lobes and belongs to the endocrine system. It lies in front of the neck below the adams apple. Thyroid disorders are some kind of abnormalities in thyroid gland which can give rise to nodules like hypothyroidism, hyperthyroidism, goiter, benign and malignant etc. Ultrasound (US) is one among the hugely used modality to detect the thyroid disorders because it has some benefits over other techniques like non-invasiveness, low cost, free of ionizing radiations etc. This paper provides a concise overview about segmentation of thyroid nodules and importance of neural networks comparative to other techniques.", "title": "" }, { "docid": "62bf93deeb73fab74004cb3ced106bac", "text": "Since the publication of the Design Patterns book, a large number of object-oriented design patterns have been identified and codified. As part of the pattern form, objectoriented design patterns must indicate their relationships with other patterns, but these relationships are typically described very briefly, and different collections of patterns describe different relationships in different ways. In this paper we describe and classify the common relationships between object oriented design patterns. Practitioners can use these relationships to help them identity those patterns which may be applicable to a particular problem, and pattern writers can use these relationships to help them integrate new patterns into the body of the patterns literature.", "title": "" }, { "docid": "8a13bb1aa34da7284fc1777e2d23ca5e", "text": "By using a sparse representation or low-rank representation of data, the graph-based subspace clustering has recently attracted considerable attention in computer vision, given its capability and efficiency in clustering data. However, the graph weights built using the representation coefficients are not the exact ones as the traditional definition is in a deterministic way. The two steps of representation and clustering are conducted in an independent manner, thus an overall optimal result cannot be guaranteed. Furthermore, it is unclear how the clustering performance will be affected by using this graph. For example, the graph parameters, i.e., the weights on edges, have to be artificially pre-specified while it is very difficult to choose the optimum. To this end, in this paper, a novel subspace clustering via learning an adaptive low-rank graph affinity matrix is proposed, where the affinity matrix and the representation coefficients are learned in a unified framework. As such, the pre-computed graph regularizer is effectively obviated and better performance can be achieved. Experimental results on several famous databases demonstrate that the proposed method performs better against the state-of-the-art approaches, in clustering.", "title": "" }, { "docid": "e2f57214cd2ec7b109563d60d354a70f", "text": "Despite the recent successes in machine learning, there remain many open challenges. Arguably one of the most important and interesting open research problems is that of data efficiency. Supervised machine learning models, and especially deep neural networks, are notoriously data hungry, often requiring millions of labeled examples to achieve desired performance. However, labeled data is often expensive or difficult to obtain, hindering advances in interesting and important domains. What avenues might we pursue to increase the data efficiency of machine learning models? 
One approach is semi-supervised learning. In contrast to labeled data, unlabeled data is often easy and inexpensive to obtain. Semi-supervised learning is concerned with leveraging unlabeled data to improve performance in supervised tasks. Another approach is active learning: in the presence of a labeling mechanism (oracle), how can we choose examples to be labeled in a way that maximizes the gain in performance? In this thesis we are concerned with developing models that enable us to improve the data efficiency of powerful models by jointly pursuing both of these approaches. Deep generative models parameterized by neural networks have emerged recently as powerful and flexible tools for unsupervised learning. They are especially useful for modeling high-dimensional and complex data. We propose a deep generative model with a discriminative component. By including the discriminative component in the model, after training is complete the model is used for classification rather than variational approximations. The model further includes stochastic inputs of arbitrary dimension for increased flexibility and expressiveness. We leverage the stochastic layer to learn representations of the data which naturally accommodate semi-supervised learning. We develop an efficient Gibbs sampling procedure to marginalize the stochastic inputs while inferring labels. We extend the model to include uncertainty in the weights, allowing us to explicitly capture model uncertainty, and demonstrate how this allows us to use the model for active learning as well as semi-supervised learning.", "title": "" }, { "docid": "e442b7944062f6201e779aa1e7d6c247", "text": "We present pigeo, a Python geolocation prediction tool that predicts a location for a given text input or Twitter user. We discuss the design, implementation and application of pigeo, and empirically evaluate it. pigeo is able to geolocate informal text and is a very useful tool for users who require a free and easy-to-use, yet accurate geolocation service based on pre-trained models. Additionally, users can train their own models easily using pigeo's API.", "title": "" }, { "docid": "93325e6f1c13889fb2573f4631d021a5", "text": "The difference between a computer game and a simulator can be a small one; both require the same capabilities from the computer: realistic graphics, behavior consistent with the laws of physics, a variety of scenarios where difficulties can emerge, and some assessment technique to inform users of performance. Computer games are a multi-billion dollar industry in the United States, and as the production costs and complexity of games have increased, so has the effort to make their creation easier. Commercial software products have been developed to greatly simplify the game-making process, allowing developers to focus on content rather than on programming. This paper investigates Unity3D game creation software for making three-dimensional engine-room simulators. Unity3D is arguably the best software product for game creation, and has been used for numerous popular and successful commercial games. Maritime universities could greatly benefit from making custom simulators to fit specific applications and requirements, as well as from reducing the cost of purchasing simulators. We use Unity3D to make a three-dimensional steam turbine simulator that achieves a high degree of realism. 
The user can walk around the turbine, open and close valves, activate pumps, and run the turbine. Turbine operating parameters such as RPM, condenser vacuum, lube oil temperature. and governor status are monitored. In addition, the program keeps a log of any errors made by the operator. We find that with the use of Unity3D, students and faculty are able to make custom three-dimensional ship and engine room simulators that can be used as training and evaluation tools.", "title": "" }, { "docid": "1a66727305984ae359648e4bd3e75ba2", "text": "Self-organizing models constitute valuable tools for data visualization, clustering, and data mining. Here, we focus on extensions of basic vector-based models by recursive computation in such a way that sequential and tree-structured data can be processed directly. The aim of this article is to give a unified review of important models recently proposed in literature, to investigate fundamental mathematical properties of these models, and to compare the approaches by experiments. We first review several models proposed in literature from a unifying perspective, thereby making use of an underlying general framework which also includes supervised recurrent and recursive models as special cases. We shortly discuss how the models can be related to different neuron lattices. Then, we investigate theoretical properties of the models in detail: we explicitly formalize how structures are internally stored in different context models and which similarity measures are induced by the recursive mapping onto the structures. We assess the representational capabilities of the models, and we shortly discuss the issues of topology preservation and noise tolerance. The models are compared in an experiment with time series data. Finally, we add an experiment for one context model for tree-structured data to demonstrate the capability to process complex structures.", "title": "" }, { "docid": "f64390896e5529f676484b9b0f4eab84", "text": "Identifying the object that attracts human visual attention is an essential function for automatic services in smart environments. However, existing solutions can compute the gaze direction without providing the distance to the target. In addition, most of them rely on special devices or infrastructure support. This paper explores the possibility of using a smartphone to detect the visual attention of a user. By applying the proposed VADS system, acquiring the location of the intended object only requires one simple action: gazing at the intended object and holding up the smartphone so that the object as well as user's face can be simultaneously captured by the front and rear cameras. We extend the current advances of computer vision to develop efficient algorithms to obtain the distance between the camera and user, the user's gaze direction, and the object's direction from camera. The object's location can then be computed by solving a trigonometric problem. VADS has been prototyped on commercial off-the-shelf (COTS) devices. Extensive evaluation results show that VADS achieves low error (about 1.5° in angle and 0.15m in distance for objects within 12m) as well as short latency. We believe that VADS enables a large variety of applications in smart environments.", "title": "" }, { "docid": "8eb907b00933dfa59c95b919dd0579e9", "text": "Human eye gaze is a strong candidate to create a new application area based on human-computer interaction. 
To implement a really practical gaze-based interaction system, gaze detection must be realized without placing any restriction on the user's behavior or comfort. This paper describes a gaze tracking system that offers freehead, simple personal calibration. It does not require the user wear anything on her head, and she can move her head freely. Personal calibration takes only a very short time; the user is asked to look at two markers on the screen. An experiment shows that the accuracy of the implemented system is about 1.0 degrees (view angle).", "title": "" }, { "docid": "73d9e6a019b45639927752bdc4070876", "text": "An increasingly important challenge in data analytics is dirty data in the form of missing, duplicate, incorrect, or inconsistent values. In the SampleClean project, we have developed a new suite of algorithms to estimate the results of different types of analytic queries after applying data cleaning only to a sample. First, this article describes methods for computing statistically bounded estimates of SUM, COUNT, and AVG queries from samples of data corrupted with duplications and incorrect values. Some types of data error, such as duplication, can affect sampling probabilities so results have to be re-weighted to compensate for biases. Then it presents an application of these query processing and data cleaning methods to materialized views maintenance. The view cleaning algorithm applies hashing to efficiently maintain a uniform sample of rows in a materialized view, and then dirty data query processing techniques to correct stale query results. Finally, the article describes a gradient-descent algorithm that extends this idea to the increasingly common Machine Learning-based analytics.", "title": "" }, { "docid": "a8a4bad208ee585ae4b4a0b3c5afe97a", "text": "English-speaking children with specific language impairment (SLI) are known to have particular difficulty with the acquisition of grammatical morphemes that carry tense and agreement features, such as the past tense -ed and third-person singular present -s. In this study, an Extended Optional Infinitive (EOI) account of SLI is evaluated. In this account, -ed, -s, BE, and DO are regarded as finiteness markers. This model predicts that finiteness markers are omitted for an extended period of time for nonimpaired children, and that this period will be extended for a longer time in children with SLI. At the same time, it predicts that if finiteness markers are present, they will be used correctly. These predictions are tested in this study. Subjects were 18 5-year-old children with SLI with expressive and receptive language deficits and two comparison groups of children developing language normally: 22 CA-equivalent (5N) and 20 younger, MLU-equivalent children (3N). It was found that the children with SLI used nonfinite forms of lexical verbs, or omitted BE and DO, more frequently than children in the 5N and 3N groups. At the same time, like the normally developing children, when the children with SLI marked finiteness, they did so appropriately. Most strikingly, the SLI group was highly accurate in marking agreement on BE and DO forms. 
The findings are discussed in terms of the predictions of the EOI model, in comparison to other models of the grammatical limitations of children with SLI.", "title": "" }, { "docid": "b2a9264030e56595024ce0e02da6c73f", "text": "Traditional citation analysis has been widely applied to detect patterns of scientific collaboration, map the landscapes of scholarly disciplines, assess the impact of research outputs, and observe knowledge transfer across domains. It is, however, limited, as it assumes all citations are of similar value and weights each equally. Content-based citation analysis (CCA) addresses a citation’s value by interpreting each one based on its context at both the syntactic and semantic levels. This paper provides a comprehensive overview of CAA research in terms of its theoretical foundations, methodical approaches, and example applications. In addition, we highlight how increased computational capabilities and publicly available full-text resources have opened this area of research to vast possibilities, which enable deeper citation analysis, more accurate citation prediction, and increased knowledge discovery.", "title": "" }, { "docid": "725bfdbd65a62d3d7ac50fee087d752f", "text": "BACKGROUND\nIndividuals with autism spectrum disorders (ASDs) often display symptoms from other diagnostic categories. Studies of clinical and psychosocial outcome in adult patients with ASDs without concomitant intellectual disability are few. The objective of this paper is to describe the clinical psychiatric presentation and important outcome measures of a large group of normal-intelligence adult patients with ASDs.\n\n\nMETHODS\nAutistic symptomatology according to the DSM-IV-criteria and the Gillberg & Gillberg research criteria, patterns of comorbid psychopathology and psychosocial outcome were assessed in 122 consecutively referred adults with normal intelligence ASDs. The subjects consisted of 5 patients with autistic disorder (AD), 67 with Asperger's disorder (AS) and 50 with pervasive developmental disorder not otherwise specified (PDD NOS). This study group consists of subjects pooled from two studies with highly similar protocols, all seen on an outpatient basis by one of three clinicians.\n\n\nRESULTS\nCore autistic symptoms were highly prevalent in all ASD subgroups. Though AD subjects had the most pervasive problems, restrictions in non-verbal communication were common across all three subgroups and, contrary to current DSM criteria, so were verbal communication deficits. Lifetime psychiatric axis I comorbidity was very common, most notably mood and anxiety disorders, but also ADHD and psychotic disorders. The frequency of these diagnoses did not differ between the ASD subgroups or between males and females. Antisocial personality disorder and substance abuse were more common in the PDD NOS group. Of all subjects, few led an independent life and very few had ever had a long-term relationship. Female subjects more often reported having been bullied at school than male subjects.\n\n\nCONCLUSION\nASDs are clinical syndromes characterized by impaired social interaction and non-verbal communication in adulthood as well as in childhood. They also carry a high risk for co-existing mental health problems from a broad spectrum of disorders and for unfavourable psychosocial life circumstances. 
For the next revision of DSM, our findings especially stress the importance of careful examination of the exclusion criterion for adult patients with ASDs.", "title": "" }, { "docid": "63405ca71cf052b0011106e5fda6a9ea", "text": "Device-to-Device (D2D) communication has emerged as a promising technology for optimizing spectral efficiency in future cellular networks. D2D takes advantage of the proximity of communicating devices for efficient utilization of available resources, improving data rates, reducing latency, and increasing system capacity. The research community is actively investigating the D2D paradigm to realize its full potential and enable its smooth integration into the future cellular system architecture. Existing surveys on this paradigm largely focus on interference and resource management. We review recently proposed solutions in over explored and under explored areas in D2D. These solutions include protocols, algorithms, and architectures in D2D. Furthermore, we provide new insights on open issues in these areas. Finally, we discuss potential future research directions.", "title": "" }, { "docid": "5d48cd6c8cc00aec5f7f299c346405c9", "text": ".................................................................................................................................... iii Acknowledgments..................................................................................................................... iv Table of", "title": "" }, { "docid": "d5d2e1feeb2d0bf2af49e1d044c9e26a", "text": "ISSN: 2167-0811 (Print) 2167-082X (Online) Journal homepage: http://www.tandfonline.com/loi/rdij20 Algorithmic Transparency in the News Media Nicholas Diakopoulos & Michael Koliska To cite this article: Nicholas Diakopoulos & Michael Koliska (2016): Algorithmic Transparency in the News Media, Digital Journalism, DOI: 10.1080/21670811.2016.1208053 To link to this article: http://dx.doi.org/10.1080/21670811.2016.1208053", "title": "" }, { "docid": "5a2bf6b24abcbad24f4c01847b66dd2e", "text": "Sparse representations of text such as bag-ofwords models or extended explicit semantic analysis (ESA) representations are commonly used in many NLP applications. However, for short texts, the similarity between two such sparse vectors is not accurate due to the small term overlap. While there have been multiple proposals for dense representations of words, measuring similarity between short texts (sentences, snippets, paragraphs) requires combining these token level similarities. In this paper, we propose to combine ESA representations and word2vec representations as a way to generate denser representations and, consequently, a better similarity measure between short texts. We study three densification mechanisms that involve aligning sparse representation via many-to-many, many-to-one, and oneto-one mappings. We then show the effectiveness of these mechanisms on measuring similarity between short texts.", "title": "" }, { "docid": "c8f39a710ca3362a4d892879f371b318", "text": "While sentiment and emotion analysis has received a considerable amount of research attention, the notion of understanding and detecting the intensity of emotions is relatively less explored. This paper describes a system developed for predicting emotion intensity in tweets. 
Given a Twitter message, CrystalFeel uses features derived from parts-of-speech, ngrams, word embedding, and multiple affective lexicons including Opinion Lexicon, SentiStrength, AFFIN, NRC Emotion & Hash Emotion, and our in-house developed EI Lexicons to predict the degree of the intensity associated with fear, anger, sadness, and joy in the tweet. We found that including the affective lexicons-based features allowed the system to obtain strong prediction performance, while revealing interesting emotion word-level and message-level associations. On gold test data, CrystalFeel obtained Pearson correlations of .717 on average emotion intensity and of .816 on sentiment intensity.", "title": "" }, { "docid": "7a3053844afda6f06785058f1dda4648", "text": "Mutation analysis evaluates a testing technique by measur- ing how well it detects seeded faults (mutants). Mutation analysis is hampered by inherent scalability problems — a test suite is executed for each of a large number of mutants. Despite numerous optimizations presented in the literature, this scalability issue remains, and this is one of the reasons why mutation analysis is hardly used in practice. Whereas most previous optimizations attempted to stati- cally reduce the number of executions or their computational overhead, this paper exploits information available only at run time to further reduce the number of executions. First, state infection conditions can reveal — with a single test execution of the unmutated program — which mutants would lead to a different state, thus avoiding unnecessary test executions. Second, determining whether an infected execution state propagates can further reduce the number of executions. Mutants that are embedded in compound expressions may infect the state locally without affecting the outcome of the compound expression. Third, those mutants that do infect the state can be partitioned based on the resulting infected state — if two mutants lead to the same infected state, only one needs to be executed as the result of the other can be inferred. We have implemented these optimizations in the Major mu- tation framework and empirically evaluated them on 14 open source programs. The optimizations reduced the mutation analysis time by 40% on average.", "title": "" } ]
scidocsrr
f4b614cb9723511bfce27ae4db485ddd
A survey on the communication architectures in smart grid
[ { "docid": "0b33249df17737a826dcaa197adccb74", "text": "In the competitive electricity structure, demand response programs enable customers to react dynamically to changes in electricity prices. The implementation of such programs may reduce energy costs and increase reliability. To fully harness such benefits, existing load controllers and appliances need around-the-clock price information. Advances in the development and deployment of advanced meter infrastructures (AMIs), building automation systems (BASs), and various dedicated embedded control systems provide the capability to effectively address this requirement. In this paper we introduce a meter gateway architecture (MGA) to serve as a foundation for integrated control of loads by energy aggregators, facility hubs, and intelligent appliances. We discuss the requirements that motivate the architecture, describe its design, and illustrate its application to a small system with an intelligent appliance and a legacy appliance using a prototype implementation of an intelligent hub for the MGA and ZigBee wireless communications.", "title": "" } ]
[ { "docid": "5249a94aa9d9dbb211bb73fa95651dfd", "text": "Power and energy have become increasingly important concerns in the design and implementation of today's multicore/manycore chips. In this paper, we present two priority-based CPU scheduling algorithms, Algorithm Cache Miss Priority CPU Scheduler (CM-PCS) and Algorithm Context Switch Priority CPU Scheduler (CS-PCS), which take advantage of often ignored dynamic performance data, in order to reduce power consumption by over 20 percent with a significant increase in performance. Our algorithms utilize Linux cpusets and cores operating at different fixed frequencies. Many other techniques, including dynamic frequency scaling, can lower a core's frequency during the execution of a non-CPU intensive task, thus lowering performance. Our algorithms match processes to cores better suited to execute those processes in an effort to lower the average completion time of all processes in an entire task, thus improving performance. They also consider a process's cache miss/cache reference ratio, number of context switches and CPU migrations, and system load. Finally, our algorithms use dynamic process priorities as scheduling criteria. We have tested our algorithms using a real AMD Opteron 6134 multicore chip and measured results directly using the “KillAWatt” meter, which samples power periodically during execution. Our results show not only a power (energy/execution time) savings of 39 watts (21.43 percent) and 38 watts (20.88 percent), but also a significant improvement in the performance, performance per watt, and execution time · watt (energy) for a task consisting of 24 concurrently executing benchmarks, when compared to the default Linux scheduler and CPU frequency scaling governor.", "title": "" }, { "docid": "7f3686b783273c4df7c4fb41fe7ccefd", "text": "Data from service and manufacturing sectors is increasing sharply and lifts up a growing enthusiasm for the notion of Big Data. This paper investigates representative Big Data applications from typical services like finance & economics, healthcare, Supply Chain Management (SCM), and manufacturing sector. Current technologies from key aspects of storage technology, data processing technology, data visualization technique, Big Data analytics, as well as models and algorithms are reviewed. This paper then provides a discussion from analyzing current movements on the Big Data for SCM in service and manufacturing world-wide including North America, Europe, and Asia Pacific region. Current challenges, opportunities, and future perspectives such as data collection methods, data transmission, data storage, processing technologies for Big Data, Big Data-enabled decision-making models, as well as Big Data interpretation and application are highlighted. Observations and insights from this paper could be referred by academia and practitioners when implementing Big Data analytics in the service and manufacturing sectors. 2016 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "ef2cf439b0765c44e9e4db87836401e7", "text": "Phishing is defined as mimicking a creditable company's website aiming to take private information of a user. In order to eliminate phishing, different solutions proposed. However, only one single magic bullet cannot eliminate this threat completely. Data mining is a promising technique used to detect phishing attacks. In this paper, an intelligent system to detect phishing attacks is presented. 
We used different data mining techniques to decide categories of websites: legitimate or phishing. Different classifiers were used in order to construct accurate intelligent system for phishing website detection. Classification accuracy, area under receiver operating characteristic (ROC) curves (AUC) and F-measure is used to evaluate the performance of the data mining techniques. Results showed that Random Forest has outperformed best among the classification methods by achieving the highest accuracy 97.36%. Random forest runtimes are quite fast, and it can deal with different websites for phishing detection.", "title": "" }, { "docid": "3c5a5ee0b855625c959593a08d6e1e24", "text": "We present Scalable Host-tree Embeddings for Efficient Partitioning (Sheep), a distributed graph partitioning algorithm capable of handling graphs that far exceed main memory. Sheep produces high quality edge partitions an order of magnitude faster than both state of the art offline (e.g., METIS) and streaming partitioners (e.g., Fennel). Sheep’s partitions are independent of the input graph distribution, which means that graph elements can be assigned to processing nodes arbitrarily without affecting the partition quality. Sheep transforms the input graph into a strictly smaller elimination tree via a distributed map-reduce operation. By partitioning this tree, Sheep finds an upper-bounded communication volume partitioning of the original graph. We describe the Sheep algorithm and analyze its spacetime requirements, partition quality, and intuitive characteristics and limitations. We compare Sheep to contemporary partitioners and demonstrate that Sheep creates competitive partitions, scales to larger graphs, and has better runtime.", "title": "" }, { "docid": "ceb02ddf8b2085d67ccf27c3c5b57dfd", "text": "We present a novel latent embedding model for learning a compatibility function between image and class embeddings, in the context of zero-shot classification. The proposed method augments the state-of-the-art bilinear compatibility model by incorporating latent variables. Instead of learning a single bilinear map, it learns a collection of maps with the selection, of which map to use, being a latent variable for the current image-class pair. We train the model with a ranking based objective function which penalizes incorrect rankings of the true class for a given image. We empirically demonstrate that our model improves the state-of-the-art for various class embeddings consistently on three challenging publicly available datasets for the zero-shot setting. Moreover, our method leads to visually highly interpretable results with clear clusters of different fine-grained object properties that correspond to different latent variable maps.", "title": "" }, { "docid": "cbbd8c44de7e060779ed60c6edc31e3c", "text": "This letter presents a compact broadband microstrip-line-fed sleeve monopole antenna for application in the DTV system. The design of meandering the monopole into a compact structure is applied for size reduction. By properly selecting the length and spacing of the sleeve, the broadband operation for the proposed design can be achieved, and the obtained impedance bandwidth covers the whole DTV (470862 MHz) band. 
Most importantly, the matching condition over a wide frequency range can be performed well even when a small ground-plane length is used; meanwhile, a small variation in the impedance bandwidth is observed for the ground-plane length varied in a great range.", "title": "" }, { "docid": "b700c177ab4ee014cea9a3a2fd870230", "text": "Exploiting network data (i.e., graphs) is a rather particular case of data mining. The size and relevance of network domains justifies research on graph mining, but also brings forth severe complications. Computational aspects like scalability and parallelism have to be reevaluated, and well as certain aspects of the data mining process. One of those are the methodologies used to evaluate graph mining methods, particularly when processing large graphs. In this paper we focus on the evaluation of a graph mining task known as Link Prediction. First we explore the available solutions in traditional data mining for that purpose, discussing which methods are most appropriate. Once those are identified, we argue about their capabilities and limitations for producing a faithful and useful evaluation. Finally, we introduce a novel modification to a traditional evaluation methodology with the goal of adapting it to the problem of Link Prediction on large graphs.", "title": "" }, { "docid": "0972f1690f5bba5a8bdec67cd133d690", "text": "We use a deep learning model trained only on a patient’s blood oxygenation data (measurable with an inexpensive fingertip sensor) to predict impending hypoxemia (low blood oxygen) more accurately than trained anesthesiologists with access to all the data recorded in a modern operating room. We also provide a simple way to visualize the reason why a patient’s risk is low or high by assigning weight to the patient’s past blood oxygen values. This work has the potential to provide cuttingedge clinical decision support in low-resource settings, where rates of surgical complication and death are substantially greater than in high-resource areas.", "title": "" }, { "docid": "2c93fcf96c71c7c0a8dcad453da53f81", "text": "Production cars are designed to understeer and rarely do they oversteer. If a car could automatically compensate for an understeer/oversteer problem, the driver would enjoy nearly neutral steering under varying operating conditions. Four-wheel steering is a serious effort on the part of automotive design engineers to provide near-neutral steering. Also in situations like low speed cornering, vehicle parking and driving in city conditions with heavy traffic in tight spaces, driving would be very difficult due to vehicle’s larger wheelbase and track width. Hence there is a requirement of a mechanism which result in less turning radius and it can be achieved by implementing four wheel steering mechanism instead of regular two wheel steering. In this project Maruti Suzuki 800 is considered as a benchmark vehicle. The main aim of this project is to turn the rear wheels out of phase to the front wheels. In order to achieve this, a mechanism which consists of two bevel gears and intermediate shaft which transmit 100% torque as well turns rear wheels in out of phase was developed. The mechanism was modelled using CATIA and the motion simulation was done using ADAMS. A physical prototype was realised. 
The prototype was tested for its cornering ability through constant radius test and was found 50% reduction in turning radius and the vehicle was operated at low speed of 10 kmph.", "title": "" }, { "docid": "4961f878fecbe0153a679210fb986a8a", "text": "Wikis are collaborative systems in which virtually anyone can edit anything. Although wikis have become highly popular in many domains, their mutable nature often leads them to be distrusted as a reliable source of information. Here we describe a social dynamic analysis tool called WikiDashboard which aims to improve social transparency and accountability on Wikipedia articles. Early reactions from users suggest that the increased transparency afforded by the tool can improve the interpretation, communication, and trustworthiness of Wikipedia articles.", "title": "" }, { "docid": "e104e306d90605a5bc9d853180567917", "text": "An algorithm is presented for the estimation of the fundamental frequency (F0) of speech or musical sounds. It is based on the well-known autocorrelation method with a number of modifications that combine to prevent errors. The algorithm has several desirable features. Error rates are about three times lower than the best competing methods, as evaluated over a database of speech recorded together with a laryngograph signal. There is no upper limit on the frequency search range, so the algorithm is suited for high-pitched voices and music. The algorithm is relatively simple and may be implemented efficiently and with low latency, and it involves few parameters that must be tuned. It is based on a signal model (periodic signal) that may be extended in several ways to handle various forms of aperiodicity that occur in particular applications. Finally, interesting parallels may be drawn with models of auditory processing.", "title": "" }, { "docid": "a9c00556e3531ba81cc009ae3f5a1816", "text": "A systematic, tiered approach to assess the safety of engineered nanomaterials (ENMs) in foods is presented. The ENM is first compared to its non-nano form counterpart to determine if ENM-specific assessment is required. Of highest concern from a toxicological perspective are ENMs which have potential for systemic translocation, are insoluble or only partially soluble over time or are particulate and bio-persistent. Where ENM-specific assessment is triggered, Tier 1 screening considers the potential for translocation across biological barriers, cytotoxicity, generation of reactive oxygen species, inflammatory response, genotoxicity and general toxicity. In silico and in vitro studies, together with a sub-acute repeat-dose rodent study, could be considered for this phase. Tier 2 hazard characterisation is based on a sentinel 90-day rodent study with an extended range of endpoints, additional parameters being investigated case-by-case. Physicochemical characterisation should be performed in a range of food and biological matrices. A default assumption of 100% bioavailability of the ENM provides a 'worst case' exposure scenario, which could be refined as additional data become available. The safety testing strategy is considered applicable to variations in ENM size within the nanoscale and to new generations of ENM.", "title": "" }, { "docid": "cf4089c8c3b8408e2d2966e3abd8af09", "text": "The deployment of wireless sensor networks and mobile ad-hoc networks in applications such as emergency services, warfare and health monitoring poses the threat of various cyber hazards, intrusions and attacks as a consequence of these networks’ openness. 
Among the most significant research difficulties in such networks safety is intrusion detection, whose target is to distinguish between misuse and abnormal behavior so as to ensure secure, reliable network operations and services. Intrusion detection is best delivered by multi-agent system technologies and advanced computing techniques. To date, diverse soft computing and machine learning techniques in terms of computational intelligence have been utilized to create Intrusion Detection and Prevention Systems (IDPS), yet the literature does not report any state-ofthe-art reviews investigating the performance and consequences of such techniques solving wireless environment intrusion recognition issues as they gain entry into cloud computing. The principal contribution of this paper is a review and categorization of existing IDPS schemes in terms of traditional artificial computational intelligence with a multi-agent support. The significance of the techniques and methodologies and their performance and limitations are additionally analyzed in this study, and the limitations are addressed as challenges to obtain a set of requirements for IDPS in establishing a collaborative-based wireless IDPS (Co-WIDPS) architectural design. It amalgamates a fuzzy reinforcement learning knowledge management by creating a far superior technological platform that is far more accurate in detecting attacks. In conclusion, we elaborate on several key future research topics with the potential to accelerate the progress and deployment of computational intelligence based Co-WIDPSs. & 2013 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "5afb121d5e4a5ab8daa80580c8bd8253", "text": "In this paper, we give a short introduction to machine learning and survey its applications in radiology. We focused on six categories of applications in radiology: medical image segmentation, registration, computer aided detection and diagnosis, brain function or activity analysis and neurological disease diagnosis from fMR images, content-based image retrieval systems for CT or MRI images, and text analysis of radiology reports using natural language processing (NLP) and natural language understanding (NLU). This survey shows that machine learning plays a key role in many radiology applications. Machine learning identifies complex patterns automatically and helps radiologists make intelligent decisions on radiology data such as conventional radiographs, CT, MRI, and PET images and radiology reports. In many applications, the performance of machine learning-based automatic detection and diagnosis systems has shown to be comparable to that of a well-trained and experienced radiologist. Technology development in machine learning and radiology will benefit from each other in the long run. Key contributions and common characteristics of machine learning techniques in radiology are discussed. We also discuss the problem of translating machine learning applications to the radiology clinical setting, including advantages and potential barriers.", "title": "" }, { "docid": "cc93f5a421ad0e5510d027b01582e5ae", "text": "This paper assesses the impact of financial reforms in Zimbabwe on savings and credit availability to small and medium scale enterprises (SMEs) and the poor. We established that the reforms improved domestic savings mobilization due to high deposit rates, the emergence of new financial institutions and products and the general increase in real incomes after the 1990 economic reforms. 
The study uncovered that inflation and real income were the major determinants of savings during the sample period. High lending rates and the use of conventional lending methodologies by banks restricted access to credit by the SMEs and the poor. JEL Classification Numbers: E21, O16.", "title": "" }, { "docid": "1b3b2b8872d3b846120502a7a40e03d0", "text": "A viable fully on-line adaptive brain computer interface (BCI) is introduced. On-line experiments with nine naive and able-bodied subjects were carried out using a continuously adaptive BCI system. The data were analyzed and the viability of the system was studied. The BCI was based on motor imagery, the feature extraction was performed with an adaptive autoregressive model and the classifier used was an adaptive quadratic discriminant analysis. The classifier was on-line updated by an adaptive estimation of the information matrix (ADIM). The system was also able to provide continuous feedback to the subject. The success of the feedback was studied analyzing the error rate and mutual information of each session and this analysis showed a clear improvement of the subject's control of the BCI from session to session.", "title": "" }, { "docid": "57c6d587b602b17a3cbf3b9b3c72c6c9", "text": "OBJECTIVE\nDevelopment of a rational and enforceable basis for controlling the impact of cannabis use on traffic safety.\n\n\nMETHODS\nAn international working group of experts on issues related to drug use and traffic safety evaluated evidence from experimental and epidemiological research and discussed potential approaches to developing per se limits for cannabis.\n\n\nRESULTS\nIn analogy to alcohol, finite (non-zero) per se limits for delta-9-tetrahydrocannabinol (THC) in blood appear to be the most effective approach to separating drivers who are impaired by cannabis use from those who are no longer under the influence. Limited epidemiological studies indicate that serum concentrations of THC below 10 ng/ml are not associated with an elevated accident risk. A comparison of meta-analyses of experimental studies on the impairment of driving-relevant skills by alcohol or cannabis suggests that a THC concentration in the serum of 7-10 ng/ml is correlated with an impairment comparable to that caused by a blood alcohol concentration (BAC) of 0.05%. Thus, a suitable numerical limit for THC in serum may fall in that range.\n\n\nCONCLUSIONS\nThis analysis offers an empirical basis for a per se limit for THC that allows identification of drivers impaired by cannabis. The limited epidemiological data render this limit preliminary.", "title": "" }, { "docid": "6721d6fb3b2f97062303eb63e6e9de31", "text": "Business process modeling is a big part in the industry, mainly to document, analyze, and optimize workflows. Currently, the EPC process modeling notation is used very wide, because of the excellent integration in the ARIS Toolset and the long existence of this process language. But as a change of time, BPMN gets popular and the interest in the industry and companies gets growing up. It is standardized, has more expressiveness than EPC and the tool support increase very rapidly. With having tons of existing EPC process models; a big need from the industry is to have an automated transformation from EPC to BPMN. This paper specified a direct approach of a transformation from EPC process model elements to BPMN. Thereby it is tried to map every construct in EPC fully automated to BPMN. 
However, as described, this does not work for every process element, so some extensions and semantic rules are defined in addition.", "title": "" }, { "docid": "51f47a5e873f7b24cd15aff4ceb8d35c", "text": "We introduce the Adaptive Skills, Adaptive Partitions (ASAP) framework that (1) learns skills (i.e., temporally extended actions or options) as well as (2) where to apply them. We believe that both (1) and (2) are necessary for a truly general skill learning framework, which is a key building block needed to scale up to lifelong learning agents. The ASAP framework can also solve related new tasks simply by adapting where it applies its existing learned skills. We prove that ASAP converges to a local optimum under natural conditions. Finally, our experimental results, which include a RoboCup domain, demonstrate the ability of ASAP to learn where to reuse skills as well as solve multiple tasks with considerably less experience than solving each task from scratch.", "title": "" }, { "docid": "93d40aa40a32edab611b6e8c4a652dbb", "text": "In this paper, we present a detailed design of dynamic video segmentation network (DVSNet) for fast and efficient semantic video segmentation. DVSNet consists of two convolutional neural networks: a segmentation network and a flow network. The former generates highly accurate semantic segmentations, but is deeper and slower. The latter is much faster than the former, but its output requires further processing to generate less accurate semantic segmentations. We explore the use of a decision network to adaptively assign different frame regions to different networks based on a metric called expected confidence score. Frame regions with a higher expected confidence score traverse the flow network. Frame regions with a lower expected confidence score have to pass through the segmentation network. We have extensively performed experiments on various configurations of DVSNet, and investigated a number of variants for the proposed decision network. The experimental results show that our DVSNet is able to achieve up to 70.4% mIoU at 19.8 fps on the Cityscape dataset. A high speed version of DVSNet is able to deliver an fps of 30.4 with 63.2% mIoU on the same dataset. DVSNet is also able to reduce up to 95% of the computational workloads.", "title": "" } ]
scidocsrr
aefe8698bdcbbc5e3be3fda46c3d563b
Compact Offset Microstrip-Fed MIMO Antenna for Band-Notched UWB Applications
[ { "docid": "ba13195d39b28d5205b33452bfebd6e7", "text": "A compact multiple-input-multiple-output (MIMO) antenna is presented for ultrawideband (UWB) applications. The antenna consists of two open L-shaped slot (LS) antenna elements and a narrow slot on the ground plane. The antenna elements are placed perpendicularly to each other to obtain high isolation, and the narrow slot is added to reduce the mutual coupling of antenna elements in the low frequency band (3-4.5 GHz). The proposed MIMO antenna has a compact size of 32 ×32 mm2, and the antenna prototype is fabricated and measured. The measured results show that the proposed antenna design achieves an impedance bandwidth of larger than 3.1-10.6 GHz, low mutual coupling of less than 15 dB, and a low envelope correlation coefficient of better than 0.02 across the frequency band, which are suitable for portable UWB applications.", "title": "" }, { "docid": "b3c9bc55f5a9d64a369ec67e1364c4fc", "text": "This paper introduces a coupling element to enhance the isolation between two closely packed antennas operating at the same frequency band. The proposed structure consists of two antenna elements and a coupling element which is located in between the two antenna elements. The idea is to use field cancellation to enhance isolation by putting a coupling element which artificially creates an additional coupling path between the antenna elements. To validate the idea, a design for a USB dongle MIMO antenna for the 2.4 GHz WLAN band is presented. In this design, the antenna elements are etched on a compact low-cost FR4 PCB board with dimensions of 20times40times1.6 mm3. According to our measurement results, we can achieve more than 30 dB isolation between the antenna elements even though the two parallel individual planar inverted F antenna (PIFA) in the design share a solid ground plane with inter-antenna spacing (Center to Center) of less than 0.095 lambdao or edge to edge separations of just 3.6 mm (0.0294 lambdao). Both simulation and measurement results are used to confirm the antenna isolation and performance. The method can also be applied to different types of antennas such as non-planar antennas. Parametric studies and current distribution for the design are also included to show how to tune the structure and control the isolation.", "title": "" } ]
[ { "docid": "4163070f45dd4d252a21506b1abcfff4", "text": "Nowadays, security solutions are mainly focused on providing security defences, instead of solving one of the main reasons for security problems that refers to an appropriate Information Systems (IS) design. In fact, requirements engineering often neglects enough attention to security concerns. In this paper it will be presented a case study of our proposal, called SREP (Security Requirements Engineering Process), which is a standard-centred process and a reuse-based approach which deals with the security requirements at the earlier stages of software development in a systematic and intuitive way by providing a security resources repository and by integrating the Common Criteria into the software development lifecycle. In brief, a case study is shown in this paper demonstrating how the security requirements for a security critical IS can be obtained in a guided and systematic way by applying SREP.", "title": "" }, { "docid": "bf2c7b1d93b6dee024336506fb5a2b32", "text": "In this paper we present the first public, online demonstration of MaxTract; a tool that converts PDF files containing mathematics into multiple formats including LTEX, HTML with embedded MathML, and plain text. Using a bespoke PDF parser and image analyser, we directly extract character and font information to use as input for a linear grammar which, in conjunction with specialised drivers, can accurately recognise and reproduce both the two dimensional relationships between symbols in mathematical formulae and the one dimensional relationships present in standard text. The main goals of MaxTract are to provide translation services into standard mathematical markup languages and to add accessibility to mathematical documents on multiple levels. This includes both accessibility in the narrow sense of providing access to content for print impaired users, such as those with visual impairments, dyslexia or dyspraxia, as well as more generally to enable any user access to the mathematical content at more re-usable levels than merely visual. MaxTract produces output compatible with web browsers, screen readers, and tools such as copy and paste, which is achieved by enriching the regular text with mathematical markup. The output can also be used directly, within the limits of the presentation MathML produced, as machine readable mathematical input to software systems such as Mathematica or Maple.", "title": "" }, { "docid": "5b1fabc6a25409b25b37ea34a1e57cf8", "text": "Global contrast considers the color difference between a target region or pixel and the rest of the image. It is frequently used to measure the saliency of the region or pixel. In previous global contrast-based methods, saliency is usually measured by the sum of contrast from the entire image. We find that the spatial distribution of contrast is one important cue of saliency that is neglected by previous works. Foreground pixel usually has high contrast from all directions, since it is surrounded by the background. Background pixel often shows low contrast in at least one direction, as it has to connect to the background. Motivated by this intuition, we first compute directional contrast from different directions for each pixel, and propose minimum directional contrast (MDC) as raw saliency metric. Then an O(1) computation of MDC using integral image is proposed. It takes only 1.5 ms for an input image of the QVGA resolution. 
In saliency post-processing, we use marker-based watershed algorithm to estimate each pixel as foreground or background, followed by one linear function to highlight or suppress its saliency. Performance evaluation is carried on four public data sets. The proposed method significantly outperforms other global contrast-based methods, and achieves comparable or better performance than the state-of-the-art methods. The proposed method runs at 300 FPS and shows six times improvement in runtime over the state-of-the-art methods.", "title": "" }, { "docid": "f6446f5853ea6cb1ad3705c23b96edae", "text": "Cloud-based radio access networks (C-RAN) have been proposed as a cost-efficient way of deploying small cells. Unlike conventional RANs, a C-RAN decouples the baseband processing unit (BBU) from the remote radio head (RRH), allowing for centralized operation of BBUs and scalable deployment of light-weight RRHs as small cells. In this work, we argue that the intelligent configuration of the front-haul network between the BBUs and RRHs, is essential in delivering the performance and energy benefits to the RAN and the BBU pool, respectively. We then propose FluidNet - a scalable, light-weight framework for realizing the full potential of C-RAN. FluidNet deploys a logically re-configurable front-haul to apply appropriate transmission strategies in different parts of the network and hence cater effectively to both heterogeneous user profiles and dynamic traffic load patterns. FluidNet's algorithms determine configurations that maximize the traffic demand satisfied on the RAN, while simultaneously optimizing the compute resource usage in the BBU pool. We prototype FluidNet on a 6 BBU, 6 RRH WiMAX C-RAN testbed. Prototype evaluations and large-scale simulations reveal that FluidNet's ability to re-configure its front-haul and tailor transmission strategies provides a 50% improvement in satisfying traffic demands, while reducing the compute resource usage in the BBU pool by 50% compared to baseline transmission schemes.", "title": "" }, { "docid": "3725922023dbb52c1bde309dbe4d76ca", "text": "BACKGROUND\nRecent studies demonstrate that low-level laser therapy (LLLT) modulates many biochemical processes, especially the decrease of muscle injures, the increase in mitochondrial respiration and ATP synthesis for accelerating the healing process.\n\n\nOBJECTIVE\nIn this work, we evaluated mitochondrial respiratory chain complexes I, II, III and IV and succinate dehydrogenase activities after traumatic muscular injury.\n\n\nMETHODS\nMale Wistar rats were randomly divided into three groups (n=6): sham (uninjured muscle), muscle injury without treatment, muscle injury with LLLT (AsGa) 5J/cm(2). Gastrocnemius injury was induced by a single blunt-impact trauma. LLLT was used 2, 12, 24, 48, 72, 96, and 120 hours after muscle-trauma.\n\n\nRESULTS\nOur results showed that the activities of complex II and succinate dehydrogenase after 5days of muscular lesion were significantly increased when compared to the control group. 
Moreover, our results showed that LLLT significantly increased the activities of complexes I, II, III, IV and succinate dehydrogenase, when compared to the group of injured muscle without treatment.\n\n\nCONCLUSION\nThese results suggest that the treatment with low-level laser may induce an increase in ATP synthesis, and that this may accelerate the muscle healing process.", "title": "" }, { "docid": "42c7c881935df8b22068dabdd48a05e8", "text": "Dropout training, originally designed for deep neural networks, has been successful on high-dimensional single-layer natural language tasks. This paper proposes a theoretical explanation for this phenomenon: we show that, under a generative Poisson topic model with long documents, dropout training improves the exponent in the generalization bound for empirical risk minimization. Dropout achieves this gain much like a marathon runner who practices at altitude: once a classifier learns to perform reasonably well on training examples that have been artificially corrupted by dropout, it will do very well on the uncorrupted test set. We also show that, under similar conditions, dropout preserves the Bayes decision boundary and should therefore induce minimal bias in high dimensions.", "title": "" }, { "docid": "7a3b5a4c4968085d219fac481a4d316b", "text": "Potassium-based ceramic materials composed of leucite, in which 5 % of Al is exchanged with Fe, and 4 % of hematite were synthesized by mechanochemical homogenization and annealing of K2O-SiO2-Al2O3-Fe2O3 mixtures. The synthesized material was characterized by X-ray Powder Diffraction (XRPD) and Scanning Electron Microscopy coupled with Energy Dispersive X-ray spectroscopy (SEM/EDX). The two methods are in good agreement with regard to the specimen's chemical composition, suggesting that the leucite chemical formula is K0.8Al0.7Fe0.15Si2.25O6. Rietveld structure refinement results reveal that about 20 % of vacancies exist in the positions of the K atoms.", "title": "" }, { "docid": "de1ec3df1fa76e5a419ac8506cd63286", "text": "It is hard to estimate optical flow given a real-world video sequence with camera shake and other motion blur. In this paper, we first investigate the blur parameterization for video footage using near-linear motion elements. We then combine a commercial 3D pose sensor with an RGB camera, in order to film video footage of interest together with the camera motion. We illustrate that this additional camera motion/trajectory channel can be embedded into a hybrid framework by interleaving an iterative blind deconvolution and warping-based optical flow scheme. Our method yields improved accuracy over three other state-of-the-art baselines on our proposed ground-truth blurry sequences and several other real-world sequences filmed by our imaging system.", "title": "" }, { "docid": "a83b417c2be604427eacf33b1db91468", "text": "We report a male infant with iris coloboma, choanal atresia, postnatal retardation of growth and psychomotor development, genital anomaly, ear anomaly, and anal atresia. In addition, there was cutaneous syndactyly and nail hypoplasia of the second and third fingers on the right and hypoplasia of the left second finger nail. Comparable observations have rarely been reported and possibly represent genetic heterogeneity.", "title": "" }, { "docid": "5512bb4600d4cefa79508d75bc5c6898", "text": "Spark, a subset of Ada for engineering safety- and security-critical systems, is one of the best commercially available frameworks for formal-methods-supported development of critical software. 
Spark is designed for verification and includes a software contract language for specifying functional properties of procedures. Even though Spark and its static analysis components are beneficial and easy to use, its contract language is almost never used due to the burdens the associated tool support imposes on developers. Symbolic execution (SymExe) techniques have made significant strides in automating reasoning about deep semantic properties of source code. However, most work on SymExe has focused on bugfinding and test case generation as opposed to tasks that are more verificationoriented such as contract checking. In this paper, we present: (a) SymExe techniques for checking software contracts in embedded critical systems, and (b) Bakar Kiasan, a tool that implements these techniques in an integrated development environment for Spark. We describe a methodology for using Bakar Kiasan that provides significant increases in automation, usability, and functionality over existing Spark tools, and we present results from experiments on its application to industrial examples.", "title": "" }, { "docid": "a71efe137054cd9102ed05e7d5c139f4", "text": "In this paper we argue for the use of Unstructured Supplementary Service Data (USSD) as a platform for universal cell phone applications. We examine over a decade of ICT4D research, analyzing how USSD can extend and complement current uses of IVR and SMS for data collection, messaging, information access, social networking and complex user initiated transactions. Based on these findings we identify situations when a mobile based project should consider using USSD with increasingly common third party gateways over other mediums. This analysis also motivates the design and implementation of an open source library for rapid development of USSD applications. Finally, we explore three USSD use cases, demonstrating how USSD opens up a design space not available with IVR or SMS.", "title": "" }, { "docid": "5c8e509d42148fef01e1c5ac00286aac", "text": "Graphs can represent biological networks at the molecular, protein, or species level. An important query is to find all matches of a pattern graph to a target graph. Accomplishing this is inherently difficult (NP-complete) and the efficiency of heuristic algorithms for the problem may depend upon the input graphs. The common aim of existing algorithms is to eliminate unsuccessful mappings as early as and as inexpensively as possible. We propose a new subgraph isomorphism algorithm which applies a search strategy to significantly reduce the search space without using any complex pruning rules or domain reduction procedures. We compare our method with the most recent and efficient subgraph isomorphism algorithms (VFlib, LAD, and our C++ implementation of FocusSearch which was originally distributed in Modula2) on synthetic, molecules, and interaction networks data. We show a significant reduction in the running time of our approach compared with these other excellent methods and show that our algorithm scales well as memory demands increase. Subgraph isomorphism algorithms are intensively used by biochemical tools. Our analysis gives a comprehensive comparison of different software approaches to subgraph isomorphism highlighting their weaknesses and strengths. This will help researchers make a rational choice among methods depending on their application. 
We also distribute an open-source package including our system and our own C++ implementation of FocusSearch together with all the used datasets ( http://ferrolab.dmi.unict.it/ri.html ). In future work, our findings may be extended to approximate subgraph isomorphism algorithms.", "title": "" }, { "docid": "d274ad45c79237b9e63e9dc18881064b", "text": "Can altmetric data be validly used for the measurement of societal impact? The current study seeks to answer this question with a comprehensive dataset (about 100,000 records) from very disparate sources (F1000, Altmetric, and an in-house database based on Web of Science). In the F1000 peer review system, experts attach particular tags to scientific papers which indicate whether a paper could be of interest for science or rather for other segments of society. The results show that papers with the tag\"good for teaching\"do achieve higher altmetric counts than papers without this tag - if the quality of the papers is controlled. At the same time, a higher citation count is shown especially by papers with a tag that is specifically scientifically oriented (\"new finding\"). The findings indicate that papers tailored for a readership outside the area of research should lead to societal impact. If altmetric data is to be used for the measurement of societal impact, the question arises of its normalization. In bibliometrics, citations are normalized for the papers' subject area and publication year. This study has taken a second analytic step involving a possible normalization of altmetric data. As the results show there are particular scientific topics which are of especial interest for a wide audience. Since these more or less interesting topics are not completely reflected in Thomson Reuters' journal sets, a normalization of altmetric data should not be based on the level of subject categories, but on the level of topics.", "title": "" }, { "docid": "e083b5fdf76bab5cdc8fcafc77db23f7", "text": "Working under a model of privacy in which data remains private even from the statistician, we study the tradeoff between privacy guarantees and the risk of the resulting statistical estimators. We develop private versions of classical information-theoretic bounds, in particular those due to Le Cam, Fano, and Assouad. These inequalities allow for a precise characterization of statistical rates under local privacy constraints and the development of provably (minimax) optimal estimation procedures. We provide a treatment of several canonical families of problems: mean estimation and median estimation, multinomial probability estimation, and nonparametric density estimation. For all of these families, we provide lower and upper bounds that match up to constant factors, and exhibit new (optimal) privacy-preserving mechanisms and computationally efficient estimators that achieve the bounds. Additionally, we present a variety of experimental results for estimation problems involving sensitive data, including salaries, censored blog posts and articles, and drug abuse; these experiments demonstrate the importance of deriving optimal procedures.", "title": "" }, { "docid": "e5dc07c94c7519f730d03aa6c53ca98e", "text": "Brown adipose tissue (BAT) is specialized to dissipate chemical energy in the form of heat as a defense against cold and excessive feeding. Interest in the field of BAT biology has exploded in the past few years because of the therapeutic potential of BAT to counteract obesity and obesity-related diseases, including insulin resistance. 
Much progress has been made, particularly in the areas of BAT physiology in adult humans, developmental lineages of brown adipose cell fate, and hormonal control of BAT thermogenesis. As we enter into a new era of brown fat biology, the next challenge will be to develop strategies for activating BAT thermogenesis in adult humans to increase whole-body energy expenditure. This article reviews the recent major advances in this field and discusses emerging questions.", "title": "" }, { "docid": "2c5eb3fb74c6379dfd38c1594ebe85f4", "text": "Accurately recognizing speaker emotion and age/gender from speech can provide better user experience for many spoken dialogue systems. In this study, we propose to use deep neural networks (DNNs) to encode each utterance into a fixed-length vector by pooling the activations of the last hidden layer over time. The feature encoding process is designed to be jointly trained with the utterance-level classifier for better classification. A kernel extreme learning machine (ELM) is further trained on the encoded vectors for better utterance-level classification. Experiments on a Mandarin dataset demonstrate the effectiveness of our proposed methods on speech emotion and age/gender recognition tasks.", "title": "" }, { "docid": "764c38722f53229344184248ac94a096", "text": "Verbal fluency tasks have long been used to assess and estimate group and individual differences in executive functioning in both cognitive and neuropsychological research domains. Despite their ubiquity, however, the specific component processes important for success in these tasks have remained elusive. The current work sought to reveal these various components and their respective roles in determining performance in fluency tasks using latent variable analysis. Two types of verbal fluency (semantic and letter) were compared along with several cognitive constructs of interest (working memory capacity, inhibition, vocabulary size, and processing speed) in order to determine which constructs are necessary for performance in these tasks. The results are discussed within the context of a two-stage cyclical search process in which participants first search for higher order categories and then search for specific items within these categories.", "title": "" }, { "docid": "2a0315f4e95ee3475ec9a359eae98632", "text": "The measurement of safe driving distance based on stereo vision is proposed. The model of camera imaging is first established using the traditional camera calibration method. Secondly, the projection matrix is deduced from the camera's internal and external parameters and used to calibrate the camera. The camera calibration method based on a two-dimensional target plane is adopted. Then the distortion parameters are calculated when the nonlinear geometric model of camera imaging is built. Moreover, the camera's internal and external parameters are optimized on the basis of the projection error's least-squares criterion so that the undistorted image can be obtained. Matching is done between the left image and the right image at the corresponding corner points. The parallax error and the distance between the target vehicle and the camera can then be calculated. The experimental results show that the measurement scheme is an effective one for surveying safe vehicle spacing. The proposed system is convenient for the driver to control the vehicle in time and precisely. 
It is able to increase the security in intelligent transportation vehicles.", "title": "" }, { "docid": "0594068f88a89de0dbc9d4b82e15d31f", "text": "We describe mechanical metamaterials created by folding flat sheets in the tradition of origami, the art of paper folding, and study them in terms of their basic geometric and stiffness properties, as well as load bearing capability. A periodic Miura-ori pattern and a non-periodic Ron Resch pattern were studied. Unexceptional coexistence of positive and negative Poisson's ratio was reported for Miura-ori pattern, which are consistent with the interesting shear behavior and infinity bulk modulus of the same pattern. Unusually strong load bearing capability of the Ron Resch pattern was found and attributed to the unique way of folding. This work paves the way to the study of intriguing properties of origami structures as mechanical metamaterials.", "title": "" } ]
scidocsrr
2196d51908364187a9c56b0f73884c8c
A fully-adaptive wideband 0.5–32.75Gb/s FPGA transceiver in 16nm FinFET CMOS technology
[ { "docid": "09af9b0987537e54b7456fb36407ffe3", "text": "The introduction of high-speed backplane transceivers inside FPGAs has addressed critical issues such as the ease in scalability of performance, high availability, flexible architectures, the use of standards, and rapid time to market. These have been crucial to address the ever-increasing demand for bandwidth in communication and storage systems [1-3], requiring novel techniques in receiver (RX) and clocking circuits.", "title": "" } ]
[ { "docid": "c246f445b8341d2ae400a1fba2f64205", "text": "This paper presents a novel design of cylindrical modified Luneberg lens antenna at millimeter-wave (mm-wave) frequencies in which no dielectric is needed as lens material. The cylindrical modified Luneberg lens consists of two air-filled, almost-parallel plates whose spacing continuously varies with the radius to simulate the general Luneberg's Law. A planar antipodal linearly-tapered slot antenna (ALTSA) is placed between the parallel plates at the focal position of the lens as a feed antenna. A combined ray-optics/diffraction method and CST-MWS are used to analyze and design this lens antenna. Measured results of a fabricated cylindrical modified Luneberg lens with a diameter of 100 mm show good agreement with theoretical predictions. At the design frequency of 30 GHz, the measured 3-dB E- and H-plane beamwidths are 8.6° and 68°, respectively. The first sidelobe level in the E-plane is -20 dB, and the cross-polarization is -28 dB below peak. The measured aperture efficiency is 68% at 30 GHz, and varies between 50% and 71% over the tested frequency band of 29-32 GHz. Due to its rotational symmetry, this lens can be used to launch multiple beams by implementing an arc array of planar ALTSA elements at the periphery of the lens. A 21-element antenna array with a -3-D dB beam crossover and a scan angle of 180° is demonstrated. The measured overall scan coverage is up to ±80° with gain drop less than -3 dB.", "title": "" }, { "docid": "7d44a9227848baaf54b9bfb736727551", "text": "Introduction: The causal relation between tongue thrust swallowing or habit and development of anterior open bite continues to be made in clinical orthodontics yet studies suggest a lack of evidence to support a cause and effect. Treatment continues to be directed towards closing the anterior open bite frequently with surgical intervention to reposition the maxilla and mandible. This case report illustrates a highly successful non-surgical orthodontic treatment without extractions.", "title": "" }, { "docid": "09ee1b6d80facc1c21248e855f17a17d", "text": "AIM\nTo examine the relationship between calf circumference and muscle mass, and to evaluate the suitability of calf circumference as a surrogate marker of muscle mass for the diagnosis of sarcopenia among middle-aged and older Japanese men and women.\n\n\nMETHODS\nA total of 526 adults aged 40-89 years participated in the present cross-sectional study. The maximum calf circumference was measured in a standing position. Appendicular skeletal muscle mass was measured using dual-energy X-ray absorptiometry, and the skeletal muscle index was calculated as appendicular skeletal muscle mass divided by the square of the height (kg/m(2)). The cut-off values for sarcopenia were defined as a skeletal muscle index of less than -2 standard deviations of the mean value for Japanese young adults, as defined previously.\n\n\nRESULTS\nCalf circumference was positively correlated with appendicular skeletal muscle (r = 0.81 in men, r = 0.73 in women) and skeletal muscle index (r = 0.80 in men, r = 0.69 in women). 
In receiver operating characteristic analysis, the optimal calf circumference cut-off values for predicting sarcopenia were 34 cm (sensitivity 88%, specificity 91%) in men and 33 cm (sensitivity 76%, specificity 73%) in women.\n\n\nCONCLUSIONS\nCalf circumference was positively correlated with appendicular skeletal muscle mass and skeletal muscle index, and could be used as a surrogate marker of muscle mass for diagnosing sarcopenia. The suggested cut-off values of calf circumference for predicting low muscle mass are <34 cm in men and <33 cm in women.", "title": "" }, { "docid": "abd5a7566cefd263be3c082b4974c1e6", "text": "Interconnect architectures which leverage high-bandwidth optical channels offer a promising solution to address the increasing chip-to-chip I/O bandwidth demands. This paper describes a dense, high-speed, and low-power CMOS optical interconnect transceiver architecture. Vertical-cavity surface-emitting laser (VCSEL) data rate is extended for a given average current and corresponding reliability level with a four-tap current summing FIR transmitter. A low-voltage integrating and double-sampling optical receiver front-end provides adequate sensitivity in a power efficient manner by avoiding linear high-gain elements common in conventional transimpedance-amplifier (TIA) receivers. Clock recovery is performed with a dual-loop architecture which employs baud-rate phase detection and feedback interpolation to achieve reduced power consumption, while high-precision phase spacing is ensured at both the transmitter and receiver through adjustable delay clock buffers. A prototype chip fabricated in 1 V 90 nm CMOS achieves 16 Gb/s operation while consuming 129 mW and occupying 0.105 mm2.", "title": "" }, { "docid": "2059db0707ffc28fd62b7387ba6d09ae", "text": "Embedded quantization is a mechanism employed by many lossy image codecs to progressively refine the distortion of a (transformed) image. Currently, the most common approach to do so in the context of wavelet-based image coding is to couple uniform scalar deadzone quantization (USDQ) with bitplane coding (BPC). USDQ+BPC is convenient for its practicality and has proved to achieve competitive coding performance. But the quantizer established by this scheme does not allow major variations. This paper introduces a multistage quantization scheme named general embedded quantization (GEQ) that provides more flexibility to the quantizer. GEQ schemes can be devised for specific decoding rates achieving optimal coding performance. Practical approaches of GEQ schemes achieve coding performance similar to that of USDQ+BPC while requiring fewer quantization stages. The performance achieved by GEQ is evaluated in this paper through experimental results carried out in the framework of modern image coding systems.", "title": "" }, { "docid": "c2177b7e3cdca3800b3d465229835949", "text": "BACKGROUND\nIn 2010, the World Health Organization published benchmarks for training in osteopathy in which osteopathic visceral techniques are included. 
The purpose of this study was to identify and critically appraise the scientific literature concerning the reliability of diagnosis and the clinical efficacy of techniques used in visceral osteopathy.\n\n\nMETHODS\nDatabases MEDLINE, OSTMED.DR, the Cochrane Library, Osteopathic Research Web, Google Scholar, Journal of American Osteopathic Association (JAOA) website, International Journal of Osteopathic Medicine (IJOM) website, and the catalog of Académie d'ostéopathie de France website were searched through December 2017. Only inter-rater reliability studies including at least two raters or the intra-rater reliability studies including at least two assessments by the same rater were included. For efficacy studies, only randomized-controlled-trials (RCT) or crossover studies on unhealthy subjects (any condition, duration and outcome) were included. Risk of bias was determined using a modified version of the quality appraisal tool for studies of diagnostic reliability (QAREL) in reliability studies. For the efficacy studies, the Cochrane risk of bias tool was used to assess their methodological design. Two authors performed data extraction and analysis.\n\n\nRESULTS\nEight reliability studies and six efficacy studies were included. The analysis of reliability studies shows that the diagnostic techniques used in visceral osteopathy are unreliable. Regarding efficacy studies, the least biased study shows no significant difference for the main outcome. The main risks of bias found in the included studies were due to the absence of blinding of the examiners, an unsuitable statistical method or an absence of primary study outcome.\n\n\nCONCLUSIONS\nThe results of the systematic review lead us to conclude that well-conducted and sound evidence on the reliability and the efficacy of techniques in visceral osteopathy is absent.\n\n\nTRIAL REGISTRATION\nThe review is registered PROSPERO 12th of December 2016. Registration number is CRD4201605286 .", "title": "" }, { "docid": "1675208fd7adefb20784a7708d655763", "text": "The number of crime incidents that is reported per day in India is increasing dramatically. The criminals today use various advanced technologies and commit crimes in really tactful ways. This makes crime investigation a more complicated process. Thus the police officers have to perform a lot of manual tasks to get a thread for investigation. This paper deals with the study of data mining based systems for analyzing crime information and thus automates the crime investigation procedure of the police officers. The majority of these frameworks utilize a blend of data mining methods such as clustering and classification for the effective investigation of the criminal acts.", "title": "" }, { "docid": "976507b0b89c2202ab603ccedae253f5", "text": "We present a natural language generator based on the sequence-to-sequence approach that can be trained to produce natural language strings as well as deep syntax dependency trees from input dialogue acts, and we use it to directly compare two-step generation with separate sentence planning and surface realization stages to a joint, one-step approach. We were able to train both setups successfully using very little training data. 
The joint setup offers better performance, surpassing the state of the art with regard to n-gram-based scores while providing more relevant outputs.", "title": "" }, { "docid": "fdb88cbc66d6eccb76cfbecdaf596c77", "text": "Recent studies show that more than 86% of Internet paths allow well-designed TCP extensions, meaning that it is still possible to deploy transport layer improvements despite the existence of middleboxes in the network. Hence, the blame for the slow evolution of protocols (with extensions taking many years to become widely used) should be placed on end systems.\n In this paper, we revisit the case for moving protocol stacks up into user space in order to ease the deployment of new protocols, extensions, or performance optimizations. We present MultiStack, operating system support for user-level protocol stacks. MultiStack runs within commodity operating systems, can concurrently host a large number of isolated stacks, has a fall-back path to the legacy host stack, and is able to process packets at rates of 10Gb/s.\n We validate our design by showing that our mux/demux layer can validate and switch packets at line rate (up to 14.88 Mpps) on a 10 Gbit port using 1-2 cores, and that a proof-of-concept HTTP server running over a basic userspace TCP outperforms by 18-90% both the same server and nginx running over the kernel's stack.", "title": "" }, { "docid": "28f8be68a0fe4762af272a0e11d53f7d", "text": "In this article, we address the cross-domain (i.e., street and shop) clothing retrieval problem and investigate its real-world applications for online clothing shopping. It is a challenging problem due to the large discrepancy between street and shop domain images. We focus on learning an effective feature-embedding model to generate robust and discriminative feature representation across domains. Existing triplet embedding models achieve promising results by finding an embedding metric in which the distance between negative pairs is larger than the distance between positive pairs plus a margin. However, existing methods do not address the challenges in the cross-domain clothing retrieval scenario sufficiently. First, the intradomain and cross-domain data relationships need to be considered simultaneously. Second, the numbers of matched and nonmatched cross-domain pairs are unbalanced. To address these challenges, we propose a deep cross-triplet embedding algorithm together with a cross-triplet sampling strategy. The extensive experimental evaluations demonstrate the effectiveness of the proposed algorithms well. Furthermore, we investigate two novel online shopping applications, clothing trying on and accessories recommendation, based on a unified cross-domain clothing retrieval framework.", "title": "" }, { "docid": "5816f70a7f4d7d0beb6e0653db962df3", "text": "Packaging appearance is extremely important in cigarette manufacturing. Typically, there are two types of cigarette packaging defects: (1) cigarette laying defects such as incorrect cigarette numbers and irregular layout; (2) tin paper handle defects such as folded paper handles. In this paper, an automated vision-based defect inspection system is designed for cigarettes packaged in tin containers. The first type of defects is inspected by counting the number of cigarettes in a tin container. First, k-means clustering is performed to segment cigarette regions. After noise filtering, valid cigarette regions are identified by estimating individual cigarette area using linear regression. 
The k clustering centers and area estimation function are learned off-line on training images. The second kind of defect is detected by checking the segmented paper handle region. Experimental results on 500 test images demonstrate the effectiveness of the proposed inspection system. The proposed method also contributes to the general detection and classification system such as identifying mitosis in early diagnosis of cervical cancer.", "title": "" }, { "docid": "37a6f3773aebf46cc40266b8bb5692af", "text": "The theory of myofascial pain syndrome (MPS) caused by trigger points (TrPs) seeks to explain the phenomena of muscle pain and tenderness in the absence of evidence for local nociception. Although it lacks external validity, many practitioners have uncritically accepted the diagnosis of MPS and its system of treatment. Furthermore, rheumatologists have implicated TrPs in the pathogenesis of chronic widespread pain (FM syndrome). We have critically examined the evidence for the existence of myofascial TrPs as putative pathological entities and for the vicious cycles that are said to maintain them. We find that both are inventions that have no scientific basis, whether from experimental approaches that interrogate the suspect tissue or empirical approaches that assess the outcome of treatments predicated on presumed pathology. Therefore, the theory of MPS caused by TrPs has been refuted. This is not to deny the existence of the clinical phenomena themselves, for which scientifically sound and logically plausible explanations based on known neurophysiological phenomena can be advanced.", "title": "" }, { "docid": "8a339bdfd3966e56b0132ca82c2eb824", "text": "This paper introduces a novel spectral framework for solving Markov decision processes (MDPs) by jointly learning representations and optimal policies. The major components of the framework described in this paper include: (i) A general scheme for constructing representations or basis functions by diagonalizing symmetric diffusion operators (ii) A specific instantiation of this approach where global basis functions called proto-value functions (PVFs) are formed using the eigenvectors of the graph Laplacian on an undirected graph formed from state transitions induced by the MDP (iii) A three-phased procedure called representation policy iteration comprising of a sample collection phase, a representation learning phase that constructs basis functions from samples, and a final parameter estimation phase that determines an (approximately) optimal policy within the (linear) subspace spanned by the (current) basis functions. (iv) A specific instantiation of the RPI framework using least-squares policy iteration (LSPI) as the parameter estimation method (v) Several strategies for scaling the proposed approach to large discrete and continuous state spaces, including the Nyström extension for out-of-sample interpolation of eigenfunctions, and the use of Kronecker sum factorization to construct compact eigenfunctions in product spaces such as factored MDPs (vi) Finally, a series of illustrative discrete and continuous control tasks, which both illustrate the concepts and provide a benchmark for evaluating the proposed approach. 
Many challenges remain to be addressed in scaling the proposed framework to large MDPs, and several elaboration of the proposed framework are briefly summarized at the end.", "title": "" }, { "docid": "0da299fb53db5980a10e0ae8699d2209", "text": "Modern heuristics or metaheuristics are optimization algorithms that have been increasingly used during the last decades to support complex decision-making in a number of fields, such as logistics and transportation, telecommunication networks, bioinformatics, finance, and the like. The continuous increase in computing power, together with advancements in metaheuristics frameworks and parallelization strategies, are empowering these types of algorithms as one of the best alternatives to solve rich and real-life combinatorial optimization problems that arise in a number of financial and banking activities. This article reviews some of the works related to the use of metaheuristics in solving both classical and emergent problems in the finance arena. A non-exhaustive list of examples includes rich portfolio optimization, index tracking, enhanced indexation, credit risk, stock investments, financial project scheduling, option pricing, feature selection, bankruptcy and financial distress prediction, and credit risk assessment. This article also discusses some open opportunities for researchers in the field, and forecast the evolution of metaheuristics to include real-life uncertainty conditions into the optimization problems being considered.", "title": "" }, { "docid": "f2b3643ca7a9a1759f038f15847d7617", "text": "Despite significant advances in image segmentation techniques, evaluation of these techniques thus far has been largely subjective. Typically, the effectiveness of a new algorithm is demonstrated only by the presentation of a few segmented images and is otherwise left to subjective evaluation by the reader. Little effort has been spent on the design of perceptually correct measures to compare an automatic segmentation of an image to a set of hand-segmented examples of the same image. This paper demonstrates how a modification of the Rand index, the Normalized Probabilistic Rand (NPR) index, meets the requirements of largescale performance evaluation of image segmentation. We show that the measure has a clear probabilistic interpretation as the maximum likelihood estimator of an underlying Gibbs model, can be correctly normalized to account for the inherent similarity in a set of ground truth images, and can be computed efficiently for large datasets. Results are presented on images from the publicly available Berkeley Segmentation dataset.", "title": "" }, { "docid": "679e7b448f0b3bc2f1713cdb852ac6b2", "text": "There are many advantages of using high frequency PWM (in the range of 50 to 100 kHz) in motor drive applications. High motor efficiency, fast control response, lower motor torque ripple, close to ideal sinusoidal motor current waveform, smaller filter size, lower cost filter, etc. are a few of the advantages. However, higher frequency PWM is also associated with severe voltage reflection and motor insulation breakdown issues at the motor terminals. If standard Si IGBT based inverters are employed, losses in the switches make it difficult to overcome significant drop in efficiency of converting electrical power to mechanical power. Work on SiC and GaN based inverter has progressed and variable frequency drives (VFDs) can now be operated efficiently at carrier frequencies in the 50 to 200 kHz range, using these devices. 
Using soft magnetic material, the overall efficiency of filtering can be improved. The switching characteristics of SiC and GaN devices are such that even at high switching frequency, the turn on and turn off losses are minimal. Hence, there is not much penalty in increasing the carrier frequency of the VFD. Losses in AC motors due to PWM waveform are significantly reduced. All the above features put together improves system efficiency. This paper presents results obtained on using a 6-in-1 GaN module for VFD application, operating at a carrier frequency of 100 kHz with an output sine wave filter. Experimental results show the improvement in motor efficiency and system efficiency on using a GaN based VFD in comparison to the standard Si IGBT based VFD.", "title": "" }, { "docid": "72e6d897e8852fca481d39237cf04e36", "text": "CONTEXT\nPrimary care physicians report high levels of distress, which is linked to burnout, attrition, and poorer quality of care. Programs to reduce burnout before it results in impairment are rare; data on these programs are scarce.\n\n\nOBJECTIVE\nTo determine whether an intensive educational program in mindfulness, communication, and self-awareness is associated with improvement in primary care physicians' well-being, psychological distress, burnout, and capacity for relating to patients.\n\n\nDESIGN, SETTING, AND PARTICIPANTS\nBefore-and-after study of 70 primary care physicians in Rochester, New York, in a continuing medical education (CME) course in 2007-2008. The course included mindfulness meditation, self-awareness exercises, narratives about meaningful clinical experiences, appreciative interviews, didactic material, and discussion. An 8-week intensive phase (2.5 h/wk, 7-hour retreat) was followed by a 10-month maintenance phase (2.5 h/mo).\n\n\nMAIN OUTCOME MEASURES\nMindfulness (2 subscales), burnout (3 subscales), empathy (3 subscales), psychosocial orientation, personality (5 factors), and mood (6 subscales) measured at baseline and at 2, 12, and 15 months.\n\n\nRESULTS\nOver the course of the program and follow-up, participants demonstrated improvements in mindfulness (raw score, 45.2 to 54.1; raw score change [Delta], 8.9; 95% confidence interval [CI], 7.0 to 10.8); burnout (emotional exhaustion, 26.8 to 20.0; Delta = -6.8; 95% CI, -4.8 to -8.8; depersonalization, 8.4 to 5.9; Delta = -2.5; 95% CI, -1.4 to -3.6; and personal accomplishment, 40.2 to 42.6; Delta = 2.4; 95% CI, 1.2 to 3.6); empathy (116.6 to 121.2; Delta = 4.6; 95% CI, 2.2 to 7.0); physician belief scale (76.7 to 72.6; Delta = -4.1; 95% CI, -1.8 to -6.4); total mood disturbance (33.2 to 16.1; Delta = -17.1; 95% CI, -11 to -23.2), and personality (conscientiousness, 6.5 to 6.8; Delta = 0.3; 95% CI, 0.1 to 5 and emotional stability, 6.1 to 6.6; Delta = 0.5; 95% CI, 0.3 to 0.7). Improvements in mindfulness were correlated with improvements in total mood disturbance (r = -0.39, P < .001), perspective taking subscale of physician empathy (r = 0.31, P < .001), burnout (emotional exhaustion and personal accomplishment subscales, r = -0.32 and 0.33, respectively; P < .001), and personality factors (conscientiousness and emotional stability, r = 0.29 and 0.25, respectively; P < .001).\n\n\nCONCLUSIONS\nParticipation in a mindful communication program was associated with short-term and sustained improvements in well-being and attitudes associated with patient-centered care. 
Because before-and-after designs limit inferences about intervention effects, these findings warrant randomized trials involving a variety of practicing physicians.", "title": "" }, { "docid": "11d1a8d8cd9fdabfbdc77d4a0accf007", "text": "Blockchain technology like Bitcoin is a rapidly growing field of research which has found a wide array of applications. However, the power consumption of the mining process in the Bitcoin blockchain alone is estimated to be at least as high as the electricity consumption of Ireland which constitutes a serious liability to the widespread adoption of blockchain technology. We propose a novel instantiation of a proof of human-work which is a cryptographic proof that an amount of human work has been exercised, and show its use in the mining process of a blockchain. Next to our instantiation there is only one other instantiation known which relies on indistinguishability obfuscation, a cryptographic primitive whose existence is only conjectured. In contrast, our construction is based on the cryptographic principle of multiparty computation (which we use in a black box manner) and thus is the first known feasible proof of human-work scheme. Our blockchain mining algorithm called uMine, can be regarded as an alternative energy-efficient approach to mining.", "title": "" }, { "docid": "b418470025d74d745e75225861a1ed7e", "text": "The brain which is composed of more than 100 billion nerve cells is a sophisticated biochemical factory. For many years, neurologists, psychotherapists, researchers, and other health care professionals have studied the human brain. With the development of computer and information technology, it makes brain complex spectrum analysis to be possible and opens a highlight field for the study of brain science. In the present work, observation and exploring study of the activities of brain under brainwave music stimulus are systemically made by experimental and spectrum analysis technology. From our results, the power of the 10.5Hz brainwave appears in the experimental figures, it was proved that upper alpha band is entrained under the special brainwave music. According to the Mozart effect and the analysis of improving memory performance, the results confirm that upper alpha band is indeed related to the improvement of learning efficiency.", "title": "" }, { "docid": "6416eb9235954730b8788b7b744d9e5b", "text": "This paper presents a machine learning based handover management scheme for LTE to improve the Quality of Experience (QoE) of the user in the presence of obstacles. We show that, in this scenario, a state-of-the-art handover algorithm is unable to select the appropriate target cell for handover, since it always selects the target cell with the strongest signal without taking into account the perceived QoE of the user after the handover. In contrast, our scheme learns from past experience how the QoE of the user is affected when the handover was done to a certain eNB. Our performance evaluation shows that the proposed scheme substantially improves the number of completed downloads and the average download time compared to state-of-the-art. Furthermore, its performance is close to an optimal approach in the coverage region affected by an obstacle.", "title": "" } ]
scidocsrr
b911e784b37a1675b21acb722d294daf
Predicting Visual Exemplars of Unseen Classes for Zero-Shot Learning
[ { "docid": "dc3495ec93462e68f606246205a8416d", "text": "State-of-the-art methods for zero-shot visual recognition formulate learning as a joint embedding problem of images and side information. In these formulations the current best complement to visual features are attributes: manually-encoded vectors describing shared characteristics among categories. Despite good performance, attributes have limitations: (1) finer-grained recognition requires commensurately more attributes, and (2) attributes do not provide a natural language interface. We propose to overcome these limitations by training neural language models from scratch, i.e. without pre-training and only consuming words and characters. Our proposed models train end-to-end to align with the fine-grained and category-specific content of images. Natural language provides a flexible and compact way of encoding only the salient visual aspects for distinguishing categories. By training on raw text, our model can do inference on raw text as well, providing humans a familiar mode both for annotation and retrieval. Our model achieves strong performance on zero-shot text-based image retrieval and significantly outperforms the attribute-based state-of-the-art for zero-shot classification on the Caltech-UCSD Birds 200-2011 dataset.", "title": "" } ]
[ { "docid": "d46329330906d2ea997cb63cb465bec0", "text": "We consider a multilingual weakly supervised learning scenario where knowledge from annotated corpora in a resource-rich language is transferred via bitext to guide the learning in other languages. Past approaches project labels across bitext and use them as features or gold labels for training. We propose a new method that projects model expectations rather than labels, which facilities transfer of model uncertainty across language boundaries. We encode expectations as constraints and train a discriminative CRF model using Generalized Expectation Criteria (Mann and McCallum, 2010). Evaluated on standard Chinese-English and German-English NER datasets, our method demonstrates F1 scores of 64% and 60% when no labeled data is used. Attaining the same accuracy with supervised CRFs requires 12k and 1.5k labeled sentences. Furthermore, when combined with labeled examples, our method yields significant improvements over state-of-the-art supervised methods, achieving best reported numbers to date on Chinese OntoNotes and German CoNLL-03 datasets.", "title": "" }, { "docid": "9c9e36a64d82beada8807546636aef20", "text": "Nowadays, FMCW (Frequency Modulated Continuous Wave) radar is widely adapted due to the use of solid state microwave amplifier to generate signal source. The FMCW radar can be implemented and analyzed at low cost and less complexity by using Software Defined Radio (SDR). In this paper, SDR based FMCW radar for target detection and air traffic control radar application is implemented in real time. The FMCW radar model is implemented using open source software and hardware. GNU Radio is utilized for software part of the radar and USRP (Universal Software Radio Peripheral) N210 for hardware part. Log-periodic antenna operating at 1GHZ frequency is used for transmission and reception of radar signals. From the beat signal obtained at receiver end and range resolution of signal, target is detected. Further low pass filtering followed by Fast Fourier Transform (FFT) is performed to reduce computational complexity.", "title": "" }, { "docid": "631b473342cc30360626eaea0734f1d8", "text": "Argument extraction is the task of identifying arguments, along with their components in text. Arguments can be usually decomposed into a claim and one or more premises justifying it. The proposed approach tries to identify segments that represent argument elements (claims and premises) on social Web texts (mainly news and blogs) in the Greek language, for a small set of thematic domains, including articles on politics, economics, culture, various social issues, and sports. The proposed approach exploits distributed representations of words, extracted from a large non-annotated corpus. Among the novel aspects of this work is the thematic domain itself which relates to social Web, in contrast to traditional research in the area, which concentrates mainly on law documents and scientific publications. The huge increase of social web communities, along with their user tendency to debate, makes the identification of arguments in these texts a necessity. In addition, a new manually annotated corpus has been constructed that can be used freely for research purposes. 
Evaluation results are quite promising, suggesting that distributed representations can contribute positively to the task of argument extraction.", "title": "" }, { "docid": "768336582eb1aece4454ec461f3840d2", "text": "This paper presents an Iterative Linear Quadratic Regulator (ILQR) method for locally-optimal feedback control of nonlinear dynamical systems. The method is applied to a musculoskeletal arm model with 10 state dimensions and 6 controls, and is used to compute energy-optimal reaching movements. Numerical comparisons with three existing methods demonstrate that the new method converges substantially faster and finds slightly better solutions.", "title": "" }, { "docid": "f52b170e25eaf9478e520a0e81e96386", "text": "General unsupervised learning is a long-standing conceptual problem in machine learning. Supervised learning is successful because it can be solved by the minimization of the training error cost function. Unsupervised learning is not as successful, because the unsupervised objective may be unrelated to the supervised task of interest. For example, density modelling and reconstruction have often been used for unsupervised learning, but they did not produce the sought-after performance gains, because they have no knowledge of the sought-after supervised tasks. In this paper, we present an unsupervised cost function which we name the Output Distribution Matching (ODM) cost, which measures a divergence between the distribution of predictions and distributions of labels. The ODM cost is appealing because it is consistent with the supervised cost in the following sense: a perfect supervised classifier is also perfect according to the ODM cost. Therefore, by aggressively optimizing the ODM cost, we are almost guaranteed to improve our supervised performance whenever the space of possible predictions is exponentially large. We demonstrate that the ODM cost works well on a number of small and semi-artificial datasets using no (or almost no) labelled training cases. Finally, we show that the ODM cost can be used for one-shot domain adaptation, which allows the model to classify inputs that differ from the input distribution in significant ways without the need for prior exposure to the new domain.", "title": "" }, { "docid": "a0251ae10bfabd188766aa2453b8cebb", "text": "This paper presents the development of an automatic vehicle plate detection system using image processing techniques. This system is commonly known as Automatic Number Plate Recognition (ANPR). Automatic vehicle plate detection is commonly used in the field of safety and security systems, especially in car parking areas. Besides the safety aspect, this system is applied to monitor road traffic, such as the speed of a vehicle and the identification of the vehicle's owner. This system is designed to assist the authorities in identifying stolen vehicles, not only cars but motorcycles as well. In such systems, the Optical Character Recognition (OCR) technique has been the prominent technique employed by researchers to analyse images of vehicle plates. The limitation of this technique is its inability to convert text or data accurately. Besides, the characters, the background and the size of the vehicle plate vary from one country to another. Hence, this project proposes a combination of image processing techniques and OCR to obtain accurate vehicle plate recognition for vehicles in Malaysia. 
The outcome of this study is a system capable of detecting characters and numbers of vehicle plates against different backgrounds (black and white) accurately. This study also involves the development of a Graphical User Interface (GUI) to ease the user in recognizing the characters and numbers in the vehicle or license plates.", "title": "" }, { "docid": "1bf93bf9bd826c4701df5d2036b83226", "text": "In this paper, we propose a novel end-to-end neural architecture for ranking candidate answers that adapts a hierarchical recurrent neural network and a latent topic clustering module. With our proposed model, a text is encoded to a vector representation from a word-level to a chunk-level to effectively capture the entire meaning. In particular, by adapting the hierarchical structure, our model shows very small performance degradations in longer text comprehension while other state-of-the-art recurrent neural network models suffer from it. Additionally, the latent topic clustering module extracts semantic information from target samples. This clustering module is useful for any text related tasks by allowing each data sample to find its nearest topic cluster, thus helping the neural network model analyze the entire data. We evaluate our models on the Ubuntu Dialogue Corpus and consumer electronic domain question answering dataset, which is related to Samsung products. The proposed model shows state-of-the-art results for ranking question-answer pairs.", "title": "" }, { "docid": "e5a1f6546de9683e7dc90af147d73d40", "text": "Progress in both speech and language processing has spurred efforts to support applications that rely on spoken rather than written language input. A key challenge in moving from text-based documents to such spoken documents is that spoken language lacks explicit punctuation and formatting, which can be crucial for good performance. This article describes different levels of speech segmentation, approaches to automatically recovering segment boundary locations, and experimental results demonstrating impact on several language processing tasks. The results also show a need for optimizing segmentation for the end task rather than independently.", "title": "" }, { "docid": "02b3d799fa78e2c23de1cbb7a04e0ee9", "text": "Users derive many benefits by storing personal data in cloud computing services; however, the drawback of storing data in these services is that the user cannot access his/her own data when an internet connection is not available. To solve this problem in an efficient and elegant way, we propose the cloud-dew architecture. Cloud-dew architecture is an extension of the client-server architecture. In the extension, servers are further classified into cloud servers and dew servers. The dew servers are web servers that reside on users' local computers and have a pluggable structure so that scripts and databases of websites can be installed easily. The cloud-dew architecture not only makes the personal data stored in the cloud continuously accessible by the user, but also enables a new application: web-surfing without an internet connection. An experimental system is presented to demonstrate the ideas of the cloud-dew architecture.", "title": "" }, { "docid": "919342b88482e827c3923d66e0c50cb7", "text": "Scoring sentences in documents given abstract summaries created by humans is important in extractive multi-document summarization. 
In this paper, we formulate extractive summarization as a two step learning problem building a generative model for pattern discovery and a regression model for inference. We calculate scores for sentences in document clusters based on their latent characteristics using a hierarchical topic model. Then, using these scores, we train a regression model based on the lexical and structural characteristics of the sentences, and use the model to score sentences of new documents to form a summary. Our system advances current state-of-the-art improving ROUGE scores by ∼7%. Generated summaries are less redundant and more coherent based upon manual quality evaluations.", "title": "" }, { "docid": "69058572e8baaef255a3be6ac9eef878", "text": "Web developers often want to repurpose interactive behaviors from third-party web pages, but struggle to locate the specific source code that implements the behavior. This task is challenging because developers must find and connect all of the non-local interactions between event-based JavaScript code, declarative CSS styles, and web page content that combine to express the behavior.\n The Scry tool embodies a new approach to locating the code that implements interactive behaviors. A developer selects a page element; whenever the element changes, Scry captures the rendering engine's inputs (DOM, CSS) and outputs (screenshot) for the element. For any two captured element states, Scry can compute how the states differ and which lines of JavaScript code were responsible. Using Scry, a developer can locate an interactive behavior's implementation by picking two output states; Scry indicates the JavaScript code directly responsible for their differences.", "title": "" }, { "docid": "390505bd6f04e899a15c64c26beac606", "text": "Existing automated essay scoring (AES) models rely on rated essays for the target prompt as training data. Despite their successes in prompt-dependent AES, how to effectively predict essay ratings under a prompt-independent setting remains a challenge, where the rated essays for the target prompt are not available. To close this gap, a two-stage deep neural network (TDNN) is proposed. In particular, in the first stage, using the rated essays for nontarget prompts as the training data, a shallow model is learned to select essays with an extreme quality for the target prompt, serving as pseudo training data; in the second stage, an end-to-end hybrid deep model is proposed to learn a prompt-dependent rating model consuming the pseudo training data from the first step. Evaluation of the proposed TDNN on the standard ASAP dataset demonstrates a promising improvement for the prompt-independent AES task.", "title": "" }, { "docid": "8b1a811e09ba0c468044e2bf2d6ef700", "text": "Ensembles of learning machines are promising for software effort estimation (SEE), but need to be tailored for this task to have their potential exploited. A key issue when creating ensembles is to produce diverse and accurate base models. Depending on how differently different performance measures behave for SEE, they could be used as a natural way of creating SEE ensembles. We propose to view SEE model creation as a multiobjective learning problem. A multiobjective evolutionary algorithm (MOEA) is used to better understand the tradeoff among different performance measures by creating SEE models through the simultaneous optimisation of these measures. We show that the performance measures behave very differently, presenting sometimes even opposite trends. 
They are then used as a source of diversity for creating SEE ensembles. A good tradeoff among different measures can be obtained by using an ensemble of MOEA solutions. This ensemble performs similarly to or better than a model that does not consider these measures explicitly. In addition, MOEA is flexible, allowing a particular measure to be emphasised if desired. In conclusion, MOEA can be used to better understand the relationship among performance measures and has been shown to be very effective in creating SEE models.", "title": "" },
Different implementation pathways have cost–benefit implications for stakeholders, so cost–benefit analysis techniques are proposed to assess trade-offs between goals and implementation strategies. The use of the framework is illustrated with two case studies in assistive technology domains: e-mail and a personalised navigation system. The first case study illustrates personal requirements to help cognitively disabled users communicate via e-mail, while the second addresses personal and mobile requirements to help disabled users make journeys on their own, assisted by a mobile PDA guide. In both case studies the experience from requirements analysis to implementation, requirements monitoring, and requirements evolution is reported.", "title": "" }, { "docid": "89a04e656c8e42a78363a5087771b58d", "text": "Analyzing the security of Wearable Internet-of-Things (WIoT) devices is considered a complex task due to their heterogeneous nature. In addition, there is currently no mechanism that performs security testing for WIoT devices in different contexts. In this article, we propose an innovative security testbed framework targeted at wearable devices, where a set of security tests are conducted, and a dynamic analysis is performed by realistically simulating environmental conditions in which WIoT devices operate. The architectural design of the proposed testbed and a proof-of-concept, demonstrating a preliminary analysis and the detection of context-based attacks executed by smartwatch devices, are presented.", "title": "" }, { "docid": "4899e13d5c85b63a823db9c4340824e7", "text": "With the prevalence of server blades and systems-on-a-chip (SoCs), interconnection networks are becoming an important part of the microprocessor landscape. However, there is limited tool support available for their design. While performance simulators have been built that enable performance estimation while varying network parameters, these cover only one metric of interest in modern designs. System power consumption is increasingly becoming equally, if not more important than performance. It is now critical to get detailed power-performance tradeoff information early in the microarchitectural design cycle. This is especially so as interconnection networks consume a significant fraction of total system power. It is exactly this gap that the work presented in this paper aims to fill.We present Orion, a power-performance interconnection network simulator that is capable of providing detailed power characteristics, in addition to performance characteristics, to enable rapid power-performance trade-offs at the architectural-level. This capability is provided within a general framework that builds a simulator starting from a microarchitectural specification of the interconnection network. A key component of this construction is the architectural-level parameterized power models that we have derived as part of this effort. Using component power models and a synthesized efficient power (and performance) simulator, a microarchitect can rapidly explore the design space. As case studies, we demonstrate the use of Orion in determining optimal system parameters, in examining the effect of diverse traffic conditions, as well as evaluating new network microarchitectures. 
In each of the above, the ability to simultaneously monitor power and performance is key in determining suitable microarchitectures.", "title": "" }, { "docid": "73f6ba4ad9559cd3c6f7a88223e4b556", "text": "A recurring problem faced when training neural networks is that there is typically not enough data to maximize the generalization capability of deep neural networks. There are many techniques to address this, including data augmentation, dropout, and transfer learning. In this paper, we introduce an additional method, which we call smart augmentation and we show how to use it to increase the accuracy and reduce over fitting on a target network. Smart augmentation works, by creating a network that learns how to generate augmented data during the training process of a target network in a way that reduces that networks loss. This allows us to learn augmentations that minimize the error of that network. Smart augmentation has shown the potential to increase accuracy by demonstrably significant measures on all data sets tested. In addition, it has shown potential to achieve similar or improved performance levels with significantly smaller network sizes in a number of tested cases.", "title": "" } ]
scidocsrr
b0dd3f1aad518c98c1f4ff4f042a5703
Semantic smart grid services: Enabling a standards-compliant Internet of energy platform with IEC 61850 and OPC UA
[ { "docid": "ed06226e548fac89cc06a798618622c6", "text": "Exciting yet challenging times lie ahead. The electrical power industry is undergoing rapid change. The rising cost of energy, the mass electrification of everyday life, and climate change are the major drivers that will determine the speed at which such transformations will occur. Regardless of how quickly various utilities embrace smart grid concepts, technologies, and systems, they all agree onthe inevitability of this massive transformation. It is a move that will not only affect their business processes but also their organization and technologies.", "title": "" }, { "docid": "3bc9eb46e389b7be4141950142c606dd", "text": "Within this contribution, we outline the use of the new automation standards family OPC Unified Architecture (IEC 62541) in scope with the IEC 61850 field automation standard. The IEC 61850 provides both an abstract data model and an abstract communication interface. Different technology mappings to implement the model exist. With the upcoming OPC UA, a new communication model to implement abstract interfaces has been introduced. We outline its use in this contribution and also give examples on how it can be used alongside the IEC 61970 Common Information Model to properly integrate ICT and field automation at communication standards level.", "title": "" } ]
[ { "docid": "008f94637ed982a75c51577f4bfc3c34", "text": "Revelations of large scale electronic surveillance and data mining by governments and corporations have fueled increased adoption of HTTPS. We present a traffic analysis attack against over 6000 webpages spanning the HTTPS deployments of 10 widely used, industryleading websites in areas such as healthcare, finance, legal services and streaming video. Our attack identifies individual pages in the same website with 89% accuracy, exposing personal details including medical conditions, financial and legal affairs and sexual orientation. We examine evaluation methodology and reveal accuracy variations as large as 18% caused by assumptions affecting caching and cookies. We present a novel defense reducing attack accuracy to 27% with a 9% traffic increase, and demonstrate significantly increased effectiveness of prior defenses in our evaluation context, inclusive of enabled caching, user-specific cookies and pages within the same website.", "title": "" }, { "docid": "a5e23ca50545378ef32ed866b97fd418", "text": "In the framework of computer assisted diagnosis of diabetic retinopathy, a new algorithm for detection of exudates is presented and discussed. The presence of exudates within the macular region is a main hallmark of diabetic macular edema and allows its detection with a high sensitivity. Hence, detection of exudates is an important diagnostic task, in which computer assistance may play a major role. Exudates are found using their high grey level variation, and their contours are determined by means of morphological reconstruction techniques. The detection of the optic disc is indispensable for this approach. We detect the optic disc by means of morphological filtering techniques and the watershed transformation. The algorithm has been tested on a small image data base and compared with the performance of a human grader. As a result, we obtain a mean sensitivity of 92.8% and a mean predictive value of 92.4%. Robustness with respect to changes of the parameters of the algorithm has been evaluated.", "title": "" }, { "docid": "f905016b422d9c16ac11b85182f196c7", "text": "The random forest (RF) classifier is an ensemble classifier derived from decision tree idea. However the parallel operations of several classifiers along with use of randomness in sample and feature selection has made the random forest a very strong classifier with accuracy rates comparable to most of currently used classifiers. Although, the use of random forest on handwritten digits has been considered before, in this paper RF is applied in recognizing Persian handwritten characters. Trying to improve the recognition rate, we suggest converting the structure of decision trees from a binary tree to a multi branch tree. The improvement gained this way proves the applicability of the idea.", "title": "" }, { "docid": "956b7139333421343e8ed245a63a7b4b", "text": "Purpose – During the last decades, different quality management concepts, including total quality management (TQM), six sigma and lean, have been applied by many different organisations. Although much important work has been documented regarding TQM, six sigma and lean, a number of questions remain concerning the applicability of these concepts in various organisations and contexts. Hence, the purpose of this paper is to describe the similarities and differences between the concepts, including an evaluation and criticism of each concept. 
Design/methodology/approach – Within a case study, a literature review and face-to-face interviews in typical TQM, six sigma and lean organisations have been carried out. Findings – While TQM, six sigma and lean have many similarities, especially concerning origin, methodologies, tools and effects, they differ in some areas, in particular concerning the main theory, approach and the main criticism. The lean concept is slightly different from TQM and six sigma. However, there is a lot to gain if organisations are able to combine these three concepts, as they are complementary. Six sigma and lean are excellent road-maps, which could be used one by one or combined, together with the values in TQM. Originality/value – The paper provides guidance to organisations regarding the applicability and properties of quality concepts. Organisations need to work continuously with customer-orientated activities in order to survive, irrespective of how these activities are labelled. The paper will also serve as a basis for further research in this area, focusing on practical experience of these concepts.", "title": "" }, { "docid": "4d18ea8816e9e4abf428b3f413c82f9e", "text": "This paper reviews computer vision and image analysis studies aiming at automated diagnosis or screening of malaria infection in microscope images of thin blood film smears. Existing works interpret the diagnosis problem differently or propose partial solutions to the problem. A critique of these works is furnished. In addition, a general pattern recognition framework to perform diagnosis, which includes image acquisition, pre-processing, segmentation, and pattern classification components, is described. The open problems are addressed and a perspective of the future work for realization of automated microscopy diagnosis of malaria is provided.", "title": "" }, { "docid": "bf7d502a818ac159cf402067b4416858", "text": "We present algorithms for evaluating and performing modeling operations on NURBS surfaces using the programmable fragment processor on the Graphics Processing Unit (GPU). We extend our GPU-based NURBS evaluator that evaluates NURBS surfaces to compute exact normals for either standard or rational B-spline surfaces for use in rendering and geometric modeling. We build on these calculations in our new GPU algorithms to perform standard modeling operations such as inverse evaluations, ray intersections, and surface-surface intersections on the GPU. Our modeling algorithms run in real time, enabling the user to sketch on the actual surface to create new features. In addition, the designer can edit the surface by interactively trimming it without the need for re-tessellation. We also present a GPU-accelerated algorithm to perform surface-surface intersection operations with NURBS surfaces that can output intersection curves in the model space as well as in the parametric spaces of both the intersecting surfaces at interactive rates.", "title": "" }, { "docid": "b3f423e513c543ecc9fe7003ff9880ea", "text": "Increasing attention has been paid to air quality monitoring with the rapid development of industry and transportation in modern society. However, the existing air quality monitoring systems cannot provide satisfactory spatial and temporal resolutions of the air quality information with low costs in real time. In this paper, we propose a new method to implement an air quality monitoring system based on state-of-the-art Internet-of-Things (IoT) techniques.
In this system, portable sensors collect air quality information in a timely manner, which is transmitted through a low power wide area network. All air quality data are processed and analyzed in the IoT cloud. The completed air quality monitoring system, including both hardware and software, is developed and deployed successfully in urban environments. Experimental results show that the proposed system is reliable in sensing the air quality, which helps reveal the change patterns of air quality to some extent.", "title": "" }, { "docid": "b7062e40643ff1b879247a3f4ec3b07f", "text": "The question of whether there are different patterns of autonomic nervous system responses for different emotions is examined. Relevant conceptual issues concerning both the nature of emotion and the structure of the autonomic nervous system are discussed in the context of the development of research methods appropriate for studying this question. Are different emotional states associated with distinct patterns of autonomic nervous system (ANS) activity? This is an old question that is currently enjoying a modest revival in psychology. In the 1950s autonomic specificity was a key item on the agenda of the newly emerging discipline of psychophysiology, which saw as its mission the scientific exploration of the mind-body relationship using the tools of electrophysiological measurement. But the field of psychophysiology had the misfortune of coming of age during a period in which psychology drifted away from its physiological roots, a period in which psychology was dominated by learning, behaviourism, personality theory and later by cognition. Psychophysiology in the period between 1960 and 1980 reflected these broader trends in psychology by focusing on such issues as autonomic markers of perceptual states (e.g. orienting, stimulus processing), the interplay between personality factors and ANS responsivity, operant conditioning of autonomic functions, and finally, electrophysiological markers of cognitive states. Research on autonomic specificity in emotion became increasingly rare. Perhaps as a result of these historical trends in psychology, or perhaps because research on emotion and physiology is so difficult to do well, there exists only a small body of studies on ANS specificity. Although almost all of these studies report some evidence for the existence of specificity, the prevailing zeitgeist has been that specificity has not been empirically established. At this point in time a review of the existing literature would not be very informative, for it would inevitably dissolve into a critique of methods. Instead, what I hope to accomplish in this chapter is to provide a new framework for thinking about ANS specificity, and to propose guidelines for carrying out research on this issue that will be cognizant of the recent methodological and theoretical advances that have been made both in psychophysiology and in research on emotion. Emotion as organization. From the outset, the definition of emotion that underlies this chapter should be made explicit. For me the essential function of emotion is organization. The selection of emotion for preservation across time and species is based on the need for an efficient mechanism that can mobilize and organize disparate response systems to deal with environmental events that pose a threat to survival.
In this view the prototypical context for human emotions is those situations in which a multi-system response must be organized quickly, where time is not available for the lengthy processes of deliberation, reformulation, planning and rehearsal; where a fine degree of co-ordination is required among systems as disparate as the muscles of the face and the organs of the viscera; and where adaptive behaviours that normally reside near the bottom of behavioural hierarchies must be instantaneously shifted to the top. Specificity versus undifferentiated arousal. In this model of emotion as organization it is assumed that each component system is capable of a number of different responses, and that the emotion will guide the selection of responses from each system. Component systems differ in terms of the number of response possibilities. Thus, in the facial expressive system a selection must be made among a limited set of prototypic emotional expressions (which are but a subset of the enormous number of expressions the face is capable of assuming). A motor behaviour must also be selected from a similarly reduced set of responses consisting of fighting, fleeing, freezing, hiding, etc. All major theories of emotion would accept the proposition that activation of the ANS is one of the changes that occur during emotion. But theories differ as to how many different ANS patterns constitute the set of selection possibilities. At one extreme are those who would argue that there are only two ANS patterns: 'off' and 'on'. The 'on' ANS pattern, according to this view, consists of a high-level, global, diffuse ANS activation, mediated primarily by the sympathetic branch of the ANS. The manifestations of this pattern (rapid and forceful contractions of the heart, rapid and deep breathing, increased systolic blood pressure, sweating, dry mouth, redirection of blood flow to large skeletal muscles, peripheral vasoconstriction, release of large amounts of epinephrine and norepinephrine from the adrenal medulla, and the resultant release of glucose from the liver) are well known. Cannon (1927) described this pattern in some detail, arguing that this kind of high-intensity, undifferentiated arousal accompanied all emotions. Among contemporary theories the notion of undifferentiated arousal is most clearly found in Mandler's theory (Mandler, 1975). However, undifferentiated arousal also played a major role in the extraordinarily influential cognitive/physiological theory of Schachter and Singer (1962). According to this theory, undifferentiated arousal is a necessary precondition for emotion: an extremely plastic medium to be moulded by cognitive processes working in concert with the available cues from the social environment. At the other extreme are those who argue that there are a large number of patterns of ANS activation, each associated with a different emotion (or subset of emotions). This is the traditional specificity position. Its classic statement is often attributed to James (1884), although Alexander (1950) provided an even more radical version. The specificity position fuelled a number of experimental studies in the 1950s and 1960s, all attempting to identify some of these autonomic patterns (e.g. Averill, 1969; Ax, 1953; Funkenstein, King and Drolette, 1954; Schachter, 1957; Sternbach, 1962).
Despite these studies, all of which reported support for ANS specificity, the undifferentiated arousal theory, especially as formulated by Schachter and Singer (1962) and their followers, has been dominant for a great many years. Is the ANS capable of specific action? No matter how appealing the notion of ANS specificity might be in the abstract, there would be little reason to pursue it in the laboratory if the ANS were only capable of producing one pattern of arousal. There is no question that the pattern of high-level sympathetic arousal described earlier is one pattern that the ANS can produce. Cannon's arguments notwithstanding, I believe there now is quite ample evidence that the ANS is capable of a number of different patterns of activation. Whether these patterns are reliably associated with different emotions remains an empirical question, but the potential is surely there. A case in support of this potential for specificity can be based on: (a) the neural structure of the ANS; (b) the stimulation neurochemistry of the ANS; and (c) empirical findings.", "title": "" }, { "docid": "7b0e63115a7d085a180e047ae1ab2139", "text": "We describe a set of tools for retail analytics based on a combination of video understanding and transaction logs. Tools are provided for loss prevention (returns fraud and cashier fraud), store operations (customer counting) and merchandising (display effectiveness). Results are presented on returns fraud and customer counting.", "title": "" }, { "docid": "09e2a91a25e4ecccc020a91e14a35282", "text": "A core goal of auditory neuroscience is to build quantitative models that predict cortical responses to natural sounds. Reasoning that a complete model of auditory cortex must solve ecologically relevant tasks, we optimized hierarchical neural networks for speech and music recognition. The best-performing network contained separate music and speech pathways following early shared processing, potentially replicating human cortical organization. The network performed both tasks as well as humans and exhibited human-like errors despite not being optimized to do so, suggesting common constraints on network and human performance. The network predicted fMRI voxel responses substantially better than traditional spectrotemporal filter models throughout auditory cortex. It also provided a quantitative signature of cortical representational hierarchy: primary and non-primary responses were best predicted by intermediate and late network layers, respectively. The results suggest that task optimization provides a powerful set of tools for modeling sensory systems.", "title": "" }, { "docid": "c97e005d827b712e7d61d8a911c3bed6", "text": "Industries and individuals outsource databases to realize convenient and low-cost applications and services. In order to provide sufficient functionality for SQL queries, many secure database schemes have been proposed. However, such schemes are vulnerable to privacy leakage to the cloud server. The main reason is that the database is hosted and processed in the cloud server, which is beyond the control of data owners. For the numerical range query (“>,” “<,” and so on), those schemes cannot provide sufficient privacy protection against practical challenges, e.g., privacy leakage of statistical properties, access pattern. Furthermore, an increased number of queries will inevitably leak more information to the cloud server.
In this paper, we propose a two-cloud architecture for a secure database, with a series of intersection protocols that provide privacy preservation for various numeric-related range queries. Security analysis shows that privacy of numerical information is strongly protected against cloud providers in our proposed scheme.", "title": "" }, { "docid": "6c2b19b2888d00fccb1eae37352d653d", "text": "Between June 1985 and January 1987, the Therac-25 medical electron accelerator was involved in six massive radiation overdoses. As a result, several people died and others were seriously injured. A detailed investigation of the factors involved in the software-related overdoses and attempts by users, manufacturers, and government agencies to deal with the accidents is presented. The authors demonstrate the complex nature of accidents and the need to investigate all aspects of system development and operation in order to prevent future accidents. The authors also present some lessons learned in terms of system engineering, software engineering, and government regulation of safety-critical systems containing software components.", "title": "" }, { "docid": "7dc652c9b86f63c0a6b546396980783b", "text": "An affine invariant representation is constructed with a cascade of invariants, which preserves information for classification. A joint translation and rotation invariant representation of image patches is calculated with a scattering transform. It is implemented with a deep convolutional network, which computes successive wavelet transforms and modulus non-linearities. Invariants to scaling, shearing and small deformations are calculated with linear operators in the scattering domain. State-of-the-art classification results are obtained over texture databases with uncontrolled viewing conditions.", "title": "" }, { "docid": "39e71a3228331eb8b1574173cfb1e04a", "text": "The Euler Number is one of the most important characteristics in topology. In two-dimensional digital images, the Euler characteristic is locally computable. The form of the Euler Number formula differs under 4-connected and 8-connected conditions. Based on the definition of the Foreground Segment and Neighbor Number, a formula for computing the Euler Number is proposed and proved in this paper. It provides a new way to locally compute the Euler Number of a 2D image.", "title": "" }, { "docid": "b2d1a0befef19d466cd29868d5cf963b", "text": "Accurate prediction of the functional effect of genetic variation is critical for clinical genome interpretation. We systematically characterized the transcriptome effects of protein-truncating variants, a class of variants expected to have profound effects on gene function, using data from the Genotype-Tissue Expression (GTEx) and Geuvadis projects. We quantitated tissue-specific and positional effects on nonsense-mediated transcript decay and present an improved predictive model for this decay. We directly measured the effect of variants both proximal and distal to splice junctions. Furthermore, we found that robustness to heterozygous gene inactivation is not due to dosage compensation. Our results illustrate the value of transcriptome data in the functional interpretation of genetic variants.", "title": "" }, { "docid": "c51e1b845d631e6d1b9328510ef41ea0", "text": "Accurate interference models are important for use in transmission scheduling algorithms in wireless networks.
In this work, we perform extensive modeling and experimentation on two 20-node TelosB mote testbeds -- one indoor and the other outdoor -- to compare a suite of interference models for their modeling accuracies. We first empirically build and validate the physical interference model via a packet reception rate vs. SINR relationship using a measurement-driven method. We then similarly instantiate other simpler models, such as hop-based, range-based, protocol model, etc. The modeling accuracies are then evaluated on the two testbeds using transmission scheduling experiments. We observe that while the physical interference model is the most accurate, it is still far from perfect, providing a 90-percentile error of about 20-25% (and an 80-percentile error of 7-12%), depending on the scenario. The accuracy of the other models is worse and scenario-specific. The second best model trails the physical model by roughly 12-18 percentile points for similar accuracy targets. A somewhat similar throughput performance differential between models is also observed when used with greedy scheduling algorithms. Carrying on further, we look closely into the two incarnations of the physical model -- 'thresholded' (conservative, but typically considered in the literature) and 'graded' (more realistic). We show, by solving the one-shot scheduling problem, that the graded version can improve 'expected throughput' over the thresholded version by scheduling imperfect links.", "title": "" }, { "docid": "57c2422bac0a8f44b186fadbfcadb393", "text": "In this paper, we propose a vision-based multiple lane boundary detection and estimation structure that fuses the edge features and the high intensity features. Our approach utilizes a camera as the only input sensor. The application of a Kalman filter for information fusion and tracking significantly improves the reliability and robustness of our system. We test our system on roads with different driving scenarios, including day, night, heavy traffic, rain, confusing textures and shadows. The feasibility of our approach is demonstrated by quantitative evaluation using manually labeled video clips.", "title": "" }, { "docid": "838b599024a14e952145af0c12509e31", "text": "In this paper, a broadband high-power eight-way coaxial waveguide power combiner with axially symmetric structure is proposed. A combination of circuit model and full electromagnetic wave methods is used to simplify the design procedure by increasing the role of the circuit model and, in contrast, reducing the amount of full wave optimization. The presented structure is compact and easy to fabricate. Keeping its return loss greater than 12 dB, the constructed combiner operates within 112% bandwidth from 520 to 1860 MHz.", "title": "" }, { "docid": "6de71e8106d991d2c3d2b845a9e0a67e", "text": "XML repositories are now a widespread means for storing and exchanging information on the Web. As these repositories become increasingly used in dynamic applications such as e-commerce, there is a rapidly growing need for a mechanism to incorporate reactive functionality in an XML setting. Event-condition-action (ECA) rules are a technology from active databases and are a natural method for supporting such functionality. ECA rules can be used for activities such as automatically enforcing document constraints, maintaining repository statistics, and facilitating publish/subscribe applications. An important question associated with the use of ECA rules is how to statically predict their run-time behaviour.
In this paper, we define a language for ECA rules on XML repositories. We then investigate methods for analysing the behaviour of a set of ECA rules, a task which has added complexity in this XML setting compared with conventional active databases.", "title": "" }, { "docid": "007f741a718d0c4a4f181676a39ed54a", "text": "Following the development of computing and communication technologies, the idea of the Internet of Things (IoT) has been realized not only at the research level but also at the application level. Among various IoT-related application fields, biometrics applications, especially face recognition, are widely applied in video-based surveillance, access control, law enforcement and many other scenarios. In this paper, we introduce a Face in Video Recognition (FivR) framework which performs real-time key-frame extraction on IoT edge devices, and then conducts face recognition using the extracted key-frames on the Cloud back-end. With our key-frame extraction engine, we are able to reduce the data volume and hence dramatically relieve the processing pressure on the cloud back-end. Our experimental results show that with IoT edge device acceleration, it is possible to implement a face in video recognition application without introducing a middleware or cloudlet layer, while still achieving real-time processing speed.", "title": "" } ]
scidocsrr
7e64e4d4a7a6540a565c08e05c87cde6
Smart grid standards for home and building automation
[ { "docid": "7edb8a803734f4eb9418b8c34b1bf07c", "text": "Building automation systems (BAS) provide automatic control of the conditions of indoor environments. The historical root and still core domain of BAS is the automation of heating, ventilation and air-conditioning systems in large functional buildings. Their primary goal is to realize significant savings in energy and reduce cost. Yet the reach of BAS has extended to include information from all kinds of building systems, working toward the goal of \"intelligent buildings\". Since these systems are diverse by tradition, integration issues are of particular importance. When compared with the field of industrial automation, building automation exhibits specific, differing characteristics. The present paper introduces the task of building automation and the systems and communications infrastructure necessary to address it. Basic requirements are covered as well as standard application models and typical services. An overview of relevant standards is given, including BACnet, LonWorks and EIB/KNX as open systems of key significance in the building automation domain.", "title": "" } ]
[ { "docid": "72f9891b711ebc261fc081a0b356c31b", "text": "This paper presents a flat, high gain, wide scanning, broadband continuous transverse stub (CTS) array. The design procedure, the fabrication, and an exhaustive antenna characterization are described in details. The array comprises 16 radiating slots and is fed by a corporate-feed network in hollow parallel plate waveguide (PPW) technology. A pillbox-based linear source illuminates the corporate network and allows for beam steering. The antenna is designed by using an ad hoc mode matching code recently developed for CTS arrays, providing design guidelines. The assembly technique ensures the electrical contact among the various stages of the network without using any electromagnetic choke and any bonding process. The main beam of the antenna is mechanically steered over ±40° in elevation, by moving a compact horn within the focal plane of the pillbox feeding system. Excellent performances are achieved. The features of the beam are stable within the design 27.5-31 GHz band and beyond, in the entire Ka-band (26.5-40 GHz). An antenna gain of about 29 dBi is measured at broadside at 29.25 GHz and scan losses lower than 2 dB are reported at ±40°. The antenna efficiency exceeds 80% in the whole scan range. The very good agreement between measurements and simulations validates the design procedure. The proposed design is suitable for Satcom Ka-band terminals in moving platforms, e.g., trains and planes, and also for mobile ground stations, as a multibeam sectorial antenna.", "title": "" }, { "docid": "78ffcec1e3d5164d7360aa8a93848fc4", "text": "During a long period of time we are combating overfitting in the CNN training process with model regularization, including weight decay, model averaging, data augmentation, etc. In this paper, we present DisturbLabel, an extremely simple algorithm which randomly replaces a part of labels as incorrect values in each iteration. Although it seems weird to intentionally generate incorrect training labels, we show that DisturbLabel prevents the network training from over-fitting by implicitly averaging over exponentially many networks which are trained with different label sets. To the best of our knowledge, DisturbLabel serves as the first work which adds noises on the loss layer. Meanwhile, DisturbLabel cooperates well with Dropout to provide complementary regularization functions. Experiments demonstrate competitive recognition results on several popular image recognition datasets.", "title": "" }, { "docid": "5820a54cf9235a08fbf3d6221c42f1d0", "text": "Restoring nasal lining is one of the essential parts during reconstruction of full-thickness defects of the nose. Without a sufficient nasal lining the whole reconstruction will fail. Nasal lining has to sufficiently cover the shaping subsurface framework. But in addition, lining must not compromise or even block nasal ventilation. This article demonstrates different possibilities of lining reconstruction. The use of composite grafts for small rim defects is described. The limits and technical components for application of skin grafts are discussed. Then the advantages and limitations of endonasal, perinasal, and hingeover flaps are demonstrated. Strategies to restore lining with one or two forehead flaps are presented. Finally, the possibilities and technical aspects to reconstruct nasal lining with a forearm flap are demonstrated. Technical details are explained by intraoperative pictures. 
Clinical cases are shown to illustrate the different approaches and should help to understand the process of decision making. It is concluded that although the lining cannot be seen after reconstruction of the cover it remains one of the key components for nasal reconstruction. When dealing with full-thickness nasal defects, there is no way to avoid learning how to restore nasal lining.", "title": "" }, { "docid": "bcea969179b1701179dac2087e57e749", "text": "We present a simple yet effective unsupervised domain adaptation method that can be generally applied for different NLP tasks. Our method uses unlabeled target domain instances to induce a set of instance similarity features. These features are then combined with the original features to represent labeled source domain instances. Using three NLP tasks, we show that our method consistently outperforms a few baselines, including SCL, an existing general unsupervised domain adaptation method widely used in NLP. More importantly, our method is very easy to implement and incurs much less computational cost than SCL.", "title": "" }, { "docid": "93b880dbc635a49ffc7a9e6906b094f6", "text": "Abstract machines provide a certain separation between platform-dependent and platform-independent concerns in compilation. Many of the differences between architectures are encapsulated in the specific abstract machine implementation and the bytecode is left largely architecture independent. Taking advantage of this fact, we present a framework for estimating upper and lower bounds on the execution times of logic programs running on a bytecode-based abstract machine. Our approach includes a one-time, program-independent profiling stage which calculates constants or functions bounding the execution time of each abstract machine instruction. Then, a compile-time cost estimation phase, using the instruction timing information, infers expressions giving platform-dependent upper and lower bounds on actual execution time as functions of input data sizes for each program. Working at the abstract machine level makes it possible to take into account low-level issues in new architectures and platforms by just reexecuting the calibration stage instead of having to tailor the analysis for each architecture and platform. Applications of such predicted execution times include debugging/verification of time properties, certification of time properties in mobile code, granularity control in parallel/distributed computing, and resource-oriented specialization", "title": "" }, { "docid": "0efe3ccc1c45121c5167d3792a7fcd25", "text": "This paper addresses the motion planning problem while considering Human-Robot Interaction (HRI) constraints. The proposed planner generates collision-free paths that are acceptable and legible to the human. The method extends our previous work on human-aware path planning to cluttered environments. A randomized cost-based exploration method provides an initial path that is relevant with respect to HRI and workspace constraints. The quality of the path is further improved with a local path-optimization method. Simulation results on mobile manipulators in the presence of humans demonstrate the overall efficacy of the approach.", "title": "" }, { "docid": "0d1f88dbd4a04748a83fe741a86518c1", "text": "The focus of this paper is to investigate how writing computer programs can help children develop their storytelling and creative writing abilities. 
The process of writing a program---coding---has long been considered only in terms of computer science, but such coding is also reflective of the imaginative and narrative elements of fiction writing workshops. Writing to program can also serve as programming to write, in which a child learns the importance of sequence, structure, and clarity of expression---three aspects characteristic of effective coding and good storytelling alike. While there have been efforts examining how learning to write code can be facilitated by storytelling, there has been little exploration as to how such creative coding can also be directed to teach students about the narrative and storytelling process. Using the introductory programming language Scratch, this paper explores the potential of having children create their own digital stories with the software and how the narrative structure of these stories offers kids the opportunity to better understand the process of expanding an idea into the arc of a story.", "title": "" }, { "docid": "0daa43669ae68a81e5eb71db900976c6", "text": "Fertilizer plays an important role in maintaining soil fertility, increasing yields and improving harvest quality. However, a significant portion of fertilizers are lost, increasing agricultural cost, wasting energy and polluting the environment, which are challenges for the sustainability of modern agriculture. To meet the demands of improving yields without compromising the environment, environmentally friendly fertilizers (EFFs) have been developed. EFFs are fertilizers that can reduce environmental pollution from nutrient loss by retarding, or even controlling, the release of nutrients into soil. Most of EFFs are employed in the form of coated fertilizers. The application of degradable natural materials as a coating when amending soils is the focus of EFF research. Here, we review recent studies on materials used in EFFs and their effects on the environment. The major findings covered in this review are as follows: 1) EFF coatings can prevent urea exposure in water and soil by serving as a physical barrier, thereby reducing the urea hydrolysis rate and decreasing nitrogen oxide (NOx) and dinitrogen (N2) emissions, 2) EFFs can increase the soil organic matter content, 3) hydrogel/superabsorbent coated EFFs can buffer soil acidity or alkalinity and lead to an optimal pH for plants, and 4) hydrogel/superabsorbent coated EFFs can improve water-retention and water-holding capacity of soil. In conclusion, EFFs play an important role in enhancing nutrients efficiency and reducing environmental pollution.", "title": "" }, { "docid": "e83ad9ba6d0d134b9691714fcdfe165e", "text": "With the adoption of a globalized and distributed IC design flow, IP piracy, reverse engineering, and counterfeiting threats are becoming more prevalent. Logic obfuscation techniques including logic locking and IC camouflaging have been developed to address these emergent challenges. A major challenge for logic locking and camouflaging techniques is to resist Boolean satisfiability (SAT) based attacks that can circumvent state-of-the-art solutions within minutes. Over the past year, multiple SAT attack resilient solutions such as Anti-SAT and AND-tree insertion (ATI) have been presented. In this paper, we perform a security analysis of these countermeasures and show that they leave structural traces behind in their attempts to thwart the SAT attack. 
We present three attacks, namely the “signal probability skew” (SPS) attack, the “AppSAT guided removal” (AGR) attack, and the “sensitization guided SAT” (SGS) attack, that can break Anti-SAT and ATI within minutes.", "title": "" }, { "docid": "0e1dfbbc366ae86a0bea1dad2a97d467", "text": "The discriminative power of modern deep learning models for 3D human action recognition is growing ever more potent. In conjunction with the recent resurgence of 3D human action representation with 3D skeletons, the quality and the pace of recent progress have been significant. However, the inner workings of state-of-the-art learning-based methods in 3D human action recognition still remain mostly black-box. In this work, we propose to use a new class of models known as Temporal Convolutional Neural Networks (TCN) for 3D human action recognition. TCN provides us with a way to explicitly learn readily interpretable spatio-temporal representations for 3D human action recognition. Through this work, we wish to take a step towards a spatio-temporal model that is easier to understand, explain and interpret. The resulting model, Res-TCN, achieves state-of-the-art results on the largest 3D human action recognition dataset, NTU-RGBD.", "title": "" }, { "docid": "70180fa9be4c8c87ce119772b2bcca23", "text": "The energy domain currently struggles with radical legal and technological changes, such as smart meters. This results in new use cases which can be implemented based on business process technology. Understanding and automating business processes requires modeling and testing them. However, existing process testing approaches frequently struggle with the testing of process resources, such as ERP systems, and negative testing. Hence, this work presents a toolchain which tackles these limitations. The approach uses an open source process engine to generate event logs and applies process mining techniques in a novel way.", "title": "" }, { "docid": "6ab046862d1c5329b0538a85dd0b4ccd", "text": "In this study, a photosynthesis-fermentation model was proposed to merge the positive aspects of autotrophs and heterotrophs. The microalga Chlorella protothecoides was grown autotrophically for CO(2) fixation and then metabolized heterotrophically for oil accumulation. Compared to typical heterotrophic metabolism, a 69% higher lipid yield on glucose was achieved at the fermentation stage in the photosynthesis-fermentation model. An elementary flux mode study suggested that the enzyme Rubisco catalyzed CO(2) re-fixation, enhancing carbon efficiency from sugar to oil. This result may explain the higher lipid yield. In this new model, 61.5% less CO(2) was released compared with typical heterotrophic metabolism. Immunoblotting and activity assays further showed that Rubisco functioned in sugar-bleaching cells at the fermentation stage. Overall, the photosynthesis-fermentation model, with double CO(2) fixation in both the photosynthesis and fermentation stages, enhances the carbon conversion ratio of sugar to oil and thus provides an efficient approach for the production of algal lipid.", "title": "" }, { "docid": "fd28f048f6ac4a7894022d0afee871f3", "text": "Graph clustering and graph outlier detection have been studied extensively on plain graphs, with various applications. Recently, algorithms have been extended to graphs with attributes as often observed in the real world. However, all of these techniques fail to incorporate the user preference into graph mining, and thus lack the ability to steer algorithms to more interesting parts of the attributed graph.
In this work, we overcome this limitation and introduce a novel user-oriented approach for mining attributed graphs. The key aspect of our approach is to infer user preference by the so-called focus attributes through a set of user-provided exemplar nodes. In this new problem setting, clusters and outliers are then simultaneously mined according to this user preference. Specifically, our FocusCO algorithm identifies the focus, extracts focused clusters and detects outliers. Moreover, FocusCO scales well with graph size, since we perform a local clustering of interest to the user rather than global partitioning of the entire graph. We show the effectiveness and scalability of our method on synthetic and real-world graphs, as compared to both existing graph clustering and outlier detection approaches.", "title": "" }, { "docid": "305dac2ffd4a04fa0ef9ca727edc6247", "text": "A new control strategy for obtaining the maximum traction force of electric vehicles with individual rear-wheel drive is presented. A sliding-mode observer is proposed to estimate the wheel slip and vehicle velocity under unknown road conditions by measuring only the wheel speeds. The proposed observer is based on the LuGre dynamic friction model and allows the maximum transmissible torque for each driven wheel to be obtained instantaneously. The maximum torque can be determined at any operating point and road condition, thus avoiding wheel skid. The proposed strategy maximizes the traction force while avoiding tire skid by controlling the torque of each traction motor. Simulation results using a complete vehicle model under different road conditions are presented to validate the proposed strategy.", "title": "" }, { "docid": "31d22f8a296b3054d1beff53a7a495a0", "text": "Spectral Matching (SM) is a computationally efficient approach to approximate the solution of pairwise matching problems that are NP-hard. In this paper, we present a probabilistic interpretation of spectral matching schemes and derive a novel Probabilistic Matching (PM) scheme that is shown to outperform previous approaches. We show that spectral matching can be interpreted as a Maximum Likelihood (ML) estimate of the assignment probabilities and that the Graduated Assignment (GA) algorithm can be cast as a Maximum a Posteriori (MAP) estimator. Based on this analysis, we derive a ranking scheme for spectral matchings based on their reliability, and propose a novel iterative probabilistic matching algorithm that relaxes some of the implicit assumptions used in prior works. We experimentally show our approaches to outperform previous schemes when applied to exhaustive synthetic tests as well as the analysis of real image sequences.", "title": "" }, { "docid": "32e33ef33a9ac42b856d49b270113ba2", "text": "Generalized frequency division multiplexing (GFDM) is a promising candidate waveform for next generation wireless communications systems. Unlike conventional orthogonal frequency division multiplexing (OFDM) based systems, it is a non-orthogonal waveform subject to inter-carrier and intersymbol interference. In multiple-input multiple-output (MIMO) systems, additional inter-antenna interference also arises. The presence of such three-dimensional interference challenges the receiver design. This paper addresses the MIMO-GFDM channel estimation problem with the aid of known reference signals, also referred to as pilots. Specifically, the received signal is expressed as the joint effect of the pilot part, unknown data part and noise part.
On top of this formulation, least squares (LS) and linear minimum mean square error (LMMSE) estimators are presented, while their performance is evaluated for various pilot arrangements.", "title": "" }, { "docid": "44c2cfd9dfacee55c7ff4bdca45024cd", "text": "An integrative computational methodology is developed for the management of nonpoint source pollution from watersheds. The associated decision support system is based on an interface between evolutionary algorithms (EAs) and a comprehensive watershed simulation model, and is capable of identifying optimal or near-optimal land use patterns to satisfy objectives. Specifically, a genetic algorithm (GA) is linked with the U.S. Department of Agriculture’s Soil and Water Assessment Tool (SWAT) for single objective evaluations, and a Strength Pareto Evolutionary Algorithm has been integrated with SWAT for multiobjective optimization. The model can be operated at a small spatial scale, such as a farm field, or on a larger watershed scale. A secondary model that also uses a GA is developed for calibration of the simulation model. Sensitivity analysis and parameterization are carried out in a preliminary step to identify model parameters that need to be calibrated. Application to a demonstration watershed located in Southern Illinois reveals the capability of the model in achieving its intended goals. However, the model is found to be computationally demanding as a direct consequence of repeated SWAT simulations during the search for favorable solutions. An artificial neural network (ANN) has been developed to mimic SWAT outputs and ultimately replace it during the search process. Replacement of SWAT by the ANN results in an 84% reduction in computational time required to identify final land use patterns. The ANN model is trained using a hybrid of evolutionary programming (EP) and the back propagation (BP) algorithms. The hybrid algorithm was found to be more effective and efficient than either EP or BP alone. Overall, this study demonstrates the powerful and multifaceted role that EAs and artificial intelligence techniques could play in solving the complex and realistic problems of environmental and water resources systems. CE Database subject headings: Algorithms; Neural networks; Watershed management; Pollution control; Calibration; Computation.", "title": "" }, { "docid": "64fbd2207a383bc4b04c66e8ee867922", "text": "Ultra compact, short pulse, high voltage, high current pulsers are needed for a variety of non-linear electrical and optical applications. With a fast risetime and short pulse width, these drivers are capable of producing sub-nanosecond electrical and thus optical pulses by gain switching semiconductor laser diodes. Gain-switching of laser diodes requires a sub-nanosecond pulser capable of driving a low output impedance (5 /spl Omega/ or less). Optical pulses obtained had risetimes as fast as 20 ps. The designed pulsers also could be used for triggering photo-conductive semiconductor switches (PCSS), gating high speed optical imaging systems, and providing electrical and optical sources for fast transient sensor applications. Building on concepts from Lawrence Livermore National Laboratory, the development of pulsers based on solid state avalanche transistors was adapted to drive low impedances. As each successive stage is avalanched in the circuit, the amount of overvoltage increases, increasing the switching speed and improving the turn on time of the output pulse at the final stage. 
The output of the pulser is coupled into the load using a Blumlein configuration.", "title": "" }, { "docid": "59bf93d2242104de07a960e944838118", "text": "Software requirements specifications (SRS) are usually validated by inspections, in which several reviewers read all or part of the specification and search for defects. We hypothesize that different methods for conducting these searches may have significantly different rates of success. Using a controlled experiment, we show that a Scenario-based detection method, in which each reviewer executes a specific procedure to discover a particular class of defects, has a higher defect detection rate than either Ad Hoc or Checklist methods. We describe the design, execution, and analysis of the experiment so others may reproduce it and test our results for different kinds of software developments and different populations of software engineers.", "title": "" }, { "docid": "9bd06a8a8c490cd8b686169d1a984a14", "text": "This review of research explores characteristics associated with massive open online courses (MOOCs). Three key characteristics are revealed: varied definitions of openness, barriers to persistence, and a distinct structure that takes the form as one of two pedagogical approaches. The concept of openness shifts among different MOOCs, models, researchers, and facilitators. The high dropout rates show that the barriers to learning are a significant challenge. Research has focused on engagement, motivation, and presence to mitigate risks of learner isolation. The pedagogical structure of the connectivist MOOC model (cMOOC) incorporates a social, distributed, networked approach and significant learner autonomy that is geared towards adult lifelong learners interested in personal or professional development. This connectivist approach relates to situated and social learning theories such as social constructivism (Kop, 2011). By contrast, the design of the Stanford Artificial Intelligence (AI) model (xMOOC) uses conventional directed instruction in the context of formal postsecondary educational institutions. This traditional pedagogical approach is categorized as cognitive-behaviorist (Rodriguez, 2012). These two distinct MOOC models attract different audiences, use different learning approaches, and employ different teaching methods. The purpose of this review is to synthesize the research describing the phenomenon of MOOCs in informal and postsecondary online learning. Massive open online courses (MOOCs) are a relatively new phenomenon sweeping higher education. By definition, MOOCs take place online. They could be affiliated with a university, but not necessarily. They are larger than typical college classes, sometimes much larger. They are open, which has multiple meanings evident in this research. While the literature is growing on this topic, it is yet limited. Scholars are taking notice of the literature around MOOCs in all its forms from conceptual to technical. Conference proceedings and magazine articles make up the majority of literature on MOOCs (Liyanagunawardena, Adams, & Williams, 2013). In order to better understand the characteristics associated with MOOCs, this review of literature focuses solely on original research published in scholarly journals. This emphasis on peer-reviewed research is an essential first step to form a more critical and comprehensive perspective by tempering the media hype.
While most of the early scholarly research examines aspects of the cMOOC model, much of the hype and controversy surrounds the scaling innovation of the xMOOC model in postsecondary learning contexts. Naidu (2013) calls out the massive open online repetitions of failed pedagogy (MOORFAPs) and forecasts a transformation to massive open online learning opportunities (MOOLOs). Informed educators will be better equipped to make evidence-based decisions, foster the positive growth of this innovation, and adapt it for their own unique contexts. This research synthesis is framed by a within- and between-study literature analysis (Onwuegbuzie, Leech, & Collins, 2012) and situated within the context of online teaching and learning.", "title": "" } ]
scidocsrr
d0e0ba0e3ed70b12b352235199356bde
Hierarchical target type identification for entity-oriented queries
[ { "docid": "f3a531c1979e1a179cc97c15a329d100", "text": "This paper addresses the problem of Named Entity Recognition in Query (NERQ), which involves detection of the named entity in a given query and classification of the named entity into predefined classes. NERQ is potentially useful in many applications in web search. The paper proposes taking a probabilistic approach to the task using query log data and Latent Dirichlet Allocation. We consider contexts of a named entity (i.e., the remainders of the named entity in queries) as words of a document, and classes of the named entity as topics. The topic model is constructed by a novel and general learning method referred to as WS-LDA (Weakly Supervised Latent Dirichlet Allocation), which employs weakly supervised learning (rather than unsupervised learning) using partially labeled seed entities. Experimental results show that the proposed method based on WS-LDA can accurately perform NERQ, and outperform the baseline methods.", "title": "" }, { "docid": "c7741eed703b0b896b58d272cd1a19fe", "text": "In this paper, we propose a novel unsupervised approach to query segmentation, an important task in Web search. We use a generative query model to recover a query's underlying concepts that compose its original segmented form. The model's parameters are estimated using an expectation-maximization (EM) algorithm, optimizing the minimum description length objective function on a partial corpus that is specific to the query. To augment this unsupervised learning, we incorporate evidence from Wikipedia.\n Experiments show that our approach dramatically improves performance over the traditional approach that is based on mutual information, and produces comparable results with a supervised method. In particular, the basic generative language model contributes a 7.4% improvement over the mutual information based method (measured by segment F1 on the Intersection test set). EM optimization further improves the performance by 14.3%. Additional knowledge from Wikipedia provides another improvement of 24.3%, adding up to a total of 46% improvement (from 0.530 to 0.774).", "title": "" }, { "docid": "aaf110cdf2a8ce96756c2ef0090d6e54", "text": "The heterogeneous Web exacerbates IR problems and short user queries make them worse. The contents of web documents are not enough to find good answer documents. Link information and URL information compensates for the insufficiencies of content information. However, static combination of multiple evidences may lower the retrieval performance. We need different strategies to find target documents according to a query type. We can classify user queries as three categories, the topic relevance task, the homepage finding task, and the service finding task. In this paper, a user query classification scheme is proposed. This scheme uses the difference of distribution, mutual information, the usage rate as anchor texts, and the POS information for the classification. After we classified a user query, we apply different algorithms and information for the better results. For the topic relevance task, we emphasize the content information, on the other hand, for the homepage finding task, we emphasize the Link information and the URL information. We could get the best performance when our proposed classification method with the OKAPI scoring algorithm was used.", "title": "" } ]
[ { "docid": "d7aac1208aa2ef63ed9a4ef5b67d8017", "text": "We contrast two theoretical approaches to social influence, one stressing interpersonal dependence, conceptualized as normative and informational influence (Deutsch & Gerard, 1955), and the other stressing group membership, conceptualized as self-categorization and referent informational influence (Turner, Hogg, Oakes, Reicher & Wetherell, 1987). We argue that both social comparisons to reduce uncertainty and the existence of normative pressure to comply depend on perceiving the source of influence as belonging to one's own category. This study tested these two approaches using three influence paradigms. First we demonstrate that, in Sherif's (1936) autokinetic effect paradigm, the impact of confederates on the formation of a norm decreases as their membership of a different category is made more salient to subjects. Second, in the Asch (1956) conformity paradigm, surveillance effectively exerts normative pressure if done by an in-group but not by an out-group. In-group influence decreases and out-group influence increases when subjects respond privately. Self-report data indicate that in-group confederates create more subjective uncertainty than out-group confederates and public responding seems to increase cohesiveness with in-group - but decrease it with out-group - sources of influence. In our third experiment we use the group polarization paradigm (e.g. Burnstein & Vinokur, 1973) to demonstrate that, when categorical differences between two subgroups within a discussion group are made salient, convergence of opinion between the subgroups is inhibited. Taken together the experiments show that self-categorization can be a crucial determining factor in social influence.", "title": "" }, { "docid": "e5b73193158b98a536d2d296e816c325", "text": "We use a low-dimensional linear model to describe the user rating matrix in a recommendation system. A non-negativity constraint is enforced in the linear model to ensure that each user’s rating profile can be represented as an additive linear combination of canonical coordinates. In order to learn such a constrained linear model from an incomplete rating matrix, we introduce two variations on Non-negative Matrix Factorization (NMF): one based on the Expectation-Maximization (EM) procedure and the other a Weighted Nonnegative Matrix Factorization (WNMF). Based on our experiments, the EM procedure converges well empirically and is less susceptible to the initial starting conditions than WNMF, but the latter is much more computationally efficient. Taking into account the advantages of both algorithms, a hybrid approach is presented and shown to be effective in real data sets. Overall, the NMF-based algorithms obtain the best prediction performance compared with other popular collaborative filtering algorithms in our experiments; the resulting linear models also contain useful patterns and features corresponding to user communities.", "title": "" }, { "docid": "588a4eccb49bf0edf45456319b6d8ee4", "text": "The VIENNA rectifiers have advantages of high efficiency as well as low output harmonics and are widely utilized in power conversion system when dc power sources are needed for supplying dc loads. VIENNA rectifiers based on three-phase/level can provide two voltage outputs with a neutral line at relatively low costs. However, total harmonic distortion (THD) of input current deteriorates seriously when unbalanced voltages occur. 
In addition, voltage outputs depend on system parameters, especially multiple loads. Therefore, unbalance output voltage controller and modified carrier-based pulse-width modulation (CBPWM) are proposed in this paper to solve the above problems. Unbalanced output voltage controller is designed based on average model considering independent output voltage and loads conditions. Meanwhile, reference voltages are modified according to different neutral point voltage conditions. The simulation and experimental results are presented to verify the proposed method.", "title": "" }, { "docid": "b784ff4a0e4458d19482d6715454f63d", "text": "We address two questions for training a convolutional neural network (CNN) for hyperspectral image classification: i) is it possible to build a pre-trained network? and ii) is the pretraining effective in furthering the performance? To answer the first question, we have devised an approach that pre-trains a network on multiple source datasets that differ in their hyperspectral characteristics and fine-tunes on a target dataset. This approach effectively resolves the architectural issue that arises when transferring meaningful information between the source and the target networks. To answer the second question, we carried out several ablation experiments. Based on the experimental results, a network trained from scratch performs as good as a network fine-tuned from a pre-trained network. However, we observed that pre-training the network has its own advantage in achieving better performances when deeper networks are required.", "title": "" }, { "docid": "1e176f66a29b6bd3dfce649da1a4db9d", "text": "In just a few years, crowdsourcing markets like Mechanical Turk have become the dominant mechanism for for building \"gold standard\" datasets in areas of computer science ranging from natural language processing to audio transcription. The assumption behind this sea change - an assumption that is central to the approaches taken in hundreds of research projects - is that crowdsourced markets can accurately replicate the judgments of the general population for knowledge-oriented tasks. Focusing on the important domain of semantic relatedness algorithms and leveraging Clark's theory of common ground as a framework, we demonstrate that this assumption can be highly problematic. Using 7,921 semantic relatedness judgements from 72 scholars and 39 crowdworkers, we show that crowdworkers on Mechanical Turk produce significantly different semantic relatedness gold standard judgements than people from other communities. We also show that algorithms that perform well against Mechanical Turk gold standard datasets do significantly worse when evaluated against other communities' gold standards. Our results call into question the broad use of Mechanical Turk for the development of gold standard datasets and demonstrate the importance of understanding these datasets from a human-centered point-of-view. More generally, our findings problematize the notion that a universal gold standard dataset exists for all knowledge tasks.", "title": "" }, { "docid": "62d1574e23fcf07befc54838ae2887c1", "text": "Digital images are widely used and numerous application in different scientific fields use digital image processing algorithms where image segmentation is a common task. Thresholding represents one technique for solving that task and Kapur's and Otsu's methods are well known criteria often used for selecting thresholds. 
Finding optimal threshold values represents a hard optimization problem and swarm intelligence algorithms have been successfully used for solving such problems. In this paper we adjusted recent elephant herding optimization algorithm for multilevel thresholding by Kapur's and Otsu's method. Performance was tested on standard benchmark images and compared with four other swarm intelligence algorithms. Elephant herding optimization algorithm outperformed other approaches from literature and it was more robust.", "title": "" }, { "docid": "6a993cdfbb701b43bb1cf287380e5b2e", "text": "There is a growing need for real-time human pose estimation from monocular RGB images in applications such as human computer interaction, assisted living, video surveillance, people tracking, activity recognition and motion capture. For the task, depth sensors and multi-camera systems are usually more expensive and difficult to set up than conventional RGB video cameras. Recent advances in convolutional neural network research have allowed to replace of traditional methods with more efficient convolutional neural network based methods in many computer vision tasks. This thesis presents a method for real-time multi-person human pose estimation from video by utilizing convolutional neural networks. The method is aimed for use case specific applications, where good accuracy is essential and variation of the background and poses is limited. This enables to use a generic network architecture, which is both accurate and fast. The problem is divided into two phases: (1) pretraining and (2) fine-tuning. In pretraining, the network is learned with highly diverse input data from publicly available datasets, while in fine-tuning it is trained with application specific data recorded with Kinect. The method considers the whole system, including person detector, pose estimator and an automatic way to record application specific training material for fine-tuning. The method can be also thought of as a replacement for Kinect, and it can be used for higher level tasks such as gesture control, games, person tracking and action recognition.", "title": "" }, { "docid": "a54f2e7a7d00cf5c9879e86009b60221", "text": "OBJECTIVES\nThis study was aimed to compare the effectiveness of aromatherapy and acupressure massage intervention strategies on the sleep quality and quality of life (QOL) in career women.\n\n\nDESIGN\nThe randomized controlled trial experimental design was used in the present study. One hundred and thirty-two career women (24-55 years) voluntarily participated in this study and they were randomly assigned to (1) placebo (distilled water), (2) lavender essential oil (Lavandula angustifolia), (3) blended essential oil (1:1:1 ratio of L. angustifolia, Salvia sclarea, and Origanum majorana), and (4) acupressure massage groups for a 4-week treatment. The Pittsburgh Sleep Quality Index and Short Form 36 Health Survey were used to evaluate the intervention effects at pre- and postintervention.\n\n\nRESULTS\nAfter a 4-week treatment, all experimental groups (blended essential oil, lavender essential oil, and acupressure massage) showed significant improvements in sleep quality and QOL (p < 0.05). 
Significantly greater improvement in QOL was observed in the participants with blended essential oil treatment compared with those with lavender essential oil (p < 0.05), and a significantly greater improvement in sleep quality was observed in the acupressure massage and blended essential oil groups compared with the lavender essential oil group (p < 0.05).\n\n\nCONCLUSIONS\nThe blended essential oil exhibited greater dual benefits on improving both QOL and sleep quality compared with the interventions of lavender essential oil and acupressure massage in career women. These results suggest that aromatherapy and acupressure massage improve the sleep and QOL and may serve as the optimal means for career women to improve their sleep and QOL.", "title": "" }, { "docid": "ab430da4dbaae50c2700f3bb9b1dbde5", "text": "Visual appearance score, appearance mixture type and deformation are three important information sources for human pose estimation. This paper proposes to build a multi-source deep model in order to extract non-linear representation from these different aspects of information sources. With the deep model, the global, high-order human body articulation patterns in these information sources are extracted for pose estimation. The task for estimating body locations and the task for human detection are jointly learned using a unified deep model. The proposed approach can be viewed as a post-processing of pose estimation results and can flexibly integrate with existing methods by taking their information sources as input. By extracting the non-linear representation from multiple information sources, the deep model outperforms state-of-the-art by up to 8.6 percent on three public benchmark datasets.", "title": "" }, { "docid": "c4c482cc453884d0016c442b580e3424", "text": "PURPOSE/OBJECTIVES\nTo better understand treatment-induced changes in sexuality from the patient perspective, to learn how women manage these changes in sexuality, and to identify what information they want from nurses about this symptom.\n\n\nRESEARCH APPROACH\nQualitative descriptive methods.\n\n\nSETTING\nAn outpatient gynecologic clinic in an urban area in the southeastern United States served as the recruitment site for patients.\n\n\nPARTICIPANTS\nEight women, ages 33-69, receiving first-line treatment for ovarian cancer participated in individual interviews. Five women, ages 40-75, participated in a focus group and their status ranged from newly diagnosed to terminally ill from ovarian cancer.\n\n\nMETHODOLOGIC APPROACH\nBoth individual interviews and a focus group were conducted. Content analysis was used to identify themes that described the experience of women as they became aware of changes in their sexuality. Triangulation of approach, the researchers, and theory allowed for a rich description of the symptom experience.\n\n\nFINDINGS\nRegardless of age, women reported that ovarian cancer treatment had a detrimental impact on their sexuality and that the changes made them feel \"no longer whole.\" Mechanical changes caused by surgery coupled with hormonal changes added to the intensity and dimension of the symptom experience. 
Physiologic, psychological, and social factors also impacted how this symptom was experienced.\n\n\nCONCLUSIONS\nRegardless of age or relationship status, sexuality is altered by the diagnosis and treatment of ovarian cancer.\n\n\nINTERPRETATION\nNurses have an obligation to educate women with ovarian cancer about anticipated changes in their sexuality that may come from treatment.", "title": "" }, { "docid": "7089c02cfebb857b809dc04589246ae0", "text": "Context. Mobile web apps represent a large share of the Internet today. However, they still lag behind native apps in terms of user experience. Progressive Web Apps (PWAs) are a new technology introduced by Google that aims at bridging this gap, with a set of APIs known as service workers at its core. Goal. In this paper, we present an empirical study that evaluates the impact of service workers on the energy efficiency of PWAs, when operating in different network conditions on two different generations of mobile devices. Method. We designed an empirical experiment with two main factors: the use of service workers and the type of network available (2G or WiFi). We performed the experiment by running a total of 7 PWAs on two devices (an LG G2 and a Nexus 6P) that we evaluated as blocking factor. Our response variable is the energy consumption of the devices. Results. Our results show that service workers do not have a significant impact over the energy consumption of the two devices, regardless of the network conditions. Also, no interaction was detected between the two factors. However, some patterns in the data show different behaviors among PWAs. Conclusions. This paper represents a first empirical investigation on PWAs. Our results show that the PWA and service workers technology is promising in terms of energy efficiency.", "title": "" }, { "docid": "959f2723ba18e71b2f4acd6108350dd3", "text": "The manufacturing, converting and ennobling processes of paper are truly large area and reel-to-reel processes. Here, we describe a project focusing on using the converting and ennobling processes of paper in order to introduce electronic functions onto the paper surface. As key active electronic materials we are using organic molecules and polymers. We develop sensor, communication and display devices on paper and the main application areas are packaging and paper display applications.", "title": "" }, { "docid": "c6d1ad31d52ed40d2fdba3c5840cbb63", "text": "Classification is one of the most active research and application areas of neural networks. The literature is vast and growing. This paper summarizes the some of the most important developments in neural network classification research. Specifically, the issues of posterior probability estimation, the link between neural and conventional classifiers, learning and generalization tradeoff in classification, the feature variable selection, as well as the effect of misclassification costs are examined. Our purpose is to provide a synthesis of the published research in this area and stimulate further research interests and efforts in the identified topics.", "title": "" }, { "docid": "917ab22adee174259bef5171fe6f14fb", "text": "The manner in which quadrupeds change their locomotive patterns—walking, trotting, and galloping—with changing speed is poorly understood. In this paper, we provide evidence for interlimb coordination during gait transitions using a quadruped robot for which coordination between the legs can be self-organized through a simple “central pattern generator” (CPG) model. 
We demonstrate spontaneous gait transitions between energy-efficient patterns by changing only the parameter related to speed. Interlimb coordination was achieved with the use of local load sensing only without any preprogrammed patterns. Our model exploits physical communication through the body, suggesting that knowledge of physical communication is required to understand the leg coordination mechanism in legged animals and to establish design principles for legged robots that can reproduce flexible and efficient locomotion.", "title": "" }, { "docid": "a427c3c0bcbfa10ce9ec1e7477697abe", "text": "We present a system for real-time general object recognition (gor) for indoor robot in complex scenes. A point cloud image containing the object to be recognized from a Kinect sensor, for general object at will, must be extracted a point cloud model of the object with the Cluster Extraction method, and then we can compute the global features of the object model, making up the model database after processing many frame images. Here the global feature we used is Clustered Viewpoint Feature Histogram (CVFH) feature from Point Cloud Library (PCL). For real-time gor we must preprocess all the point cloud images streamed from the Kinect into clusters based on a clustering threshold and the min-max cluster sizes related to the size of the model, for reducing the amount of the clusters and improving the processing speed, and also compute the CVFH features of the clusters. For every cluster of a frame image, we search the several nearer features from the model database with the KNN method in the feature space, and we just consider the nearest model. If the strings of the model name contain the strings of the object to be recognized, it can be considered that we have recognized the general object; otherwise, we compute another cluster again and perform the above steps. The experiments showed that we had achieved the real-time recognition, and ensured the speed and accuracy for the gor.", "title": "" }, { "docid": "2d7d20d578573dab8af8aff960010fea", "text": "Two flavors of the recommendation problem are the explicit and the implicit feedback settings. In the explicit feedback case, users rate items and the user-item preference relationship can be modelled on the basis of the ratings. In the harder but more common implicit feedback case, the system has to infer user preferences from indirect information: presence or absence of events, such as a user viewed an item. One approach for handling implicit feedback is to minimize a ranking objective function instead of the conventional prediction mean squared error. The naive minimization of a ranking objective function is typically expensive. This difficulty is usually overcome by a trade-off: sacrificing the accuracy to some extent for computational efficiency by sampling the objective function. In this paper, we present a computationally effective approach for the direct minimization of a ranking objective function, without sampling. We demonstrate by experiments on the Y!Music and Netflix data sets that the proposed method outperforms other implicit feedback recommenders in many cases in terms of the ErrorRate, ARP and Recall evaluation metrics.", "title": "" }, { "docid": "badfe178923af250baa80c2871aae5bc", "text": "We study the problem of learning a tensor from a set of linear measurements. 
A prominent methodology for this problem is based on a generalization of trace norm regularization, which has been used extensively for learning low rank matrices, to the tensor setting. In this paper, we highlight some limitations of this approach and propose an alternative convex relaxation on the Euclidean ball. We then describe a technique to solve the associated regularization problem, which builds upon the alternating direction method of multipliers. Experiments on one synthetic dataset and two real datasets indicate that the proposed method improves significantly over tensor trace norm regularization in terms of estimation error, while remaining computationally tractable.", "title": "" }, { "docid": "e56abb473e262fec3c0260202564be0a", "text": "This paper presents and analyzes an annotated corpus of definitions, created to train an algorithm for the automatic extraction of definitions and hypernyms from Web documents. As an additional resource, we also include a corpus of non-definitions with syntactic patterns similar to those of definition sentences, e.g.: “An android is a robot” vs. “Snowcap is unmistakable”. Domain and style independence is obtained thanks to the annotation of a sample of the Wikipedia corpus and to a novel pattern generalization algorithm based on wordclass lattices (WCL). A lattice is a directed acyclic graph (DAG), a subclass of nondeterministic finite state automata (NFA). The lattice structure has the purpose of preserving the salient differences among distinct sequences, while eliminating redundant information. The WCL algorithm will be integrated into an improved version of the GlossExtractor Web application (Velardi et al., 2008). This paper is mostly concerned with a description of the corpus, the annotation strategy, and a linguistic analysis of the data. A summary of the WCL algorithm is also provided for the sake of completeness.", "title": "" }, { "docid": "23919d976b6a25dc032fa23350195713", "text": "Interactive multimedia technologies enable online firms to employ a variety of formats to present and promote their products: They can use pictures, videos, and sounds to depict products, as well as give consumers the opportunity to try out products virtually. Despite the several previous endeavors that studied the effects of different product presentation formats, the functional mechanisms underlying these presentation methods have not been investigated in a comprehensive way. This paper investigates a model showing how these functional mechanisms (namely, vividness and interactivity) influence consumers’ intentions to return to a website and their intentions to purchase products. A study conducted to test this model has largely confirmed our expectations: (1) both vividness and interactivity of product presentations are the primary design features that influence the efficacy of the presentations; (2) consumers’ perceptions of the diagnosticity of websites, their perceptions of the compatibility between online shopping and physical shopping, and their shopping enjoyment derived from a particular online shopping experience jointly influence consumers’ attitudes toward shopping at a website; and (3) both consumers’ attitudes toward products and their attitudes toward shopping at a website contribute to their intentions to purchase the products displayed on the website.", "title": "" }, { "docid": "c3b652b561e38a51f1fa40483532e22d", "text": "Vertical integration refers to one of the options that firms make decisions in the supply of oligopoly market.
It is influenced by the competition game between upstream firms and downstream firms. Based on game theory and other previous studies, this paper built a dynamic game model of two-stage competition between the upstream oligopoly suppliers and the vertically integrated firms of downstream manufacturers. In the first stage, it analyzed how the prices of intermediate goods influence the degree of integration when an oligopoly firm engages in a Bertrand game with unlimited outputs. Moreover, it analyzed how the price divergence of intermediate goods influences the degree of integration when outputs are not restricted within a Bertrand duopoly game equilibrium. In the second stage, there is a Cournot duopoly game between downstream specialized firms and downstream integrated firms. Their marginal costs are affected by the degree of integration, and their yields are likewise affected under indifferent manufacturing conditions. Finally, the prices of intermediate goods are determined by the competition among upstream firms, and these prices in turn affect the degree of integration between upstream and downstream firms. The conclusions can serve as a reference for integration decision-making in market competition.", "title": "" } ]
scidocsrr
4e583fb9f1c2d96a77cfcb6e7bdf8715
Impedance Measurement System for Determination of Capacitive Electrode Coupling
[ { "docid": "8cfdd59ba7271d48ea0d41acc2ef795a", "text": "The Cole single-dispersion impedance model is based upon a constant phase element (CPE), a conductance parameter as a dependent parameter and a characteristic time constant as an independent parameter. Usually however, the time constant of tissue or cell suspensions is conductance dependent, and so the Cole model is incompatible with general relaxation theory and not a model of first choice. An alternative model with conductance as a free parameter influencing the characteristic time constant of the biomaterial has been analyzed. With this free-conductance model it is possible to separately follow CPE and conductive processes, and the nominal time constant no longer corresponds to the apex of the circular arc in the complex plane.", "title": "" } ]
[ { "docid": "cf5d0f7079bd7bc1a197573e28b5569a", "text": "More and more people rely on mobile devices to access the Internet, which also increases the amount of private information that can be gathered from people's devices. Although today's smartphone operating systems are trying to provide a secure environment, they fail to provide users with adequate control over and visibility into how third-party applications use their private data. Whereas there are a few tools that alert users when applications leak private information, these tools are often hard to use by the average user or have other problems. To address these problems, we present PrivacyGuard, an open-source VPN-based platform for intercepting the network traffic of applications. PrivacyGuard requires neither root permissions nor any knowledge about VPN technology from its users. PrivacyGuard does not significantly increase the trusted computing base since PrivacyGuard runs in its entirety on the local device and traffic is not routed through a remote VPN server. We implement PrivacyGuard on the Android platform by taking advantage of the VPNService class provided by the Android SDK.\n PrivacyGuard is configurable, extensible, and useful for many different purposes. We investigate its use for detecting the leakage of multiple types of sensitive data, such as a phone's IMEI number or location data. PrivacyGuard also supports modifying the leaked information and replacing it with crafted data for privacy protection. According to our experiments, PrivacyGuard can detect more leakage incidents by applications and advertisement libraries than TaintDroid. We also demonstrate that PrivacyGuard has reasonable overhead on network performance and almost no overhead on battery consumption.", "title": "" }, { "docid": "93b87e8dde0de0c1b198f6a073858d80", "text": "The current project is an initial attempt at validating the Virtual Reality Cognitive Performance Assessment Test (VRCPAT), a virtual environment-based measure of learning and memory. To examine convergent and discriminant validity, a multitrait-multimethod matrix was used in which we hypothesized that the VRCPAT's total learning and memory scores would correlate with other neuropsychological measures involving learning and memory but not with measures involving potential confounds (i.e., executive functions; attention; processing speed; and verbal fluency). Using a sequential hierarchical strategy, each stage of test development did not proceed until specified criteria were met. The 15-minute VRCPAT battery and a 1.5-hour in-person neuropsychological assessment were conducted with a sample of 30 healthy adults, between the ages of 21 and 36, that included equivalent distributions of men and women from ethnically diverse populations. Results supported both convergent and discriminant validity. That is, findings suggest that the VRCPAT measures a capacity that is (a) consistent with that assessed by traditional paper-and-pencil measures involving learning and memory and (b) inconsistent with that assessed by traditional paper-and-pencil measures assessing neurocognitive domains traditionally assumed to be other than learning and memory. 
We conclude that the VRCPAT is a valid test that provides a unique opportunity to reliably and efficiently study memory function within an ecologically valid environment.", "title": "" }, { "docid": "ec6b1d26b06adc99092659b4a511da44", "text": "Social identity threat is the notion that one of a person's many social identities may be at risk of being devalued in a particular context (C. M. Steele, S. J. Spencer, & J. Aronson, 2002). The authors suggest that in domains in which women are already negatively stereotyped, interacting with a sexist man can trigger social identity threat, undermining women's performance. In Study 1, male engineering students who scored highly on a subtle measure of sexism behaved in a dominant and sexually interested way toward an ostensible female classmate. In Studies 2 and 3, female engineering students who interacted with such sexist men, or with confederates trained to behave in the same way, performed worse on an engineering test than did women who interacted with nonsexist men. Study 4 replicated this finding and showed that women's underperformance did not extend to an English test, an area in which women are not negatively stereotyped. Study 5 showed that interacting with sexist men leads women to suppress concerns about gender stereotypes, an established mechanism of stereotype threat. Discussion addresses implications for social identity threat and for women's performance in school and at work.", "title": "" }, { "docid": "ad11946cfb127e19b0ee80f5d77dbe93", "text": "Air quality has great impact on individual and community health. In this demonstration, we present Citisense: a mobile air quality system that enables users to track their personal air quality exposure for discovery, self-reflection, and sharing within their local communities and online social networks.", "title": "" }, { "docid": "10ae6cdb445e4faf1e6bed5cad6eb3ba", "text": "In this paper we revisit the fast stylization method introduced in Ulyanov et al. (2016). We show how a small change in the stylization architecture results in a significant qualitative improvement in the generated images. The change is limited to swapping batch normalization with instance normalization, and to apply the latter both at training and testing times. The resulting method can be used to train high-performance architectures for real-time image generation. The code will be made available at https://github.com/DmitryUlyanov/texture_nets.", "title": "" }, { "docid": "0fd6b9eb35de8d91d28544920e525ee6", "text": "A great many control schemes for a robot manipulator interacting with the environment have been developed in the literature in the past two decades. This paper is aimed at presenting a survey of robot interaction control schemes for a manipulator, the end effector of which comes in contact with a compliant surface. A salient feature of the work is the implementation of the schemes on an industrial robot with open control architecture equipped with a wrist force sensor. Two classes of control strategies are considered, namely, those based on static model-based compensation and those based on dynamic model-based compensation. The former provide a good steady-state behavior, while the latter enhance the behavior during the transient.
The performance of the various schemes is compared in the light of disturbance rejection, and a thorough analysis is developed by means of a number of case studies.", "title": "" }, { "docid": "2e5981a41d13ee2d588ee0e9fe04e1ec", "text": "Malicious software (malware) has been extensively employed for illegal purposes and thousands of new samples are discovered every day. The ability to classify samples with similar characteristics into families makes possible to create mitigation strategies that work for a whole class of programs. In this paper, we present a malware family classification approach using VGG16 deep neural network’s bottleneck features. Malware samples are represented as byteplot grayscale images and the convolutional layers of a VGG16 deep neural network pre-trained on the ImageNet dataset is used for bottleneck features extraction. These features are used to train a SVM classifier for the malware family classification task. The experimental results on a dataset comprising 10,136 samples from 20 different families showed that our approach can effectively be used to classify malware families with an accuracy of 92.97%, outperforming similar approaches proposed in the literature which require feature engineering and considerable domain expertise.", "title": "" }, { "docid": "945c5c7cd9eb2046c1b164e64318e52f", "text": "This thesis explores the design and application of artificial immune systems (AISs), problem-solving systems inspired by the human and other immune systems. AISs to date have largely been modelled on the biological adaptive immune system and have taken little inspiration from the innate immune system. The first part of this thesis examines the biological innate immune system, which controls the adaptive immune system. The importance of the innate immune system suggests that AISs should also incorporate models of the innate immune system as well as the adaptive immune system. This thesis presents and discusses a number of design principles for AISs which are modelled on both innate and adaptive immunity. These novel design principles provided a structured framework for developing AISs which incorporate innate and adaptive immune systems in general. These design principles are used to build a software system which allows such AISs to be implemented and explored. AISs, as well as being inspired by the biological immune system, are also built to solve problems. In this thesis, using the software system and design principles we have developed, we implement several novel AISs and apply them to the problem of detecting attacks on computer systems. These AISs monitor programs running on a computer and detect whether the program is behaving abnormally or being attacked. The development of these AISs shows in more detail how AISs built on the design principles can be instantiated. In particular, we show how the use of AISs which incorporate both innate and adaptive immune system mechanisms can be used to reduce the number of false alerts and improve the performance of current approaches.", "title": "" }, { "docid": "08f26c702f7d0bb5e21b51d7681869a2", "text": "Millions of posts are being generated in real-time by users in social networking services, such as Twitter. However, a considerable number of those posts are mundane posts that are of interest to the authors and possibly their friends only. This paper investigates the problem of automatically discovering valuable posts that may be of potential interest to a wider audience. 
Specifically, we model the structure of Twitter as a graph consisting of users and posts as nodes and retweet relations between the nodes as edges. We propose a variant of the HITS algorithm for producing a static ranking of posts. Experimental results on real world data demonstrate that our method can achieve better performance than several baseline methods.", "title": "" }, { "docid": "878bdefc419be3da8d9e18111d26a74f", "text": "PURPOSE\nTo estimate prevalence and chronicity of insomnia and the impact of chronic insomnia on health and functioning of adolescents.\n\n\nMETHODS\nData were collected from 4175 youths 11-17 at baseline and 3134 a year later sampled from managed care groups in a large metropolitan area. Insomnia was assessed by youth-reported DSM-IV symptom criteria. Outcomes are three measures of somatic health, three measures of mental health, two measures of substance use, three measures of interpersonal problems, and three of daily activities.\n\n\nRESULTS\nOver one-fourth reported one or more symptoms of insomnia at baseline and about 5% met diagnostic criteria for insomnia. Almost 46% of those who reported one or more symptoms of insomnia in Wave 1 continued to be cases at Wave 2 and 24% met DSM-IV symptom criteria for chronic insomnia (cases in Wave 1 were also cases in Wave 2). Multivariate analyses found chronic insomnia increased subsequent risk for somatic health problems, interpersonal problems, psychological problems, and daily activities. Significant odds (p < .05) ranged from 1.6 to 5.6 for poor outcomes. These results are the first reported on chronic insomnia among youths, and corroborate, using prospective data, previous findings on correlates of disturbed sleep based on cross-sectional studies.\n\n\nCONCLUSIONS\nInsomnia is both common and chronic among adolescents. The data indicate that the burden of insomnia is comparable to that of other psychiatric disorders such as mood, anxiety, disruptive, and substance use disorders. Chronic insomnia severely impacts future health and functioning of youths. Those with chronic insomnia are more likely to seek medical care. These data suggest primary care settings might provide a venue for screening and early intervention for adolescent insomnia.", "title": "" }, { "docid": "cccc206a025f6ae2a47a4068b6ded4c6", "text": "Most existing methods for audio sentiment analysis use automatic speech recognition to convert speech to text, and feed the textual input to text-based sentiment classifiers. This study shows that such methods may not be optimal, and proposes an alternate architecture where a single keyword spotting system (KWS) is developed for sentiment detection. In the new architecture, the text-based sentiment classifier is utilized to automatically determine the most powerful sentiment-bearing terms, which is then used as the term list for KWS. In order to obtain a compact yet powerful term list, a new method is proposed to reduce text-based sentiment classifier model complexity while maintaining good classification accuracy. Finally, the term list information is utilized to build a more focused language model for the speech recognition system. The result is a single integrated solution which is focused on vocabulary that directly impacts classification. The proposed solution is evaluated on videos from YouTube.com and UT-Opinion corpus (which contains naturalistic opinionated audio collected in real-world conditions). 
Our experimental results show that the KWS based system significantly outperforms the traditional architecture in difficult practical tasks.", "title": "" }, { "docid": "51c8570d20a43ed923cfa884b55df8c9", "text": "Electricity is a non-storable commodity for consumers, while hydropower producers may store future electricity as water in their reservoirs. Consequently, there is an asymmetry between producers’ and consumers’ possibilities of spot-futures arbitrage. Furthermore, marginal warehousing costs in hydro based electricity production are zero as long as water reservoirs are not full, jumping to the prevailing spot price in the case that dams are filled up and water is running over the edge without being utilised. In this explorative study, we analyse price relationships at the world’s largest multinational market place for electricity (Nord Pool). We find tha the futures price at Nord Pool periodically has been outside its (theoretical) arbitrage limits. Furthermore, the futures price and the basis have been biased and poor predictors of subsequent spot price levels and changes, respectively. Forecast errors have been systematic, and the futures price does not seem to incorporate available information. The findings indicate non-rational pricing behaviour. Alternatively, the results may represent circumstantial evidence of market power on the producer side.", "title": "" }, { "docid": "56f18b39a740dd65fc2907cdef90ac99", "text": "This paper describes a dynamic artificial neural network based mobile robot motion and path planning system. The method is able to navigate a robot car on flat surface among static and moving obstacles, from any starting point to any endpoint. The motion controlling ANN is trained online with an extended backpropagation through time algorithm, which uses potential fields for obstacle avoidance. The paths of the moving obstacles are predicted with other ANNs for better obstacle avoidance. The method is presented through the realization of the navigation system of a mobile robot.", "title": "" }, { "docid": "205a5a9a61b6ac992f01c8c2fc09678a", "text": "We present the OWL API, a high level Application Programming Interface (API) for working with OWL ontologies. The OWL API is closely aligned with the OWL 2 structural specification. It supports parsing and rendering in the syntaxes defined in the W3C specification (Functional Syntax, RDF/XML, OWL/XML and the Manchester OWL Syntax); manipulation of ontological structures; and the use of reasoning engines. The reference implementation of the OWL API, written in Java, includes validators for the various OWL 2 profiles OWL 2 QL, OWL 2 EL and OWL 2 RL. The OWL API has widespread usage in a variety of tools and applications.", "title": "" }, { "docid": "6df423e9d21b6505b8205792f6cd5f85", "text": "The effective use of technologies supporting decision making is essential to companies’ survival. Recent studies analyzed social media technologies (SMT) in the context of smalland mediumsized enterprises (SMEs), contributing to the discussion on SMT benefits from the marketing perspective. This article focuses on the effects of SMT use on innovation. 
Our findings provide empirical evidence on the positive effects of SMT use for acquiring external information and for sharing knowledge and innovation performance.", "title": "" }, { "docid": "dcd919590e0b6b52ea3a6be7378d5d25", "text": "This work, concerning paraphrase identification task, on one hand contributes to expanding deep learning embeddings to include continuous and discontinuous linguistic phrases. On the other hand, it comes up with a new scheme TF-KLD-KNN to learn the discriminative weights of words and phrases specific to paraphrase task, so that a weighted sum of embeddings can represent sentences more effectively. Based on these two innovations we get competitive state-of-the-art performance on paraphrase identification.", "title": "" }, { "docid": "a3fe8cf8b2689269fe8a1050cf7789d2", "text": "A boosting algorithm, AdaBoost.RT, is proposed for regression problems. The idea is to filter out examples with a relative estimation error that is higher than the pre-set threshold value, and then follow the AdaBoost procedure. Thus it requires to select the sub-optimal value of relative error threshold to demarcate predictions from the predictor as correct or incorrect. Some experimental results using the M5 model tree as a weak learning machine for benchmark data sets and for hydrological modeling are reported, and compared to other boosting methods, bagging and artificial neural networks, and to a single M5 model tree. AdaBoost.Rt is proved to perform better on most of the considered data sets.", "title": "" }, { "docid": "170e7a72a160951e880f18295d100430", "text": "In this paper, we introduce an embedding model, named CapsE, exploring a capsule network to model relationship triples (subject, relation, object). Our CapsE represents each triple as a 3-column matrix where each column vector represents the embedding of an element in the triple. This 3-column matrix is then fed to a convolution layer where multiple filters are operated to generate different feature maps. These feature maps are used to construct capsules in the first capsule layer. Capsule layers are connected via dynamic routing mechanism. The last capsule layer consists of only one capsule to produce a vector output. The length of this vector output is used to measure the plausibility of the triple. Our proposed CapsE obtains state-of-the-art link prediction results for knowledge graph completion on two benchmark datasets: WN18RR and FB15k-237, and outperforms strong search personalization baselines on SEARCH17 dataset.", "title": "" }, { "docid": "d0623e90f8bce6818c6cb2f150757659", "text": "In this paper, an efficient offline signature verification method based on an interval symbolic representation and a fuzzy similarity measure is proposed. In the feature extraction step, a set of local binary pattern-based features is computed from both the signature image and its under-sampled bitmap. Interval-valued symbolic data is then created for each feature in every signature class. As a result, a signature model composed of a set of interval values (corresponding to the number of features) is obtained for each individual’s handwritten signature class. A novel fuzzy similarity measure is further proposed to compute the similarity between a test sample signature and the corresponding interval-valued symbolic model for the verification of the test sample. 
To evaluate the proposed verification approach, a benchmark offline English signature data set (GPDS-300) and a large data set (BHSig260) composed of Bangla and Hindi offline signatures were used. A comparison of our results with some recent signature verification methods available in the literature was provided in terms of average error rate and we noted that the proposed method always outperforms when the number of training samples is eight or more.", "title": "" }, { "docid": "b8b3761b658e37783afb1157ef0844b5", "text": "Biometric recognition refers to the automated recognition of individuals based on their biological and behavioral characteristics such as fingerprint, face, iris, and voice. The first scientific paper on automated fingerprint matching was published by Mitchell Trauring in the journal Nature in 1963. The first objective of this paper is to document the significant progress that has been achieved in the field of biometric recognition in the past 50 years since Trauring’s landmark paper. This progress has enabled current state-of-the-art biometric systems to accurately recognize individuals based on biometric trait(s) acquired under controlled environmental conditions from cooperative users. Despite this progress, a number of challenging issues continue to inhibit the full potential of biometrics to automatically recognize humans. The second objective of this paper is to enlist such challenges, analyze the solutions proposed to overcome them, and highlight the research opportunities in this field. One of the foremost challenges is the design of robust algorithms for representing and matching biometric samples obtained from uncooperative subjects under unconstrained environmental conditions (e.g., recognizing faces in a crowd). In addition, fundamental questions such as the distinctiveness and persistence of biometric traits need greater attention. Problems related to the security of biometric data and robustness of the biometric system against spoofing and obfuscation attacks, also remain unsolved. Finally, larger system-level issues like usability, user privacy concerns, integration with the end application, and return on investment have not been adequately addressed. Unlocking the full potential of biometrics through inter-disciplinary research in the above areas will not only lead to widespread adoption of this promising technology, but will also result in wider user acceptance and societal impact. c © 2016 Elsevier Ltd. All rights reserved.", "title": "" } ]
scidocsrr
97a6ba2b4cfe9b96377e57559cc35430
Orchestrating Caching, Transcoding and Request Routing for Adaptive Video Streaming Over ICN
[ { "docid": "d0253bb3efe714e6a34e8dd5fc7dcf81", "text": "ICN has received a lot of attention in recent years, and is a promising approach for the Future Internet design. As multimedia is the dominating traffic in today's and (most likely) the Future Internet, it is important to consider this type of data transmission in the context of ICN. In particular, the adaptive streaming of multimedia content is a promising approach for usage within ICN, as the client has full control over the streaming session and has the possibility to adapt the multimedia stream to its context (e.g. network conditions, device capabilities), which is compatible with the paradigms adopted by ICN. In this article we investigate the implementation of adaptive multimedia streaming within networks adopting the ICN approach. In particular, we present our approach based on the recently ratified ISO/IEC MPEG standard Dynamic Adaptive Streaming over HTTP and the ICN representative Content-Centric Networking, including baseline evaluations and open research challenges.", "title": "" } ]
[ { "docid": "8b3042021e48c86873e00d646f65b052", "text": "We derive a numerical method for Darcy flow, hence also for Poisson’s equation in first order form, based on discrete exterior calculus (DEC). Exterior calculus is a generalization of vector calculus to smooth manifolds and DEC is its discretization on simplicial complexes such as triangle and tetrahedral meshes. We start by rewriting the governing equations of Darcy flow using the language of exterior calculus. This yields a formulation in terms of flux differential form and pressure. The numerical method is then derived by using the framework provided by DEC for discretizing differential forms and operators that act on forms. We also develop a discretization for spatially dependent Hodge star that varies with the permeability of the medium. This also allows us to address discontinuous permeability. The matrix representation for our discrete non-homogeneous Hodge star is diagonal, with positive diagonal entries. The resulting linear system of equations for flux and pressure are saddle type, with a diagonal matrix as the top left block. Our method requires the use of meshes in which each simplex contains its circumcenter. The performance of the proposed numerical method is illustrated on many standard test problems. These include patch tests in two and three dimensions, comparison with analytically known solution in two dimensions, layered medium with alternating permeability values, and a test with a change in permeability along the flow direction. A short introduction to the relevant parts of smooth and discrete exterior calculus is included in this paper. We also include a discussion of the boundary condition in terms of exterior calculus.", "title": "" }, { "docid": "80759a5c2e60b444ed96c9efd515cbdf", "text": "The Web of Things is an active research field which aims at promoting the easy access and handling of smart things' digital representations through the adoption of Web standards and technologies. While huge research and development efforts have been spent on lower level networks and software technologies, it has been recognized that little experience exists instead in modeling and building applications for the Web of Things. Although several works have proposed Representational State Transfer (REST) inspired approaches for the Web of Things, a main limitation is that poor support is provided to web developers for speeding up the development of Web of Things applications while taking full advantage of REST benefits. In this paper, we propose a framework which supports developers in modeling smart things as web resources, exposing them through RESTful Application Programming Interfaces (APIs) and developing applications on top of them. The framework consists of a Web Resource information model, a middleware, and tools for developing and publishing smart things' digital representations on the Web. We discuss the framework compliance with REST guidelines and its major implementation choices. Finally, we report on our test activities carried out within the SmartSantander European Project to evaluate the use and proficiency of our framework in a smart city scenario.", "title": "" }, { "docid": "e6d359934523ed73b2f9f2ac66fd6096", "text": "We investigate a novel and important application domain for deep RL: network routing. 
The question of whether/when traditional network protocol design, which relies on the application of algorithmic insights by human experts, can be replaced by a data-driven approach has received much attention recently. We explore this question in the context of the, arguably, most fundamental networking task: routing. Can ideas and techniques from machine learning be leveraged to automatically generate “good” routing configurations? We observe that the routing domain poses significant challenges for data-driven network protocol design and report on preliminary results regarding the power of data-driven routing. Our results suggest that applying deep reinforcement learning to this context yields high performance and is thus a promising direction for further research. We outline a research agenda for data-driven routing.", "title": "" }, { "docid": "ea5e08627706532504b9beb6f4dc6650", "text": "This paper highlights the role that reinforcement learning can play in the optimization of treatment policies for chronic illnesses. Before applying any off-the-shelf reinforcement learning methods in this setting, we must first tackle a number of challenges. We outline some of these challenges and present methods for overcoming them. First, we describe a multiple imputation approach to overcome the problem of missing data. Second, we discuss the use of function approximation in the context of a highly variable observation set. Finally, we discuss approaches to summarizing the evidence in the data for recommending a particular action and quantifying the uncertainty around the Q-function of the recommended policy. We present the results of applying these methods to real clinical trial data of patients with schizophrenia.", "title": "" }, { "docid": "d14812771115b4736c6d46aecadb2d8a", "text": "This article reports on a helical spring-like piezoresistive graphene strain sensor formed within a microfluidic channel. The helical spring has a tubular hollow structure and is made of a thin graphene layer coated on the inner wall of the channel using an in situ microfluidic casting method. The helical shape allows the sensor to flexibly respond to both tensile and compressive strains in a wide dynamic detection range from 24 compressive strain to 20 tensile strain. Fabrication of the sensor involves embedding a helical thin metal wire with a plastic wrap into a precursor solution of an elastomeric polymer, forming a helical microfluidic channel by removing the wire from cured elastomer, followed by microfluidic casting of a graphene thin layer directly inside the helical channel. The wide dynamic range, in conjunction with mechanical flexibility and stretchability of the sensor, will enable practical wearable strain sensor applications where large strains are often involved.", "title": "" }, { "docid": "59d7685a127b1fd98f2506c993d5ec6e", "text": "Software defect prediction helps to optimize testing resources allocation by identifying defect-prone modules prior to testing. Most existing models build their prediction capability based on a set of historical data, presumably from the same or similar project settings as those under prediction. However, such historical data is not always available in practice. One potential way of predicting defects in projects without historical data is to learn predictors from data of other projects. This paper investigates defect predictions in the cross-project context focusing on the selection of training data. 
We conduct three large-scale experiments on 34 data sets obtained from 10 open source projects. Major conclusions from our experiments include: (1) in the best cases, training data from other projects can provide better prediction results than training data from the same project; (2) the prediction results obtained using training data from other projects meet our criteria for acceptance on the average level, defects in 18 out of 34 cases were predicted at a Recall greater than 70% and a Precision greater than 50%; (3) results of cross-project defect predictions are related with the distributional characteristics of data sets which are valuable for training data selection. We further propose an approach to automatically select suitable training data for projects without historical data. Prediction results provided by the training data selected by using our approach are comparable with those provided by training data from the same project.", "title": "" }, { "docid": "7dfb6a3a619f7062452aa97aaa134c45", "text": "Most companies favour the creation and nurturing of long-term relationships with customers because retaining customers is more profitable than acquiring new ones. Churn prediction is a predictive analytics technique to identify churning customers ahead of their departure and enable customer relationship managers to take action to keep them. This work evaluates the development of an expert system for churn prediction and prevention using a Hidden Markov model (HMM). A HMM is implemented on unique data from a mobile application and its predictive performance is compared to other algorithms that are commonly used for churn prediction: Logistic Regression, Neural Network and Support Vector Machine. Predictive performance of the HMM is not outperformed by the other algorithms. HMM has substantial advantages for use in expert systems though due to low storage and computational requirements and output of highly relevant customer motivational states. Generic session data of the mobile app is used to train and test the models which makes the system very easy to deploy and the findings applicable to the whole ecosystem of mobile apps distributed in Apple's App and Google's Play Store.", "title": "" }, { "docid": "e62fd95ccd6c10960acc7358ad0a5071", "text": "The view information of a chest X-ray (CXR), such as frontal or lateral, is valuable in computer aided diagnosis (CAD) of CXRs. For example, it helps for the selection of atlas models for automatic lung segmentation. However, very often, the image header does not provide such information. In this paper, we present a new method for classifying a CXR into two categories: frontal view vs. lateral view. The method consists of three major components: image pre-processing, feature extraction, and classification. The features we selected are image profile, body size ratio, pyramid of histograms of orientation gradients, and our newly developed contour-based shape descriptor. The method was tested on a large (more than 8,200 images) CXR dataset hosted by the National Library of Medicine. The very high classification accuracy (over 99% for 10-fold cross validation) demonstrates the effectiveness of the proposed method.", "title": "" }, { "docid": "aec82326c1fea34da9935731e4c476f4", "text": "This paper presents a trajectory tracking control design which provides the essential spatial-temporal feedback control capability for fixed-wing unmanned aerial vehicles (UAVs) to execute a time critical mission reliably. 
In this design, a kinematic trajectory tracking control law and a control gain selection method are developed to allow the control law to be implemented on a fixed-wing UAV based on the platform's dynamic capability. The tracking control design assumes the command references of the heading and airspeed control systems are the accessible control inputs, and it does not impose restrictive model assumptions on the UAV's control systems. The control design is validated using a high-fidelity nonlinear six degrees of freedom (6DOF) model and the reported results suggest that the proposed tracking control design is able to track time-parameterized trajectories stably with robust control performance.", "title": "" }, { "docid": "36fef38de53386e071ee2a1996aa733f", "text": "Knowledge embedding, which projects triples in a given knowledge base to d-dimensional vectors, has attracted considerable research efforts recently. Most existing approaches treat the given knowledge base as a set of triplets, each of whose representation is then learned separately. However, as a fact, triples are connected and depend on each other. In this paper, we propose a graph aware knowledge embedding method (GAKE), which formulates knowledge base as a directed graph, and learns representations for any vertices or edges by leveraging the graph’s structural information. We introduce three types of graph context for embedding: neighbor context, path context, and edge context, each reflects properties of knowledge from different perspectives. We also design an attention mechanism to learn representative power of different vertices or edges. To validate our method, we conduct several experiments on two tasks. Experimental results suggest that our method outperforms several state-of-art knowledge embedding models.", "title": "" }, { "docid": "bb5e00ac09e12f3cdb097c8d6cfde9a9", "text": "3D biomaterial printing has emerged as a potentially revolutionary technology, promising to transform both research and medical therapeutics. Although there has been recent progress in the field, on-demand fabrication of functional and transplantable tissues and organs is still a distant reality. To advance to this point, there are two major technical challenges that must be overcome. The first is expanding upon the limited variety of available 3D printable biomaterials (biomaterial inks), which currently do not adequately represent the physical, chemical, and biological complexity and diversity of tissues and organs within the human body. Newly developed biomaterial inks and the resulting 3D printed constructs must meet numerous interdependent requirements, including those that lead to optimal printing, structural, and biological outcomes. The second challenge is developing and implementing comprehensive biomaterial ink and printed structure characterization combined with in vitro and in vivo tissueand organ-specific evaluation. This perspective outlines considerations for addressing these technical hurdles that, once overcome, will facilitate rapid advancement of 3D biomaterial printing as an indispensable tool for both investigating complex tissue and organ morphogenesis and for developing functional devices for a variety of diagnostic and regenerative medicine applications. PAPER 5 Contributed equally to this work. REcEivEd", "title": "" }, { "docid": "b9bb07dd039c0542a7309f2291732f82", "text": "Recent progress in acquiring shape from range data permits the acquisition of seamless million-polygon meshes from physical models. 
In this paper, we present an algorithm and system for converting dense irregular polygon meshes of arbitrary topology into tensor product B-spline surface patches with accompanying displacement maps. This choice of representation yields a coarse but efficient model suitable for animation and a fine but more expensive model suitable for rendering. The first step in our process consists of interactively painting patch boundaries over a rendering of the mesh. In many applications, interactive placement of patch boundaries is considered part of the creative process and is not amenable to automation. The next step is gridded resampling of each bounded section of the mesh. Our resampling algorithm lays a grid of springs across the polygon mesh, then iterates between relaxing this grid and subdividing it. This grid provides a parameterization for the mesh section, which is initially unparameterized. Finally, we fit a tensor product B-spline surface to the grid. We also output a displacement map for each mesh section, which represents the error between our fitted surface and the spring grid. These displacement maps are images; hence this representation facilitates the use of image processing operators for manipulating the geometric detail of an object. They are also compatible with modern photo-realistic rendering systems. Our resampling and fitting steps are fast enough to surface a million polygon mesh in under 10 minutes, which is important for an interactive system. CR Categories: I.3.5 [Computer Graphics]: Computational Geometry and Object Modeling—curve, surface and object representations; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism—texture; J.6 [Computer-Aided Engineering]: Computer-Aided Design (CAD); G.1.2 [Approximation]: Spline Approximation Additional", "title": "" }, { "docid": "5625166c3e84059dd7b41d3c0e37e080", "text": "External border surveillance is critical to the security of every state and the challenges it poses are changing and likely to intensify. Wireless sensor networks (WSN) are a low cost technology that provide an intelligence-led solution to effective continuous monitoring of large, busy, and complex landscapes. The linear network topology resulting from the structure of the monitored area raises challenges that have not been adequately addressed in the literature to date. In this paper, we identify an appropriate metric to measure the quality of WSN border crossing detection. Furthermore, we propose a method to calculate the required number of sensor nodes to deploy in order to achieve a specified level of coverage according to the chosen metric in a given belt region, while maintaining radio connectivity within the network. Then, we contribute a novel cross layer routing protocol, called levels division graph (LDG), designed specifically to address the communication needs and link reliability for topologically linear WSN applications. The performance of the proposed protocol is extensively evaluated in simulations using realistic conditions and parameters. LDG simulation results show significant performance gains when compared with its best rival in the literature, dynamic source routing (DSR).
Compared with DSR, LDG improves the average end-to-end delays by up to 95%, packet delivery ratio by up to 20%, and throughput by up to 60%, while maintaining comparable performance in terms of normalized routing load and energy consumption.", "title": "" }, { "docid": "535ebbee465f6a009a2a85c47115a51b", "text": "Online social networks (OSNs) are increasingly threatened by social bots which are software-controlled OSN accounts that mimic human users with malicious intentions. A social botnet refers to a group of social bots under the control of a single botmaster, which collaborate to conduct malicious behavior while mimicking the interactions among normal OSN users to reduce their individual risk of being detected. We demonstrate the effectiveness and advantages of exploiting a social botnet for spam distribution and digital-influence manipulation through real experiments on Twitter and also trace-driven simulations. We also propose the corresponding countermeasures and evaluate their effectiveness. Our results can help understand the potentially detrimental effects of social botnets and help OSNs improve their bot(net) detection systems.", "title": "" }, { "docid": "ff8f72d7afb43513c7a7a6b041a13040", "text": "The paper first discusses the reasons why simplified solutions for the mechanical structure of fingers in robotic hands should be considered a worthy design goal. After a brief discussion about the mechanical solutions proposed so far for robotic fingers, a different design approach is proposed. It considers finger structures made of rigid links connected by flexural hinges, with joint actuation obtained by means of flexures that can be guided inside each finger according to different patterns. A simplified model of one of these structures is then presented, together with preliminary results of simulation, in order to evaluate the feasibility of the concept. Examples of technological implementation are finally presented and the perspective and problems of application are briefly discussed.", "title": "" }, { "docid": "8aca909e0f83a8ac917a453fdcc73b6f", "text": "Nearly half a century ago, military organizations introduced “Tempest” emission-security test standards to control information leakage from unintentional electromagnetic emanations of digital electronics. The nature of these emissions has changed with evolving technology; electromechanic devices have vanished and signal frequencies increased several orders of magnitude. Recently published eavesdropping attacks on modern flat-panel displays and cryptographic coprocessors demonstrate that the risk remains acute for applications with high protection requirements. The ultra-wideband signal processing technology needed for practical attacks finds already its way into consumer electronics. Current civilian RFI limits are entirely unsuited for emission security purposes. Only an openly available set of test standards based on published criteria will help civilian vendors and users to estimate and manage emission-security risks appropriately. This paper outlines a proposal and rationale for civilian electromagnetic emission-security limits. 
While the presented discussion aims specifically at far-field video eavesdropping in the VHF and UHF bands, the most easy to demonstrate risk, much of the presented approach for setting test limits could be adapted equally to address other RF emanation risks.", "title": "" }, { "docid": "c47881213aa27d29d11579840f7ef1ae", "text": "While patients with poor functional health literacy (FHL) have difficulties reading and comprehending written medical instructions, it is not known whether these patients also experience problems with other modes of communication, such as face-to-face encounters with primary care physicians. We enrolled 408 English- and Spanish-speaking diabetes patients to examine whether patients with inadequate FHL report worse communication than patients with adequate FHL. We assessed patients' experiences of communication using sub-scales from the Interpersonal Processes of Care in Diverse Populations instrument. In multivariate models, patients with inadequate FHL, compared to patients with adequate FHL, were more likely to report worse communication in the domains of general clarity (adjusted odds ratio [AOR] 6.29, P<0.01), explanation of condition (AOR 4.85, P=0.03), and explanation of processes of care (AOR 2.70, p=0.03). Poor FHL appears to be a marker for oral communication problems, particularly in the technical, explanatory domains of clinician-patient dialogue. Research is needed to identify strategies to improve communication for this group of patients.", "title": "" }, { "docid": "f78fcf875104f8bab2fa465c414331c6", "text": "In this paper, we present a systematic framework for recognizing realistic actions from videos “in the wild”. Such unconstrained videos are abundant in personal collections as well as on the Web. Recognizing action from such videos has not been addressed extensively, primarily due to the tremendous variations that result from camera motion, background clutter, changes in object appearance, and scale, etc. The main challenge is how to extract reliable and informative features from the unconstrained videos. We extract both motion and static features from the videos. Since the raw features of both types are dense yet noisy, we propose strategies to prune these features. We use motion statistics to acquire stable motion features and clean static features. Furthermore, PageRank is used to mine the most informative static features. In order to further construct compact yet discriminative visual vocabularies, a divisive information-theoretic algorithm is employed to group semantically related features. Finally, AdaBoost is chosen to integrate all the heterogeneous yet complementary features for recognition. We have tested the framework on the KTH dataset and our own dataset consisting of 11 categories of actions collected from YouTube and personal videos, and have obtained impressive results for action recognition and action localization.", "title": "" }, { "docid": "a6ddbe0f834c38079282db91599e076d", "text": "BACKGROUND\nThe efficacy of closure of a patent foramen ovale (PFO) in the prevention of recurrent stroke after cryptogenic stroke is uncertain. 
We investigated the effect of PFO closure combined with antiplatelet therapy versus antiplatelet therapy alone on the risks of recurrent stroke and new brain infarctions.\n\n\nMETHODS\nIn this multinational trial involving patients with a PFO who had had a cryptogenic stroke, we randomly assigned patients, in a 2:1 ratio, to undergo PFO closure plus antiplatelet therapy (PFO closure group) or to receive antiplatelet therapy alone (antiplatelet-only group). Imaging of the brain was performed at the baseline screening and at 24 months. The coprimary end points were freedom from clinical evidence of ischemic stroke (reported here as the percentage of patients who had a recurrence of stroke) through at least 24 months after randomization and the 24-month incidence of new brain infarction, which was a composite of clinical ischemic stroke or silent brain infarction detected on imaging.\n\n\nRESULTS\nWe enrolled 664 patients (mean age, 45.2 years), of whom 81% had moderate or large interatrial shunts. During a median follow-up of 3.2 years, clinical ischemic stroke occurred in 6 of 441 patients (1.4%) in the PFO closure group and in 12 of 223 patients (5.4%) in the antiplatelet-only group (hazard ratio, 0.23; 95% confidence interval [CI], 0.09 to 0.62; P=0.002). The incidence of new brain infarctions was significantly lower in the PFO closure group than in the antiplatelet-only group (22 patients [5.7%] vs. 20 patients [11.3%]; relative risk, 0.51; 95% CI, 0.29 to 0.91; P=0.04), but the incidence of silent brain infarction did not differ significantly between the study groups (P=0.97). Serious adverse events occurred in 23.1% of the patients in the PFO closure group and in 27.8% of the patients in the antiplatelet-only group (P=0.22). Serious device-related adverse events occurred in 6 patients (1.4%) in the PFO closure group, and atrial fibrillation occurred in 29 patients (6.6%) after PFO closure.\n\n\nCONCLUSIONS\nAmong patients with a PFO who had had a cryptogenic stroke, the risk of subsequent ischemic stroke was lower among those assigned to PFO closure combined with antiplatelet therapy than among those assigned to antiplatelet therapy alone; however, PFO closure was associated with higher rates of device complications and atrial fibrillation. (Funded by W.L. Gore and Associates; Gore REDUCE ClinicalTrials.gov number, NCT00738894 .).", "title": "" }, { "docid": "140fd854c8564b75609f692229ac616e", "text": "Modern search systems are based on dozens or even hundreds of ranking features. The dueling bandit gradient descent (DBGD) algorithm has been shown to effectively learn combinations of these features solely from user interactions. DBGD explores the search space by comparing a possibly improved ranker to the current production ranker. To this end, it uses interleaved comparison methods, which can infer with high sensitivity a preference between two rankings based only on interaction data. A limiting factor is that it can compare only to a single exploratory ranker. We propose an online learning to rank algorithm called multileave gradient descent (MGD) that extends DBGD to learn from so-called multileaved comparison methods that can compare a set of rankings instead of merely a pair. We show experimentally that MGD allows for better selection of candidates than DBGD without the need for more comparisons involving users. 
An important implication of our results is that orders of magnitude less user interaction data is required to find good rankers when multileaved comparisons are used within online learning to rank. Hence, fewer users need to be exposed to possibly inferior rankers and our method allows search engines to adapt more quickly to changes in user preferences.", "title": "" } ]
scidocsrr
32bf91d28b824afac3874285773666d9
From archaeon to eukaryote: the evolutionary dark ages of the eukaryotic cell.
[ { "docid": "023fa0ac94b2ea1740f1bbeb8de64734", "text": "The establishment of an endosymbiotic relationship typically seems to be driven through complementation of the host's limited metabolic capabilities by the biochemical versatility of the endosymbiont. The most significant examples of endosymbiosis are represented by the endosymbiotic acquisition of plastids and mitochondria, introducing photosynthesis and respiration to eukaryotes. However, there are numerous other endosymbioses that evolved more recently and repeatedly across the tree of life. Recent advances in genome sequencing technology have led to a better understanding of the physiological basis of many endosymbiotic associations. This review focuses on endosymbionts in protists (unicellular eukaryotes). Selected examples illustrate the incorporation of various new biochemical functions, such as photosynthesis, nitrogen fixation and recycling, and methanogenesis, into protist hosts by prokaryotic endosymbionts. Furthermore, photosynthetic eukaryotic endosymbionts display a great diversity of modes of integration into different protist hosts. In conclusion, endosymbiosis seems to represent a general evolutionary strategy of protists to acquire novel biochemical functions and is thus an important source of genetic innovation.", "title": "" } ]
[ { "docid": "179675ecf9ef119fcb0bc512995e2920", "text": "There is little evidence available on the use of robot-assisted therapy in subacute stroke patients. A randomized controlled trial was carried out to evaluate the short-time efficacy of intensive robot-assisted therapy compared to usual physical therapy performed in the early phase after stroke onset. Fifty-three subacute stroke patients at their first-ever stroke were enrolled 30 ± 7 days after the acute event and randomized into two groups, both exposed to standard therapy. Additional 30 sessions of robot-assisted therapy were provided to the Experimental Group. Additional 30 sessions of usual therapy were provided to the Control Group. The following impairment evaluations were performed at the beginning (T0), after 15 sessions (T1), and at the end of the treatment (T2): Fugl-Meyer Assessment Scale (FM), Modified Ashworth Scale-Shoulder (MAS-S), Modified Ashworth Scale-Elbow (MAS-E), Total Passive Range of Motion-Shoulder/Elbow (pROM), and Motricity Index (MI). Evidence of significant improvements in MAS-S (p = 0.004), MAS-E (p = 0.018) and pROM (p < 0.0001) was found in the Experimental Group. Significant improvement was demonstrated in both Experimental and Control Group in FM (EG: p < 0.0001, CG: p < 0.0001) and MI (EG: p < 0.0001, CG: p < 0.0001), with an higher improvement in the Experimental Group. Robot-assisted upper limb rehabilitation treatment can contribute to increasing motor recovery in subacute stroke patients. Focusing on the early phase of stroke recovery has a high potential impact in clinical practice.", "title": "" }, { "docid": "f7d535f9a5eeae77defe41318d642403", "text": "On-line learning in domains where the target concept depends on some hidden context poses serious problems. A changing context can induce changes in the target concepts, producing what is known as concept drift. We describe a family of learning algorithms that flexibly react to concept drift and can take advantage of situations where contexts reappear. The general approach underlying all these algorithms consists of (1) keeping only a window of currently trusted examples and hypotheses; (2) storing concept descriptions and re-using them when a previous context re-appears; and (3) controlling both of these functions by a heuristic that constantly monitors the system's behavior. The paper reports on experiments that test the systems' performance under various conditions such as different levels of noise and different extent and rate of concept drift.", "title": "" }, { "docid": "97582a93ef3977fab8b242a1ce102459", "text": "We propose a distributed, multi-camera video analysis paradigm for aiport security surveillance. We propose to use a new class of biometry signatures, which are called soft biometry including a person's height, built, skin tone, color of shirts and trousers, motion pattern, trajectory history, etc., to ID and track errant passengers and suspicious events without having to shut down a whole terminal building and cancel multiple flights. The proposed research is to enable the reliable acquisition, maintenance, and correspondence of soft biometry signatures in a coordinated manner from a large number of video streams for security surveillance. 
The intellectual merit of the proposed research is to address three important video analysis problems in a distributed, multi-camera surveillance network: sensor network calibration, peer-to-peer sensor data fusion, and stationary-dynamic cooperative camera sensing.", "title": "" }, { "docid": "a10b7c4b088c8df706381cfc3f1faec1", "text": "OBJECTIVE\nTo develop a clinical practice guideline for red blood cell transfusion in adult trauma and critical care.\n\n\nDESIGN\nMeetings, teleconferences and electronic-based communication to achieve grading of the published evidence, discussion and consensus among the entire committee members.\n\n\nMETHODS\nThis practice management guideline was developed by a joint taskforce of EAST (Eastern Association for Surgery of Trauma) and the American College of Critical Care Medicine (ACCM) of the Society of Critical Care Medicine (SCCM). We performed a comprehensive literature review of the topic and graded the evidence using scientific assessment methods employed by the Canadian and U.S. Preventive Task Force (Grading of Evidence, Class I, II, III; Grading of Recommendations, Level I, II, III). A list of guideline recommendations was compiled by the members of the guidelines committees for the two societies. Following an extensive review process by external reviewers, the final guideline manuscript was reviewed and approved by the EAST Board of Directors, the Board of Regents of the ACCM and the Council of SCCM.\n\n\nRESULTS\nKey recommendations are listed by category, including (A) Indications for RBC transfusion in the general critically ill patient; (B) RBC transfusion in sepsis; (C) RBC transfusion in patients at risk for or with acute lung injury and acute respiratory distress syndrome; (D) RBC transfusion in patients with neurologic injury and diseases; (E) RBC transfusion risks; (F) Alternatives to RBC transfusion; and (G) Strategies to reduce RBC transfusion.\n\n\nCONCLUSIONS\nEvidence-based recommendations regarding the use of RBC transfusion in adult trauma and critical care will provide important information to critical care practitioners.", "title": "" }, { "docid": "950fc4239ced87fef76ac687af3b09ac", "text": "Software developers’ activities are in general recorded in software repositories such as version control systems, bug trackers and mail archives. While abundant information is usually present in such repositories, successful information extraction is often challenged by the necessity to simultaneously analyze different repositories and to combine the information obtained. We propose to apply process mining techniques, originally developed for business process analysis, to address this challenge. However, in order for process mining to become applicable, different software repositories should be combined, and “related” software development events should be matched: e.g., mails sent about a file, modifications of the file and bug reports that can be traced back to it. The combination and matching of events has been implemented in FRASR (Framework for Analyzing Software Repositories), augmenting the process mining framework ProM. FRASR has been successfully applied in a series of case studies addressing such aspects of the development process as roles of different developers and the way bug reports are handled.", "title": "" }, { "docid": "ea31a93d54e45eede5ba3e6263e8a13e", "text": "Clustering methods for data-mining problems must be extremely scalable. 
In addition, several data mining applications demand that the clusters obtained be balanced, i.e., of approximately the same size or importance. In this paper, we propose a general framework for scalable, balanced clustering. The data clustering process is broken down into three steps: sampling of a small representative subset of the points, clustering of the sampled data, and populating the initial clusters with the remaining data followed by refinements. First, we show that a simple uniform sampling from the original data is sufficient to get a representative subset with high probability. While the proposed framework allows a large class of algorithms to be used for clustering the sampled set, we focus on some popular parametric algorithms for ease of exposition. We then present algorithms to populate and refine the clusters. The algorithm for populating the clusters is based on a generalization of the stable marriage problem, whereas the refinement algorithm is a constrained iterative relocation scheme. The complexity of the overall method is O(kN log N) for obtaining k balanced clusters from N data points, which compares favorably with other existing techniques for balanced clustering. In addition to providing balancing guarantees, the clustering performance obtained using the proposed framework is comparable to and often better than the corresponding unconstrained solution. Experimental results on several datasets, including high-dimensional (>20,000) ones, are provided to demonstrate the efficacy of the proposed framework.", "title": "" }, { "docid": "e37b3a68c850d1fb54c9030c22b5792f", "text": "We address a central problem of neuroanatomy, namely, the automatic segmentation of neuronal structures depicted in stacks of electron microscopy (EM) images. This is necessary to efficiently map 3D brain structure and connectivity. To segment biological neuron membranes, we use a special type of deep artificial neural network as a pixel classifier. The label of each pixel (membrane or nonmembrane) is predicted from raw pixel values in a square window centered on it. The input layer maps each window pixel to a neuron. It is followed by a succession of convolutional and max-pooling layers which preserve 2D information and extract features with increasing levels of abstraction. The output layer produces a calibrated probability for each class. The classifier is trained by plain gradient descent on a 512 × 512 × 30 stack with known ground truth, and tested on a stack of the same size (ground truth unknown to the authors) by the organizers of the ISBI 2012 EM Segmentation Challenge. Even without problem-specific postprocessing, our approach outperforms competing techniques by a large margin in all three considered metrics, i.e. rand error, warping error and pixel error. For pixel error, our approach is the only one outperforming a second human observer.", "title": "" }, { "docid": "9ca63cbf9fb0294aff706562d629e9d1", "text": "This demo showcases Scythe, a novel query-by-example system that can synthesize expressive SQL queries from inputoutput examples. Scythe is designed to help end-users program SQL and explore data simply using input-output examples. From a web-browser, users can obtain SQL queries with Scythe in an automated, interactive fashion: from a provided example, Scythe synthesizes SQL queries and resolves ambiguities via conversations with the users. 
In this demo, we first show how end users can formulate queries using Scythe; we then switch to the perspective of an algorithm designer to show how Scythe can scale up to handle complex SQL features, like outer joins and subqueries.", "title": "" }, { "docid": "e34d244a395a753b0cb97f8535b56add", "text": "We propose Quadruplet Convolutional Neural Networks (Quad-CNN) for multi-object tracking, which learn to associate object detections across frames using quadruplet losses. The proposed networks consider target appearances together with their temporal adjacencies for data association. Unlike conventional ranking losses, the quadruplet loss enforces an additional constraint that makes temporally adjacent detections more closely located than the ones with large temporal gaps. We also employ a multi-task loss to jointly learn object association and bounding box regression for better localization. The whole network is trained end-to-end. For tracking, the target association is performed by minimax label propagation using the metric learned from the proposed network. We evaluate the performance of our multi-object tracking algorithm on public MOT Challenge datasets, and achieve outstanding results.", "title": "" }, { "docid": "c16428f049cebdc383c4ee24f75da6b0", "text": "Classification and regression trees are machine-learning methods for constructing prediction models from data. The models are obtained by recursively partitioning the data space and fitting a simple prediction model within each partition. As a result, the partitioning can be represented graphically as a decision tree. Classification trees are designed for dependent variables that take a finite number of unordered values, with prediction error measured in terms of misclassification cost. Regression trees are for dependent variables that take continuous or ordered discrete values, with prediction error typically measured by the squared difference between the observed and predicted values. This article gives an introduction to the subject by reviewing some widely available algorithms and comparing their capabilities, strengths, and weaknesses in two examples. © 2011 John Wiley & Sons, Inc. WIREs Data Mining Knowl Discov 2011 1 14–23 DOI: 10.1002/widm.8", "title": "" }, { "docid": "3364f6fab787e3dbcc4cb611960748b8", "text": "Filamentous fungi can each produce dozens of secondary metabolites which are attractive as therapeutics, drugs, antimicrobials, flavour compounds and other high-value chemicals. Furthermore, they can be used as an expression system for eukaryotic proteins. Application of most fungal secondary metabolites is, however, so far hampered by the lack of suitable fermentation protocols for the producing strain and/or by low product titers. To overcome these limitations, we report here the engineering of the industrial fungus Aspergillus niger to produce high titers (up to 4,500 mg • l−1) of secondary metabolites belonging to the class of nonribosomal peptides. For a proof-of-concept study, we heterologously expressed the 351 kDa nonribosomal peptide synthetase ESYN from Fusarium oxysporum in A. niger. ESYN catalyzes the formation of cyclic depsipeptides of the enniatin family, which exhibit antimicrobial, antiviral and anticancer activities. The encoding gene esyn1 was put under control of a tunable bacterial-fungal hybrid promoter (Tet-on) which was switched on during early-exponential growth phase of A. niger cultures.
The enniatins were isolated and purified by means of reverse phase chromatography and their identity and purity proven by tandem MS, NMR spectroscopy and X-ray crystallography. The initial yields of 1 mg • l−1 of enniatin were increased about 950 fold by optimizing feeding conditions and the morphology of A. niger in liquid shake flask cultures. Further yield optimization (about 4.5 fold) was accomplished by cultivating A. niger in 5 l fed batch fermentations. Finally, an autonomous A. niger expression host was established, which was independent from feeding with the enniatin precursor d-2-hydroxyvaleric acid d-Hiv. This was achieved by constitutively expressing a fungal d-Hiv dehydrogenase in the esyn1-expressing A. niger strain, which used the intracellular α-ketovaleric acid pool to generate d-Hiv. This is the first report demonstrating that A. niger is a potent and promising expression host for nonribosomal peptides with titers high enough to become industrially attractive. Application of the Tet-on system in A. niger allows precise control on the timing of product formation, thereby ensuring high yields and purity of the peptides produced.", "title": "" }, { "docid": "f562bd72463945bd35d42894e4815543", "text": "Sound levels in animal shelters regularly exceed 100 dB. Noise is a physical stressor on animals that can lead to behavioral, physiological, and anatomical responses. There are currently no policies regulating noise levels in dog kennels. The objective of this study was to evaluate the noise levels dogs are exposed to in an animal shelter on a continuous basis and to determine the need, if any, for noise regulations. Noise levels at a newly constructed animal shelter were measured using a noise dosimeter in all indoor dog-holding areas. These holding areas included large dog adoptable, large dog stray, small dog adoptable, small dog stray, and front intake. The noise level was highest in the large adoptable area. Sound from the large adoptable area affected some of the noise measurements for the other rooms. Peak noise levels regularly exceeded the measuring capability of the dosimeter (118.9 dBA). Often, in new facility design, there is little attention paid to noise abatement, despite the evidence that noise causes physical and psychological stress on dogs. To meet their behavioral and physical needs, kennel design should also address optimal sound range.", "title": "" }, { "docid": "27caf5f3a638e5084ca361424e69e9d0", "text": "Digital watermarking of multimedia content has become a very active research area over the last several years. A general framework for watermark embedding and detection/decoding is presented here along with a review of some of the algorithms for different media types described in the literature. We highlight some of the differences based on application such as copyright protection, authentication, tamper detection, and data hiding as well as differences in technology and system requirements for different media types such as digital images, video, audio and text.", "title": "" }, { "docid": "869a2cfbb021104e7f3bc7cb214b82f9", "text": "The commoditization of high-performance networking has sparked research interest in the RDMA capability of this hardware. One-sided RDMA primitives, in particular, have generated substantial excitement due to the ability to directly access remote memory from within an application without involving the TCP/IP stack or the remote CPU. 
This paper considers how to leverage RDMA to improve the analytical performance of parallel database systems. To shuffle data efficiently using RDMA, one needs to consider a complex design space that includes (1) the number of open connections, (2) the contention for the shared network interface, (3) the RDMA transport function, and (4) how much memory should be reserved to exchange data between nodes during query processing. We contribute six designs that capture salient trade-offs in this design space. We comprehensively evaluate how transport-layer decisions impact the query performance of a database system for different generations of InfiniBand. We find that a shuffling operator that uses the RDMA Send/Receive transport function over the Unreliable Datagram transport service can transmit data up to 4× faster than an RDMA-capable MPI implementation in a 16-node cluster. The response time of TPC-H queries improves by as much as 2×.", "title": "" }, { "docid": "644ebe324c23a23bc081119f13190810", "text": "Most computer systems currently consist of DRAM as main memory and hard disk drives (HDDs) as storage devices. Due to the volatile nature of DRAM, the main memory may suffer from data loss in the event of power failures or system crashes. With rapid development of new types of non-volatile memory (NVRAM), such as PCM, Memristor, and STT-RAM, it becomes likely that one of these technologies will replace DRAM as main memory in the not-too-distant future. In an NVRAM based buffer cache, any updated pages can be kept longer without the urgency to be flushed to HDDs. This opens opportunities for designing new buffer cache policies that can achieve better storage performance. However, it is challenging to design a policy that can also increase the cache hit ratio. In this paper, we propose a buffer cache policy, named I/O-Cache, that regroups and synchronizes long sets of consecutive dirty pages to take advantage of HDDs' fast sequential access speed and the non-volatile property of NVRAM. In addition, our new policy can dynamically separate the whole cache into a dirty cache and a clean cache, according to the characteristics of the workload, to decrease storage writes. We evaluate our scheme with various traces. The experimental results show that I/O-Cache shortens I/O completion time, decreases the number of I/O requests, and improves the cache hit ratio compared with existing cache policies.", "title": "" }, { "docid": "9da15e2851124d6ca1524ba28572f922", "text": "With the growth of mobile data application and the ultimate expectations of 5G technology, the need to expand the capacity of the wireless networks is inevitable. Massive MIMO technique is currently taking a major part of the ongoing research, and expected to be the key player in the new cellular technologies. This papers presents an overview of the major aspects related to massive MIMO design including, antenna array general design, configuration, and challenges, in addition to advanced beamforming techniques and channel modeling and estimation issues affecting the implementation of such systems.", "title": "" }, { "docid": "e1a4e8b8c892f1e26b698cd9fd37c3db", "text": "Social networks such as Facebook, MySpace, and Twitter have become increasingly important for reaching millions of users. Consequently, spammers are increasing using such networks for propagating spam. 
While existing filtering techniques such as collaborative filters and behavioral analysis filters are able to significantly reduce spam, each social network needs to build its own independent spam filter and support a spam team to keep spam prevention techniques current. We propose a framework for spam detection which can be used across all social network sites. There are numerous benefits of the framework including: 1) new spam detected on one social network can quickly be identified across social networks; 2) accuracy of spam detection will improve with a large amount of data from across social networks; 3) other techniques (such as blacklists and message shingling) can be integrated and centralized; 4) new social networks can plug into the system easily, preventing spam at an early stage. We provide an experimental study of real datasets from social networks to demonstrate the flexibility and feasibility of our framework.", "title": "" }, { "docid": "354cbda757045bcee7044159bd353ca5", "text": "In this paper we present the preliminary work of a Basque poetry generation system. Basically, we have extracted the POS-tag sequences from some verse corpora and calculated the probability of each sequence. For the generation process we have defined 3 different experiments: Based on a strophe from the corpora, we (a) replace each word with another according to its POS-tag and suffixes, (b) replace each noun and adjective with another equally inflected word and (c) replace only nouns with semantically related ones (inflected). Finally we evaluate those strategies using a Turing Test-like evaluation.", "title": "" }, { "docid": "c479983e954695014417976275030746", "text": "Semi-Non-negative Matrix Factorization is a technique that learns a low-dimensional representation of a dataset that lends itself to a clustering interpretation. It is possible that the mapping between this new representation and our original data matrix contains rather complex hierarchical information with implicit lower-level hidden attributes, that classical one-level clustering methodologies cannot interpret. In this work we propose a novel model, Deep Semi-NMF, that is able to learn such hidden representations that lend themselves to an interpretation of clustering according to different, unknown attributes of a given dataset. We also present a semi-supervised version of the algorithm, named Deep WSF, that allows the use of (partial) prior information for each of the known attributes of a dataset, that allows the model to be used on datasets with mixed attribute knowledge. Finally, we show that our models are able to learn low-dimensional representations that are better suited for clustering, but also classification, outperforming not only Semi-Non-negative Matrix Factorization, but also other state-of-the-art methodology variants.", "title": "" }, { "docid": "81b5379abf3849e1ae4e233fd4955062", "text": "Three-phase dc/dc converters have superior characteristics including lower current rating of switches, reduced output filter requirement, and effective utilization of transformers. To further reduce the voltage stress on switches, three-phase three-level (TPTL) dc/dc converters have been investigated recently; however, numerous active power switches result in a complicated configuration in the available topologies. Therefore, a novel TPTL dc/dc converter adopting a symmetrical duty cycle control is proposed in this paper.
Compared with the available TPTL converters, the proposed converter has fewer switches and simpler configuration. The voltage stress on all switches can be reduced to the half of the input voltage. Meanwhile, the ripple frequency of output current can be increased significantly, resulting in a reduced filter requirement. Experimental results from a 540-660-V input and 48-V/20-A output are presented to verify the theoretical analysis and the performance of the proposed converter.", "title": "" } ]
scidocsrr
6d3e17e4b44a2cadedc8f483ab186cb2
Add English to image Chinese captioning
[ { "docid": "210a777341f3557081d43f2580428c32", "text": "This paper studies the problem of associating images with descriptive sentences by embedding them in a common latent space. We are interested in learning such embeddings from hundreds of thousands or millions of examples. Unfortunately, it is prohibitively expensive to fully annotate this many training images with ground-truth sentences. Instead, we ask whether we can learn better image-sentence embeddings by augmenting small fully annotated training sets with millions of images that have weak and noisy annotations (titles, tags, or descriptions). After investigating several state-of-the-art scalable embedding methods, we introduce a new algorithm called Stacked Auxiliary Embedding that can successfully transfer knowledge from millions of weakly annotated images to improve the accuracy of retrieval-based image description.", "title": "" }, { "docid": "c879ee3945592f2e39bb3306602bb46a", "text": "This paper presents a novel approach for automatically generating image descriptions: visual detectors, language models, and multimodal similarity models learnt directly from a dataset of image captions. We use multiple instance learning to train visual detectors for words that commonly occur in captions, including many different parts of speech such as nouns, verbs, and adjectives. The word detector outputs serve as conditional inputs to a maximum-entropy language model. The language model learns from a set of over 400,000 image descriptions to capture the statistics of word usage. We capture global semantics by re-ranking caption candidates using sentence-level features and a deep multimodal similarity model. Our system is state-of-the-art on the official Microsoft COCO benchmark, producing a BLEU-4 score of 29.1%. When human judges compare the system captions to ones written by other people on our held-out test set, the system captions have equal or better quality 34% of the time.", "title": "" }, { "docid": "9eaab923986bf74bdd073f6766ca45b2", "text": "This paper introduces a novel generation system that composes humanlike descriptions of images from computer vision detections. By leveraging syntactically informed word co-occurrence statistics, the generator filters and constrains the noisy detections output from a vision system to generate syntactic trees that detail what the computer vision system sees. Results show that the generation system outperforms state-of-the-art systems, automatically generating some of the most natural image descriptions to date.", "title": "" } ]
[ { "docid": "b59965c405937a096186e41b2a3877c3", "text": "The culmination of many years of increasing research into the toxicity of tau aggregation in neurodegenerative disease has led to the consensus that soluble, oligomeric forms of tau are likely the most toxic entities in disease. While tauopathies overlap in the presence of tau pathology, each disease has a unique combination of symptoms and pathological features; however, most study into tau has grouped tau oligomers and studied them as a homogenous population. Established evidence from the prion field combined with the most recent tau and amyloidogenic protein research suggests that tau is a prion-like protein, capable of seeding the spread of pathology throughout the brain. Thus, it is likely that tau may also form prion-like strains or diverse conformational structures that may differ by disease and underlie some of the differences in symptoms and pathology in neurodegenerative tauopathies. The development of techniques and new technology for the detection of tau oligomeric strains may, therefore, lead to more efficacious diagnostic and treatment strategies for neurodegenerative disease. [Formula: see text].", "title": "" }, { "docid": "2827e0d197b7f66c7f6ceb846c6aaa27", "text": "The food industry is becoming more customer-oriented and needs faster response times to deal with food scandals and incidents. Good traceability systems help to minimize the production and distribution of unsafe or poor quality products, thereby minimizing the potential for bad publicity, liability, and recalls. The current food labelling system cannot guarantee that the food is authentic, good quality and safe. Therefore, traceability is applied as a tool to assist in the assurance of food safety and quality as well as to achieve consumer confidence. This paper presents comprehensive information about traceability with regards to safety and quality in the food supply chain. 2013 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "e84ca42f96cca0fe3ed7c70d90554a8d", "text": "While the volume of scholarly publications has increased at a frenetic pace, accessing and consuming the useful candidate papers, in very large digital libraries, is becoming an essential and challenging task for scholars. Unfortunately, because of language barrier, some scientists (especially the junior ones or graduate students who do not master other languages) cannot efficiently locate the publications hosted in a foreign language repository. In this study, we propose a novel solution, cross-language citation recommendation via Hierarchical Representation Learning on Heterogeneous Graph (HRLHG), to address this new problem. HRLHG can learn a representation function by mapping the publications, from multilingual repositories, to a low-dimensional joint embedding space from various kinds of vertexes and relations on a heterogeneous graph. By leveraging both global (task specific) plus local (task independent) information as well as a novel supervised hierarchical random walk algorithm, the proposed method can optimize the publication representations by maximizing the likelihood of locating the important cross-language neighborhoods on the graph. 
Experiment results show that the proposed method can not only outperform state-of-the-art baseline models, but also improve the interpretability of the representation model for cross-language citation recommendation task.", "title": "" }, { "docid": "2c39430076bf63a05cde06fe57a61ff4", "text": "With the advent of IoT based technologies; the overall industrial sector is amenable to undergo a fundamental and essential change alike to the industrial revolution. Online Monitoring solutions of environmental polluting parameter using Internet Of Things (IoT) techniques help us to gather the parameter values such as pH, temperature, humidity and concentration of carbon monoxide gas, etc. Using sensors and enables to have a keen control on the environmental pollution caused by the industries. This paper introduces a LabVIEW based online pollution monitoring of industries for the control over pollution caused by untreated disposal of waste. This paper proposes the use of an AT-mega 2560 Arduino board which collects the temperature and humidity parameter from the DHT-11 sensor, carbon dioxide concentration using MG-811 and update it into the online database using MYSQL. For monitoring and controlling, a website is designed and hosted which will give a real essence of IoT. To increase the reliability and flexibility an android application is also developed.", "title": "" }, { "docid": "bfb79421ca0ddfd5a584f009f8102a2c", "text": "In this paper, suppression of cross-polarized (XP) radiation of a circular microstrip patch antenna (CMPA) employing two new geometries of defected ground structures (DGSs), is experimentally investigated. One of the antennas employs a circular ring shaped defect in the ground plane, located bit away from the edge of the patch. This structure provides an improvement of XP level by 5 to 7 dB compared to an identical patch with normal ground plane. The second structure incorporates two arc-shaped DGSs in the H-plane of the patch. This configuration improves the XP radiation by about 7 to 12 dB over and above a normal CMPA. For demonstration of the concept, a set of prototypes have been examined at C-band. The experimental results have been presented.", "title": "" }, { "docid": "7ea3d3002506e0ea6f91f4bdab09c2d5", "text": "We propose a novel and robust computational framework for automatic detection of deformed 2D wallpaper patterns in real-world images. The theory of 2D crystallographic groups provides a sound and natural correspondence between the underlying lattice of a deformed wallpaper pattern and a degree-4 graphical model. We start the discovery process with unsupervised clustering of interest points and voting for consistent lattice unit proposals. The proposed lattice basis vectors and pattern element contribute to the pairwise compatibility and joint compatibility (observation model) functions in a Markov random field (MRF). Thus, we formulate the 2D lattice detection as a spatial, multitarget tracking problem, solved within an MRF framework using a novel and efficient mean-shift belief propagation (MSBP) method. Iterative detection and growth of the deformed lattice are interleaved with regularized thin-plate spline (TPS) warping, which rectifies the current deformed lattice into a regular one to ensure stability of the MRF model in the next round of lattice recovery. 
We provide quantitative comparisons of our proposed method with existing algorithms on a diverse set of 261 real-world photos to demonstrate significant advances in accuracy and speed over the state of the art in automatic discovery of regularity in real images.", "title": "" }, { "docid": "8c6c8ab24394ddfde8209cd0dacc9da3", "text": "The Intelligence in Wikipedia project at the University of Washington is combining self-supervised information extraction (IE) techniques with a mixed initiative interface designed to encourage communal content creation (CCC). Since IE and CCC are each powerful ways to produce large amounts of structured information, they have been studied extensively — but only in isolation. By combining the two methods in a virtuous feedback cycle, we aim for substantial synergy. While previous papers have described the details of individual aspects of our endeavor [25, 26, 24, 13], this report provides an overview of the project’s progress and vision.", "title": "" }, { "docid": "29786d164d0d5e76ea9c098944e27266", "text": "Future mobile communications systems are likely to be very different to those of today with new service innovations driven by increasing data traffic demand, increasing processing power of smart devices and new innovative applications. To meet these service demands the telecommunications industry is converging on a common set of 5G requirements which includes network speeds as high as 10 Gbps, cell edge rate greater than 100 Mbps, and latency of less than 1 msec. To reach these 5G requirements the industry is looking at new spectrum bands in the range up to 100 GHz where there is spectrum availability for wide bandwidth channels. For the development of new 5G systems to operate in bands up to 100 GHz there is a need for accurate radio propagation models which are not addressed by existing channel models developed for bands below 6 GHz. This paper presents a preliminary overview of the 5G channel models for bands up to 100 GHz in indoor offices and shopping malls, derived from extensive measurements across a multitude of bands. These studies have found some extensibility of the existing 3GPP models (e.g. 3GPP TR36.873) to the higher frequency bands up to 100 GHz. The measurements indicate that the smaller wavelengths introduce an increased sensitivity of the propagation models to the scale of the environment and show some frequency dependence of the path loss as well as increased occurrence of blockage. Further, the penetration loss is highly dependent on the material and tends to increase with frequency. The small-scale characteristics of the channel such as delay spread and angular spread and the multipath richness is somewhat similar over the frequency range, which is encouraging for extending the existing 3GPP models to the wider frequency range. Further work will be carried out to complete these models, but this paper presents the first steps for an initial basis for the model development.", "title": "" }, { "docid": "16f2811b6052a1a9e527d61b2ff6509b", "text": "Corneal topography is a non-invasive medical imaging techniqueto assess the shape of the cornea in ophthalmology. In this paper we demonstrate that in addition to its health care use, corneal topography could provide valuable biometric measurements for person authentication. 
To extract a feature vector from these images (topographies), we propose to fit the geometry of the corneal surface with Zernike polynomials, followed by a linear discriminant analysis (LDA) of the Zernike coefficients to select the most discriminating features. The results show that the proposed method reduced the typical d-dimensional Zernike feature vector (d=36) into a much lower r-dimensional feature vector (r=3), and improved the Equal Error Rate from 2.88% to 0.96%, with the added benefit of faster computation time.", "title": "" }, { "docid": "f9cc9e1ddc0d1db56f362a1ef409274d", "text": "Phishing is increasing dramatically with the development of modern technologies and the global worldwide computer networks. This results in the loss of customer’s confidence in e-commerce and online banking, financial damages, and identity theft. Phishing is fraudulent effort aims to acquire sensitive information from users such as credit card credentials, and social security number. In this article, we propose a model for predicting phishing attacks based on Artificial Neural Network (ANN). A Feed Forward Neural Network trained by Back Propagation algorithm is developed to classify websites as phishing or legitimate. The suggested model shows high acceptance ability for noisy data, fault tolerance and high prediction accuracy with respect to false positive and false negative rates.", "title": "" }, { "docid": "1d724b07c232098e2a5e5af2bb1e7c83", "text": "[2] Brown SJ, McLean WH. One remarkable molecule: filaggrin. J Invest Dermatol 2012;132:751–62. [3] Sandilands A, Terron-Kwiatkowski A, Hull PR, O’Regan GM, Clayton TH, Watson RM, et al. Comprehensive analysis of the gene encoding filaggrin uncovers prevalent and rare mutations in ichthyosis vulgaris and atopic eczema. Nat Genet 2007;39:650–4. [4] Margolis DJ, Apter AJ, Gupta J, Hoffstad O, Papadopoulos M, Campbell LE, et al. The persistence of atopic dermatitis and Filaggrin mutations in a US longitudinal cohort. J Allergy Clin Immunol 2012;130(4):912–7. [5] Smith FJ, Irvine AD, Terron-Kwiatkowski A, Sandilands A, Campbell LE, Zhao Y, et al. Loss-of-function mutations in the gene encoding filaggrin cause ichthyosis vulgaris. Nat Genet 2006;38:337–42. [6] Paternoster L, Standl M, Chen CM, Ramasamy A, Bonnelykke K, Duijts L, et al. Meta-analysis of genome-wide association studies identifies three new risk Table 1 Reliability and validity comparisons for FLG null mutations as assayed by TaqMan and beadchip methods.", "title": "" }, { "docid": "012f30fbeed17fcfd098e5362bd95ee8", "text": "We prove that binary orthogonal arrays of strength 8, length 12 and cardinality 1536 do not exist. This implies the nonexistence of arrays of parameters (strength,length,cardinality) = (n, n + 4, 6.2) for every integer n ≥ 8.", "title": "" }, { "docid": "a50b7ab02d2fe934f5fb5bed14fcdad9", "text": "An empirical study has been conducted investigating the relationship between the performance of an aspect based language model in terms of perplexity and the corresponding information retrieval performance obtained. It is observed, on the corpora considered, that the perplexity of the language model has a systematic relationship with the achievable precision recall performance though it is not statistically significant.", "title": "" }, { "docid": "37a6f3773aebf46cc40266b8bb5692af", "text": "The theory of myofascial pain syndrome (MPS) caused by trigger points (TrPs) seeks to explain the phenomena of muscle pain and tenderness in the absence of evidence for local nociception. 
Although it lacks external validity, many practitioners have uncritically accepted the diagnosis of MPS and its system of treatment. Furthermore, rheumatologists have implicated TrPs in the pathogenesis of chronic widespread pain (FM syndrome). We have critically examined the evidence for the existence of myofascial TrPs as putative pathological entities and for the vicious cycles that are said to maintain them. We find that both are inventions that have no scientific basis, whether from experimental approaches that interrogate the suspect tissue or empirical approaches that assess the outcome of treatments predicated on presumed pathology. Therefore, the theory of MPS caused by TrPs has been refuted. This is not to deny the existence of the clinical phenomena themselves, for which scientifically sound and logically plausible explanations based on known neurophysiological phenomena can be advanced.", "title": "" }, { "docid": "60eff31e8f742873cec993f1499385b5", "text": "There is an increasing interest in employing multiple sensors for surveillance and communications. Some of the motivating factors are reliability, survivability, increase in the number of targets under consideration, and increase in required coverage. Tenney and Sandell have recently treated the Bayesian detection problem with distributed sensors. They did not consider the design of data fusion algorithms. We present an optimum data fusion structure given the detectors. Individual decisions are weighted according to the reliability of the detector and then a threshold comparison is performed to obtain the global decision.", "title": "" }, { "docid": "a9d22e2568bcae7a98af7811546c7853", "text": "This thesis addresses the challenges of building a software system for general-purpose runtime code manipulation. Modern applications, with dynamically-loaded modules and dynamicallygenerated code, are assembled at runtime. While it was once feasible at compile time to observe and manipulate every instruction — which is critical for program analysis, instrumentation, trace gathering, optimization, and similar tools — it can now only be done at runtime. Existing runtime tools are successful at inserting instrumentation calls, but no general framework has been developed for fine-grained and comprehensive code observation and modification without high overheads. This thesis demonstrates the feasibility of building such a system in software. We present DynamoRIO, a fully-implemented runtime code manipulation system that supports code transformations on any part of a program, while it executes. DynamoRIO uses code caching technology to provide efficient, transparent, and comprehensive manipulation of an unmodified application running on a stock operating system and commodity hardware. DynamoRIO executes large, complex, modern applications with dynamically-loaded, generated, or even modified code. Despite the formidable obstacles inherent in the IA-32 architecture, DynamoRIO provides these capabilities efficiently, with zero to thirty percent time and memory overhead on both Windows and Linux. DynamoRIO exports an interface for building custom runtime code manipulation tools of all types. It has been used by many researchers, with several hundred downloads of our public release, and is being commercialized in a product for protection against remote security exploits, one of numerous applications of runtime code manipulation. 
Thesis Supervisor: Saman Amarasinghe Title: Associate Professor of Electrical Engineering and Computer Science", "title": "" }, { "docid": "d5b004af32bd747c2b5ad175975f8c06", "text": "This paper presents a design of a quasi-millimeter wave wideband antenna array consisting of a leaf-shaped bowtie antenna (LSBA) and series-parallel feed networks in which parallel strip and microstrip lines are employed. A 16-element LSBA array is designed such that the antenna array operates over the frequency band of 22-30GHz. In order to demonstrate the effective performance of the presented configuration, characteristics of the designed LSBA array are evaluated by the finite-difference time domain (FDTD) analysis and measurements. Over the frequency range from 22GHz to 30GHz, the simulated reflection coefficient is observed to be less than -8dB, and the actual gain of 12.3-19.4dBi is obtained.", "title": "" }, { "docid": "95037e7dc3ae042d64a4b343ad4efd39", "text": "We classify human actions occurring in depth image sequences using features based on skeletal joint positions. The action classes are represented by a multi-level Hierarchical Dirichlet Process – Hidden Markov Model (HDP-HMM). The non-parametric HDP-HMM allows the inference of hidden states automatically from training data. The model parameters of each class are formulated as transformations from a shared base distribution, thus promoting the use of unlabelled examples during training and borrowing information across action classes. Further, the parameters are learnt in a discriminative way. We use a normalized gamma process representation of HDP and margin based likelihood functions for this purpose. We sample parameters from the complex posterior distribution induced by our discriminative likelihood function using elliptical slice sampling. Experiments with two different datasets show that action class models learnt using our technique produce good classification results.", "title": "" }, { "docid": "118526b566b800d9dea30d2e4c904feb", "text": "With the problem of increased web resources and the huge amount of information available, the necessity of having automatic summarization systems appeared. Since summarization is needed the most in the process of searching for information on the web, where the user aims at a certain domain of interest according to his query, in this case domain-based summaries would serve the best. Despite the existence of plenty of research work in the domain-based summarization in English, there is lack of them in Arabic due to the shortage of existing knowledge bases. In this paper we introduce a query based, Arabic text, single document summarization using an existing Arabic language thesaurus and an extracted knowledge base. We use an Arabic corpus to extract domain knowledge represented by topic related concepts/ keywords and the lexical relations among them. The user’s query is expanded once by using the Arabic WordNet thesaurus and then by adding the domain specific knowledge base to the expansion. For the summarization dataset, Essex Arabic Summaries Corpus was used. It has many topic based articles with multiple human summaries. The performance appeared to be enhanced when using our extracted knowledge base than to just use the WordNet.", "title": "" }, { "docid": "3aaffdda034c762ad36954386d796fb9", "text": "KNTU CDRPM is a cable driven redundant parallel manipulator, which is under investigation for possible high speed and large workspace applications. 
This newly developed mechanism has several advantages compared to conventional parallel mechanisms. Its rotational motion range is relatively large, its redundancy improves safety in case of cable failure, and its design is suitable for long-duration, high-acceleration motions. In this paper, the collision-free workspace of the manipulator is derived by applying a fast geometrical intersection detection method, which can be used for any fully parallel manipulator. Implementation of the algorithm on the Neuron design of the KNTU CDRPM leads to significant results, which introduce a new style of design for spatial cable-driven parallel manipulators. The results are elaborated in three presentations: constant-orientation workspace, total orientation workspace and orientation workspace.", "title": "" } ]
scidocsrr
624e607dbd27503e328cfd000f7b9ac3
A Novel Variable Reluctance Resolver with Nonoverlapping Tooth–Coil Windings
[ { "docid": "94cb308e7b39071db4eda05c5ff16d95", "text": "A resolver generates a pair of signals proportional to the sine and cosine of the angular position of its shaft. A new low-cost method for converting the amplitudes of these sine/cosine transducer signals into a measure of the input angle without using lookup tables is proposed. The new method takes advantage of the components used to operate the resolver, the excitation (carrier) signal in particular. This is a feedforward method based on comparing the amplitudes of the resolver signals to those of the excitation signal together with another shifted by pi/2. A simple method is then used to estimate the shaft angle through this comparison technique. The poor precision of comparison of the signals around their highly nonlinear peak regions is avoided by using a simple technique that relies only on the alternating pseudolinear segments of the signals. This results in a better overall accuracy of the converter. Beside simplicity of implementation, the proposed scheme offers the advantage of robustness to amplitude fluctuation of the transducer excitation signal.", "title": "" }, { "docid": "b40b81e25501b08a07c64f68c851f4a6", "text": "Variable reluctance (VR) resolver is widely used in traction motor for battery electric vehicle as well as hybrid electric vehicle as a rotor position sensor. VR resolver generates absolute position signal by using resolver-to-digital converter (RDC) in order to deliver exact position of permanent magnets in a rotor of traction motor to motor controller. This paper deals with fault diagnosis of VR resolver by using co-simulation analysis with RDC for position angle detection. As fault conditions, eccentricity of VR resolver, short circuit condition of excitation coil and output signal coils, and material problem of silicon steel in a view point of permeability are considered. 2D FEM is used for the output signal waveforms of SIN, COS and these waveforms are converted into absolute position angle by using the algorithm of RDC. For the verification of proposed analysis results, experiment on fault conditions was conducted and compared with simulation ones.", "title": "" } ]
[ { "docid": "e7230519f0bd45b70c1cbd42f09cb9e8", "text": "Environmental isolates belonging to the genus Acidovorax play a crucial role in degrading a wide range of pollutants. Studies on Acidovorax are currently limited for many species due to the lack of genetic tools. Here, we described the use of the replicon from a small, cryptic plasmid indigenous to Acidovorx temperans strain CB2, to generate stably maintained shuttle vectors. In addition, we have developed a scarless gene knockout technique, as well as establishing green fluorescent protein (GFP) reporter and complementation systems. Taken collectively, these tools will improve genetic manipulations in the genus Acidovorax.", "title": "" }, { "docid": "2fbfe1fa8cda571a931b700cbb18f46e", "text": "A low-noise front-end and its controller are proposed for capacitive touch screen panels. The proposed front-end circuit based on a ΔΣ ADC uses differential sensing and integration scheme to maximize the input dynamic range. In addition, supply and internal reference voltage noise are effectively removed in the sensed touch signal. Furthermore, the demodulation process in front of the ΔΣ ADC provides the maximized oversampling ratio (OSR) so that the scan rate can be increased at the targeted resolution. The proposed IC is implemented in a mixed-mode 0.18-μm CMOS process. The measurement is performed on a bar-patterned 4.3-inch touch screen panel with 12 driving lines and 8 sensing channels. The report rate is 100 Hz, and SNR and spatial jitter are 54 dB and 0.11 mm, respectively. The chip area is 3 × 3 mm2 and total power consumption is 2.9 mW with 1.8-V and 3.3-V supply.", "title": "" }, { "docid": "8ae1ef032c0a949aa31b3ca8bc024cb5", "text": "Measuring intellectual capital is on the agenda of most 21st century organisations. This paper takes a knowledge-based view of the firm and discusses the importance of measuring organizational knowledge assets. Knowledge assets underpin capabilities and core competencies of any organisation. Therefore, they play a key strategic role and need to be measured. This paper reviews the existing approaches for measuring knowledge based assets and then introduces the knowledge asset map which integrates existing approaches in order to achieve comprehensiveness. The paper then introduces the knowledge asset dashboard to clarify the important actor/infrastructure relationship, which elucidates the dynamic nature of these assets. Finally, the paper suggests to visualise the value pathways of knowledge assets before designing strategic key performance indicators which can then be used to test the assumed causal relationships. This will enable organisations to manage and report these key value drivers in today’s economy. Introduction In the last decade management literature has paid significant attention to the role of knowledge for global competitiveness in the 21st century. It is recognised as a durable and more sustainable strategic resource to acquire and maintain competitive advantages (Barney, 1991a; Drucker, 1988; Grant, 1991a). Today’s business world is characterised by phenomena such as e-business, globalisation, higher degrees of competitiveness, fast evolution of new technology, rapidly changing client demands, as well as changing economic and political structures. In this new context companies need to develop clearly defined strategies that will give them a competitive advantage (Porter, 2001; Barney, 1991a). 
For this, organisations have to understand which capabilities they need in order to gain and maintain this competitive advantage (Barney, 1991a; Prahalad and Hamel, 1990). Organizational capabilities are based on knowledge. Thus, knowledge is a resource that forms the foundation of the company's capabilities. Capabilities combine to", "title": "" }, { "docid": "d909528f98e49f8107bf0cee7a83bbfe", "text": "INTRODUCTION\nThe increasing use of cone-beam computed tomography in orthodontics has been coupled with heightened concern about the long-term risks of x-ray exposure in orthodontic populations. An industry response to this has been to offer low-exposure alternative scanning options in newer cone-beam computed tomography models.\n\n\nMETHODS\nEffective doses resulting from various combinations of field of view size and field location comparing child and adult anthropomorphic phantoms with the recently introduced i-CAT FLX cone-beam computed tomography unit (Imaging Sciences, Hatfield, Pa) were measured with optical stimulated dosimetry using previously validated protocols. Scan protocols included high resolution (360° rotation, 600 image frames, 120 kV[p], 5 mA, 7.4 seconds), standard (360°, 300 frames, 120 kV[p], 5 mA, 3.7 seconds), QuickScan (180°, 160 frames, 120 kV[p], 5 mA, 2 seconds), and QuickScan+ (180°, 160 frames, 90 kV[p], 3 mA, 2 seconds). Contrast-to-noise ratio was calculated as a quantitative measure of image quality for the various exposure options using the QUART DVT phantom.\n\n\nRESULTS\nChild phantom doses were on average 36% greater than adult phantom doses. QuickScan+ protocols resulted in significantly lower doses than standard protocols for the child (P = 0.0167) and adult (P = 0.0055) phantoms. The 13 × 16-cm cephalometric fields of view ranged from 11 to 85 μSv in the adult phantom and 18 to 120 μSv in the child phantom for the QuickScan+ and standard protocols, respectively. The contrast-to-noise ratio was reduced by approximately two thirds when comparing QuickScan+ with standard exposure parameters.\n\n\nCONCLUSIONS\nQuickScan+ effective doses are comparable with conventional panoramic examinations. Significant dose reductions are accompanied by significant reductions in image quality. However, this trade-off might be acceptable for certain diagnostic tasks such as interim assessment of treatment results.", "title": "" }, { "docid": "6f56fca8d3df57619866d9520f79e1a8", "text": "This paper explores how the remaining useful life (RUL) can be assessed for complex systems whose internal state variables are either inaccessible to sensors or hard to measure under operational conditions. Consequently, inference and estimation techniques need to be applied on indirect measurements, anticipated operational conditions, and historical data for which a Bayesian statistical approach is suitable. Models of electrochemical processes in the form of equivalent electric circuit parameters were combined with statistical models of state transitions, aging processes, and measurement fidelity in a formal framework. 
Relevance vector machines (RVMs) and several different particle filters (PFs) are examined for remaining life prediction and for providing uncertainty bounds. Results are shown on battery data.", "title": "" }, { "docid": "b32b16971f9dd1375785a85617b3bd2a", "text": "White matter hyperintensities (WMHs) in the brain are the consequence of cerebral small vessel disease, and can easily be detected on MRI. Over the past three decades, research has shown that the presence and extent of white matter hyperintense signals on MRI are important for clinical outcome, in terms of cognitive and functional impairment. Large, longitudinal population-based and hospital-based studies have confirmed a dose-dependent relationship between WMHs and clinical outcome, and have demonstrated a causal link between large confluent WMHs and dementia and disability. Adequate differential diagnostic assessment and management is of the utmost importance in any patient, but most notably those with incipient cognitive impairment. Novel imaging techniques such as diffusion tensor imaging might reveal subtle damage before it is visible on standard MRI. Even in Alzheimer disease, which is thought to be primarily caused by amyloid, vascular pathology, such as small vessel disease, may be of greater importance than amyloid itself in terms of influencing the disease course, especially in older individuals. Modification of risk factors for small vessel disease could be an important therapeutic goal, although evidence for effective interventions is still lacking. Here, we provide a timely Review on WMHs, including their relationship with cognitive decline and dementia.", "title": "" }, { "docid": "dfccff16f4600e8cc297296481e50b7b", "text": "Trust models have been recently suggested as an effective security mechanism for Wireless Sensor Networks (WSNs). Considerable research has been done on modeling trust. However, most current research work only takes communication behavior into account to calculate sensor nodes' trust value, which is not enough for trust evaluation due to the widespread malicious attacks. In this paper, we propose an Efficient Distributed Trust Model (EDTM) for WSNs. First, according to the number of packets received by sensor nodes, direct trust and recommendation trust are selectively calculated. Then, communication trust, energy trust and data trust are considered during the calculation of direct trust. Furthermore, trust reliability and familiarity are defined to improve the accuracy of recommendation trust. The proposed EDTM can evaluate trustworthiness of sensor nodes more precisely and prevent the security breaches more effectively. Simulation results show that EDTM outperforms other similar models, e.g., NBBTE trust model.", "title": "" }, { "docid": "3f206b161dc55aea204dda594127bf3d", "text": "A key challenge in fine-grained recognition is how to find and represent discriminative local regions. Recent attention models are capable of learning discriminative region localizers only from category labels with reinforcement learning. However, not utilizing any explicit part information, they are not able to accurately find multiple distinctive regions. In this work, we introduce an attribute-guided attention localization scheme where the local region localizers are learned under the guidance of part attribute descriptions. By designing a novel reward strategy, we are able to learn to locate regions that are spatially and semantically distinctive with reinforcement learning algorithm. 
The attribute labeling requirement of the scheme is more amenable than the accurate part location annotation required by traditional part-based fine-grained recognition methods. Experimental results on the CUB-200-2011 dataset [1] demonstrate the superiority of the proposed scheme on both fine-grained recognition and attribute recognition.", "title": "" }, { "docid": "c4387f3c791acc54d0a0655221947c8b", "text": "An emerging Internet application, IPTV, has the potential to flood Internet access and backbone ISPs with massive amounts of new traffic. Although many architectures are possible for IPTV video distribution, several mesh-pull P2P architectures have been successfully deployed on the Internet. In order to gain insights into mesh-pull P2P IPTV systems and the traffic loads they place on ISPs, we have undertaken an in-depth measurement study of one of the most popular IPTV systems, namely, PPLive. We have developed a dedicated PPLive crawler, which enables us to study the global characteristics of the mesh-pull PPLive system. We have also collected extensive packet traces for various different measurement scenarios, including both campus access networks and residential access networks. The measurement results obtained through these platforms bring important insights into P2P IPTV systems. Specifically, our results show the following. 1) P2P IPTV users have the similar viewing behaviors as regular TV users. 2) During its session, a peer exchanges video data dynamically with a large number of peers. 3) A small set of super peers act as video proxy and contribute significantly to video data uploading. 4) Users in the measured P2P IPTV system still suffer from long start-up delays and playback lags, ranging from several seconds to a couple of minutes. Insights obtained in this study will be valuable for the development and deployment of future P2P IPTV systems.", "title": "" }, { "docid": "52fd33335eb177f989ae1b754527327a", "text": "For robot tutors, autonomy and personalizations are important factors in order to engage users as well as to personalize the content and interaction according to the needs of individuals. This paper presents the Programming Cognitive Robot (ProCRob) software architecture to target personalized social robotics in two complementary ways. ProCRob supports the development and personalization of social robot applications by teachers and therapists without computer programming background. It also supports the development of autonomous robots which can adapt according to the human-robot interaction context. ProCRob is based on our previous research on autonomous robotics and has been developed since 2015 by a multi-disciplinary team of researchers from the fields of AI, Robotics and Psychology as well as artists and designers at the University of Luxembourg. ProCRob is currently being used and further developed for therapy of children with autism, and for encouraging rehabilitation activities in patients with post-stroke. This paper presents a summary of ProCRob and its application in autism.", "title": "" }, { "docid": "5da804fa4c1474e27a1c91fcf5682e20", "text": "We present an overview of Candide, a system for automatic translation of French text to English text. Candide uses methods of information theory and statistics to develop a probability model of the translation process. 
This model, which is made to accord as closely as possible with a large body of French and English sentence pairs, is then used to generate English translations of previously unseen French sentences. This paper provides a tutorial in these methods, discussions of the training and operation of the system, and a summary of test results. 1. Introduction Candide is an experimental computer program, now in its fifth year of development at IBM, for translation of French text to English text. Our goal is to perform fully-automatic, high-quality text-to-text translation. However, because we are still far from achieving this goal, the program can be used in both fully-automatic and translator's-assistant modes. Our approach is founded upon the statistical analysis of language. Our chief tools are the source-channel model of communication, parametric probability models of language and translation, and an assortment of numerical algorithms for training such models from examples. This paper presents elementary expositions of each of these ideas, and explains how they have been assembled to produce Candide. In Section 2 we introduce the necessary ideas from information theory and statistics. The reader is assumed to know elementary probability theory at the level of [1]. In Sections 3 and 4 we discuss our language and translation models. In Section 5 we describe the operation of Candide as it translates a French document. In Section 6 we present results of our internal evaluations and the ARPA Machine Translation Project evaluations. Section 7 is a summary and conclusion. 2. Statistical Translation Consider the problem of translating French text to English text. Given a French sentence f, we imagine that it was originally rendered as an equivalent English sentence e. To obtain the French, the English was transmitted over a noisy communication channel, which has the curious property that English sentences sent into it emerge as their French translations. The central assumption of Candide's design is that the characteristics of this channel can be determined experimentally, and expressed mathematically. Figure 1: The Source-Channel Formalism of Translation. Here f is the French text to be translated, e is the putative original English rendering, and ê is the English translation. This formalism can be exploited to yield French-to-English translations as follows. Let us write Pr(e | f) for the probability that e was the original English rendering of the French f. Given a French sentence f, the problem of automatic translation reduces to finding the English sentence that maximizes Pr(e | f). That is, we seek ê = argmax_e Pr(e | f). By virtue of Bayes' Theorem, we have ê = argmax_e Pr(e | f) = argmax_e Pr(f | e)Pr(e) (1). The term Pr(f | e) models the probability that f emerges from the channel when e is its input. We call this function the translation model; its domain is all pairs (f, e) of French and English word-strings. The term Pr(e) models the a priori probability that e was supplied as the channel input. We call this function the language model. Each of these factors, the translation model and the language model, independently produces a score for a candidate English translation e. The translation model ensures that the words of e express the ideas of f, and the language model ensures that e is a grammatical sentence. 
Candide selects as its translation the e that maximizes their product. This discussion begs two important questions. First, where do the models Pr(f | e) and Pr(e) come from? Second, even if we can get our hands on them, how can we search the set of all English strings to find ê? These questions are addressed in the next two sections. 2.1. Probability Models We begin with a brief detour into probability theory. A probability model is a mathematical formula that purports to express the chance of some observation. A parametric model is a probability model with adjustable parameters, which can be changed to make the model better match some body of data. Let us write c for a body of data to be modeled, and θ for a vector of parameters. The quantity Pr_θ(c), computed according to some formula involving c and θ, is called the likelihood.", "title": "" }, { "docid": "a44264e4c382204606fdb140ab485617", "text": "Atrophoderma vermiculata is a rare genodermatosis with usual onset in childhood, characterized by a \"honey-combed\" reticular atrophy of the cheeks. The course is generally slow, with progressive worsening. We report successful treatment of 2 patients by means of the carbon dioxide and 585 nm pulsed dye lasers.", "title": "" }, { "docid": "ac08bc7d30b03fcb5cbe9f6354235ccd", "text": "The type III secretion (T3S) pathway allows bacteria to inject effector proteins into the cytosol of target animal or plant cells. T3S systems evolved into seven families that were distributed among Gram-negative bacteria by horizontal gene transfer. There are probably a few hundred effectors interfering with control and signaling in eukaryotic cells and offering a wealth of new tools to cell biologists.", "title": "" }, { "docid": "e96cf46cc99b3eff60d32f3feb8afc47", "text": "We present a field programmable gate array (FPGA) based implementation of the popular Viola-Jones face detection algorithm, which is an essential building block in many applications such as video surveillance and tracking. Our implementation is a complete system level hardware design described in a hardware description language and validated on the affordable DE2-115 evaluation board. Our primary objective is to study the achievable performance with a low-end FPGA chip based implementation. In addition, we release to the public domain the entire project. We hope that this will enable other researchers to easily replicate and compare their results to ours and that it will encourage and facilitate further research and educational ideas in the areas of image processing, computer vision, and advanced digital design and FPGA prototyping. 2017 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).", "title": "" }, { "docid": "42d861f1b332db23e5dca67b6247828d", "text": "Information systems and intelligent knowledge processing are playing an increasing role in business, science and technology. Recently, advanced information systems have evolved to facilitate the co-evolution of human and information networks within communities. These advanced information systems use various paradigms including artificial intelligence, knowledge management, and neural science as well as conventional information processing paradigms.", "title": "" }, { "docid": "db0581e9f46516ee1ed26937bbec515b", "text": "In this paper we address the problem of offline Arabic handwriting word recognition. 
Offline recognition of handwritten words is a difficult task due to the high variability and uncertainty of human writing. The majority of recent systems are constrained by the size of the lexicon they deal with and by the number of writers. In this paper, we propose an approach for multi-writer Arabic handwritten word recognition using multiple Bayesian networks. First, we cut the image into several blocks. For each block, we compute a vector of descriptors. Then, we use K-means to cluster the low-level features, including Zernike and Hu moments. Finally, we apply four variants of Bayesian network classifiers (Naïve Bayes, Tree Augmented Naïve Bayes (TAN), Forest Augmented Naïve Bayes (FAN) and dynamic Bayesian network (DBN)) to classify whole images of Tunisian city names. The results demonstrate that FAN and DBN achieve good recognition rates.", "title": "" }, { "docid": "6f6733c35f78b00b771cf7099c953954", "text": "This paper proposes an asymmetrical pulse width modulation (APWM) with frequency tracking control of a full-bridge series resonant inverter for induction heating applications. In this method, APWM is used for power regulation, and a phase-locked loop (PLL) is used to attain zero-voltage switching (ZVS) over a wide load range. The complete closed-loop control model is obtained using small-signal analysis. The validity of the proposed control is verified by simulation results.", "title": "" }, { "docid": "5e0bcb6cf54879c65e9da7a08d97bc6b", "text": "The present study made an attempt to analyze the existing buying behaviour of Instant Food Products by individual households and to predict the demand for Instant Food Products of Hyderabad city in Andhra Pradesh. All the respondents were aware of pickles and Sambar masala but only 56.67 per cent of respondents were aware of Dosa/Idli mix. About 96.11 per cent of consumers of Dosa/Idli mix and more than half of consumers of pickles and Sambar masala prepared their own. Low cost of home preparation and differences in tastes were the major reasons for non-consumption, whereas ready availability and the time saved in preparation were the reasons for consuming Instant Food Products. Retail shops are the major source of information and source of purchase of Instant Food Products. The average monthly expenditure on Instant Food Products was found to be highest in higher income groups. The average per capita purchase and per capita expenditure on Instant Food Products had a positive relationship with income of households. High price and poor taste were the reasons for not purchasing a particular brand, whereas best quality, retailers' influence and ready availability were considered for preferring a particular brand of products by the consumers.", "title": "" }, { "docid": "8bd367e82f7a5c046f6887c5edbf51c5", "text": "Internet of Things (IoT) is a fast-growing innovation that will greatly change the way humans live. It can be thought of as the next big step in Internet technology. What really enable IoT to be a possibility are the various technologies that build it up. The IoT architecture mainly requires two types of technologies: data acquisition technologies and networking technologies. Many technologies are currently present that aim to serve as components to the IoT paradigm. 
This paper aims to categorize the various technologies present that are commonly used by Internet of Things.", "title": "" }, { "docid": "b91e67b9ae7dbad0100c0fa98d2203e5", "text": "We develop a flexible Conditional Random Field framework for supervised preference aggregation, which combines preferences from multiple experts over items to form a distribution over rankings. The distribution is based on an energy comprised of unary and pairwise potentials allowing us to effectively capture correlations between both items and experts. We describe procedures for learning in this modelnand demonstrate that inference can be done much more efficiently thannin analogous models. Experiments on benchmark tasks demonstrate significant performance gains over existing rank aggregation methods.", "title": "" } ]
scidocsrr
e5acf9f83c5142fe6b9a57179ce7787b
Friending your way up the ladder: Connecting massive multiplayer online game behaviors with offline leadership
[ { "docid": "90e76229ff20e253d8d28e09aad432dc", "text": "Playing online games is experience-oriented but few studies have explored the user’s initial (trial) reaction to game playing and how this further influences a player’s behavior. Drawing upon the Uses and Gratifications theory, we investigated players’ multiple gratifications for playing (i.e. achievement, enjoyment and social interaction) and their experience with the service mechanisms offered after they had played an online game. This study explores the important antecedents of players’ proactive ‘‘stickiness” to a specific online game and examines the relationships among these antecedents. The results show that both the gratifications and service mechanisms significantly affect a player’s continued motivation to play, which is crucial to a player’s proactive stickiness to an online game. 2010 Elsevier Ltd. All rights reserved.", "title": "" } ]
[ { "docid": "b8be5a7904829b247436fa9c544110a6", "text": "Realization of Randomness had always been a controversial concept with great importance both from theoretical and practical Perspectives. This realization has been revolutionized in the light of recent studies especially in the realms of Chaos Theory, Algorithmic Information Theory and Emergent behavior in complex systems. We briefly discuss different definitions of Randomness and also different methods for generating it. The connection between all these approaches and the notion of Normality as the necessary condition of being unpredictable would be discussed. Then a complex-system-based Random Number Generator would be introduced. We will analyze its paradoxical features (Conservative Nature and reversibility in spite of having considerable variation) by using information theoretic measures in connection with other measures. The evolution of this Random Generator is equivalent to the evolution of its probabilistic description in terms of probability distribution over blocks of different lengths. By getting the aid of simulations we will show the ability of this system to preserve normality during the process of coarse graining. Keywords—Random number generators; entropy; correlation information; elementary cellular automata; reversibility", "title": "" }, { "docid": "6c4b027910830aea8e679720232cacf4", "text": "In this paper we introduce a new, high-quality, dataset of images containing fruits. We also present the results of some numerical experiment for training a neural network to detect fruits. We discuss the reason why we chose to use fruits in this project by proposing a few applications that could use such classifier.", "title": "" }, { "docid": "d04a6ca9c09b8c10daf64c9f7830c992", "text": "Slave servo clocks have an essential role in hardware and software synchronization techniques based on Precision Time Protocol (PTP). The objective of servo clocks is to remove the drift between slave and master nodes, while keeping the output timing jitter within given uncertainty boundaries. Up to now, no univocal criteria exist for servo clock design. In fact, the relationship between controller design, performances and uncertainty sources is quite evanescent. In this paper, we propose a quite simple, but exhaustive linear model, which is expected to be used in the design of enhanced servo clock architectures.", "title": "" }, { "docid": "bc4b1b48794f9db934c705ef3821cdcf", "text": "Expanding access to financial services holds the promise to help reduce poverty and spur economic development. But, as a practical matter, commercial banks have faced challenges expanding access to poor and low-income households in developing economies, and nonprofits have had limited reach. We review recent innovations that are improving the quantity and quality of financial access. They are taking possibilities well beyond early models centered on providing “microcredit” for small business investment. We focus on new credit mechanisms and devices that help households manage cash flows, save, and cope with risk. Our eye is on contract designs, product innovations, regulatory policy, and ultimately economic and social impacts. 
We relate the innovations and empirical evidence to theoretical ideas, drawing links in particular to new work in behavioral economics and to randomized evaluation methods.", "title": "" }, { "docid": "c26eabb377db5f1033ec6d354d890a6f", "text": "Recurrent neural networks have recently shown significant potential in different language applications, ranging from natural language processing to language modelling. This paper introduces a research effort to use such networks to develop and evaluate natural language acquisition on a humanoid robot. Here, the problem is twofold. First, the focus will be put on using the gesture-word combination stage observed in infants to transition from single to multi-word utterances. Secondly, research will be carried out in the domain of connecting action learning with language learning. In the former, the long-short term memory architecture will be implemented, whilst in the latter multiple time-scale recurrent neural networks will be used. This will allow for comparison between the two architectures, whilst highlighting the strengths and shortcomings of both with respect to the language learning problem. Here, the main research efforts, challenges and expected outcomes are described.", "title": "" }, { "docid": "8e6debae3b3d3394e87e671a14f8819e", "text": "Access to large, diverse RGB-D datasets is critical for training RGB-D scene understanding algorithms. However, existing datasets still cover only a limited number of views or a restricted scale of spaces. In this paper, we introduce Matterport3D, a large-scale RGB-D dataset containing 10,800 panoramic views from 194,400 RGB-D images of 90 building-scale scenes. Annotations are provided with surface reconstructions, camera poses, and 2D and 3D semantic segmentations. The precise global alignment and comprehensive, diverse panoramic set of views over entire buildings enable a variety of supervised and self-supervised computer vision tasks, including keypoint matching, view overlap prediction, normal prediction from color, semantic segmentation, and region classification.", "title": "" }, { "docid": "c1ca7ef76472258c6359111dd4d014d5", "text": "Online forums contain huge amounts of valuable user-generated content. In current forum systems, users have to passively wait for other users to visit the forum systems and read/answer their questions. The user experience for question answering suffers from this arrangement. In this paper, we address the problem of \"pushing\" the right questions to the right persons, the objective being to obtain quick, high-quality answers, thus improving user satisfaction. We propose a framework for the efficient and effective routing of a given question to the top-k potential experts (users) in a forum, by utilizing both the content and structures of the forum system. First, we compute the expertise of users according to the content of the forum system—-this is to estimate the probability of a user being an expert for a given question based on the previous question answering of the user. Specifically, we design three models for this task, including a profile-based model, a thread-based model, and a cluster-based model. Second, we re-rank the user expertise measured in probability by utilizing the structural relations among users in a forum system. The results of the two steps can be integrated naturally in a probabilistic model that computes a final ranking score for each user. 
Experimental results show that the proposals are very promising.", "title": "" }, { "docid": "15a56973f3751dbc069fe62cd076682c", "text": "The software QBlade under General Public License is used for analysis and design of wind turbines. QBlade uses the Blade Element Momentum (BEM) method for the simulation of wind turbines and it is integrated with the XFOIL airfoil design and analysis. It is possible to predict wind turbine performance with it. Nowadays, Computational Fluid Dynamics (CFD) is used for optimization and design of turbine application. In this study, Horizontal wind turbine with a rotor diameter of 2 m, was designed and objected to performance analysis by QBlade and Ansys-Fluent. The graphic of the power coefficient vs. tip speed ratio (TSR) was obtained for each result. When the results are compared, the good agreement has been seen.", "title": "" }, { "docid": "13150a58d86b796213501d26e4b41e5b", "text": "In this work, CoMoO4@NiMoO4·xH2O core-shell heterostructure electrode is directly grown on carbon fabric (CF) via a feasible hydrothermal procedure with CoMoO4 nanowires (NWs) as the core and NiMoO4 nanosheets (NSs) as the shell. This core-shell heterostructure could provide fast ion and electron transfer, a large number of active sites, and good strain accommodation. As a result, the CoMoO4@NiMoO4·xH2O electrode yields high-capacitance performance with a high specific capacitance of 1582 F g-1, good cycling stability with the capacitance retention of 97.1% after 3000 cycles and good rate capability. The electrode also shows excellent mechanical flexibility. Also, a flexible Fe2O3 nanorods/CF electrode with enhanced electrochemical performance was prepared. A solid-state asymmetric supercapacitor device is successfully fabricated by using flexible CoMoO4@NiMoO4·xH2O as the positive electrode and Fe2O3 as the negative electrode. The asymmetric supercapacitor with a maximum voltage of 1.6 V demonstrates high specific energy (41.8 Wh kg-1 at 700 W kg-1), high power density (12000 W kg-1 at 26.7 Wh kg-1), and excellent cycle ability with the capacitance retention of 89.3% after 5000 cycles (at the current density of 3A g-1).", "title": "" }, { "docid": "17642e2f5ac7d6594df72deacab332fb", "text": "Paraphrase patterns are semantically equivalent patterns, which are useful in both paraphrase recognition and generation. This paper presents a pivot approach for extracting paraphrase patterns from bilingual parallel corpora, whereby the paraphrase patterns in English are extracted using the patterns in another language as pivots. We make use of log-linear models for computing the paraphrase likelihood between pattern pairs and exploit feature functions based on maximum likelihood estimation (MLE), lexical weighting (LW), and monolingual word alignment (MWA). Using the presented method, we extract more than 1 million pairs of paraphrase patterns from about 2 million pairs of bilingual parallel sentences. The precision of the extracted paraphrase patterns is above 78%. Experimental results show that the presented method significantly outperforms a well-known method called discovery of inference rules from text (DIRT). Additionally, the log-linear model with the proposed feature functions are effective. The extracted paraphrase patterns are fully analyzed. 
Especially, we found that the extracted paraphrase patterns can be classified into five types, which are useful in multiple natural language processing (NLP) applications.", "title": "" }, { "docid": "b0c62e2049ea4f8ada0d506e06adb4bb", "text": "In the past year, convolutional neural networks have been shown to perform extremely well for stereo estimation. However, current architectures rely on siamese networks which exploit concatenation followed by further processing layers, requiring a minute of GPU computation per image pair. In contrast, in this paper we propose a matching network which is able to produce very accurate results in less than a second of GPU computation. Towards this goal, we exploit a product layer which simply computes the inner product between the two representations of a siamese architecture. We train our network by treating the problem as multi-class classification, where the classes are all possible disparities. This allows us to get calibrated scores, which result in much better matching performance when compared to existing approaches.", "title": "" }, { "docid": "9c8f6dddcb9bb099eea4433534cb40da", "text": "There has been an increasing interest in the applications of polarimetric microwave radiometers for ocean wind remote sensing. Aircraft and spaceborne radiometers have found significant wind direction signals in sea surface brightness temperatures, in addition to their sensitivities on wind speeds. However, it is not yet understood what physical scattering mechanisms produce the observed wind direction dependence. To this end, polarimetric microwave emissions from wind-generated sea surfaces are investigated with a polarimetric two-scale scattering model of sea surfaces, which relates the directional wind-wave spectrum to passive microwave signatures of sea surfaces. Theoretical azimuthal modulations are found to agree well with experimental observations for all Stokes parameters from near-nadir to 65° incidence angles. The up/downwind asymmetries of brightness temperatures are interpreted using the hydrodynamic modulation. The contributions of Bragg scattering by short waves, geometric optics scattering by long waves and sea foam are examined. The geometric optics scattering mechanism underestimates the directional signals in the first three Stokes parameters, and most importantly it predicts no signals in the fourth Stokes parameter (V), in disagreement with experimental data. In contrast, the Bragg scattering contributes to most of the wind direction signals from the two-scale model and correctly predicts the phase changes of the up/crosswind asymmetries in TU from middle to high incidence angles. The accuracy of the Bragg scattering theory for radiometric emission from water ripples is corroborated by the numerical Monte Carlo simulation of rough surface scattering. This theoretical interpretation indicates the potential use of polarimetric brightness temperatures for retrieving the directional wave spectrum of capillary waves.", "title": "" }, { "docid": "5d417375c4ce7c47a90808971f215c91", "text": "While the RGB2GRAY conversion with fixed parameters is a classical and widely used tool for image decolorization, recent studies showed that adapting weighting parameters in a two-order multivariance polynomial model has great potential to improve the conversion ability. 
In this paper, by viewing the two-order model as the sum of three subspaces, it is observed that the first subspace in the two-order model has the dominating importance and the second and the third subspace can be seen as refinement. Therefore, we present a semiparametric strategy to take advantage of both the RGB2GRAY and the two-order models. In the proposed method, the RGB2GRAY result on the first subspace is treated as an immediate grayed image, and then the parameters in the second and the third subspace are optimized. Experimental results show that the proposed approach is comparable to other state-of-the-art algorithms in both quantitative evaluation and visual quality, especially for images with abundant colors and patterns. This algorithm also exhibits good resistance to noise. In addition, instead of the color contrast preserving ratio using the first-order gradient for decolorization quality metric, the color contrast correlation preserving ratio utilizing the second-order gradient is calculated as a new perceptual quality metric.", "title": "" }, { "docid": "60d21d395c472eb36bdfd014c53d918a", "text": "We introduce a fully differentiable approximation to higher-order inference for coreference resolution. Our approach uses the antecedent distribution from a span-ranking architecture as an attention mechanism to iteratively refine span representations. This enables the model to softly consider multiple hops in the predicted clusters. To alleviate the computational cost of this iterative process, we introduce a coarse-to-fine approach that incorporates a less accurate but more efficient bilinear factor, enabling more aggressive pruning without hurting accuracy. Compared to the existing state-of-the-art span-ranking approach, our model significantly improves accuracy on the English OntoNotes benchmark, while being far more computationally efficient.", "title": "" }, { "docid": "4fa7f7f723c2f2eee4c0e2c294273c74", "text": "Tracking human vital signs of breathing and heart rates during sleep is important as it can help to assess the general physical health of a person and provide useful clues for diagnosing possible diseases. Traditional approaches (e.g., Polysomnography (PSG)) are limited to clinic usage. Recent radio frequency (RF) based approaches require specialized devices or dedicated wireless sensors and are only able to track breathing rate. In this work, we propose to track the vital signs of both breathing rate and heart rate during sleep by using off-the-shelf WiFi without any wearable or dedicated devices. Our system re-uses existing WiFi network and exploits the fine-grained channel information to capture the minute movements caused by breathing and heart beats. Our system thus has the potential to be widely deployed and perform continuous long-term monitoring. The developed algorithm makes use of the channel information in both time and frequency domain to estimate breathing and heart rates, and it works well when either individual or two persons are in bed. 
Our extensive experiments demonstrate that our system can accurately capture vital signs during sleep under realistic settings, and achieve comparable or even better performance comparing to traditional and existing approaches, which is a strong indication of providing non-invasive, continuous fine-grained vital signs monitoring without any additional cost.", "title": "" }, { "docid": "e6d5781d32e76d9c5f7c4ea985568986", "text": "We present a baseline convolutional neural network (CNN) structure and image preprocessing methodology to improve facial expression recognition algorithm using CNN. To analyze the most efficient network structure, we investigated four network structures that are known to show good performance in facial expression recognition. Moreover, we also investigated the effect of input image preprocessing methods. Five types of data input (raw, histogram equalization, isotropic smoothing, diffusion-based normalization, difference of Gaussian) were tested, and the accuracy was compared. We trained 20 different CNN models (4 networks × 5 data input types) and verified the performance of each network with test images from five different databases. The experiment result showed that a three-layer structure consisting of a simple convolutional and a max pooling layer with histogram equalization image input was the most efficient. We describe the detailed training procedure and analyze the result of the test accuracy based on considerable observation.", "title": "" }, { "docid": "1e2a64369279d178ee280ed7e2c0f540", "text": "We describe what is to our knowledge a novel technique for phase unwrapping. Several algorithms based on unwrapping the most-reliable pixels first have been proposed. These were restricted to continuous paths and were subject to difficulties in defining a starting pixel. The technique described here uses a different type of reliability function and does not follow a continuous path to perform the unwrapping operation. The technique is explained in detail and illustrated with a number of examples.", "title": "" }, { "docid": "ed9d72566cdf3e353bf4b1e589bf85eb", "text": "In the last few years progress has been made in understanding basic mechanisms involved in damage to the inner ear and various potential therapeutic approaches have been developed. It was shown that hair cell loss mediated by noise or toxic drugs may be prevented by antioxidants, inhibitors of intracellular stress pathways and neurotrophic factors/neurotransmission blockers. Moreover, there is hope that once hair cells are lost, their regeneration can be induced or that stem cells can be used to build up new hair cells. However, although tremendous progress has been made, most of the concepts discussed in this review are still in the \"animal stage\" and it is difficult to predict which approach will finally enter clinical practice. In my opinion it is highly probable that some concepts of hair cell protection will enter clinical practice first, while others, such as the use of stem cells to restore hearing, are still far from clinical utility.", "title": "" }, { "docid": "f6227013273d148321cab1eef83c40e5", "text": "The advanced features of 5G mobile wireless network systems yield new security requirements and challenges. This paper presents a comprehensive study on the security of 5G wireless network systems compared with the traditional cellular networks. 
The paper starts with a review on 5G wireless networks particularities as well as on the new requirements and motivations of 5G wireless security. The potential attacks and security services are summarized with the consideration of new service requirements and new use cases in 5G wireless networks. The recent development and the existing schemes for the 5G wireless security are presented based on the corresponding security services, including authentication, availability, data confidentiality, key management, and privacy. This paper further discusses the new security features involving different technologies applied to 5G, such as heterogeneous networks, device-to-device communications, massive multiple-input multiple-output, software-defined networks, and Internet of Things. Motivated by these security research and development activities, we propose a new 5G wireless security architecture, based on which the analysis of identity management and flexible authentication is provided. As a case study, we explore a handover procedure as well as a signaling load scheme to show the advantages of the proposed security architecture. The challenges and future directions of 5G wireless security are finally summarized.", "title": "" }, { "docid": "c612ee4ad1b4daa030e86a59543ca53b", "text": "The dominant approach for many NLP tasks are recurrent neural networks, in particular LSTMs, and convolutional neural networks. However, these architectures are rather shallow in comparison to the deep convolutional networks which are very successful in computer vision. We present a new architecture for text processing which operates directly on the character level and uses only small convolutions and pooling operations. We are able to show that the performance of this model increases with the depth: using up to 29 convolutional layers, we report significant improvements over the state-of-the-art on several public text classification tasks. To the best of our knowledge, this is the first time that very deep convolutional nets have been applied to NLP.", "title": "" } ]
scidocsrr
db86988618b0f2e30c4f824784eba8ff
A phase space model of Fourier ptychographic microscopy.
[ { "docid": "0cce6366df945f079dbb0b90d79b790e", "text": "Fourier ptychographic microscopy (FPM) is a recently developed imaging modality that uses angularly varying illumination to extend a system's performance beyond the limit defined by its optical components. The FPM technique applies a novel phase-retrieval procedure to achieve resolution enhancement and complex image recovery. In this Letter, we compare FPM data to theoretical prediction and phase-shifting digital holography measurement to show that its acquired phase maps are quantitative and artifact-free. We additionally explore the relationship between the achievable spatial and optical thickness resolution offered by a reconstructed FPM phase image. We conclude by demonstrating enhanced visualization and the collection of otherwise unobservable sample information using FPM's quantitative phase.", "title": "" } ]
[ { "docid": "6d728174d576ac785ff093f4cdc16e1b", "text": "The stress-inducible protein heme oxygenase-1 provides protection against oxidative stress. The anti-inflammatory properties of heme oxygenase-1 may serve as a basis for this cytoprotection. We demonstrate here that carbon monoxide, a by-product of heme catabolism by heme oxygenase, mediates potent anti-inflammatory effects. Both in vivo and in vitro, carbon monoxide at low concentrations differentially and selectively inhibited the expression of lipopolysaccharide-induced pro-inflammatory cytokines tumor necrosis factor-α, interleukin-1β, and macrophage inflammatory protein-1β and increased the lipopolysaccharide-induced expression of the anti-inflammatory cytokine interleukin-10. Carbon monoxide mediated these anti-inflammatory effects not through a guanylyl cyclase–cGMP or nitric oxide pathway, but instead through a pathway involving the mitogen-activated protein kinases. These data indicate the possibility that carbon monoxide may have an important protective function in inflammatory disease states and thus has potential therapeutic uses.", "title": "" }, { "docid": "b06a3c929a934633e174bfe1adab21f1", "text": "In this paper, we analyze the radio channel characteristics at mmWave frequencies for 5G cellular communications in urban scenarios. 3D-ray tracing simulations in the downtown areas of Ottawa and Chicago are conducted in both the 2 GHz and 28 GHz bands. Each area has two different deployment scenarios, with different transmitter height and different density of buildings. Based on the observations of the ray-tracing experiments, important parameters of the radio channel model, such as path loss exponent, shadowing variance, delay spread and angle spread, are provided, forming the basis of a mmWave channel model. Based on the analysis and the 3GPP 3D-Spatial Channel Model (SCM) framework, we introduce a a preliminary mmWave channel model at 28 GHz.", "title": "" }, { "docid": "89b17ff10887b84270c1d627231a0721", "text": "A novel robust adaptive beamforming method for conformal array is proposed. By using interpolation technique, the cylindrical conformal array with directional antenna elements is transformed to a virtual uniform linear array with omni-directional elements. This method can compensate the amplitude and mutual coupling errors as well as desired signal point errors of the conformal array efficiently. It is a universal method and can be applied to other curved conformal arrays. After the transformation, most of the existing adaptive beamforming algorithms can be applied to conformal array directly. The efficiency of the proposed scheme is assessed through numerical simulations.", "title": "" }, { "docid": "1389323613225897330d250e9349867b", "text": "Description: The field of data mining lies at the confluence of predictive analytics, statistical analysis, and business intelligence. Due to the ever–increasing complexity and size of data sets and the wide range of applications in computer science, business, and health care, the process of discovering knowledge in data is more relevant than ever before. This book provides the tools needed to thrive in today s big data world. The author demonstrates how to leverage a company s existing databases to increase profits and market share, and carefully explains the most current data science methods and techniques. The reader will learn data mining by doing data mining . 
By adding chapters on data modelling preparation, imputation of missing data, and multivariate statistical analysis, Discovering Knowledge in Data, Second Edition remains the eminent reference on data mining.", "title": "" }, { "docid": "554d0255aef7ffac9e923da5d93b97e3", "text": "In this demo paper, we present a text simplification approach that is directed at improving the performance of state-of-the-art Open Relation Extraction (RE) systems. As syntactically complex sentences often pose a challenge for current Open RE approaches, we have developed a simplification framework that performs a pre-processing step by taking a single sentence as input and using a set of syntactic-based transformation rules to create a textual input that is easier to process for subsequently applied Open RE systems.", "title": "" }, { "docid": "b623437391b298c2e618b0f42d3e19a9", "text": "In the era of the Social Web, crowdfunding has become an increasingly more important channel for entrepreneurs to raise funds from the crowd to support their startup projects. Previous studies examined various factors such as project goals, project durations, and categories of projects that might influence the outcomes of the fund raising campaigns. However, textual information of projects has rarely been studied for analyzing crowdfunding successes. The main contribution of our research work is the design of a novel text analytics-based framework that can extract latent semantics from the textual descriptions of projects to predict the fund raising outcomes of these projects. More specifically, we develop the Domain-Constraint Latent Dirichlet Allocation (DC-LDA) topic model for effective extraction of topical features from texts. Based on two real-world crowdfunding datasets, our experimental results reveal that the proposed framework outperforms a classical LDA-based method in predicting fund raising success by an average of 11% in terms of F1 score. The managerial implication of our research is that entrepreneurs can apply the proposed methodology to identify the most influential topical features embedded in project descriptions, and hence to better promote their projects and improving the chance of raising sufficient funds for their projects.", "title": "" }, { "docid": "07c185c21c9ce3be5754294a73ab5e3c", "text": "In order to support efficient workflow design, recent commercial workflow systems are providing templates of common business processes. These templates, called cases, can be modified individually or collectively into a new workflow to meet the business specification. However, little research has been done on how to manage workflow models, including issues such as model storage, model retrieval, model reuse and assembly. In this paper, we propose a novel framework to support workflow modeling and design by adapting workflow cases from a repository of process models. Our approach to workflow model management is based on a structured workflow lifecycle and leverages recent advances in model management and case-based reasoning techniques. Our contributions include a conceptual model of workflow cases, a similarity flooding algorithm for workflow case retrieval, and a domain-independent AI planning approach to workflow case composition.
We illustrate the workflow model management framework with a prototype system called Case-Oriented Design Assistant for Workflow Modeling (CODAW). 2004 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "11d1978a3405f63829e02ccb73dcd75f", "text": "The performance of two commercial simulation codes, Ansys Fluent and Comsol Multiphysics, is thoroughly examined for a recently established two-phase flow benchmark test case. In addition, the commercial codes are directly compared with the newly developed academic code, FeatFlow TP2D. The results from this study show that the commercial codes fail to converge and produce accurate results, and leave much to be desired with respect to direct numerical simulation of flows with free interfaces. The academic code on the other hand was shown to be computationally efficient, produced very accurate results, and outperformed the commercial codes by a magnitude or more.", "title": "" }, { "docid": "a488a74817a8401eff1373d4e21f060f", "text": "We propose a neural machine translation architecture that models the surrounding text in addition to the source sentence. These models lead to better performance, both in terms of general translation quality and pronoun prediction, when trained on small corpora, although this improvement largely disappears when trained with a larger corpus. We also discover that attention-based neural machine translation is well suited for pronoun prediction and compares favorably with other approaches that were specifically designed for this task.", "title": "" }, { "docid": "3111ef9867be7cf58be9694cbe2a14d9", "text": "Grammatical Error Diagnosis for Chinese has always been a challenge for both foreign learners and NLP researchers, for the variousity of grammar and the flexibility of expression. In this paper, we present a model based on Bidirectional Long Short-Term Memory(Bi-LSTM) neural networks, which treats the task as a sequence labeling problem, so as to detect Chinese grammatical errors, to identify the error types and to locate the error positions. In the corpora of this year’s shared task, there can be multiple errors in a single offset of a sentence, to address which, we simutaneously train three Bi-LSTM models sharing word embeddings which label Missing, Redundant and Selection errors respectively. We regard word ordering error as a special kind of word selection error which is longer during training phase, and then separate them by length during testing phase. In NLP-TEA 3 shared task for Chinese Grammatical Error Diagnosis(CGED), Our system achieved relatively high F1 for all the three levels in the traditional Chinese track and for the detection level in the Simpified Chinese track.", "title": "" }, { "docid": "40413aa7fd92e042b8c359b2cf6d2d23", "text": "Text summarization is the process of creating a short description of a specified text while preserving its information context. This paper tackles Arabic text summarization problem. The semantic redundancy and insignificance will be removed from the summarized text. This can be achieved by checking the text entailment relation, and lexical cohesion. Accordingly, a text summarization approach (called LCEAS) based on lexical cohesion and text entailment relation is developed. In LCEAS, text entailment approach is enhanced to suit Arabic language. Roots and semantic-relations are used between the senses of the words to extract the common words. New threshold values are specified to suit entailment based segmentation for Arabic text. 
LCEAS is a single document summarization, which is constructed using extraction technique. To evaluate LCEAS, its performance is compared with previous Arabic text summarization systems. Each system output is compared against Essex Arabic Summaries Corpus (EASC) corpus (the model summaries), using Recall-Oriented Understudy for Gisting Evaluation (ROUGE) and Automatic Summarization Engineering (AutoSummEng) metrics. The outcome of LCEAS indicates that the developed approach outperforms the previous Arabic text summarization systems. KeywordsText Summarization; Text Segmentation; Lexical Cohesion; Text Entailment; Natural Language Processing.", "title": "" }, { "docid": "e587b5954c957f268d21878ede3359f8", "text": "ing audit logs", "title": "" }, { "docid": "b31244421f89b32704509dfeb80702a0", "text": "Robust and fast solutions for anatomical object detection and segmentation support the entire clinical workflow from diagnosis, patient stratification, therapy planning, intervention and follow-up. Current state-of-the-art techniques for parsing volumetric medical image data are typically based on machine learning methods that exploit large annotated image databases. Two main challenges need to be addressed, these are the efficiency in scanning high-dimensional parametric spaces and the need for representative image features which require significant efforts of manual engineering. We propose a pipeline for object detection and segmentation in the context of volumetric image parsing, solving a two-step learning problem: anatomical pose estimation and boundary delineation. For this task we introduce Marginal Space Deep Learning (MSDL), a novel framework exploiting both the strengths of efficient object parametrization in hierarchical marginal spaces and the automated feature design of Deep Learning (DL) network architectures. In the 3D context, the application of deep learning systems is limited by the very high complexity of the parametrization. More specifically 9 parameters are necessary to describe a restricted affine transformation in 3D, resulting in a prohibitive amount of billions of scanning hypotheses. The mechanism of marginal space learning provides excellent run-time performance by learning classifiers in clustered, high-probability regions in spaces of gradually increasing dimensionality. To further increase computational efficiency and robustness, in our system we learn sparse adaptive data sampling patterns that automatically capture the structure of the input. Given the object localization, we propose a DL-based active shape model to estimate the non-rigid object boundary. Experimental results are presented on the aortic valve in ultrasound using an extensive dataset of 2891 volumes from 869 patients, showing significant improvements of up to 45.2% over the state-of-the-art. To our knowledge, this is the first successful demonstration of the DL potential to detection and segmentation in full 3D data with parametrized representations.", "title": "" }, { "docid": "9664431f0cfc22567e1e5c945f898595", "text": "Anomaly detection aims to detect abnormal events by a model of normality. It plays an important role in many domains such as network intrusion detection, criminal activity identity and so on. With the rapidly growing size of accessible training data and high computation capacities, deep learning based anomaly detection has become more and more popular. In this paper, a new domain-based anomaly detection method based on generative adversarial networks (GAN) is proposed. 
Minimum likelihood regularization is proposed to make the generator produce more anomalies and prevent it from converging to normal data distribution. Proper ensemble of anomaly scores is shown to improve the stability of discriminator effectively. The proposed method has achieved significant improvement than other anomaly detection methods on Cifar10 and UCI datasets.", "title": "" }, { "docid": "b79bf80221c893f40abd7fd6b8a7145a", "text": "Attention is typically used to select informative sub-phrases that are used for prediction. This paper investigates the novel use of attention as a form of feature augmentation, i.e, casted attention. We propose Multi-Cast Attention Networks (MCAN), a new attention mechanism and general model architecture for a potpourri of ranking tasks in the conversational modeling and question answering domains. Our approach performs a series of soft attention operations, each time casting a scalar feature upon the inner word embeddings. The key idea is to provide a real-valued hint (feature) to a subsequent encoder layer and is targeted at improving the representation learning process. There are several advantages to this design, e.g., it allows an arbitrary number of attention mechanisms to be casted, allowing for multiple attention types (e.g., co-attention, intra-attention) and attention variants (e.g., alignment-pooling, max-pooling, mean-pooling) to be executed simultaneously. This not only eliminates the costly need to tune the nature of the co-attention layer, but also provides greater extents of explainability to practitioners. Via extensive experiments on four well-known benchmark datasets, we show that MCAN achieves state-of-the-art performance. On the Ubuntu Dialogue Corpus, MCAN outperforms existing state-of-the-art models by 9%. MCAN also achieves the best performing score to date on the well-studied TrecQA dataset.", "title": "" }, { "docid": "486e3f5614f69f60d8703d8641c73416", "text": "The Great East Japan Earthquake and Tsunami drastically changed Japanese society, and the requirements for ICT was completely redefined. After the disaster, it was impossible for disaster victims to utilize their communication devices, such as cellular phones, tablet computers, or laptop computers, to notify their families and friends of their safety and confirm the safety of their loved ones since the communication infrastructures were physically damaged or lacked the energy necessary to operate. Due to this drastic event, we have come to realize the importance of device-to-device communications. With the recent increase in popularity of D2D communications, many research works are focusing their attention on a centralized network operated by network operators and neglect the importance of decentralized infrastructureless multihop communication, which is essential for disaster relief applications. In this article, we propose the concept of multihop D2D communication network systems that are applicable to many different wireless technologies, and clarify requirements along with introducing open issues in such systems. The first generation prototype of relay by smartphone can deliver messages using only users' mobile devices, allowing us to send out emergency messages from disconnected areas as well as information sharing among people gathered in evacuation centers. 
The success of field experiments demonstrates steady advancement toward realizing user-driven networking powered by communication devices independent of operator networks.", "title": "" }, { "docid": "4331057bb0a3f3add576513fa71791a8", "text": "The category theoretic structures of monads and comonads can be used as an abstraction mechanism for simplifying both language semantics and programs. Monads have been used to structure impure computations, whilst comonads have been used to structure context-dependent computations. Interestingly, the class of computations structured by monads and the class of computations structured by comonads are not mutually exclusive. This paper formalises and explores the conditions under which a monad and a comonad can both structure the same notion of computation: when a comonad is left adjoint to a monad. Furthermore, we examine situations where a particular monad/comonad model of computation is deficient in capturing the essence of a computational pattern and provide a technique for calculating an alternative monad or comonad structure which fully captures the essence of the computation. Included is some discussion on how to choose between a monad or comonad structure in the case where either can be used to capture a particular notion of computation.", "title": "" }, { "docid": "70bed43cdfd50586e803bf1a9c8b3c0a", "text": "We design a way to model apps as vectors, inspired by the recent deep learning approach to vectorization of words called word2vec. Our method relies on how users use apps. In particular, we visualize the time series of how each user uses mobile apps as a “document”, and apply the recent word2vec modeling on these documents, but the novelty is that the training context is carefully weighted by the time interval between the usage of successive apps. This gives us the app2vec vectorization of apps. We apply this to industrial scale data from Yahoo! and (a) show examples that app2vec captures semantic relationships between apps, much as word2vec does with words, (b) show using Yahoo!'s extensive human evaluation system that 82% of the retrieved top similar apps are semantically relevant, achieving 37% lift over bag-of-word approach and 140% lift over matrix factorization approach to vectorizing apps, and (c) finally, we use app2vec to predict app-install conversion and improve ad conversion prediction accuracy by almost 5%. This is the first industry scale design, training and use of app vectorization.", "title": "" }, { "docid": "6cf9456d2fe55d2115fd40efbb1a8f96", "text": "We propose a new approach to the problem of optimizing autoencoders for lossy image compression. New media formats, changing hardware technology, as well as diverse requirements and content types create a need for compression algorithms which are more flexible than existing codecs. Autoencoders have the potential to address this need, but are difficult to optimize directly due to the inherent non-differentiabilty of the compression loss. We here show that minimal changes to the loss are sufficient to train deep autoencoders competitive with JPEG 2000 and outperforming recently proposed approaches based on RNNs. Our network is furthermore computationally efficient thanks to a sub-pixel architecture, which makes it suitable for high-resolution images. 
This is in contrast to previous work on autoencoders for compression using coarser approximations, shallower architectures, computationally expensive methods, or focusing on small images.", "title": "" }, { "docid": "595a31e82d857cedecd098bf4c910e99", "text": "Human actions in video sequences are three-dimensional (3D) spatio-temporal signals characterizing both the visual appearance and motion dynamics of the involved humans and objects. Inspired by the success of convolutional neural networks (CNN) for image classification, recent attempts have been made to learn 3D CNNs for recognizing human actions in videos. However, partly due to the high complexity of training 3D convolution kernels and the need for large quantities of training videos, only limited success has been reported. This has triggered us to investigate in this paper a new deep architecture which can handle 3D signals more effectively. Specifically, we propose factorized spatio-temporal convolutional networks (FstCN) that factorize the original 3D convolution kernel learning as a sequential process of learning 2D spatial kernels in the lower layers (called spatial convolutional layers), followed by learning 1D temporal kernels in the upper layers (called temporal convolutional layers). We introduce a novel transformation and permutation operator to make factorization in FstCN possible. Moreover, to address the issue of sequence alignment, we propose an effective training and inference strategy based on sampling multiple video clips from a given action video sequence. We have tested FstCN on two commonly used benchmark datasets (UCF-101 and HMDB-51). Without using auxiliary training videos to boost the performance, FstCN outperforms existing CNN based methods and achieves comparable performance with a recent method that benefits from using auxiliary training videos.", "title": "" } ]
scidocsrr
50217b0b862b3413a52784f3d2ebae5a
An Embedded System-on-Chip Architecture for Real-time Visual Detection and Matching
[ { "docid": "c797b2a78ea6eb434159fd948c0a1bf0", "text": "Feature extraction is an essential part in applications that require computer vision to recognize objects in an image processed. To extract the features robustly, feature extraction algorithms are often very demanding in computation so that the performance achieved by pure software is far from real-time. Among those feature extraction algorithms, scale-invariant feature transform (SIFT) has gained a lot of popularity recently. In this paper, we propose an all-hardware SIFT accelerator-the fastest of its kind to our knowledge. It consists of two interactive hardware components, one for key point identification, and the other for feature descriptor generation. We successfully developed a segment buffer scheme that could not only feed data to the computing modules in a data-streaming manner, but also reduce about 50% memory requirement than a previous work. With a parallel architecture incorporating a three-stage pipeline, the processing time of the key point identification is only 3.4 ms for one video graphics array (VGA) image. Taking also into account the feature descriptor generation part, the overall SIFT processing time for a VGA image can be kept within 33 ms (to support real-time operation) when the number of feature points to be extracted is fewer than 890.", "title": "" }, { "docid": "90378605e6ee192cfedf60d226f8cacf", "text": "Ever since the introduction of freely programmable hardware components into modern graphics hardware, graphics processing units (GPUs) have become increasingly popular for general purpose computations. Especially when applied to computer vision algorithms where a Single set of Instructions has to be executed on Multiple Data (SIMD), GPU-based algorithms can provide a major increase in processing speed compared to their CPU counterparts. This paper presents methods that take full advantage of modern graphics card hardware for real-time scale invariant feature detection and matching. The focus lies on the extraction of feature locations and the generation of feature descriptors from natural images. The generation of these feature-vectors is based on the Speeded Up Robust Features (SURF) method [1] due to its high stability against rotation, scale and changes in lighting condition of the processed images. With the presented methods feature detection and matching can be performed at framerates exceeding 100 frames per second for 640 times 480 images. The remaining time can then be spent on fast matching against large feature databases on the GPU while the CPU can be used for other tasks.", "title": "" } ]
[ { "docid": "5d79d7e9498d7d41fbc7c70d94e6a9ae", "text": "Reasoning about objects and their affordances is a fundamental problem for visual intelligence. Most of the previous work casts this problem as a classification task where separate classifiers are trained to label objects, recognize attributes, or assign affordances. In this work, we consider the problem of object affordance reasoning using a knowledge base representation. Diverse information of objects are first harvested from images and other meta-data sources. We then learn a knowledge base (KB) using a Markov Logic Network (MLN). Given the learned KB, we show that a diverse set of visual inference tasks can be done in this unified framework without training separate classifiers, including zeroshot affordance prediction and object recognition given human poses.", "title": "" }, { "docid": "bf4776d6d01d63d3eb6dbeba693bf3de", "text": "As the development of microprocessors, power electronic converters and electric motor drives, electric power steering (EPS) system which uses an electric motor came to use a few year ago. Electric power steering systems have many advantages over traditional hydraulic power steering systems in engine efficiency, space efficiency, and environmental compatibility. This paper deals with design and optimization of an interior permanent magnet (IPM) motor for power steering application. Simulated Annealing method is used for optimization. After optimization and finding motor parameters, An IPM motor and drive with mechanical parts of EPS system is simulated and performance evaluation of system is done.", "title": "" }, { "docid": "0533a5382c58c8714f442784b5596258", "text": "Using 2 phase-change memory (PCM) devices per synapse, a 3-layer perceptron network with 164,885 synapses is trained on a subset (5000 examples) of the MNIST database of handwritten digits using a backpropagation variant suitable for NVM+selector crossbar arrays, obtaining a training (generalization) accuracy of 82.2% (82.9%). Using a neural network (NN) simulator matched to the experimental demonstrator, extensive tolerancing is performed with respect to NVM variability, yield, and the stochasticity, linearity and asymmetry of NVM-conductance response.", "title": "" }, { "docid": "5512bb4600d4cefa79508d75bc5c6898", "text": "Spark, a subset of Ada for engineering safety and security-critical systems, is one of the best commercially available frameworks for formal-methodssupported development of critical software. Spark is designed for verification and includes a software contract language for specifying functional properties of procedures. Even though Spark and its static analysis components are beneficial and easy to use, its contract language is almost never used due to the burdens the associated tool support imposes on developers. Symbolic execution (SymExe) techniques have made significant strides in automating reasoning about deep semantic properties of source code. However, most work on SymExe has focused on bugfinding and test case generation as opposed to tasks that are more verificationoriented such as contract checking. In this paper, we present: (a) SymExe techniques for checking software contracts in embedded critical systems, and (b) Bakar Kiasan, a tool that implements these techniques in an integrated development environment for Spark. 
We describe a methodology for using Bakar Kiasan that provides significant increases in automation, usability, and functionality over existing Spark tools, and we present results from experiments on its application to industrial examples.", "title": "" }, { "docid": "4791b04d1cafd0b4a59bbfbec50ace38", "text": "The current paper proposes a slack-based version of the Super SBM, which is an alternative superefficiency model for the SBM proposed by Tone. Our two-stage approach provides the same superefficiency score as that obtained by the Super SBM model when the evaluated DMU is efficient and yields the same efficiency score as that obtained by the SBM model when the evaluated DMU is inefficient. The projection identified by the Super SBM model may not be strongly Pareto efficient; however, the projection identified from our approach is strongly Pareto efficient. & 2012 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "6c3a5cb13227b4f1333784784c1b3cb8", "text": "This is the proposal for RumourEval-2019, which will run in early 2019 as part of that year’s SemEval event. Since the first RumourEval shared task in 2017, interest in automated claim validation has greatly increased, as the dangers of “fake news” have become a mainstream concern. Yet automated support for rumour checking remains in its infancy. For this reason, it is important that a shared task in this area continues to provide a focus for effort, which is likely to increase. We therefore propose a continuation in which the veracity of further rumours is determined, and as previously, supportive of this goal, tweets discussing them are classified according to the stance they take regarding the rumour. Scope is extended compared with the first RumourEval, in that the dataset is substantially expanded to include Reddit as well as Twitter data, and additional languages are also", "title": "" }, { "docid": "17c42570f165f885062aeafe2338778d", "text": "Deep learning has made remarkable achievement in many fields. However, learning the parameters of neural networks usually demands a large amount of labeled data. The algorithms of deep learning, therefore, encounter difficulties when applied to supervised learning where only little data are available. This specific task is called few-shot learning. To address it, we propose a novel algorithm for fewshot learning using discrete geometry, in the sense that the samples in a class are modeled as a reduced simplex. The volume of the simplex is used for the measurement of class scatter. During testing, combined with the test sample and the points in the class, a new simplex is formed. Then the similarity between the test sample and the class can be quantized with the ratio of volumes of the new simplex to the original class simplex. Moreover, we present an approach to constructing simplices using local regions of feature maps yielded by convolutional neural networks. Experiments on Omniglot and miniImageNet verify the effectiveness of our simplex algorithm on few-shot learning.", "title": "" }, { "docid": "d3b0957b31f47620c0fa8e65a1cc086a", "text": "In this paper, we propose series of algorithms for detecting change points in time-series data based on subspace identification, meaning a geometric approach for estimating linear state-space models behind time-series data. Our algorithms are derived from the principle that the subspace spanned by the columns of an observability matrix and the one spanned by the subsequences of time-series data are approximately equivalent. 
In this paper, we derive a batch-type algorithm applicable to ordinary time-series data, i.e. consisting of only output series, and then introduce the online version of the algorithm and the extension to be available with input-output time-series data. We illustrate the effectiveness of our algorithms with comparative experiments using some artificial and real datasets.", "title": "" }, { "docid": "49791684a7a455acc9daa2ca69811e74", "text": "This paper analyzes the basic method of digital video image processing, studies the vehicle license plate recognition system based on image processing in intelligent transport system, presents a character recognition approach based on neural network perceptron to solve the vehicle license plate recognition in real-time traffic flow. Experimental results show that the approach can achieve better positioning effect, has a certain robustness and timeliness.", "title": "" }, { "docid": "d578c75d20e6747d0a381aee3a2c8f78", "text": "As deep web grows at a very fast pace, there has been increased interest in techniques that help efficiently locate deep-web interfaces. However, due to the large volume of web resources and the dynamic nature of deep web, achieving wide coverage and high efficiency is a challenging issue. We propose a two-stage framework, namely SmartCrawler, for efficient harvesting deep web interfaces. In the first stage, SmartCrawler performs site-based searching for center pages with the help of search engines, avoiding visiting a large number of pages. To achieve more accurate results for a focused crawl, SmartCrawler ranks websites to prioritize highly relevant ones for a given topic. In the second stage, SmartCrawler achieves fast in-site searching by excavating most relevant links with an adaptive link-ranking. To eliminate bias on visiting some highly relevant links in hidden web directories, we design a link tree data structure to achieve wider coverage for a website. Our experimental results on a set of representative domains show the agility and accuracy of our proposed crawler framework, which efficiently retrieves deep-web interfaces from large-scale sites and achieves higher harvest rates than other crawlers.", "title": "" }, { "docid": "9d9086fbdfa46ded883b14152df7f5a5", "text": "This paper presents a low power continuous time 2nd order Low Pass Butterworth filter operating at power supply of 0.5V suitably designed for biomedical applications. A 3-dB bandwidth of 100 Hz using technology node of 0.18μm is achieved. The operational transconductance amplifier is a significant building block in continuous time filter design. To achieve necessary voltage headroom a pseudo-differential architecture is used to design bulk driven transconductor. In contrast, to the gate-driven OTA bulk-driven have the ability to operate over a wide input range. The output common mode voltage of the transconductor is set by a Common Mode Feedback (CMFB) circuit. The simulation results show that the filter has a peak-to-peak signal swing of 150mV (differential) for 1% THD, a dynamic range of 74.62 dB and consumes a total power of 0.225μW when operating at a supply voltage of 0.5V. The Figure of Merit (FOM) achieved by the filter is 0.055 fJ, lowest among similar low-voltage filters found in the literature.", "title": "" }, { "docid": "49ca8739b6e28f0988b643fc97e7c6b1", "text": "Stroke is a leading cause of severe physical disability, causing a range of impairments.  
Frequently stroke survivors are left with partial paralysis on one side of the body and movement can be severely restricted in the affected side’s hand and arm. We know that effective rehabilitation must be early, intensive and repetitive, which leads to the challenge of how to maintain motivation for people undergoing therapy. This paper discusses why games may be an effective way of addressing the problem of engagement in therapy and analyses which game design patterns may be important for rehabilitation. We present a number of serious games that our group has developed for upper limb rehabilitation. Results of an evaluation of the games are presented which indicate that they may be appropriate for people with stroke.", "title": "" }, { "docid": "08c97484fe3784e2f1fd42606b915f83", "text": "In the present study we manipulated the importance of performing two event-based prospective memory tasks. In Experiment 1, the event-based task was assumed to rely on relatively automatic processes, whereas in Experiment 2 the event-based task was assumed to rely on a more demanding monitoring process. In contrast to the first experiment, the second experiment showed that importance had a positive effect on prospective memory performance. In addition, the occurrence of an importance effect on prospective memory performance seemed to be mainly due to the features of the prospective memory task itself, and not to the characteristics of the ongoing tasks that only influenced the size of the importance effect. The results suggest that importance instructions may improve prospective memory if the prospective task requires the strategic allocation of attentional monitoring resources.", "title": "" }, { "docid": "2ac1d3ce029f547213c122c0e84650b2", "text": "Notes: (1) These questions require thought, but do not require long answers. Please be as concise as possible. (2) If you have a question about this homework, we encourage you to post your question on our Piazza forum, at https://piazza.com/class#fall2012/cs229. (3) If you missed the first lecture or are unfamiliar with the collaboration or honor code policy, please read the policy on Handout #1 (available from the course website) before starting work. (4) For problems that require programming, please include in your submission a printout of your code (with comments) and any figures that you are asked to plot. (5) Please indicate the submission time and number of late dates clearly in your submission. SCPD students: Please email your solutions to cs229-qa@cs.stanford.edu with the subject line \" Problem Set 2 Submission \". The first page of your submission should be the homework routing form, which can be found on the SCPD website. Your submission (including the routing form) must be a single pdf file, or we may not be able to grade it. If you are writing your solutions out by hand, please write clearly and in a reasonably large font using a dark pen to improve legibility. 1. [15 points] Constructing kernels In class, we saw that by choosing a kernel K(x, z) = φ(x)^T φ(z), we can implicitly map data to a high dimensional space, and have the SVM algorithm work in that space. One way to generate kernels is to explicitly define the mapping φ to a higher dimensional space, and then work out the corresponding K. However in this question we are interested in direct construction of kernels.
I.e., suppose we have a function K(x, z) that we think gives an appropriate similarity measure for our learning problem, and we are considering plugging K into the SVM as the kernel function. However for K(x, z) to be a valid kernel, it must correspond to an inner product in some higher dimensional space resulting from some feature mapping φ. Mercer's theorem tells us that K(x, z) is a (Mercer) kernel if and only if for any finite set {x^(1), ..., x^(m)}, the matrix K is symmetric and positive semidefinite, where the square matrix K ∈ R^(m×m) is given by K_ij = K(x^(i), x^(j)). Now here comes the question: Let K_1, K_2 be kernels …", "title": "" }, { "docid": "ca65a232e6b93f6372d1339a11ea63f4", "text": "Over the past decade, information technology has dramatically changed the context in which economic transactions take place. Increasingly, transactions are computer-mediated, so that, relative to human-human interactions, human-computer interactions are gaining in relevance. Computer-mediated transactions, and in particular those related to the Internet, increase perceptions of uncertainty. Therefore, trust becomes a crucial factor in the reduction of these perceptions. To investigate this important construct, we studied individual trust behavior and the underlying brain mechanisms through a multi-round trust game. Participants acted in the role of an investor, playing against both humans and avatars. The behavioral results show that participants trusted avatars to a similar degree as they trusted humans. Participants also revealed similarity in learning an interaction partner’s trustworthiness, independent of whether the partner was human or avatar. However, the neuroimaging findings revealed differential responses within the brain network that is associated with theory of mind (mentalizing) depending on the interaction partner. Based on these results, the major conclusion of our study is that, in a situation of a computer with human-like characteristics (avatar), trust behavior in human-computer interaction resembles that of human-human interaction. On a deeper neurobiological level, our study reveals that thinking about an interaction partner’s trustworthiness activates the mentalizing network more strongly if the trustee is a human rather than an avatar. We discuss implications of these findings for future research.", "title": "" }, { "docid": "fcc0032fac0a13f99cafd936aeada724", "text": "This paper shows that several sorts of expressions cannot be interpreted metaphorically, including determiners, tenses, etc. Generally, functional categories cannot be interpreted metaphorically, while lexical categories can. This reveals a semantic property of functional categories, and it shows that metaphor can be used as a probe for investigating them. It also reveals an important linguistic constraint on metaphor. The paper argues this constraint applies to the interface between the cognitive systems for language and metaphor. However, the constraint does not completely prevent structural elements of language from being available to the metaphor system. The paper shows that linguistic structure within the lexicon, specifically, aspectual structure, is available to the metaphor system. This paper takes as its starting point an observation about which sorts of expressions can receive metaphorical interpretations. Surprisingly, there are a number of expressions that cannot be interpreted metaphorically. Quantifier expressions (i.e. determiners) provide a good example.
Consider a richly metaphorical sentence like: (1) Read o’er the volume of young Paris’ face, And find delight writ there with beauty’s pen; Examine every married lineament (Romeo and Juliet I.3). In appreciating Shakespeare’s lovely use of language, writ and pen are obviously understood metaphorically, and married lineament must be too. (The meanings listed in the Oxford English Dictionary for lineament include diagram, portion of a body, and portion of the face viewed with respect to its outline.) In spite of all this rich metaphor, every means simply every, in its usual literal form. Indeed, we cannot think of what a metaphorical interpretation of every would be. As we will see, this is not an isolated case: while many expressions can be interpreted metaphorically, there is a broad and important group of expressions that cannot. Much of this paper will be devoted to exploring the significance of this observation. It shows us something about metaphor. In particular, it shows that there is a non-trivial linguistic constraint on metaphor. This is a somewhat surprising result, as one of the leading ideas in the theory of metaphor is that metaphor comprehension is an aspect of our more general cognitive abilities, and not tied to the specific structure of language. The constraint on metaphor also shows us something about linguistic meaning. We will see that the class of expressions that fail to have metaphorical interpretations is a linguistically important one. Linguistic items are often grouped into two classes: lexical categories, including nouns, verbs, etc., and functional categories, including determiners (quantifier expressions), tenses, etc. Generally, we will see that lexical categories can have metaphorical interpretations, while functional ones cannot. This reveals something about the kinds of semantic properties these expressions can have. It also shows that we can use the availability of metaphorical interpretation as a kind of probe, to help distinguish these sorts of categories. Functional categories are often described as ‘structural elements’ of language. They are the ‘linguistic glue’ that holds sentences together, and so, their expressions are described as being semantically ‘thin’. Our metaphor probe will give some substance to this (often very rough-and-ready) idea. But it raises the question of whether all such structural elements in language—anything we can describe as ‘linguistic glue’— are invisible when it comes to metaphorical interpretation. We will see that this is not so. In particular, we will see that linguistic structure within the lexicon, specifically, aspectual structure, is available to the metaphor system. This paper will show specifically that so-called aspectual structure
", "title": "" }, { "docid": "22d8bfa59bb8e25daa5905dbb9e1deea", "text": "BACKGROUND\nSubacromial impingement syndrome (SAIS) is a painful condition resulting from the entrapment of anatomical structures between the anteroinferior corner of the acromion and the greater tuberosity of the humerus.\n\n\nOBJECTIVE\nThe aim of this study was to evaluate the short-term effectiveness of high-intensity laser therapy (HILT) versus ultrasound (US) therapy in the treatment of SAIS.\n\n\nDESIGN\nThe study was designed as a randomized clinical trial.\n\n\nSETTING\nThe study was conducted in a university hospital.\n\n\nPATIENTS\nSeventy patients with SAIS were randomly assigned to a HILT group or a US therapy group.\n\n\nINTERVENTION\nStudy participants received 10 treatment sessions of HILT or US therapy over a period of 2 consecutive weeks.\n\n\nMEASUREMENTS\nOutcome measures were the Constant-Murley Scale (CMS), a visual analog scale (VAS), and the Simple Shoulder Test (SST).\n\n\nRESULTS\nFor the 70 study participants (42 women and 28 men; mean [SD] age=54.1 years [9.0]; mean [SD] VAS score at baseline=6.4 [1.7]), there were no between-group differences at baseline in VAS, CMS, and SST scores. At the end of the 2-week intervention, participants in the HILT group showed a significantly greater decrease in pain than participants in the US therapy group. Statistically significant differences in change in pain, articular movement, functionality, and muscle strength (force-generating capacity) (VAS, CMS, and SST scores) were observed after 10 treatment sessions from the baseline for participants in the HILT group compared with participants in the US therapy group. In particular, only the difference in change of VAS score between groups (1.65 points) surpassed the accepted minimal clinically important difference for this tool.\n\n\nLIMITATIONS\nThis study was limited by sample size, lack of a control or placebo group, and follow-up period.\n\n\nCONCLUSIONS\nParticipants diagnosed with SAIS showed greater reduction in pain and improvement in articular movement functionality and muscle strength of the affected shoulder after 10 treatment sessions of HILT than did participants receiving US therapy over a period of 2 consecutive weeks.", "title": "" }, { "docid": "57f3b7130d41a176410015ca03b9c954", "text": "Sudhausia aristotokia n. gen., n. sp. and S. crassa n. gen., n. sp. (Nematoda: Diplogastridae): viviparous new species with precocious gonad development Matthias HERRMANN 1, Erik J. RAGSDALE 1, Natsumi KANZAKI 2 and Ralf J. SOMMER 1,∗ 1 Max Planck Institute for Developmental Biology, Department of Evolutionary Biology, Spemannstraße 37, Tübingen, Germany 2 Forest Pathology Laboratory, Forestry and Forest Products Research Institute, 1 Matsunosato, Tsukuba, Ibaraki 305-8687, Japan", "title": "" }, { "docid": "dab15cc440d17efc5b3d5b2454cac591", "text": "The performance of a circular patch antenna with slotted ground plane for body centric communication mainly in the health care monitoring systems for Onbody application is researched. The CP antenna is intended for utilization in UWB, body centric communication applications i.e. in between 3.1 to 10.6 GHz. The proposed antenna is CP antenna of (30 x 30 x 1.6) mm. It is simulated via CST microwave studio suite. This CP antenna covers the entire ultra wide frequency range (3.9174-13.519) GHz (9.6016) GHz with the VSWR of (3.818 GHz-13.268 GHz). Antenna’s group delay is to be observed as 3.5 ns.
The simulated results of the antenna are given in terms of VSWR, group delay and radiation pattern. Keywords— UWB, Body Worn Antenna, Body-Centric Communication.", "title": "" } ]
scidocsrr
5cd0be106ac0782e02e2f3d5c5653f28
Beyond Trending Topics: Real-World Event Identification on Twitter
[ { "docid": "b134824f6c135a331e503b77d17380c0", "text": "Social media sites (e.g., Flickr, YouTube, and Facebook) are a popular distribution outlet for users looking to share their experiences and interests on the Web. These sites host substantial amounts of user-contributed materials (e.g., photographs, videos, and textual content) for a wide variety of real-world events of different type and scale. By automatically identifying these events and their associated user-contributed social media documents, which is the focus of this paper, we can enable event browsing and search in state-of-the-art search engines. To address this problem, we exploit the rich \"context\" associated with social media content, including user-provided annotations (e.g., title, tags) and automatically generated information (e.g., content creation time). Using this rich context, which includes both textual and non-textual features, we can define appropriate document similarity metrics to enable online clustering of media to events. As a key contribution of this paper, we explore a variety of techniques for learning multi-feature similarity metrics for social media documents in a principled manner. We evaluate our techniques on large-scale, real-world datasets of event images from Flickr. Our evaluation results suggest that our approach identifies events, and their associated social media documents, more effectively than the state-of-the-art strategies on which we build.", "title": "" }, { "docid": "3e63c8a5499966f30bd3e6b73494ff82", "text": "Events can be understood in terms of their temporal structure. The authors first draw on several bodies of research to construct an analysis of how people use event structure in perception, understanding, planning, and action. Philosophy provides a grounding for the basic units of events and actions. Perceptual psychology provides an analogy to object perception: Like objects, events belong to categories, and, like objects, events have parts. These relationships generate 2 hierarchical organizations for events: taxonomies and partonomies. Event partonomies have been studied by looking at how people segment activity as it happens. Structured representations of events can relate partonomy to goal relationships and causal structure; such representations have been shown to drive narrative comprehension, memory, and planning. Computational models provide insight into how mental representations might be organized and transformed. These different approaches to event structure converge on an explanation of how multiple sources of information interact in event perception and conception.", "title": "" } ]
[ { "docid": "83ad15e2ffeebb21705b617646dc4ed7", "text": "As Twitter becomes a more common means for officials to communicate with their constituents, it becomes more important that we understand how officials use these communication tools. Using data from 380 members of Congress' Twitter activity during the winter of 2012, we find that officials frequently use Twitter to advertise their political positions and to provide information but rarely to request political action from their constituents or to recognize the good work of others. We highlight a number of differences in communication frequency between men and women, Senators and Representatives, Republicans and Democrats. We provide groundwork for future research examining the behavior of public officials online and testing the predictive power of officials' social media behavior.", "title": "" }, { "docid": "405cd35764b8ae0b380e85a58a9714bf", "text": "This work is aimed at modeling, designing and developing an egg incubator system that is able to incubate various types of egg within the temperature range of 35 – 40 0 C. This system uses temperature and humidity sensors that can measure the condition of the incubator and automatically change to the suitable condition for the egg. Extreme variations in incubation temperature affect the embryo and ultimately, post hatch performance. In this work, electric bulbs were used to give the suitable temperature to the egg whereas water and controlling fan were used to ensure that humidity and ventilation were in good condition. LCD is used to display status condition of the incubator and an interface (Keypad) is provided to key in the appropriate temperature range for the egg. To ensure that all part of the eggs was heated by the lamp, DC motor was used to rotate iron rod at the bottom side and automatically change position of the egg. The entire element is controlled using AT89C52 Microcontroller. The temperature of the incubator is maintained at the normal temperature using PID controller implemented in microcontroller. Mathematical model of the incubator, actuator and PID controller were developed. Controller design based on the models was developed using Matlab Simulink. The models were validated through simulation and the Zeigler-Nichol tuning method was adopted as the tuning technique for varying the temperature control parameters of the PID controller in order to achieve a desirable transient response of the system when subjected to a unit step input. After several assumptions and simulations, a set of optimal parameters were obtained at the result of the third test that exhibited a commendable improvement in the overshoot, rise time, peak time and settling time thus improving the robustness and stability of the system. Keyword: Egg Incubator System, AT89C52 Microcontroller, PID Controller, Temperature Sensor.", "title": "" }, { "docid": "f4859226e52f7c9d2b2dc4ac8a0255de", "text": "Imbalanced data learning is one of the challenging problems in data mining; among this matter, founding the right model assessment measures is almost a primary research issue. Skewed class distribution causes a misreading of common evaluation measures as well it lead a biased classification. 
This article presents a set of alternative for imbalanced data learning assessment, using a combined measures (G-means, likelihood ratios, Discriminant power, F-Measure Balanced Accuracy, Youden index, Matthews correlation coefficient), and graphical performance assessment (ROC curve, Area Under Curve, Partial AUC, Weighted AUC, Cumulative Gains Curve and lift chart, Area Under Lift AUL), that aim to provide a more credible evaluation. We analyze the applications of these measures in churn prediction models evaluation, a well known application of imbalanced data", "title": "" }, { "docid": "6f304f0dd414a1ed61ecca15dd3bc924", "text": "Given a matrix A ∈ R, we present a simple, element-wise sparsification algorithm that zeroes out all sufficiently small elements of A and then retains some of the remaining elements with probabilities proportional to the square of their magnitudes. We analyze the approximation accuracy of the proposed algorithm using a recent, elegant non-commutative Bernstein inequality, and compare our bounds with all existing (to the best of our knowledge) elementwise matrix sparsification algorithms.", "title": "" }, { "docid": "db70302a3d7e7e7e5974dd013e587b12", "text": "In recent years, the emerging Internet-of-Things (IoT) has led to rising concerns about the security of networked embedded devices. In this work, we propose the SIPHON architecture---a Scalable high-Interaction Honeypot platform for IoT devices. Our architecture leverages IoT devices that are physically at one location and are connected to the Internet through so-called \\emph{wormholes} distributed around the world. The resulting architecture allows exposing few physical devices over a large number of geographically distributed IP addresses. We demonstrate the proposed architecture in a large scale experiment with 39 wormhole instances in 16 cities in 9 countries. Based on this setup, five physical IP cameras, one NVR and one IP printer are presented as 85 real IoT devices on the Internet, attracting a daily traffic of 700MB for a period of two months. A preliminary analysis of the collected traffic indicates that devices in some cities attracted significantly more traffic than others (ranging from 600 000 incoming TCP connections for the most popular destination to less than 50 000 for the least popular). We recorded over 400 brute-force login attempts to the web-interface of our devices using a total of 1826 distinct credentials, from which 11 attempts were successful. Moreover, we noted login attempts to Telnet and SSH ports some of which used credentials found in the recently disclosed Mirai malware.", "title": "" }, { "docid": "68fe4f62d48270395ca3f257bbf8a18a", "text": "Adjectives like warm, hot, and scalding all describe temperature but differ in intensity. Understanding these differences between adjectives is a necessary part of reasoning about natural language. We propose a new paraphrasebased method to automatically learn the relative intensity relation that holds between a pair of scalar adjectives. Our approach analyzes over 36k adjectival pairs from the Paraphrase Database under the assumption that, for example, paraphrase pair really hot↔ scalding suggests that hot < scalding. 
We show that combining this paraphrase evidence with existing, complementary patternand lexicon-based approaches improves the quality of systems for automatically ordering sets of scalar adjectives and inferring the polarity of indirect answers to yes/no questions.", "title": "" }, { "docid": "8fac18c1285875aee8e7a366555a4ca3", "text": "Automatic speech recognition (ASR) has been under the scrutiny of researchers for many years. Speech Recognition System is the ability to listen what we speak, interpreter and perform actions according to spoken information. After so many detailed study and optimization of ASR and various techniques of features extraction, accuracy of the system is still a big challenge. The selection of feature extraction techniques is completely based on the area of study. In this paper, a detailed theory about features extraction techniques like LPC and LPCC is examined. The goal of this paper is to study the comparative analysis of features extraction techniques like LPC and LPCC.", "title": "" }, { "docid": "98f814584c555baa05a1292e7e14f45a", "text": "This paper presents two types of dual band (2.4 and 5.8 GHz) wearable planar dipole antennas, one printed on a conventional substrate and the other on a two-dimensional metamaterial surface (Electromagnetic Bandgap (EBG) structure). The operation of both antennas is investigated and compared under different bending conditions (in E and H-planes) around human arm and leg of different radii. A dual band, Electromagnetic Band Gap (EBG) structure on a wearable substrate is used as a high impedance surface to control the Specific Absorption Rate (SAR) as well as to improve the antenna gain up to 4.45 dBi. The EBG inspired antenna has reduced the SAR effects on human body to a safe level (< 2W/Kg). I.e. the SAR is reduced by 83.3% for lower band and 92.8% for higher band as compared to the conventional antenna. The proposed antenna can be used for wearable applications with least health hazard to human body in Industrial, Scientific and Medical (ISM) band (2.4 GHz, 5.2 GHz) applications. The antennas on human body are simulated and analyzed in CST Microwave Studio (CST MWS).", "title": "" }, { "docid": "f8435db6c6ea75944d1c6b521e0f3dd3", "text": "We present the design, fabrication process, and characterization of a multimodal tactile sensor made of polymer materials and metal thin film sensors. The multimodal sensor can detect the hardness, thermal conductivity, temperature, and surface contour of a contact object for comprehensive evaluation of contact objects and events. Polymer materials reduce the cost and the fabrication complexity for the sensor skin, while increasing mechanical flexibility and robustness. Experimental tests show the skin is able to differentiate between objects using measured properties. © 2004 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "266f89564a34239cf419ed9e83a2c988", "text": "The potential of high-resolution IKONOS and QuickBird satellite imagery for mapping and analysis of land and water resources at local scales in Minnesota is assessed in a series of three applications. The applications and accuracies evaluated include: (1) classification of lake water clarity (r = 0.89), (2) mapping of urban impervious surface area (r = 0.98), and (3) aquatic vegetation surveys of emergent and submergent plant groups (80% accuracy). There were several notable findings from these applications. 
For example, modeling and estimation approaches developed for Landsat TM data for continuous variables such as lake water clarity and impervious surface area can be applied to high-resolution satellite data. The rapid delivery of spatial data can be coupled with current GPS and field computer technologies to bring the imagery into the field for cover type validation. We also found several limitations in working with this data type. For example, shadows can influence feature classification and their effects need to be evaluated. Nevertheless, high-resolution satellite data has excellent potential to extend satellite remote sensing beyond what has been possible with aerial photography and Landsat data, and should be of interest to resource managers as a way to create timely and reliable assessments of land and water resources at a local scale. D 2003 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "6fca80896fe3493072a1bc360cd680a7", "text": "The physical formats used to represent linguistic data and its annotations have evolved over the past four decades, accommodating different needs and perspectives as well as incorporating advances in data representation generally. This chapter provides an overview of representation formats with the aim of surveying the relevant issues for representing different data types together with current stateof-the-art solutions, in order to provide sufficient information to guide others in the choice of a representation format or formats.", "title": "" }, { "docid": "db9ab8624cdf9b6fdfc91a5d72b76694", "text": "In this paper, a low profile LLC resonant converter with two transformers using a planar core is proposed for a slim switching mode power supply (SMPS). Design procedures, magnetic modeling and voltage gain characteristics on the proposed planar transformer and converter are described in detail. LLC resonant converter including two transformers using a planar core is connected in series at primary and in parallel by the center-tap winding at secondary. Based on the theoretical analysis and simulation results of the voltage gain characteristics, a 300W LLC resonant converter is designed and tested.", "title": "" }, { "docid": "77985effa998d08e75eaa117e07fc7a9", "text": "After two successful years of Event Nugget evaluation in the TAC KBP workshop, the third Event Nugget evaluation track for Knowledge Base Population(KBP) still attracts a lot of attention from the field. In addition to the traditional event nugget and coreference tasks, we introduce a new event sequencing task in English. The new task has brought more complex event relation reasoning to the current evaluations. In this paper we try to provide an overview on the task definition, data annotation, evaluation and trending research methods. We further discuss our efforts in creating the new event sequencing task and interesting research problems related to it.", "title": "" }, { "docid": "748d71e6832288cd0120400d6069bf50", "text": "This paper introduces the matrix formalism of optics as a useful approach to the area of “light fields”. It is capable of reproducing old results in Integral Photography, as well as generating new ones. Furthermore, we point out the equivalence between radiance density in optical phase space and the light field. We also show that linear transforms in matrix optics are applicable to light field rendering, and we extend them to affine transforms, which are of special importance to designing integral view cameras. 
Our main goal is to provide solutions to the problem of capturing the 4D light field with a 2D image sensor. From this perspective we present a unified affine optics view on all existing integral / light field cameras. Using this framework, different camera designs can be produced. Three new cameras are proposed. Figure 1: Integral view of a seagull", "title": "" }, { "docid": "b44f24b54e45974421f799527391a9db", "text": "Dengue fever is a noncontagious infectious disease caused by dengue virus (DENV). DENV belongs to the family Flaviviridae, genus Flavivirus, and is classified into four antigenically distinct serotypes: DENV-1, DENV-2, DENV-3, and DENV-4. The number of nations and people affected has increased steadily and today is considered the most widely spread arbovirus (arthropod-borne viral disease) in the world. The absence of an appropriate animal model for studying the disease has hindered the understanding of dengue pathogenesis. In our study, we have found that immunocompetent C57BL/6 mice infected intraperitoneally with DENV-1 presented some signs of dengue disease such as thrombocytopenia, spleen hemorrhage, liver damage, and increase in production of IFNγ and TNFα cytokines. Moreover, the animals became viremic and the virus was detected in several organs by real-time RT-PCR. Thus, this animal model could be used to study mechanism of dengue virus infection, to test antiviral drugs, as well as to evaluate candidate vaccines.", "title": "" }, { "docid": "f35e22d5ee51d8e83836337b3ab51754", "text": "SaaS companies generate revenues by charging recurring subscription fees for using their software services. The fast growth of SaaS companies is usually accompanied with huge upfront costs in marketing expenses targeted at their potential customers. Customer retention is a critical issue for SaaS companies because it takes twelve months on average to break-even with the expenses for a single customer. This study describes a methodology for helping SaaS companies manage their customer relationships. We investigated the time-dependent software feature usage data, for example, login numbers and comment numbers, to predict whether a customer would churn within the next three months. Our study compared model performance across four classification algorithms. The XGBoost model yielded the best results for identifying the most important software usage features and for classifying customers as either churn type or non-risky type. Our model achieved a 10-fold cross-validated mean AUC score of 0.7941. Companies can choose to move along the ROC curve to accommodate to their marketing capability. The feature importance output from the XGBoost model can facilitate SaaS companies in identifying the most significant software features to launch more effective marketing campaigns when facing prospective customers.", "title": "" }, { "docid": "96669cea810d2918f2d35875f87d45f2", "text": "In this paper, a new probabilistic tagging method is presented which avoids problems that Markov Model based taggers face, when they have to estimate transition probabilities from sparse data. In this tagging method, transition probabilities are estimated using a decision tree. 
Based on this method, a part-of-speech tagger (called TreeTagger) has been implemented which achieves 96.36 % accuracy on Penn-Treebank data which is better than that of a trigram tagger (96.06 %) on the same data.", "title": "" }, { "docid": "172567417be706a47c94d35d90c24400", "text": "This work presents a novel semi-supervised learning approach for data-driven modeling of asset failures when health status is only partially known in historical data. We combine a generative model parameterized by deep neural networks with non-linear embedding technique. It allows us to build prognostic models with the limited amount of health status information for the precise prediction of future asset reliability. The proposed method is evaluated on a publicly available dataset for remaining useful life (RUL) estimation, which shows significant improvement even when a fraction of the data with known health status is as sparse as 1% of the total. Our study suggests that the non-linear embedding based on a deep generative model can efficiently regularize a complex model with deep architectures while achieving high prediction accuracy that is far less sensitive to the availability of health status information.", "title": "" }, { "docid": "0ccbc904dd7623c9ef537e41ac888dd0", "text": "Big Data architectures allow to flexibly store and process heterogeneous data, from multiple sources, in its original format. The structure of those data, commonly supplied by means of REST APIs, is continuously evolving, forcing data analysts using it need to adapt their analytical processes after each release. This gets more challenging when aiming to perform an integrated or historical analysis of multiple sources. To cope with such complexity, in this paper we present the Big Data Integration ontology, the core construct for a data governance protocol that systematically annotates and integrates data from multiple sources in its original format. To cope with syntactic evolution in the sources, we present an algorithm that semi-automatically adapts the ontology upon new releases. A functional evaluation on realworld APIs is performed in order to validate our approach.", "title": "" }, { "docid": "1e5ebd122bee855d7e8113d5fe71202d", "text": "We derive the general expression of the anisotropic magnetoresistance (AMR) ratio of ferromagnets for a relative angle between the magnetization direction and the current direction. We here use the two-current model for a system consisting of a spin-polarized conduction state (s) and localized d states (d) with spin-orbit interaction. Using the expression, we analyze the AMR ratios of Ni and a half-metallic ferromagnet. These results correspond well to the respective experimental results. In addition, we give an intuitive explanation about a relation between the sign of the AMR ratio and the s-d scattering process. Introduction The anisotropic magnetoresistance (AMR) effect, in which the electrical resistivity depends on a relative angle θ between the magnetization (Mex) direction and the electric current (I) direction, has been studied extensively both experimentally [1-5] and theoretically [1,6]. The AMR ratio is often defined by ( ) ( ) ρ θ ρ θ ρ ρ ρ ⊥", "title": "" } ]
scidocsrr
1bc8b083b81954925146ea8e9941badf
Experimental Investigation of Light-Gauge Steel Plate Shear Walls
[ { "docid": "8f3b3611ee8a52753e026625f6ccd12e", "text": "plate is ntation of by plastic plex, wall ection of procedure Abstract: A revised procedure for the design of steel plate shear walls is proposed. In this procedure the thickness of the infill found using equations that are derived from the plastic analysis of the strip model, which is an accepted model for the represe steel plate shear walls. Comparisons of experimentally obtained ultimate strengths of steel plate shear walls and those predicted analysis are given and reasonable agreement is observed. Fundamental plastic collapse mechanisms for several, more com configurations are also given. Additionally, an existing codified procedure for the design of steel plate walls is reviewed and a s this procedure which could lead to designs with less-than-expected ultimate strength is identified. It is shown that the proposed eliminates this possibility without changing the other valid sections of the current procedure.", "title": "" } ]
[ { "docid": "bf8a24b974553d21849e9b066d78e6d4", "text": "Dense video captioning aims to generate text descriptions for all events in an untrimmed video. This involves both detecting and describing events. Therefore, all previous methods on dense video captioning tackle this problem by building two models, i.e. an event proposal and a captioning model, for these two sub-problems. The models are either trained separately or in alternation. This prevents direct influence of the language description to the event proposal, which is important for generating accurate descriptions. To address this problem, we propose an end-to-end transformer model for dense video captioning. The encoder encodes the video into appropriate representations. The proposal decoder decodes from the encoding with different anchors to form video event proposals. The captioning decoder employs a masking network to restrict its attention to the proposal event over the encoding feature. This masking network converts the event proposal to a differentiable mask, which ensures the consistency between the proposal and captioning during training. In addition, our model employs a self-attention mechanism, which enables the use of efficient non-recurrent structure during encoding and leads to performance improvements. We demonstrate the effectiveness of this end-to-end model on ActivityNet Captions and YouCookII datasets, where we achieved 10.12 and 6.58 METEOR score, respectively.", "title": "" }, { "docid": "05a76f64a6acbcf48b7ac36785009db3", "text": "Mixed methods research is an approach that combines quantitative and qualitative research methods in the same research inquiry. Such work can help develop rich insights into various phenomena of interest that cannot be fully understood using only a quantitative or a qualitative method. Notwithstanding the benefits and repeated calls for such work, there is a dearth of mixed methods research in information systems. Building on the literature on recent methodological advances in mixed methods research, we develop a set of guidelines for conducting mixed methods research in IS. We particularly elaborate on three important aspects of conducting mixed methods research: (1) appropriateness of a mixed methods approach; (2) development of meta-inferences (i.e., substantive theory) from mixed methods research; and (3) assessment of the quality of meta-inferences (i.e., validation of mixed methods research). The applicability of these guidelines is illustrated using two published IS papers that used mixed methods.", "title": "" }, { "docid": "9414f4f7164c69f67b4bf200da9f1358", "text": "Experience replay is one of the most commonly used approaches to improve the sample efficiency of reinforcement learning algorithms. In this work, we propose an approach to select and replay sequences of transitions in order to accelerate the learning of a reinforcement learning agent in an off-policy setting. In addition to selecting appropriate sequences, we also artificially construct transition sequences using information gathered from previous agent-environment interactions. These sequences, when replayed, allow value function information to trickle down to larger sections of the state/state-action space, thereby making the most of the agent's experience. 
We demonstrate our approach on modified versions of standard reinforcement learning tasks such as the mountain car and puddle world problems and empirically show that it enables faster, and more accurate learning of value functions as compared to other forms of experience replay. Further, we briefly discuss some of the possible extensions to this work, as well as applications and situations where this approach could be particularly useful.", "title": "" }, { "docid": "73c1f5b8e8df783c976427b64734f909", "text": "XTS-AES is an advanced mode of AES for data protection of sector-based devices. Compared to other AES modes, it features two secret keys instead of one, and an additional tweak for each data block. These characteristics make the mode not only resistant against cryptoanalysis attacks, but also more challenging for side-channel attack. In this paper, we propose two attack methods on XTS-AES overcoming these challenges. In the first attack, we analyze side-channel leakage of the particular modular multiplication in XTS-AES mode. In the second one, we utilize the relationship between two consecutive block tweaks and propose a method to work around the masking of ciphertext by the tweak. These attacks are verified on an FPGA implementation of XTS-AES. The results show that XTS-AES is susceptible to side-channel power analysis attacks, and therefore dedicated protections are required for security of XTS-AES in storage devices.", "title": "" }, { "docid": "9e451fe70d74511d2cc5a58b667da526", "text": "Convolutional Neural Networks (CNNs) are propelling advances in a range of different computer vision tasks such as object detection and object segmentation. Their success has motivated research in applications of such models for medical image analysis. If CNN-based models are to be helpful in a medical context, they need to be precise, interpretable, and uncertainty in predictions must be well understood. In this paper, we develop and evaluate recent advances in uncertainty estimation and model interpretability in the context of semantic segmentation of polyps from colonoscopy images. We evaluate and enhance several architectures of Fully Convolutional Networks (FCNs) for semantic segmentation of colorectal polyps and provide a comparison between these models. Our highest performing model achieves a 76.06% mean IOU accuracy on the EndoScene dataset, a considerable improvement over the previous state-of-the-art.", "title": "" }, { "docid": "2687cb8fc5cde18e53c580a50b33e328", "text": "Social network sites (SNSs) are becoming an increasingly popular resource for both students and adults, who use them to connect with and maintain relationships with a variety of ties. For many, the primary function of these sites is to consume and distribute personal content about the self. Privacy concerns around sharing information in a public or semi-public space are amplified by SNSs’ structural characteristics, which may obfuscate the true audience of these disclosures due to their technical properties (e.g., persistence, searchability) and dynamics of use (e.g., invisible audiences, context collapse) (boyd, 2008b). 
Early work on the topic focused on the privacy pitfalls of Facebook and other SNSs (e.g., Acquisti & Gross, 2006; Barnes, 2006; Gross & Acquisti, 2005) and argued that individuals were (perhaps inadvertently) disclosing information that might be inappropriate for some audiences, such as future employers, or that might enable identity theft or other negative outcomes.", "title": "" }, { "docid": "f6f22580071dc149a8dc544835123977", "text": "This paper describes MITRE’s participation in the Paraphrase and Semantic Similarity in Twitter task (SemEval-2015 Task 1). This effort placed first in Semantic Similarity and second in Paraphrase Identification with scores of Pearson’s r of 61.9%, F1 of 66.7%, and maxF1 of 72.4%. We detail the approaches we explored including mixtures of string matching metrics, alignments using tweet-specific distributed word representations, recurrent neural networks for modeling similarity with those alignments, and distance measurements on pooled latent semantic features. Logistic regression is used to tie the systems together into the ensembles submitted for evaluation.", "title": "" }, { "docid": "b713da979bc3d01153eaae8827779b7b", "text": "Chronic lower leg pain results from various conditions, most commonly, medial tibial stress syndrome, stress fracture, chronic exertional compartment syndrome, nerve entrapment, and popliteal artery entrapment syndrome. Symptoms associated with these conditions often overlap, making a definitive diagnosis difficult. As a result, an algorithmic approach was created to aid in the evaluation of patients with complaints of lower leg pain and to assist in defining a diagnosis by providing recommended diagnostic studies for each condition. A comprehensive physical examination is imperative to confirm a diagnosis and should begin with an inquiry regarding the location and onset of the patient's pain and tenderness. Confirmation of the diagnosis requires performing the appropriate diagnostic studies, including radiographs, bone scans, magnetic resonance imaging, magnetic resonance angiography, compartmental pressure measurements, and arteriograms. Although most conditions causing lower leg pain are treated successfully with nonsurgical management, some syndromes, such as popliteal artery entrapment syndrome, may require surgical intervention. Regardless of the form of treatment, return to activity must be gradual and individualized for each patient to prevent future athletic injury.", "title": "" }, { "docid": "1b990fd9a3506f821519faad113f59ee", "text": "The primary focus of this study is to understand the current port operating condition and recommend short term measures to improve traffic condition in the port of Chennai. The cause of congestion is identified based on the data collected and observation made at port gates as well as at terminal gates in Chennai port. A simulation model for the existing road layout is developed in micro-simulation software VISSIM and is calibrated to reflect the prevailing condition inside the port. The data such as truck origin/destination, hourly inflow and outflow of trucks, speed, and stopping time at checking booths are used as input. Routing data is used to direct traffic to specific terminal or dock within the port. Several alternative scenarios are developed and simulated to get results of the key performance indicators. 
A comparative and detailed analysis of these indicators is used to evaluate recommendations to reduce congestion inside the port.", "title": "" }, { "docid": "435da20d6285a8b57a35fb407b96c802", "text": "This paper attempts to review examples of the use of storytelling and narrative in immersive virtual reality worlds. Particular attention is given to the way narrative is incorporated in artistic, cultural, and educational applications through the development of specific sensory and perceptual experiences that are based on characteristics inherent to virtual reality, such as immersion, interactivity, representation, and illusion. Narrative development is considered on three axes: form (visual representation), story (emotional involvement), and history (authenticated cultural content) and how these can come together.", "title": "" }, { "docid": "ebbc0b7aea9fafa1258f337fab4d20e8", "text": "This paper presents a new design of high frequency DC/AC inverter for home applications using fuel cells or photovoltaic array sources. A battery bank parallel to the DC link is provided to take care of the slow dynamic response of the source. The design is based on a push-pull DC/DC converter followed by a full-bridge PWM inverter topology. The nominal power rating is 10 kW. Actual design parameters, procedure and experimental results of a 1.5 kW prototype are provided. The objective of this paper is to explore the possibility of making renewable sources of energy utility interactive by means of low cost power electronic interface.", "title": "" }, { "docid": "f4d6cd6f6cd453077e162b64ae485c62", "text": "Effects of Music Therapy on Prosocial Behavior of Students with Autism and Developmental Disabilities, by Catherine L. de Mers; Dr. Matt Tincani, Examination Committee Chair, Assistant Professor of Special Education, University of Nevada, Las Vegas. This research study employed a multiple baseline across participants design to investigate the effects of music therapy intervention on hitting, screaming, and asking of three children with autism and/or developmental disabilities. Behaviors were observed and recorded during 10-minute free-play sessions both during baseline and immediately after music therapy sessions during intervention. Interobserver agreement and procedural fidelity data were collected. Music therapy sessions were modeled on literature pertaining to music therapy with children with autism. In addition, social validity surveys were collected to answer research questions pertaining to the social validity of music therapy as an intervention. Findings indicate that music therapy produced moderate and gradual effects on hitting, screaming, and asking. Hitting and screaming decreased following intervention, while asking increased. Intervention effects were maintained three weeks following", "title": "" }, { "docid": "6fdd0c7d239417234cfc4706a82b5a0f", "text": "We propose a method of generating teaching policies for use in intelligent tutoring systems (ITS) for concept learning tasks [1], e.g., teaching students the meanings of words by showing images that exemplify their meanings à la Rosetta Stone [2] and Duo Lingo [3]. 
The approach is grounded in control theory and capitalizes on recent work by [4], [5] that frames the “teaching” problem as that of finding approximately optimal teaching policies for approximately optimal learners (AOTAOL). Our work expands on [4], [5] in several ways: (1) We develop a novel student model in which the teacher's actions can partially eliminate hypotheses about the curriculum. (2) With our student model, inference can be conducted analytically rather than numerically, thus allowing computationally efficient planning to optimize learning. (3) We develop a reinforcement learning-based hierarchical control technique that allows the teaching policy to search through deeper learning trajectories. We demonstrate our approach in a novel ITS for foreign language learning similar to Rosetta Stone and show that the automatically generated AOTAOL teaching policy performs favorably compared to two hand-crafted teaching policies.", "title": "" }, { "docid": "e8dd0edd4ae06d53b78662f9acca09c5", "text": "A new methodology based on mixed linear models was developed for mapping QTLs with digenic epistasis and QTL×environment (QE) interactions. Reliable estimates of QTL main effects (additive and epistasis effects) can be obtained by the maximum-likelihood estimation method, while QE interaction effects (additive×environment interaction and epistasis×environment interaction) can be predicted by the best linear unbiased prediction (BLUP) method. Likelihood ratio and t statistics were combined for testing hypotheses about QTL effects and QE interactions. Monte Carlo simulations were conducted for evaluating the unbiasedness, accuracy, and power for parameter estimation in QTL mapping. The results indicated that the mixed-model approaches could provide unbiased estimates for both positions and effects of QTLs, as well as unbiased predicted values for QE interactions. Additionally, the mixed-model approaches also showed high accuracy and power in mapping QTLs with epistatic effects and QE interactions. Based on the models and the methodology, a computer software program (QTLMapper version 1.0) was developed, which is suitable for interval mapping of QTLs with additive, additive×additive epistasis, and their environment interactions.", "title": "" }, { "docid": "83f88cbaed86220e0047b51c965a77ba", "text": "There are two conflicting perspectives regarding the relationship between profanity and dishonesty. These two forms of norm-violating behavior share common causes and are often considered to be positively related. On the other hand, however, profanity is often used to express one's genuine feelings and could therefore be negatively related to dishonesty. In three studies, we explored the relationship between profanity and honesty. We examined profanity and honesty first with profanity behavior and lying on a scale in the lab (Study 1; N = 276), then with a linguistic analysis of real-life social interactions on Facebook (Study 2; N = 73,789), and finally with profanity and integrity indexes for the aggregate level of U.S. states (Study 3; N = 50 states). 
We found a consistent positive relationship between profanity and honesty; profanity was associated with less lying and deception at the individual level and with higher integrity at the society level.", "title": "" }, { "docid": "4706f9e8d9892543aaeb441c45816b24", "text": "The mood of a text and the intention of the writer can be reflected in the typeface. However, in designing a typeface, it is difficult to keep the style of various characters consistent, especially for languages with lots of morphological variations such as Chinese. In this paper, we propose a Typeface Completion Network (TCN) which takes one character as an input, and automatically completes the entire set of characters in the same style as the input characters. Unlike existing models proposed for image-to-image translation, TCN embeds a character image into two separate vectors representing typeface and content. Combined with a reconstruction loss from the latent space, and with other various losses, TCN overcomes the inherent difficulty in designing a typeface. Also, compared to previous image-to-image translation models, TCN generates high quality character images of the same typeface with a much smaller number of model parameters. We validate our proposed model on the Chinese and English character datasets, which is paired data, and the CelebA dataset, which is unpaired data. In these datasets, TCN outperforms recently proposed state-of-the-art models for image-to-image translation. The source code of our model is available at https://github.com/yongqyu/TCN.", "title": "" }, { "docid": "2b314587816255285bf985a086719572", "text": "Tomatoes are well-known vegetables, grown and eaten around the world due to their nutritional benefits. The aim of this research was to determine the chemical composition (dry matter, soluble solids, titritable acidity, vitamin C, lycopene), the taste index and maturity in three cherry tomato varieties (Sakura, Sunstream, Mathew) grown and collected from greenhouse at different stages of ripening. The output of the analyses showed that there were significant differences in the mean values among the analysed parameters according to the stage of ripening and variety. During ripening, the content of soluble solids increases on average two times in all analyzed varieties; the highest content of vitamin C and lycopene was determined in tomatoes of Sunstream variety in red stage. The highest total acidity expressed as g of citric acid 100 g was observed in pink stage (variety Sakura) or a breaker stage (varieties Sunstream and Mathew). The taste index of the variety Sakura was higher at all analyzed ripening stages in comparison with other varieties. This shows that ripening stages have a significant effect on tomato biochemical composition along with their variety.", "title": "" }, { "docid": "eac86562382c4ec9455f1422b6f50e9f", "text": "In this paper we look at how to sparsify a graph i.e. how to reduce the edgeset while keeping the nodes intact, so as to enable faster graph clustering without sacrificing quality. The main idea behind our approach is to preferentially retain the edges that are likely to be part of the same cluster. We propose to rank edges using a simple similarity-based heuristic that we efficiently compute by comparing the minhash signatures of the nodes incident to the edge. For each node, we select the top few edges to be retained in the sparsified graph. 
Extensive empirical results on several real networks and using four state-of-the-art graph clustering and community discovery algorithms reveal that our proposed approach realizes excellent speedups (often in the range 10-50), with little or no deterioration in the quality of the resulting clusters. In fact, for at least two of the four clustering algorithms, our sparsification consistently enables higher clustering accuracies.", "title": "" }, { "docid": "93c9ffa6c83de5fece14eb351315fbed", "text": "nature protocols | VOL.7 NO.11 | 2012 | 1983 IntroDuctIon In a typical histology study, it is necessary to make thin sections of blocks of frozen or fixed tissue for microscopy. This process has major limitations for obtaining a 3D picture of structural components and the distribution of cells within tissues. For example, in axon regeneration studies, after labeling the injured axons, it is common that the tissue of interest (e.g., spinal cord, optic nerve) is sectioned. Subsequently, when tissue sections are analyzed under the microscope, only short fragments of axons are observed within each section; hence, the 3D information of axonal structures is lost. Because of this confusion, these fragmented axonal profiles might be interpreted as regenerated axons even though they could be spared axons1. In addition, the growth trajectories and target regions of the regenerating axons cannot be identified by visualization of axonal fragments. Similar problems could occur in cancer and immunology studies when only small fractions of target cells are observed within large organs. To avoid these limitations and problems, tissues ideally should be imaged at high spatial resolution without sectioning. However, optical imaging of thick tissues is limited mostly because of scattering of imaging light through the thick tissues, which contain various cellular and extracellular structures with different refractive indices. The imaging light traveling through different structures scatters and loses its excitation and emission efficiency, resulting in a lower resolution and imaging depth2,3. Optical clearing of tissues by organic solvents, which make the biological tissue transparent by matching the refractory indexes of different tissue layers to the solvent, has become a prominent method for imaging thick tissues2,4. In cleared tissues, the imaging light does not scatter and travels unobstructed throughout the different tissue layers. For this purpose, the first tissue clearing method was developed about a century ago by Spalteholz, who used a mixture of benzyl alcohol and methyl salicylate to clear large organs such as the heart5,6. In general, the first step of tissue clearing is tissue dehydration, owing to the low refractive index of water compared with cellular structures containing proteins and lipids4. Subsequently, dehydrated tissue is impregnated with an optical clearing agent, such as glucose7, glycerol8, benzyl alcohol–benzyl benzoate (BABB, also known as Murray’s clear)4,9–13 or dibenzyl ether (DBE)13,14, which have approximately the same refractive index as the impregnated tissue. At the end of the clearing procedure, the cleared tissue hardens and turns transparent, and thus resembles glass.", "title": "" }, { "docid": "6f22283e5142035d6f6f9d5e06ab1cd2", "text": "We present a novel technique to automatically colorize grayscale images that combines both global priors and local image features. 
Based on Convolutional Neural Networks, our deep network features a fusion layer that allows us to elegantly merge local information dependent on small image patches with global priors computed using the entire image. The entire framework, including the global and local priors as well as the colorization model, is trained in an end-to-end fashion. Furthermore, our architecture can process images of any resolution, unlike most existing approaches based on CNN. We leverage an existing large-scale scene classification database to train our model, exploiting the class labels of the dataset to more efficiently and discriminatively learn the global priors. We validate our approach with a user study and compare against the state of the art, where we show significant improvements. Furthermore, we demonstrate our method extensively on many different types of images, including black-and-white photography from over a hundred years ago, and show realistic colorizations.", "title": "" } ]
scidocsrr
00657ff4d15c007f5eb6e7c38849996f
Developing a Teacher Dashboard For Use with Intelligent Tutoring Systems
[ { "docid": "26e24e4a59943f9b80d6bf307680b70c", "text": "We present a machine-learned model that can automatically detect when a student using an intelligent tutoring system is off-task, i.e., engaged in behavior which does not involve the system or a learning task. This model was developed using only log files of system usage (i.e. no screen capture or audio/video data). We show that this model can both accurately identify each student's prevalence of off-task behavior and can distinguish off-task behavior from when the student is talking to the teacher or another student about the subject matter. We use this model in combination with motivational and attitudinal instruments, developing a profile of the attitudes and motivations associated with off-task behavior, and compare this profile to the attitudes and motivations associated with other behaviors in intelligent tutoring systems. We discuss how the model of off-task behavior can be used within interactive learning environments which respond to when students are off-task.", "title": "" }, { "docid": "2adcf4db59bb321132a10445292d7fe9", "text": "In this paper, we present work on learning analytics that aims to support learners and teachers through dashboard applications, ranging from small mobile applications to learnscapes on large public displays. Dashboards typically capture and visualize traces of learning activities, in order to promote awareness, reflection, and sense-making, and to enable learners to define goals and track progress toward these goals. Based on an analysis of our own work and a broad range of similar learning dashboards, we identify HCI issues for this exciting research area.", "title": "" }, { "docid": "bed92439d0a455eb57d992728ef7deb5", "text": "Although learning with Intelligent Tutoring Systems (ITS) has been well studied, little research has investigated what role teachers can play, if empowered with data. Many ITSs provide student performance reports, but they may not be designed to serve teachers’ needs well, which is important for a well-designed dashboard. We investigated what student data is most helpful to teachers and how they use data to adjust and individualize instruction. Specifically, we conducted Contextual Inquiry interviews with teachers and used Interpretation Sessions and Affinity Diagramming to analyze the data. We found that teachers generate data on students’ concept mastery, misconceptions and errors, and utilize data provided by ITSs and other software. Teachers use this data to drive instruction and remediate issues on an individual and class level. Our study uncovers how data can support teachers in helping students learn and provides a solid foundation and recommendations for designing a teacher’s dashboard.", "title": "" }, { "docid": "273153d0cf32162acb48ed989fa6d713", "text": "This article may be used for research, teaching, and private study purposes. Any substantial or systematic reproduction, redistribution, reselling, loan, sub-licensing, systematic supply, or distribution in any form to anyone is expressly forbidden. The publisher does not give any warranty express or implied or make any representation that the contents will be complete or accurate or up to date. The accuracy of any instructions, formulae, and drug doses should be independently verified with primary sources. 
The publisher shall not be liable for any loss, actions, claims, proceedings, demand, or costs or damages whatsoever or howsoever caused arising directly or indirectly in connection with or arising out of the use of this material.", "title": "" } ]
[ { "docid": "0bbdefaf90329b45993608128ccd233c", "text": "Eye gaze tracking system has been widely researched for the replacement of the conventional computer interfaces such as the mouse and keyboard. In this paper, we propose the long range binocular eye gaze tracking system that works from 1.5 m to 2.5 m with allowing a head displacement in depth. The 3D position of the user's eye is obtained from the two wide angle cameras. A high resolution image of the eye is captured using the pan, tilt, and focus controlled narrow angle camera. The angles for maneuvering the pan and tilt motor are calculated by the proposed calibration method based on virtual camera model. The performance of the proposed calibration method is verified in terms of speed and convenience through the experiment. The narrow angle camera keeps tracking the eye while the user moves his head freely. The point-of-gaze (POG) of each eye onto the screen is calculated by using a 2D mapping based gaze estimation technique and the pupil center corneal reflection (PCCR) vector. PCCR vector modification method is applied to overcome the degradation in accuracy with displacements of the head in depth. The final POG is obtained by the average of the two POGs. Experimental results show that the proposed system robustly works for a large screen TV from 1.5 m to 2.5 m distance with displacements of the head in depth (+20 cm) and the average angular error is 0.69°.", "title": "" }, { "docid": "450808fb3512ffd3bac692523e785c73", "text": "This paper focuses on approaches to building a text automatic summarization model for news articles, generating a one-sentence summarization that mimics the style of a news title given some paragraphs. We managed to build and train two relatively complex deep learning models that outperformed our baseline model, which is a simple feed forward neural network. We explored Recurrent Neural Network models with encoder-decoder using LSTM and GRU cells, and with/without attention. We obtained some results that we then measured by calculating their respective ROUGE scores with respect to the actual references. For future work, we believe abstractive method of text summarization is a power way of summarizing texts, and we will continue with this approach. We think that the deficiencies currently embedded in our language model can be improved by better fine-tuning the model, more deep-learning method exploration, as well as larger training dataset.", "title": "" }, { "docid": "de6348bb8e3b4c1cfd1fa83557ae50c9", "text": "Cerebellar lesions can cause motor deficits and/or the cerebellar cognitive affective syndrome (CCAS; Schmahmann's syndrome). We used voxel-based lesion-symptom mapping to test the hypothesis that the cerebellar motor syndrome results from anterior lobe damage whereas lesions in the posterolateral cerebellum produce the CCAS. Eighteen patients with isolated cerebellar stroke (13 males, 5 females; 20-66 years old) were evaluated using measures of ataxia and neurocognitive ability. Patients showed a wide range of motor and cognitive performance, from normal to severely impaired; individual deficits varied according to lesion location within the cerebellum. Patients with damage to cerebellar lobules III-VI had worse ataxia scores: as predicted, the cerebellar motor syndrome resulted from lesions involving the anterior cerebellum. 
Poorer performance on fine motor tasks was associated primarily with strokes affecting the anterior lobe extending into lobule VI, with right-handed finger tapping and peg-placement associated with damage to the right cerebellum, and left-handed finger tapping associated with left cerebellar damage. Patients with the CCAS in the absence of cerebellar motor syndrome had damage to posterior lobe regions, with lesions leading to significantly poorer scores on language (e.g. right Crus I and II extending through IX), spatial (bilateral Crus I, Crus II, and right lobule VIII), and executive function measures (lobules VII-VIII). These data reveal clinically significant functional regions underpinning movement and cognition in the cerebellum, with a broad anterior-posterior distinction. Motor and cognitive outcomes following cerebellar damage appear to reflect the disruption of different cerebro-cerebellar motor and cognitive loops.", "title": "" }, { "docid": "f4166e4121dbd6f6ab209e6d99aac63f", "text": "In this paper, we propose several novel deep learning methods for object saliency detection based on the powerful convolutional neural networks. In our approach, we use a gradient descent method to iteratively modify an input image based on the pixel-wise gradients to reduce a cost function measuring the class-specific objectness of the image. The pixel-wise gradients can be efficiently computed using the back-propagation algorithm. The discrepancy between the modified image and the original one may be used as a saliency map for the image. Moreover, we have further proposed several new training methods to learn saliency-specific convolutional nets for object saliency detection, in order to leverage the available pixel-wise segmentation information. Our methods are extremely computationally efficient (processing 20-40 images per second in one GPU). In this work, we use the computed saliency maps for image segmentation. Experimental results on two benchmark tasks, namely Microsoft COCO and Pascal VOC 2012, have shown that our proposed methods can generate high-quality salience maps, clearly outperforming many existing methods. In particular, our approaches excel in handling many difficult images, which contain complex background, highly-variable salient objects, multiple objects, and/or very small salient objects.", "title": "" }, { "docid": "e118177a0fc9fad704b2be958b01a873", "text": "Safety stories specify safety requirements, using the EARS (Easy Requirements Specification) format. Software practitioners can use them in agile projects at lower levels of safety criticality to deal effectively with safety concerns.", "title": "" }, { "docid": "fe6f81141e58bf5cf13bec80e033e197", "text": "Recommender systems represent user preferences for the purpose of suggesting items to purchase or examine. They have become fundamental applications in electronic commerce and information access, providing suggestions that effectively prune large information spaces so that users are directed toward those items that best meet their needs and preferences. A variety of techniques have been proposed for performing recommendation, including content-based, collaborative, knowledge-based and other techniques. To improve performance, these methods have sometimes been combined in hybrid recommenders. 
This paper surveys the landscape of actual and possible hybrid recommenders, and introduces a novel hybrid, system that combines content-based recommendation and collaborative filtering to recommend restaurants.", "title": "" }, { "docid": "6b252d02e013519d1bd12dfcb3641013", "text": "BACKGROUND\nDuplex ultrasound investigation has become the reference standard in assessing the morphology and haemodynamics of the lower limb veins. The project described in this paper was an initiative of the Union Internationale de Phlébologie (UIP). The aim was to obtain a consensus of international experts on the methodology to be used for assessment of anatomy of superficial and perforating veins in the lower limb by ultrasound imaging.\n\n\nMETHODS\nThe authors performed a systematic review of the published literature on duplex anatomy of the superficial and perforating veins of the lower limbs; afterwards they invited a group of experts from a wide range of countries to participate in this project. Electronic submissions from the authors and the experts (text and images) were made available to all participants via the UIP website. The authors prepared a draft document for discussion at the UIP Chapter meeting held in San Diego, USA in August 2003. Following this meeting a revised manuscript was circulated to all participants and further comments were received by the authors and included in subsequent versions of the manuscript. Eventually, all participants agreed the final version of the paper.\n\n\nRESULTS\nThe experts have made detailed recommendations concerning the methods to be used for duplex ultrasound examination as well as the interpretation of images and measurements obtained. This document provides a detailed methodology for complete ultrasound assessment of the anatomy of the superficial and perforating veins in the lower limbs.\n\n\nCONCLUSIONS\nThe authors and a large group of experts have agreed a methodology for the investigation of the lower limbs venous system by duplex ultrasonography, with specific reference to the anatomy of the main superficial veins and perforators of the lower limbs in healthy and varicose subjects.", "title": "" }, { "docid": "ff272e6b59a3069372a694f99963929d", "text": "Nowadays, Information Technology (IT) plays an important role in efficiency and effectiveness of the organizational performance. As an IT application, Enterprise Resource Planning (ERP) systems is considered one of the most important IT applications because it enables the organizations to connect and interact with its administrative units in order to manage data and organize internal procedures. Many institutions use ERP systems, most notably Higher Education Institutions (HEIs). However, many projects fail or exceed scheduling and budget constraints; the rate of failure in HEIs sector is higher than in other sectors. With HEIs’ recent movement to implement ERP systems and the lack of research studies examining successful implementation in HEIs, this paper provides a critical literature review with a special focus on Saudi Arabia. Further, it defines Critical Success Factors (CSFs) contributing to the success of ERP implementation in HEIs. 
This paper is part of a larger research effort aiming to provide guidelines and useful findings that help HEIs to manage the challenges for ERP systems and define CSFs that will help practitioners to implement them in the Saudi context.", "title": "" }, { "docid": "8f9bf08bb52e5c192512f7b43ed50ba7", "text": "Finding the sparse solution of an underdetermined system of linear equations (the so called sparse recovery problem) has been extensively studied in the last decade because of its applications in many different areas. So, there are now many sparse recovery algorithms (and program codes) available. However, most of these algorithms have been developed for real-valued systems. This paper discusses an approach for using available real-valued algorithms (or program codes) to solve complex-valued problems, too. The basic idea is to convert the complex-valued problem to an equivalent real-valued problem and solve this new real-valued problem using any real-valued sparse recovery algorithm. Theoretical guarantees for the success of this approach will be discussed, too. On the other hand, a widely used sparse recovery idea is finding the minimum ℓ1 norm solution. For real-valued systems, this idea requires to solve a linear programming (LP) problem, but for complex-valued systems it needs to solve a second-order cone programming (SOCP) problem, which demands more computational load. However, based on the approach of this paper, the complex case can also be solved by linear programming, although the theoretical guarantee for finding the sparse solution is more limited.", "title": "" }, { "docid": "72147e489de9053bf1a4844c2f0de717", "text": "Video Question Answering is a challenging problem in visual information retrieval, which provides the answer to the referenced video content according to the question. However, the existing visual question answering approaches mainly tackle the problem of static image question, which may be ineffectively for video question answering due to the insufficiency of modeling the temporal dynamics of video contents. In this paper, we study the problem of video question answering by modeling its temporal dynamics with frame-level attention mechanism. We propose the attribute-augmented attention network learning framework that enables the joint frame-level attribute detection and unified video representation learning for video question answering. We then incorporate the multi-step reasoning process for our proposed attention network to further improve the performance. We construct a large-scale video question answering dataset. We conduct the experiments on both multiple-choice and open-ended video question answering tasks to show the effectiveness of the proposed method.", "title": "" }, { "docid": "bfdbc3814d517df9859294bd53885aa2", "text": "The Internet of Things (IoT) is the next big wave in computing characterized by large scale open ended heterogeneous network of things, with varying sensing, actuating, computing and communication capabilities. Compared to the traditional field of autonomic computing, the IoT is characterized by an open ended and highly dynamic ecosystem with variable workload and resource availability. These characteristics make it difficult to implement self-awareness capabilities for IoT to manage and optimize itself. In this work, we introduce a methodology to explore and learn the trade-offs of different deployment configurations to autonomously optimize the QoS and other quality attributes of IoT applications. 
Our experiments demonstrate that our proposed methodology can automate the efficient deployment of IoT applications in the presence of multiple optimization objectives and variable operational circumstances.", "title": "" }, { "docid": "3a6c58a05427392750d15307fda4faec", "text": "In this paper, we present the design of a low voltage bandgap reference (LVBGR) circuit for supply voltage of 1.2V which can generate an output reference voltage of 0.363V. Traditional BJT based bandgap reference circuits give very precise output reference but power and area consumed by these BJT devices is larger so for low supply bandgap reference we chose MOSFETs operating in subthreshold region based reference circuits. LVBGR circuits with less sensitivity to supply voltage and temperature is used in both analog and digital circuits like high precise comparators used in data converter, phase-locked loop, ring oscillator, memory systems, implantable biomedical product etc. In the proposed circuit subthreshold MOSFETs temperature characteristics are used to achieve temperature compensation of output voltage reference and it can work under very low supply voltage. A PMOS structure 2stage opamp which will be operating in subthreshold region is designed for the proposed LVBGR circuit whose gain is 89.6dB and phase margin is 74 °. Finally a LVBGR circuit is designed which generates output voltage reference of 0.364V given with supply voltage of 1.2 V with 10 % variation and temperature coefficient of 240ppm/ °C is obtained for output reference voltage variation with respect to temperature over a range of 0 to 100°C. The output reference voltage exhibits a variation of 230μV with a supply range of 1.08V to 1.32V at typical process corner. The proposed LVBGR circuit for 1.2V supply is designed with the Mentor Graphics Pyxis tool using 130nm technology with EldoSpice simulator. Overall current consumed by the circuit is 900nA and also the power consumed by the entire LVBGR circuit is 0.9μW and the PSRR of the LVBGR circuit is -70dB.", "title": "" }, { "docid": "daef1d0005da14d3a5717bf400cd69e7", "text": "Deep learning methods have typically been trained on large datasets in which many training examples are available. However, many real-world product datasets have only a small number of images available for each product. We explore the use of deep learning methods for recognizing object instances when we have only a single training example per class. We show that feedforward neural networks outperform state-of-the-art methods for recognizing objects from novel viewpoints even when trained from just a single image per object. To further improve our performance on this task, we propose to take advantage of a supplementary dataset in which we observe a separate set of objects from multiple viewpoints. We introduce a new approach for training deep learning methods for instance recognition with limited training data, in which we use an auxiliary multi-view dataset to train our network to be robust to viewpoint changes. We find that this approach leads to a more robust classifier for recognizing objects from novel viewpoints, outperforming previous state-of-the-art approaches including keypoint-matching, template-based techniques, and sparse coding.", "title": "" }, { "docid": "6960f780dfc491c6cdcbb6c53fd32363", "text": "We learn to compute optical flow by combining a classical spatial-pyramid formulation with deep learning. 
This estimates large motions in a coarse-to-fine approach by warping one image of a pair at each pyramid level by the current flow estimate and computing an update to the flow. Instead of the standard minimization of an objective function at each pyramid level, we train one deep network per level to compute the flow update. Unlike the recent FlowNet approach, the networks do not need to deal with large motions, these are dealt with by the pyramid. This has several advantages. First, our Spatial Pyramid Network (SPyNet) is much simpler and 96% smaller than FlowNet in terms of model parameters. This makes it more efficient and appropriate for embedded applications. Second, since the flow at each pyramid level is small (", "title": "" }, { "docid": "b2853b59ffb0cb70bd2f4a3cb0c03e1d", "text": "This paper presents a waveform modeling and generation method for speech bandwidth extension (BWE) using stacked dilated convolutional neural networks (CNNs) with causal or non-causal convolutional layers. Such dilated CNNs describe the predictive distribution for each wideband or high-frequency speech sample conditioned on the input narrowband speech samples. Distinguished from conventional frame-based BWE approaches, the proposed methods can model the speech waveforms directly and therefore avert the spectral conversion and phase estimation problems. Experimental results prove that the BWE methods proposed in this paper can achieve better performance than the state-of-the-art frame-based approach utilizing recurrent neural networks (RNNs) incorporating long shortterm memory (LSTM) cells in subjective preference tests.", "title": "" }, { "docid": "b049e5249d3c0fc52706a54ee767480e", "text": "In dialogical argumentation, it is often assumed that the involved parties will always correctly identify the intended statements posited by each other and realize all of the associated relations, conform to the three acceptability states (accepted, rejected, undecided), adjust their views whenever new and correct information comes in, and that a framework handling only attack relations is sufficient to represent their opinions. Although it is natural to make these assumptions as a starting point for further research, dropping some of them has become quite challenging. Probabilistic argumentation is one of the approaches that can be harnessed for more accurate user modelling. The epistemic approach allows us to represent how much a given argument is believed or disbelieved by a given person, offering us the possibility to express more than just three agreement states. It comes equipped with a wide range of postulates, including those that do not make any restrictions concerning how initial arguments should be viewed. Thus, this approach is potentially more suitable for handling beliefs of the people that have not fully disclosed their opinions or counterarguments with respect to standard Dung’s semantics. The constellation approach can be used to represent the views of different people concerning the structure of the framework we are dealing with, including situations in which not all relations are acknowledged or when they are seen differently than intended. Finally, bipolar argumentation frameworks can be used to express both positive and negative relations between arguments. In this paper we will describe the results of an experiment in which participants were asked to judge dialogues in terms of agreement and structure. 
We will compare our findings with the aforementioned assumptions as well as with the constellation and epistemic approaches to probabilistic argumentation and bipolar argumentation. Keywords— Dialogical argumentation, probabilistic argumentation, abstract argumentation ∗This research is funded by EPSRC Project EP/N008294/1 “Framework for Computational Persuasion”.We thank the reviewers for their valuable comments that helped us to improve this paper.", "title": "" }, { "docid": "6e8a9c37672ec575821da5c9c3145500", "text": "As video games become increasingly popular pastimes, it becomes more important to understand how different individuals behave when they play these games. Previous research has focused mainly on behavior in massively multiplayer online role-playing games; therefore, in the current study we sought to extend on this research by examining the connections between personality traits and behaviors in video games more generally. Two hundred and nineteen university students completed measures of personality traits, psychopathic traits, and a questionnaire regarding frequency of different behaviors during video game play. A principal components analysis of the video game behavior questionnaire revealed four factors: Aggressing, Winning, Creating, and Helping. Each behavior subscale was significantly correlated with at least one personality trait. Men reported significantly more Aggressing, Winning, and Helping behavior than women. Controlling for participant sex, Aggressing was negatively correlated with Honesty–Humility, Helping was positively correlated with Agreeableness, and Creating was negatively correlated with Conscientiousness. Aggressing was also positively correlated with all psychopathic traits, while Winning and Creating were correlated with one psychopathic trait each. Frequency of playing video games online was positively correlated with the Aggressing, Winning, and Helping scales, but not with the Creating scale. The results of the current study provide support for previous research on personality and behavior in massively multiplayer online role-playing games. 2015 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "604b46c973be0a277faa96a407dc845f", "text": "A nonlinear dynamic model for a quadrotor unmanned aerial vehicle is presented with a new vision of state parameter control which is based on Euler angles and open loop positions state observer. This method emphasizes on the control of roll, pitch and yaw angle rather than the translational motions of the UAV. For this reason the system has been presented into two cascade partial parts, the first one relates the rotational motion whose the control law is applied in a closed loop form and the other one reflects the translational motion. A dynamic feedback controller is developed to transform the closed loop part of the system into linear, controllable and decoupled subsystem. The wind parameters estimation of the quadrotor is used to avoid more sensors. Hence an estimator of resulting aerodynamic moments via Lyapunov function is developed. Performance and robustness of the proposed controller are tested in simulation.", "title": "" }, { "docid": "5f49c93d7007f0f14f1410ce7805b29a", "text": "Die Psychoedukation im Sinne eines biopsychosozialen Schmerzmodells zielt auf das Erkennen und Verändern individueller schmerzauslösender und -aufrechterhaltender Faktoren ab. Der Einfluss kognitiver Bewertungen, emotionaler Verarbeitungsprozesse und schmerzbezogener Verhaltensweisen steht dabei im Mittelpunkt. 
Die Anregung und Anleitung zu einer verbesserten Selbstbeobachtung stellt die Voraussetzung zum Einsatz aktiver Selbstkontrollstrategien und zur Erhöhung der Selbstwirksamkeitserwartung dar. Dazu zählt die Entwicklung und Erarbeitung von Schmerzbewältigungsstrategien wie z. B. Aufmerksamkeitslenkung und Genusstraining. Eine besondere Bedeutung kommt dem Aufbau einer Aktivitätenregulation zur Strukturierung eines angemessenen Verhältnisses von Erholungs- und Anforderungsphasen zu. Interventionsmöglichkeiten stellen hier die Vermittlung von Entspannungstechniken, Problemlösetraining, spezifisches Kompetenztraining sowie Elemente der kognitiven Therapie dar. Der Aufbau alternativer kognitiver und handlungsbezogener Lösungsansätze dient einer verbesserten Bewältigung internaler und externaler Stressoren. Genutzt werden die förderlichen Bedingungen gruppendynamischer Prozesse. Einzeltherapeutische Interventionen dienen der Bearbeitung spezifischer psychischer Komorbiditäten und der individuellen Unterstützung bei der beruflichen und sozialen Wiedereingliederung. Providing the patient with a pain model based on the biopsychosocial approach is one of the most important issues in psychological intervention. Illness behaviour is influenced by pain-eliciting and pain-aggravating thoughts. Identification and modification of these thoughts is essential and aims to change cognitive evaluations, emotional processing, and pain-referred behaviour. Improved self-monitoring concerning maladaptive thoughts, feelings, and behaviour enables functional coping strategies (e.g. attention diversion and learning to enjoy things) and enhances self-efficacy expectancies. Of special importance is the establishment of an appropriate balance between stress and recreation. Intervention options include teaching relaxation techniques, problem-solving strategies, and specific skills as well as applying appropriate elements of cognitive therapy. The development of alternative cognitive and action-based strategies improves the patient’s ability to cope with internal and external stressors. All of the psychological elements are carried out in a group setting. Additionally, individual therapy is offered to treat comorbidities or to support reintegration into the patient’s job.", "title": "" } ]
scidocsrr
031f2a9df778ba103d06dd671c1edfda
A 10-Bit 0.5 V 100 kS/s SAR ADC with a New rail-to-rail Comparator for Energy Limited Applications
[ { "docid": "082bf5d1d7285ce01de1f72abea5c505", "text": "A novel switched-current successive approximation ADC is presented in this paper with high speed and low power consumption. The proposed ADC contains a new high-accuracy and power-e±cient switched-current S/H circuit and a speed-improved current comparator. Designed and simulated in a 0:18m CMOS process, this 8-bit ADC achieves 46.23 dB SNDR at 1.23 MS/s consuming 73:19 W under 1.2 V voltage supply, resulting in an ENOB of 7.38-bit and an FOM of 0.357 pJ/Conv.-step.", "title": "" }, { "docid": "3f37793db0be4f874dd073972f40e1c7", "text": "The matching properties of the threshold voltage, substrate factor and current factor of MOS transistors have been analysed and measured. Improvements of the existing theory are given, as well as extensions for long distance matching and rotation of devices. The matching results have been verified by measurements and calculations on a band-gap reference circuit.", "title": "" } ]
[ { "docid": "f8c7f0fc1fb365d874766f6d1da2215c", "text": "Different works have shown that the combination of multiple loss functions is beneficial when training deep neural networks for a variety of prediction tasks. Generally, such multi-loss approaches are implemented via a weighted multi-loss objective function in which each term encodes a different desired inference criterion. The importance of each term is often set using empirically tuned hyper-parameters. In this work, we analyze the importance of the relative weighting between the different terms of a multi-loss function and propose to leverage the model’s uncertainty with respect to each loss as an automatically learned weighting parameter. We consider the application of colon gland analysis from histopathology images for which various multi-loss functions have been proposed. We show improvements in classification and segmentation accuracy when using the proposed uncertainty driven multi-loss function.", "title": "" }, { "docid": "a29ee41e8f46d1feebeb67886b657f70", "text": "Feeling emotion is a critical characteristic to distinguish people from machines. Among all the multi-modal resources for emotion detection, textual datasets are those containing the least additional information in addition to semantics, and hence are adopted widely for testing the developed systems. However, most of the textual emotional datasets consist of emotion labels of only individual words, sentences or documents, which makes it challenging to discuss the contextual flow of emotions. In this paper, we introduce EmotionLines, the first dataset with emotions labeling on all utterances in each dialogue only based on their textual content. Dialogues in EmotionLines are collected from Friends TV scripts and private Facebook messenger dialogues. Then one of seven emotions, six Ekman’s basic emotions plus the neutral emotion, is labeled on each utterance by 5 Amazon MTurkers. A total of 29,245 utterances from 2,000 dialogues are labeled in EmotionLines. We also provide several strong baselines for emotion detection models on EmotionLines in this paper.", "title": "" }, { "docid": "5fe1fa98c953d778ee27a104802e5f2b", "text": "We describe two general approaches to creating document-level maps of science. To create a local map one defines and directly maps a sample of data, such as all literature published in a set of information science journals. To create a global map of a research field one maps ‘all of science’ and then locates a literature sample within that full context. We provide a deductive argument that global mapping should create more accurate partitions of a research field than local mapping, followed by practical reasons why this may not be so. The field of information science is then mapped at the document level using both local and global methods to provide a case illustration of the differences between the methods. Textual coherence is used to assess the accuracies of both maps. We find that document clusters in the global map have significantly higher coherence than those in the local map, and that the global map provides unique insights into the field of information science that cannot be discerned from the local map. Specifically, we show that information science and computer science have a large interface and that computer science is the more progressive discipline at that interface. 
We also show that research communities in temporally linked threads have a much higher coherence than isolated communities, and that this feature can be used to predict which threads will persist into a subsequent year. Methods that could increase the accuracy of both local and global maps in the future are also discussed.", "title": "" }, { "docid": "bf1cbc78576e8631fa5a28f3f0f3c218", "text": "A current-mode dc-dc converter with an on-chip current sensor is presented in this letter. The current sensor has significant improvement on the current-sensing speed. The sensing ratio of the current sensor has low sensitivity to the variation of the process, voltage, temperature and loading. The current sensor combines the sensed inductor current signal with the compensation ramp signal and the output of the error amplifier smoothly. The settling time of the current sensor is less than 10 ns. In the current-mode dc-dc converter application, the differential output of the current sensor can be directly sent to the pulse-width modulation comparator. With the proposed current sensor, the dc-dc converter could realize a low duty cycle with a high switching frequency. The dc-dc converter has been fabricated by CSMC 0.5-μm 5-V CMOS process with a die size of 2.25 mm 2. Experimental results show that the current-mode converter can achieve a duty cycle down to 0.11 with a switching frequency up to 4 MHz. The measured transient response time is less than 6 μs as the load current changes between 50 and 600 mA, rapidly.", "title": "" }, { "docid": "c7ab6bc685029cc61a02f4596fef8818", "text": "UPON Lite focuses on users, typically domain experts without ontology expertise, minimizing the role of ontology engineers.", "title": "" }, { "docid": "7d42fd2db675eb5aa3573d3437a4d124", "text": "This paper presents a new solution for filtering current harmonics in three-phase four-wire networks. The original four-branch star (FBS) filter topology presented in this paper is characterized by a particular layout of single-phase inductances and capacitors, without using any transformer or special electromagnetic device. Via this layout, a power filter, with two different and simultaneous resonance frequencies and sequences, is achieved-one frequency for positive-/negative-sequence and another one for zero-sequence components. This filter topology can work either as a passive filter, when only passive components are employed, or as a hybrid filter, when its behavior is improved by integrating a power converter into the filter structure. The paper analyzes the proposed topology, and derives fundamental concepts about the control of the resulting hybrid power filter. From this analysis, a specific implementation of a three-phase four-wire hybrid power filter is presented as an illustrative application of the filtering topology. An extensive evaluation using simulation and experimental results from a DSP-based laboratory prototype is conducted in order to verify and validate the good performance achieved by the proposed FBS passive/hybrid power filter.", "title": "" }, { "docid": "b57859a76aea1fb5d4219068bde83283", "text": "Software vulnerabilities are the root cause of a wide range of attacks. Existing vulnerability scanning tools are able to produce a set of suspects. However, they often suffer from a high false positive rate. Convicting a suspect and vindicating false positives are mostly a highly demanding manual process, requiring a certain level of understanding of the software. 
This limitation significantly thwarts the application of these tools by system administrators or regular users who are concerned about security but lack of understanding of, or even access to, the source code. It is often the case that even developers are reluctant to inspect/fix these numerous suspects unless they are convicted by evidence. In this paper, we propose a lightweight dynamic approach which generates evidence for various security vulnerabilities in software, with the goal of relieving the manual procedure. It is based on data lineage tracing, a technique that associates each execution point precisely with a set of relevant input values. These input values can be mutated by an offline analysis to generate exploits. We overcome the efficiency challenge by using Binary Decision Diagrams (BDD). Our tool successfully generates exploits for all the known vulnerabilities we studied. We also use it to uncover a number of new vulnerabilities, proved by evidence.", "title": "" }, { "docid": "7603ee2e0519b727de6dc29e05b2049f", "text": "To what extent do we share feelings with others? Neuroimaging investigations of the neural mechanisms involved in the perception of pain in others may cast light on one basic component of human empathy, the interpersonal sharing of affect. In this fMRI study, participants were shown a series of still photographs of hands and feet in situations that are likely to cause pain, and a matched set of control photographs without any painful events. They were asked to assess on-line the level of pain experienced by the person in the photographs. The results demonstrated that perceiving and assessing painful situations in others was associated with significant bilateral changes in activity in several regions notably, the anterior cingulate, the anterior insula, the cerebellum, and to a lesser extent the thalamus. These regions are known to play a significant role in pain processing. Finally, the activity in the anterior cingulate was strongly correlated with the participants' ratings of the others' pain, suggesting that the activity of this brain region is modulated according to subjects' reactivity to the pain of others. Our findings suggest that there is a partial cerebral commonality between perceiving pain in another individual and experiencing it oneself. This study adds to our understanding of the neurological mechanisms implicated in intersubjectivity and human empathy.", "title": "" }, { "docid": "4608c8ca2cf58ca9388c25bb590a71df", "text": "Life expectancy in most countries has been increasing continually over the several few decades thanks to significant improvements in medicine, public health, as well as personal and environmental hygiene. However, increased life expectancy combined with falling birth rates are expected to engender a large aging demographic in the near future that would impose significant  burdens on the socio-economic structure of these countries. Therefore, it is essential to develop cost-effective, easy-to-use systems for the sake of elderly healthcare and well-being. Remote health monitoring, based on non-invasive and wearable sensors, actuators and modern communication and information technologies offers an efficient and cost-effective solution that allows the elderly to continue to live in their comfortable home environment instead of expensive healthcare facilities. 
These systems will also allow healthcare personnel to monitor important physiological signs of their patients in real time, assess health conditions and provide feedback from distant facilities. In this paper, we have presented and compared several low-cost and non-invasive health and activity monitoring systems that were reported in recent years. A survey on textile-based sensors that can potentially be used in wearable systems is also presented. Finally, compatibility of several communication technologies as well as future perspectives and research challenges in remote monitoring systems will be discussed.", "title": "" }, { "docid": "7bb0ea76acaf4e23312ae62d0b6321db", "text": "The European honey bee exploits floral resources efficiently and may therefore compete with solitary wild bees. Hence, conservationists and bee keepers are debating about the consequences of beekeeping for the conservation of wild bees in nature reserves. We observed flower-visiting bees on flowers of Calluna vulgaris in sites differing in the distance to the next honey-bee hive and in sites with hives present and absent in the Lüneburger Heath, Germany. Additionally, we counted wild bee ground nests in sites that differ in their distance to the next hive and wild bee stem nests and stem-nesting bee species in sites with hives present and absent. We did not observe fewer honey bees or higher wild bee flower visits in sites with different distances to the next hive (up to 1,229 m). However, wild bees visited fewer flowers and honey bee visits increased in sites containing honey-bee hives and in sites containing honey-bee hives we found fewer stem-nesting bee species. The reproductive success, measured as number of nests, was not affected by distance to honey-bee hives or their presence but by availability and characteristics of nesting resources. Our results suggest that beekeeping in the Lüneburg Heath can affect the conservation of stem-nesting bee species richness but not the overall reproduction either of stem-nesting or of ground-nesting bees. Future experiments need control sites with larger distances than 500 m to hives. Until more information is available, conservation efforts should forgo to enhance honey bee stocking rates but enhance the availability of nesting resources.", "title": "" }, { "docid": "f0a7d1543bb056d7ea02c4f11a684d28", "text": "The computer vision community has reached a point when it can start considering high-level reasoning tasks such as the \"communicative intents\" of images, or in what light an image portrays its subject. For example, an image might imply that a politician is competent, trustworthy, or energetic. We explore a variety of features for predicting these communicative intents. We study a number of facial expressions and body poses as cues for the implied nuances of the politician's personality. We also examine how the setting of an image (e.g. kitchen or hospital) influences the audience's perception of the portrayed politician. Finally, we improve the performance of an existing approach on this problem, by learning intermediate cues using convolutional neural networks. We show state of the art results on the Visual Persuasion dataset of Joo et al. [11].", "title": "" }, { "docid": "9f6f00bf0872c54fbf2ec761bf73f944", "text": "Nanoscience emerged in the late 1980s and is developed and applied in China since the middle of the 1990s. 
Although nanotechnologies have been less developed in agronomy than other disciplines, due to less investment, nanotechnologies have the potential to improve agricultural production. Here, we review more than 200 reports involving nanoscience in agriculture, livestock, and aquaculture. The major points are as follows: (1) nanotechnologies used for seeds and water improved plant germination, growth, yield, and quality. (2) Nanotechnologies could increase the storage period for vegetables and fruits. (3) For livestock and poultry breeding, nanotechnologies improved animals immunity, oxidation resistance, and production and decreased antibiotic use and manure odor. For instance, the average daily gain of pig increased by 9.9–15.3 %, the ratio of feedstuff to weight decreased by 7.5–10.3 %, and the diarrhea rate decreased by 55.6–66.7 %. (4) Nanotechnologies for water disinfection in fishpond increased water quality and increased yields and survivals of fish and prawn. (5) Nanotechnologies for pesticides increased pesticide performance threefold and reduced cost by 50 %. (6) Nano urea increased the agronomic efficiency of nitrogen fertilization by 44.5 % and the grain yield by 10.2 %, versus normal urea. (7) Nanotechnologies are widely used for rapid detection and diagnosis, notably for clinical examination, food safety testing, and animal epidemic surveillance. (8) Nanotechnologies may also have adverse effects that are so far not well known.", "title": "" }, { "docid": "caa04ee7fb10167fea167a89b7228c9b", "text": "Using dedicated hardware to do machine learning typically ends up in disaster because of cost, obsolescence, and poor software. The popularization of graphic processing units (GPUs), which are now available on every PC, provides an attractive alternative. We propose a generic 2-layer fully connected neural network GPU implementation which yields over 3/spl times/ speedup for both training and testing with respect to a 3 GHz P4 CPU.", "title": "" }, { "docid": "e7bedfa690b456a7a93e5bdae8fff79c", "text": "During the past several years, there have been a significant number of researches conducted in the area of semiconductor final test scheduling problems (SFTSP). As specific example of simultaneous multiple resources scheduling problem (SMRSP), intelligent manufacturing planning and scheduling based on meta-heuristic methods, such as Genetic Algorithm (GA), Simulated Annealing (SA), and Particle Swarm Optimization (PSO), have become the common tools for finding satisfactory solutions within reasonable computational times in real settings. However, limited researches were aiming at analyze the effects of interdependent relations during group decision-making activities. Moreover for complex and large problems, local constraints and objectives from each managerial entity, and their contributions towards the global objectives cannot be effectively represented in a single model. In this paper, we propose a novel Cooperative Estimation of Distribution Algorithm (CEDA) to overcome the challenges mentioned before. The CEDA is established based on divide-and-conquer strategy and a co-evolutionary framework. 
Considerable experiments have been conducted and the results confirmed that CEDA outperforms recent research results for scheduling problems in FMS (Flexible Manufacturing Systems).", "title": "" }, { "docid": "9828a83e8b28b3b0d302a25da9120763", "text": "For robotic manipulators that are redundant or with high degrees of freedom (dof ), an analytical solution to the inverse kinematics is very difficult or impossible. Pioneer 2 robotic arm (P2Arm) is a recently developed and widely used 5-dof manipulator. There is no effective solution to its inverse kinematics to date. This paper presents a first complete analytical solution to the inverse kinematics of the P2Arm, which makes it possible to control the arm to any reachable position in an unstructured environment. The strategies developed in this paper could also be useful for solving the inverse kinematics problem of other types of robotic arms.", "title": "" }, { "docid": "9117bb0ed6ab5fb573f16b5a09798711", "text": "When does knowledge transfer benefit performance? Combining field data from a global consulting firm with an agent-based model, we examine how efforts to supplement one’s knowledge from coworkers interact with individual, organizational, and environmental characteristics to impact organizational performance. We find that once cost and interpersonal exchange are included in the analysis, the impact of knowledge transfer is highly contingent. Depending on specific characteristics and circumstances, knowledge transfer can better, matter little to, or even harm performance. Three illustrative studies clarify puzzling past results and offer specific boundary conditions: (1) At the individual level, better organizational support for employee learning diminishes the benefit of knowledge transfer for organizational performance. (2) At the organization level, broader access to organizational memory makes global knowledge transfer less beneficial to performance. (3) When the organizational environment becomes more turbulent, the organizational performance benefits of knowledge transfer decrease. The findings imply that organizations may forgo investments in both organizational memory and knowledge exchange, that wide-ranging knowledge exchange may be unimportant or even harmful for performance, and that organizations operating in turbulent environments may find that investment in knowledge exchange undermines performance rather than enhances it. At a time when practitioners are urged to make investments in facilitating knowledge transfer and collaboration, appreciation of the complex relationship between knowledge transfer and performance will help in reaping benefits while avoiding liabilities.", "title": "" }, { "docid": "864adf6f82a0d1af98339f92035b15fc", "text": "Typically in neuroimaging we are looking to extract some pertinent information from imperfect, noisy images of the brain. This might be the inference of percent changes in blood flow in perfusion FMRI data, segmentation of subcortical structures from structural MRI, or inference of the probability of an anatomical connection between an area of cortex and a subthalamic nucleus using diffusion MRI. In this article we will describe how Bayesian techniques have made a significant impact in tackling problems such as these, particularly in regards to the analysis tools in the FMRIB Software Library (FSL). 
We shall see how Bayes provides a framework within which we can attempt to infer on models of neuroimaging data, while allowing us to incorporate our prior belief about the brain and the neuroimaging equipment in the form of biophysically informed or regularising priors. It allows us to extract probabilistic information from the data, and to probabilistically combine information from multiple modalities. Bayes can also be used to not only compare and select between models of different complexity, but also to infer on data using committees of models. Finally, we mention some analysis scenarios where Bayesian methods are impractical, and briefly discuss some practical approaches that we have taken in these cases.", "title": "" }, { "docid": "a2082f1b4154cd11e94eff18a016e91e", "text": "1 During the summer of 2005, I discovered that there was not a copy of my dissertation available from the library at McGill University. I was, however, able to obtain a copy of it on microfilm from another university that had initially obtained it on interlibrary loan. I am most grateful to Vicki Galbraith who typed this version from that copy, which except for some minor variations due to differences in type size and margins (plus this footnote, of course) is identical to that on the microfilm. ACKNOWLEDGEMENTS 1 The writer is grateful to Dr. J. T. McIlhone, Associate General Director in Charge of English Classes of the Montreal Catholic School Board, for his kind cooperation in making subjects available, and to the Principals and French teachers of each high school for their assistance and cooperation during the testing programs. advice on the statistical analysis. In addition, the writer would like to express his appreciation to Mr. K. Tunstall for his assistance in the difficult task of interviewing the parents of each student. Finally, the writer would like to express his gratitude to Janet W. Gardner for her invaluable assistance in all phases of the research program.", "title": "" }, { "docid": "e483d914e00fa46a6be188fabd396165", "text": "Assessing distance betweeen the true and the sample distribution is a key component of many state of the art generative models, such as Wasserstein Autoencoder (WAE). Inspired by prior work on Sliced-Wasserstein Autoencoders (SWAE) and kernel smoothing we construct a new generative model – Cramer-Wold AutoEncoder (CWAE). CWAE cost function, based on introduced Cramer-Wold distance between samples, has a simple closed-form in the case of normal prior. As a consequence, while simplifying the optimization procedure (no need of sampling necessary to evaluate the distance function in the training loop), CWAE performance matches quantitatively and qualitatively that of WAE-MMD (WAE using maximum mean discrepancy based distance function) and often improves upon SWAE.", "title": "" }, { "docid": "737f75e39cbf1b5226985e866a44c106", "text": "A security-enhanced agile software development process, SEAP, is introduced in the development of a mobile money transfer system at Ericsson Corp. A specific characteristic of SEAP is that it includes a security group consisting of four different competences, i.e., Security manager, security architect, security master and penetration tester. Another significant feature of SEAP is an integrated risk analysis process. In analyzing risks in the development of the mobile money transfer system, a general finding was that SEAP either solves risks that were previously postponed or solves a larger proportion of the risks in a timely manner. 
The previous software development process, i.e., The baseline process of the comparison outlined in this paper, required 2.7 employee hours spent for every risk identified in the analysis process compared to, on the average, 1.5 hours for the SEAP. The baseline development process left 50% of the risks unattended in the software version being developed, while SEAP reduced that figure to 22%. Furthermore, SEAP increased the proportion of risks that were corrected from 12.5% to 67.1%, i.e., More than a five times increment. This is important, since an early correction may avoid severe attacks in the future. The security competence in SEAP accounts for 5% of the personnel cost in the mobile money transfer system project. As a comparison, the corresponding figure, i.e., For security, was 1% in the previous development process.", "title": "" } ]
scidocsrr
397e1ca66cd9cc314ee3b6182ca6b548
On Organizational Becoming: Rethinking Organizational Change
[ { "docid": "efd723e99064699de2ed5400887c1eda", "text": "Building on a formal theory of the structural aspects of organizational change initiated in Hannan, Pólos, and Carroll (2002a, 2002b), this paper focuses on structural inertia. We define inertia as a persistent organizational resistance to changing architectural features. We examine the evolutionary consequences of architectural inertia. The main theorem holds that selection favors architectural inertia in the sense that the median level of inertia in cohort of organizations presumably increases over time. A second theorem holds that the selection intensity favoring architectural inertia is greater when foresight about the consequences of changes is more limited. According to the prior theory of Hannan, Pólos, and Carroll (2002a, 2002b), foresight is limited by complexity and opacity. Thus it follows that the selection intensity favoring architectural inertia is stronger in populations composed of complex and opaque organizations than in those composed of simple and transparent ones. ∗This research was supported by fellowships from the Netherlands Institute for Advanced Study and by the Stanford Graduate School of Business Trust, ERIM at Erasmus University, and the Centre for Formal Studies in the Social Sciences at Lorand Eötvös University. We benefited from the comments of Jim Baron, Dave Barron, Gábor Péli, Joel Podolny, and the participants in the workshop of the Nagymaros Group on Organizational Ecology and in the Stanford Strategy Conference. †Stanford University ‡Loránd Eötvös University, Budapest and Erasmus University, Rotterdam §Stanford University", "title": "" }, { "docid": "9c5535f218f6228ba6b2a8e5fdf93371", "text": "Recent analyses of organizational change suggest a growing concern with the tempo of change, understood as the characteristic rate, rhythm, or pattern of work or activity. Episodic change is contrasted with continuous change on the basis of implied metaphors of organizing, analytic frameworks, ideal organizations, intervention theories, and roles for change agents. Episodic change follows the sequence unfreeze-transition-refreeze, whereas continuous change follows the sequence freeze-rebalance-unfreeze. Conceptualizations of inertia are seen to underlie the choice to view change as episodic or continuous.", "title": "" } ]
[ { "docid": "b168f298448b3ba16b7f585caae7baa6", "text": "Not only how good or bad people feel on average, but also how their feelings fluctuate across time is crucial for psychological health. The last 2 decades have witnessed a surge in research linking various patterns of short-term emotional change to adaptive or maladaptive psychological functioning, often with conflicting results. A meta-analysis was performed to identify consistent relationships between patterns of short-term emotion dynamics-including patterns reflecting emotional variability (measured in terms of within-person standard deviation of emotions across time), emotional instability (measured in terms of the magnitude of consecutive emotional changes), and emotional inertia of emotions over time (measured in terms of autocorrelation)-and relatively stable indicators of psychological well-being or psychopathology. We determined how such relationships are moderated by the type of emotional change, type of psychological well-being or psychopathology involved, valence of the emotion, and methodological factors. A total of 793 effect sizes were identified from 79 articles (N = 11,381) and were subjected to a 3-level meta-analysis. The results confirmed that overall, low psychological well-being co-occurs with more variable (overall ρ̂ = -.178), unstable (overall ρ̂ = -.205), but also more inert (overall ρ̂ = -.151) emotions. These effect sizes were stronger when involving negative compared with positive emotions. Moreover, the results provided evidence for consistency across different types of psychological well-being and psychopathology in their relation with these dynamical patterns, although specificity was also observed. The findings demonstrate that psychological flourishing is characterized by specific patterns of emotional fluctuations across time, and provide insight into what constitutes optimal and suboptimal emotional functioning. (PsycINFO Database Record", "title": "" }, { "docid": "41cdd0e8bcbffbd4c66b8088e26b94fe", "text": "We propose a neural network for 3D point cloud processing that exploits spherical convolution kernels and octree partitioning of space. The proposed metric-based spherical kernels systematically quantize point neighborhoods to identify local geometric structures in data, while maintaining the properties of translation-invariance and asymmetry. The network architecture itself is guided by octree data structuring that takes full advantage of the sparse nature of irregular point clouds. We specify spherical kernels with the help of neurons in each layer that in turn are associated with spatial locations. We exploit this association to avert dynamic kernel generation during network training, that enables efficient learning with high resolution point clouds. We demonstrate the utility of the spherical convolutional neural network for 3D object classification on standard benchmark datasets.", "title": "" }, { "docid": "917287666755fe4b1832f5b6025414bb", "text": "The Piver classification of radical hysterectomy for the treatment of cervical cancer is outdated and misused. The Surgery Committee of the Gynecological Cancer Group of the European Organization for Research and Treatment of Cancer (EORTC) produced, approved, and adopted a revised classification. It is hoped that at least within the EORTC participating centers, a standardization of procedures is achieved. 
The clinical indications of the new classification are discussed.", "title": "" }, { "docid": "ad5a8c3ee37219868d056b341300008e", "text": "The challenges of 4G are multifaceted. First, 4G requires multiple-input, multiple-output (MIMO) technology, and mobile devices supporting MIMO typically have multiple antennas. To obtain the benefits of MIMO communications systems, antennas typically must be properly configured to take advantage of the independent signal paths that can exist in the communications channel environment. [1] With proper design, one antenna’s radiation is prevented from traveling into the neighboring antenna and being absorbed by the opposite load circuitry. Typically, a combination of antenna separation and polarization is used to achieve the required signal isolation and independence. However, when the area inside devices such as smartphones, USB modems, and tablets is extremely limited, this approach often is not effective in meeting industrial design and performance criteria. Second, new LTE networks are expected to operate alongside all the existing services, such as 3G voice/data, Wi-Fi, Bluetooth, etc. Third, this problem gets even harder in the 700 MHz LTE band because the typical handset is not large enough to properly resonate at that frequency.", "title": "" }, { "docid": "7159d958139d684e4a74abe252788a40", "text": "Exploration in environments with sparse rewards has been a persistent problem in reinforcement learning (RL). Many tasks are natural to specify with a sparse reward, and manually shaping a reward function can result in suboptimal performance. However, finding a non-zero reward is exponentially more difficult with increasing task horizon or action dimensionality. This puts many real-world tasks out of practical reach of RL methods. In this work, we use demonstrations to overcome the exploration problem and successfully learn to perform long-horizon, multi-step robotics tasks with continuous control such as stacking blocks with a robot arm. Our method, which builds on top of Deep Deterministic Policy Gradients and Hindsight Experience Replay, provides an order of magnitude of speedup over RL on simulated robotics tasks. It is simple to implement and makes only the additional assumption that we can collect a small set of demonstrations. Furthermore, our method is able to solve tasks not solvable by either RL or behavior cloning alone, and often ends up outperforming the demonstrator policy.", "title": "" }, { "docid": "e5edb616b5d0664cf8108127b0f8684c", "text": "Night vision systems have become an important research area in recent years. Due to variations in weather conditions such as snow, fog, and rain, night images captured by camera may contain high level of noise. These conditions, in real life situations, may vary from no noise to extreme amount of noise corrupting images. Thus, ideal image restoration systems at night must consider various levels of noise and should have a technique to deal with wide range of noisy situations. In this paper, we have presented a new method that works well with different signal to noise ratios ranging from -1.58 dB to 20 dB. For moderate noise, Wigner distribution based algorithm gives good results, whereas for extreme amount of noise 2nd order Wigner distribution is used. The performance of our restoration technique is evaluated using MSE criteria. 
The results show that our method is capable of dealing with the wide range of Gaussian noise and gives consistent performance throughout.", "title": "" }, { "docid": "d341486002f2b0f5e620f5a63873577c", "text": "Various Internet solutions take their power processing and analysis from cloud computing services. Internet of Things (IoT) applications started discovering the benefits of computing, processing, and analysis on the device itself aiming to reduce latency for time-critical applications. However, on-device processing is not suitable for resource-constraints IoT devices. Edge computing (EC) came as an alternative solution that tends to move services and computation more closer to consumers, at the edge. In this letter, we study and discuss the applicability of merging deep learning (DL) models, i.e., convolutional neural network (CNN), recurrent neural network (RNN), and reinforcement learning (RL), with IoT and information-centric networking which is a promising future Internet architecture, combined all together with the EC concept. Therefore, a CNN model can be used in the IoT area to exploit reliably data from a complex environment. Moreover, RL and RNN have been recently integrated into IoT, which can be used to take the multi-modality of data in real-time applications into account.", "title": "" }, { "docid": "1e4a74d8d4ae131467e12911fd6ac281", "text": "Google Scholar has been well received by the research community. Its promises of free, universal and easy access to scientific literature as well as the perception that it covers better than other traditional multidisciplinary databases the areas of the Social Sciences and the Humanities have contributed to the quick expansion of Google Scholar Citations and Google Scholar Metrics: two new bibliometric products that offer citation data at the individual level and at journal level. In this paper we show the results of a experiment undertaken to analyze Google Scholar's capacity to detect citation counting manipulation. For this, six documents were uploaded to an institutional web domain authored by a false researcher and referencing all the publications of the members of the EC3 research group at the University of Granada. The detection of Google Scholar of these papers outburst the citations included in the Google Scholar Citations profiles of the authors. We discuss the effects of such outburst and how it could affect the future development of such products not only at individual level but also at journal level, especially if Google Scholar persists with its lack of transparency.", "title": "" }, { "docid": "c0a2fc4ffe5910ffe9a4a9fe983106c3", "text": "Robust inspection is important to ensure the safety of nuclear power plant components. An automated approach would require detecting often low contrast cracks that could be surrounded by or even within textures with similar appearances such as welding, scratches and grind marks. We propose a crack detection method for nuclear power plant inspection videos by fine tuning a deep neural network for detecting local patches containing cracks which are then grouped in spatial-temporal space for group-level classification. We evaluate the proposed method on a data set consisting of 17 videos consisting of nearly 150,000 frames of inspection video and provide comparison to prior methods.", "title": "" }, { "docid": "0c0d0b6d4697b1a0fc454b995bcda79a", "text": "Online multiplayer games, such as Gears of War and Halo, use skill-based matchmaking to give players fair and enjoyable matches. 
They depend on a skill rating system to infer accurate player skills from historical data. TrueSkill is a popular and effective skill rating system, working from only the winner and loser of each game. This paper presents an extension to TrueSkill that incorporates additional information that is readily available in online shooters, such as player experience, membership in a squad, the number of kills a player scored, tendency to quit, and skill in other game modes. This extension, which we call TrueSkill2, is shown to significantly improve the accuracy of skill ratings computed from Halo 5 matches. TrueSkill2 predicts historical match outcomes with 68% accuracy, compared to 52% accuracy for TrueSkill.", "title": "" }, { "docid": "464f7d25cb2a845293a3eb8c427f872f", "text": "Autism spectrum disorder is the fastest growing developmental disability in the United States. As such, there is an unprecedented need for research examining factors contributing to the health disparities in this population. This research suggests a relationship between the levels of physical activity and health outcomes. In fact, excessive sedentary behavior during early childhood is associated with a number of negative health outcomes. A total of 53 children participated in this study, including typically developing children (mean age = 42.5 ± 10.78 months, n = 19) and children with autism spectrum disorder (mean age = 47.42 ± 12.81 months, n = 34). The t-test results reveal that children with autism spectrum disorder spent significantly less time per day in sedentary behavior when compared to the typically developing group ( t(52) = 4.57, p < 0.001). Furthermore, the results from the general linear model reveal that there is no relationship between motor skills and the levels of physical activity. The ongoing need for objective measurement of physical activity in young children with autism spectrum disorder is of critical importance as it may shed light on an often overlooked need for early community-based interventions to increase physical activity early on in development.", "title": "" }, { "docid": "2c7bafac9d4c4fedc43982bd53c99228", "text": "One of the uniqueness of business is for firm to be customer focus. Study have shown that this could be achieved through blockchain technology in enhancing customer loyalty programs (Michael J. Casey 2015; John Ream et al 2016; Sean Dennis 2016; James O'Brien and Dave Montali, 2016; Peiguss 2012; Singh, Khan, 2012; and among others). Recent advances in block chain technology have provided the tools for marketing managers to create a new generation of being able to assess the level of control companies want to have over customer data and activities as well as security/privacy issues that always arise with every additional participant of the network While block chain technology is still in the early stages of adoption, it could prove valuable for loyalty rewards program providers. Hundreds of blockchain initiatives are already underway in various industries, particularly airline services, even though standardization is far from a reality. One attractive feature of loyalty rewards is that they are not core to business revenue and operations and companies willing to implement blockchain for customer loyalty programs benefit lower administrative costs, improved customer experiences, and increased user engagement (Michael J. 
Casey, 2015; James O'Brien and Dave Montali 2016; Peiguss 2012; Singh, Abstract: In today business world, companies have accelerated the use of Blockchain technology to enhance the brand recognition of their products and services. Company believes that the integration of Blockchain into the current business marketing strategy will enhance the growth of their products, and thus acting as a customer loyalty solution. The goal of this study is to obtain a deep understanding of the impact of blockchain technology in enhancing customer loyalty programs of airline business. To achieve the goal of the study, a contextualized and literature based research instrument was used to measure the application of the investigated “constructs”, and a survey was conducted to collect data from the sample population. A convenience sample of total (450) Questionnaires were distributed to customers, and managers of the surveyed airlines who could be reached by the researcher. 274 to airline customers/passengers, and the remaining 176 to managers in the various airlines researched. Questionnaires with instructions were hand-delivered to respondents. Out of the 397 completed questionnaires returned, 359 copies were found usable for the present study, resulting in an effective response rate of 79.7%. The respondents had different social, educational, and occupational backgrounds. The research instrument showed encouraging evidence of reliability and validity. Data were analyzed using descriptive statistics, percentages and ttest analysis. The findings clearly show that there is significant evidence that blockchain technology enhance customer loyalty programs of airline business. It was discovered that Usage of blockchain technology is emphasized by the surveyed airlines operators in Nigeria., the extent of effective usage of customer loyalty programs is related to blockchain technology, and that he level or extent of effective usage of blockchain technology does affect the achievement of customer loyalty program goals and objectives. Feedback from the research will assist to expand knowledge as to the usefulness of blockchain technology being a customer loyalty solution.", "title": "" }, { "docid": "c2ad090abd3f540436d3385bb6f3f013", "text": "We propose a novel neural network model for joint part-of-speech (POS) tagging and dependency parsing. Our model extends the well-known BIST graph-based dependency parser (Kiperwasser and Goldberg, 2016) by incorporating a BiLSTM-based tagging component to produce automatically predicted POS tags for the parser. On the benchmark English Penn treebank, our model obtains strong UAS and LAS scores at 94.51% and 92.87%, respectively, producing 1.5+% absolute improvements to the BIST graph-based parser, and also obtaining a state-of-the-art POS tagging accuracy at 97.97%. Furthermore, experimental results on parsing 61 “big” Universal Dependencies treebanks from raw texts show that our model outperforms the baseline UDPipe (Straka and Straková, 2017) with 0.8% higher average POS tagging score and 3.6% higher average LAS score. In addition, with our model, we also obtain state-of-the-art downstream task scores for biomedical event extraction and opinion analysis applications. Our code is available together with all pretrained models at: https://github. 
com/datquocnguyen/jPTDP.", "title": "" }, { "docid": "0e45e57b4e799ebf7e8b55feded7e9e1", "text": "IMPORTANCE\nIt is increasingly evident that Parkinson disease (PD) is not a single entity but rather a heterogeneous neurodegenerative disorder.\n\n\nOBJECTIVE\nTo evaluate available evidence, based on findings from clinical, imaging, genetic and pathologic studies, supporting the differentiation of PD into subtypes.\n\n\nEVIDENCE REVIEW\nWe performed a systematic review of articles cited in PubMed between 1980 and 2013 using the following search terms: Parkinson disease, parkinsonism, tremor, postural instability and gait difficulty, and Parkinson disease subtypes. The final reference list was generated on the basis of originality and relevance to the broad scope of this review.\n\n\nFINDINGS\nSeveral subtypes, such as tremor-dominant PD and postural instability gait difficulty form of PD, have been found to cluster together. Other subtypes also have been identified, but validation by subtype-specific biomarkers is still lacking.\n\n\nCONCLUSIONS AND RELEVANCE\nSeveral PD subtypes have been identified, but the pathogenic mechanisms underlying the observed clinicopathologic heterogeneity in PD are still not well understood. Further research into subtype-specific diagnostic and prognostic biomarkers may provide insights into mechanisms of neurodegeneration and improve epidemiologic and therapeutic clinical trial designs.", "title": "" }, { "docid": "a90f865e053b9339052a4d00281dbd03", "text": "Generation of 3D data by deep neural network has been attracting increasing attention in the research community. The majority of extant works resort to regular representations such as volumetric grids or collection of images, however, these representations obscure the natural invariance of 3D shapes under geometric transformations, and also suffer from a number of other issues. In this paper we address the problem of 3D reconstruction from a single image, generating a straight-forward form of output &#x2013; point cloud coordinates. Along with this problem arises a unique and interesting issue, that the groundtruth shape for an input image may be ambiguous. Driven by this unorthordox output form and the inherent ambiguity in groundtruth, we design architecture, loss function and learning paradigm that are novel and effective. Our final solution is a conditional shape sampler, capable of predicting multiple plausible 3D point clouds from an input image. In experiments not only can our system outperform state-of-the-art methods on single image based 3D reconstruction benchmarks, but it also shows strong performance for 3D shape completion and promising ability in making multiple plausible predictions.", "title": "" }, { "docid": "0cae8939c57ff3713d7321102c80816e", "text": "In this paper, we propose using 3D Convolutional Neural Networks for large scale user-independent continuous gesture recognition. We have trained an end-to-end deep network for continuous gesture recognition (jointly learning both the feature representation and the classifier). The network performs three-dimensional (i.e. space-time) convolutions to extract features related to both the appearance and motion from volumes of color frames. Space-time invariance of the extracted features is encoded via pooling layers. The earlier stages of the network are partially initialized using the work of Tran et al. before being adapted to the task of gesture recognition. 
An earlier version of the proposed method, which was trained for 11,250 iterations, was submitted to ChaLearn 2016 Continuous Gesture Recognition Challenge and ranked 2nd with the Mean Jaccard Index Score of 0.269235. When the proposed method was further trained for 28,750 iterations, it achieved state-of-the-art performance on the same dataset, yielding a 0.314779 Mean Jaccard Index Score.", "title": "" }, { "docid": "31fca4faa53520b240267562c9e394fe", "text": "Purpose – The aim of this study was two-fold: first, to examine the noxious effects of presenteeism on employees’ work well-being in a cross-cultural context involving Chinese and British employees; second, to explore the role of supervisory support as a pan-cultural stress buffer in the presenteeism process. Design/methodology/approach – Using structured questionnaires, the authors compared data collected from samples of 245 Chinese and 128 British employees working in various organizations and industries. Findings – Cross-cultural comparison revealed that the act of presenteeism was more prevalent among Chinese and they reported higher levels of strains than their British counterparts. Hierarchical regression analyses showed that presenteeism had noxious effects on exhaustion for both Chinese and British employees. Moreover, supervisory support buffered the negative impact of presenteeism on exhaustion for both Chinese and British employees. Specifically, the negative relation between presenteeism and exhaustion was stronger for those with more supervisory support. Practical implications – Presenteeism may be used as a career-protecting or career-promoting tactic. However, the negative effects of this behavior on employees’ work well-being across the culture divide should alert us to re-think its pros and cons as a career behavior. Employees in certain cultures (e.g. the hardworking Chinese) may exhibit more presenteeism behaviour, thus are in greater risk of ill-health. Originality/value – This is the first cross-cultural study demonstrating the universality of the act of presenteeism and its damaging effects on employees’ well-being. The authors’ findings of the buffering role of supervisory support across cultural contexts highlight the necessity to incorporate resources in mitigating the harmful impact of presenteeism.", "title": "" }, { "docid": "461062a51b0c33fcbb0f47529f3a6fba", "text": "Release of ATP from astrocytes is required for Ca2+ wave propagation among astrocytes and for feedback modulation of synaptic functions. However, the mechanism of ATP release and the source of ATP in astrocytes are still not known. Here we show that incubation of astrocytes with FM dyes leads to selective labelling of lysosomes. Time-lapse confocal imaging of FM dye-labelled fluorescent puncta, together with extracellular quenching and total-internal-reflection fluorescence microscopy (TIRFM), demonstrated directly that extracellular ATP or glutamate induced partial exocytosis of lysosomes, whereas an ischaemic insult with potassium cyanide induced both partial and full exocytosis of these organelles. We found that lysosomes contain abundant ATP, which could be released in a stimulus-dependent manner. 
Selective lysis of lysosomes abolished both ATP release and Ca2+ wave propagation among astrocytes, implicating physiological and pathological functions of regulated lysosome exocytosis in these cells.", "title": "" }, { "docid": "3c8e85a977df74c2fd345db9934d4699", "text": "The abstract paragraph should be indented 1/2 inch (3 picas) on both left and righthand margins. Use 10 point type, with a vertical spacing of 11 points. The word Abstract must be centered, bold, and in point size 12. Two line spaces precede the abstract. The abstract must be limited to one paragraph.", "title": "" } ]
scidocsrr
8682983d0f8b0c24bec9756a7d875b17
Relative localization and communication module for small-scale multi-robot systems
[ { "docid": "1e6cec12054c46442819f9595d07ae09", "text": "Most of the research in the field of robotics is focussed on solving the problem of Simultaneous Localization and Mapping (SLAM). In general, the problem is solved using a single robot. In the article written by R. Grabowski, C. Paredis and P. Khosla, called “Heterogeneous Teams of Modular Robots for Mapping and Exploration”, a novel localization method is presented based on multiple robots [Grabowski, 2000]. For this purpose the relative distance between the different robots is calculated. These measurements, together with the positions estimated using dead reckoning, are used to determine the most likely new positions of the agents. Knowing the positions is essential when pursuing accurate (team) mapping capabilities. The proposed method makes it possible for a heterogeneous team of modular centimeter-scale robots to collaborate and map unexplored environments.", "title": "" } ]
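The positive passage above states the idea only in words: each robot dead-reckons its own pose and then uses measured distances to its teammates to correct that estimate. The Python sketch below is an illustration of that idea under simple stated assumptions (2-D positions, teammate position estimates already available, scalar noise weights); the function and variable names are hypothetical and are not taken from the cited Grabowski et al. work.

```python
# Illustrative sketch (not the cited method): refine a robot's dead-reckoned
# position with range measurements to teammates via a Gauss-Newton step that
# also keeps a soft prior on the dead-reckoned estimate.
import numpy as np

def refine_position(prior_xy, prior_sigma, teammate_xy, ranges, range_sigma, iters=10):
    """prior_xy: (2,) dead-reckoned position; teammate_xy: (N, 2) teammate
    position estimates; ranges: (N,) measured distances to those teammates."""
    x = prior_xy.astype(float)
    for _ in range(iters):
        diff = x - teammate_xy                       # (N, 2)
        pred = np.maximum(np.linalg.norm(diff, axis=1), 1e-9)  # predicted ranges
        # Residuals: range mismatches plus a soft pull toward dead reckoning
        r = np.concatenate([(pred - ranges) / range_sigma,
                            (x - prior_xy) / prior_sigma])
        # Jacobian of the residuals with respect to x
        J = np.vstack([diff / (pred[:, None] * range_sigma),
                       np.eye(2) / prior_sigma])
        dx, *_ = np.linalg.lstsq(J, -r, rcond=None)
        x += dx
        if np.linalg.norm(dx) < 1e-6:
            break
    return x

# Example: true position (1, 2), noisy dead reckoning, ranges to three teammates
rng = np.random.default_rng(0)
teammates = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])
true_xy = np.array([1.0, 2.0])
meas = np.linalg.norm(true_xy - teammates, axis=1) + rng.normal(0, 0.05, 3)
print(refine_position(np.array([1.4, 1.6]), 0.5, teammates, meas, 0.05))
```

In a full system of this kind, each robot would presumably run such a refinement whenever a new set of inter-robot ranges arrives, with its dead-reckoned pose serving as the prior.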
[ { "docid": "5e3575b45ffaeb2587d7e6531609bd1c", "text": "These last years, several new home automation boxes appeared on the market, the new radio-based protocols facilitating their deployment with respect to previously wired solutions. Coupled with the wider availability of connected objects, these protocols have allowed new users to set up home automation systems by themselves. In this paper, we relate an in situ observational study of these builders in order to understand why and how the smart habitats were developed and used. We led 10 semi-structured interviews in households composed of at least 2 adults and equipped for at least 1 year, and 47 home automation builders answered an online questionnaire at the end of the study. Our study confirms, specifies and exhibits additional insights about usages and means of end-user development in the context of home automation.", "title": "" }, { "docid": "fa05d004df469e8f83fa4fdee9909a6f", "text": "Accurate velocity estimation is an important basis for robot control, but especially challenging for highly elastically driven robots. These robots show large swing or oscillation effects if they are not damped appropriately during the performed motion. In this letter, we consider an ultralightweight tendon-driven series elastic robot arm equipped with low-resolution joint position encoders. We propose an adaptive Kalman filter for velocity estimation that is suitable for these kinds of robots with a large range of possible velocities and oscillation frequencies. Based on an analysis of the parameter characteristics of the measurement noise variance, an update rule based on the filter position error is developed that is easy to adjust for use with different sensors. Evaluation of the filter both in simulation and in robot experiments shows a smooth and accurate performance, well suited for control purposes.", "title": "" }, { "docid": "d52bfde050e6535645c324e7006a50e7", "text": "Modern machine learning algorithms are increasingly computationally demanding, requiring specialized hardware and distributed computation to achieve high performance in a reasonable time frame. Many hyperparameter search algorithms have been proposed for improving the efficiency of model selection, however their adaptation to the distributed compute environment is often ad-hoc. We propose Tune, a unified framework for model selection and training that provides a narrow-waist interface between training scripts and search algorithms. We show that this interface meets the requirements for a broad range of hyperparameter search algorithms, allows straightforward scaling of search to large clusters, and simplifies algorithm implementation. We demonstrate the implementation of several state-of-the-art hyperparameter search algorithms in Tune. Tune is available at http://ray.readthedocs.io/en/latest/tune.html.", "title": "" }, { "docid": "baefc6e7e7968651f3e36acfd62b094d", "text": "The task of paraphrasing is inherently familiar to speakers of all languages. Moreover, the task of automatically generating or extracting semantic equivalences for the various units of language—words, phrases, and sentences—is an important part of natural language processing (NLP) and is being increasingly employed to improve the performance of several NLP applications. 
In this article, we attempt to conduct a comprehensive and application-independent survey of data-driven phrasal and sentential paraphrase generation methods, while also conveying an appreciation for the importance and potential use of paraphrases in the field of NLP research. Recent work done in manual and automatic construction of paraphrase corpora is also examined. We also discuss the strategies used for evaluating paraphrase generation techniques and briefly explore some future trends in paraphrase generation.", "title": "" }, { "docid": "c7c63f08639660f935744309350ab1e0", "text": "A composite of graphene oxide supported by needle-like MnO(2) nanocrystals (GO-MnO(2) nanocomposites) has been fabricated through a simple soft chemical route in a water-isopropyl alcohol system. The formation mechanism of these intriguing nanocomposites investigated by transmission electron microscopy and Raman and ultraviolet-visible absorption spectroscopy is proposed as intercalation and adsorption of manganese ions onto the GO sheets, followed by the nucleation and growth of the crystal species in a double solvent system via dissolution-crystallization and oriented attachment mechanisms, which in turn results in the exfoliation of GO sheets. Interestingly, it was found that the electrochemical performance of as-prepared nanocomposites could be enhanced by the chemical interaction between GO and MnO(2). This method provides a facile and straightforward approach to deposit MnO(2) nanoparticles onto the graphene oxide sheets (single layer of graphite oxide) and may be readily extended to the preparation of other classes of hybrids based on GO sheets for technological applications.", "title": "" }, { "docid": "b5bb280c7ce802143a86b9261767d9a6", "text": "Existing person re-identification (re-id) benchmarks and algorithms mainly focus on matching cropped pedestrian images between queries and candidates. However, it is different from real-world scenarios where the annotations of pedestrian bounding boxes are unavailable and the target person needs to be found from whole images. To close the gap, we investigate how to localize and match query persons from the scene images without relying on the annotations of candidate boxes. Instead of breaking it down into two separate tasks—pedestrian detection and person re-id, we propose an end-to-end deep learning framework to jointly handle both tasks. A random sampling softmax loss is proposed to effectively train the model under the supervision of sparse and unbalanced labels. On the other hand, existing benchmarks are small in scale and the samples are collected from a few fixed camera views with low scene diversities. To address this issue, we collect a largescale and scene-diversified person search dataset, which contains 18,184 images, 8,432 persons, and 99,809 annotated bounding boxes1. We evaluate our approach and other baselines on the proposed dataset, and study the influence of various factors. Experiments show that our method achieves the best result.", "title": "" }, { "docid": "0195e112c19f512b7de6a7f00e9f1099", "text": "Medication-related osteonecrosis of the jaw (MRONJ) is a severe adverse drug reaction, consisting of progressive bone destruction in the maxillofacial region of patients. ONJ can be caused by two pharmacological agents: Antiresorptive (including bisphosphonates (BPs) and receptor activator of nuclear factor kappa-B ligand inhibitors) and antiangiogenic. MRONJ pathophysiology is not completely elucidated. 
There are several suggested hypothesis that could explain its unique localization to the jaws: Inflammation or infection, microtrauma, altered bone remodeling or over suppression of bone resorption, angiogenesis inhibition, soft tissue BPs toxicity, peculiar biofilm of the oral cavity, terminal vascularization of the mandible, suppression of immunity, or Vitamin D deficiency. Dental screening and adequate treatment are fundamental to reduce the risk of osteonecrosis in patients under antiresorptive or antiangiogenic therapy, or before initiating the administration. The treatment of MRONJ is generally difficult and the optimal therapy strategy is still to be established. For this reason, prevention is even more important. It is suggested that a multidisciplinary team approach including a dentist, an oncologist, and a maxillofacial surgeon to evaluate and decide the best therapy for the patient. The choice between a conservative treatment and surgery is not easy, and it should be made on a case by case basis. However, the initial approach should be as conservative as possible. The most important goals of treatment for patients with established MRONJ are primarily the control of infection, bone necrosis progression, and pain. The aim of this paper is to represent the current knowledge about MRONJ, its preventive measures and management strategies.", "title": "" }, { "docid": "799bc245ecfabf59416432ab62fe9320", "text": "This study examines resolution skills in phishing email detection, defined as the abilities of individuals to discern correct judgments from incorrect judgments in probabilistic decisionmaking. An illustration of the resolution skills is provided. A number of antecedents to resolution skills in phishing email detection, including familiarity with the sender, familiarity with the email, online transaction experience, prior victimization of phishing attack, perceived selfefficacy, time to judgment, and variability of time in judgments, are examined. Implications of the study are further discussed.", "title": "" }, { "docid": "3e142a338a98e3a3c9a65fea07473cf8", "text": "In this paper we develop a new family of Ordered Weighted Averaging (OWA) operators. Weight vector is obtained from a desired orness of the operator. Using Faulhaber’s formulas we obtain direct and simple expressions for the weight vector without any iteration loop. With the exception of one weight, the remaining follow a straight line relation. As a result, a fast and robust algorithm is developed. The resulting weight vector is suboptimal according with the Maximum Entropy criterion, but it is very close to the optimal. Comparisons are done with other procedures.", "title": "" }, { "docid": "122ed18a623510052664996c7ef4b4bb", "text": "A number of sensor applications in recent years collect data which can be directly associated with human interactions. Some examples of such applications include GPS applications on mobile devices, accelerometers, or location sensors designed to track human and vehicular traffic. Such data lends itself to a variety of rich applications in which one can use the sensor data in order to model the underlying relationships and interactions. This requires the development of trajectory mining techniques, which can mine the GPS data for interesting social patterns. It also leads to a number of challenges, since such data may often be private, and it is important to be able to perform the mining process without violating the privacy of the users. 
Given the open nature of the information contributed by users in social sensing applications, this also leads to issues of trust in making inferences from the underlying data. In this chapter, we provide a broad survey of the work in this important and rapidly emerging field. We also discuss the key problems which arise in the context of this important field and the corresponding", "title": "" }, { "docid": "914f41b9f3c0d74f888c7dd83e226468", "text": "We present a new algorithm for inferring the home location of Twitter users at different granularities, including city, state, time zone, or geographic region, using the content of users’ tweets and their tweeting behavior. Unlike existing approaches, our algorithm uses an ensemble of statistical and heuristic classifiers to predict locations and makes use of a geographic gazetteer dictionary to identify place-name entities. We find that a hierarchical classification approach, where time zone, state, or geographic region is predicted first and city is predicted next, can improve prediction accuracy. We have also analyzed movement variations of Twitter users, built a classifier to predict whether a user was travelling in a certain period of time, and use that to further improve the location detection accuracy. Experimental evidence suggests that our algorithm works well in practice and outperforms the best existing algorithms for predicting the home location of Twitter users.", "title": "" }, { "docid": "6db790d4d765b682fab6270c5930bead", "text": "Geophysical applications of radar interferometry to measure changes in the Earth's surface have exploded in the early 1990s. This new geodetic technique calculates the interference pattern caused by the difference in phase between two images acquired by a spaceborne synthetic aperture radar at two distinct times. The resulting interferogram is a contour map of the change in distance between the ground and the radar instrument. These maps provide an unsurpassed spatial sampling density (---100 pixels km-2), a competitive precision (---1 cm), and a useful observation cadence (1 pass month-•). They record movements in the crust, perturbations in the atmosphere, dielectric modifications in the soil, and relief in the topography. They are also sensitive to technical effects, such as relative variations in the radar's trajectory or variations in its frequency standard. We describe how all these phenomena contribute to an interferogram. Then a practical summary explains the techniques for calculating and manipulating interferograms from various radar instruments, including the four satellites currently in orbit: ERS-1, ERS-2, JERS-1, and RADARSAT. The next chapter suggests some guidelines for interpreting an interferogram as a geophysical measurement: respecting the limits of the technique, assessing its uncertainty, recognizing artifacts, and discriminating different types of signal. We then review the geophysical applications published to date, most of which study deformation related to earthquakes, volcanoes, and glaciers using ERS-1 data. We also show examples of monitoring natural hazards and environmental alterations related to landslides, subsidence, and agriculture. In addition, we consider subtler geophysical signals such as postseismic relaxation, tidal loading of coastal areas, and interseismic strain accumulation. We conclude with our perspectives on the future of radar interferometry. 
The objective of the review is for the reader to develop the physical understanding necessary to calculate an interferogram and the geophysical intuition necessary to interpret it.", "title": "" }, { "docid": "03dcb05a6aa763b6b0a5cdc58ddb81d8", "text": "In this paper, a phase-shifted dual H-bridge converter, which can solve the drawbacks of existing phase-shifted full-bridge converters such as narrow zero-voltage-switching (ZVS) range, large circulating current, large duty-cycle loss, and serious secondary-voltage overshoot and oscillation, is analyzed and evaluated. The proposed topology is composed of two symmetric half-bridge inverters that are placed in parallel on the primary side and are driven in a phase-shifting manner to regulate the output voltage. At the rectifier stage, a center-tap-type rectifier with two additional low-current-rated diodes is employed. This structure allows the proposed converter to have the advantages of a wide ZVS range, no problems related to duty-cycle loss, no circulating current, and the reduction of secondary-voltage oscillation and overshoot. Moreover, the output filter's size becomes smaller compared to the conventional phase-shift full-bridge converters. This paper describes the operation principle of the proposed converter and the analysis and design consideration in depth. A 1-kW 320-385-V input 50-V output laboratory prototype operating at a 100-kHz switching frequency is designed, built, and tested to verify the effectiveness of the presented converter.", "title": "" }, { "docid": "39fc05dfc0faeb47728b31b6053c040a", "text": "Attempted and completed self-enucleation, or removal of one's own eyes, is a rare but devastating form of self-mutilation behavior. It is often associated with psychiatric disorders, particularly schizophrenia, substance induced psychosis, and bipolar disorder. We report a case of a patient with a history of bipolar disorder who gouged his eyes bilaterally as an attempt to self-enucleate himself. On presentation, the patient was manic with both psychotic features of hyperreligous delusions and command auditory hallucinations of God telling him to take his eyes out. On presentation, the patient had no light perception vision in both eyes and his exam displayed severe proptosis, extensive conjunctival lacerations, and visibly avulsed extraocular muscles on the right side. An emergency computed tomography scan of the orbits revealed small and irregular globes, air within the orbits, and intraocular hemorrhage. He was taken to the operating room for surgical repair of his injuries. Attempted and completed self-enucleation is most commonly associated with schizophrenia and substance induced psychosis, but can also present in patients with bipolar disorder. Other less commonly associated disorders include obsessive-compulsive disorder, depression, mental retardation, neurosyphilis, Lesch-Nyhan syndrome, and structural brain lesions.", "title": "" }, { "docid": "b17f5cfea81608e5034121113dbc8de4", "text": "Every question asked by a therapist may be seen to embody some intent and to arise from certain assumptions. Many questions are intended to orient the therapist to the client's situation and experiences; others are asked primarily to provoke therapeutic change. Some questions are based on lineal assumptions about the phenomena being addressed; others are based on circular assumptions. The differences among these questions are not trivial. They tend to have dissimilar effects. 
This article explores these issues and offers a framework for distinguishing four major groups of questions. The framework may be used by therapists to guide their decision making about what kinds of questions to ask, and by researchers to study different interviewing styles.", "title": "" }, { "docid": "a520bf66f1b54a7444f2cbe3f2da8000", "text": "In this work we study the problem of Intrusion Detection is sensor networks and we propose a lightweight scheme that can be applied to such networks. Its basic characteristic is that nodes monitor their neighborhood and collaborate with their nearest neighbors to bring the network back to its normal operational condition. We emphasize in a distributed approach in which, even though nodes don’t have a global view, they can still detect an intrusion and produce an alert. We apply our design principles for the blackhole and selective forwarding attacks by defining appropriate rules that characterize malicious behavior. We also experimentally evaluate our scheme to demonstrate its effectiveness in detecting the afore-mentioned attacks.", "title": "" }, { "docid": "b206a5f5459924381ef6c46f692c7052", "text": "The Konstanz Information Miner is a modular environment, which enables easy visual assembly and interactive execution of a data pipeline. It is designed as a teaching, research and collaboration platform, which enables simple integration of new algorithms and tools as well as data manipulation or visualization methods in the form of new modules or nodes. In this paper we describe some of the design aspects of the underlying architecture, briey sketch how new nodes can be incorporated, and highlight some of the new features of version 2.0.", "title": "" }, { "docid": "79b73417f1f09e6487ea0c9ead28098b", "text": "The internet connectivity of client software (e.g., apps running on phones and PCs), web sites, and online services provide an unprecedented opportunity to evaluate ideas quickly using controlled experiments, also called A/B tests, split tests, randomized experiments, control/treatment tests, and online field experiments. Unlike most data mining techniques for finding correlational patterns, controlled experiments allow establishing a causal relationship with high probability. Experimenters can utilize the Scientific Method to form a hypothesis of the form “If a specific change is introduced, will it improve key metrics?” and evaluate it with real users. The theory of a controlled experiment dates back to Sir Ronald A. Fisher’s experiments at the Rothamsted Agricultural Experimental Station in England in the 1920s, and the topic of offline experiments is well developed in Statistics (Box 2005). Online Controlled Experiments started to be used in the late 1990s with the growth of the Internet. Today, many large sites, including Amazon, Bing, Facebook, Google, LinkedIn, and Yahoo! run thousands to tens of thousands of experiments each year testing user interface (UI) changes, enhancements to algorithms (search, ads, personalization, recommendation, etc.), changes to apps, content management system, etc. Online controlled experiments are now considered an indispensable tool, and their use is growing for startups and smaller websites. Controlled experiments are especially useful in combination with Agile software development (Martin 2008, Rubin 2012), Steve Blank’s Customer Development process (Blank 2005), and MVPs (Minimum Viable Products) popularized by Eric Ries’s Lean Startup (Ries 2011). 
Motivation and Background Many good resources are available with motivation and explanations about online controlled experiments (Siroker and Koomen 2013, Goward 2012, McFarland 2012, Schrage 2014, Kohavi, Longbotham and Sommerfield, et al. 2009, Kohavi, Deng and Longbotham, et al. 2014, Kohavi, Deng and Frasca, et al. 2013).", "title": "" }, { "docid": "c27e6b7be1a5d00632bbbea64b2516ad", "text": "Block diagonalization (BD) is a well-known precoding method in multiuser multi-input multi-output (MIMO) broadcast channels. This scheme can be considered as a extension of the zero-forcing (ZF) channel inversion to the case where each receiver is equipped with multiple antennas. One of the limitation of the BD is that the sum rate does not grow linearly with the number of users and transmit antennas at low and medium signal-to-noise ratio regime, since the complete suppression of multi-user interference is achieved at the expense of noise enhancement. Also it performs poorly under imperfect channel state information. In this paper, we propose a generalized minimum mean-squared error (MMSE) channel inversion algorithm for users with multiple antennas to overcome the drawbacks of the BD for multiuser MIMO systems. We first introduce a generalized ZF channel inversion algorithm as a new approach of the conventional BD. Applying this idea to the MMSE channel inversion for identifying orthonormal basis vectors of the precoder, and employing the MMSE criterion for finding its combining matrix, the proposed scheme increases the signal-to-interference-plus-noise ratio at each user's receiver. Simulation results confirm that the proposed scheme exhibits a linear growth of the sum rate, as opposed to the BD scheme. For block fading channels with four transmit antennas, the proposed scheme provides a 3 dB gain over the conventional BD scheme at 1% frame error rate. Also, we present a modified precoding method for systems with channel estimation errors and show that the proposed algorithm is robust to channel estimation errors.", "title": "" }, { "docid": "9200498e7ef691b83bf804d4c5581ba2", "text": "Mobile computer-vision technology will soon become as ubiquitous as touch interfaces.", "title": "" } ]
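One of the negative passages above surveys online controlled experiments (A/B tests), whose core analysis is a comparison of a key metric between treatment and control. As a hedged illustration only, the snippet below runs a standard two-proportion z-test on hypothetical conversion counts; the numbers and function names are invented for the example and are not taken from the cited references.

```python
# Illustrative sketch: two-proportion z-test for an A/B experiment.
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return (z statistic, two-sided p-value) for conversion counts."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Example: control converts 1,000 of 50,000 users, treatment 1,100 of 50,000
z, p = two_proportion_z_test(1000, 50000, 1100, 50000)
print(f"z = {z:.2f}, p = {p:.4f}")
```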
scidocsrr
a69534aff3e44a8641428e4ddbe1de14
Tensor decomposition of EEG signals: A brief review
[ { "docid": "ffc36fa0dcc81a7f5ba9751eee9094d7", "text": "The independent component analysis (ICA) of a random vector consists of searching for a linear transformation that minimizes the statistical dependence between its components. In order to define suitable search criteria, the expansion of mutual information is utilized as a function of cumulants of increasing orders. An efficient algorithm is proposed, which allows the computation of the ICA of a data matrix within a polynomial time. The concept of ICA may actually be seen as an extension of the principal component analysis (PCA), which can only impose independence up to the second order and, consequently, defines directions that are orthogonal. Potential applications of ICA include data analysis and compression, Bayesian detection, localization of sources, and blind identification and deconvolution. Summary: The independent component analysis (ICA) of a vector is based on the search for a linear transformation that minimizes the statistical dependence between its components. To define suitable search criteria, the expansion of mutual information is used as a function of cumulants of increasing order. An efficient algorithm is proposed that allows the computation of the ICA of data matrices within polynomial time. The concept of ICA can actually be regarded as an extension of principal component analysis (PCA), which can only enforce independence up to the second order and therefore defines directions that are orthogonal. Potential applications of ICA include data analysis and compression, Bayesian detection, source localization, and blind identification and deconvolution.", "title": "" } ]
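The positive passage above defines ICA as finding a linear transformation that minimizes statistical dependence between components, via cumulant-based contrasts. The sketch below is a generic, textbook-style illustration of the same idea: whiten the data, then look for maximally non-Gaussian directions with a FastICA-style fixed-point iteration. It uses a tanh contrast rather than the cumulant expansion of the cited paper, and all function and variable names are hypothetical.

```python
# Illustrative sketch of ICA: whitening followed by a FastICA-style
# fixed-point search for non-Gaussian directions (deflation scheme).
import numpy as np

def ica(X, n_components, n_iter=200, seed=0):
    """X: (n_signals, n_samples) mixed observations. Returns estimated sources."""
    rng = np.random.default_rng(seed)
    X = X - X.mean(axis=1, keepdims=True)
    # Whitening via the eigendecomposition of the covariance matrix
    eigval, eigvec = np.linalg.eigh(np.cov(X))
    whitener = eigvec @ np.diag(1.0 / np.sqrt(np.maximum(eigval, 1e-12))) @ eigvec.T
    Z = whitener @ X
    W = np.zeros((n_components, Z.shape[0]))
    for i in range(n_components):
        w = rng.normal(size=Z.shape[0])
        w /= np.linalg.norm(w)
        for _ in range(n_iter):
            wx = w @ Z
            g, g_prime = np.tanh(wx), 1.0 - np.tanh(wx) ** 2
            w_new = (Z * g).mean(axis=1) - g_prime.mean() * w
            # Deflation: keep the new direction orthogonal to those already found
            w_new -= W[:i].T @ (W[:i] @ w_new)
            w_new /= np.linalg.norm(w_new)
            converged = abs(abs(w_new @ w) - 1.0) < 1e-8
            w = w_new
            if converged:
                break
        W[i] = w
    return W @ Z  # estimated independent components (up to scale and order)

# Example: unmix two synthetic sources (a sine wave and a square wave)
t = np.linspace(0, 8, 2000)
S = np.vstack([np.sin(2 * t), np.sign(np.sin(3 * t))])
A = np.array([[1.0, 0.5], [0.4, 1.0]])        # unknown mixing matrix
recovered = ica(A @ S, n_components=2)
print(recovered.shape)                         # (2, 2000)
```

In an EEG setting such as the query above, components recovered this way are typically only one preprocessing view; multiway structure (channel x time x trial) is what the tensor-decomposition methods in the query's topic address, and that step is outside this sketch.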
[ { "docid": "e90e2a651c54b8510efe00eb1d8e7be0", "text": "The design simulation, fabrication, and measurement of a 2.4-GHz horizontally polarized omnidirectional planar printed antenna for WLAN applications is presented. The antenna adopts the printed Alford-loop-type structure. The three-dimensional (3-D) EM simulator HFSS is used for design simulation. The designed antenna is fabricated on an FR-4 printed-circuit-board substrate. The measured input standing-wave-ratio (SWR) is less than three from 2.40 to 2.483 GHz. As desired, the horizontal-polarization H-plane pattern is quite omnidirectional and the E-plane pattern is also very close to that of an ideal dipole antenna. Also a comparison with the popular printed inverted-F antenna (PIFA) has been conducted, the measured H-plane pattern of the Alford-loop-structure antenna is better than that of the PIFA when the omnidirectional pattern is desired. Further more, the study of the antenna printed on a simulated PCMCIA card and that inserted inside a laptop PC are also conducted. The HFSS model of a laptop PC housing, consisting of the display, the screen, and the metallic box with the keyboard, is constructed. The effect of the laptop PC housing with different angle between the display and keyboard on the antenna is also investigated. It is found that there is about 15 dB attenuation of the gain pattern (horizontal-polarization field) in the opposite direction of the PCMCIA slot on the laptop PC. Hence, the effect of the large ground plane of the PCMCIA card and the attenuation effect of the laptop PC housing should be taken into consideration for the antenna design for WLAN applications. For the proposed antenna, in addition to be used alone for a horizontally polarized antenna, it can be also a part of a diversity antenna", "title": "" }, { "docid": "94aa0777f80aa25ec854f159dc3e0706", "text": "To develop a knowledge-aware recommender system, a key data problem is how we can obtain rich and structured knowledge information for recommender system (RS) items. Existing datasets or methods either use side information from original recommender systems (containing very few kinds of useful information) or utilize private knowledge base (KB). In this paper, we present the first public linked KB dataset for recommender systems, named KB4Rec v1.0, which has linked three widely used RS datasets with the popular KB Freebase. Based on our linked dataset, we first preform some interesting qualitative analysis experiments, in which we discuss the effect of two important factors (i.e., popularity and recency) on whether a RS item can be linked to a KB entity. Finally, we present the comparison of several knowledge-aware recommendation algorithms on our linked dataset.", "title": "" }, { "docid": "a7de62c78f1286e66fd35145f3163f1c", "text": "A particularly insidious type of concurrency bug is atomicity violations. While there has been substantial work on automatic detection of atomicity violations, each existing technique has focused on a certain type of atomic region. To address this limitation, this paper presents Atom Tracker, a comprehensive approach to atomic region inference and violation detection. Atom Tracker is the first scheme to (1) automatically infer generic atomic regions (not limited by issues such as the number of variables accessed, the number of instructions included, or the type of code construct the region is embedded in) and (2) automatically detect violations of them at runtime with negligible execution overhead. 
Atom Tracker provides novel algorithms to infer generic atomic regions and to detect atomicity violations of them. Moreover, we present a hardware implementation of the violation detection algorithm that leverages cache coherence state transitions in a multiprocessor. In our evaluation, we take eight atomicity violation bugs from real-world codes like Apache, MySql, and Mozilla, and show that Atom Tracker detects them all. In addition, Atom Tracker automatically infers all of the atomic regions in a set of micro benchmarks accurately. Finally, we also show that the hardware implementation induces a negligible execution time overhead of 0.2–4.0% and, therefore, enables Atom Tracker to find atomicity violations on-the-fly in production runs.", "title": "" }, { "docid": "4acc30bade98c1257ab0a904f3695f3d", "text": "Manoeuvre assistance is currently receiving increasing attention from the car industry. In this article we focus on the implementation of a reverse parking assistance and more precisely, a reverse parking manoeuvre planner. This paper is based on a manoeuvre planning technique presented in previous work and specialised in planning reverse parking manoeuvre. Since a key part of the previous method was not explicited, our goal in this paper is to present a practical and reproducible way to implement a reverse parking manoeuvre planner. Our implementation uses a database engine to search for the elementary movements that will make the complete parking manoeuvre. Our results have been successfully tested on a real platform: the CSIRO Autonomous Tractor.", "title": "" }, { "docid": "139b3dae4713a5bcff97e1b209bd3206", "text": "Utilizing parametric and nonparametric techniques, we assess the role of a heretofore relatively unexplored ‘input’ in the educational process, homework, on academic achievement. Our results indicate that homework is an important determinant of student test scores. Relative to more standard spending related measures, extra homework has a larger and more significant impact on test scores. However, the effects are not uniform across different subpopulations. Specifically, we find additional homework to be most effective for high and low achievers, which is further confirmed by stochastic dominance analysis. Moreover, the parametric estimates of the educational production function overstate the impact of schooling related inputs. In all estimates, the homework coefficient from the parametric model maps to the upper deciles of the nonparametric coefficient distribution and as a by-product the parametric model understates the percentage of students with negative responses to additional homework. JEL: C14, I21, I28", "title": "" }, { "docid": "d18ed4c40450454d6f517c808da7115a", "text": "Polythelia is a rare congenital malformation that occurs in 1-2% of the population. Intra-areolar polythelia is the presence of one or more supernumerary nipples located within the areola. This is extremely rare. This article presents 3 cases of intra-areolar polythelia treated at our Department. These cases did not present other associated malformation. Surgical correction was performed for psychological and cosmetic reasons using advancement flaps. The aesthetic and functional results were satisfactory.", "title": "" }, { "docid": "e2b42351d30b2b1938497c6fdab68135", "text": "An automatic road sign recognition system first locates road signs within images captured by an imaging sensor on-board of a vehicle, and then identifies the detected road signs. 
This paper presents an automatic neural-network-based road sign recognition system. First, a study of the existing road sign recognition research is presented. In this study, the issues associated with automatic road sign recognition are described, the existing methods developed to tackle the road sign recognition problem are reviewed, and a comparison of the features of these methods is given. Second, the developed road sign recognition system is described. The system is capable of analysing live colour road scene images, detecting multiple road signs within each image, and classifying the type of road signs detected. The system consists of two modules: detection and classification. The detection module segments the input image in the hue-saturation-intensity colour space, and then detects road signs using a Multi-layer Perceptron neural-network. The classification module determines the type of detected road signs using a series of one to one architectural Multi-layer Perceptron neural networks. Two sets of classifiers are trained using the Resillient-Backpropagation and Scaled-Conjugate-Gradient algorithms. The two modules of the system are evaluated individually first. Then the system is tested as a whole. The experimental results demonstrate that the system is capable of achieving an average recognition hit-rate of 95.96% using the scaled-conjugate-gradient trained classifiers.", "title": "" }, { "docid": "97b7065942b53f2d873c80f32242cd00", "text": "Hierarchical multilabel classification (HMC) allows an instance to have multiple labels residing in a hierarchy. A popular loss function used in HMC is the H-loss, which penalizes only the first classification mistake along each prediction path. However, the H-loss metric can only be used on tree-structured label hierarchies, but not on DAG hierarchies. Moreover, it may lead to misleading predictions as not all misclassifications in the hierarchy are penalized. In this paper, we overcome these deficiencies by proposing a hierarchy-aware loss function that is more appropriate for HMC. Using Bayesian decision theory, we then develop a Bayes-optimal classifier with respect to this loss function. Instead of requiring an exhaustive summation and search for the optimal multilabel, the proposed classification problem can be efficiently solved using a greedy algorithm on both tree-and DAG-structured label hierarchies. Experimental results on a large number of real-world data sets show that the proposed algorithm outperforms existing HMC methods.", "title": "" }, { "docid": "025d4933b4cc199366ffbff7cf51aea6", "text": "An increase in pulsatile release of LHRH is essential for the onset of puberty. However, the mechanism controlling the pubertal increase in LHRH release is still unclear. In primates the LHRH neurosecretory system is already active during the neonatal period but subsequently enters a dormant state in the juvenile/prepubertal period. Neither gonadal steroid hormones nor the absence of facilitatory neuronal inputs to LHRH neurons is responsible for the low levels of LHRH release before the onset of puberty in primates. Recent studies suggest that during the prepubertal period an inhibitory neuronal system suppresses LHRH release and that during the subsequent maturation of the hypothalamus this prepubertal inhibition is removed, allowing the adult pattern of pulsatile LHRH release. 
In fact, y-aminobutyric acid (GABA) appears to be an inhibitory neurotransmitter responsible for restricting LHRH release before the onset of puberty in female rhesus monkeys. In addition, it appears that the reduction in tonic GABA inhibition allows an increase in the release of glutamate as well as other neurotransmitters, which contributes to the increase in pubertal LHRH release. In this review, developmental changes in several neurotransmitter systems controlling pulsatile LHRH release are extensively reviewed.", "title": "" }, { "docid": "4e5661631557563430a82b4685ef6aa3", "text": "Cloud Computing (CC) is fast becoming well known in the computing world as the latest technology. CC enables users to use resources as and when they are required. Mobile Cloud Computing (MCC) is an integration of the concept of cloud computing within a mobile environment, which removes barriers linked to the mobile devices' performance. Nevertheless, these new benefits are not problem-free entirely. Several common problems encountered by MCC are privacy, personal data management, identity authentication, and potential attacks. The security issues are a major hindrance in the mobile cloud computing's adaptability. This study begins by presenting the background of MCC including the various definitions, infrastructures, and applications. In addition, the current challenges and opportunities will be presented including the different approaches that have been adapted in studying MCC.", "title": "" }, { "docid": "7f2dff96e9c1742842fea6a43d17f93e", "text": "We study shock-based methods for credible causal inference in corporate finance research. We focus on corporate governance research, survey 13,461 papers published between 2001 and 2011 in 22 major accounting, economics, finance, law, and management journals; and identify 863 empirical studies in which corporate governance is associated with firm value or other characteristics. We classify the methods used in these studies and assess whether they support a causal link between corporate governance and firm value or another outcome. Only a stall minority of studies have convincing causal inference strategies. The convincing strategies largely rely on external shocks – usually from legal rules – often called “natural experiments”. We examine the 74 shock-based papers and provide a guide to shock-based research design, which stresses the common features across different designs and the value of using combined designs.", "title": "" }, { "docid": "ef7c3f93851f77274f4d2b9557e572d6", "text": "In today’s world most of us depend on Social Media to communicate, express our feelings and share information with our friends. Social Media is the medium where now a day’s people feel free to express their emotions. Social Media collects the data in structured and unstructured, formal and informal data as users do not care about the spellings and accurate grammatical construction of a sentence while communicating with each other using different social networking websites ( Facebook, Twitter, LinkedIn and YouTube). Gathered data contains sentiments and opinion of users which will be processed using data mining techniques and analyzed for achieving the meaningful information from it. Using Social media data we can classify the type of users by analysis of their posted data on the social web sites. Machine learning algorithms are used for text classification which will extract meaningful data from these websites. 
Here, in this paper we will discuss the different types of classifiers and their advantages and disadvantages.", "title": "" }, { "docid": "0bf150f6cd566c31ec840a57d8d2fa55", "text": "Within the past few years, organizations in diverse industries have adopted MapReduce-based systems for large-scale data processing. Along with these new users, important new workloads have emerged which feature many small, short, and increasingly interactive jobs in addition to the large, long-running batch jobs for which MapReduce was originally designed. As interactive, large-scale query processing is a strength of the RDBMS community, it is important that lessons from that field be carried over and applied where possible in this new domain. However, these new workloads have not yet been described in the literature. We fill this gap with an empirical analysis of MapReduce traces from six separate business-critical deployments inside Facebook and at Cloudera customers in e-commerce, telecommunications, media, and retail. Our key contribution is a characterization of new MapReduce workloads which are driven in part by interactive analysis, and which make heavy use of querylike programming frameworks on top of MapReduce. These workloads display diverse behaviors which invalidate prior assumptions about MapReduce such as uniform data access, regular diurnal patterns, and prevalence of large jobs. A secondary contribution is a first step towards creating a TPC-like data processing benchmark for MapReduce.", "title": "" }, { "docid": "36209810c1a842c871b639220ba63036", "text": "This paper proposes an extension to the Generative Adversarial Networks (GANs), namely as ArtGAN to synthetically generate more challenging and complex images such as artwork that have abstract characteristics. This is in contrast to most of the current solutions that focused on generating natural images such as room interiors, birds, flowers and faces. The key innovation of our work is to allow back-propagation of the loss function w.r.t. the labels (randomly assigned to each generated images) to the generator from the discriminator. With the feedback from the label information, the generator is able to learn faster and achieve better generated image quality. Empirically, we show that the proposed ArtGAN is capable to create realistic artwork, as well as generate compelling real world images that globally look natural with clear shape on CIFAR-10.", "title": "" }, { "docid": "f9879c1592683bc6f3304f3937d5eee2", "text": "Altered cell metabolism is a characteristic feature of many cancers. Aside from well-described changes in nutrient consumption and waste excretion, altered cancer cell metabolism also results in changes to intracellular metabolite concentrations. Increased levels of metabolites that result directly from genetic mutations and cancer-associated modifications in protein expression can promote cancer initiation and progression. Changes in the levels of specific metabolites, such as 2-hydroxyglutarate, fumarate, succinate, aspartate and reactive oxygen species, can result in altered cell signalling, enzyme activity and/or metabolic flux. In this Review, we discuss the mechanisms that lead to changes in metabolite concentrations in cancer cells, the consequences of these changes for the cells and how they might be exploited to improve cancer therapy.", "title": "" }, { "docid": "34c41c33ce2cd7642cf29d8bfcab8a3f", "text": "I2Head database has been created with the aim to become an optimal reference for low cost gaze estimation. 
It exhibits the following outstanding characteristics: it takes into account key aspects of low resolution eye tracking technology; it combines images of users gazing at different grids of points from alternative positions with registers of user’s head position and it provides calibration information of the camera and a simple 3D head model for each user. Hardware used to build the database includes a 6D magnetic sensor and a webcam. A careful calibration method between the sensor and the camera has been developed to guarantee the accuracy of the data. Different sessions have been recorded for each user including not only static head scenarios but also controlled displacements and even free head movements. The database is an outstanding framework to test both gaze estimation algorithms and head pose estimation methods.", "title": "" }, { "docid": "78e631aceb9598767289c86ace415e2b", "text": "We present the Balloon family of password hashing functions. These are the first cryptographic hash functions with proven space-hardness properties that: (i) use a password-independent access pattern, (ii) build exclusively upon standard cryptographic primitives, and (iii) are fast enough for real-world use. Space-hard functions require a large amount of working space to evaluate efficiently and, when used for password hashing, they dramatically increase the cost of offline dictionary attacks. The central technical challenge of this work was to devise the graph-theoretic and linear-algebraic techniques necessary to prove the space-hardness properties of the Balloon functions (in the random-oracle model). To motivate our interest in security proofs, we demonstrate that it is possible to compute Argon2i, a recently proposed space-hard function that lacks a formal analysis, in less than the claimed required space with no increase in the computation time.", "title": "" }, { "docid": "e1a4468ccd5305b5158c26b2160d04a6", "text": "Recent years have seen a deluge of behavioral data from players hitting the game industry. Reasons for this data surge are many and include the introduction of new business models, technical innovations, the popularity of online games, and the increasing persistence of games. Irrespective of the causes, the proliferation of behavioral data poses the problem of how to derive insights therefrom. Behavioral data sets can be large, time-dependent and high-dimensional. Clustering offers a way to explore such data and to discover patterns that can reduce the overall complexity of the data. Clustering and other techniques for player profiling and play style analysis have, therefore, become popular in the nascent field of game analytics. However, the proper use of clustering techniques requires expertise and an understanding of games is essential to evaluate results. With this paper, we address game data scientists and present a review and tutorial focusing on the application of clustering techniques to mine behavioral game data. Several algorithms are reviewed and examples of their application shown. Key topics such as feature normalization are discussed and open problems in the context of game analytics are pointed out.", "title": "" }, { "docid": "425ee0a0dc813a3870af72ac02ea8bbc", "text": "Although the mechanism of action of botulinum toxin (BTX) has been intensively studied, many unanswered questions remain regarding the composition and clinical properties of the two formulations of BTX currently approved for cosmetic use. 
In the first half of this review, these questions are explored in detail, with emphasis on the most pertinent and revelatory studies in the literature. The second half delineates most of the common and some not-so-common uses of BTX in the face and neck, stressing important patient selection and safety considerations. Complications from neurotoxins at cosmetic doses are generally rare and usually technique-dependent.", "title": "" } ]
scidocsrr
954f48f92867dbcdd21db815f84eef07
Origami Robot: A Self-Folding Paper Robot With an Electrothermal Actuator Created by Printing
[ { "docid": "f641e0da7b9aaffe0fabd1a6b60a6c52", "text": "This paper introduces a low cost, fast and accessible technology to support the rapid prototyping of functional electronic devices. Central to this approach of 'instant inkjet circuits' is the ability to print highly conductive traces and patterns onto flexible substrates such as paper and plastic films cheaply and quickly. In addition to providing an alternative to breadboarding and conventional printed circuits, we demonstrate how this technique readily supports large area sensors and high frequency applications such as antennas. Unlike existing methods for printing conductive patterns, conductivity emerges within a few seconds without the need for special equipment. We demonstrate that this technique is feasible using commodity inkjet printers and commercially available ink, for an initial investment of around US$300. Having presented this exciting new technology, we explain the tools and techniques we have found useful for the first time. Our main research contribution is to characterize the performance of instant inkjet circuits and illustrate a range of possibilities that are enabled by way of several example applications which we have built. We believe that this technology will be of immediate appeal to researchers in the ubiquitous computing domain, since it supports the fabrication of a variety of functional electronic device prototypes.", "title": "" } ]
[ { "docid": "5e261764696ebfb02196b0f9a6b7a4a6", "text": "When the cost of misclassifying a sample is high, it is useful to have an accurate estimate of uncertainty in the prediction for that sample. There are also multiple types of uncertainty which are best estimated in different ways, for example, uncertainty that is intrinsic to the training set may be well-handled by a Bayesian approach, while uncertainty introduced by shifts between training and query distributions may be better-addressed by density/support estimation. In this paper, we examine three types of uncertainty: model capacity uncertainty, intrinsic data uncertainty, and open set uncertainty, and review techniques that have been derived to address each one. We then introduce a unified hierarchical model, which combines methods from Bayesian inference, invertible latent density inference, and discriminative classification in a single end-to-end deep neural network topology to yield efficient per-sample uncertainty estimation. Our approach addresses all three uncertainty types and readily accommodates prior/base rates for binary detection.", "title": "" }, { "docid": "5029feaec44e80561efef4b97c435896", "text": "Conceptual blending has been proposed as a creative cognitive process, but most theories focus on the analysis of existing blends rather than mechanisms for the efficient construction of novel blends. While conceptual blending is a powerful model for creativity, there are many challenges related to the computational application of blending. Inspired by recent theoretical research, we argue that contexts and context-induced goals provide insights into algorithm design for creative systems using conceptual blending. We present two case studies of creative systems that use goals and contexts to efficiently produce novel, creative artifacts in the domains of story generation and virtual characters engaged in pretend play respectively.", "title": "" }, { "docid": "1d78bd02fbf7be1bac964ff934c766de", "text": "Recently, some publications indicated that the generative modeling approaches, i.e., topic models, achieved appreciated performance on multi-label classification, especially for skewed data sets. In this paper, we develop two supervised topic models for multi-label classification problems. The two models, i.e., Frequency-LDA (FLDA) and Dependency-Frequency-LDA (DFLDA), extend Latent Dirichlet Allocation (LDA) via two observations, i.e., the frequencies of the labels and the dependencies among different labels. We train the models by the Gibbs sampler algorithm. The experiment results on well known collections demonstrate that our two models outperform the state-of-the-art approaches. & 2014 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "3553d1dc8272bf0366b2688e5107aa3f", "text": "The emergence of the Web 2.0 technology generated a massive amount of raw data by enabling Internet users to post their opinions, reviews, comments on the web. Processing this raw data to extract useful information can be a very challenging task. An example of important information that can be automatically extracted from the users' posts and comments is their opinions on different issues, events, services, products, etc. This problem of Sentiment Analysis (SA) has been studied well on the English language and two main approaches have been devised: corpus-based and lexicon-based. This paper addresses both approaches to SA for the Arabic language. 
Since there is a limited number of publically available Arabic dataset and Arabic lexicons for SA, this paper starts by building a manually annotated dataset and then takes the reader through the detailed steps of building the lexicon. Experiments are conducted throughout the different stages of this process to observe the improvements gained on the accuracy of the system and compare them to corpus-based approach.", "title": "" }, { "docid": "74290ff01b32423087ce0025625dc445", "text": "niques is now the world champion computer program in the game of Contract Bridge. As reported in The New York Times and The Washington Post, this program—a new version of Great Game Products’ BRIDGE BARON program—won the Baron Barclay World Bridge Computer Challenge, an international competition hosted in July 1997 by the American Contract Bridge League. It is well known that the game tree search techniques used in computer programs for games such as Chess and Checkers work differently from how humans think about such games. In contrast, our new version of the BRIDGE BARON emulates the way in which a human might plan declarer play in Bridge by using an adaptation of hierarchical task network planning. This article gives an overview of the planning techniques that we have incorporated into the BRIDGE BARON and discusses what the program’s victory signifies for research on AI planning and game playing.", "title": "" }, { "docid": "df833f98f7309a5ab5f79fae2f669460", "text": "Model-free reinforcement learning (RL) has become a promising technique for designing a robust dynamic power management (DPM) framework that can cope with variations and uncertainties that emanate from hardware and application characteristics. Moreover, the potentially significant benefit of performing application-level scheduling as part of the system-level power management should be harnessed. This paper presents an architecture for hierarchical DPM in an embedded system composed of a processor chip and connected I/O devices (which are called system components.) The goal is to facilitate saving in the system component power consumption, which tends to dominate the total power consumption. The proposed (online) adaptive DPM technique consists of two layers: an RL-based component-level local power manager (LPM) and a system-level global power manager (GPM). The LPM performs component power and latency optimization. It employs temporal difference learning on semi-Markov decision process (SMDP) for model-free RL, and it is specifically optimized for an environment in which multiple (heterogeneous) types of applications can run in the embedded system. The GPM interacts with the CPU scheduler to perform effective application-level scheduling, thereby, enabling the LPM to do even more component power optimizations. In this hierarchical DPM framework, power and latency tradeoffs of each type of application can be precisely controlled based on a user-defined parameter. Experiments show that the amount of average power saving is up to 31.1% compared to existing approaches.", "title": "" }, { "docid": "62b7f53dc399b347b6a4a453d7bd1fa2", "text": "Sign language is important for facilitating communication between hearing impaired and the rest of society. Two approaches have traditionally been used in the literature: image-based and sensor-based systems. Sensor-based systems require the user to wear electronic gloves while performing the signs. The glove includes a number of sensors detecting different hand and finger articulations. 
Image-based systems use camera(s) to acquire a sequence of images of the hand. Each of the two approaches has its own disadvantages. The sensor-based method is not natural as the user must wear a cumbersome instrument while the imagebased system requires specific background and environmental conditions to achieve high accuracy. In this paper, we propose a new approach for Arabic Sign Language Recognition (ArSLR) which involves the use of the recently introduced Leap Motion Controller (LMC). This device detects and tracks the hand and fingers to provide position and motion information. We propose to use the LMC as a backbone of the ArSLR system. In addition to data acquisition, the system includes a preprocessing stage, a feature extraction stage, and a classification stage. We compare the performance of Multilayer Perceptron (MLP) neural networks with the Nave Bayes classifier. Using the proposed system on the Arabic sign alphabets gives 98% classification accuracy with the Nave Bayes classifier and more than 99% using the MLP.", "title": "" }, { "docid": "8a564e77710c118e4de86be643b061a6", "text": "SOAR is a cognitive architecture named from state, operator and result, which is adopted to portray the drivers’ guidance compliance behavior on variable message sign VMS in this paper. VMS represents traffic conditions to drivers by three colors: red, yellow, and green. Based on the multiagent platform, SOAR is introduced to design the agent with the detailed description of the working memory, long-term memory, decision cycle, and learning mechanism. With the fixed decision cycle, agent transforms state through four kinds of operators, including choosing route directly, changing the driving goal, changing the temper of driver, and changing the road condition of prediction. The agent learns from the process of state transformation by chunking and reinforcement learning. Finally, computerized simulation program is used to study the guidance compliance behavior. Experiments are simulated many times under given simulation network and conditions. The result, including the comparison between guidance and no guidance, the state transition times, and average chunking times are analyzed to further study the laws of guidance compliance and learning mechanism.", "title": "" }, { "docid": "6f0d9f383c0142b43ea440e6efb2a59a", "text": "OBJECTIVES\nTo evaluate the effect of a probiotic product in acute self-limiting gastroenteritis in dogs.\n\n\nMETHODS\nThirty-six dogs suffering from acute diarrhoea or acute diarrhoea and vomiting were included in the study. The trial was performed as a randomised, double blind and single centre study with stratified parallel group design. The animals were allocated to equal looking probiotic or placebo treatment by block randomisation with a fixed block size of six. The probiotic cocktail consisted of thermo-stabilised Lactobacillus acidophilus and live strains of Pediococcus acidilactici, Bacillus subtilis, Bacillus licheniformis and Lactobacillus farciminis.\n\n\nRESULTS\nThe time from initiation of treatment to the last abnormal stools was found to be significantly shorter (P = 0.04) in the probiotic group compared to placebo group, the mean time was 1.3 days and 2.2 days, respectively. 
The two groups were found nearly equal with regard to time from start of treatment to the last vomiting episode.\n\n\nCLINICAL SIGNIFICANCE\nThe probiotic tested may reduce the convalescence time in acute self-limiting diarrhoea in dogs.", "title": "" }, { "docid": "1151348144ad2915f63f6b437e777452", "text": "Smartphones, smartwatches, fitness trackers, and ad-hoc wearable devices are being increasingly used to monitor human activities. Data acquired by the hosted sensors are usually processed by machine-learning-based algorithms to classify human activities. The success of those algorithms mostly depends on the availability of training (labeled) data that, if made publicly available, would allow researchers to make objective comparisons between techniques. Nowadays, publicly available data sets are few, often contain samples from subjects with too similar characteristics, and very often lack of specific information so that is not possible to select subsets of samples according to specific criteria. In this article, we present a new smartphone accelerometer dataset designed for activity recognition. The dataset includes 11,771 activities performed by 30 subjects of ages ranging from 18 to 60 years. Activities are divided in 17 fine grained classes grouped in two coarse grained classes: 9 types of activities of daily living (ADL) and 8 types of falls. The dataset has been stored to include all the information useful to select samples according to different criteria, such as the type of ADL performed, the age, the gender, and so on. Finally, the dataset has been benchmarked with two different classifiers and with different configurations. The best results are achieved with k-NN classifying ADLs only, considering personalization, and with both windows of 51 and 151 samples.", "title": "" }, { "docid": "e7bd18d1c1aa3ef51114dbab9587bb5b", "text": "Protein phase separation is implicated in formation of membraneless organelles, signaling puncta and the nuclear pore. Multivalent interactions of modular binding domains and their target motifs can drive phase separation. However, forces promoting the more common phase separation of intrinsically disordered regions are less understood, with suggested roles for multivalent cation-pi, pi-pi, and charge interactions and the hydrophobic effect. Known phase-separating proteins are enriched in pi-orbital containing residues and thus we analyzed pi-interactions in folded proteins. We found that pi-pi interactions involving non-aromatic groups are widespread, underestimated by force-fields used in structure calculations and correlated with solvation and lack of regular secondary structure, properties associated with disordered regions. We present a phase separation predictive algorithm based on pi interaction frequency, highlighting proteins involved in biomaterials and RNA processing.", "title": "" }, { "docid": "9adfb1b69d1521d148db41618a449e7b", "text": "This article presents a novel parallel spherical mechanism called Argos with three rotational degrees of freedom. Design aspects of the first prototype built of the Argos mechanism are discussed. The direct kinematic problem is solved, leading always to four nonsingular configurations of the end effector for a given set of joint angles. The inverse-kinematic problem yields two possible configurations for each of the three pantographs for a given orientation of the end effector. 
Potential applications of the Argos mechanism are robot wrists, orientable machine tool beds, joy sticks, surgical manipulators, and orientable units for optical components. Another pantograph based new structure named PantoScope having two rotational DoF is also briefly introduced. KEY WORDS—parallel robot, machine tool, 3 degree of freedom (DoF) wrist, pure orientation, direct kinematics, inverse kinematics, Pantograph based, Argos, PantoScope", "title": "" }, { "docid": "a64f8a3a75dd719b955aa827d8c33472", "text": "ÐWhile empirical studies in software engineering are beginning to gain recognition in the research community, this subarea is also entering a new level of maturity by beginning to address the human aspects of software development. This added focus has added a new layer of complexity to an already challenging area of research. Along with new research questions, new research methods are needed to study nontechnical aspects of software engineering. In many other disciplines, qualitative research methods have been developed and are commonly used to handle the complexity of issues involving human behavior. This paper presents several qualitative methods for data collection and analysis and describes them in terms of how they might be incorporated into empirical studies of software engineering, in particular how they might be combined with quantitative methods. To illustrate this use of qualitative methods, examples from real software engineering studies are used throughout. Index TermsÐQualitative methods, data collection, data analysis, experimental design, empirical software engineering, participant observation, interviewing.", "title": "" }, { "docid": "1b5a8f920a2f3380f311c53bdeb740c8", "text": "5 Objectivity in parentheses 7 5.0 Illusion and Perception: the traditional approach . . . . . . . . . . . . . . . . . . . . . 7 5.1 An Invitation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8 5.2 Objectivity in parentheses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8 5.3 The Universum versus the Multiversa . . . . . . . . . . . . . . . . . . . . . . . . . . . 8", "title": "" }, { "docid": "ec8684e227bf63ac2314ce3cb17e2e8b", "text": "Musical genre classification is the automatic classification of audio signals into user defined labels describing pieces of music. A problem inherent to genre classification experiments in music information retrieval research is the use of songs from the same artist in both training and test sets. We show that this does not only lead to overoptimistic accuracy results but also selectively favours particular classification approaches. The advantage of using models of songs rather than models of genres vanishes when applying an artist filter. The same holds true for the use of spectral features versus fluctuation patterns for preprocessing of the audio files.", "title": "" }, { "docid": "af1dab317f2a5b45593a89d96a8061de", "text": "Software engineering is forecast to be among the fastest growing employment field in the next decades. The purpose of this investigation is two-fold: Firstly, empirical studies on the personality types of software professionals are reviewed. Secondly, this work provides an upto-date personality profile of software engineers according to the Myers–Briggs Type Indicator. r 2002 Elsevier Science Ltd. 
All rights reserved.", "title": "" }, { "docid": "9a3edba4b95b444243e34675ab2a7b85", "text": "Values are presented for body constants based on a study of nine male white cadavers of normal appearance and average build. The limb data are supplemented by a further analysis of 11 upper and 41 lower limbs. Techniques used in the study form standard procedures that can be duplicated by subsequent workers. Each cadaver was measured, weighed, and somatotyped. Joints were placed in the midposition of the movement range and the body was frozen rigid. Joint angles were bisected in a systematic dismemberment procedure to produce unit segments. These segment lengths were weighed, measured for linear link dimensions, and analysed for segment volumes. The segment centers of mass were located relative to link end points as well as in relation to anatomical landmarks. Finally, each segment was dissected into its component parts and these were weighed. The specific gravity of each body part was calculated separately. Data are expressed in mean values together with standard deviations and, where available, are correlated and evaluated with other values in the literature. Data on the relative bulk of body segments have been scarce. Until recently, the only users of information dealing with the mass and proportion of the human figure have been sculptors and graphic artists. These people usually met their needs through canons of proportions and a trained perception rather than by actual measurement. There are no substitutes though for good empirical data when critical work on body mechanics or accurate analyses of human locomotion are attempted. During the past decade or so, the need for such information has been recognized specifically in designing prosthetic and orthotic devices for the limbs of handicapped persons, for sports analysis, for the construction of test dummies, such as those subjected to vehicular crashes, and for studies on the dynamics of body impacts in crashes and falls. The fundamental nature of data on the mass and dimensions of the body parts cannot be questioned. It is odd that even now there is such a dearth of information. The research literature up to the present contains usable body segment measurements from only 12 (or possibly 14) unpreserved and dismembered cadavers, all adult white males. A tabulation of data in an Air Force technical report (Dempster, '55a), dealing with seven specimens caAM. J. ANAT., 120: 33-54. daver by cadaver, was the first amplification of the scanty records in more than two generations. The tables on Michigan cadavers were reprinted by Krogman and Johnston ( '63) in an abridgment of the original report; Williams and Lisner ( '62) presented their own simplifications based on the same study; Barter ('57), Duggar ( '62) and Contini, Drillis, and Bluestein ( '63) have made tallys of data from the original tabulations along with parts of the older data. None of these studies gave any attention to the procedural distinctions between workers who had procured original data; one even grouped volumes and masses indiscriminately as masses. The Michigan data, however, have not been summarized nor evaluated up to this time. Since the procedures and, especially, the limiting conditions incidental to the gathering of body-segment data, have not been commented on critically since Braune and Fischer (1889), a comprehensive discussion of the entire problem at this point should help further work in this important area. 
1 Supported in part by research grants from the Public Health Service National Institutes of Health (GM-07741-06). and from the office of Vocational Rehabilitation (RD-216 60-C), wlth support a dozen vears earlier' from a research contract with the Anthropometric Unit of the Wright Air Development Center Wright-Patterson Air Force Base, Dayton, Ohio (AF 18 (600)-43 Project no. 7414).", "title": "" }, { "docid": "a7bf370e83bd37ed4f83c3846cfaaf97", "text": "This paper presents the design and implementation of an evanescent tunable combline filter based on electronic tuning with the use of RF-MEMS capacitor banks. The use of MEMS tuning circuit results in the compact implementation of the proposed filter with high-Q and near to zero DC power consumption. The proposed filter consist of combline resonators with tuning disks that are loaded with RF-MEMS capacitor banks. A two-pole filter is designed and measured based on the proposed tuning concept. The filter operates at 2.5 GHz with a bandwidth of 22 MHz. Measurement results demonstrate a tuning range of 110 MHz while the quality factor is above 374 (1300–374 over the tuning range).", "title": "" }, { "docid": "904c8b4be916745c7d1f0777c2ae1062", "text": "In this paper, we address the problem of continuous access control enforcement in dynamic data stream environments, where both data and query security restrictions may potentially change in real-time. We present FENCE framework that ffectively addresses this problem. The distinguishing characteristics of FENCE include: (1) the stream-centric approach to security, (2) the symmetric model for security settings of both continuous queries and streaming data, and (3) two alternative security-aware query processing approaches that can optimize query execution based on regular and security-related selectivities. In FENCE, both data and query security restrictions are modeled symmetrically in the form of security metadata, called \"security punctuations\" embedded inside data streams. We distinguish between two types of security punctuations, namely, the data security punctuations (or short, dsps) which represent the access control policies of the streaming data, and the query security punctuations (or short, qsps) which describe the access authorizations of the continuous queries. We also present our encoding method to support XACML(eXtensible Access Control Markup Language) standard. We have implemented FENCE in a prototype DSMS and present our performance evaluation. The results of our experimental study show that FENCE's approach has low overhead and can give great performance benefits compared to the alternative security solutions for streaming environments.", "title": "" } ]
scidocsrr
ce6e3755a36ca41f25f5e9010fde0bbe
Perceived, not actual, similarity predicts initial attraction in a live romantic context: Evidence from the speed-dating paradigm
[ { "docid": "241cd26632a394e5d922be12ca875fe1", "text": "Little is known about whether personality characteristics influence initial attraction. Because adult attachment differences influence a broad range of relationship processes, the authors examined their role in 3 experimental attraction studies. The authors tested four major attraction hypotheses--self similarity, ideal-self similarity, complementarity, and attachment security--and examined both actual and perceptual factors. Replicated analyses across samples, designs, and manipulations showed that actual security and self similarity predicted attraction. With regard to perceptual factors, ideal similarity, self similarity, and security all were significant predictors. Whereas perceptual ideal and self similarity had incremental predictive power, perceptual security's effects were subsumed by perceptual ideal similarity. Perceptual self similarity fully mediated actual attachment similarity effects, whereas ideal similarity was only a partial mediator.", "title": "" } ]
[ { "docid": "aa2b1a8d0cf511d5862f56b47d19bc6a", "text": "DBMSs have long suffered from SQL’s lack of power and extensibility. We have implemented ATLaS [1], a powerful database language and system that enables users to develop complete data-intensive applications in SQL—by writing new aggregates and table functions in SQL, rather than in procedural languages as in current Object-Relational systems. As a result, ATLaS’ SQL is Turing-complete [7], and is very suitable for advanced data-intensive applications, such as data mining and stream queries. The ATLaS system is now available for download along with a suite of applications [1] including various data mining functions, that have been coded in ATLaS’ SQL, and execute with a modest (20–40%) performance overhead with respect to the same applications written in C/C++. Our proposed demo will illustrate the key features and applications of ATLaS. In particular, we will demonstrate:", "title": "" }, { "docid": "b9da9cc9d7583c5b72daf8a25a3145f5", "text": "The purpose of this article is to review literature that is relevant to the social scientific study of ethics and leadership, as well as outline areas for future study. We first discuss ethical leadership and then draw from emerging research on \"dark side\" organizational behavior to widen the boundaries of the review to include ««ethical leadership. Next, three emerging trends within the organizational behavior literature are proposed for a leadership and ethics research agenda: 1 ) emotions, 2) fit/congruence, and 3) identity/ identification. We believe each shows promise in extending current thinking. The review closes with discussion of important issues that are relevant to the advancement of research on leadership and ethics. T IMPORTANCE OF LEADERSHIP in promoting ethical conduct in organizations has long been understood. Within a work environment, leaders set the tone for organizational goals and behavior. Indeed, leaders are often in a position to control many outcomes that affect employees (e.g., strategies, goal-setting, promotions, appraisals, resources). What leaders incentivize communicates what they value and motivates employees to act in ways to achieve such rewards. It is not surprising, then, that employees rely on their leaders for guidance when faced with ethical questions or problems (Treviño, 1986). Research supports this contention, and shows that employees conform to the ethical values of their leaders (Schminke, Wells, Peyrefitte, & Sabora, 2002). Furthermore, leaders who are perceived as ethically positive influence productive employee work behavior (Mayer, Kuenzi, Greenbaum, Bardes, & Salvador, 2009) and negatively influence counterproductive work behavior (Brown & Treviño, 2006b; Mayer et al., 2009). Recently, there has been a surge of empirical research seeking to understand the influence of leaders on building ethical work practices and employee behaviors (see Brown & Treviño, 2006a for a review). Initial theory and research (Bass & Steidlemeier, 1999; Brown, Treviño, & Harrison, 2005; Ciulla, 2004; Treviño, Brown, & Hartman, 2003; Treviño, Hartman, & Brown, 2000) sought to define ethical leadership from both normative and social scientific (descriptive) approaches to business ethics. The normative perspective is rooted in philosophy and is concerned with prescribing how individuals \"ought\" or \"should\" behave in the workplace. 
For example, normative scholarship on ethical leadership (Bass & Steidlemeier, 1999; Ciulla, 2004) examines ethical decision making from particular philosophical frameworks, evaluates the ethicality of particular leaders, and considers the degree to which certain styles of leadership or influence tactics are ethical. ©2010 Business Ethics Quarterly 20:4 (October 2010); ISSN 1052-150X pp. 583-616 584 BUSINESS ETHICS QUARTERLY In contrast, our article emphasizes a social scientific approach to ethical leadership (e.g.. Brown et al., 2005; Treviño et al., 2000; Treviño et al, 2003). This approach is rooted in disciplines such as psychology, sociology, and organization science, and it attempts to understand how people perceive ethical leadership and investigates the antecedents, outcomes, and potential boundary conditions of those perceptions. This research has focused on investigating research questions such as: What is ethical leadership (Brown et al., 2005; Treviño et al., 2003)? What traits are associated with perceived ethical leadership (Walumbwa & Schaubroeck, 2009)? How does ethical leadership flow through various levels of management within organizations (Mayer et al., 2009)? And, does ethical leadership help or hurt a leader's promotability within organizations (Rubin, Dierdorff, & Brown, 2010)? The purpose of our article is to review literature that is relevant to the descriptive study of ethics and leadership, as well as outhne areas for future empirical study. We first discuss ethical leadership and then draw from emerging research on what often is called \"dark\" (destructive) organizational behavior, so as to widen the boundaries of our review to also include ««ethical leadership. Next, we discuss three emerging trends within the organizational behavior literature—1) emotions, 2) fit/congruence, and 3) identity/identification—that we believe show promise in extending current thinking on the influence of leadership (both positive and negative) on organizational ethics. We conclude with a discussion of important issues that are relevant to the advancement of research in this domain. A REVIEW OF SOCIAL SCIENTIFIC ETHICAL LEADERSHIP RESEARCH The Concept of Ethical Leadership Although the topic of ethical leadership has long been considered by scholars, descriptive research on ethical leadership is relatively new. Some of the first formal investigations focused on defining ethical leadership from a descriptive perspective and were conducted by Treviño and colleagues (Treviño et al., 2000, 2003). Their qualitative research revealed that ethical leaders were best described along two related dimensions: moral person and moral manager. The moral person dimension refers to the qualities of the ethical leader as a person. Strong moral persons are honest and trustworthy. They demonstrate a concern for other people and are also seen as approachable. Employees can come to these individuals with problems and concerns, knowing that they will be heard. Moral persons have a reputation for being fair and principled. Lastly, riioral persons are seen as consistently moral in both their personal and professional lives. The moral manager dimension refers to how the leader uses the tools of the position of leadership to promote ethical conduct at work. Strong moral managers see themselves as role models in the workplace. They make ethics salient by modeling ethical conduct to their employees. 
Moral managers set and communicate ethical standards and use rewards and punishments to ensure those standards are followed. In sum, leaders who are moral managers \"walk the talk\" and \"talk the walk,\" patterning their behavior and organizational processes to meet moral standards. ETHICAL AND UNETHICAL LEADERSHIP 585 Treviño and colleagues (Treviño et al., 2000, 2003) argued that individuals in power must be both strong moral persons and moral managers in order to be seen as ethical leaders by those around them. Strong moral managers who are weak moral persons are likely to be seen as hypocrites, failing to practice what they preach. Hypocritical leaders talk about the importance of ethics, but their actions show them to be dishonest and unprincipled. Conversely, a strong moral person who is a weak moral manager runs the risk of being seen as an ethically \"neutral\" leader. That is, the leader is perceived as being silent on ethical issues, suggesting to employees that the leader does not really care about ethics. Subsequent research by Brown, Treviño, and Harrison (2005:120) further clarified the construct and provided a formal definition of ethical leadership as \"the demonstration of normatively appropriate conduct through personal actions and interpersonal relationships, and the promotion of such conduct to followers through two-way communication, reinforcement, and decision-making.\" They noted that \"the term normatively appropriate is 'deliberately vague'\" (Brown et al., 2005: 120) because norms vary across organizations, industries, and cultures. Brown et al. (2005) ground their conceptualization of ethical leadership in social learning theory (Bandura, 1977, 1986). This theory suggests individuals can learn standards of appropriate behavior by observing how role models (like teachers, parents, and leaders) behave. Accordingly, ethical leaders \"teach\" ethical conduct to employees through their own behavior. Ethical leaders are relevant role models because they occupy powerful and visible positions in organizational hierarchies that allow them to capture their follower's attention. They communicate ethical expectations through formal processes (e.g., rewards, policies) and personal example (e.g., interpersonal treatment of others). Effective \"ethical\" modeling, however, requires more than power and visibility. For social learning of ethical behavior to take place, role models must be credible in terms of moral behavior. By treating others fairly, honestly, and considerately, leaders become worthy of emulation by others. Otherwise, followers might ignore a leader whose behavior is inconsistent with his/her ethical pronouncements or who fails to interact with followers in a caring, nurturing style (Yussen & Levy, 1975). Outcomes of Ethical Leadership Researchers have used both social learning theory (Bandura, 1977,1986) and social exchange theory (Blau, 1964) to explain the effects of ethical leadership on important outcomes (Brown et al., 2005; Brown & Treviño, 2006b; Mayer et al , 2009; Walumbwa & Schaubroeck, 2009). According to principles of reciprocity in social exchange theory (Blau, 1964; Gouldner, 1960), individuals feel obligated to return beneficial behaviors when they believe another has been good and fair to them. 
In line with this reasoning, researchers argue and find that employees feel indebted to ethical leaders because of their trustworthy and fair nature; consequently, they reciprocate with beneficial work behavior (e.g., higher levels of ethical behavior and citizenship behaviors) and refrain from engaging in destructive behavior (e.g., lower levels of workplace deviance). 586 BUSINESS ETHICS QUARTERLY Emerging research has found that ethical leadership is related to important follower outcomes, such as employees' job satisfaction, organizational commitment, willingness to report problems to supervisors, willingness to put in extra effort on the job, voice behavior (i.e., expression of constructive suggestions intended to improve standard procedures), and perceptions of organizational culture and ethical climate (Brown et al., 2005; Neubert, Carlson, Kacmar, Roberts,", "title": "" }, { "docid": "c09d2c25f112d9ecd10a8cf82e5ae6f0", "text": "We propose a deontological approach to machine ethics that avoids some weaknesses of an intuition-based system, such as that of Anderson and Anderson. In particular, it has no need to deal with conflicting intuitions, and it yields a more satisfactory account of when autonomy should be respected. We begin with a “dual standpoint” theory of action that regards actions as grounded in reasons and therefore as having a conditional form that is suited to machine instructions. We then derive ethical principles based on formal properties that the reasons must exhibit to be coherent, and formulate the principles using quantified modal logic. We conclude that deontology not only provides a more satisfactory basis for machine ethics but endows the machine with an ability to explain its actions, thus contributing to transparency in AI.", "title": "" }, { "docid": "5eed0c6f114382d868cd841c7b5d9986", "text": "Automatic signature verification is a well-established and an active area of research with numerous applications such as bank check verification, ATM access, etc. This paper proposes a novel approach to the problem of automatic off-line signature verification and forgery detection. The proposed approach is based on fuzzy modeling that employs the Takagi-Sugeno (TS) model. Signature verification and forgery detection are carried out using angle features extracted from box approach. Each feature corresponds to a fuzzy set. The features are fuzzified by an exponential membership function involved in the TS model, which is modified to include structural parameters. The structural parameters are devised to take account of possible variations due to handwriting styles and to reflect moods. The membership functions constitute weights in the TS model. The optimization of the output of the TS model with respect to the structural parameters yields the solution for the parameters. We have also derived two TS models by considering a rule for each input feature in the first formulation (Multiple rules) and by considering a single rule for all input features in the second formulation. In this work, we have found that TS model with multiple rules is better than TS model with single rule for detecting three types of forgeries; random, skilled and unskilled from a large database of sample signatures in addition to verifying genuine signatures. 
We have also devised three approaches, viz., an innovative approach and two intuitive approaches using the TS model with multiple rules for improved performance.", "title": "" }, { "docid": "26f71c28c1346e80bac0e39d84e99206", "text": "The objective of the article is to highlight various roles of glutamic acid like endogenic anticancer agent, conjugates to anticancer agents, and derivatives of glutamic acid as possible anticancer agents. Besides these emphases are given especially for two endogenous derivatives of glutamic acid such as glutamine and glutamate. Glutamine is a derivative of glutamic acid and is formed in the body from glutamic acid and ammonia in an energy requiring reaction catalyzed by glutamine synthase. It also possesses anticancer activity. So the transportation and metabolism of glutamine are also discussed for better understanding the role of glutamic acid. Glutamates are the carboxylate anions and salts of glutamic acid. Here the roles of various enzymes required for the metabolism of glutamates are also discussed.", "title": "" }, { "docid": "642aff9bd8d12a33aa1696eb1bd829d8", "text": "This paper presents the study on the semiconductor-based galvanic isolation. This solution delivers the differential-mode (DM) power via semiconductor power switches during their on states, while sustaining the common-mode (CM) voltage and blocking the CM leakage current with those switches during their off states. While it is impractical to implement this solution with Si devices, the latest SiC devices and the coming vertical GaN devices, however, provide unprecedented properties and thus can potentially enable the practical implementation. An isolated dc/dc converter based on the switched-capacitor circuit is studied as an example. The CM leakage current caused by the line input and the resulted touch current (TC) are quantified and compared to the limits in the safety standard IEC60950. To reduce the TC, low switch output capacitance and low converter switching frequency are needed. Then, discussions are presented on the TC reduction approaches and the design considerations to achieve high power density and high efficiency. A 400-V, 400-W prototype based on 1.7-kV SiC MOSFETs is built to demo the DM power delivery performance and showcase the CM leakage current problem. Further study on the CM leakage current elimination is needed to validate this solution.", "title": "" }, { "docid": "531d387a14eefa6a8c45ad64039f29be", "text": "This paper presents an S-Transform based probabilistic neural network (PNN) classifier for recognition of power quality (PQ) disturbances. The proposed method requires less number of features as compared to wavelet based approach for the identification of PQ events. The features extracted through the S-Transform are trained by a PNN for automatic classification of the PQ events. Since the proposed methodology can reduce the features of the disturbance signal to a great extent without losing its original property, less memory space and learning PNN time are required for classification. Eleven types of disturbances are considered for the classification problem. The simulation results reveal that the combination of S-Transform and PNN can effectively detect and classify different PQ events. The classification performance of PNN is compared with a feedforward multilayer (FFML) neural network (NN) and learning vector quantization (LVQ) NN. 
It is found that the classification performance of PNN is better than both FFML and LVQ.", "title": "" }, { "docid": "12d31865b311f0ad88ef7dd694a2cfc1", "text": "With the advance of wireless communication systems and increasing importance of other wireless applications, wideband and low profile antennas are in great demand for both commercial and military applications. Multi-band and wideband antennas are desirable in personal communication systems, small satellite communication terminals, and other wireless applications. Wideband antennas also find applications in Unmanned Aerial Vehicles (UAVs), Counter Camouflage, Concealment and Deception (CC&D), Synthetic Aperture Radar (SAR), and Ground Moving Target Indicators (GMTI). Some of these applications also require that an antenna be embedded into the airframe structure Traditionally, a wideband antenna in the low frequency wireless bands can only be achieved with heavily loaded wire antennas, which usually means different antennas are needed for different frequency bands. Recent progress in the study of fractal antennas suggests some attractive solutions for using a single small antenna operating in several frequency bands. The purpose of this article is to introduce the concept of the fractal, review the progress in fractal antenna study and implementation, compare different types of fractal antenna elements and arrays and discuss the challenge and future of this new type of antenna.", "title": "" }, { "docid": "e4a14229d3a10356f6b10ac0c19c8ec7", "text": "The Programmer's Learning Machine (PLM) is an interactive exerciser for learning programming and algorithms. Using an integrated and graphical environment that provides a short feedback loop, it allows students to learn in a (semi)-autonomous way. This generic platform also enables teachers to create specific programming microworlds that match their teaching goals. This paper discusses our design goals and motivations, introduces the existing material and the proposed microworlds, and details the typical use cases from the student and teacher point of views.", "title": "" }, { "docid": "10add5936202de7ee77bb3320fa0fbaa", "text": "Maintaining the quality of roadways is a major challenge for governments around the world. In particular, poor road surfaces pose a significant safety threat to motorists, especially when motorbikes make up a significant portion of roadway traffic. According to the statistics of the Ministry of Justice in Taiwan, there were 220 claims for state compensation caused by road quality problems between 2005 to 2007, and the government paid a total of 113 million NTD in compensation. This research explores utilizing a mobile phone with a tri-axial accelerometer to collect acceleration data while riding a motorcycle. The data is analyzed to detect road anomalies and to evaluate road quality. Motorcycle-based acceleration data is collected on twelve stretches of road, with a data log spanning approximately three hours, and a total road length of about 60 kilometers. Both supervised and unsupervised machine learning methods are used to recognize road conditions. SVM learning is used to detect road anomalies and to identify their corresponding positions from labeled acceleration data. This method of road anomaly detection achieves a precision of 78.5%. Furthermore, to construct a model of smooth roads, unsupervised learning is used to learn anomaly thresholds by clustering data collected from the accelerometer. 
The results are used to rank the quality of the road segments in the experiment. We compare the ranked list from the learned evaluator with the ranked list from human evaluators who rode along the same roadways during the test phase. Based on the Kendall tau rank correlation coefficient, the automatically ranked result exhibited excellent performance. Keywords-mobile device; machine learning; accelerometer; road surface anomaly; pothole;", "title": "" }, { "docid": "9547ec27942f9439d18dbfecdda83e1c", "text": "Inverted pendulum system is a complicated, unstable and multivariable nonlinear system. In order to control the angle and displacement of inverted pendulum system effectively, a novel double-loop digital PID control strategy is presented in this paper. Based on impulse transfer function, the model of the single linear inverted pendulum system is divided into two parts according to the controlled parameters. The inner control loop that is formed by the digital PID feedback control can control the angle of the pendulum, while in order to control the cart displacement, the digital PID series control is adopted to form the outer control loop. The simulation results show the digital control strategy is very effective to single inverted pendulum and when the sampling period is selected as 50 ms, the performance of the digital control system is similar to that of the analog control system. Copyright © 2013 IFSA.", "title": "" }, { "docid": "e70425a0b9d14ff4223f3553de52c046", "text": "CUDA is a new general-purpose C language interface to GPU developed by NVIDIA. It makes full use of parallel of GPU and has been widely used now. 3D model reconstruction is a traditional and common technique which has been widely used in engineering experiments, CAD and computer graphics. In this paper, we present an algorithm of CUDA-based Poisson surface reconstruction. Our algorithm makes full use of parallel of GPU and runs entirely on GPU and is ten times faster than previous CPU algorithm.", "title": "" }, { "docid": "fb11348b48f65a4d3101727308a1f4fc", "text": "Spin-transfer torque random access memory (STT-RAM) has emerged as an attractive candidate for future nonvolatile memories. It advantages the benefits of current state-of-the-art memories including high-speed read operation (of static RAM), high density (of dynamic RAM), and nonvolatility (of flash memories). However, the write operation in the 1T-1MTJ STT-RAM bitcell is asymmetric and stochastic, which leads to high energy consumption and long latency. In this paper, a new write assist technique is proposed to terminate the write operation immediately after switching takes place in the magnetic tunneling junction (MTJ). As a result, both the write time and write energy consumption of 1T-1MTJ bitcells improves. Moreover, the proposed write assist technique leads to an error-free write operation. The simulation results using a 65-nm CMOS access transistor and a 40-nm MTJ technology confirm that the proposed write assist technique results in three orders of magnitude improvement in bit error rate compared with the best existing techniques. Moreover, the proposed write assist technique leads to 81% energy saving compared with a cell without write assist and adds only 9.6% area overhead to a 16-kbit STT-RAM array.", "title": "" }, { "docid": "ad00ba810df4c7295b89640c64b50e51", "text": "Prospective memory (PM) research typically examines the ability to remember to execute delayed intentions but often ignores the ability to forget finished intentions. 
We had participants perform (or not perform; control group) a PM task and then instructed them that the PM task was finished. We later (re)presented the PM cue. Approximately 25% of participants made a commission error, the erroneous repetition of a PM response following intention completion. Comparisons between the PM groups and control group suggested that commission errors occurred in the absence of preparatory monitoring. Response time analyses additionally suggested that some participants experienced fatigue across the ongoing task block, and those who did were more susceptible to making a commission error. These results supported the hypothesis that commission errors can arise from the spontaneous retrieval of finished intentions and possibly the failure to exert executive control to oppose the PM response.", "title": "" }, { "docid": "2d7a13754631206203d6618ab2a27a76", "text": "This Contrast enhancement is frequently referred to as one of the most important issues in image processing. Histogram equalization (HE) is one of the common methods used for improving contrast in digital images. Histogram equalization (HE) has proved to be a simple and effective image contrast enhancement technique. However, the conventional histogram equalization methods usually result in excessive contrast enhancement, which causes the unnatural look and visual artifacts of the processed image. This paper presents a review of new forms of histogram for image contrast enhancement. The major difference among the methods in this family is the criteria used to divide the input histogram. Brightness preserving BiHistogram Equalization (BBHE) and Quantized Bi-Histogram Equalization (QBHE) use the average intensity value as their separating point. Dual Sub-Image Histogram Equalization (DSIHE) uses the median intensity value as the separating point. Minimum Mean Brightness Error Bi-HE (MMBEBHE) uses the separating point that produces the smallest Absolute Mean Brightness Error (AMBE). Recursive Mean-Separate Histogram Equalization (RMSHE) is another improvement of BBHE. The Brightness preserving dynamic histogram equalization (BPDHE) method is actually an extension to both MPHEBP and DHE. Weighting mean-separated sub-histogram equalization (WMSHE) method is to perform the effective contrast enhancement of the digital image. Keywords-component image processing; contrast enhancement; histogram equalization; minimum mean brightness error; brightness preserving enhancement, histogram partition.", "title": "" }, { "docid": "c2875f69b6a5d51f3fb3f3cf4ad0f346", "text": "Cancer cells often have characteristic changes in metabolism. Cellular proliferation, a common feature of all cancers, requires fatty acids for synthesis of membranes and signaling molecules. Here, we provide a view of cancer cell metabolism from a lipid perspective, and we summarize evidence that limiting fatty acid availability can control cancer cell proliferation.", "title": "" }, { "docid": "ec1f47a6ca0edd2334fc416d29ce02ea", "text": "We present Synereo, a next-gen decentralized and distributed social network designed for an attention economy. Our presentation is given in two chapters. Chapter 1 presents our design philosophy. Our goal is to make our users more effective agents by presenting social content that is relevant and actionable based on the user’s own estimation of value. We discuss the relationship between attention, value, and social agency in order to motivate the central mechanisms for content flow on the network. 
Chapter 2 defines a network model showing the mechanics of the network interactions, as well as the compensation model enabling users to promote content on the network and receive compensation for attention given to the network. We discuss the high-level technical implementation of these concepts based on the π-calculus, the most well known of a family of computational formalisms known as the mobile process calculi. 0.1 Prologue: This is not a manifesto. The Internet is overflowing with social network manifestos. Ello has a manifesto. Tsu has a manifesto. SocialSwarm has a manifesto. Even Diaspora had a manifesto. Each one of them is written in earnest with clear intent (see figure 1). Figure 1: Ello manifesto. The proliferation of these manifestos and the social networks they advertise represents an important market shift, one that needs to be understood in context. The shift from mainstream media to social media was all about “user generated content”. In other words, people took control of the content by making it for and distributing it to each other. In some real sense it was a remarkable expansion of the shift from glam rock to punk and DIY; and like that movement, it was the sense of people having a say in what impressions they received that has been the underpinning of the success of Facebook and Twitter and YouTube and the other social media giants. In the wake of that shift, though, we’ve seen that even when the people are producing the content, if the service is in somebody else’s hands then things still go wonky: the service providers run psychology experiments via the social feeds [1]; they sell people’s personally identifiable and other critical info [2]; and they give data to spooks [3]. Most importantly, they do this without any real consent of their users. With this new wave of services, people are expressing a desire to take more control of the service itself. When the service is distributed, as is the case with Splicious and Diaspora, it is truly cooperative. And, just as with the music industry, where the technology has reached the point that just about anybody can have a professional studio in their home, the same is true with media services. People are recognizing that we don’t need big data centers with massive environmental impact, we need engagement at the level of the service itself. If this really is the underlying requirement the market is articulating, then there is something missing from a social network that primarily serves up a manifesto with their service. While each of the networks mentioned above constitutes an important step in the right direction, they lack any clear indication", "title": "" },
    { "docid": "e63a5af56d8b20c9e3eac658940413ce", "text": "OBJECTIVE\nThis study examined the effects of various backpack loads on elementary schoolchildren's posture and postural compensations as demonstrated by a change in forward head position.\n\n\nSUBJECTS\nA convenience sample of 11 schoolchildren, aged 8-11 years, participated.\n\n\nMETHODS\nSagittal digital photographs were taken of each subject standing without a backpack, and then with the loaded backpack before and after walking 6 minutes (6MWT) at free walking speed. This was repeated over three consecutive weeks using backpacks containing randomly assigned weights of 10%, 15%, or 20% body weight of each respective subject. 
The craniovertebral angle (CVA) was measured using digitizing software, recorded and analyzed.\n\n\nRESULTS\nSubjects demonstrated immediate and statistically significant changes in CVA, indicating increased forward head positions upon donning the backpacks containing 15% and 20% body weight. Following the 6MWT, the CVA demonstrated further statistically significant changes for all backpack loads indicating increased forward head postures. For the 15 & 20%BW conditions, more than 50% of the subjects reported discomfort after walking, with the neck as the primary location of reported pain.\n\n\nCONCLUSIONS\nBackpack loads carried by schoolchildren should be limited to 10% body weight due to increased forward head positions and subjective complaints at 15% and 20% body weight loads.", "title": "" }, { "docid": "c0a8acf5741567077c8e7dc188033bc4", "text": "The framework of dynamic movement primitives (DMPs) contains many favorable properties for the execution of robotic trajectories, such as indirect dependence on time, response to perturbations, and the ability to easily modulate the given trajectories, but the framework in its original form remains constrained to the kinematic aspect of the movement. In this paper, we bridge the gap to dynamic behavior by extending the framework with force/torque feedback. We propose and evaluate a modulation approach that allows interaction with objects and the environment. Through the proposed coupling of originally independent robotic trajectories, the approach also enables the execution of bimanual and tightly coupled cooperative tasks. We apply an iterative learning control algorithm to learn a coupling term, which is applied to the original trajectory in a feed-forward fashion and, thus, modifies the trajectory in accordance to the desired positions or external forces. A stability analysis and results of simulated and real-world experiments using two KUKA LWR arms for bimanual tasks and interaction with the environment are presented. By expanding on the framework of DMPs, we keep all the favorable properties, which is demonstrated with temporal modulation and in a two-agent obstacle avoidance task.", "title": "" }, { "docid": "581e3373ecfbc6c012df7c166636cc50", "text": "The deep convolutional neural network(CNN) has significantly raised the performance of image classification and face recognition. Softmax is usually used as supervision, but it only penalizes the classification loss. In this paper, we propose a novel auxiliary supervision signal called contrastive-center loss, which can further enhance the discriminative power of the features, for it learns a class center for each class. The proposed contrastive-center loss simultaneously considers intra-class compactness and inter-class separability, by penalizing the contrastive values between: (1)the distances of training samples to their corresponding class centers, and (2)the sum of the distances of training samples to their non-corresponding class centers. Experiments on different datasets demonstrate the effectiveness of contrastive-center loss.", "title": "" } ]
scidocsrr
4d7263c763fab9f348f4ebae3faa47fb
BackFi: High Throughput WiFi Backscatter
[ { "docid": "e30cedcb4cb99c4c3b2743c5359cf823", "text": "This paper presents a 116nW wake-up radio complete with crystal reference, interference compensation, and baseband processing, such that a selectable 31-bit code is required to toggle a wake-up signal. The front-end operates over a broad frequency range, tuned by an off-chip band-select filter and matching network, and is demonstrated in the 402-405MHz MICS band and the 915MHz and 2.4GHz ISM bands with sensitivities of -45.5dBm, -43.4dBm, and -43.2dBm, respectively. Additionally, the baseband processor implements automatic threshold feedback to detect the presence of interferers and dynamically adjust the receiver's sensitivity, mitigating the jamming problem inherent to previous energy-detection wake-up radios. The wake-up radio has a raw OOK chip-rate of 12.5kbps, an active area of 0.35mm2 and operates using a 1.2V supply for the crystal reference and RF demodulation, and a 0.5V supply for subthreshold baseband processing.", "title": "" } ]
[ { "docid": "bf84e66bab43950f0d4d8c2d465b907e", "text": "Paraphrases are sentences or phrases that convey the same meaning using different wording. Although the logical definition of paraphrases requires strict semantic equivalence, linguistics accepts a broader, approximate, equivalence—thereby allowing far more examples of “quasi-paraphrase.” But approximate equivalence is hard to define. Thus, the phenomenon of paraphrases, as understood in linguistics, is difficult to characterize. In this article, we list a set of 25 operations that generate quasi-paraphrases. We then empirically validate the scope and accuracy of this list by manually analyzing random samples of two publicly available paraphrase corpora. We provide the distribution of naturally occurring quasi-paraphrases in English text.", "title": "" }, { "docid": "33789f718bc299fa63762f72595dcd77", "text": "Resource allocation efficiency and energy consumption are among the top concerns to today's Cloud data center. Finding the optimal point where users' multiple job requests can be accomplished timely with minimum electricity and hardware cost is one of the key factors for system designers and managers to optimize the system configurations. Understanding the characteristics of the distribution of user task is an essential step for this purpose. At large-scale Cloud Computing data centers, a precise workload prediction will significantly help designers and operators to schedule hardware/software resources and power supplies in a more efficient manner, and make appropriate decisions to upgrade the Cloud system when the workload grows. While a lot of study has been conducted for hypervisor-based Cloud, container-based virtualization is becoming popular because of the low overhead and high efficiency in utilizing computing resources. In this paper, we have studied a set of real-world container data center traces from part of Google's cluster. We investigated the distribution of job duration, waiting time and machine utilization and the number of jobs submitted in a fix time period. Based on the quantitative study, an Ensemble Workload Prediction (EnWoP) method and a novel prediction evaluation parameter called Cloud Workload Correction Rate (C-Rate) have been proposed. The experimental results have verified that the EnWoP method achieved high prediction accuracy and the C-Rate evaluates the prediction methods more objective.", "title": "" }, { "docid": "045a4622691d1ae85593abccb823b193", "text": "The capability of Corynebacterium glutamicum for glucose-based synthesis of itaconate was explored, which can serve as building block for production of polymers, chemicals, and fuels. C. glutamicum was highly tolerant to itaconate and did not metabolize it. Expression of the Aspergillus terreus CAD1 gene encoding cis-aconitate decarboxylase (CAD) in strain ATCC13032 led to the production of 1.4mM itaconate in the stationary growth phase. Fusion of CAD with the Escherichia coli maltose-binding protein increased its activity and the itaconate titer more than two-fold. Nitrogen-limited growth conditions boosted CAD activity and itaconate titer about 10-fold to values of 1440 mU mg(-1) and 30 mM. 
Reduction of isocitrate dehydrogenase activity via exchange of the ATG start codon to GTG or TTG resulted in maximal itaconate titers of 60 mM (7.8 g l(-1)), a molar yield of 0.4 mol mol(-1), and a volumetric productivity of 2.1 mmol l(-1) h(-1).", "title": "" }, { "docid": "8273a154d8e8b94873c4c94c4ff6ed14", "text": "The ambitious goals set for 5G wireless networks, which are expected to be introduced around 2020, require dramatic changes in the design of different layers for next generation communications systems. Massive MIMO systems, filter bank multi-carrier modulation, relaying technologies, and millimeter-wave communications have been considered as some of the strong candidates for the physical layer design of 5G networks. In this article, we shed light on the potential and implementation of IM techniques for MIMO and multi-carrier communications systems, which are expected to be two of the key technologies for 5G systems. Specifically, we focus on two promising applications of IM: spatial modulation and orthogonal frequency-division multiplexing with IM, and discuss the recent advances and future research directions in IM technologies toward spectrum- and energy-efficient 5G wireless networks.", "title": "" }, { "docid": "3adf8510887ff9e5c7a270e16dcdec9a", "text": "This paper analyzes the Sampled Value (SV) Process Bus concept that was recently introduced by the IEC 61850-9-2 standard. This standard proposes that the Current and Voltage Transformer (CT, PT) outputs that are presently hard wired to various devices (relays, meters, IED, and SCADA) be digitized at the source and then communicated to those devices using an Ethernet-Based Local Area Network (LAN). The approach is especially interesting for modern optical CT/PT devices that possess high quality information about the primary voltage/current waveforms, but are often forced to degrade output signal accuracy in order to meet traditional analog interface requirements (5 A/120 V). While very promising, the SV-based process bus brings along a distinct set of issues regarding the overall reliability of the new Ethernet communications-based protection and control system. This paper looks at the Merging Unit Concept, analyzes the protection system reliability in the process bus environment, and proposes an alternate approach that can be used to successfully deploy this technology. Multiple scenarios used with the associated equipment configurations are compared. Additional issues that need to be addressed by various standards bodies and interoperability challenges posed by the SV process bus LAN on real-time monitoring and control applications (substation HMI, SCADA, engineering access) are also identified.", "title": "" }, { "docid": "04065494023ed79211af3ba0b5bc4c7e", "text": "The glucagon-like peptides include glucagon, GLP-1, and GLP-2, and exert diverse actions on nutrient intake, gastrointestinal motility, islet hormone secretion, cell proliferation and apoptosis, nutrient absorption, and nutrient assimilation. GIP, a related member of the glucagon peptide superfamily, also regulates nutrient disposal via stimulation of insulin secretion. The actions of these peptides are mediated by distinct members of the glucagon receptor superfamily of G protein-coupled receptors. These receptors exhibit unique patterns of tissue-specific expression, exhibit considerable amino acid sequence identity, and share similar structural and functional properties with respect to ligand binding and signal transduction. 
This article provides an overview of the biology of these receptors with an emphasis on understanding the unique actions of glucagon-related peptides through studies of the biology of their cognate receptors.", "title": "" }, { "docid": "09fa74b0a83e040beb5612e6eeb4089c", "text": "Mapping word embeddings of different languages into a single space has multiple applications. In order to map from a source space into a target space, a common approach is to learn a linear mapping that minimizes the distances between equivalences listed in a bilingual dictionary. In this paper, we propose a framework that generalizes previous work, provides an efficient exact method to learn the optimal linear transformation and yields the best bilingual results in translation induction while preserving monolingual performance in an analogy task.", "title": "" }, { "docid": "6325188ee21b6baf65dbce6855c19bc2", "text": "A knowledgeable observer of a game of football (soccer) can make a subjective evaluation of the quality of passes made between players during the game, such as rating them as Good, OK, or Bad. In this article, we consider the problem of producing an automated system to make the same evaluation of passes and present a model to solve this problem.\n Recently, many professional football leagues have installed object tracking systems in their stadiums that generate high-resolution and high-frequency spatiotemporal trajectories of the players and the ball. Beginning with the thesis that much of the information required to make the pass ratings is available in the trajectory signal, we further postulated that using complex data structures derived from computational geometry would enable domain football knowledge to be included in the model by computing metric variables in a principled and efficient manner. We designed a model that computes a vector of predictor variables for each pass made and uses machine learning techniques to determine a classification function that can accurately rate passes based only on the predictor variable vector.\n Experimental results show that the learned classification functions can rate passes with 90.2% accuracy. The agreement between the classifier ratings and the ratings made by a human observer is comparable to the agreement between the ratings made by human observers, and suggests that significantly higher accuracy is unlikely to be achieved. Furthermore, we show that the predictor variables computed using methods from computational geometry are among the most important to the learned classifiers.", "title": "" }, { "docid": "cc5815edf96596a1540fa1fca53da0d3", "text": "INTRODUCTION\nSevere motion sickness is easily identifiable with sufferers showing obvious behavioral signs, including emesis (vomiting). Mild motion sickness and sopite syndrome lack such clear and objective behavioral markers. We postulate that yawning may have the potential to be used in operational settings as such a marker. This study assesses the utility of yawning as a behavioral marker for the identification of soporific effects by investigating the association between yawning and mild motion sickness/sopite syndrome in a controlled environment.\n\n\nMETHODS\nUsing a randomized motion-counterbalanced design, we collected yawning and motion sickness data from 39 healthy individuals (34 men and 5 women, ages 27-59 yr) in static and motion conditions. Each individual participated in two 1-h sessions. Each session consisted of six 10-min blocks. 
Subjects performed a multitasking battery on a head mounted display while seated on the moving platform. The occurrence and severity of symptoms were assessed with the Motion Sickness Assessment Questionnaire (MSAQ).\n\n\nRESULTS\nYawning occurred predominantly in the motion condition. All yawners in motion (N = 5) were symptomatic. Compared to nonyawners (MSAQ indices: Total = 14.0, Sopite = 15.0), subjects who yawned in motion demonstrated increased severity of motion sickness and soporific symptoms (MSAQ indices: Total = 17.2, Sopite = 22.4), and reduced multitasking cognitive performance (Composite score: nonyawners = 1348; yawners = 1145).\n\n\nDISCUSSION\nThese results provide evidence that yawning may be a viable behavioral marker to recognize the onset of soporific effects and their concomitant reduction in cognitive performance.", "title": "" }, { "docid": "93076fee7472e1a89b2b3eb93cff4737", "text": "This paper presents a fast and robust level set method for image segmentation. To enhance the robustness against noise, we embed a Markov random field (MRF) energy function to the conventional level set energy function. This MRF energy function builds the correlation of a pixel with its neighbors and encourages them to fall into the same region. To obtain a fast implementation of the MRF embedded level set model, we explore algebraic multigrid (AMG) and sparse field method (SFM) to increase the time step and decrease the computation domain, respectively. Both AMG and SFM can be conducted in a parallel fashion, which facilitates the processing of our method for big image databases. By comparing the proposed fast and robust level set method with the standard level set method and its popular variants on noisy synthetic images, synthetic aperture radar (SAR) images, medical images, and natural images, we comprehensively demonstrate the new method is robust against various kinds of noises. In particular, the new level set method can segment an image of size 500 × 500 within 3 s on MATLAB R2010b installed in a computer with 3.30-GHz CPU and 4-GB memory.", "title": "" }, { "docid": "b44b177f50402015e343e78afe4d7523", "text": "A design of a novel wireless implantable blood pressure sensing microsystem for advanced biological research is presented. The system employs a miniature instrumented elastic cuff, wrapped around a blood vessel, for small laboratory animal real-time blood pressure monitoring. The elastic cuff is made of biocompatible soft silicone material by a molding process and is filled by insulating silicone oil with an immersed MEMS capacitive pressure sensor interfaced with low-power integrated electronic system. This technique avoids vessel penetration and substantially minimizes vessel restriction due to the soft cuff elasticity, and is thus attractive for long-term implant. The MEMS pressure sensor detects the coupled blood pressure waveform caused by the vessel expansion and contraction, followed by amplification, 11-bit digitization, and wireless FSK data transmission to an external receiver. The integrated electronics are designed with capability of receiving RF power from an external power source and converting the RF signal to a stable 2 V DC supply in an adaptive manner to power the overall implant system, thus enabling the realization of stand-alone batteryless implant microsystem. The electronics are fabricated in a 1.5 μm CMOS process and occupy an area of 2 mm × 2 mm. 
The prototype monitoring cuff is wrapped around the right carotid artery of a laboratory rat to measure the real-time blood pressure waveform. The measured in vivo blood waveform is compared with a reference waveform recorded simultaneously using a commercial catheter-tip transducer inserted into the left carotid artery. The two measured waveforms are closely matched with a constant scaling factor. The ASIC is interfaced with a 5-mm-diameter RF powering coil with four miniature surface-mounted components (one inductor and three capacitors) over a thin flexible substrate by bond wires, followed by silicone coating and packaging with the prototype blood pressure monitoring cuff. The overall system achieves a measured average sensitivity of 7 LSB/mmHg, a nonlinearity less than 2.5% of full scale, and a hysteresis less than 1% of full scale. From noise characterization, a blood vessel pressure change sensing resolution of 1 mmHg can be expected. The system weighs 330 mg, representing an order of magnitude mass reduction compared with state-of-the-art commercial technology.", "title": "" },
    { "docid": "eba25ae59603328f3ef84c0994d46472", "text": "We address the problem of how to personalize educational content to students in order to maximize their learning gains over time. We present a new computational approach to this problem called MAPLE (Multi-Armed Bandits based Personalization for Learning Environments) that combines difficulty ranking with multi-armed bandits. Given a set of target questions, MAPLE estimates the expected learning gains for each question and uses an exploration-exploitation strategy to choose the next question to pose to the student. It maintains a personalized ranking over the difficulties of questions in the target set and updates it in real-time according to students’ progress. We show in simulations that MAPLE was able to improve students’ learning gains compared to approaches that sequence questions in increasing level of difficulty, or rely on content experts. When implemented in a live e-learning system in the wild, MAPLE showed promising initial results.", "title": "" },
    { "docid": "63198927563faa609e6520a01a56b20c", "text": "A 1.2 V 4 Gb DDR4 SDRAM is presented in a 30 nm CMOS technology. DDR4 SDRAM is developed to raise memory bandwidth with lower power consumption compared with DDR3 SDRAM. Various functions and circuit techniques are newly adopted to reduce power consumption and secure stable transactions. First, a dual error detection scheme is proposed to guarantee the reliability of signals. It is composed of cyclic redundancy check (CRC) for the DQ channel and command-address (CA) parity for the command and address channel. For stable reception of high speed signals, a gain enhanced buffer and a PVT tolerant data fetch scheme are adopted for CA and DQ respectively. To reduce the output jitter, the type of delay line is selected depending on the data rate at the initial stage. As a result, test measurement shows 3.3 Gb/s DDR operation at 1.14 V.", "title": "" },
    { "docid": "a01302cad4754ecf162d485e00c72e38", "text": "The problem of creating fair ship design curves is of major importance in the Computer Aided Ship Design environment. The fairness of these curves is generally considered a subjective notion depending on the judgement of the designer (e.g., visually pleasing, minimum variation of curvature, devoid of unnecessary bumps or wiggles, satisfying certain continuity requirements). Thus an automated fairing process based on objective criteria is clearly desirable. 
This paper presents an automated fairing algorithm for ship curves to satisfy objective geometric constraints. This procedure is based on the use of optimisation tools and cubic B-spline functions. The aim is to produce curves with a more gradual variation of curvature without deteriorating initial shapes. The optimisation based fairing procedure is applied to a variety of plane ship sections to demonstrate the capability and flexibility of the methodology. The resulting curves, with their corresponding curvature plots indicate that, provided that the designer can specify his objectives and constraints clearly, the procedure will generate fair ship definition curves within the constrained design space.", "title": "" }, { "docid": "567d165eb9ad5f9860f3e0602cbe3e03", "text": "This paper presents new image sensors with multi- bucket pixels that enable time-multiplexed exposure, an alter- native imaging approach. This approach deals nicely with scene motion, and greatly improves high dynamic range imaging, structured light illumination, motion corrected photography, etc. To implement an in-pixel memory or a bucket, the new image sensors incorporate the virtual phase CCD concept into a standard 4-transistor CMOS imager pixel. This design allows us to create a multi-bucket pixel which is compact, scalable, and supports true correlated double sampling to cancel kTC noise. Two image sensors with dual and quad-bucket pixels have been designed and fabricated. The dual-bucket sensor consists of a 640H × 576V array of 5.0 μm pixel in 0.11 μm CMOS technology while the quad-bucket sensor comprises 640H × 512V array of 5.6 μm pixel in 0.13 μm CMOS technology. Some computational photography applications were implemented using the two sensors to demonstrate their values in eliminating artifacts that currently plague computational photography.", "title": "" }, { "docid": "57e50c15b3107a473f5fb74472b74fcc", "text": "PURPOSE\nThe purpose of this article is to provide an overview of our previous work on roll-over shapes, which are the effective rocker shapes that the lower limb systems conform to during walking.\n\n\nMETHOD\nThis article is a summary of several recently published articles from the Northwestern University Prosthetics Research Laboratory and Rehabilitation Engineering Research Program on the topic of roll-over shapes. The roll-over shape is a measurement of centre of pressure of the ground reaction force in body-based coordinates. This measurement is interpreted as the effective rocker shape created by lower limb systems during walking.\n\n\nRESULTS\nOur studies have shown that roll-over shapes in able-bodied subjects do not change appreciably for conditions of level ground walking, including walking at different speeds, while carrying different amounts of weight, while wearing shoes of different heel heights, or when wearing shoes with different rocker radii. In fact, results suggest that able-bodied humans will actively change their ankle movements to maintain the same roll-over shapes.\n\n\nCONCLUSIONS\nThe consistency of the roll-over shapes to level surface walking conditions has provided insight for design, alignment and evaluation of lower limb prostheses and orthoses. Changes to ankle-foot and knee-ankle-foot roll-over shapes for ramp walking conditions have suggested biomimetic (i.e. 
mimicking biology) strategies for adaptable ankle-foot prostheses and orthoses.", "title": "" }, { "docid": "60fb532b3d22b5f598a0aebabc616de4", "text": "Introduction Vision is the primary sensory modality for humans—and most other mammals—by which they perceive the world. In humans, vision-related areas occupy about 30% of the neocortex. Light rays are projected upon the retina, and the brain tries to make sense of the world by means of interpreting the visual input pattern. The sensitivity and specificity with which the brain solves this computationally complex problem cannot yet be replicated on a computer. The most imposing of these problems is that of invariant visual pattern recognition. Recently it has been said that the prediction of future sensory input from salient features of current input is the keystone of intelligence. The neocortex is the structure in the brain which is assumed to be responsible for the evolution of intelligence. Current sensory input patterns activate stored traces of previous inputs which then generate top-down expectations, which are verified against the bottom-up input signals. If the verification succeeds, the predicted pattern is recognised. This theory explains how humans, and mammals in general, can recognise images despite changes in location, size and lighting conditions, and in the presence of deformations and large amounts of noise. Parts of this theory, known as the memory-prediction theory (MPT), are modelled in the Hierarchical Temporal Memory or HTM technology developed by a company called Numenta; the model is an attempt to replicate the structural and algorithmic properties of the neocortex. Spatial and temporal relations between features of the sensory signals are formed in an hierarchical memory architecture during a learning process. When a new pattern arrives, the recognition process can be viewed as choosing the stored representation that best predicts the pattern. Hierarchical Temporal Memory has been successfully applied to the recognition of relatively simple images, showing invariance across several transformations and robustness with respect to noisy patterns. We have applied the concept of HTM, as implemented by Numenta, to land-use recognition, by building and testing a system to learn to recognise five different types of land use. Overview of the HTM learning algorithm Hierarchical Temporal Memory can be considered a form of a Bayesian network, where the network consists of a collection of nodes arranged in a tree-shaped hierarchy. Each node in the hierarchy self-discovers a set of causes in its input, through a process of finding common spatial patterns and then detecting common temporal patterns. Unlike many Bayesian networks, HTMs are self-training, have a well-defined parent/child relationship between each node, inherently handle time-varying data and afford mechanisms for covert attention. Sensory data are presented at the bottom of the hierarchy. To train an HTM, it is necessary to present continuous, time-varying, sensory inputs while the causes underlying the same sensory data persist in the environment. In other words, you either move the senses of the HTM through the world, or the objects in the world move relative to the HTM’s senses. Time is the fundamental component of an HTM, and can be thought of as a learning supervisor. Hierarchical Temporal Memory networks are made of nodes; each node receives as input a temporal sequence of patterns. 
The goal of each node is to group input patterns that are likely to have the same cause, thereby forming invariant representations of extrinsic causes. An HTM node uses two grouping mechanisms to form invariants (Fig. 1). The first mechanism is called spatial pooling, in which raw data are received by the sensor; spatial poolers of higher nodes receive the outputs from their child nodes. The input of the spatial pooler in higher layers is the fixed-order concatenation of the output of its children. This input is represented by row vectors, and the role of the spatial pooler is to build a matrix (the coincidence matrix) from input vectors that occur frequently. There are multiple spatial pooler algorithms, e.g. Gaussian and Product. The Gaussian spatial pooler algorithm is used for nodes at the input layer, whereas the nodes higher up the hierarchy use the Product spatial pooler. The Gaussian spatial pooler algorithm compares the raw input vectors with the existing coincidences in the coincidence matrix. If the Euclidean distance between an input vector and an existing coincidence is small enough, the input is considered to be the same coincidence, and the count for that coincidence is incremented and stored in memory.", "title": "" },
    { "docid": "b3f2c1736174eda75f7eedb3cee2a729", "text": "Stochastic local search (SLS) algorithms are well known for their ability to efficiently find models of random instances of the Boolean satisfiability (SAT) problem. One of the most famous SLS algorithms for SAT is WalkSAT, which is an initial algorithm that has wide influence and performs very well on random 3-SAT instances. However, the performance of WalkSAT on random k-SAT instances with k > 3 lags far behind. Indeed, there are limited works on improving SLS algorithms for such instances. This work takes a good step toward this direction. We propose a novel concept namely multilevel make. Based on this concept, we design a scoring function called linear make, which is utilized to break ties in WalkSAT, leading to a new algorithm called WalkSATlm. Our experimental results show that WalkSATlm improves WalkSAT by orders of magnitude on random k-SAT instances with k > 3 near the phase transition. Additionally, we propose an efficient implementation for WalkSATlm, which leads to a speedup of 100%. We also give some insights on different forms of linear make functions, and show the limitation of the linear make function on random 3-SAT through theoretical analysis.", "title": "" },
    { "docid": "19a538b6a49be54b153b0a41b6226d1f", "text": "This paper presents a robot aimed to assist the shoulder movements of stroke patients during their rehabilitation process. This robot has the general form of an exoskeleton, but is characterized by an action principle on the patient no longer requiring a tedious and accurate alignment of the robot and patient's joints. It is constituted of a poly-articulated structure whose actuation is deported and transmission is ensured by Bowden cables. It manages two of the three rotational degrees of freedom (DOFs) of the shoulder. Quite light and compact, its proximal end can be rigidly fixed to the patient's back on a rucksack structure. As for its distal end, it is connected to the arm through passive joints and a splint guaranteeing the robot action principle, i.e. exert a force perpendicular to the patient's arm, whatever its configuration. 
This paper also presents a first prototype of this robot and some experimental results such as the arm angular excursions reached with the robot in the three joint planes.", "title": "" },
    { "docid": "b36e893afa63ed246c7bf18139eb147e", "text": "It is widely believed that employee participation may affect employees’ job satisfaction, employee productivity and employee commitment, and that all of these can create a comparative advantage for the organization. The main intention of this study was to find out the relationship among employee participation, job satisfaction, employee productivity and employee commitment. For this purpose, 34 organizations from the Oil & Gas, Banking and Telecommunication sectors were contacted, of which 15 responded. The findings of this study are that employee participation is an important determinant of the job satisfaction components. Increasing employee participation will have a positive effect on employees’ job satisfaction, employee commitment and employee productivity. Naturally, increasing employee participation is a long-term process, which demands both attention from the management side and initiative from the employee side.", "title": "" } ]
scidocsrr
03a5fd34d6ba199433ce53b959802b23
Unified Point-of-Interest Recommendation with Temporal Interval Assessment
[ { "docid": "7e6182248b3c3d7dedce16f8bfa58b28", "text": "In this paper, we aim to provide a point-of-interests (POI) recommendation service for the rapid growing location-based social networks (LBSNs), e.g., Foursquare, Whrrl, etc. Our idea is to explore user preference, social influence and geographical influence for POI recommendations. In addition to deriving user preference based on user-based collaborative filtering and exploring social influence from friends, we put a special emphasis on geographical influence due to the spatial clustering phenomenon exhibited in user check-in activities of LBSNs. We argue that the geographical influence among POIs plays an important role in user check-in behaviors and model it by power law distribution. Accordingly, we develop a collaborative recommendation algorithm based on geographical influence based on naive Bayesian. Furthermore, we propose a unified POI recommendation framework, which fuses user preference to a POI with social influence and geographical influence. Finally, we conduct a comprehensive performance evaluation over two large-scale datasets collected from Foursquare and Whrrl. Experimental results with these real datasets show that the unified collaborative recommendation approach significantly outperforms a wide spectrum of alternative recommendation approaches.", "title": "" } ]
[ { "docid": "82c4aa6bc189e011556ca7aa6d1688b9", "text": "Two aspects of children’s early gender development the spontaneous production of gender labels and sex-typed play were examined longitudinally in a sample of 82 children. Survival analysis, a statistical technique well suited to questions involving developmental transitions, was used to investigate the timing of the onset of children’s gender labeling as based on mothers’ biweekly reports on their children’s language from 9 through 21 months. Videotapes of children’s play both alone and with mother at 17 and 21 months were independently analyzed for play with gender stereotyped and neutral toys. Finally, the relation between gender labeling and sex-typed play was examined. Children transitioned to using gender labels at approximately 19 months on average. Although girls and boys showed similar patterns in the development of gender labeling, girls began labeling significantly earlier than boys. Modest sex differences in play were present at 17 months and increased at 21 months. Gender labeling predicted increases in sex-typed play, suggesting that knowledge of gender categories might influence sex-typing before the age of 2.", "title": "" }, { "docid": "b1cb31c70acb17d353116783845f85f5", "text": "Wireless sensor networks have become increasingly popular due to their wide range of applications. Energy consumption is one of the biggest constraints of the wireless sensor node and this limitation combined with a typical deployment of large number of nodes have added many challenges to the design and management of wireless sensor networks. They are typically used for remote environment monitoring in areas where providing electrical power is difficult. Therefore, the devices need to be powered by batteries and alternative energy sources. Because battery energy is limited, the use of different techniques for energy saving is one of the hottest topics in WSNs. In this work, we present a survey of power saving and energy optimization techniques for wireless sensor networks, which enhances the ones in existence and introduces the reader to the most well known available methods that can be used to save energy. They are analyzed from several points of view: Device hardware, transmission, MAC and routing protocols.", "title": "" }, { "docid": "0a45c122c6995df91f03f8615f4668d1", "text": "The advanced microgrid is envisioned to be a critical part of the future smart grid because of its local intelligence, automation, interoperability, and distributed energy resources (DER) hosting capability. The enabling technology of advanced microgrids is the microgrid management system (MGMS). In this article, we discuss and review the concept of the MGMS and state-of-the-art solutions regarding centralized and distributed MGMSs in the primary, secondary, and tertiary levels, from which we observe a general tendency toward decentralization.", "title": "" }, { "docid": "3c667426c8dcea8e7813e9eef23a1e15", "text": "Radio spectrum has become a precious resource, and it has long been the dream of wireless communication engineers to maximize the utilization of the radio spectrum. Dynamic Spectrum Access (DSA) and Cognitive Radio (CR) have been considered promising to enhance the efficiency and utilization of the spectrum. In current overlay cognitive radio, spectrum sensing is first performed to detect the spectrum holes for the secondary user to harness. 
However, in a more sophisticated cognitive radio, the secondary user needs to detect more than just the existence of primary users and spectrum holes. For example, in a hybrid overlay/underlay cognitive radio, the secondary use needs to detect the transmission power and localization of the primary users as well. In this paper, we combine the spectrum sensing and primary user power/localization detection together, and propose to jointly detect not only the existence of primary users but the power and localization of them via compressed sensing. Simulation results including the miss detection probability (MDP), false alarm probability (FAP) and reconstruction probability (RP) confirm the effectiveness and robustness of the proposed method.", "title": "" }, { "docid": "534fd7868826681596586f00f47cd819", "text": "Locally weighted projection regression is a new algorithm that achieves nonlinear function approximation in high dimensional spaces with redundant and irrelevant input dimensions. At its core, it uses locally linear models, spanned by a small number of univariate regressions in selected directions in input space. This paper evaluates different methods of projection regression and derives a nonlinear function approximator based on them. This nonparametric local learning system i) learns rapidly with second order learning methods based on incremental training, ii) uses statistically sound stochastic cross validation to learn iii) adjusts its weighting kernels based on local information only, iv) has a computational complexity that is linear in the number of inputs, and v) can deal with a large number of possibly redundant inputs, as shown in evaluations with up to 50 dimensional data sets. To our knowledge, this is the first truly incremental spatially localized learning method to combine all these properties.", "title": "" }, { "docid": "ca768eb654b323354b7d78969162cb81", "text": "Hyper-redundant manipulators can be fragile, expensive, and limited in their flexibility due to the distributed and bulky actuators that are typically used to achieve the precision and degrees of freedom (DOFs) required. Here, a manipulator is proposed that is robust, high-force, low-cost, and highly articulated without employing traditional actuators mounted at the manipulator joints. Rather, local tunable stiffness is coupled with off-board spooler motors and tension cables to achieve complex manipulator configurations. Tunable stiffness is achieved by reversible jamming of granular media, which-by applying a vacuum to enclosed grains-causes the grains to transition between solid-like states and liquid-like ones. Experimental studies were conducted to identify grains with high strength-to-weight performance. A prototype of the manipulator is presented with performance analysis, with emphasis on speed, strength, and articulation. This novel design for a manipulator-and use of jamming for robotic applications in general-could greatly benefit applications such as human-safe robotics and systems in which robots need to exhibit high flexibility to conform to their environments.", "title": "" }, { "docid": "b91291a9b64ef7668633c2a3df82285a", "text": "Recent work has managed to learn crosslingual word embeddings without parallel data by mapping monolingual embeddings to a shared space through adversarial training. However, their evaluation has focused on favorable conditions, using comparable corpora or closely-related languages, and we show that they often fail in more realistic scenarios. 
This work proposes an alternative approach based on a fully unsupervised initialization that explicitly exploits the structural similarity of the embeddings, and a robust self-learning algorithm that iteratively improves this solution. Our method succeeds in all tested scenarios and obtains the best published results in standard datasets, even surpassing previous supervised systems. Our implementation is released as an open source project at https://github. com/artetxem/vecmap.", "title": "" }, { "docid": "50c3a6e263dcfec4faab370afdb17dfd", "text": "Most state-of-the-art methods for representation learning are supervised, which require a large number of labeled data. This paper explores a novel unsupervised approach for learning visual representation. We introduce an image-wise discrimination criterion in addition to a pixel-wise reconstruction criterion to model both individual images and the difference between original images and reconstructed ones during neural network training. These criteria induce networks to focus on not only local features but also global high-level representations, so as to provide a competitive alternative to supervised representation learning methods, especially in the case of limited labeled data. We further introduce a competition mechanism to drive each component to increase its capability to win its adversary. In this way, the identity of representations and the likeness of reconstructed images to original ones are alternately improved. Experimental results on several tasks demonstrate the effectiveness of our approach.", "title": "" }, { "docid": "bb089ffa37487912234ec0bab057605b", "text": "Maps are a key component in image-based camera localization and visual SLAM systems: they are used to establish geometric constraints between images, correct drift in relative pose estimation, and relocalize cameras after lost tracking. The exact definitions of maps, however, are often application-specific and hand-crafted for different scenarios (e.g. 3D landmarks, lines, planes, bags of visual words). We propose to represent maps as a deep neural net called MapNet, which enables learning a data-driven map representation. Unlike prior work on learning maps, MapNet exploits cheap and ubiquitous sensory inputs like visual odometry and GPS in addition to images and fuses them together for camera localization. Geometric constraints expressed by these inputs, which have traditionally been used in bundle adjustment or pose-graph optimization, are formulated as loss terms in MapNet training and also used during inference. In addition to directly improving localization accuracy, this allows us to update the MapNet (i.e., maps) in a self-supervised manner using additional unlabeled video sequences from the scene. We also propose a novel parameterization for camera rotation which is better suited for deep-learning based camera pose regression. Experimental results on both the indoor 7-Scenes dataset and the outdoor Oxford RobotCar dataset show significant performance improvement over prior work. The MapNet project webpage is https://goo.gl/mRB3Au.", "title": "" }, { "docid": "b26a9a78f11227e894af0e58b3b01c98", "text": "Although all the cells in an organism contain the same genetic information, differences in the cell phenotype arise from the expression of lineage-specific genes. During myelopoiesis, external differentiating signals regulate the expression of a set of transcription factors. 
The combined action of these transcription factors subsequently determines the expression of myeloid-specific genes and the generation of monocytes and macrophages. In particular, the transcription factor PU.1 has a critical role in this process. We review the contribution of several transcription factors to the control of macrophage development.", "title": "" }, { "docid": "515e2b726f0e5e7ceb5938fa5d917694", "text": "Text preprocessing and segmentation are critical tasks in search and text mining applications. Due to the huge amount of documents that are exclusively presented in PDF format, most of the Data Mining (DM) and Information Retrieval (IR) systems must extract content from the PDF files. In some occasions this is a difficult task: the result of the extraction process from a PDF file is plain text, and it should be returned in the same order as a human would read the original PDF file. However, current tools for PDF text extraction fail in this objective when working with complex documents with multiple columns. For instance, this is the case of official government bulletins with legal information. In this task, it is mandatory to get correct and ordered text as a result of the application of the PDF extractor. It is very usual that a legal article in a document refers to a previous article and they should be offered in the right sequential order. To overcome these difficulties we have designed a new method for extraction of text in PDFs that simulates the human reading order. We evaluated our method and compared it against other PDF extraction tools and algorithms. Evaluation of our approach shows that it significantly outperforms the results of the existing tools and algorithms.", "title": "" }, { "docid": "b67e6d5ee2451912ea6267cbc5274440", "text": "The paper presents theoretical analyses, simulations and design of a PTAT (proportional to absolute temperature) temperature sensor that is based on the vertical PNP structure and dedicated to CMOS VLSI circuits. Performed considerations take into account specific properties of materials that forms electronic elements. The electrothermal simulations are performed in order to verify the unwanted self-heating effect of the sensor", "title": "" }, { "docid": "3953962740dd06ad2cadbb5d6b7c2cef", "text": "The latest election cycle generated sobering examples of the threat that fake news poses to democracy. Primarily disseminated by hyper-partisan media outlets, fake news proved capable of becoming viral sensations that can dominate social media and influence elections. To address this problem, we begin with stance detection, which is a first step towards identifying fake news. The goal of this project is to identify whether given headline-article pairs: (1) agree, (2) disagree, (3) discuss the same topic, or (4) are not related at all, as described in [1]. Our method feeds the headline-article pairs into a bidirectional LSTM which first analyzes the article and then uses the acquired article representation to analyze the headline. On top of the output of the conditioned bidirectional LSTM, we concatenate global statistical features extracted from the headline-article pairs. We report a 9.7% improvement in the Fake News Challenge evaluation metric and a 22.7% improvement in mean F1 compared to the highest scoring baseline. 
We also present qualitative results that show how our method outperforms state-of-the art algorithms on this challenge.", "title": "" }, { "docid": "efde92d1e86ff0b5f91b006521935621", "text": "Sizing equations for electrical machinery are developed from basic principles. The technique provides new insights into: 1. The effect of stator inner and outer diameters. 2. The amount of copper and steel used. 3. A maximizing function. 4. Equivalent slot dimensions in terms of diameters and flux density distribution. 5. Pole number effects. While the treatment is analytical, the scope is broad and intended to assist in the design of electrical machinery. Examples are given showing how the machine's internal geometry can assume extreme proportions through changes in basic variables.", "title": "" }, { "docid": "b16d8dddf037e60ba9121f85e7d9b45a", "text": "Bike sharing systems, aiming at providing the missing links in public transportation systems, are becoming popular in urban cities. A key to success for a bike sharing systems is the effectiveness of rebalancing operations, that is, the efforts of restoring the number of bikes in each station to its target value by routing vehicles through pick-up and drop-off operations. There are two major issues for this bike rebalancing problem: the determination of station inventory target level and the large scale multiple capacitated vehicle routing optimization with outlier stations. The key challenges include demand prediction accuracy for inventory target level determination, and an effective optimizer for vehicle routing with hundreds of stations. To this end, in this paper, we develop a Meteorology Similarity Weighted K-Nearest-Neighbor (MSWK) regressor to predict the station pick-up demand based on large-scale historic trip records. Based on further analysis on the station network constructed by station-station connections and the trip duration, we propose an inter station bike transition (ISBT) model to predict the station drop-off demand. Then, we provide a mixed integer nonlinear programming (MINLP) formulation of multiple capacitated bike routing problem with the objective of minimizing total travel distance. To solve it, we propose an Adaptive Capacity Constrained K-centers Clustering (AdaCCKC) algorithm to separate outlier stations (the demands of these stations are very large and make the optimization infeasible) and group the rest stations into clusters within which one vehicle is scheduled to redistribute bikes between stations. In this way, the large scale multiple vehicle routing problem is reduced to inner cluster one vehicle routing problem with guaranteed feasible solutions. Finally, the extensive experimental results on the NYC Citi Bike system show the advantages of our approach for bike demand prediction and large-scale bike rebalancing optimization.", "title": "" }, { "docid": "381c02fb1ce523ddbdfe3acdde20abf1", "text": "Domain-specific accelerators (DSAs), which sacrifice programmability for efficiency, are a reaction to the waning benefits of device scaling. This article demonstrates that there are commonalities between DSAs that can be exploited with programmable mechanisms. 
The goals are to create a programmable architecture that can match the benefits of a DSA and to create a platform for future accelerator investigations.", "title": "" },
    { "docid": "3e26fe227e8c270fda4fe0b7d09b2985", "text": "With the recent emergence of mobile platforms capable of executing increasingly complex software and the rising ubiquity of using mobile platforms in sensitive applications such as banking, there is a rising danger associated with malware targeted at mobile devices. The problem of detecting such malware presents unique challenges due to the limited resources available and limited privileges granted to the user, but also presents a unique opportunity in the required metadata attached to each application. In this article, we present a machine learning-based system for the detection of malware on Android devices. Our system extracts a number of features and trains a One-Class Support Vector Machine in an offline (off-device) manner, in order to leverage the higher computing power of a server or cluster of servers.", "title": "" },
    { "docid": "3682143e9cfe7dd139138b3b533c8c25", "text": "In brushless excitation systems, the rotating diodes can experience open- or short-circuits. For a three-phase synchronous generator under no-load, we present a theoretical development of the effects of diode failures on machine output voltage. Thereby, we predict the spectral response expected under each fault condition, and we propose an original algorithm for state monitoring of the rotating diodes. Moreover, given experimental observations of the spectral behavior of stray flux, we propose an alternative technique. Laboratory tests have proven the effectiveness of the proposed methods for detection of faulty diodes, even when the generator has been fully loaded. However, their ability to distinguish between cases of interrupted and short-circuited diodes has been limited to the no-load condition, and certain loads of specific natures.", "title": "" },
    { "docid": "bb0731a3bc69ddfe293fb1feb096f5f2", "text": "To adapt to the rapidly evolving landscape of cyber threats, security professionals are actively exchanging Indicators of Compromise (IOC) (e.g., malware signatures, botnet IPs) through public sources (e.g. blogs, forums, tweets, etc.). Such information, often presented in articles, posts, white papers etc., can be converted into a machine-readable OpenIOC format for automatic analysis and quick deployment to various security mechanisms like an intrusion detection system. With hundreds of thousands of sources in the wild, the IOC data are produced at a high volume and velocity today, which becomes increasingly hard to manage by humans. Efforts to automatically gather such information from unstructured text, however, are impeded by the limitations of today's Natural Language Processing (NLP) techniques, which cannot meet the high standard (in terms of accuracy and coverage) expected from the IOCs that could serve as direct input to a defense system. In this paper, we present iACE, an innovative solution for fully automated IOC extraction. Our approach is based upon the observation that the IOCs in technical articles are often described in a predictable way: being connected to a set of context terms (e.g., \"download\") through stable grammatical relations. 
Leveraging this observation, iACE is designed to automatically locate a putative IOC token (e.g., a zip file) and its context (e.g., \"malware\", \"download\") within the sentences in a technical article, and further analyze their relations through a novel application of graph mining techniques. Once the grammatical connection between the tokens is found to be in line with the way that the IOC is commonly presented, these tokens are extracted to generate an OpenIOC item that describes not only the indicator (e.g., a malicious zip file) but also its context (e.g., download from an external source). Running on 71,000 articles collected from 45 leading technical blogs, this new approach demonstrates a remarkable performance: it generated 900K OpenIOC items with a precision of 95% and a coverage over 90%, which is way beyond what the state-of-the-art NLP technique and industry IOC tool can achieve, at a speed of thousands of articles per hour. Further, by correlating the IOCs mined from the articles published over a 13-year span, our study sheds new light on the links across hundreds of seemingly unrelated attack instances, particularly their shared infrastructure resources, as well as the impacts of such open-source threat intelligence on security protection and evolution of attack strategies.", "title": "" }, { "docid": "e6ac100eb695e089e22defcba01fae41", "text": "Recent advances in video super-resolution have shown that convolutional neural networks combined with motion compensation are able to merge information from multiple low-resolution (LR) frames to generate high-quality images. Current state-of-the-art methods process a batch of LR frames to generate a single high-resolution (HR) frame and run this scheme in a sliding window fashion over the entire video, effectively treating the problem as a large number of separate multi-frame super-resolution tasks. This approach has two main weaknesses: 1) Each input frame is processed and warped multiple times, increasing the computational cost, and 2) each output frame is estimated independently conditioned on the input frames, limiting the system's ability to produce temporally consistent results. In this work, we propose an end-to-end trainable frame-recurrent video super-resolution framework that uses the previously inferred HR estimate to super-resolve the subsequent frame. This naturally encourages temporally consistent results and reduces the computational cost by warping only one image in each step. Furthermore, due to its recurrent nature, the proposed method has the ability to assimilate a large number of previous frames without increased computational demands. Extensive evaluations and comparisons with previous methods validate the strengths of our approach and demonstrate that the proposed framework is able to significantly outperform the current state of the art.", "title": "" } ]
scidocsrr
0d3a52c823dbc59c12b769b69a22700b
Top-down control of visual attention
[ { "docid": "49717f07b8b4a3da892c1bb899f7a464", "text": "Single cells were recorded in the visual cortex of monkeys trained to attend to stimuli at one location in the visual field and ignore stimuli at another. When both locations were within the receptive field of a cell in prestriate area V4 or the inferior temporal cortex, the response to the unattended stimulus was dramatically reduced. Cells in the striate cortex were unaffected by attention. The filtering of irrelevant information from the receptive fields of extrastriate neurons may underlie the ability to identify and remember the properties of a particular object out of the many that may be represented on the retina.", "title": "" } ]
[ { "docid": "3efaaabf9a93460bace2e70abc71801d", "text": "BACKGROUND\nNumerous studies report an association between social support and protection from depression, but no systematic review or meta-analysis exists on this topic.\n\n\nAIMS\nTo review systematically the characteristics of social support (types and source) associated with protection from depression across life periods (childhood and adolescence; adulthood; older age) and by study design (cross-sectional v cohort studies).\n\n\nMETHOD\nA systematic literature search conducted in February 2015 yielded 100 eligible studies. Study quality was assessed using a critical appraisal checklist, followed by meta-analyses.\n\n\nRESULTS\nSources of support varied across life periods, with parental support being most important among children and adolescents, whereas adults and older adults relied more on spouses, followed by family and then friends. Significant heterogeneity in social support measurement was noted. Effects were weaker in both magnitude and significance in cohort studies.\n\n\nCONCLUSIONS\nKnowledge gaps remain due to social support measurement heterogeneity and to evidence of reverse causality bias.", "title": "" }, { "docid": "f9de4041343fb6c570e5cbce4cb1ff66", "text": "Do the languages that we speak affect how we experience the world? This question was taken up in a linguistic survey and two non-linguistic psychophysical experiments conducted in native speakers of English, Indonesian, Greek, and Spanish. All four of these languages use spatial metaphors to talk about time, but the particular metaphoric mappings between time and space vary across languages. A linguistic corpus study revealed that English and Indonesian tend to map duration onto linear distance (e.g., a long time), whereas Greek and Spanish preferentially map duration onto quantity (e.g., much time). Two psychophysical time estimation experiments were conducted to determine whether this cross-linguistic difference has implications for speakers’ temporal thinking. Performance on the psychophysical tasks reflected the relative frequencies of the ‘time as distance’ and ‘time as quantity’ metaphors in English, Indonesian, Greek, and Spanish. This was true despite the fact that the tasks used entirely nonlinguistic stimuli and responses. Results suggest that: (1.) The spatial metaphors in our native language may profoundly influence the way we mentally represent time. (2.) Language can shape even primitive, low-level mental processes such as estimating brief durations – an ability we share with babies and non-human animals.", "title": "" }, { "docid": "ca117e9bfd90df7ac652628b342a4b62", "text": "In this article, we introduce an explicit count-based strategy to build word space models with syntactic contexts (dependencies). A filtering method is defined to reduce explicit word-context vectors. This traditional strategy is compared with a neural embedding (predictive) model also based on syntactic dependencies. The comparison was performed using the same parsed corpus for both models. Besides, the dependency-based methods are also compared with bag-of-words strategies, both count-based and predictive ones. The results show that our traditional countbased model with syntactic dependencies outperforms other strategies, including dependency-based embeddings, but just for the tasks focused on discovering similarity between words with the same function (i.e. 
near-synonyms).", "title": "" }, { "docid": "a7e0ff324e4bf4884f0a6e35adf588a3", "text": "Named Entity Recognition (NER) is a subtask of information extraction and aims to identify atomic entities in text that fall into predefined categories such as person, location, organization, etc. Recent efforts in NER try to extract entities and link them to linked data entities. Linked data is a term used for data resources that are created using semantic web standards such as DBpedia. There are a number of online tools that try to identify named entities in text and link them to linked data resources. Although one can use these tools via their APIs and web interfaces, they use different data resources and different techniques to identify named entities and not all of them reveal this information. One of the major tasks in NER is disambiguation that is identifying the right entity among a number of entities with the same names; for example \"apple\" standing for both \"Apple, Inc.\" the company and the fruit. We developed a similar tool called NERSO, short for Named Entity Recognition Using Semantic Open Data, to automatically extract named entities, disambiguating and linking them to DBpedia entities. Our disambiguation method is based on constructing a graph of linked data entities and scoring them using a graph-based centrality algorithm. We evaluate our system by comparing its performance with two publicly available NER tools. The results show that NERSO performs better.", "title": "" }, { "docid": "e7ecd827a48414f1f533fb30de203a6a", "text": "Followership has been an understudied topic in the academic literature and an underappreciated topic among practitioners. Although it has always been important, the study of followership has become even more crucial with the advent of the information age and dramatic changes in the workplace. This paper provides a fresh look at followership by providing a synthesis of the literature and presents a new model for matching followership styles to leadership styles. The model’s practical value lies in its usefulness for describing how leaders can best work with followers, and how followers can best work with leaders.", "title": "" }, { "docid": "8de0a71dd4d0e8b6874e80ffd5e45dd4", "text": "Predictive state representations (PSRs) have recently been proposed as an alternative to partially observable Markov decision processes (POMDPs) for representing the state of a dynamical system (Littman et al., 2001). We present a learning algorithm that learns a PSR from observational data. Our algorithm produces a variant of PSRs called transformed predictive state representations (TPSRs). We provide an efficient principal-components-based algorithm for learning a TPSR, and show that TPSRs can perform well in comparison to Hidden Markov Models learned with Baum-Welch in a real world robot tracking task for low dimensional representations and long prediction horizons.", "title": "" }, { "docid": "61768befa972c8e9f46524a59c44fabb", "text": "This paper presents a newly defined set-based concurrent engineering process, which the authors believe addresses some of the key challenges faced by engineering enterprises in the 21 century. The main principles of Set-Based Concurrent Engineering (SBCE) have been identified via an extensive literature review. Based on these principles the SBCE baseline model was developed. 
The baseline model defines the stages and activities which represent the product development process to be employed in the LeanPPD (lean product and process development) project. The LeanPPD project is addressing the needs of European manufacturing companies for a new model that extends beyond lean manufacturing, and incorporates lean thinking in the product design development process.", "title": "" }, { "docid": "a0e0d3224cd73539e01f260d564109a7", "text": "We are living in a world where there is an increasing need for evidence in organizations. Good digital evidence is becoming a business enabler. Very few organizations have the structures (management and infrastructure) in place to enable them to conduct cost effective, low-impact and fficient digital investigations [1]. Digital Forensics (DF) is a vehicle that organizations use to provide good and trustworthy evidence and processes. The current DF models concentrate on reactive investigations, with limited reference to DF readiness and live investigations. However, organizations use DF for other purposes for example compliance testing. The paper proposes that DF consists of three components: Pro-active (ProDF), Active (ActDF) and Re-active (ReDF). ProDF concentrates on DF readiness and the proactive responsible use of DF to demonstrate good governance and enhance governance structures. ActDF considers the gathering of live evidence during an ongoing attack with a limited live investigation element whilst ReDF deals with the traditional DF investigation. The paper discusses each component and the relationship between the components.", "title": "" }, { "docid": "900448785a5aa402165406daff206c93", "text": "Electrospun membranes are gaining interest for use in membrane distillation (MD) due to their high porosity and interconnected pore structure; however, they are still susceptible to wetting during MD operation because of their relatively low liquid entry pressure (LEP). In this study, post-treatment had been applied to improve the LEP, as well as its permeation and salt rejection efficiency. The post-treatment included two continuous procedures: heat-pressing and annealing. In this study, annealing was applied on the membranes that had been heat-pressed. It was found that annealing improved the MD performance as the average flux reached 35 L/m2·h or LMH (>10% improvement of the ones without annealing) while still maintaining 99.99% salt rejection. Further tests on LEP, contact angle, and pore size distribution explain the improvement due to annealing well. Fourier transform infrared spectroscopy and X-ray diffraction analyses of the membranes showed that there was an increase in the crystallinity of the polyvinylidene fluoride-co-hexafluoropropylene (PVDF-HFP) membrane; also, peaks indicating the α phase of polyvinylidene fluoride (PVDF) became noticeable after annealing, indicating some β and amorphous states of polymer were converted into the α phase. The changes were favorable for membrane distillation as the non-polar α phase of PVDF reduces the dipolar attraction force between the membrane and water molecules, and the increase in crystallinity would result in higher thermal stability. 
The present results indicate the positive effect of the heat-press followed by an annealing post-treatment on the membrane characteristics and MD performance.", "title": "" }, { "docid": "d66799a5d65a6f23527a33b124812ea6", "text": "Time series is an important class of temporal data objects and it can be easily obtained from scientific and financial applications, and anomaly detection for time series is becoming a hot research topic recently. This survey tries to provide a structured and comprehensive overview of the research on anomaly detection. In this paper, we have discussed the definition of anomaly and grouped existing techniques into different categories based on the underlying approach adopted by each technique. And for each category, we identify the advantages and disadvantages of the techniques in that category. Then, we provide a briefly discussion on the representative methods recently. Furthermore, we also point out some key issues about multivariate time series anomaly. Finally, some suggestions about anomaly detection are discussed and future research trends are also summarized, which is hopefully beneficial to the researchers of time series and other relative domains.", "title": "" }, { "docid": "dd9b6b67f19622bfffbad427b93a1829", "text": "Low-resolution face recognition (LRFR) has received increasing attention over the past few years. Its applications lie widely in the real-world environment when highresolution or high-quality images are hard to capture. One of the biggest demands for LRFR technologies is video surveillance. As the the number of surveillance cameras in the city increases, the videos that captured will need to be processed automatically. However, those videos or images are usually captured with large standoffs, arbitrary illumination condition, and diverse angles of view. Faces in these images are generally small in size. Several studies addressed this problem employed techniques like super resolution, deblurring, or learning a relationship between different resolution domains. In this paper, we provide a comprehensive review of approaches to low-resolution face recognition in the past five years. First, a general problem definition is given. Later, systematically analysis of the works on this topic is presented by catogory. In addition to describing the methods, we also focus on datasets and experiment settings. We further address the related works on unconstrained lowresolution face recognition and compare them with the result that use synthetic low-resolution data. Finally, we summarized the general limitations and speculate a priorities for the future effort.", "title": "" }, { "docid": "b9652cf6647d9c7c1f91a345021731db", "text": "Context: The processes of estimating, planning and managing are crucial for software development projects, since the results must be related to several business strategies. The broad expansion of the Internet and the global and interconnected economy make Web development projects be often characterized by expressions like delivering as soon as possible, reducing time to market and adapting to undefined requirements. In this kind of environment, traditional methodologies based on predictive techniques sometimes do not offer very satisfactory results. The rise of Agile methodologies and practices has provided some useful tools that, combined with Web Engineering techniques, can help to establish a framework to estimate, manage and plan Web development projects. 
Objective: This paper presents a proposal for estimating, planning and managing Web projects, by combining some existing Agile techniques with Web Engineering principles, presenting them as an unified framework which uses the business value to guide the delivery of features. Method: The proposal is analyzed by means of a case study, including a real-life project, in order to obtain relevant conclusions. Results: The results achieved after using the framework in a development project are presented, including interesting results on project planning and estimation, as well as on team productivity throughout the project. Conclusion: It is concluded that the framework can be useful in order to better manage Web-based projects, through a continuous value-based estimation and management process.", "title": "" }, { "docid": "56934c400280e56dffbb27e6d06c21b9", "text": "Training large-scale question answering systems is complicated because training sources usually cover a small portion of the range of possible questions. This paper studies the impact of multitask and transfer learning for simple question answering ; a setting for which the reasoning required to answer is quite easy, as long as one can retrieve the correct evidence given a question, which can be difficult in large-scale conditions. To this end, we introduce a new dataset of 100k questions that we use in conjunction with existing benchmarks. We conduct our study within the framework of Memory Networks (Weston et al., 2015) because this perspective allows us to eventually scale up to more complex reasoning, and show that Memory Networks can be successfully trained to achieve excellent performance .", "title": "" }, { "docid": "08d5c83c7effa92659ea705ad51317e2", "text": "This article examines cognitive, affective, and behavioral measures of motivation and reviews their use throughout the discipline of experimental social psychology. We distinguish between two dimensions of motivation (outcome-focused motivation and process-focused motivation). We discuss circumstances under which measures may help distinguish between different dimensions of motivation, as well as circumstances under which measures may capture different dimensions of motivation in similar ways. Furthermore, we examine situations in which various measures may capture fluctuations in nonmotivational factors, such as learning or physiological depletion. This analysis seeks to advance research in experimental social psychology by highlighting the need for caution when selecting measures of motivation and when interpreting fluctuations captured by these measures. Motivation – the psychological force that enables action – has long been the object of scientific inquiry (Carver & Scheier, 1998; Festinger, 1957; Fishbein & Ajzen, 1974; Hull, 1932; Kruglanski, 1996; Lewin, 1935; Miller, Galanter, & Pribram, 1960; Mischel, Shoda, & Rodriguez, 1989; Zeigarnik, 1927). Because motivation is a psychological construct that cannot be observed or recorded directly, studying it raises an important question: how to measure motivation? Researchers measure motivation in terms of observable cognitive (e.g., recall, perception), affective (e.g., subjective experience), behavioral (e.g., performance), and physiological (e.g., brain activation) responses and using self-reports. Furthermore, motivation is measured in relative terms: compared to previous or subsequent levels of motivation or to motivation in a different goal state (e.g., salient versus non-salient goal). 
For example, following exposure to a health-goal prime (e.g., gymmembership card), an individual might be more motivated to exercise now than she was 20minutes ago (before exposure to the prime), or than another person who was not exposed to the same prime. An important aspect of determining how to measure motivation is understanding what type of motivation one is attempting to capture. Thus, in exploring the measures of motivation, the present article takes into account different dimensions of motivation. In particular, we highlight the distinction between the outcome-focused motivation to complete a goal (Brehm & Self, 1989; Locke & Latham, 1990; Powers, 1973) and the process-focused motivation to attend to elements related to the process of goal pursuit – with less emphasis on the outcome. Process-related elements may include using “proper” means during goal pursuit (means-focused motivation; Higgins, Idson, Freitas, Spiegel, & Molden, 2003; Touré-Tillery & Fishbach, 2012) and enjoying the experience of goal pursuit (intrinsic motivation; Deci & Ryan, 1985; Fishbach & Choi, 2012; Sansone & Harackiewicz, 1996; Shah & Kruglanski, 2000). In some cases, particular measures of motivation may help distinguish between these different dimensions of motivation, whereas other measures may not. For example, the measured speed at which a person works on a task can have several interpretations. © 2014 John Wiley & Sons Ltd How to Measure Motivation 329 Working slowly could mean (a) that the individual’s motivation to complete the task is low (outcome-focused motivation); or (b) that her motivation to engage in the task is high such that she is “savoring” the task (intrinsic motivation); or (c) that her motivation to “do it right” and use proper means is high such that she is applying herself (means-focused motivation); or even (d) that she is tired (diminished physiological resources). In this case, additional measures (e.g., accuracy in performance) and manipulations (e.g., task difficulty) may help tease apart these various potential interpretations. Thus, experimental researchers must exercise caution when selecting measures of motivation and when interpreting the fluctuations captured by these measures. This review provides a guide for how to measure fluctuations in motivation in experimental settings. One approach is to ask people to rate their motivation (i.e., “how motivated are you?”). However, such an approach is limited to people’s conscious understanding of their own psychological states and can further be biased by social desirability concerns; hence, research in experimental social psychology developed a variety of cognitive and behavioral paradigms to assess motivation without relying on self-reports. We focus on these objective measures of situational fluctuations in motivation. We note that other fields of psychological research commonly use physiological measures (e.g., brain activation, skin conductance), self-report measures (i.e., motivation scales), or measure motivation as a stable trait. These physiological, self-report, and trait measures of motivation are beyond the scope our review. In the sections that follow, we start with a discussion of measures researchers commonly use to capture motivation. We review cognitive measures such as memory accessibility, evaluations, and perceptions of goal-relevant objects, as well as affective measures such as subjective experience. 
Next, we examine the use of behavioral measures such as speed, performance, and choice to capture fluctuations in motivational strength. In the third section, we discuss the outcomeand process-focused dimensions of motivation and examine specific measures of process-focused motivation, including measures of intrinsic motivation and means-focused motivation. We then discuss how different measures may help distinguish between the outcomeand process-focused dimensions. In the final section, we explore circumstances under which measures may capture fluctuations in learning and physiological resources, rather than changes in motivation. We conclude with some implications of this analysis for the measurement and study of motivation. Cognitive and Affective Measures of Motivation Experimental social psychologists conceptualize a goal as the cognitive representation of a desired end state (Fishbach & Ferguson, 2007; Kruglanski, 1996). According to this view, goals are organized in associative memory networks connecting each goal to corresponding constructs. Goal-relevant constructs could be activities or objects that contribute to goal attainment (i.e., means; Kruglanski et al., 2002), as well as activities or objects that hinder goal attainment (i.e., temptations; Fishbach, Friedman, & Kruglanski, 2003). For example, the goal to eat healthily may be associated with constructs such as apple, doctor (facilitating means), or French fries (hindering temptation). Cognitive and affective measures of motivation include the activation, evaluation, and perception of these goal-related constructs and the subjective experience they evoke. Goal activation: Memory, accessibility, and inhibition of goal-related constructs Constructs related to a goal can activate or prime the pursuit of that goal. For example, the presence of one’s study partner or the word “exam” in a game of scrabble can activate a student’s academic goal and hence increase her motivation to study. Once a goal is active, Social and Personality Psychology Compass 8/7 (2014): 328–341, 10.1111/spc3.12110 © 2014 John Wiley & Sons Ltd 330 How to Measure Motivation the motivational system prepares the individual for action by activating goal-relevant information (Bargh & Barndollar, 1996; Gollwitzer, 1996; Kruglanski, 1996). Thus, motivation manifests itself in terms of how easily goal-related constructs are brought tomind (i.e., accessibility; Aarts, Dijksterhuis, & De Vries, 2001; Higgins & King, 1981; Wyer & Srull, 1986). The activation and subsequent pursuit of a goal can be conscious, such that one is aware of the cues that led to goal-related judgments and behaviors. This activation can also be non-conscious, such that a one is unaware of the goal prime or that one is even exhibiting goal-related judgments and behaviors. Whether goals are conscious or non-conscious, a fundamental characteristic of goal-driven processes is the persistence of the accessibility of goal-related constructs for as long as the goal is active or until an individual disengages from the goal (Bargh, Gollwitzer, Lee-Chai, Barndollar, & Trotschel, 2001; Goschke & Kuhl, 1993). Upon goal completion, motivation diminishes and accessibility is inhibited (Liberman & Förster, 2000; Marsh, Hicks, & Bink, 1998). This active reduction in accessibility allows individuals to direct their cognitive resources to other tasks at hand without being distracted by thoughts of a completed goal. 
Thus, motivation can be measured by the degree to which goal-related concepts are accessible inmemory. Specifically, the greater the motivation to pursue/achieve a goal, the more likely individuals are to remember, notice, or recognize concepts, objects, or persons related to that goal. For example, in a classic study, Zeigarnik (1927) instructed participants to perform 20 short tasks, ten of which they did not get a chance to finish because the experimenter interrupted them. At the end of the study, Zeigarnik inferred the strength of motivation by asking participants to recall as many of the tasks as possible. Consistent with the notion that unfulfilled goals are associated with heightened motivational states, whereas fulfilled goals inhibit motivation, the results show that participants recalled more uncompleted tasks (i.e., unfulfilled goals) than completed tasks (i.e., fulfilled goals; the Zeigarnik effect). More recently, Förster, Liberman, and Higgins (2005) replicated these findings; inferring motivation from performance on a lexical decision task. Their study assessed the speed of recognizing – i.e., identifying as words versus non-words –words related to a focal goal prior to (versus after) completing that goal. A related measure of motivation is the inhibition of conflicting constructs. In ", "title": "" }, { "docid": "2d2d4d439021ee8665ddc3d97d879214", "text": "We present the use of an oblique angle physical vapor deposition OAPVDd technique with substrate rotation to obtain conformal thin films with enhanced step coverage on patterned surfaces. We report the results of rutheniumsRud films sputter deposited on trench structures with aspect ratio ,2 and show that OAPVD with an incidence angle less that 30° with respect to the substrate surface normal one can create a more conformal coating without overhangs and voids compared to that obtained by normal incidence deposition. A simple geometrical shadowing effect is presented to explain the results. The technique has the potential of extending the present PVD technique to future chip interconnect fabrication. ©2005 American Institute of Physics . fDOI: 10.1063/1.1937476 g", "title": "" }, { "docid": "30da5996ad883e41df979fe3640e35ed", "text": "As an initial assessment, over 480,000 labeled virtual images of normal highway driving were readily generated in Grand Theft Auto V's virtual environment. Using these images, a CNN was trained to detect following distance to cars/objects ahead, lane markings, and driving angle (angular heading relative to lane centerline): all variables necessary for basic autonomous driving. Encouraging results were obtained when tested on over 50,000 labeled virtual images from substantially different GTA-V driving environments. This initial assessment begins to define both the range and scope of the labeled images needed for training as well as the range and scope of labeled images needed for testing the definition of boundaries and limitations of trained networks. It is the efficacy and flexibility of a\"GTA-V\"-like virtual environment that is expected to provide an efficient well-defined foundation for the training and testing of Convolutional Neural Networks for safe driving. Additionally, described is the Princeton Virtual Environment (PVE) for the training, testing and enhancement of safe driving AI, which is being developed using the video-game engine Unity. 
PVE is being developed to recreate rare but critical corner cases that can be used in re-training and enhancing machine learning models and understanding the limitations of current self driving models. The Florida Tesla crash is being used as an initial reference.", "title": "" }, { "docid": "5d879bdbf7667fa8ad19c3bb86219880", "text": "The cellular concept applied in mobile communication systems enables significant increase of overall system capacity, but requires careful radio network planning and dimensioning. Wireless and mobile network operators typically rely on various commercial radio network planning and dimensioning tools, which incorporate different radio signal propagation models. In this paper we present the use of open-source Geographical Resources Analysis Support System (GRASS) for the calculation of radio signal coverage. We developed GRASS modules for radio coverage prediction for a number of different radio channel models, with antenna radiation patterns given in the standard MSI format. The results are stored in a data base (e.g. MySQL, PostgreSQL) for further processing and in a simplified form as a bit-map file for displaying in GRASS. The accuracy of prediction was confirmed by comparison with results obtained by a dedicated professional prediction tool as well as with measurement results. Key-Words: network planning tool, open-source, GRASS GIS, path loss, raster, clutter, radio signal coverage", "title": "" }, { "docid": "d40a55317d8cdebfcd567ea11ad0960f", "text": "This study examined the effects of self-presentation goals on the amount and type of verbal deception used by participants in same-gender and mixed-gender dyads. Participants were asked to engage in a conversation that was secretly videotaped. Self-presentational goal was manipulated, where one member of the dyad (the self-presenter) was told to either appear (a) likable, (b) competent, or (c) was told to simply get to know his or her partner (control condition). After the conversation, self-presenters were asked to review a video recording of the interaction and identify the instances in which they had deceived the other person. Overall, participants told more lies when they had a goal to appear likable or competent compared to participants in the control condition, and the content of the lies varied according to self-presentation goal. In addition, lies told by men and women differed in content, although not in quantity.", "title": "" }, { "docid": "57a48dee2cc149b70a172ac5785afc6c", "text": "We present the first pipeline for real-time volumetric surface reconstruction and dense 6DoF camera tracking running purely on standard, off-the-shelf mobile phones. Using only the embedded RGB camera, our system allows users to scan objects of varying shape, size, and appearance in seconds, with real-time feedback during the capture process. Unlike existing state of the art methods, which produce only point-based 3D models on the phone, or require cloud-based processing, our hybrid GPU/CPU pipeline is unique in that it creates a connected 3D surface model directly on the device at 25Hz. In each frame, we perform dense 6DoF tracking, which continuously registers the RGB input to the incrementally built 3D model, minimizing a noise aware photoconsistency error metric. This is followed by efficient key-frame selection, and dense per-frame stereo matching. These depth maps are fused volumetrically using a method akin to KinectFusion, producing compelling surface models. 
For each frame, the implicit surface is extracted for live user feedback and pose estimation. We demonstrate scans of a variety of objects, and compare to a Kinect-based baseline, showing on average ~ 1.5cm error. We qualitatively compare to a state of the art point-based mobile phone method, demonstrating an order of magnitude faster scanning times, and fully connected surface models.", "title": "" } ]
scidocsrr
67898fc401c5af903c0932453dd10545
Code Hot Spot: A tool for extraction and analysis of code change history
[ { "docid": "596fa75533d4d31a49efbeb24f5fa7f0", "text": "High cohesion is a desirable property of software as it positively impacts understanding, reuse, and maintenance. Currently proposed measures for cohesion in Object-Oriented (OO) software reflect particular interpretations of cohesion and capture different aspects of it. Existing approaches are largely based on using the structural information from the source code, such as attribute references, in methods to measure cohesion. This paper proposes a new measure for the cohesion of classes in OO software systems based on the analysis of the unstructured information embedded in the source code, such as comments and identifiers. The measure, named the Conceptual Cohesion of Classes (C3), is inspired by the mechanisms used to measure textual coherence in cognitive psychology and computational linguistics. This paper presents the principles and the technology that stand behind the C3 measure. A large case study on three open source software systems is presented which compares the new measure with an extensive set of existing metrics and uses them to construct models that predict software faults. The case study shows that the novel measure captures different aspects of class cohesion compared to any of the existing cohesion measures. In addition, combining C3 with existing structural cohesion metrics proves to be a better predictor of faulty classes when compared to different combinations of structural cohesion metrics.", "title": "" }, { "docid": "b776bf3acb830552eb1ecf353b08edee", "text": "The size and high rate of change of source code comprising a software system make it difficult for software developers to keep up with who on the team knows about particular parts of the code. Existing approaches to this problem are based solely on authorship of code. In this paper, we present data from two professional software development teams to show that both authorship and interaction information about how a developer interacts with the code are important in characterizing a developer's knowledge of code. We introduce the degree-of-knowledge model that computes automatically a real value for each source code element based on both authorship and interaction information. We show that the degree-of-knowledge model can provide better results than an existing expertise finding approach and also report on case studies of the use of the model to support knowledge transfer and to identify changes of interest.", "title": "" } ]
[ { "docid": "2e088ce4f7e5b3633fa904eab7563875", "text": "Large numbers of websites have started to markup their content using standards such as Microdata, Microformats, and RDFa. The marked-up content elements comprise descriptions of people, organizations, places, events, products, ratings, and reviews. This development has accelerated in last years as major search engines such as Google, Bing and Yahoo! use the markup to improve their search results. Embedding semantic markup facilitates identifying content elements on webpages. However, the markup is mostly not as fine-grained as desirable for applications that aim to integrate data from large numbers of websites. This paper discusses the challenges that arise in the task of integrating descriptions of electronic products from several thousand e-shops that offer Microdata markup. We present a solution for each step of the data integration process including Microdata extraction, product classification, product feature extraction, identity resolution, and data fusion. We evaluate our processing pipeline using 1.9 million product offers from 9240 e-shops which we extracted from the Common Crawl 2012, a large public Web corpus.", "title": "" }, { "docid": "0007c9ab00e628848a08565daaf4063e", "text": "We present an active detection model for localizing objects in scenes. The model is class-specific and allows an agent to focus attention on candidate regions for identifying the correct location of a target object. This agent learns to deform a bounding box using simple transformation actions, with the goal of determining the most specific location of target objects following top-down reasoning. The proposed localization agent is trained using deep reinforcement learning, and evaluated on the Pascal VOC 2007 dataset. We show that agents guided by the proposed model are able to localize a single instance of an object after analyzing only between 11 and 25 regions in an image, and obtain the best detection results among systems that do not use object proposals for object localization.", "title": "" }, { "docid": "fae925bdd47b835035d4f8f0b5b3139d", "text": "By Ravindra K. Ahuja, Thomas L. Magnanti, James B. Orlin : Network Flows: Theory, Algorithms, and Applications bringing together the classic and the contemporary aspects of the field this comprehensive introduction to network flows provides an integrative view of theory network flows pearson new international edition theory algorithms and applications on amazon free shipping on qualifying offers Network Flows: Theory, Algorithms, and Applications:", "title": "" }, { "docid": "7c0b7d55abdd6cce85730dbf1cd02109", "text": "Suppose fx, h , ■ • ■ , fk are polynomials in one variable with all coefficients integral and leading coefficients positive, their degrees being h\\ , h2, •• -, A* respectively. Suppose each of these polynomials is irreducible over the field of rational numbers and no two of them differ by a constant factor. Let Q(fx ,f2, • • • ,fk ; N) denote the number of positive integers n between 1 and N inclusive such that /i(n), f2(n), • ■ ■ , fk(n) are all primes. (We ignore the finitely many values of n for which some /,(n) is negative.) Then heuristically we would expect to have for N large", "title": "" }, { "docid": "f0ec66a9054c086e4141cb95995f5f68", "text": "We present a simple hierarchical Bayesian approach to the modeling collections of texts and other large-scale data collections. 
For text collections, we posit that a document is generated by choosing a random set of multinomial probabilities for a set of possible “topics,” and then repeatedly generating words by sampling from the topic mixture. This model is intractable for exact probabilistic inference, but approximate posterior probabilities and marginal likelihoods can be obtained via fast variational methods. We also present extensions to coupled models for joint text/image data and multiresolution models for topic hierarchies.", "title": "" }, { "docid": "9ce1401e072fc09749d12f9132aa6b1e", "text": "In many applications based on the use of unmanned aerial vehicles (UAVs), it is possible to establish a cluster of UAVs in which each UAV knows the other vehicle's position. Assuming that the common channel condition between any two nodes of UAVs is line-of-sight (LOS), the time and energy consumption for data transmission on each path that connecting two nodes may be estimated by a node itself. In this paper, we use a modified Bellman-Ford algorithm to find the best selection of relay nodes in order to minimize the time and energy consumption for data transmission between any UAV node in the cluster and the UAV acting as the cluster head. This algorithm is applied with a proposed cooperative MAC protocol that is compatible with the IEEE 802.11 standard. The evaluations under data saturation conditions illustrate noticeable benefits in successful packet delivery ratio, average delay, and in particular the cost of time and energy.", "title": "" }, { "docid": "9b16eaa154370895b446cc4e66c9a8a9", "text": "The 15 kV SiC N-IGBT is the state-of-the-art high voltage power semiconductor device developed by Cree. The SiC IGBT is exposed to a peak stress of 10-11 kV in power converter systems, with punch-through turn-on dv/dt over 100 kV/μs and turn-off dv/dt about 35 kV/μs. Such high dv/dt requires ultralow coupling capacitance in the dc-dc isolation stage of the gate driver for maintaining fidelity of the signals on the control-supply ground side. Accelerated aging of the insulation in the isolation stage is another serious concern. In this paper, a simple transformer based isolation with a toroid core is investigated for the above requirements of the 15 kV IGBT. The gate driver prototype has been developed with over 100 kV dc insulation capability, and its inter-winding coupling capacitance has been found to be 3.4 pF and 13 pF at 50 MHz and 100 MHz respectively. The performance of the gate driver prototype has been evaluated up to the above mentioned specification using double-pulse tests on high-side IGBT in a half-bridge configuration. The continuous testing at 5 kHz has been performed till 8 kV, and turn-on dv/dt of 85 kV/μs on a buck-boost converter. The corresponding experimental results are presented. Also, the test methodology of evaluating the gate driver at such high voltage, without a high voltage power supply is discussed. Finally, experimental results validating fidelity of the signals on the control-ground side are provided to show the influence of increased inter-winding coupling capacitance on the performance of the gate driver.", "title": "" }, { "docid": "2eba092d19cc8fb35994e045f826e950", "text": "Deep neural networks have proven to be particularly effective in visual and audio recognition tasks. Existing models tend to be computationally expensive and memory intensive, however, and so methods for hardware-oriented approximation have become a hot topic. 
Research has shown that custom hardware-based neural network accelerators can surpass their general-purpose processor equivalents in terms of both throughput and energy efficiency. Application-tailored accelerators, when co-designed with approximation-based network training methods, transform large, dense and computationally expensive networks into small, sparse and hardware-efficient alternatives, increasing the feasibility of network deployment. In this article, we provide a comprehensive evaluation of approximation methods for high-performance network inference along with in-depth discussion of their effectiveness for custom hardware implementation. We also include proposals for future research based on a thorough analysis of current trends. This article represents the first survey providing detailed comparisons of custom hardware accelerators featuring approximation for both convolutional and recurrent neural networks, through which we hope to inspire exciting new developments in the field.", "title": "" }, { "docid": "abdd688f821a450ebe0eb70d720989c2", "text": "In a document retrieval, or other pattern matching environment where stored entities (documents) are compared with each other or with incoming patterns (search requests), it appears that the best indexing (property) space is one where each entity lies as far away from the others as possible; in these circumstances the value of an indexing system may be expressible as a function of the density of the object space; in particular, retrieval performance may correlate inversely with space density. An approach based on space density computations is used to choose an optimum indexing vocabulary for a collection of documents. Typical evaluation results are shown, demonstrating the usefulness of the model.", "title": "" }, { "docid": "19b602b49f0fcd51f5ec7f240fe26d60", "text": "Wireless communication by leveraging the use of low-altitude unmanned aerial vehicles (UAVs) has received significant interests recently due to its low-cost and flexibility in providing wireless connectivity in areas without infrastructure coverage. This paper studies a UAV-enabled mobile relaying system, where a high-mobility UAV is deployed to assist in the information transmission from a ground source to a ground destination with their direct link blocked. By assuming that the UAV adopts the energy-efficient circular trajectory and employs time-division duplexing (TDD) based decode-and-forward (DF) relaying, we maximize the spectrum efficiency (SE) in bits/second/Hz as well as energy efficiency (EE) in bits/Joule of the considered system by jointly optimizing the time allocations for the UAV's relaying together with its flying speed and trajectory. It is revealed that for UAV-enabled mobile relaying with the UAV propulsion energy consumption taken into account, there exists a trade-off between the maximum achievable SE and EE by exploiting the new degree of freedom of UAV trajectory design.", "title": "" }, { "docid": "36f8d1e7cd7a6e2a68c3dd4336e91da8", "text": "Although the accuracy of super-resolution (SR) methods based on convolutional neural networks (CNN) soars high, the complexity and computation also explode with the increased depth and width of the network. Thus, we propose the convolutional anchored regression network (CARN) for fast and accurate single image super-resolution (SISR). Inspired by locally linear regression methods (A+ and ARN), the new architecture consists of regression blocks that map input features from one feature space to another. 
Different from A+ and ARN, CARN is no longer relying on or limited by hand-crafted features. Instead, it is an end-to-end design where all the operations are converted to convolutions so that the key concepts, i.e., features, anchors, and regressors, are learned jointly. The experiments show that CARN achieves the best speed and accuracy trade-off among the SR methods. The code is available at https://github.com/ofsoundof/CARN.", "title": "" }, { "docid": "ef26995e3979f479f4c3628283816d5d", "text": "This article addresses the position taken by Clark (1983) that media do not influence learning under any conditions. The article reframes the questions raised by Clark to explore the conditions under which media will influence learning. Specifically, it posits the need to consider the capabilities of media, and the methods that employ them, as they interact with the cognitive and social processes by which knowledge is constructed. This approach is examined within the context of two major media-based projects, one which uses computers and the other, video. The article discusses the implications of this approach for media theory, research and practice.", "title": "" }, { "docid": "bfdcad076ec599716de7d2dc43323059", "text": "The strategic management of agricultural lands involves crop field monitoring each year. Crop discrimination via remote sensing is a complex task, especially if different crops have a similar spectral response and cropping pattern. In such cases, crop identification could be improved by combining object-based image analysis and advanced machine learning methods. In this investigation, we evaluated the C4.5 decision tree, logistic regression (LR), support vector machine (SVM) and multilayer perceptron (MLP) neural network methods, both as single classifiers and combined in a hierarchical classification, for the mapping of nine major summer crops (both woody and herbaceous) from ASTER satellite images captured in two different dates. Each method was built with different combinations of spectral and textural features obtained after the segmentation of the remote images in an object-based framework. As single classifiers, MLP and SVM obtained maximum overall accuracy of 88%, slightly higher than LR (86%) and notably higher than C4.5 (79%). The SVM+SVM classifier (best method) improved these results to 89%. In most cases, the hierarchical classifiers considerably increased the accuracy of the most poorly classified class (minimum sensitivity). The SVM+SVM method offered a significant improvement in classification accuracy for all of the studied crops compared to the conventional decision tree classifier, ranging between 4% for safflower and 29% for corn, which suggests the application of object-based image analysis and advanced machine learning methods in complex crop classification tasks.", "title": "" }, { "docid": "90b3e6aee6351b196445843ca8367a3b", "text": "Modeling how visual saliency guides the deployment of attention over visual scenes has attracted much interest recently — among both computer vision and experimental/computational researchers — since visual attention is a key function of both machine and biological vision systems. Research efforts in computer vision have mostly been focused on modeling bottom-up saliency. Strong influences on attention and eye movements, however, come from instantaneous task demands. Here, we propose models of top-down visual guidance considering task influences. 
The new models estimate the state of a human subject performing a task (here, playing video games), and map that state to an eye position. Factors influencing state come from scene gist, physical actions, events, and bottom-up saliency. Proposed models fall into two categories. In the first category, we use classical discriminative classifiers, including Regression, kNN and SVM. In the second category, we use Bayesian Networks to combine all the multi-modal factors in a unified framework. Our approaches significantly outperform 15 competing bottom-up and top-down attention models in predicting future eye fixations on 18,000 and 75,00 video frames and eye movement samples from a driving and a flight combat video game, respectively. We further test and validate our approaches on 1.4M video frames and 11M fixations samples and in all cases obtain higher prediction scores than reference models.", "title": "" }, { "docid": "8a538c63adfd618d8967f736d8c59761", "text": "Skyline queries ask for a set of interesting points from a potentially large set of data points. If we are traveling, for instance, a restaurant might be interesting if there is no other restaurant which is nearer, cheaper, and has better food. Skyline queries retrieve all such interesting restaurants so that the user can choose the most promising one. In this paper, we present a new online algorithm that computes the Skyline. Unlike most existing algorithms that compute the Skyline in a batch, this algorithm returns the first results immediately, produces more and more results continuously, and allows the user to give preferences during the running time of the algorithm so that the user can control what kind of results are produced next (e.g., rather cheap or rather near restaurants).", "title": "" }, { "docid": "d14da110523c56d3c1ab2be9d3fbcf8e", "text": "Women are generally more risk averse than men. We investigated whether between- and within-gender variation in financial risk aversion was accounted for by variation in salivary concentrations of testosterone and in markers of prenatal testosterone exposure in a sample of >500 MBA students. Higher levels of circulating testosterone were associated with lower risk aversion among women, but not among men. At comparably low concentrations of salivary testosterone, however, the gender difference in risk aversion disappeared, suggesting that testosterone has nonlinear effects on risk aversion regardless of gender. A similar relationship between risk aversion and testosterone was also found using markers of prenatal testosterone exposure. Finally, both testosterone levels and risk aversion predicted career choices after graduation: Individuals high in testosterone and low in risk aversion were more likely to choose risky careers in finance. These results suggest that testosterone has both organizational and activational effects on risk-sensitive financial decisions and long-term career choices.", "title": "" }, { "docid": "a3099df83149b84e113d0f12b66e1ab7", "text": "We propose a multistart CMA-ES with equal budgets for two interlaced restart strategies, one with an increasing population size and one with varying small population sizes. 
This BI-population CMA-ES is benchmarked on the BBOB-2009 noiseless function testbed and could solve 23, 22 and 20 functions out of 24 in search space dimensions 10, 20 and 40, respectively, within a budget of less than $10^6 D$ function evaluations per trial.", "title": "" }, { "docid": "ee378b32ee744f0377a3723ec00f4313", "text": "In this article, we present some extensions of the rough set approach and we outline a challenge for the rough set based research. 2006 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "9970f1b1d4712353a736806f19ff2f2c", "text": "Many clustering algorithms suffer from scalability problems on massive datasets and do not support any user interaction during runtime. To tackle these problems, anytime clustering algorithms are proposed. They produce a fast approximate result which is continuously refined during the further run. Also, they can be stopped or suspended anytime and provide an answer. In this paper, we propose a novel anytime clustering algorithm based on the density-based clustering paradigm. Our algorithm called A-DBSCAN is applicable to very high dimensional databases such as time series, trajectory, medical data, etc. The general idea of our algorithm is to use a sequence of lower-bounding functions (LBs) of the true similarity measure to produce multiple approximate results of the true density-based clusters. ADBSCAN operates in multiple levels w.r.t. the LBs and is mainly based on two algorithmic schemes: (1) an efficient distance upgrade scheme which restricts distance calculations to core-objects at each level of the LBs; (2) a local reclustering scheme which restricts update operations to the relevant objects only. Extensive experiments demonstrate that A-DBSCAN acquires very good clustering results at very early stages of execution thus saves a large amount of computational time. Even if it runs to the end, A-DBSCAN is still orders of magnitude faster than DBSCAN.", "title": "" } ]
scidocsrr
50eb728b77c847c39dd859207dc6dcfe
Towards Music Imagery Information Retrieval: Introducing the OpenMIIR Dataset of EEG Recordings from Music Perception and Imagination
[ { "docid": "b2032f8912fac19b18bc5a836c3536e9", "text": "Electroencephalographic measurements are commonly used in medical and research areas. This review article presents an introduction into EEG measurement. Its purpose is to help with orientation in EEG field and with building basic knowledge for performing EEG recordings. The article is divided into two parts. In the first part, background of the subject, a brief historical overview, and some EEG related research areas are given. The second part explains EEG recording.", "title": "" } ]
[ { "docid": "bedc7de2ede206905e89daf61828f868", "text": "Spectral graph partitioning provides a powerful approach to image segmentation. We introduce an alternate idea that finds partitions with a small isoperimetric constant, requiring solution to a linear system rather than an eigenvector problem. This approach produces the high quality segmentations of spectral methods, but with improved speed and stability.", "title": "" }, { "docid": "126b62a0ae62c76b43b4fb49f1bf05cd", "text": "OBJECTIVE\nThe aim of the study was to evaluate efficacy of fractional CO2 vaginal laser treatment (Laser, L) and compare it to local estrogen therapy (Estriol, E) and the combination of both treatments (Laser + Estriol, LE) in the treatment of vulvovaginal atrophy (VVA).\n\n\nMETHODS\nA total of 45 postmenopausal women meeting inclusion criteria were randomized in L, E, or LE groups. Assessments at baseline, 8 and 20 weeks, were conducted using Vaginal Health Index (VHI), Visual Analog Scale for VVA symptoms (dyspareunia, dryness, and burning), Female Sexual Function Index, and maturation value (MV) of Meisels.\n\n\nRESULTS\nForty-five women were included and 3 women were lost to follow-up. VHI average score was significantly higher at weeks 8 and 20 in all study arms. At week 20, the LE arm also showed incremental improvement of VHI score (P = 0.01). L and LE groups showed a significant improvement of dyspareunia, burning, and dryness, and the E arm only of dryness (P < 0.001). LE group presented significant improvement of total Female Sex Function Index (FSFI) score (P = 0.02) and individual domains of pain, desire, and lubrication. In contrast, the L group showed significant worsening of pain domain in FSFI (P = 0.04), but FSFI total scores were comparable in all treatment arms at week 20.\n\n\nCONCLUSIONS\nCO2 vaginal laser alone or in combination with topical estriol is a good treatment option for VVA symptoms. Sexual-related pain with vaginal laser treatment might be of concern.", "title": "" }, { "docid": "5a81a087713e3fd530c646f10073de98", "text": "This study explores the influence of wastewater feedstock composition on hydrothermal liquefaction (HTL) biocrude oil properties and physico-chemical characteristics. Spirulina algae, swine manure, and digested sludge were converted under HTL conditions (300°C, 10-12 MPa, and 30 min reaction time). Biocrude yields ranged from 9.4% (digested sludge) to 32.6% (Spirulina). Although similar higher heating values (32.0-34.7 MJ/kg) were estimated for all product oils, more detailed characterization revealed significant differences in biocrude chemistry. Feedstock composition influenced the individual compounds identified as well as the biocrude functional group chemistry. Molecular weights tracked with obdurate carbohydrate content and followed the order of Spirulina<swine manure<digested sludge. A similar trend was observed in boiling point distributions and the long branched aliphatic contents. These findings show the importance of HTL feedstock composition and highlight the need for better understanding of biocrude chemistries when considering bio-oil uses and upgrading requirements.", "title": "" }, { "docid": "601ab07a9169073032e713b0f5251c1b", "text": "We discuss fast exponential time solutions for NP-complete problems. We survey known results and approaches, we provide pointers to the literature, and we discuss several open problems in this area. 
The list of discussed NP-complete problems includes the travelling salesman problem, scheduling under precedence constraints, satisfiability, knapsack, graph coloring, independent sets in graphs, bandwidth of a graph, and many more.", "title": "" }, { "docid": "ef345b834b801a36b88d3f462f7c2a0e", "text": "At the global level of the Big Five, Extraversion and Neuroticism are the strongest predictors of life satisfaction. However, Extraversion and Neuroticism are multifaceted constructs that combine more specific traits. This article examined the contribution of facets of Extraversion and Neuroticism to life satisfaction in four studies. The depression facet of Neuroticism and the positive emotions/cheerfulness facet of Extraversion were the strongest and most consistent predictors of life satisfaction. These two facets often accounted for more variance in life satisfaction than Neuroticism and Extraversion. The findings suggest that measures of depression and positive emotions/cheerfulness are necessary and sufficient to predict life satisfaction from personality traits. The results also lead to a more refined understanding of the specific personality traits that influence life satisfaction: Depression is more important than anxiety or anger and a cheerful temperament is more important than being active or sociable.", "title": "" }, { "docid": "d922dbcdd2fb86e7582a4fb78990990e", "text": "This paper presents a novel system to estimate body pose configuration from a single depth map. It combines both pose detection and pose refinement. The input depth map is matched with a set of pre-captured motion exemplars to generate a body configuration estimation, as well as semantic labeling of the input point cloud. The initial estimation is then refined by directly fitting the body configuration with the observation (e.g., the input depth). In addition to the new system architecture, our other contributions include modifying a point cloud smoothing technique to deal with very noisy input depth maps, a point cloud alignment and pose search algorithm that is view-independent and efficient. Experiments on a public dataset show that our approach achieves significantly higher accuracy than previous state-of-art methods.", "title": "" }, { "docid": "b10074ccf133a3c18a2029a5fe52f7ff", "text": "Maneuvering vessel detection and tracking (VDT), incorporated with state estimation and trajectory prediction, are important tasks for vessel navigational systems (VNSs), as well as vessel traffic monitoring and information systems (VTMISs) to improve maritime safety and security in ocean navigation. Although conventional VNSs and VTMISs are equipped with maritime surveillance systems for the same purpose, intelligent capabilities for vessel detection, tracking, state estimation, and navigational trajectory prediction are underdeveloped. Therefore, the integration of intelligent features into VTMISs is proposed in this paper. The first part of this paper is focused on detecting and tracking of a multiple-vessel situation. An artificial neural network (ANN) is proposed as the mechanism for detecting and tracking multiple vessels. In the second part of this paper, vessel state estimation and navigational trajectory prediction of a single-vessel situation are considered. An extended Kalman filter (EKF) is proposed for the estimation of vessel states and further used for the prediction of vessel trajectories. 
Finally, the proposed VTMIS is simulated, and successful simulation results are presented in this paper.", "title": "" }, { "docid": "5d2eabccd2e9873b00de3d21903f8ba7", "text": "In prior work we have demonstrated the noise robustness of a novel microphone solution, the PARAT earplug communication terminal. Here we extend that work with results for the ETSI Advanced Front-End and segmental cepstral mean and variance normalization (CMVN). We also propose a method for doing CMVN in the model domain. This removes the need to train models on normalized features, which may significantly extend the applicability of CMVN. The recognition results are comparable to those of the traditional approach.", "title": "" }, { "docid": "c095de72c7cffc19f3b4302c2045525c", "text": "Reinforcement learning schemes perform direct on-line search in control space. This makes them appropriate for modifying control rules to obtain improvements in the performance of a system. The effectiveness of a reinforcement learning strategy is studied here through the training of a learning classz$er system (LCS) that controls the movement of an autonomous vehicle in simulated paths including left and right turns. The LCS comprises a set of conditionaction rules (classifiers) that compete to control the system and evolve by means of a genetic algorithm (GA). Evolution and operation of classifiers depend upon an appropriate credit assignment mechanism based on reinforcement learning. Different design options and the role of various parameters have been investigated experimentally. The performance of vehicle movement under the proposed evolutionary approach is superior compared with that of other (neural) approaches based on reinforcement learning that have been applied previously to the same benchmark problem.", "title": "" }, { "docid": "038f34588540683674f7ec44325b510a", "text": "We propose a framework for automatic modeling, detection, and tracking of 3D objects with a Kinect. The detection part is mainly based on the recent template-based LINEMOD approach [1] for object detection. We show how to build the templates automatically from 3D models, and how to estimate the 6 degrees-of-freedom pose accurately and in real-time. The pose estimation and the color information allow us to check the detection hypotheses and improves the correct detection rate by 13% with respect to the original LINEMOD. These many improvements make our framework suitable for object manipulation in Robotics applications. Moreover we propose a new dataset made of 15 registered, 1100+ frame video sequences of 15 various objects for the evaluation of future competing methods. Fig. 1. 15 different texture-less 3D objects are simultaneously detected with our approach under different poses on heavy cluttered background with partial occlusion. Each detected object is augmented with its 3D model. We also show the corresponding coordinate systems.", "title": "" }, { "docid": "9ce08ed9e7e34ef1f5f12bfbe54e50ea", "text": "GPU-based clusters are increasingly being deployed in HPC environments to accelerate a variety of scientific applications. Despite their growing popularity, the GPU devices themselves are under-utilized even for many computationally-intensive jobs. This stems from the fact that the typical GPU usage model is one in which a host processor periodically offloads computationally intensive portions of an application to the coprocessor. 
Since some portions of code cannot be offloaded to the GPU (for example, code performing network communication in MPI applications), this usage model results in periods of time when the GPU is idle. GPUs could be time-shared across jobs to \"fill\" these idle periods, but unlike CPU resources such as the cache, the effects of sharing the GPU are not well understood. Specifically, two jobs that time-share a single GPU will experience resource contention and interfere with each other. The resulting slow-down could lead to missed job deadlines. Current cluster managers do not support GPU-sharing, but instead dedicate GPUs to a job for the job's lifetime.\n In this paper, we present a framework to predict and handle interference when two or more jobs time-share GPUs in HPC clusters. Our framework consists of an analysis model, and a dynamic interference detection and response mechanism to detect excessive interference and restart the interfering jobs on different nodes. We implement our framework in Torque, an open-source cluster manager, and using real workloads on an HPC cluster, show that interference-aware two-job colocation (although our method is applicable to colocating more than two jobs) improves GPU utilization by 25%, reduces a job's waiting time in the queue by 39% and improves job latencies by around 20%.", "title": "" }, { "docid": "85ba4fa537c8486ff0f8bb39ac2553b2", "text": "Sign language, which is a medium of communication for deaf people, uses manual communication and body language to convey meaning, as opposed to using sound. This paper presents a prototype Malayalam text to sign language translation system. The proposed system takes Malayalam text as input and generates corresponding Sign Language. Output animation is rendered using a computer generated model. This system will help to disseminate information to the deaf people in public utility places like railways, banks, hospitals etc. This will also act as an educational tool in learning Sign Language.", "title": "" }, { "docid": "49f2f870496d34fe379c0b077197bde3", "text": "Ultra wideband components have been developed using SIW technology. The various components including a GCPW transition with less than 0.4dB insertion loss are developed. In addition to, T and Y-junctions are optimized with relatively wide bandwidth of greater than 63% and 40% respectively that have less than 0.6 dB insertion loss. The developed transition was utilized to design an X-band 8 way power divider that demonstrated excellent performance over a 5 GHz bandwidth with less than ±4º and ±0.9 dB phase and amplitude imbalance, respectively. The developed SIW power divider has a low profile and is particularly suitable for circuits' integration.", "title": "" }, { "docid": "cbc6986bf415292292b7008ae4d13351", "text": "In this work we present a method to improve the pruning step of the current state-of-the-art methodology to compress neural networks. The novelty of the proposed pruning technique is in its differentiability, which allows pruning to be performed during the backpropagation phase of the network training. This enables an end-to-end learning and strongly reduces the training time. The technique is based on a family of differentiable pruning functions and a new regularizer specifically designed to enforce pruning. 
The experimental results show that the joint optimization of both the thresholds and the network weights permits to reach a higher compression rate, reducing the number of weights of the pruned network by a further 14% to 33 % compared to the current state-of-the-art. Furthermore, we believe that this is the first study where the generalization capabilities in transfer learning tasks of the features extracted by a pruned network are analyzed. To achieve this goal, we show that the representations learned using the proposed pruning methodology maintain the same effectiveness and generality of those learned by the corresponding non-compressed network on a set of different recognition tasks.", "title": "" }, { "docid": "ebd7f55f11d6fe8e4f439358b8a65eb4", "text": "This article investigates the problem of Simultaneous Localization and Mapping (SLAM) from the perspective of linear estimation theory. The problem is first formulated in terms of graph embedding: a graph describing robot poses at subsequent instants of time needs be embedded in a three-dimensional space, assuring that the estimated configuration maximizes measurement likelihood. Combining tools belonging to linear estimation and graph theory, a closed-form approximation to the full SLAM problem is proposed, under the assumption that the relative position and the relative orientation measurements are independent. The approach needs no initial guess for optimization and is formally proven to admit solution under the SLAM setup. The resulting estimate can be used as an approximation of the actual nonlinear solution or can be further refined by using it as an initial guess for nonlinear optimization techniques. Finally, the experimental analysis demonstrates that such refinement is often unnecessary, since the linear estimate is already accurate.", "title": "" }, { "docid": "21756eeb425854184ba2ea722a935928", "text": "Collaborative filtering aims at learning predictive models of user preferences, interests or behavior from community data, that is, a database of available user preferences. In this article, we describe a new family of model-based algorithms designed for this task. These algorithms rely on a statistical modelling technique that introduces latent class variables in a mixture model setting to discover user communities and prototypical interest profiles. We investigate several variations to deal with discrete and continuous response variables as well as with different objective functions. The main advantages of this technique over standard memory-based methods are higher accuracy, constant time prediction, and an explicit and compact model representation. The latter can also be used to mine for user communitites. The experimental evaluation shows that substantial improvements in accucracy over existing methods and published results can be obtained.", "title": "" }, { "docid": "815e0ad06fdc450aa9ba3f56ab19ab05", "text": "A member of the Liliaceae family, garlic ( Allium sativum) is highly regarded throughout the world for both its medicinal and culinary value. Early men of medicine such as Hippocrates, Pliny and Aristotle encouraged a number of therapeutic uses for this botanical. Today, it is commonly used in many cultures as a seasoning or spice. Garlic also stands as the second most utilized supplement. With its sulfur containing compounds, high trace mineral content, and enzymes, garlic has shown anti-viral, anti-bacterial, anti-fungal and antioxidant abilities. 
Diseases that may be helped or prevented by garlic’s medicinal actions include Alzheimer’s Disease, cancer, cardiovascular disease (including atherosclerosis, strokes, hypertension, thrombosis and hyperlipidemias) children’s conditions, dermatologic applications, stress, and infections. Some research points to possible benefits in diabetes, drug toxicity, and osteoporosis.", "title": "" }, { "docid": "4c50dd5905ce7e1f772e69673abe1094", "text": "The wireless industry has been experiencing an explosion of data traffic usage in recent years and is now facing an even bigger challenge, an astounding 1000-fold data traffic increase in a decade. The required traffic increase is in bits per second per square kilometer, which is equivalent to bits per second per Hertz per cell × Hertz × cell per square kilometer. The innovations through higher utilization of the spectrum (bits per second per Hertz per cell) and utilization of more bandwidth (Hertz) are quite limited: spectral efficiency of a point-to-point link is very close to the theoretical limits, and utilization of more bandwidth is a very costly solution in general. Hyper-dense deployment of heterogeneous and small cell networks (HetSNets) that increase cells per square kilometer by deploying more cells in a given area is a very promising technique as it would provide a huge capacity gain by bringing small base stations closer to mobile devices. This article presents a holistic view on hyperdense HetSNets, which include fundamental preference in future wireless systems, and technical challenges and recent technological breakthroughs made in such networks. Advancements in modeling and analysis tools for hyper-dense HetSNets are also introduced with some additional interference mitigation and higher spectrum utilization techniques. This article ends with a promising view on the hyper-dense HetSNets to meet the upcoming 1000× data challenge.", "title": "" }, { "docid": "14a90781132fa3932d41b21b382ba362", "text": "In this paper, a prevalent type of zero-voltage- transition bidirectional converters is analyzed with the inclusion of the reverse recovery effect of the diodes. The main drawback of this type is missing the soft-switching condition of the main switches at operating duty cycles smaller than 0.5. As a result, soft-switching condition would be lost in one of the bidirectional converter operating modes (forward or reverse modes) since the duty cycles of the forward and reverse modes are complement of each other. Analysis shows that the rectifying diode reverse recovery would assist in providing the soft-switching condition for the duty cycles below 0.5, which is done by a proper design of the snubber capacitor and with no limitation on the rectifying diode current rate at turn-off. Hence, the problems associated with the soft-switching range and the reverse recovery of the rectifying diode are solved simultaneously, and soft-switching condition for both operating modes of the bidirectional converter is achieved with no extra auxiliary components and no complex control. The theoretical analysis for a bidirectional buck and boost converter is presented in detail, and the validity of the theoretical analysis is justified using the experimental results of a 250-W 135- to 200-V prototype converter.", "title": "" } ]
scidocsrr
e87fa1711329d3b3f0a6b56ad4080445
IR-UWB Radar Demonstrator for Ultra-Fine Movement Detection and Vital-Sign Monitoring
[ { "docid": "45f27e9c768e6fa0a1f4aa63532827ff", "text": "Antennas are mandatory system components for UWB communication systems. The paper presents a comprehensive approach for the characterization of UWB antenna concepts. Measurements of the transient responses of a LPDA and a Vivaldi antenna prove the effectivity of the presented model.", "title": "" } ]
[ { "docid": "9af2a00a9a059a87a188d351f7de4904", "text": "The cities of Paris, London, Chicago, and New York (among others) have recently launched large-scale bike-share systems to facilitate the use of bicycles for urban commuting. This paper estimates the relationship between aspects of bike-share system design and ridership. Specifically, we estimate the effects on ridership of station accessibility (how far the commuter must walk to reach a station) and of bike-availability (the likelihood of finding a bike at the station). Our analysis is based on a structural demand model that considers the random-utility maximizing choices of spatially distributed commuters, and it is estimated using highfrequency system-use data from the bike-share system in Paris. The role of station accessibility is identified using cross-sectional variation in station location and high -frequency changes in commuter choice sets; bike-availability effects are identified using longitudinal variation. Because the scale of our data, (in particular the high-frequency changes in choice sets) render traditional numerical estimation techniques infeasible, we develop a novel transformation of our estimation problem: from the time domain to the “station stockout state” domain. We find that a 10% reduction in distance traveled to access bike-share stations (about 13 meters) can increase system-use by 6.7% and that a 10% increase in bikeavailability can increase system-use by nearly 12%. Finally, we use our estimates to develop a calibrated counterfactual simulation demonstrating that the bike-share system in central Paris would have 29.41% more ridership if its station network design had incorporated our estimates of commuter preferences—with no additional spending on bikes or docking points.", "title": "" }, { "docid": "e8b4f006d0d8bc1fb504ae4268d6f3ac", "text": "Notes: (1) These questions require thought, but do not require long answers. Please be as concise as possible. (2) If you have a question about this homework, we encourage you to post your question on our Piazza forum, at https://piazza.com/stanford/fall2014/cs229. (3) If you missed the first lecture or are unfamiliar with the collaboration or honor code policy, please read the policy on Handout #1 (available from the course website) before starting work. (4) For problems that require programming, please include in your submission a printout of your code (with comments) and any figures that you are asked to plot. (5) If you are an on-campus (non-SCPD) student, please print, fill out, and include a copy of the cover sheet (enclosed as the final page of this document), and include the cover sheet as the first page of your submission. as a single PDF file under 20MB in size. If you have trouble submitting online, you can also email your submission to cs229-qa@cs.stanford.edu. However, we strongly recommend using the website submission method as it will provide confirmation of submission, and also allow us to track and return your graded homework to you more easily. If you are scanning your document by cellphone, please check the Piazza forum for recommended cellphone scanning apps and best practices.", "title": "" }, { "docid": "b74922324e4b0e67092b3303068c8794", "text": "Data mining techniques are used to extract useful knowledge from raw data. The extracted knowledge is valuable and significantly affects the decision maker. Educational data mining (EDM) is a method for extracting useful information that could potentially affect an organization. 
The increase of technology use in educational systems has led to the storage of large amounts of student data, which makes it important to use EDM to improve teaching and learning processes. EDM is useful in many different areas including identifying at-risk students, identifying priority learning needs for different groups of students, increasing graduation rates, effectively assessing institutional performance, maximizing campus resources, and optimizing subject curriculum renewal. This paper surveys the relevant studies in the EDM field and includes the data and methodologies used in those studies.", "title": "" }, { "docid": "43a4fe61a35c1c34335ac4d1f86ebea3", "text": "The augmented Lagrangian method (ALM) is a benchmark for solving a convex minimization model with linear constraints. We consider the special case where the objective is the sum of m functions without coupled variables. For solving this separable convex minimization model, it is usually required to decompose the ALM subproblem at each iteration into m smaller subproblems, each of which only involves one function in the original objective. Easier subproblems capable of taking full advantage of the functions’ properties individually could thus be generated. In this paper, we focus on the case where full Jacobian decomposition is applied to ALM subproblems, i.e., all the decomposed ALM subproblems are eligible for parallel computation at each iteration. For the first time, we show by an example that the ALM with full Jacobian decomposition could be divergent. To guarantee the convergence, we suggest combining an under-relaxation step and the output of the ALM with full Jacobian decomposition. A novel analysis is presented to illustrate how to choose refined step sizes for this under-relaxation step. Accordingly, a new splitting version of the ALM with full Jacobian decomposition is proposed. We derive the worst-case O(1/k) convergence rate measured by the iteration complexity (where k represents the iteration counter) in both the ergodic and a nonergodic senses for the new algorithm. Finally, an assignment problem is tested to illustrate the efficiency of the new algorithm.", "title": "" }, { "docid": "e181f73c36c1d8c9463ef34da29d9e03", "text": "This paper examines prospects and limitations of citation studies in the humanities. We begin by presenting an overview of bibliometric analysis, noting several barriers to applying this method in the humanities. Following that, we present an experimental tool for extracting and classifying citation contexts in humanities journal articles. This tool reports the bibliographic information about each reference, as well as three features about its context(s): frequency, location-in-document, and polarity. We found that extraction was highly successful (above 85%) for three of the four journals, and statistics for the three citation figures were broadly consistent with previous research. We conclude by noting several limitations of the sentiment classifier and suggesting future areas for refinement.", "title": "" }, { "docid": "77aea5cc0a74546f5c8fef1dd39770bc", "text": "Road condition data are important in transportation management systems. Over the last decades, significant progress has been made and new approaches have been proposed for efficient collection of pavement condition data. 
However, the assessment of unpaved road conditions has been rarely addressed in transportation research. Unpaved roads constitute approximately 40% of the U.S. road network, and are the lifeline in rural areas. Thus, it is important for timely identification and rectification of deformation on such roads. This article introduces an innovative Unmanned Aerial Vehicle (UAV)-based digital imaging system focusing on efficient collection of surface condition data over rural roads. In contrast to other approaches, aerial assessment is proposed by exploring aerial imagery acquired from an unpiloted platform to derive a threedimensional (3D) surface model over a road distress area for distress measurement. The system consists of a lowcost model helicopter equipped with a digital camera, a Global Positioning System (GPS) receiver and an Inertial Navigation System (INS), and a geomagnetic sensor. A set of image processing algorithms has been developed for precise orientation of the acquired images, and generation of 3D road surface models and orthoimages, which allows for accurate measurement of the size and the dimension of the road surface distresses. The developed system has been tested over several test sites ∗To whom correspondence should be addressed. E-mail: chunsunz@ unimelb.edu.au. with roads of various surface distresses. The experiments show that the system is capable for providing 3D information of surface distresses for road condition assessment. Experiment results demonstrate that the system is very promising and provides high accuracy and reliable results. Evaluation of the system using 2D and 3D models with known dimensions shows that subcentimeter measurement accuracy is readily achieved. The comparison of the derived 3D information with the onsite manual measurements of the road distresses reveals differences of 0.50 cm, demonstrating the potential of the presented system for future practice.", "title": "" }, { "docid": "92fab94ccaf9495fed86eb456602b3b4", "text": "We present a transformation-grounded image generation network for novel 3D view synthesis from a single image. Our approach first explicitly infers the parts of the geometry visible both in the input and novel views and then casts the remaining synthesis problem as image completion. Specifically, we both predict a flow to move the pixels from the input to the novel view along with a novel visibility map that helps deal with occulsion/disocculsion. Next, conditioned on those intermediate results, we hallucinate (infer) parts of the object invisible in the input image. In addition to the new network structure, training with a combination of adversarial and perceptual loss results in a reduction in common artifacts of novel view synthesis such as distortions and holes, while successfully generating high frequency details and preserving visual aspects of the input image. We evaluate our approach on a wide range of synthetic and real examples. Both qualitative and quantitative results show our method achieves significantly better results compared to existing methods.", "title": "" }, { "docid": "921b4ecaed69d7396285909bd53a3790", "text": "Brain mapping transforms the brain cortical surface to canonical planar domains, which plays a fundamental role in morphological study. Most existing brain mapping methods are based on angle preserving maps, which may introduce large area distortions. This work proposes an area preserving brain mapping method based on Monge-Brenier theory. 
The brain mapping is intrinsic to the Riemannian metric, unique, and diffeomorphic. The computation is equivalent to convex energy minimization and power Voronoi diagram construction. Comparing to the existing approaches based on Monge-Kantorovich theory, the proposed one greatly reduces the complexity (from n^2 unknowns to n), and improves the simplicity and efficiency. Experimental results on caudate nucleus surface mapping and cortical surface mapping demonstrate the efficacy and efficiency of the proposed method. Conventional methods for caudate nucleus surface mapping may suffer from numerical instability; in contrast, the current method produces diffeomorphic mappings stably. In the study of cortical surface classification for recognition of Alzheimer's Disease, the proposed method outperforms some other morphometry features.", "title": "" }, { "docid": "c8009d5823d7af91dc9b56a4d19eed27", "text": "Built to Last's answer is to consciously build a company with even more care than the hotels, airplanes, or computers from which the company earns revenue. Building a company requires much more than hiring smart employees and aggressive salespeople. Visionary companies consider the personality of their potential employees and how they will fare in the company culture. They treasure employees dedicated to the company's mission, while those that don't are \" ejected like a virus. \" They carefully choose goals and develop cultures that encourage innovation and experimentation. Visionary companies plan for the future, measure their current production, and revise plans when conditions change. Much like the TV show Biography, Built to Last gives fascinating historical insight into the birth and growth of The most radical of the three books I reviewed, The Fifth Discipline, can fundamentally change the way you view the world. The premise is that businesses, schools, governments, and other organizations can best succeed if they are learning organizations. The Fifth Discipline is Peter Senge's vehicle for explaining how five complementary components-systems thinking, personal mastery, mental models, shared vision, and team learning-can support continuous learning and therefore sustainable improvement. Senge, a professor at MIT's Sloan School of Government and a director of the Society for Organizational Learning, looks beyond simple cause-and-effect explanations and instead advocates \" systems thinking \" to discover a more complete understanding of how and why events occur. Systems thinkers go beyond the data readily available, question assumptions, and try to identify the many types of activities that can occur simultaneously. The need for such a worldview is made clear early in the book with the role-playing \" beer game. \" In this game, three participants play the roles of store manager, beverage distributor, and beer brewer. Each has information that would typically be available: the store manager knows how many cases of beer are in inventory, how many are on order, and how many were sold in the last week. The distributor tracks the orders placed with the brewery, inventory, orders received this week from each store, and so on. As the customers' demands vary, the manager, distributor, and brewer make what seem to be reasonable decisions to change the amount they order or brew. Thousands of people have played this and, unfortunately, the results are extremely consistent. 
As each player tries to maximize profits, each fails to consider how his …", "title": "" }, { "docid": "b61985ecdb51982e6e31b19c862f18e2", "text": "Autonomous indoor navigation of Micro Aerial Vehicles (MAVs) possesses many challenges. One main reason is because GPS has limited precision in indoor environments. The additional fact that MAVs are not able to carry heavy weight or power consuming sensors, such as range finders, makes indoor autonomous navigation a challenging task. In this paper, we propose a practical system in which a quadcopter autonomously navigates indoors and finds a specific target, i.e. a book bag, by using a single camera. A deep learning model, Convolutional Neural Network (ConvNet), is used to learn a controller strategy that mimics an expert pilot’s choice of action. We show our system’s performance through real-time experiments in diverse indoor locations. To understand more about our trained network, we use several visualization techniques.", "title": "" }, { "docid": "09adc565d4a36f396ccd0e1dcb046df0", "text": "We address the problem of finding realistic geometric corrections to a foreground object such that it appears natural when composited into a background image. To achieve this, we propose a novel Generative Adversarial Network (GAN) architecture that utilizes Spatial Transformer Networks (STNs) as the generator, which we call Spatial Transformer GANs (ST-GANs). ST-GANs seek image realism by operating in the geometric warp parameter space. In particular, we exploit an iterative STN warping scheme and propose a sequential training strategy that achieves better results compared to naive training of a single generator. One of the key advantages of ST-GAN is its applicability to high-resolution images indirectly since the predicted warp parameters are transferable between reference frames. We demonstrate our approach in two applications: (1) visualizing how indoor furniture (e.g. from product images) might be perceived in a room, (2) hallucinating how accessories like glasses would look when matched with real portraits.", "title": "" }, { "docid": "41d5b01cf6f731db0752af0953395327", "text": "Deep neural networks have been shown to suffer from a surprising weakness: their classification outputs can be changed by small, non-random perturbations of their inputs. This adversarial example phenomenon has been explained as originating from deep networks being “too linear” (Goodfellow et al., 2014). We show here that the linear explanation of adversarial examples presents a number of limitations: the formal argument is not convincing; linear classifiers do not always suffer from the phenomenon, and when they do their adversarial examples are different from the ones affecting deep networks. We propose a new perspective on the phenomenon. We argue that adversarial examples exist when the classification boundary lies close to the submanifold of sampled data, and present a mathematical analysis of this new perspective in the linear case. We define the notion of adversarial strength and show that it can be reduced to the deviation angle between the classifier considered and the nearest centroid classifier. Then, we show that the adversarial strength can be made arbitrarily high independently of the classification performance due to a mechanism that we call boundary tilting. This result leads us to defining a new taxonomy of adversarial examples. 
Finally, we show that the adversarial strength observed in practice is directly dependent on the level of regularisation used and the strongest adversarial examples, symptomatic of overfitting, can be avoided by using a proper level of regularisation.", "title": "" }, { "docid": "b82b46fc0d886e3e87b757a6ca14d4bb", "text": "Objective: To study the efficacy and safety of an indigenously designed low cost nasal bubble continuous positive airway pressure (NB-CPAP) in neonates admitted with respiratory distress. Study Design: A descriptive study. Place and Duration of Study: Combined Military Hospital (CMH), Peshawar from Jan 2014 to May 2014. Material and Methods: Fifty neonates who developed respiratory distress within 6 hours of life were placed on an indigenous NB-CPAP device (costing 220 PKR) and evaluated for gestational age, weight, indications, duration on NB-CPAP, pre-defined outcomes and complications. Results: A total of 50 consecutive patients with respiratory distress were placed on NB-CPAP. Male to Female ratio was 2.3:1. Mean weight was 2365.85 ± 704 grams and mean gestational age was 35.41 ± 2.9 weeks. Indications for applying NB-CPAP were transient tachypnea of the newborn (TTN, 52%) and respiratory distress syndrome (RDS, 44%). Most common complications were abdominal distension (15.6%) and pulmonary hemorrhage (6%). Out of 50 infants placed on NB-CPAP, 35 (70%) were managed on NB-CPAP alone while 15 (30%) needed mechanical ventilation following a trial of NB-CPAP. Conclusion: In 70% of babies invasive mechanical ventilation was avoided using NB-CPAP.", "title": "" }, { "docid": "d380a5de56265c80309733370c612316", "text": "Two experiments demonstrated that self-perceptions and social perceptions may persevere after the initial basis for such perceptions has been completely discredited. In both studies subjects first received false feedback, indicating that they had either succeeded or failed on a novel discrimination task and then were thoroughly debriefed concerning the predetermined and random nature of this outcome manipulation. In experiment 2, both the initial outcome manipulation and subsequent debriefing were watched and overheard by observers. Both actors and observers showed substantial perseverance of initial impressions concerning the actors' performance and abilities following a standard \"outcome\" debriefing. \"Process\" debriefing, in which explicit discussion of the perseverance process was provided, generally proved sufficient to eliminate erroneous self-perceptions. Biased attribution processes that might underlie perserverance phenomena and the implications of the present data for the ethical conduct of deception research are discussed.", "title": "" }, { "docid": "e65ec1afef79e5c85b6fa2009c7ecd95", "text": "Popular domain adaptation (DA) techniques learn a classifier for the target domain by sampling relevant data points from the source and combining it with the target data. We present a Support Vector Machine (SVM) based supervised DA technique, where the similarity between source and target domains is modeled as the similarity between their SVM decision boundaries. We couple the source and target SVMs and reduce the model to a standard single SVM. We test the Coupled-SVM on multiple datasets and compare our results with other popular SVM based DA approaches.", "title": "" }, { "docid": "f11aa75465f087bcd059e2af1dc963d4", "text": "The process of translation is ambiguous, in that there are typically many valid translations for a given sentence. 
This gives rise to significant variation in parallel corpora; however, most current models of machine translation do not account for this variation, instead treating the problem as a deterministic process. To this end, we present a deep generative model of machine translation which incorporates a chain of latent variables, in order to account for local lexical and syntactic variation in parallel corpora. We provide an in-depth analysis of the pitfalls encountered in variational inference for training deep generative models. Experiments on several different language pairs demonstrate that the model consistently improves over strong baselines.", "title": "" }, { "docid": "0f421a4ee46535f01390e04fa24b5502", "text": "Wireless sensor networks (WSNs) are autonomous networks of spatially distributed sensor nodes that are capable of wirelessly communicating with each other in a multihop fashion. Among different metrics, network lifetime and utility, and energy consumption in terms of carbon footprint are key parameters that determine the performance of such a network and entail a sophisticated design at different abstraction levels. In this paper, wireless energy harvesting (WEH), wake-up radio (WUR) scheme, and error control coding (ECC) are investigated as enabling solutions to enhance the performance of WSNs while reducing its carbon footprint. Specifically, a utility-lifetime maximization problem incorporating WEH, WUR, and ECC, is formulated and solved using distributed dual subgradient algorithm based on the Lagrange multiplier method. Discussion and verification through simulation results show how the proposed solutions improve network utility, prolong the lifetime, and pave the way for a greener WSN by reducing its carbon footprint.", "title": "" }, { "docid": "3a1419469eb2c04dee78e3b7d46d1a18", "text": "$\sum_{c \in T} \sum_{u \in S_c} \log f_{u,c}(X)$, where $S_c$ is the set of locations identified as class $c \in C$ by the weak localization procedure. Expansion principle: the expansion loss incorporates prior knowledge about object sizes. The characteristic size of any class $c$ is controlled by a decay parameter $d_c$; decay $d_+$ is used for all classes present in the image and decay $d_-$ for all classes that are absent. $I = \{i_1, \ldots, i_n\}$ defines the descending order of class scores $f_{i_1,c}(x) \ge \cdots \ge f_{i_n,c}(x)$, and $G_c(f(X); d_c) = \frac{1}{Z(d_c)} \sum^{n}$", "title": "" }, { "docid": "277919545c003c0c2a266ace0d70de03", "text": "Two single-pole, double-throw transmit/receive switches were designed and fabricated with different substrate resistances using a 0.18-/spl mu/m p/sup $/substrate CMOS process. The switch with low substrate resistances exhibits 0.8-dB insertion loss and 17-dBm P/sub 1dB/ at 5.825 GHz, whereas the switch with high substrate resistances has 1-dB insertion loss and 18-dBm P/sub 1dB/. These results suggest that the optimal insertion loss can be achieved with low substrate resistances and 5.8-GHz T/R switches with excellent insertion loss and reasonable power handling capability can be implemented in a 0.18-/spl mu/m CMOS process.", "title": "" }, { "docid": "754108343e8a57852d4a54abf45f5c43", "text": "Precision measurement of dc high current is usually realized by second harmonic fluxgate current transducers, but the complicated modulation and demodulation circuits with high cost have been limiting their applications. This paper presents a low-cost transducer that can substitute the traditional ones for precision measurement of high current. 
The new transducer, based on the principle of zero-flux, is the combination of an improved self-oscillating fluxgate sensor with a magnetic integrator in a common feedback loop. The transfer function of the zero-flux control strategy of the transducer is established to verify the validity of the qualitative analysis on operating principle. Origins and major influence factors of the modulation ripple, respectively, caused by the useful signal extraction circuit and the transformer effect are studied, and related suppression methods are proposed, which can be considered as one of the major technical modifications for performance improvement. As verification, a prototype is realized, and several key specifications, including the linearity, small-signal bandwidth, modulation ripple, ratio stability under full load, power-on repeatability, magnetic error, and temperature coefficient, are characterized. Measurement results show that the new transducer with the maximum output ripple 0.3 μA can measure dc current up to ±600 A with a relative accuracy 1.3 ppm in the full scale, and it also can measure ac current and has a -3 dB bandwidth greater than 100 kHz.", "title": "" } ]
scidocsrr
d63f1e7dcbda8cd429b78be6841859a9
Permission based Android security: Issues and countermeasures
[ { "docid": "cb561e56e60ba0e5eef2034158c544c2", "text": "Android is a modern and popular software platform for smartphones. Among its predominant features is an advanced security model which is based on application-oriented mandatory access control and sandboxing. This allows developers and users to restrict the execution of an application to the privileges it has (mandatorily) assigned at installation time. The exploitation of vulnerabilities in program code is hence believed to be confined within the privilege boundaries of an application’s sandbox. However, in this paper we show that a privilege escalation attack is possible. We show that a genuine application exploited at runtime or a malicious application can escalate granted permissions. Our results immediately imply that Android’s security model cannot deal with a transitive permission usage attack and Android’s sandbox model fails as a last resort against malware and sophisticated runtime attacks.", "title": "" } ]
[ { "docid": "3cdd640f48c1713c3d360da00c634883", "text": "Hate speech detection in social media texts is an important Natural language Processing task, which has several crucial applications like sentiment analysis, investigating cyber bullying and examining socio-political controversies. While relevant research has been done independently on code-mixed social media texts and hate speech detection, our work is the first attempt in detecting hate speech in HindiEnglish code-mixed social media text. In this paper, we analyze the problem of hate speech detection in code-mixed texts and present a Hindi-English code-mixed dataset consisting of tweets posted online on Twitter. The tweets are annotated with the language at word level and the class they belong to (Hate Speech or Normal Speech). We also propose a supervised classification system for detecting hate speech in the text using various character level, word level, and lexicon based features.", "title": "" }, { "docid": "6c4b9b5383269ed47d2077068652f0b7", "text": "Security issues in computer networks have focused on attacks on end systems and the control plane. An entirely new class of emerging network attacks aims at the data plane of the network. Data plane forwarding in network routers has traditionally been implemented with custom-logic hardware, but recent router designs increasingly use software-programmable network processors for packet forwarding. These general-purpose processing devices exhibit software vulnerabilities and are susceptible to attacks. We demonstrate-to our knowledge the first-practical attack that exploits a vulnerability in packet processing software to launch a devastating denial-of-service attack from within the network infrastructure. This attack uses only a single attack packet to consume the full link bandwidth of the router's outgoing link. We also present a hardware-based defense mechanism that can detect situations where malicious packets try to change the operation of the network processor. Using a hardware monitor, our NetFPGA-based prototype system checks every instruction executed by the network processor and can detect deviations from correct processing within four clock cycles. A recovery system can restore the network processor to a safe state within six cycles. This high-speed detection and recovery system can ensure that network processors can be protected effectively and efficiently from this new class of attacks.", "title": "" }, { "docid": "150e7a6f46e93fc917e43e32dedd9424", "text": "This purpose of this introductory paper is threefold. First, it introduces the Monte Carlo method with emphasis on probabilistic machine learning. Second, it reviews the main building blocks of modern Markov chain Monte Carlo simulation, thereby providing and introduction to the remaining papers of this special issue. Lastly, it discusses new interesting research horizons.", "title": "" }, { "docid": "319a2cf90013976af8ea5cee9f8ddc88", "text": "Inspired by “GoogleTM Sets”, we consider the problem of retrieving items from a concept or cluster, given a query consisting of a few items from that cluster. We formulate this as a Bayesian inference problem and describe a very simple algorithm for solving it. Our algorithm uses a modelbased concept of a cluster and ranks items using a score which evaluates the marginal probability that each item belongs to a cluster containing the query items. For exponential family models with conjugate priors this marginal probability is a simple function of sufficient statistics. 
We focus on sparse binary data and show that our score can be evaluated exactly using a single sparse matrix multiplication, making it possible to apply our algorithm to very large datasets. We evaluate our algorithm on three datasets: retrieving movies from EachMovie, finding completions of author sets from the NIPS dataset, and finding completions of sets of words appearing in the Grolier encyclopedia. We compare to Google TM Sets and show that Bayesian Sets gives very reasonable set completions.", "title": "" }, { "docid": "f62ea522062fb48860c98140d746ab23", "text": "Feature selection is widely used in preparing high-dimensional data for effective data mining. The explosive popularity of social media produces massive and high-dimensional data at an unprecedented rate, presenting new challenges to feature selection. Social media data consists of (1) traditional high-dimensional, attribute-value data such as posts, tweets, comments, and images, and (2) linked data that provides social context for posts and describes the relationships between social media users as well as who generates the posts, and so on. The nature of social media also determines that its data is massive, noisy, and incomplete, which exacerbates the already challenging problem of feature selection. In this article, we study a novel feature selection problem of selecting features for social media data with its social context. In detail, we illustrate the differences between attribute-value data and social media data, investigate if linked data can be exploited in a new feature selection framework by taking advantage of social science theories. We design and conduct experiments on datasets from real-world social media Web sites, and the empirical results demonstrate that the proposed framework can significantly improve the performance of feature selection. Further experiments are conducted to evaluate the effects of user--user and user--post relationships manifested in linked data on feature selection, and research issues for future work will be discussed.", "title": "" }, { "docid": "171c903403e1b199a22c980d75217f14", "text": "The optical microscope remains a widely-used tool for diagnosis and quantitation of malaria. An automated system that can match the performance of well-trained technicians is motivated by a shortage of trained microscopists. We have developed a computer vision system that leverages deep learning to identify malaria parasites in micrographs of standard, field-prepared thick blood films. The prototype application diagnoses P. falciparum with sufficient accuracy to achieve competency level 1 in the World Health Organization external competency assessment, and quantitates with sufficient accuracy for use in drug resistance studies. A suite of new computer vision techniques-global white balance, adaptive nonlinear grayscale, and a novel augmentation scheme-underpin the system's state-of-the-art performance. We outline a rich, global training set; describe the algorithm in detail; argue for patient-level performance metrics for the evaluation of automated diagnosis methods; and provide results for P. falciparum.", "title": "" }, { "docid": "cd5a267c1dac92e68ba677c4a2e06422", "text": "Person re-identification aims to robustly measure similarities between person images. The significant variation of person poses and viewing angles challenges for accurate person re-identification. 
The spatial layout and correspondences between query person images are vital information for tackling this problem but are ignored by most state-of-the-art methods. In this paper, we propose a novel Kronecker Product Matching module to match feature maps of different persons in an end-to-end trainable deep neural network. A novel feature soft warping scheme is designed for aligning the feature maps based on matching results, which is shown to be crucial for achieving superior accuracy. The multi-scale features based on hourglass-like networks and self residual attention are also exploited to further boost the re-identification performance. The proposed approach outperforms state-of-the-art methods on the Market-1501, CUHK03, and DukeMTMC datasets, which demonstrates the effectiveness and generalization ability of our proposed approach.", "title": "" }, { "docid": "83b50f380f500bf6e140b3178431f0c6", "text": "Leader election protocols are a fundamental building block for replicated distributed services. They ease the design of leader-based coordination protocols that tolerate failures. In partially synchronous systems, designing a leader election algorithm, that does not permit multiple leaders while the system is unstable, is a complex task. As a result many production systems use third-party distributed coordination services, such as ZooKeeper and Chubby, to provide a reliable leader election service. However, adding a third-party service such as ZooKeeper to a distributed system incurs additional operational costs and complexity. ZooKeeper instances must be kept running on at least three machines to ensure its high availability. In this paper, we present a novel leader election protocol using NewSQL databases for partially synchronous systems, that ensures at most one leader at any given time. The leader election protocol uses the database as distributed shared memory. Our work enables distributed systems that already use NewSQL databases to save the operational overhead of managing an additional third-party service for leader election. Our main contribution is the design, implementation and validation of a practical leader election algorithm, based on NewSQL databases, that has performance comparable to a leader election implementation using a state-of-the-art distributed coordination service, ZooKeeper.", "title": "" }, { "docid": "a34a49a337cd0d198fe8bcc05f8a91ea", "text": "In most real-world audio recordings, we encounter several types of audio events. In this paper, we develop a technique for detecting signature audio events, that is based on identifying patterns of occurrences of automatically learned atomic units of sound, which we call Acoustic Unit Descriptors or AUDs. Experiments show that the methodology works as well for detection of individual events and their boundaries in complex recordings.", "title": "" }, { "docid": "948b157586c75674e75bd50b96162861", "text": "We propose a database design methodology for NoSQL systems. The approach is based on NoAM (NoSQL Abstract Model), a novel abstrac t d ta model for NoSQL databases, which exploits the commonalities of various N SQL systems and is used to specify a system-independent representatio n of the application data. This intermediate representation can be then implemented in target NoSQL databases, taking into account their specific features. 
Overall, the methodology aims at supporting scalability, performance, and consistency, as needed by next-generation web applications.", "title": "" }, { "docid": "5ed1a43c51bfca023764a0159449bc68", "text": "Level Converters are key components of multi-voltage based systems-on-chips. Recently, a great deal of research has been focused on power dissipation reduction using various types of level converters in multi-voltage systems. These level converters include either level up conversion or level down conversion. In this paper we propose a unique level converter called universal level converter (ULC). This level converter is capable of four types of level converting functions, such as up conversion, down conversion, passing and blocking. The universal level converter is simulated in CADENCE using 90nm PTM technology model files. Three types of analysis such as power, parametric and load analysis are performed on the proposed level converter. The power analysis results prove that the proposed level converter has an average power reduction of approximately 87.2% compared to other existing level converters at different technology nodes. The parametric analysis and load analysis show that the proposed level converter provides a stable output for input voltages as low as 0.6V with a varying load from 1fF-200fF. The universal level converter works at dual voltages of 1.2V and 1.02V (85% of Vddh) with VTH value for NMOS as 0.339V and for PMOS as -0.339V. The ULC has an average power consumption of 27.1μW at a load", "title": "" }, { "docid": "cdc3b46933db0c88f482ded1dcdff9e6", "text": "Overvoltages in low voltage (LV) feeders with high penetration of photovoltaics (PV) are usually prevented by limiting the feeder's PV capacity to very conservative values, even if the critical periods rarely occur. This paper discusses the use of droop-based active power curtailment techniques for overvoltage prevention in radial LV feeders as a means for increasing the installed PV capacity and energy yield. Two schemes are proposed and tested in a typical 240-V/75-kVA Canadian suburban distribution feeder with 12 houses with roof-top PV systems. In the first scheme, all PV inverters have the same droop coefficients. In the second, the droop coefficients are different so as to share the total active power curtailed among all PV inverters/houses. Simulation results demonstrate the effectiveness of the proposed schemes and that the option of sharing the power curtailment among all customers comes at the cost of an overall higher amount of power curtailed.", "title": "" }, { "docid": "9193aad006395bd3bd76cabf44012da5", "text": "In recent years, there is growing evidence that plant-foods polyphenols, due to their biological properties, may be unique nutraceuticals and supplementary treatments for various aspects of type 2 diabetes mellitus. In this article we have reviewed the potential efficacies of polyphenols, including phenolic acids, flavonoids, stilbenes, lignans and polymeric lignans, on metabolic disorders and complications induced by diabetes. Based on several in vitro, animal models and some human studies, dietary plant polyphenols and polyphenol-rich products modulate carbohydrate and lipid metabolism, attenuate hyperglycemia, dyslipidemia and insulin resistance, improve adipose tissue metabolism, and alleviate oxidative stress and stress-sensitive signaling pathways and inflammatory processes.
Polyphenolic compounds can also prevent the development of long-term diabetes complications including cardiovascular disease, neuropathy, nephropathy and retinopathy. Further investigations, such as human clinical studies, are needed to obtain the optimum dose and duration of supplementation with polyphenolic compounds in diabetic patients.", "title": "" }, { "docid": "6025fb8936761dcf3c6751545b430ec0", "text": "Although many sentiment lexicons in different languages exist, most are not comprehensive. In a recent sentiment analysis application, we used a large Chinese sentiment lexicon and found that it missed a large number of sentiment words used in social media. This prompted us to make a new attempt to study sentiment lexicon expansion. This paper first formulates the problem as a PU learning problem. It then proposes a new PU learning method suitable for the problem based on a neural network. The results are further enhanced with a new dictionary lookup technique and a novel polarity classification algorithm. Experimental results show that the proposed approach greatly outperforms baseline methods.", "title": "" }, { "docid": "d0bdce703addec1bc59e5ab842aedf79", "text": "This paper presents some of the findings from a recent project that conducted a virtual ethnographic study of three formal courses in higher education that use ‘Web 2.0’ or social technologies for learning and teaching. It describes the pedagogies adopted within these courses, and goes on to explore some key themes emerging from the research and relating to the pedagogical use of weblogs and wikis in particular. These themes relate primarily to the academy’s tendency to constrain and contain the possibly more radical effects of these new spaces. Despite this, the findings present a range of student and tutor perspectives which show that these technologies have significant potential as new collaborative, volatile and challenging environments for formal learning.", "title": "" }, { "docid": "17d0da8dd05d5cfb79a5f4de4449fcdd", "text": "PUBLISHING Thousands of scientists start year without journal access p.13 2017 SNEAK PEEK What the new year holds for science p.14 ECOLOGY What is causing the deaths of so many shorebirds? p.16 PHYSICS Quantum computers ready to leap out of the lab The race is on to turn scientific curiosities into working machines. A front runner in the pursuit of quantum computing uses single ions trapped in a vacuum. Quantum computing has long seemed like one of those technologies that are 20 years away, and always will be. But 2017 could be the year that the field sheds its research-only image. Computing giants Google and Microsoft recently hired a host of leading lights, and have set challenging goals for this year. Their ambition reflects a broader transition taking place at start-ups and academic research labs alike: to move from pure science towards engineering. \"People are really building things,\" says Christopher Monroe, a physicist at the University of Maryland in College Park who co-founded the start-up IonQ in 2015. \"I've never seen anything like that. It's no longer just research.\" Google started working on a form of quantum computing that harnesses superconductivity in 2014. It hopes this year, or shortly after, to perform a computation that is beyond even the most powerful 'classical' supercomputers — an elusive milestone known as quantum supremacy.
Its rival, Microsoft, is betting on an intriguing but unproven concept, topological quantum computing, and hopes to perform a first demonstration of the technology. The quantum-computing start-up scene is also heating up. Monroe plans to begin hiring in earnest this year. Physicist Robert Schoelkopf at Yale University in New Haven, Connecticut, who co-founded the start-up Quantum Circuits, and former IBM applied physicist Chad Rigetti, who set up Rigetti in", "title": "" }, { "docid": "4520cafacd4794ec942030252652ae7c", "text": "While the NAND flash memory is widely used as the storage medium in modern sensor systems, the aggressive shrinking of process geometry and an increase in the number of bits stored in each memory cell will inevitably degrade the reliability of NAND flash memory. In particular, it’s critical to enhance metadata reliability, which occupies only a small portion of the storage space, but maintains the critical information of the file system and the address translations of the storage system. Metadata damage will cause the system to crash or a large amount of data to be lost. This paper presents Asymmetric Programming, a highly reliable metadata allocation strategy for MLC NAND flash memory storage systems. Our technique exploits for the first time the property of the multi-page architecture of MLC NAND flash memory to improve the reliability of metadata. The basic idea is to keep metadata in most significant bit (MSB) pages which are more reliable than least significant bit (LSB) pages. Thus, we can achieve relatively low bit error rates for metadata. Based on this idea, we propose two strategies to optimize address mapping and garbage collection. We have implemented Asymmetric Programming on a real hardware platform. The experimental results show that Asymmetric Programming can achieve a reduction in the number of page errors of up to 99.05% with the baseline error correction scheme. OPEN ACCESS Sensors 2014, 14 18852", "title": "" }, { "docid": "ebbe58dcb5ca5374af503592e00956e3", "text": "Our generation has seen the boom and ubiquitous advent of Internet connectivity. Adversaries have been exploiting this omnipresent connectivity as an opportunity to launch cyber attacks. As a consequence, researchers around the globe devoted a big attention to data mining and machine learning with emphasis on improving the accuracy of intrusion detection system (IDS). In this paper, we present a few-shot deep learning approach for improved intrusion detection. We first trained a deep convolutional neural network (CNN) for intrusion detection. We then extracted outputs from different layers in the deep CNN and implemented a linear support vector machine (SVM) and 1-nearest neighbor (1-NN) classifier for few-shot intrusion detection. few-shot learning is a recently developed strategy to handle situation where training samples for a certain class are limited. We applied our proposed method to the two well-known datasets simulating intrusion in a military network: KDD 99 and NSL-KDD. These datasets are imbalanced, and some classes have much less training samples than others. Experimental results show that the proposed method achieved better performances than the state-of-the-art on those two datasets.", "title": "" }, { "docid": "20b00a2cc472dfec851f4aea42578a9e", "text": "The self-regulatory strength model maintains that all acts of self-regulation, self-control, and choice result in a state of fatigue called ego-depletion. 
Self-determination theory differentiates between autonomous regulation and controlled regulation. Because making decisions represents one instance of self-regulation, the authors also differentiate between autonomous choice and controlled choice. Three experiments support the hypothesis that whereas conditions representing controlled choice would be ego-depleting, conditions that represented autonomous choice would not. In Experiment 3, the authors found significant mediation by perceived self-determination of the relation between the choice condition (autonomous vs. controlled) and ego-depletion as measured by performance.", "title": "" }, { "docid": "96669cea810d2918f2d35875f87d45f2", "text": "In this paper, a new probabilistic tagging method is presented which avoids problems that Markov Model based taggers face, when they have to estimate transition probabilities from sparse data. In this tagging method, transition probabilities are estimated using a decision tree. Based on this method, a part-of-speech tagger (called TreeTagger) has been implemented which achieves 96.36 % accuracy on Penn-Treebank data which is better than that of a trigram tagger (96.06 %) on the same data.", "title": "" } ]
scidocsrr
87f1dfeed6c0a652ff01913779db2d48
RECENT ADVANCES IN PERSONAL RECOMMENDER SYSTEMS
[ { "docid": "21756eeb425854184ba2ea722a935928", "text": "Collaborative filtering aims at learning predictive models of user preferences, interests or behavior from community data, that is, a database of available user preferences. In this article, we describe a new family of model-based algorithms designed for this task. These algorithms rely on a statistical modelling technique that introduces latent class variables in a mixture model setting to discover user communities and prototypical interest profiles. We investigate several variations to deal with discrete and continuous response variables as well as with different objective functions. The main advantages of this technique over standard memory-based methods are higher accuracy, constant time prediction, and an explicit and compact model representation. The latter can also be used to mine for user communities. The experimental evaluation shows that substantial improvements in accuracy over existing methods and published results can be obtained.", "title": "" } ]
[ { "docid": "5857805620b43cafa7a18461dfb74363", "text": "In this paper, we give an overview for the shared task at the 5th CCF Conference on Natural Language Processing & Chinese Computing (NLPCC 2016): Chinese word segmentation for micro-blog texts. Different with the popular used newswire datasets, the dataset of this shared task consists of the relatively informal micro-texts. Besides, we also use a new psychometric-inspired evaluation metric for Chinese word segmentation, which addresses to balance the very skewed word distribution at different levels of difficulty. The data and evaluation codes can be downloaded from https://github.com/FudanNLP/ NLPCC-WordSeg-Weibo.", "title": "" }, { "docid": "f0958d2c952c7140c998fa13a2bf4374", "text": "OBJECTIVE\nThe objective of this study is to outline explicit criteria for assessing the contribution of qualitative empirical studies in health and medicine, leading to a hierarchy of evidence specific to qualitative methods.\n\n\nSTUDY DESIGN AND SETTING\nThis paper arose from a series of critical appraisal exercises based on recent qualitative research studies in the health literature. We focused on the central methodological procedures of qualitative method (defining a research framework, sampling and data collection, data analysis, and drawing research conclusions) to devise a hierarchy of qualitative research designs, reflecting the reliability of study conclusions for decisions made in health practice and policy.\n\n\nRESULTS\nWe describe four levels of a qualitative hierarchy of evidence-for-practice. The least likely studies to produce good evidence-for-practice are single case studies, followed by descriptive studies that may provide helpful lists of quotations but do not offer detailed analysis. More weight is given to conceptual studies that analyze all data according to conceptual themes but may be limited by a lack of diversity in the sample. Generalizable studies using conceptual frameworks to derive an appropriately diversified sample with analysis accounting for all data are considered to provide the best evidence-for-practice. Explicit criteria and illustrative examples are described for each level.\n\n\nCONCLUSION\nA hierarchy of evidence-for-practice specific to qualitative methods provides a useful guide for the critical appraisal of papers using these methods and for defining the strength of evidence as a basis for decision making and policy generation.", "title": "" }, { "docid": "1af7a41e5cac72ed9245b435c463b366", "text": "We present a novel method for key term extraction from text documents. In our method, document is modeled as a graph of semantic relationships between terms of that document. We exploit the following remarkable feature of the graph: the terms related to the main topics of the document tend to bunch up into densely interconnected subgraphs or communities, while non-important terms fall into weakly interconnected communities, or even become isolated vertices. We apply graph community detection techniques to partition the graph into thematically cohesive groups of terms. We introduce a criterion function to select groups that contain key terms discarding groups with unimportant terms. To weight terms and determine semantic relatedness between them we exploit information extracted from Wikipedia.\n Using such an approach gives us the following two advantages. First, it allows effectively processing multi-theme documents. 
Second, it is good at filtering out noise information in the document, such as, for example, navigational bars or headers in web pages.\n Evaluations of the method show that it outperforms existing methods producing key terms with higher precision and recall. Additional experiments on web pages prove that our method appears to be substantially more effective on noisy and multi-theme documents than existing methods.", "title": "" }, { "docid": "89357509bc9b4937f85ed1c1b028cc00", "text": "Rotator cuff disorders are considered to be among the most common causes of shoulder pain and disability encountered in both primary and secondary care. The general pathology of subacromial impingement generally relates to a chronic repetitive process in which the conjoint tendon of the rotator cuff undergoes repetitive compression and micro trauma as it passes under the coracoacromial arch. However, acute traumatic injuries may also lead to this condition. Diagnosis remains a clinical one, however advances in imaging modalities have enabled clinicians to have an increased understanding of the pathological process. Ultrasound scanning appears to be a justifiable and cost effective assessment tool following plain radiographs in the assessment of shoulder impingement, with MRI scans being reserved for more complex cases. A period of observed conservative management including the use of NSAIDs, physiotherapy with or without the use of subacromial steroid injections is a well-established and accepted practice. However, in young patients or following any traumatic injury to the rotator cuff, surgery should be considered early. If surgery is to be performed this should be done arthroscopically and in the case of complete rotator cuff rupture the tendon should be repaired where possible.", "title": "" }, { "docid": "e33b3ebfc46c371253cf7f68adbbe074", "text": "Although backward folding of the epiglottis is one of the signal events of the mammalian adult swallow, the epiglottis does not fold during the infant swallow. How this functional change occurs is unknown, but we hypothesize that a change in swallow mechanism occurs with maturation, prior to weaning. Using videofluoroscopy, we found three characteristic patterns of swallowing movement at different ages in the pig: an infant swallow, a transitional swallow and a post-weaning (juvenile or adult) swallow. In animals of all ages, the dorsal region of the epiglottis and larynx was held in an intranarial position by a muscular sphincter formed by the palatopharyngeal arch. In the infant swallow, increasing pressure in the oropharynx forced a liquid bolus through the piriform recesses on either side of a relatively stationary epiglottis into the esophagus. As the infant matured, the palatopharyngeal arch and the soft palate elevated at the beginning of the swallow, so exposing a larger area of the epiglottis to bolus pressure. In transitional swallows, the epiglottis was tilted backward relatively slowly by a combination of bolus pressure and squeezing of the epiglottis by closure of the palatopharyngeal sphincter. The bolus, however, traveled alongside but never over the tip of the epiglottis. In the juvenile swallow, the bolus always passed over the tip of the epiglottis. The tilting of the epiglottis resulted from several factors, including the action of the palatopharyngeal sphincter, higher bolus pressure exerted on the epiglottis and the allometry of increased size.
In both transitional and juvenile swallows, the subsequent relaxation of the palatopharyngeal sphincter released the epiglottis, which sprang back to its original intranarial position.", "title": "" }, { "docid": "ee4dbe3dc0352a60c61ec8d36ebda56d", "text": "This paper proposes a two-axis-decoupled solar tracker based on parallel mechanism. Utilizing Grassmann line geometry, the type design of the two-axis solar tracker is investigated. Then, singularity is studied to obtain the workspace without singularities. By using the virtual work principle, the inverse dynamics is derived to find out the driving torque. Taking Beijing as a sample city where the solar tracker is placed, the motion trajectory of the tracker is planned to collect the maximum solar energy. The position of the mass center of the solar mirror on the platform is optimized to minimize the driving torque. The driving torque of the proposed tracker is compared with that of a conventional serial tracker, which shows that the proposed tracker can greatly reduce the driving torque and the reducers with large reduction ratio are not necessary. Thus, the complexity and power dissipation of the system can be reduced.", "title": "" }, { "docid": "f7e4c0300f1483883956be3cb5ccc174", "text": "Despite of the fact that graph-based methods are gaining more and more popularity in different scientific areas, it has to be considered that the choice of an appropriate algorithm for a given application is still the most crucial task. The lack of a large database of graphs makes the task of comparing the performance of different graph matching algorithms difficult, and often the selection of an algorithm is made on the basis of a few experimental results available. In this paper we present an experimental comparative evaluation of the performance of four graph matching algorithms. In order to perform this comparison, we have built and made available a large database of graphs, which is also described in detail in this article. 2002 Elsevier Science B.V. All rights reserved.", "title": "" }, { "docid": "bdf3afc900c92867c2af9fccabe27451", "text": "In conventional HTTP-based adaptive streaming (HAS), a video source is encoded at multiple levels of constant bitrate representations, and a client makes its representation selections according to the measured network bandwidth. While greatly simplifying adaptation to the varying network conditions, this strategy is not the best for optimizing the video quality experienced by end users. Quality fluctuation can be reduced if the natural variability of video content is taken into consideration. In this work, we study the design of a client rate adaptation algorithm to yield consistent video quality. We assume that clients have visibility into incoming video within a finite horizon. We also take advantage of the client-side video buffer, by using it as a breathing room for not only network bandwidth variability, but also video bitrate variability. The challenge, however, lies in how to balance these two variabilities to yield consistent video quality without risking a buffer underrun. We propose an optimization solution that uses an online algorithm to adapt the video bitrate step-by-step, while applying dynamic programming at each step. 
We incorporate our solution into PANDA -- a practical rate adaptation algorithm designed for HAS deployment at scale.", "title": "" }, { "docid": "02d8a6c039c3ab37e78160c7a9831714", "text": "In this paper we present the design, fabrication and demonstration of an X-band phased array capable of wide-angle scanning. A new non-symmetric element for wideband tightly coupled dipole arrays is integrated with a low-profile microstrip balun printed on the array ground plane. The feed connects to the array aperture with vertical twin-wire transmission lines that concurrently perform impedance matching. The proposed element arms are identical near the center feed portion but dissimilar towards the ends, forming a ball-and-cup. A 64 element array prototype is verified experimentally and compared to numerical simulation. The array aperture is placed λ/7 (at 8 GHz) above a ground plane and shown to maintain a VSWR < 2 from 8–12.5 GHz while scanning up to 75° and 60° in E and H-plane, respectively.", "title": "" }, { "docid": "72b3fbd8c7f03a4ad1e36ceb5418cba6", "text": "The risk for multifactorial diseases is determined by risk factors that frequently apply across disorders (universal risk factors). To investigate unresolved issues on etiology of and individual’s susceptibility to multifactorial diseases, research focus should shift from single determinant-outcome relations to effect modification of universal risk factors. We present a model to investigate universal risk factors of multifactorial diseases, based on a single risk factor, a single outcome measure, and several effect modifiers. Outcome measures can be disease overriding, such as clustering of disease, frailty and quality of life. “Life course epidemiology” can be considered as a specific application of the proposed model, since risk factors and effect modifiers of multifactorial diseases typically have a chronic aspect. Risk factors are categorized into genetic, environmental, or complex factors, the latter resulting from interactions between (multiple) genetic and environmental factors (an example of a complex factor is overweight). The proposed research model of multifactorial diseases assumes that determinant-outcome relations differ between individuals because of modifiers, which can be divided into three categories. First, risk-factor modifiers that determine the effect of the determinant (such as factors that modify gene-expression in case of a genetic determinant). Second, outcome modifiers that determine the expression of the studied outcome (such as medication use). Third, generic modifiers that determine the susceptibility for multifactorial diseases (such as age). A study to assess disease risk during life requires phenotype and outcome measurements in multiple generations with a long-term follow up. Multiple generations will also enable to separate genetic and environmental factors. Traditionally, representative individuals (probands) and their first-degree relatives have been included in this type of research. We put forward that a three-generation design is the optimal approach to investigate multifactorial diseases. This design has statistical advantages (precision, multiple-informants, separation of non-genetic and genetic familial transmission, direct haplotype assessment, quantify genetic effects), enables unique possibilities to study social characteristics (socioeconomic mobility, partner preferences, between-generation similarities), and offers practical benefits (efficiency, lower non-response). 
LifeLines is a study based on these concepts. It will be carried out in a representative sample of 165,000 participants from the northern provinces of the Netherlands. LifeLines will contribute to the understanding of how universal risk factors are modified to influence the individual susceptibility to multifactorial diseases, not only at one stage of life but cumulatively over time: the lifeline.", "title": "" }, { "docid": "e12ac0716b29f35fff1ec51b1abb6326", "text": "In my commentary in response to the 3 articles (McKenzie & Lounsbery, 2013; Rink, 2013; Ward, 2013), I focus on 3 areas: (a) content knowledge, (b) a holistic approach to physical education, and (c) policy impact. I use the term quality teaching rather than \"teacher effectiveness.\" Quality teaching is a term with the potential to move our attention beyond a focus merely on issues of effectiveness relating to the achievement of prespecified objectives. I agree with Ward that teacher content knowledge is limited in physical education, and I argue that if the student does not have a connection to or relationship with the content, this will diminish their learning gains. I also argue for a more holistic approach to physical education coming from a broader conception. Physical educators who teach the whole child advocate for a plethora of physical activity, skills, knowledge, and positive attitudes that foster healthy and active playful lifestyles. Play is a valuable educational experience. I also endorse viewing assessment from different perspectives and discuss assessment through a social-critical political lens. The 3 articles also have implications for policy. Physical education is much broader than just physical activity, and we harm the future potential of our field if we adopt a narrow agenda. Looking to the future, I propose that we broaden the kinds of research that we value, support, and appreciate in our field.", "title": "" }, { "docid": "77b4be1fb0b87eb1ee0399c073a7b78f", "text": "In this work, we present an interactive system for visual analysis of urban traffic congestion based on GPS trajectories. For these trajectories we develop strategies to extract and derive traffic jam information. After cleaning the trajectories, they are matched to a road network. Subsequently, traffic speed on each road segment is computed and traffic jam events are automatically detected. Spatially and temporally related events are concatenated in, so-called, traffic jam propagation graphs. These graphs form a high-level description of a traffic jam and its propagation in time and space. Our system provides multiple views for visually exploring and analyzing the traffic condition of a large city as a whole, on the level of propagation graphs, and on road segment level. Case studies with 24 days of taxi GPS trajectories collected in Beijing demonstrate the effectiveness of our system.", "title": "" }, { "docid": "85b72dedb0c874fcfbb71c1d6f9fce42", "text": "In this paper, we present an optimization of Odlyzko and Schönhage algorithm that computes efficiently Zeta function at large height on the critical line, together with computation of zeros of the Riemann Zeta function thanks to an implementation of this technique. The first family of computations consists in the verification of the Riemann Hypothesis on all the first 10 non trivial zeros. The second family of computations consists in verifying the Riemann Hypothesis at very large height for different height, while collecting statistics in these zones. 
For example, we were able to compute two billion zeros from the 10-th zero of the Riemann Zeta function.", "title": "" }, { "docid": "ff6b4840787027df75873f38fbb311b4", "text": "Electronic healthcare (eHealth) systems have replaced paper-based medical systems due to the attractive features such as universal accessibility, high accuracy, and low cost. As a major component of eHealth systems, mobile healthcare (mHealth) applies mobile devices, such as smartphones and tablets, to enable patient-to-physician and patient-to-patient communications for better healthcare and quality of life (QoL). Unfortunately, patients' concerns on potential leakage of personal health records (PHRs) is the biggest stumbling block. In current eHealth/mHealth networks, patients' medical records are usually associated with a set of attributes like existing symptoms and undergoing treatments based on the information collected from portable devices. To guarantee the authenticity of those attributes, PHRs should be verifiable. However, due to the linkability between identities and PHRs, existing mHealth systems fail to preserve patient identity privacy while providing medical services. To solve this problem, we propose a decentralized system that leverages users' verifiable attributes to authenticate each other while preserving attribute and identity privacy. Moreover, we design authentication strategies with progressive privacy requirements in different interactions among participating entities. Finally, we have thoroughly evaluated the security and computational overheads for our proposed schemes via extensive simulations and experiments.", "title": "" }, { "docid": "e5a18d6df921ab96da8e106cdb4eeac7", "text": "This article extends psychological methods and concepts into a domain that is as profoundly consequential as it is poorly understood: intelligence analysis. We report findings from a geopolitical forecasting tournament that assessed the accuracy of more than 150,000 forecasts of 743 participants on 199 events occurring over 2 years. Participants were above average in intelligence and political knowledge relative to the general population. Individual differences in performance emerged, and forecasting skills were surprisingly consistent over time. Key predictors were (a) dispositional variables of cognitive ability, political knowledge, and open-mindedness; (b) situational variables of training in probabilistic reasoning and participation in collaborative teams that shared information and discussed rationales (Mellers, Ungar, et al., 2014); and (c) behavioral variables of deliberation time and frequency of belief updating. We developed a profile of the best forecasters; they were better at inductive reasoning, pattern detection, cognitive flexibility, and open-mindedness. They had greater understanding of geopolitics, training in probabilistic reasoning, and opportunities to succeed in cognitively enriched team environments. Last but not least, they viewed forecasting as a skill that required deliberate practice, sustained effort, and constant monitoring of current affairs.", "title": "" }, { "docid": "c00e78121637ee9bcf1640c41204afd0", "text": "In this paper we present a methodology for analyzing polyphonic musical passages comprised by notes that exhibit a harmonically fixed spectral profile (such as piano notes). 
Taking advantage of this unique note structure we can model the audio content of the musical passage by a linear basis transform and use non-negative matrix decomposition methods to estimate the spectral profile and the temporal information of every note. This approach results in a very simple and compact system that is not knowledge-based, but rather learns notes by observation.", "title": "" }, { "docid": "efffd36e611546d2da975f8a182fb5a5", "text": "Annona muricata is a member of the Annonaceae family and is a fruit tree with a long history of traditional use. A. muricata, also known as soursop, graviola and guanabana, is an evergreen plant that is mostly distributed in tropical and subtropical regions of the world. The fruits of A. muricata are extensively used to prepare syrups, candies, beverages, ice creams and shakes. A wide array of ethnomedicinal activities is contributed to different parts of A. muricata, and indigenous communities in Africa and South America extensively use this plant in their folk medicine. Numerous investigations have substantiated these activities, including anticancer, anticonvulsant, anti-arthritic, antiparasitic, antimalarial, hepatoprotective and antidiabetic activities. Phytochemical studies reveal that annonaceous acetogenins are the major constituents of A. muricata. More than 100 annonaceous acetogenins have been isolated from leaves, barks, seeds, roots and fruits of A. muricata. In view of the immense studies on A. muricata, this review strives to unite available information regarding its phytochemistry, traditional uses and biological activities.", "title": "" }, { "docid": "f22375b6d29a83815aedd999cb945027", "text": "INTRODUCTION\nNumerous methods for motor unit number estimation (MUNE) have been developed. The objective of this article is to summarize and compare the major methods and the available data regarding their reproducibility, validity, application, refinement, and utility.\n\n\nMETHODS\nUsing specified search criteria, a systematic review of the literature was performed. Reproducibility, normative data, application to specific diseases and conditions, technical refinements, and practicality were compiled into a comprehensive database and analyzed.\n\n\nRESULTS\nThe most commonly reported MUNE methods are the incremental, multiple-point stimulation, spike-triggered averaging, and statistical methods. All have established normative data sets and high reproducibility. MUNE provides quantitative assessments of motor neuron loss and has been applied successfully to the study of many clinical conditions, including amyotrophic lateral sclerosis and normal aging.\n\n\nCONCLUSIONS\nMUNE is an important research technique in human subjects, providing important data regarding motor unit populations and motor unit loss over time.", "title": "" }, { "docid": "b7f21081cfd7c87cfce191978ecc218a", "text": "In less than half a century, molecular markers have totally changed our view of nature, and in the process they have evolved themselves. However, all of the molecular methods developed over the years to detect variation do so in one of only three conceptually different classes of marker: protein variants (allozymes), DNA sequence polymorphism and DNA repeat variation. The latest techniques promise to provide cheap, high-throughput methods for genotyping existing markers, but might other traditional approaches offer better value for some applications?", "title": "" } ]
scidocsrr
285fda4fd9e274640892dff2a13211cb
Derivation of GFDM based on OFDM principles
[ { "docid": "d1f8ee3d6dbc7ddc76b84ad2b0bfdd16", "text": "Cognitive radio technology addresses the limited availability of wireless spectrum and inefficiency of spectrum usage. Cognitive Radio (CR) devices sense their environment, detect spatially unused spectrum and opportunistically access available spectrum without creating harmful interference to the incumbents. In cellular systems with licensed spectrum, the efficient utilization of the spectrum as well as the protection of primary users is equally important, which imposes opportunities and challenges for the application of CR. This paper introduces an experimental framework for 5G cognitive radio access in current 4G LTE cellular systems. It can be used to study CR concepts in different scenarios, such as 4G to 5G system migrations, machine-type communications, device-to-device communications, and load balancing. Using our framework, selected measurement results are presented that compare Long Term Evolution (LTE) Orthogonal Frequency Division Multiplex (OFDM) with a candidate 5G waveform called Generalized Frequency Division Multiplexing (GFDM) and quantify the benefits of GFDM in CR scenarios.", "title": "" }, { "docid": "d23fc72c7fb3cbbc9120d2ab9fc14e75", "text": "Generalized frequency division multiplexing (GFDM) is a new concept that can be seen as a generalization of traditional OFDM. The scheme is based on the filtered multi-carrier approach and can offer an increased flexibility, which will play a significant role in future cellular applications. In this paper we present the benefits of the pulse shaped carriers in GFDM. We show that based on the FFT/IFFT algorithm, the scheme can be implemented with reasonable computational effort. Further, to be able to relate the results to the recent LTE standard, we present a suitable set of parameters for GFDM.", "title": "" } ]
[ { "docid": "2d17b30942ce0984dcbcf5ca5ba38bd2", "text": "We review the literature on the relation between narcissism and consumer behavior. Consumer behavior is sometimes guided by self-related motives (e.g., self-enhancement) rather than by rational economic considerations. Narcissism is a case in point. This personality trait reflects a self-centered, self-aggrandizing, dominant, and manipulative orientation. Narcissists are characterized by exhibitionism and vanity, and they see themselves as superior and entitled. To validate their grandiose self-image, narcissists purchase high-prestige products (i.e., luxurious, exclusive, flashy), show greater interest in the symbolic than utilitarian value of products, and distinguish themselves positively from others via their materialistic possessions. Our review lays the foundation for a novel methodological approach in which we explore how narcissism influences eye movement behavior during consumer decision-making. We conclude with a description of our experimental paradigm and report preliminary results. Our findings will provide insight into the mechanisms underlying narcissists' conspicuous purchases. They will also likely have implications for theories of personality, consumer behavior, marketing, advertising, and visual cognition.", "title": "" }, { "docid": "c3d06acdf8b74535fa22ed08420d5433", "text": "Generative adversarial networks have been shown to generate very realistic images by learning through a min-max game. Furthermore, these models are known to model image spaces more easily when conditioned on class labels. In this work, we consider conditioning on fine-grained textual descriptions, thus also enabling us to produce realistic images that correspond to the input text description. Additionally, we consider the task of learning disentangled representations for images through special latent codes, such that we can move them as knobs to alter the generated image. These latent codes take on very interpretable roles and are learnt in a completely unsupervised manner, using ideas from InfoGAN. We show that the learnt latent codes that encode much more variance and semantic interpretability as compared to standard GANs by experimenting on two datasets.", "title": "" }, { "docid": "b4c73776e6a1004f75991df0a26ad407", "text": "Recurrent urinary tract infections (UTIs) are common, especially in women. Low-dose daily or postcoital antimicrobial prophylaxis is effective for prevention of recurrent UTIs and women can self-diagnose and self-treat a new UTI with antibiotics. The increasing resistance rates of Escherichia coli to antimicrobial agents has, however, stimulated interest in nonantibiotic methods for the prevention of UTIs. This article reviews the literature on efficacy of different forms of nonantibiotic prophylaxis. Future studies with lactobacilli strains (oral and vaginal) and the oral immunostimulant OM-89 are warranted.", "title": "" }, { "docid": "c9b7832cd306fc022e4a376f10ee8fc8", "text": "This paper describes a study to assess the influence of a variety of factors on reported level of presence in immersive virtual environments. It introduces the idea of stacking depth, that is, where a participant can simulate the process of entering the virtual environment while already in such an environment, which can be repeated to several levels of depth. An experimental study including 24 subjects was carried out. 
Half of the subjects were transported between environments by using virtual head-mounted displays, and the other half by going through doors. Three other binary factors were whether or not gravity operated, whether or not the subject experienced a virtual precipice, and whether or not the subject was followed around by a virtual actor. Visual, auditory, and kinesthetic representation systems and egocentric/exocentric perceptual positions were assessed by a preexperiment questionnaire. Presence was assessed by the subjects as their sense of being there, the extent to which they experienced the virtual environments as more the presenting reality than the real world in which the experiment was taking place, and the extent to which the subject experienced the virtual environments as places visited rather than images seen. A logistic regression analysis revealed that subjective reporting of presence was significantly positively associated with visual and kinesthetic representation systems, and negatively with the auditory system. This was not surprising since the virtual reality system used was primarily visual. The analysis also showed a significant and positive association with stacking level depth for those who were transported between environments by using the virtual HMD, and a negative association for those who were transported through doors. Finally, four of the subjects moved their real left arm to match movement of the left arm of the virtual body displayed by the system. These four scored significantly higher on the kinesthetic representation system than the remainder of the subjects.", "title": "" }, { "docid": "4b6a4f9d91bc76c541f4879a1a684a3f", "text": "Query auto-completion (QAC) is one of the most prominent features of modern search engines. The list of query candidates is generated according to the prefix entered by the user in the search box and is updated on each new key stroke. Query prefixes tend to be short and ambiguous, and existing models mostly rely on the past popularity of matching candidates for ranking. However, the popularity of certain queries may vary drastically across different demographics and users. For instance, while instagram and imdb have comparable popularities overall and are both legitimate candidates to show for prefix i, the former is noticeably more popular among young female users, and the latter is more likely to be issued by men.\n In this paper, we present a supervised framework for personalizing auto-completion ranking. We introduce a novel labelling strategy for generating offline training labels that can be used for learning personalized rankers. We compare the effectiveness of several user-specific and demographic-based features and show that among them, the user's long-term search history and location are the most effective for personalizing auto-completion rankers. We perform our experiments on the publicly available AOL query logs, and also on the larger-scale logs of Bing. The results suggest that supervised rankers enhanced by personalization features can significantly outperform the existing popularity-based base-lines, in terms of mean reciprocal rank (MRR) by up to 9%.", "title": "" }, { "docid": "0a2d9103ca2b5c6b4c1f1efef3143d4f", "text": "Recently, a number of coding techniques have been reported to achieve near toll quality synthesized speech at bit-rates around 4 kb/s. These include variants of Code Excited Linear Prediction (CELP), Sinusoidal Transform Coding (STC) and Multi-Band Excitation (MBE). 
While CELP has been an effective technique for bit-rates above 6 kb/s, STC, MBE, Waveform Interpolation (WI) and Mixed Excitation Linear Prediction (MELP) [1, 2] models seem to be attractive at bit-rates below 3 kb/s. In this paper, we present a system to encode speech with high quality using MELP, a technique previously demonstrated to be effective at bit-rates of 1.6–2.4 kb/s. We have enhanced the MELP model producing significantly higher speech quality at bit-rates above 2.4 kb/s. We describe the development and testing of a high quality 4 kb/s MELP coder.", "title": "" }, { "docid": "381103e7aced15dbc42fd643e0bf69c7", "text": "Lifted graphical models provide a language for expressing dependencies between different types of entities, their attributes, and their diverse relations, as well as techniques for probabilistic reasoning in such multi-relational domains. In this survey, we review a general form for a lifted graphical model, a par-factor graph, and show how a number of existing statistical relational representations map to this formalism. We discuss inference algorithms, including lifted inference algorithms, that efficiently compute the answers to probabilistic queries over such models. We also review work in learning lifted graphical models from data. There is a growing need for statistical relational models (whether they go by that name or another), as we are inundated with data which is a mix of structured and unstructured, with entities and relations extracted in a noisy manner from text, and with the need to reason effectively with this data. We hope that this synthesis of ideas from many different research groups will provide an accessible starting point for new researchers in this expanding field.", "title": "" }, { "docid": "ded208999b66a677d90b9e713f3d32ed", "text": "We present Spectrogram, a machine learning based statistical anomaly detection (AD) sensor for defense against web-layer code-injection attacks. These attacks include PHP file inclusion, SQL-injection and cross-site scripting; memory-layer exploits such as buffer overflows are addressed as well. Statistical AD sensors offer the advantage of being driven by the data that is being protected and not by malcode samples captured in the wild. While models using higher order statistics can often improve accuracy, trade-offs with false-positive rates and model efficiency remain a limiting usability factor. This paper presents a new model and sensor framework that offers a favorable balance under this constraint and demonstrates improvement over some existing approaches. Spectrogram is a network situated sensor that dynamically assembles packets to reconstruct content flows and learns to recognize legitimate web-layer script input. We describe an efficient model for this task in the form of a mixture of Markov chains and derive the corresponding training algorithm.
Our evaluations show significant detection results on an array of real world web layer attacks, comparing favorably against other AD approaches.", "title": "" }, { "docid": "ecd67367aed0f3f7e3218cdec8a392b4", "text": "OBJECTIVE\nTo investigate the efficacy of home-based specific stabilizing exercises focusing on the local stabilizing muscles as the only intervention in the treatment of persistent postpartum pelvic girdle pain.\n\n\nDESIGN\nA prospective, randomized, single-blinded, clinically controlled study.\n\n\nSUBJECTS\nEighty-eight women with pelvic girdle pain were recruited 3 months after delivery.\n\n\nMETHODS\nThe treatment consisted of specific stabilizing exercises targeting the local trunk muscles. The reference group had a single telephone contact with a physiotherapist. Primary outcome was disability measured with Oswestry Disability Index. Secondary outcomes were pain, health-related quality of life (EQ-5D), symptom satisfaction, and muscle function.\n\n\nRESULTS\nNo significant differences between groups could be found at 3- or 6-month follow-up regarding primary outcome in disability. Within-group comparisons showed some improvement in both groups in terms of disability, pain, symptom satisfaction and muscle function compared with baseline, although the majority still experienced pelvic girdle pain.\n\n\nCONCLUSION\nTreatment with this home-training concept of specific stabilizing exercises targeting the local muscles was no more effective in improving consequences of persistent postpartum pelvic girdle pain than the clinically natural course. Regardless of whether treatment with specific stabilizing exercises was carried out, the majority of women still experienced some back pain almost one year after pregnancy.", "title": "" }, { "docid": "b4c12965618d7d3a8049a91b513ca896", "text": "There is a convergence in recent theories of creativity that go beyond characteristics and cognitive processes of individuals to recognize the importance of the social construction of creativity. In parallel, there has been a rise in social computing supporting the collaborative construction of knowledge. The panel will discuss the challenges and opportunities from the confluence of these two developments by bringing together the contrasting and controversial perspective of the individual panel members. It will synthesize from different perspectives an analytic framework to understand these new developments, and how to promote rigorous research methods and how to identify the unique challenges in developing evaluation and assessment methods for creativity research.", "title": "" }, { "docid": "91e38df08894f59e134f83ae532b09e7", "text": "Many functional network properties of the human brain have been identified during rest and task states, yet it remains unclear how the two relate. We identified a whole-brain network architecture present across dozens of task states that was highly similar to the resting-state network architecture. The most frequent functional connectivity strengths across tasks closely matched the strengths observed at rest, suggesting this is an \"intrinsic,\" standard architecture of functional brain organization. Furthermore, a set of small but consistent changes common across tasks suggests the existence of a task-general network architecture distinguishing task states from rest. 
These results indicate the brain's functional network architecture during task performance is shaped primarily by an intrinsic network architecture that is also present during rest, and secondarily by evoked task-general and task-specific network changes. This establishes a strong relationship between resting-state functional connectivity and task-evoked functional connectivity-areas of neuroscientific inquiry typically considered separately.", "title": "" }, { "docid": "d9eed063ea6399a8f33c6cbda3a55a62", "text": "Current and future (conventional) notations used in Conceptual Modeling Techniques should have a precise (formal) semantics to provide a well-defined software development process, in order to go from specification to implementation in an automated way. To achieve this objective, the OO-Method approach to Information Systems Modeling presented in this paper attempts to overcome the conventional (informal)/formal dichotomy by selecting the best ideas from both approaches. The OO-Method makes a clear distinction between the problem space (centered on what the system is) and the solution space (centered on how it is implemented as a software product). It provides a precise, conventional graphical notation to obtain a system description at the problem space level, however this notation is strictly based on a formal OO specification language that determines the conceptual modeling constructs needed to obtain the system specification. An abstract execution model determines how to obtain the software representations corresponding to these conceptual modeling constructs. In this way, the final software product can be obtained in an automated way. r 2001 Elsevier Science Ltd. All rights reserved.", "title": "" }, { "docid": "e8167685fcbcea1a4c6a825e50eb45d2", "text": "Statistical methods have been widely employed to study the fundamental properties of language. In recent years, methods from complex and dynamical systems proved useful to create several language models. Despite the large amount of studies devoted to represent texts with physical models, only a limited number of studies have shown how the properties of the underlying physical systems can be employed to improve the performance of natural language processing tasks. In this paper, I address this problem by devising complex networks methods that are able to improve the performance of current statistical methods. Using a fuzzy classification strategy, I show that the topological properties extracted from texts complement the traditional textual description. In several cases, the performance obtained with hybrid approaches outperformed the results obtained when only traditional or networked methods were used. Because the proposed model is generic, the framework devised here could be straightforwardly used to study similar textual applications where the topology plays a pivotal role in the description of the interacting agents.", "title": "" }, { "docid": "bb8d6adec85cbfd773051052d1051860", "text": "Permutation methods can provide exact control of false positives and allow the use of non-standard statistics, making only weak assumptions about the data. With the availability of fast and inexpensive computing, their main limitation would be some lack of flexibility to work with arbitrary experimental designs. 
In this paper we report on results on approximate permutation methods that are more flexible with respect to the experimental design and nuisance variables, and conduct detailed simulations to identify the best method for settings that are typical for imaging research scenarios. We present a generic framework for permutation inference for complex general linear models (GLMS) when the errors are exchangeable and/or have a symmetric distribution, and show that, even in the presence of nuisance effects, these permutation inferences are powerful while providing excellent control of false positives in a wide range of common and relevant imaging research scenarios. We also demonstrate how the inference on GLM parameters, originally intended for independent data, can be used in certain special but useful cases in which independence is violated. Detailed examples of common neuroimaging applications are provided, as well as a complete algorithm - the \"randomise\" algorithm - for permutation inference with the GLM.", "title": "" }, { "docid": "56b706edc6d1b6a2ff64770cb3f79c2e", "text": "The ancient oriental game of Go has long been considered a grand challenge for artificial intelligence. For decades, computer Go has defied the classical methods in game tree search that worked so successfully for chess and checkers. However, recent play in computer Go has been transformed by a new paradigm for tree search based on Monte-Carlo methods. Programs based on Monte-Carlo tree search now play at human-master levels and are beginning to challenge top professional players. In this paper, we describe the leading algorithms for Monte-Carlo tree search and explain how they have advanced the state of the art in computer Go.", "title": "" }, { "docid": "be7ad6ff14910b8198b1e94003418989", "text": "An important ability of a robot that interacts with the environment and manipulates objects is to deal with the uncertainty in sensory data. Sensory information is necessary to, for example, perform online assessment of grasp stability. We present methods to assess grasp stability based on haptic data and machine-learning methods, including AdaBoost, support vector machines (SVMs), and hidden Markov models (HMMs). In particular, we study the effect of different sensory streams to grasp stability. This includes object information such as shape; grasp information such as approach vector; tactile measurements from fingertips; and joint configuration of the hand. Sensory knowledge affects the success of the grasping process both in the planning stage (before a grasp is executed) and during the execution of the grasp (closed-loop online control). In this paper, we study both of these aspects. We propose a probabilistic learning framework to assess grasp stability and demonstrate that knowledge about grasp stability can be inferred using information from tactile sensors. Experiments on both simulated and real data are shown. The results indicate that the idea to exploit the learning approach is applicable in realistic scenarios, which opens a number of interesting venues for the future research.", "title": "" }, { "docid": "5d6e1a7dfa5bc4cc1332d225342a01f7", "text": "Hashing seeks an embedding of high-dimensional objects into a similarity-preserving low-dimensional Hamming space such that similar objects are indexed by binary codes with small Hamming distances. A variety of hashing methods have been developed, but most of them resort to a single view (representation) of data. 
However, objects are often described by multiple representations. For instance, images are described by a few different visual descriptors (such as SIFT, GIST, and HOG), so it is desirable to incorporate multiple representations into hashing, leading to multi-view hashing. In this paper we present a deep network for multi-view hashing, referred to as deep multi-view hashing, where each layer of hidden nodes is composed of view-specific and shared hidden nodes, in order to learn individual and shared hidden spaces from multiple views of data. Numerical experiments on image datasets demonstrate the useful behavior of our deep multi-view hashing (DMVH), compared to recently-proposed multi-modal deep network as well as existing shallow models of hashing.", "title": "" }, { "docid": "7d4fa882673f142c4faa8a4ff3c2a205", "text": "This paper presents a different perspective on diversity in search results: diversity by proportionality. We consider a result list most diverse, with respect to some set of topics related to the query, when the number of documents it provides on each topic is proportional to the topic's popularity. Consequently, we propose a framework for optimizing proportionality for search result diversification, which is motivated by the problem of assigning seats to members of competing political parties. Our technique iteratively determines, for each position in the result ranked list, the topic that best maintains the overall proportionality. It then selects the best document on this topic for this position. We demonstrate empirically that our method significantly outperforms the top performing approach in the literature not only on our proposed metric for proportionality, but also on several standard diversity measures. This result indicates that promoting proportionality naturally leads to minimal redundancy, which is a goal of the current diversity approaches.", "title": "" }, { "docid": "f2379daa6c569d797fd000de7e42cae9", "text": "Critical infrastructure components nowadays use microprocessor-based embedded control systems. It is often infeasible, however, to employ the same level of security measures used in general purpose computing systems, due to the stringent performance and resource constraints of embedded control systems. Furthermore, as software sits atop and relies on the firmware for proper operation, software-level techniques cannot detect malicious behavior of the firmware. In this work, we propose ConFirm, a low-cost technique to detect malicious modifications in the firmware of embedded control systems by measuring the number of low-level hardware events that occur during the execution of the firmware. In order to count these events, ConFirm leverages the Hardware Performance Counters (HPCs), which readily exist in many embedded processors. We evaluate the detection capability and performance overhead of the proposed technique on various types of firmware running on ARM- and PowerPC-based embedded processors. Experimental results demonstrate that ConFirm can detect all the tested modifications with low performance overhead.", "title": "" }, { "docid": "da5c1445453853e23477bfea79fd4605", "text": "This paper presents an 8-bit column-driver IC with improved deviation of voltage output (DVO) for thin-film-transistor (TFT) liquid crystal displays (LCDs). The various DVO results contributed by the output buffer of a column driver are predicted by using Monte Carlo simulation under different variation conditions. 
Relying on this prediction, a better compromise can be achieved between DVO and chip size. This work was implemented using 0.35-μm CMOS technology and the measured maximum DVO is only 6.2 mV.", "title": "" } ]
scidocsrr
8626d44237740695b8dd963290f7f0b9
Influence Maximization Across Partially Aligned Heterogenous Social Networks
[ { "docid": "b9daa134744b8db757fc0857f479bd70", "text": "Influence is a complex and subtle force that governs the dynamics of social networks as well as the behaviors of involved users. Understanding influence can benefit various applications such as viral marketing, recommendation, and information retrieval. However, most existing works on social influence analysis have focused on verifying the existence of social influence. Few works systematically investigate how to mine the strength of direct and indirect influence between nodes in heterogeneous networks.\n To address the problem, we propose a generative graphical model which utilizes the heterogeneous link information and the textual content associated with each node in the network to mine topic-level direct influence. Based on the learned direct influence, a topic-level influence propagation and aggregation algorithm is proposed to derive the indirect influence between nodes. We further study how the discovered topic-level influence can help the prediction of user behaviors. We validate the approach on three different genres of data sets: Twitter, Digg, and citation networks. Qualitatively, our approach can discover interesting influence patterns in heterogeneous networks. Quantitatively, the learned topic-level influence can greatly improve the accuracy of user behavior prediction.", "title": "" }, { "docid": "ee25e4acd98193e7dc3f89f3f98e42e0", "text": "Kempe et al. [4] (KKT) showed the problem of influence maximization is NP-hard and a simple greedy algorithm guarantees the best possible approximation factor in PTIME. However, it has two major sources of inefficiency. First, finding the expected spread of a node set is #P-hard. Second, the basic greedy algorithm is quadratic in the number of nodes. The first source is tackled by estimating the spread using Monte Carlo simulation or by using heuristics[4, 6, 2, 5, 1, 3]. Leskovec et al. proposed the CELF algorithm for tackling the second. In this work, we propose CELF++ and empirically show that it is 35-55% faster than CELF.", "title": "" } ]
[ { "docid": "e795381a345bf3cab74ddfd4d4763c1e", "text": "Context: Recent research discusses the use of ontologies, dictionaries and thesaurus as a means to improve activity labels of process models. However, the trade-off between quality improvement and extra effort is still an open question. It is suspected that ontology-based support could require additional effort for the modeler. Objective: In this paper, we investigate to which degree ontology-based support potentially increases the effort of modeling. We develop a theoretical perspective grounded in cognitive psychology, which leads us to the definition of three design principles for appropriate ontology-based support. The objective is to evaluate the design principles through empirical experimentation. Method: We tested the effect of presenting relevant content from the ontology to the modeler by means of a quantitative analysis. We performed controlled experiments using a prototype, which generates a simplified and context-aware visual representation of the ontology. It logs every action of the process modeler for analysis. The experiment refers to novice modelers and was performed as between-subject design with vs. without ontology-based support. It was carried out with two different samples. Results: Part of the effort-related variables we measured showed significant statistical difference between the group with and without ontology-based support. Overall, for the collected data, the ontology support achieved good results. Conclusion: We conclude that it is feasible to provide ontology-based support to the modeler in order to improve process modeling without strongly compromising time consumption and cognitive effort.", "title": "" }, { "docid": "c10a58037c4b13953236831af304e660", "text": "A 32 nm generation logic technology is described incorporating 2nd-generation high-k + metal-gate technology, 193 nm immersion lithography for critical patterning layers, and enhanced channel strain techniques. The transistors feature 9 Aring EOT high-k gate dielectric, dual band-edge workfunction metal gates, and 4th-generation strained silicon, resulting in the highest drive currents yet reported for NMOS and PMOS. Process yield, performance and reliability are demonstrated on a 291 Mbit SRAM test vehicle, with 0.171 mum2 cell size, containing >1.9 billion transistors.", "title": "" }, { "docid": "d90add899632bab1c5c2637c7080f717", "text": "Software Testing plays a important role in Software development because it can minimize the development cost. We Propose a Technique for Test Sequence Generation using UML Model Sequence Diagram.UML models give a lot of information that should not be ignored in testing. In This paper main features extract from Sequence Diagram after that we can write the Java Source code for that Features According to ModelJunit Library. ModelJUnit is a extended library of JUnit Library. By using that Source code we can Generate Test Case Automatic and Test Coverage. This paper describes a systematic Test Case Generation Technique performed on model based testing (MBT) approaches By Using Sequence Diagram.", "title": "" }, { "docid": "ef77d042a04b7fa704f13a0fa5e73688", "text": "The nature of the cellular basis of learning and memory remains an often-discussed, but elusive problem in neurobiology. A popular model for the physiological mechanisms underlying learning and memory postulates that memories are stored by alterations in the strength of neuronal connections within the appropriate neural circuitry. 
Thus, an understanding of the cellular and molecular basis of synaptic plasticity will expand our knowledge of the molecular basis of learning and memory. The view that learning was the result of altered synaptic weights was first proposed by Ramon y Cajal in 1911 and formalized by Donald O. Hebb. In 1949, Hebb proposed his \" learning rule, \" which suggested that alterations in the strength of synapses would occur between two neurons when those neurons were active simultaneously (1). Hebb's original postulate focused on the need for synaptic activity to lead to the generation of action potentials in the postsynaptic neuron, although more recent work has extended this to include local depolarization at the synapse. One problem with testing this hypothesis is that it has been difficult to record directly the activity of single synapses in a behaving animal. Thus, the challenge in the field has been to relate changes in synaptic efficacy to specific behavioral instances of associative learning. In this chapter, we will review the relationship among synaptic plasticity, learning, and memory. We will examine the extent to which various current models of neuronal plasticity provide potential bases for memory storage and we will explore some of the signal transduction pathways that are critically important for long-term memory storage. We will focus on two systems—the gill and siphon withdrawal reflex of the invertebrate Aplysia californica and the mammalian hippocam-pus—and discuss the abilities of models of synaptic plasticity and learning to account for a range of genetic, pharmacological, and behavioral data.", "title": "" }, { "docid": "d51408ad40bdc9a3a846aaf7da907cef", "text": "Accessing online information from various data sources has become a necessary part of our everyday life. Unfortunately such information is not always trustworthy, as different sources are of very different qualities and often provide inaccurate and conflicting information. Existing approaches attack this problem using unsupervised learning methods, and try to infer the confidence of the data value and trustworthiness of each source from each other by assuming values provided by more sources are more accurate. However, because false values can be widespread through copying among different sources and out-of-date data often overwhelm up-to-date data, such bootstrapping methods are often ineffective.\n In this paper we propose a semi-supervised approach that finds true values with the help of ground truth data. Such ground truth data, even in very small amount, can greatly help us identify trustworthy data sources. Unlike existing studies that only provide iterative algorithms, we derive the optimal solution to our problem and provide an iterative algorithm that converges to it. Experiments show our method achieves higher accuracy than existing approaches, and it can be applied on very huge data sets when implemented with MapReduce.", "title": "" }, { "docid": "bea412d20a95c853fe06e7640acb9158", "text": "We propose a novel approach to synthesizing images that are effective for training object detectors. Starting from a small set of real images, our algorithm estimates the rendering parameters required to synthesize similar images given a coarse 3D model of the target object. These parameters can then be reused to generate an unlimited number of training images of the object of interest in arbitrary 3D poses, which can then be used to increase classification performances. 
A key insight of our approach is that the synthetically generated images should be similar to real images, not in terms of image quality, but rather in terms of features used during the detector training. We show in the context of drone, plane, and car detection that using such synthetically generated images yields significantly better performances than simply perturbing real images or even synthesizing images in such way that they look very realistic, as is often done when only limited amounts of training data are available. 2015 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "169db6ecec2243e3566079cd473c7afe", "text": "Aspect-level sentiment classification is a finegrained task in sentiment analysis. Since it provides more complete and in-depth results, aspect-level sentiment analysis has received much attention these years. In this paper, we reveal that the sentiment polarity of a sentence is not only determined by the content but is also highly related to the concerned aspect. For instance, “The appetizers are ok, but the service is slow.”, for aspect taste, the polarity is positive while for service, the polarity is negative. Therefore, it is worthwhile to explore the connection between an aspect and the content of a sentence. To this end, we propose an Attention-based Long Short-Term Memory Network for aspect-level sentiment classification. The attention mechanism can concentrate on different parts of a sentence when different aspects are taken as input. We experiment on the SemEval 2014 dataset and results show that our model achieves state-ofthe-art performance on aspect-level sentiment classification.", "title": "" }, { "docid": "cdd27bbcbab81a243dda6bb855fb8f72", "text": "The Internet of Things (IoT), which can be regarded as an enhanced version of machine-to-machine communication technology, was proposed to realize intelligent thing-to-thing communications by utilizing the Internet connectivity. In the IoT, \"things\" are generally heterogeneous and resource constrained. In addition, such things are connected to each other over low-power and lossy networks. In this paper, we propose an inter-device authentication and session-key distribution system for devices with only encryption modules. In the proposed system, unlike existing sensor-network environments where the key distribution center distributes the key, each sensor node is involved with the generation of session keys. In addition, in the proposed scheme, the performance is improved so that the authenticated device can calculate the session key in advance. The proposed mutual authentication and session-key distribution system can withstand replay attacks, man-in-the-middle attacks, and wiretapped secret-key attacks.", "title": "" }, { "docid": "2bf48ea6d0fd3bd4776dc0a90e89254b", "text": "OBJECTIVES\nTo test whether individual differences in gratitude are related to sleep after controlling for neuroticism and other traits. To test whether pre-sleep cognitions are the mechanism underlying this relationship.\n\n\nMETHOD\nA cross-sectional questionnaire study was conducted with a large (186 males, 215 females) community sample (ages=18-68 years, mean=24.89, S.D.=9.02), including 161 people (40%) scoring above 5 on the Pittsburgh Sleep Quality Index, indicating clinically impaired sleep. 
Measures included gratitude, the Pittsburgh Sleep Quality Index (PSQI), self-statement test of pre-sleep cognitions, the Mini-IPIP scales of Big Five personality traits, and the Social Desirability Scale.\n\n\nRESULTS\nGratitude predicted greater subjective sleep quality and sleep duration, and less sleep latency and daytime dysfunction. The relationship between gratitude and each of the sleep variables was mediated by more positive pre-sleep cognitions and less negative pre-sleep cognitions. All of the results were independent of the effect of the Big Five personality traits (including neuroticism) and social desirability.\n\n\nCONCLUSION\nThis is the first study to show that a positive trait is related to good sleep quality above the effect of other personality traits, and to test whether pre-sleep cognitions are the mechanism underlying the relationship between any personality trait and sleep. The study is also the first to show that trait gratitude is related to sleep and to explain why this occurs, suggesting future directions for research, and novel clinical implications.", "title": "" }, { "docid": "1d3192e66e042e67dabeae96ca345def", "text": "Privacy-enhancing technologies (PETs), which constitute a wide array of technical means for protecting users’ privacy, have gained considerable momentum in both academia and industry. However, existing surveys of PETs fail to delineate what sorts of privacy the described technologies enhance, which makes it difficult to differentiate between the various PETs. Moreover, those surveys could not consider very recent important developments with regard to PET solutions. The goal of this chapter is two-fold. First, we provide an analytical framework to differentiate various PETs. This analytical framework consists of high-level privacy principles and concrete privacy concerns. Secondly, we use this framework to evaluate representative up-to-date PETs, specifically with regard to the privacy concerns they address, and how they address them (i.e., what privacy principles they follow). Based on findings of the evaluation, we outline several future research directions.", "title": "" }, { "docid": "f6388d37976740ebb789e7d5f6c072f1", "text": "With the advent of image and video representation of visual scenes in digital computer, subsequent necessity of vision-substitution representation of a given image is felt. The medium for non-visual representation of an image is chosen to be sound due to well developed auditory sensing ability of human beings and wide availability of cheap audio hardware. Visionary information of an image can be conveyed to blind and partially sighted persons through auditory representation of the image within some of the known limitations of human hearing system. The research regarding image sonification has mostly evolved through last three decades. The paper also discusses in brief about the reverse mapping, termed as sound visualization. This survey approaches to summarize the methodologies and issues of the implemented and unimplemented experimental systems developed for subjective sonification of image scenes and let researchers accumulate knowledge about the previous direction of researches in this domain.", "title": "" }, { "docid": "adc03d95eea19cede1ea91aae733943b", "text": "In this paper, we discuss the emerging application of device-free localization (DFL) using wireless sensor networks, which find people and objects in the environment in which the network is deployed, even in buildings and through walls. 
These networks are termed “RF sensor networks” because the wireless network itself is the sensor, using radio-frequency (RF) signals to probe the deployment area. DFL in cluttered multipath environments has been shown to be feasible, and in fact benefits from rich multipath channels. We describe modalities of measurements made by RF sensors, the statistical models which relate a person's position to channel measurements, and describe research progress in this area.", "title": "" }, { "docid": "45043fe3e4aa28daddea21c6546e7640", "text": "The Booth multiplier has been widely used for high performance signed multiplication by encoding and thereby reducing the number of partial products. A multiplier using the radix-4 (or modified Booth) algorithm is very efficient due to the ease of partial product generation, whereas the radix-8 Booth multiplier is slow due to the complexity of generating the odd multiples of the multiplicand. In this paper, this issue is alleviated by the application of approximate designs. An approximate 2-bit adder is deliberately designed for calculating the sum of 1× and 2× of a binary number. This adder requires a small area, a low power and a short critical path delay. Subsequently, the 2-bit adder is employed to implement the less significant section of a recoding adder for generating the triple multiplicand with no carry propagation. In the pursuit of a trade-off between accuracy and power consumption, two signed 16×16 bit approximate radix-8 Booth multipliers are designed using the approximate recoding adder with and without the truncation of a number of less significant bits in the partial products. The proposed approximate multipliers are faster and more power efficient than the accurate Booth multiplier. The multiplier with 15-bit truncation achieves the best overall performance in terms of hardware and accuracy when compared to other approximate Booth multiplier designs. 
Finally, the approximate multipliers are applied to the design of a low-pass FIR filter and they show better performance than other approximate Booth multipliers.", "title": "" }, { "docid": "30dfcf624badf766c3c7070548a47af4", "text": "The primary purpose of this paper is to stimulate discussion about a research agenda for a new interdisciplinary field. This field-the study of coordination-draws upon a variety of different disciplines including computer science, organization theory, management science, economics, and psychology. Work in this new area will include developing a body of scientific theory, which we will call \"coordination theory,\" about how the activities of separate actors can be coordinated. One important use for coordination theory will be in developing and using computer and communication systems to help people coordinate their activities in new ways. We will call these systems \"coordination technology.\" Rationale There are four reasons why work in this area is timely: (1) In recent years, large numbers of people have acquired direct access to computers. These computers are now beginning to be connected to each other. Therefore, we now have, for the first time, an opportunity for vastly larger numbers of people to use computing and communications capabilities to help coordinate their work. For example, specialized new software has been developed to (a) support multiple authors working together on the same document, (b) help people display and manipulate information more effectively in face-to-face meetings, and (c) help people intelligently route and process electronic messages. It already appears likely that there will be commercially successful products of this new type (often called \"computer supported cooperative work\" or \"groupware\"), and to some observers these applications herald a paradigm shift in computer usage as significant as the earlier shifts to time-sharing and personal computing. It is less clear whether the continuing development of new computer applications in this area will depend solely on the intuitions of successful designers or whether it will also be guided by a coherent underlying theory of how people coordinate their activities now and how they might do so differently with computer support. (2) In the long run, the dramatic improvements in the costs and capabilities of information technologies are changing-by orders of magnitude-the constraints on how certain kinds of communication and coordination can occur. At the same time, there is a pervasive feeling in American business that the pace of change is accelerating and that we need to create more flexible and adaptive organizations. Together, these changes may soon lead us across a threshhold where entirely new ways of organizing human activities become desirable. For 2 example, new capabilities for communicating information faster, less expensively, and …", "title": "" }, { "docid": "c0650814388c7e1de19ee6e668d40e69", "text": "In this paper we consider persuasion in the context of practical reasoning, and discuss the problems associated with construing reasoning about actions in a manner similar to reasoning about beliefs. We propose a perspective on practical reasoning as presumptive justification of a course of action, along with critical questions of this justification, building on the account of Walton. From this perspective, we articulate an interaction protocol, which we call PARMA, for dialogues over proposed actions based on this theory. 
We outline an axiomatic semantics for the PARMA Protocol, and discuss two implementations which use this protocol to mediate a discussion between humans. We then show how our proposal can be made computational within the framework of agents based on the Belief-Desire-Intention model, and illustrate this proposal with an example debate within a multi agent system.", "title": "" }, { "docid": "886c284d72a01db9bc4eb9467e14bbbb", "text": "The Bitcoin cryptocurrency introduced a novel distributed consensus mechanism relying on economic incentives. While a coalition controlling a majority of computational power may undermine the system, for example by double-spending funds, it is often assumed it would be incentivized not to attack to protect its long-term stake in the health of the currency. We show how an attacker might purchase mining power (perhaps at a cost premium) for a short duration via bribery. Indeed, bribery can even be performed in-band with the system itself enforcing the bribe. A bribing attacker would not have the same concerns about the long-term health of the system, as their majority control is inherently short-lived. New modeling assumptions are needed to explain why such attacks have not been observed in practice. The need for all miners to avoid short-term profits by accepting bribes further suggests a potential tragedy of the commons which has not yet been analyzed.", "title": "" }, { "docid": "1c4e71d00521219717607cbef90b5bec", "text": "The design of security for cyber-physical systems must take into account several characteristics common to such systems. Among these are feedback between the cyber and physical environment, distributed management and control, uncertainty, real-time requirements, and geographic distribution. This paper discusses these characteristics and suggests a design approach that better integrates security into the core design of the system. A research roadmap is presented that highlights some of the missing pieces needed to enable such an approach. 1. What is a Cyber-Physical-System? The term cyber-physical system has been applied to many problems, ranging from robotics, through SCADA, and distributed control systems. Not all cyber-physical systems involve critical infrastructure, but there are common elements that change the nature of the solutions that must be considered when securing cyber-physical systems. First, the extremely critical nature of activities performed by some cyber-physical systems means that we need security that works, and that by itself means we need something different. All kidding aside, there are fundamental system differences in cyber-physical systems that will force us to look at security in ways more closely tied to the physical application. It is my position that by focusing on these differences we can see where new (or rediscovered) approaches are needed, and that by building systems that support the inclusion of security as part of the application architecture, we can improve the security of both cyber-physical systems, where such an approach is most clearly warranted, as well as improve the security of cyber-only systems, where such an approach is more easily ignored. In this position paper I explain the characteristics of cyber-physical systems that must drive new research in security. I discuss the security problem areas that need attention because of these characteristics and I describe a design methodology for security that provides for better integration of security design with application design. 
Finally, I suggest some of the components of future systems that can help us include security as a focusing issue in the architectural design of critical applications.", "title": "" }, { "docid": "c3f4f7d75c1b5cfd713ad7a10c887a3a", "text": "This paper presents an open-source diarization toolkit which is mostly dedicated to speaker and developed by the LIUM. This toolkit includes hierarchical agglomerative clustering methods using well-known measures such as BIC and CLR. Two applications for which the toolkit has been used are presented: one is for broadcast news using the ESTER 2 data and the other is for telephone conversations using the MEDIA corpus.", "title": "" }, { "docid": "d161ab557edb4268a0ebc606bb9dbcb6", "text": "Recommender systems play an important role in reducing the negative impact of information overload on those websites where users have the possibility of voting for their preferences on Ítems. The most normal technique for dealing with the recommendation mechanism is to use collaborative filtering, in which it is essential to discover the most similar users to whom you desire to make recommendations. The hypothesis of this paper is that the results obtained by applying traditional similarities measures can be improved by taking contextual information, drawn from the entire body of users, and using it to calcúlate the singularity which exists, for each item, in the votes cast by each pair of users that you wish to compare. As such, the greater the measure of singularity result between the votes cast by two given users, the greater the impact this will have on the similarity. The results, tested on the Movielens, Netflix and FilmAffinity databases, corrobórate the excellent behaviour of the singularity measure proposed.", "title": "" }, { "docid": "a93bf6b8408bf0adba4985e7bd571d29", "text": "The modern data compression is mainly based on two approaches to entropy coding: Huffman (HC) and arithmetic/range coding (AC). The former is much faster, but approximates probabilities with powers of 2, usually leading to relatively low compression rates. The latter uses nearly exact probabilities easily approaching theoretical compression rate limit (Shannon entropy), but at cost of much larger computational cost. Asymmetric numeral systems (ANS) is a new approach to accurate entropy coding, which allows to end this tradeoff between speed and rate: the recent implementation [1] provides about 50% faster decoding than HC for 256 size alphabet, with compression rate similar to provided by AC. This advantage is due to being simpler than AC: using single natural number as the state, instead of two to represent a range. Beside simplifying renormalization, it allows to put the entire behavior for given probability distribution into a relatively small table: defining entropy coding automaton. The memory cost of such table for 256 size alphabet is a few kilobytes. There is a large freedom while choosing a specific table using pseudorandom number generator initialized with cryptographic key for this purpose allows to simultaneously encrypt the data. This article also introduces and discusses many other variants of this new entropy coding approach, which can provide direct alternatives for standard AC, for large alphabet range coding, or for approximated quasi arithmetic coding.", "title": "" } ]
scidocsrr
9b439b4dd326e5392be3351868cd1645
Swing-up of the double pendulum on a cart by feedforward and feedback control with experimental validation
[ { "docid": "d61ff7159a1559ec2c4be9450c1ad3b6", "text": "This paper presents the control of an underactuated two-link robot called the Pendubot. We propose a controller for swinging the linkage and rise it to its uppermost unstable equilibrium position. The balancing control is based on an energy approach and the passivity properties of the system.", "title": "" } ]
[ { "docid": "caa30379a2d0b8be2e1b4ddf6e6602c2", "text": "Multi-Processor Systems-on-Chip (MPSoCs) are increasingly popular in embedded systems. Due to their complexity and huge design space to explore for such systems, CAD tools and frameworks to customize MPSoCs are mandatory. Some academic and industrial frameworks are available to support bus-based MPSoCs, but few works target NoCs as underlying communication architecture. A framework targeting MPSoC customization must provide abstract models to enable fast design space exploration, flexible application mapping strategies, all coupled to features to evaluate the performance of running applications. This paper proposes a framework to customize NoC-based MPSoCs with support to static and dynamic task mapping and C/SystemC simulation models for processors and memories. A simple, specifically designed microkernel executes in each processor, enabling multitasking at the processor level. Graphical tools enable debug and system verification, individualizing data for each task. Practical results highlight the benefit of using dynamic mapping strategies (total execution time reduction) and abstract models (total simulation time reduction without losing accuracy).", "title": "" }, { "docid": "9244b687b0031e895cea1fcf5a0b11da", "text": "Bacopa monnieri (L.) Wettst., a traditional Indian medicinal plant with high commercial potential, is used as a potent nervine tonic. A slow growth protocol was developed for medium-term conservation using mineral oil (MO) overlay. Nodal segments of B. monnieri (two genotypes; IC249250, IC468878) were conserved using MO for 24 months. Single node explants were implanted on MS medium supplemented with 0.2 mg l−1 BA and were covered with MO. Subculture duration could be significantly enhanced from 6 to 24 months, on the above medium. Normal plants regenerated from conserved cultures were successfully established in soil. On the basis of 20 random amplified polymorphic DNA and 5 inter-simple sequence repeat primers analyses and bacoside A content using HPLC, no significant reproducible variation was observed between the controls and in vitro-conserved plants. The results demonstrate the feasibility of using MO for medium-term conservation of B. monnieri germplasm without any adverse genetical and biochemical effects.", "title": "" }, { "docid": "15205e074804764a6df0bdb7186c0d8c", "text": "Lactose (milk sugar) is a fermentable substrate. It can be fermented outside of the body to produce cheeses, yoghurts and acidified milks. It can be fermented within the large intestine in those people who have insufficient expression of lactase enzyme on the intestinal mucosa to ferment this disaccharide to its absorbable, simple hexose sugars: glucose and galactose. In this way, the issues of lactose intolerance and of fermented foods are joined. It is only at the extremes of life, in infancy and old age, in which severe and life-threatening consequences from lactose maldigestion may occur. Fermentation as part of food processing can be used for preservation, for liberation of pre-digested nutrients, or to create ethanolic beverages. Almost all cultures and ethnic groups have developed some typical forms of fermented foods. Lessons from fermentation of non-dairy items may be applicable to fermentation of milk, and vice versa.", "title": "" }, { "docid": "11d551da8299c7da76fbeb22b533c7f1", "text": "The use of brushless permanent magnet DC drive motors in racing motorcycles is discussed in this paper. 
The application requirements are highlighted and the characteristics of the load demand and drive converter outlined. The possible topologies of the machine are investigated and a design for a internal permanent magnet is developed. This is a 6-pole machine with 18 stator slots and coils of one stator tooth pitch. The performance predictions are put forward and these are obtained from design software. Cooling is vital for these machines and this is briefly discussed.", "title": "" }, { "docid": "5ba3baabc84d02f0039748a4626ace36", "text": "BACKGROUND\nGreen tea (GT) extract may play a role in body weight regulation. Suggested mechanisms are decreased fat absorption and increased energy expenditure.\n\n\nOBJECTIVE\nWe examined whether GT supplementation for 12 wk has beneficial effects on weight control via a reduction in dietary lipid absorption as well as an increase in resting energy expenditure (REE).\n\n\nMETHODS\nSixty Caucasian men and women [BMI (in kg/m²): 18-25 or >25; age: 18-50 y] were included in a randomized placebo-controlled study in which fecal energy content (FEC), fecal fat content (FFC), resting energy expenditure, respiratory quotient (RQ), body composition, and physical activity were measured twice (baseline vs. week 12). For 12 wk, subjects consumed either GT (>0.56 g/d epigallocatechin gallate + 0.28-0.45 g/d caffeine) or placebo capsules. Before the measurements, subjects recorded energy intake for 4 consecutive days and collected feces for 3 consecutive days.\n\n\nRESULTS\nNo significant differences between groups and no significant changes over time were observed for the measured variables. Overall means ± SDs were 7.2 ± 3.8 g/d, 6.1 ± 1.2 MJ/d, 67.3 ± 14.3 kg, and 29.8 ± 8.6% for FFC, REE, body weight, and body fat percentage, respectively.\n\n\nCONCLUSION\nGT supplementation for 12 wk in 60 men and women did not have a significant effect on FEC, FFC, REE, RQ, and body composition.", "title": "" }, { "docid": "ab3dd1f92c09e15ee05ab7f65f676afe", "text": "We introduce a novel learning method for 3D pose estimation from color images. While acquiring annotations for color images is a difficult task, our approach circumvents this problem by learning a mapping from paired color and depth images captured with an RGB-D camera. We jointly learn the pose from synthetic depth images that are easy to generate, and learn to align these synthetic depth images with the real depth images. We show our approach for the task of 3D hand pose estimation and 3D object pose estimation, both from color images only. Our method achieves performances comparable to state-of-the-art methods on popular benchmark datasets, without requiring any annotations for the color images.", "title": "" }, { "docid": "0c34e8355f1635b3679159abd0a82806", "text": "Bar charts are an effective way to convey numeric information, but today's algorithms cannot parse them. Existing methods fail when faced with even minor variations in appearance. Here, we present DVQA, a dataset that tests many aspects of bar chart understanding in a question answering framework. Unlike visual question answering (VQA), DVQA requires processing words and answers that are unique to a particular bar chart. State-of-the-art VQA algorithms perform poorly on DVQA, and we propose two strong baselines that perform considerably better. 
Our work will enable algorithms to automatically extract numeric and semantic information from vast quantities of bar charts found in scientific publications, Internet articles, business reports, and many other areas.", "title": "" }, { "docid": "769c1933f833cbe0c79422e3e15a6ff3", "text": "The concept of presortedness and its use in sorting are studied. Natural ways to measure presortedness are given and some general properties necessary for a measure are proposed. A concept of a sorting algorithm optimal with respect to a measure of presortedness is defined, and examples of such algorithms are given. A new insertion sort algorithm is shown to be optimal with respect to three natural measures. The problem of finding an optimal algorithm for an arbitrary measure is studied, and partial results are proven.", "title": "" }, { "docid": "f3a253dcae5127fcd4e62fd2508eef09", "text": "ACC: allergic contact cheilitis Bronopol: 2-Bromo-2-nitropropane-1,3-diol MI: methylisothiazolinone MCI: methylchloroisothiazolinone INTRODUCTION Pediatric cheilitis can be a debilitating condition for the child and parents. Patch testing can help isolate allergens to avoid. Here we describe a 2-yearold boy with allergic contact cheilitis improving remarkably after prudent avoidance of contactants and food avoidance.", "title": "" }, { "docid": "dc693ab2e8991630f62caf0f62eb0dc6", "text": "The paper presents the power amplifier design. The introduction of a practical harmonic balance capability at the device measurement stage brings a number of advantages and challenges. Breaking down this traditional barrier means that the test-bench engineer needs to become more aware of the design process and requirements. The inverse is also true, as the measurement specifications for a harmonically tuned amplifier are a bit more complex than just the measurement of load-pull contours. We hope that the new level of integration between both will also result in better exchanges between both sides and go beyond showing either very accurate, highly tuned device models, or using the device model as the traditional scapegoat for unsuccessful PA designs. A nonlinear model and its quality can now be diagnosed through direct comparison of simulated and measured wave forms. The quality of a PA design can be verified by placing the device within the measurement system, practical harmonic balance emulator into the same impedance state in which it will operate in the actual realized design.", "title": "" }, { "docid": "a161b0fe0b38381a96f02694fd84c3bf", "text": "We have been developing human mimetic musculoskeletal humanoids from the view point of human-inspired design approach. Kengoro is our latest version of musculoskeletal humanoid designed to achieve physically interactive actions in real world. This study presents the design concept, body characteristics, and motion achievements of Kengoro. In the design process of Kengoro, we adopted the novel idea of multifunctional skeletal structures to achieve both humanoid performance and humanlike proportions. We adopted the sensor-driver integrated muscle modules for improved muscle control. In order to demonstrate the effectiveness of these body structures, we conducted several preliminary movements using Kengoro.", "title": "" }, { "docid": "1c16fa259b56e3d64f2468fdf758693a", "text": "Dysregulated expression of microRNAs (miRNAs) in various tissues has been associated with a variety of diseases, including cancers. 
Here we demonstrate that miRNAs are present in the serum and plasma of humans and other animals such as mice, rats, bovine fetuses, calves, and horses. The levels of miRNAs in serum are stable, reproducible, and consistent among individuals of the same species. Employing Solexa, we sequenced all serum miRNAs of healthy Chinese subjects and found over 100 and 91 serum miRNAs in male and female subjects, respectively. We also identified specific expression patterns of serum miRNAs for lung cancer, colorectal cancer, and diabetes, providing evidence that serum miRNAs contain fingerprints for various diseases. Two non-small cell lung cancer-specific serum miRNAs obtained by Solexa were further validated in an independent trial of 75 healthy donors and 152 cancer patients, using quantitative reverse transcription polymerase chain reaction assays. Through these analyses, we conclude that serum miRNAs can serve as potential biomarkers for the detection of various cancers and other diseases.", "title": "" }, { "docid": "ccc70871f57f25da6141a7083bdf5174", "text": "This paper outlines and tests two agency models of dividends. According to the “outcome” model, dividends are the result of effective pressure by minority shareholders to force corporate insiders to disgorge cash. According to the “substitute” model, insiders interested in issuing equity in the future choose to pay dividends to establish a reputation for decent treatment of minority shareholders. The first model predicts that stronger minority shareholder rights should be associated with higher dividend payouts; the second model predicts the opposite. Tests on a cross-section of 4,000 companies from 33 countries with different levels of minority shareholder rights support the outcome agency model of dividends. The authors are from Harvard University, Harvard University, Harvard University and University of Chicago, respectively. They are grateful to Alexander Aganin for excellent research assistance, and to Lucian Bebchuk, Mihir Desai, Edward Glaeser, Denis Gromb, Oliver Hart, James Hines, Kose John, James Poterba, Roberta Romano, Raghu Rajan, Lemma Senbet, René Stulz, Daniel Wolfenzohn, Luigi Zingales, and two anonymous referees for helpful comments. 2 The so-called dividend puzzle (Black 1976) has preoccupied the attention of financial economists at least since Modigliani and Miller’s (1958, 1961) seminal work. This work established that, in a frictionless world, when the investment policy of a firm is held constant, its dividend payout policy has no consequences for shareholder wealth. Higher dividend payouts lead to lower retained earnings and capital gains, and vice versa, leaving total wealth of the shareholders unchanged. Contrary to this prediction, however, corporations follow extremely deliberate dividend payout strategies (Lintner (1956)). This evidence raises a puzzle: how do firms choose their dividend policies? In the United States and other countries, the puzzle is even deeper since many shareholders are taxed more heavily on their dividend receipts than on capital gains. The actual magnitude of this tax burden is debated (see Poterba and Summers (1985) and Allen and Michaely (1997)), but taxes generally make it even harder to explain dividend policies of firms. Economists have proposed a number of explanations of the dividend puzzle. 
Of these, particularly popular is the idea that firms can signal future profitability by paying dividends (Bhattacharya (1979), John and Williams (1985), Miller and Rock (1985), Ambarish, John, and Williams (1987)). Empirically, this theory had considerable initial success, since firms that initiate (or raise) dividends experience share price increases, and the converse is true for firms that eliminate (or cut) dividends (Aharony and Swary (1980), Asquith and Mullins (1983)). Recent results are more mixed, since current dividend changes do not help predict firms’ future earnings growth (DeAngelo, DeAngelo, and Skinner (1996) and Benartzi, Michaely, and Thaler (1997)). Another idea, which has received only limited attention until recently (e.g., Easterbrook (1984), Jensen (1986), Fluck (1998a, 1998b), Myers (1998), Gomes (1998), Zwiebel (1996)), is 3 that dividend policies address agency problems between corporate insiders and outside shareholders. According to these theories, unless profits are paid out to shareholders, they may be diverted by the insiders for personal use or committed to unprofitable projects that provide private benefits for the insiders. As a consequence, outside shareholders have a preference for dividends over retained earnings. Theories differ on how outside shareholders actually get firms to disgorge cash. The key point, however, is that failure to disgorge cash leads to its diversion or waste, which is detrimental to outside shareholders’ interest. The agency approach moves away from the assumptions of the Modigliani-Miller theorem by recognizing two points. First, the investment policy of the firm cannot be taken as independent of its dividend policy, and, in particular, paying out dividends may reduce the inefficiency of marginal investments. Second, and more subtly, the allocation of all the profits of the firm to shareholders on a pro-rata basis cannot be taken for granted, and in particular the insiders may get preferential treatment through asset diversion, transfer prices and theft, even holding the investment policy constant. In so far as dividends are paid on a pro-rata basis, they benefit outside shareholders relative to the alternative of expropriation of retained earnings. In this paper, we attempt to identify some of the basic elements of the agency approach to dividends, to understand its key implications, and to evaluate them on a cross-section of over 4,000 firms from 33 countries around the world. The reason for looking around the world is that the severity of agency problems to which minority shareholders are exposed differs greatly across countries, in part because legal protection of these shareholders vary (La Porta et al. (1997, 1998)). Empirically, we find that dividend policies vary across legal regimes in ways consistent with a particular version of the agency theory of dividends. Specifically, firms in common law 4 countries, where investor protection is typically better, make higher dividend payouts than firms in civil law countries do. Moreover, in common but not civil law countries, high growth firms make lower dividend payouts than low growth firms. These results support the version of the agency theory in which investors in good legal protection countries use their legal powers to extract dividends from firms, especially when reinvestment opportunities are poor. Section I of the paper summarizes some of the theoretical arguments. Section II describes the data. Section III presents our empirical findings. Section IV concludes. I. 
Theoretical Issues. A. Agency Problems and Legal Regimes Conflicts of interest between corporate insiders, such as managers and controlling shareholders, on the one hand, and outside investors, such as minority shareholders, on the other hand, are central to the analysis of the modern corporation (Berle and Means (1932), Jensen and Meckling (1976)). The insiders who control corporate assets can use these assets for a range of purposes that are detrimental to the interests of the outside investors. Most simply, they can divert corporate assets to themselves, through outright theft, dilution of outside investors through share issues to the insiders, excessive salaries, asset sales to themselves or other corporations they control at favorable prices, or transfer pricing with other entities they control (see Shleifer and Vishny (1997) for a discussion). Alternatively, insiders can use corporate assets to pursue investment strategies that yield them personal benefits of control, such as growth or diversification, without benefitting outside investors (e.g., Baumol (1959), Jensen (1986)). What is meant by insiders varies from country to country. In the United States, U.K., 5 Canada, and Australia, where ownership in large corporations is relatively dispersed, most large corporations are to a significant extent controlled by their managers. In most other countries, large firms typically have shareholders that own a significant fraction of equity, such as the founding families (La Porta, Lopez-de-Silanes, and Shleifer (1999)). The controlling shareholders can effectively determine the decisions of the managers (indeed, managers typically come from the controlling family), and hence the problem of managerial control per se is not as severe as it is in the rich common law countries. On the other hand, the controlling shareholders can implement policies that benefit themselves at the expense of minority shareholders. Regardless of the identity of the insiders, the victims of insider control are minority shareholders. It is these minority shareholders that would typically have a taste for dividends. One of the principal remedies to agency problems is the law. Corporate and other law gives outside investors, including shareholders, certain powers to protect their investment against expropriation by insiders. These powers in the case of shareholders range from the right to receive the same per share dividends as the insiders, to the right to vote on important corporate matters, including the election of directors, to the right to sue the company for damages. The very fact that this legal protection exists probably explains why becoming a minority shareholder is a viable investment strategy, as opposed to just being an outright giveaway of money to strangers who are under few if any obligations to give it back. As pointed out by La Porta et al. (1998), the extent of legal protection of outside investors differs enormously across countries. Legal protection consists of both the content of the laws and the quality of their enforcement. Some countries, including most notably the wealthy common law countries such as the U.S. and the U.K., provide effective protection of minority shareholders 6 so that the outright expropriation of corporate assets by the insiders is rare. Agency problems manifest themselves primarily through non-value-maximizing investment choices. In many other countries, the condition of outside investors is a good deal more precarious, but even there some protection does exist. 
La Porta et al. (1998) show in particular that common law countries appear to have the best legal protection of minority shareholders, whereas civil law countries, and most conspicuously the French civil law countries, have the weakest protection. The quality of investor protection, viewed as a proxy for lower agency costs, has been shown to matter for a number of important issues in corporate finance. For example, corporate ownership is more concentrated in countries with inferior shareholder protection (La Porta et al. (1998), La Porta, Lopez-de-Silanes, and Shleifer (1999)). The valuation and breadth of cap", "title": "" }, { "docid": "f4bd8831ff5bf3372b2ab11d7c53a64b", "text": "The demonstration that dopamine loss is the key pathological feature of Parkinson's disease (PD), and the subsequent introduction of levodopa have revolutionalized the field of PD therapeutics. This review will discuss the significant progress that has been made in the development of new pharmacological and surgical tools to treat PD motor symptoms since this major breakthrough in the 1960s. However, we will also highlight some of the challenges the field of PD therapeutics has been struggling with during the past decades. The lack of neuroprotective therapies and the limited treatment strategies for the nonmotor symptoms of the disease (ie, cognitive impairments, autonomic dysfunctions, psychiatric disorders, etc.) are among the most pressing issues to be addressed in the years to come. It appears that the combination of early PD nonmotor symptoms with imaging of the nigrostriatal dopaminergic system offers a promising path toward the identification of PD biomarkers, which, once characterized, will set the stage for efficient use of neuroprotective agents that could slow down and alter the course of the disease.", "title": "" }, { "docid": "f5f1300baf7ed92626c912b98b6308c9", "text": "The constant increase in global energy demand, together with the awareness of the finite supply of fossil fuels, has brought about an imperious need to take advantage of renewable energy sources. At the same time, concern over CO(2) emissions and future rises in the cost of gasoline has boosted technological efforts to make hybrid and electric vehicles available to the general public. Energy storage is a vital issue to be addressed within this scenario, and batteries are certainly a key player. In this tutorial review, the most recent and significant scientific advances in the field of rechargeable batteries, whose performance is dependent on their underlying chemistry, are covered. In view of its utmost current significance and future prospects, special emphasis is given to progress in lithium-based technologies.", "title": "" }, { "docid": "4f58172c8101b67b9cd544b25d09f2e2", "text": "For years, researchers in face recognition area have been representing and recognizing faces based on subspace discriminant analysis or statistical learning. Nevertheless, these approaches are always suffering from the generalizability problem. This paper proposes a novel non-statistics based face representation approach, local Gabor binary pattern histogram sequence (LGBPHS), in which training procedure is unnecessary to construct the face model, so that the generalizability problem is naturally avoided. In this approach, a face image is modeled as a \"histogram sequence\" by concatenating the histograms of all the local regions of all the local Gabor magnitude binary pattern maps. 
For recognition, histogram intersection is used to measure the similarity of different LGBPHSs and the nearest neighborhood is exploited for final classification. Additionally, we have further proposed to assign different weights for each histogram piece when measuring two LGBPHSes. Our experimental results on AR and FERET face database show the validity of the proposed approach especially for partially occluded face images, and more impressively, we have achieved the best result on FERET face database.", "title": "" }, { "docid": "48019a3106c6d74e4cfcc5ac596d4617", "text": "Despite a variety of new communication technologies, loneliness is prevalent in Western countries. Boosting emotional communication through intimate connections has the potential to reduce loneliness. New technologies might exploit biosignals as intimate emotional cues because of their strong relationship to emotions. Through two studies, we investigate the possibilities of heartbeat communication as an intimate cue. In the first study (N = 32), we demonstrate, using self-report and behavioral tracking in an immersive virtual environment, that heartbeat perception influences social behavior in a similar manner as traditional intimate signals such as gaze and interpersonal distance. In the second study (N = 34), we demonstrate that a sound of the heartbeat is not sufficient to cause the effect; the stimulus must be attributed to the conversational partner in order to have influence. Together, these results show that heartbeat communication is a promising way to increase intimacy. Implications and possibilities for applications are discussed.", "title": "" }, { "docid": "91ed0637e0533801be8b03d5ad21d586", "text": "With the rapid development of modern wireless communication systems, the desirable miniaturization, multifunctionality strong harmonic suppression, and enhanced bandwidth of the rat-race coupler has generated much interest and continues to be a focus of research. Whether the current rat-race coupler is sufficient to adapt to the future development of microwave systems has become a heated topic.", "title": "" }, { "docid": "9a12ec03e4521a33a7e76c0c538b6b43", "text": "Sparse representation of information provides a powerful means to perform feature extraction on high-dimensional data and is of broad interest for applications in signal processing, computer vision, object recognition and neurobiology. Sparse coding is also believed to be a key mechanism by which biological neural systems can efficiently process a large amount of complex sensory data while consuming very little power. Here, we report the experimental implementation of sparse coding algorithms in a bio-inspired approach using a 32 × 32 crossbar array of analog memristors. This network enables efficient implementation of pattern matching and lateral neuron inhibition and allows input data to be sparsely encoded using neuron activities and stored dictionary elements. Different dictionary sets can be trained and stored in the same system, depending on the nature of the input signals. 
Using the sparse coding algorithm, we also perform natural image processing based on a learned dictionary.", "title": "" }, { "docid": "c72dc472d12c9c822ae240bec5d57c37", "text": "The cognitive processes in a widely used, nonverbal test of analytic intelligence, the Raven Progressive Matrices Test (Raven, 1962), are analyzed in terms of which processes distinguish between higher scoring and lower scoring subjects and which processes are common to all subjects and all items on the test. The analysis is based on detailed performance characteristics, such as verbal protocols, eye-fixation patterns, and errors. The theory is expressed as a pair of computer simulation models that perform like the median or best college students in the sample. The processing characteristic common to all subjects is an incremental, reiterative strategy for encoding and inducing the regularities in each problem. The processes that distinguish among individuals are primarily the ability to induce abstract relations and the ability to dynamically manage a large set of problem-solving goals in working memory.", "title": "" } ]
scidocsrr
63ca519ffc2a3524c53956d8e96867aa
Control-flow integrity principles, implementations, and applications
[ { "docid": "83c81ecb870e84d4e8ab490da6caeae2", "text": "We introduceprogram shepherding, a method for monitoring control flow transfers during program execution to enforce a security policy. Shepherding ensures that malicious code masquerading as data is never executed, thwarting a large class of security attacks. Shepherding can also enforce entry points as the only way to execute shared library code. Furthermore, shepherding guarantees that sandboxing checks around any type of program operation will never be bypassed. We have implemented these capabilities efficiently in a runtime system with minimal or no performance penalties. This system operates on unmodified native binaries, requires no special hardware or operating system support, and runs on existing IA-32 machines.", "title": "" } ]
[ { "docid": "fb02f47ab50ebe817175f21f7192ae6b", "text": "Generative Adversarial Network (GAN) is a prominent generative model that are widely used in various applications. Recent studies have indicated that it is possible to obtain fake face images with a high visual quality based on this novel model. If those fake faces are abused in image tampering, it would cause some potential moral, ethical and legal problems. In this paper, therefore, we first propose a Convolutional Neural Network (CNN) based method to identify fake face images generated by the current best method [20], and provide experimental evidences to show that the proposed method can achieve satisfactory results with an average accuracy over 99.4%. In addition, we provide comparative results evaluated on some variants of the proposed CNN architecture, including the high pass filter, the number of the layer groups and the activation function, to further verify the rationality of our method.", "title": "" }, { "docid": "97a1d44956f339a678da4c7a32b63bf6", "text": "As a first step towards agents learning to communicate about their visual environment, we propose a system that, given visual representations of a referent (CAT) and a context (SOFA), identifies their discriminative attributes, i.e., properties that distinguish them (has_tail). Moreover, although supervision is only provided in terms of discriminativeness of attributes for pairs, the model learns to assign plausible attributes to specific objects (SOFA-has_cushion). Finally, we present a preliminary experiment confirming the referential success of the predicted discriminative attributes.", "title": "" }, { "docid": "6a1a9c6cb2da06ee246af79fdeedbed9", "text": "The world has revolutionized and phased into a new era, an era which upholds the true essence of technology and digitalization. As the market has evolved at a staggering scale, it is must to exploit and inherit the advantages and opportunities, it provides. With the advent of web 2.0, considering the scalability and unbounded reach that it provides, it is detrimental for an organization to not to adopt the new techniques in the competitive stakes that this emerging virtual world has set along with its advantages. The transformed and highly intelligent data mining approaches now allow organizations to collect, categorize, and analyze users’ reviews and comments from micro-blogging sites regarding their services and products. This type of analysis makes those organizations capable to assess, what the consumers want, what they disapprove of, and what measures can be taken to sustain and improve the performance of products and services. This study focuses on critical analysis of the literature from year 2012 to 2017 on sentiment analysis by using SVM (support vector machine). SVM is one of the widely used supervised machine learning techniques for text classification. This systematic review will serve the scholars and researchers to analyze the latest work of sentiment analysis with SVM as well as provide them a baseline for future trends and comparisons. Keywords—Sentiment analysis; polarity detection; machine learning; support vector machine (SVM); support vector machine; SLR; systematic literature review", "title": "" }, { "docid": "1fde3c7d8109d5d4bfcf1f55facf7a95", "text": "Concerted research effort since the nineteen fifties has lead to effective methods for retrieval of relevant documents from homogeneous collections of text, such as newspaper archives, scientific abstracts and CD-ROM encyclopaedias. 
However, the triumph of the Web in the nineteen nineties forced a significant paradigm shift in the Information Retrieval field because of the need to address the issues of enormous scale, fluid collection definition, great heterogeneity, unfettered interlinking, democratic publishing, the presence of adversaries and most of all the diversity of purposes for which Web search may be used. Now, the IR field is confronted with a challenge of similarly daunting dimensions – how to bring highly effective search to the complex information spaces within enterprises. Overcoming the challenge would bring massive economic benefit, but victory is far from assured. The present work characterises enterprise search, hints at its economic magnitude, states some of the unsolved research questions in the domain of enterprise search need, proposes an enterprise search test collection and presents results for a small but interesting subproblem.", "title": "" }, { "docid": "f665852770ef2f57cbb5c614410440bf", "text": "Blockchain is a distributed database which is cryptographically protected against malicious modifications. While promising for a wide range of applications, current blockchain platforms rely on digital signatures, which are vulnerable to attacks by means of quantum computers. The same, albeit to a lesser extent, applies to cryptographic hash functions that are used in preparing new blocks, so parties with access to quantum computation would have unfair advantage in procuring mining rewards. Here we propose a possible solution to the quantum-era blockchain challenge and report an experimental realization of a quantum-safe blockchain platform that utilizes quantum key distribution across an urban fiber network for information-theoretically secure authentication. These results address important questions about realizability and scalability of quantum-safe blockchains for commercial and governmental applications.", "title": "" }, { "docid": "519172fb24e370a24da92711d827bf77", "text": "We present a sequence-to-action parsing approach for the natural language to SQL task that incrementally fills the slots of a SQL query with feasible actions from a pre-defined inventory. To account for the fact that typically there are multiple correct SQL queries with the same or very similar semantics, we draw inspiration from syntactic parsing techniques and propose to train our sequence-to-action models with non-deterministic oracles. We evaluate our models on the WikiSQL dataset and achieve an execution accuracy of 83.7% on the test set, a 2.1% absolute improvement over the models trained with traditional static oracles assuming a single correct target SQL query. When further combined with the executionguided decoding strategy, our model sets a new state-of-the-art performance at an execution accuracy of 87.1%.", "title": "" }, { "docid": "0bbabbcc08ea494330b1675445851f9d", "text": "One trend in the implementation of modern web systems is the use of activity data in the form of log or event messages that capture user and server activity. This data is at the heart of many internet systems in the domains of advertising, relevance, search, recommendation systems, and security, as well as continuing to fulfill its traditional role in analytics and reporting. Many of these uses place real-time demands on data feeds. Activity data is extremely high volume and real-time pipelines present new design challenges. 
This paper discusses the design and engineering problems we encountered in moving LinkedIn’s data pipeline from a batch-oriented file aggregation mechanism to a real-time publish-subscribe system called Kafka. This pipeline currently runs in production at LinkedIn and handles more than 10 billion message writes each day with a sustained peak of over 172,000 messages per second. Kafka supports dozens of subscribing systems and delivers more than 55 billion messages to these consumer processes each day. We discuss the origins of this system, missteps on the path to real-time, and the design and engineering problems we encountered along the way.", "title": "" }, { "docid": "34a8413935d1724c626f505421480f54", "text": "In this paper, we introduce the Reinforced Mnemonic Reader for machine comprehension (MC) task, which aims to answer a query about a given context document. We propose several novel mechanisms that address critical problems in MC that are not adequately solved by previous works, such as enhancing the capacity of encoder, modeling long-term dependencies of contexts, refining the predicted answer span, and directly optimizing the evaluation metric. Extensive experiments on TriviaQA and Stanford Question Answering Dataset (SQuAD) show that our model achieves state-of-the-art results.", "title": "" }, { "docid": "5ce82b8c2cc87ae84026d230f3a97e06", "text": "This paper presents a new physically-based method for predicting natural hairstyles in the presence of gravity and collisions. The method is based upon a mechanically accurate model for static elastic rods (Kirchhoff model), which accounts for the natural curliness of hair, as well as for hair ellipticity. The equilibrium shape is computed in a stable and easy way by energy minimization. This yields various typical hair configurations that can be observed in the real world, such as ringlets. As our results show, the method can generate different hair types with a very few input parameters, and perform virtual hairdressing operations such as wetting, cutting and drying hair.", "title": "" }, { "docid": "b94673776041fe6463edccf06a4ed205", "text": "This paper explores the current affordances and limitations of video game genre from a library and information science perspective with an emphasis on classification theory. We identify and discuss various purposes of genre relating to video games, including identity, collocation and retrieval, commercial marketing, and educational instruction. Through the use of examples, we discuss the ways in which these purposes are supported by genre classification and conceptualization, and the implications for video games. Suggestions for improved conceptualizations such as family resemblances, prototype theory, faceted classification, and appeal factors for video game genres are considered, with discussions of strengths and weaknesses. This analysis helps inform potential future practical applications for describing video games at cultural heritage institutions such as libraries, museums, and archives, as well as furthering the understanding of video game genre and genre classification for game studies at large.", "title": "" }, { "docid": "ca4696183f72882d2f69cc17ab761ef3", "text": "Entropy, as it relates to dynamical systems, is the rate of information production. 
Methods for estimation of the entropy of a system represented by a time series are not, however, well suited to analysis of the short and noisy data sets encountered in cardiovascular and other biological studies. Pincus introduced approximate entropy (ApEn), a set of measures of system complexity closely related to entropy, which is easily applied to clinical cardiovascular and other time series. ApEn statistics, however, lead to inconsistent results. We have developed a new and related complexity measure, sample entropy (SampEn), and have compared ApEn and SampEn by using them to analyze sets of random numbers with known probabilistic character. We have also evaluated cross-ApEn and cross-SampEn, which use cardiovascular data sets to measure the similarity of two distinct time series. SampEn agreed with theory much more closely than ApEn over a broad range of conditions. The improved accuracy of SampEn statistics should make them useful in the study of experimental clinical cardiovascular and other biological time series.", "title": "" }, { "docid": "1950bc738c3a47a8314b5d44056d9731", "text": "BACKGROUND\nThe discovery of abnormal synchronization of neuronal activity in the basal ganglia in Parkinson's disease (PD) has prompted the development of novel neuromodulation paradigms. Coordinated reset neuromodulation intends to specifically counteract excessive synchronization and to induce cumulative unlearning of pathological synaptic connectivity and neuronal synchrony.\n\n\nMETHODS\nIn this prospective case series, six PD patients were evaluated before and after coordinated reset neuromodulation according to a standardized protocol that included both electrophysiological recordings and clinical assessments.\n\n\nRESULTS\nCoordinated reset neuromodulation of the subthalamic nucleus (STN) applied to six PD patients in an externalized setting during three stimulation days induced a significant and cumulative reduction of beta band activity that correlated with a significant improvement of motor function.\n\n\nCONCLUSIONS\nThese results highlight the potential effects of coordinated reset neuromodulation of the STN in PD patients and encourage further development of this approach as an alternative to conventional high-frequency deep brain stimulation in PD.", "title": "" }, { "docid": "4b3813fdf16d9c020ec1ad1ddd56d1d3", "text": "In this paper we describe a method that can be used for Minimum Bayes Risk (MBR) decoding for speech recognition. Our algorithm can take as input either a single lattice, or multiple lattices for system combination. It has similar functionality to the widely used Consensus method, but has a clearer theoretical basis and appears to give better results both for MBR decoding and system combination. Many different approximations have been described to solve the MBR decoding problem, which is very difficult from an optimization point of view. Our proposed method solves the problem through a novel forward–backward recursion on the lattice, not requiring time markings. We prove that our algorithm iteratively improves a bound on the Bayes risk. © 2011 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "6c29713df5186553bee555024bf8c135", "text": "This paper describes the organization and results of the automatic keyphrase extraction task held at the workshop on Semantic Evaluation 2010 (SemEval-2010). The keyphrase extraction task was specifically geared towards scientific articles. 
Systems were automatically evaluated by matching their extracted keyphrases against those assigned by the authors as well as the readers to the same documents. We outline the task, present the overall ranking of the submitted systems, and discuss the improvements to the state-of-the-art in keyphrase extraction.", "title": "" }, { "docid": "29d43e9ec2afa314c4a00f26ce816e7e", "text": "The aim of this paper is to discuss about various feature selection algorithms applied on different datasets to select the relevant features to classify data into binary and multi class in order to improve the accuracy of the classifier. Recent researches in medical diagnose uses the different kind of classification algorithms to diagnose the disease. For predicting the disease, the classification algorithm produces the result as binary class. When there is a multiclass dataset, the classification algorithm reduces the dataset into a binary class for simplification purpose by using any one of the data reduction methods and the algorithm is applied for prediction. When data reduction on original dataset is carried out, the quality of the data may degrade and the accuracy of an algorithm will get affected. To maintain the effectiveness of the data, the multiclass data must be treated with its original form without maximum reduction, and the algorithm can be applied on the dataset for producing maximum accuracy. Dataset with maximum number of attributes like thousands must incorporate the best feature selection algorithm for selecting the relevant features to reduce the space and time complexity. The performance of Classification algorithm is estimated by how accurately it predicts the individual class on particular dataset. The accuracy constrain mainly depends on the selection of appropriate features from the original dataset. The feature selection algorithms play an important role in classification for better performance. The feature selection is one of", "title": "" }, { "docid": "a79d4b0a803564f417236f2450658fe0", "text": "Dimensionality reduction has attracted increasing attention, because high-dimensional data have arisen naturally in numerous domains in recent years. As one popular dimensionality reduction method, nonnegative matrix factorization (NMF), whose goal is to learn parts-based representations, has been widely studied and applied to various applications. In contrast to the previous approaches, this paper proposes a novel semisupervised NMF learning framework, called robust structured NMF, that learns a robust discriminative representation by leveraging the block-diagonal structure and the <inline-formula> <tex-math notation=\"LaTeX\">$\\ell _{2,p}$ </tex-math></inline-formula>-norm (especially when <inline-formula> <tex-math notation=\"LaTeX\">$0<p\\leq 1$ </tex-math></inline-formula>) loss function. Specifically, the problems of noise and outliers are well addressed by the <inline-formula> <tex-math notation=\"LaTeX\">$\\ell _{2,p}$ </tex-math></inline-formula>-norm (<inline-formula> <tex-math notation=\"LaTeX\">$0<p\\leq 1$ </tex-math></inline-formula>) loss function, while the discriminative representations of both the labeled and unlabeled data are simultaneously learned by explicitly exploring the block-diagonal structure. The proposed problem is formulated as an optimization problem with a well-defined objective function solved by the proposed iterative algorithm. The convergence of the proposed optimization algorithm is analyzed both theoretically and empirically. 
In addition, we also discuss the relationships between the proposed method and some previous methods. Extensive experiments on both the synthetic and real-world data sets are conducted, and the experimental results demonstrate the effectiveness of the proposed method in comparison to the state-of-the-art methods.", "title": "" }, { "docid": "bbee52ebe65b2f7b8d0356a3fbdb80bf", "text": "Science Study Book Corpus Document Filter [...] enters a d orbital. The valence electrons (those added after the last noble gas configuration) in these elements include the ns and (n \\u2013 1) d electrons. The official IUPAC definition of transition elements specifies those with partially filled d orbitals. Thus, the elements with completely filled orbitals (Zn, Cd, Hg, as well as Cu, Ag, and Au in Figure 6.30) are not technically transition elements. However, the term is frequently used to refer to the entire d block (colored yellow in Figure 6.30), and we will adopt this usage in this textbook. Inner transition elements are metallic elements in which the last electron added occupies an f orbital.", "title": "" }, { "docid": "ecd54b6fad0a1d79440204df72b977fa", "text": "The rapid development of Web technology has resulted in an increasing number of hotel customers sharing their opinions on the hotel services. Effective visual analysis of online customer opinions is needed, as it has a significant impact on building a successful business. In this paper, we present OpinionSeer, an interactive visualization system that could visually analyze a large collection of online hotel customer reviews. The system is built on a new visualization-centric opinion mining technique that considers uncertainty for faithfully modeling and analyzing customer opinions. A new visual representation is developed to convey customer opinions by augmenting well-established scatterplots and radial visualization. To provide multiple-level exploration, we introduce subjective logic to handle and organize subjective opinions with degrees of uncertainty. Several case studies illustrate the effectiveness and usefulness of OpinionSeer on analyzing relationships among multiple data dimensions and comparing opinions of different groups. Aside from data on hotel customer feedback, OpinionSeer could also be applied to visually analyze customer opinions on other products or services.", "title": "" }, { "docid": "0520c57f2cd13ce423e656d89c7f3cc0", "text": "The term ‘‘urban stream syndrome’’ describes the consistently observed ecological degradation of streams draining urban land. This paper reviews recent literature to describe symptoms of the syndrome, explores mechanisms driving the syndrome, and identifies appropriate goals and methods for ecological restoration of urban streams. Symptoms of the urban stream syndrome include a flashier hydrograph, elevated concentrations of nutrients and contaminants, altered channel morphology, and reduced biotic richness, with increased dominance of tolerant species. More research is needed before generalizations can be made about urban effects on stream ecosystem processes, but reduced nutrient uptake has been consistently reported. The mechanisms driving the syndrome are complex and interactive, but most impacts can be ascribed to a few major large-scale sources, primarily urban stormwater runoff delivered to streams by hydraulically efficient drainage systems. 
Other stressors, such as combined or sanitary sewer overflows, wastewater treatment plant effluents, and legacy pollutants (long-lived pollutants from earlier land uses) can obscure the effects of stormwater runoff. Most research on urban impacts to streams has concentrated on correlations between instream ecological metrics and total catchment imperviousness. Recent research shows that some of the variance in such relationships can be explained by the distance between the stream reach and urban land, or by the hydraulic efficiency of stormwater drainage. The mechanisms behind such patterns require experimentation at the catchment scale to identify the best management approaches to conservation and restoration of streams in urban catchments. Remediation of stormwater impacts is most likely to be achieved through widespread application of innovative approaches to drainage design. Because humans dominate urban ecosystems, research on urban stream ecology will require a broadening of stream ecological research to integrate with social, behavioral, and economic research.", "title": "" }, { "docid": "244a517d3a1c456a602ecc01fb99a78f", "text": "Most literature on time series classification assumes that the beginning and ending points of the pattern of interest can be correctly identified, both during the training phase and later deployment. In this work, we argue that this assumption is unjustified, and this has in many cases led to unwarranted optimism about the performance of the proposed algorithms. As we shall show, the task of correctly extracting individual gait cycles, heartbeats, gestures, behaviors, etc., is generally much more difficult than the task of actually classifying those patterns. We propose to mitigate this problem by introducing an alignment-free time series classification framework. The framework requires only very weakly annotated data, such as “in this ten minutes of data, we see mostly normal heartbeats...,” and by generalizing the classic machine learning idea of data editing to streaming/continuous data, allows us to build robust, fast and accurate classifiers. We demonstrate on several diverse real-world problems that beyond removing unwarranted assumptions and requiring essentially no human intervention, our framework is both significantly faster and significantly more accurate than current state-of-the-art approaches.", "title": "" } ]
scidocsrr
000f74958b907e8493f448a5103ae311
Assessing and moving on from the dominant project management discourse in the light of project overruns
[ { "docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.", "title": "" }, { "docid": "b269bb721ca2a75fd6291295493b7af8", "text": "This publication contains reprint articles for which IEEE does not hold copyright. Full text is not available on IEEE Xplore for these articles.", "title": "" } ]
[ { "docid": "e6804e9bfadec46aa25b7edf86bf04e6", "text": "An evolutionary optimization method over continuous search spaces, differential evolution, has recently been successfully applied to real world and artificial optimization problems and proposed also for neural network training. However, differential evolution has not been comprehensively studied in the context of training neural network weights, i.e., how useful is differential evolution in finding the global optimum for expense of convergence speed. In this study, differential evolution has been analyzed as a candidate global optimization method for feed-forward neural networks. In comparison to gradient based methods, differential evolution seems not to provide any distinct advantage in terms of learning rate or solution quality. Differential evolution can rather be used in validation of reached optima and in the development of regularization terms and non-conventional transfer functions that do not necessarily provide gradient information.", "title": "" }, { "docid": "4bf9ec9d1600da4eaffe2bfcc73ee99f", "text": "Data mining, the extraction of hidden predictive information from large databases, is a powerful new technology with great potential to help companies focus on the most important information in their data warehouses. Nowadays, large amount of data and information are available, Data can now be stored in many different kinds of databases and information repositories, being available on the Internet. There is a need for powerful techniques for better interpretation of these data that exceeds the human's ability for comprehension and making decision in a better way. There are data mining, web mining and knowledge discovery tools and software packages such as WEKA Tool and RapidMiner tool. The work deals with analysis of WEKA, RapidMiner and NetTools spider tools KNIME and Orange. There are various tools available for data mining and web mining. Therefore awareness is required about the quantitative investigation of these tools. This paper focuses on various functional, practical, cognitive as well as analysis aspects that users may be looking for in the tools. Complete study addresses the usefulness and importance of these tools including various aspects. Analysis presents various benefits of these data mining tools along with desired aspects and the features of current tools. KEYWORDSData Mining, KDD, Data Mining Tools.", "title": "" }, { "docid": "8b4e09bb13d3d01d3954f32cbb4c9e27", "text": "Higher-level semantics such as visual attributes are crucial for fundamental multimedia applications. We present a novel attribute discovery approach that can automatically identify, model and name attributes from an arbitrary set of image and text pairs that can be easily gathered on the Web. Different from conventional attribute discovery methods, our approach does not rely on any pre-defined vocabularies and human labeling. Therefore, we are able to build a large visual knowledge base without any human efforts. The discovery is based on a novel deep architecture, named Independent Component Multimodal Autoencoder (ICMAE), that can continually learn shared higher-level representations across the visual and textual modalities. 
With the help of the resultant representations encoding strong visual and semantic evidences, we propose to (a) identify attributes and their corresponding high-quality training images, (b) iteratively model them with maximum compactness and comprehensiveness, and (c) name the attribute models with human understandable words. To date, the proposed system has discovered 1,898 attributes over 1.3 million pairs of image and text. Extensive experiments on various real-world multimedia datasets demonstrate the quality and effectiveness of the discovered attributes, facilitating multimedia applications such as image annotation and retrieval as compared to the state-of-the-art approaches.", "title": "" }, { "docid": "cbaa93c56f770fc9a1fb4b633b8e4a02", "text": "Jpred (http://www.compbio.dundee.ac.uk/jpred) is a secondary structure prediction server powered by the Jnet algorithm. Jpred performs over 1000 predictions per week for users in more than 50 countries. The recently updated Jnet algorithm provides a three-state (alpha-helix, beta-strand and coil) prediction of secondary structure at an accuracy of 81.5%. Given either a single protein sequence or a multiple sequence alignment, Jpred derives alignment profiles from which predictions of secondary structure and solvent accessibility are made. The predictions are presented as coloured HTML, plain text, PostScript, PDF and via the Jalview alignment editor to allow flexibility in viewing and applying the data. The new Jpred 3 server includes significant usability improvements that include clearer feedback of the progress or failure of submitted requests. Functional improvements include batch submission of sequences, summary results via email and updates to the search databases. A new software pipeline will enable Jnet/Jpred to continue to be updated in sync with major updates to SCOP and UniProt and so ensures that Jpred 3 will maintain high-accuracy predictions.", "title": "" }, { "docid": "8ebb412ce5ded7393daf98a62bc41792", "text": "It has recently been reported that dogs affected by canine heartworm disease (Dirofilaria immitis) can show an increase in plasma levels of myoglobin and cardiac troponin I, two markers of muscle/myocardial injury. In order to determine if this increase is due to myocardial damage, the right ventricle of 24 naturally infected dogs was examined by routine histology and immunohistochemistry with anti-myoglobin and anti-cardiac troponin I antibodies. Microscopic lesions included necrosis and myocyte vacuolization, and were associated with loss of staining for one or both proteins. Results confirm that increased levels of myoglobin and cardiac troponin I are indicative of myocardial damage in dogs affected by heartworm disease.", "title": "" }, { "docid": "da5ad61c492419515e8449b435b42e80", "text": "Camera tracking is an important issue in many computer vision and robotics applications, such as, augmented reality and Simultaneous Localization And Mapping (SLAM). In this paper, a feature-based technique for monocular camera tracking is proposed. The proposed approach is based on tracking a set of sparse features, which are successively tracked in a stream of video frames. In the developed system, camera initially views a chessboard with known cell size for few frames to be enabled to construct initial map of the environment. Thereafter, Camera pose estimation for each new incoming frame is carried out in a framework that is merely working with a set of visible natural landmarks. 
Estimation of 6-DOF camera pose parameters is performed using a particle filter. Moreover, recovering depth of newly detected landmarks, a linear triangulation method is used. The proposed method is applied on real world videos and positioning error of the camera pose is less than 3 cm in average that indicates effectiveness and accuracy of the proposed method.", "title": "" }, { "docid": "3e57e054e659f78d6bc88de7915b0d85", "text": "While some unmanned aerial vehicles (UAVs) have the capacity to carry mechanically stabilized camera equipment, weight limits or other problems may make mechanical stabilization impractical. As a result many UAVs rely on fixed cameras to provide a video stream to an operator or observer. With a fixed camera, the video stream is often unsteady due to the multirotor's movement from wind and acceleration. These video streams are often analyzed by both humans and machines, and the unwanted camera movement can cause problems for both. For a human observer, unwanted movement may simply make it harder to follow the video, while for computer algorithms, it may severely impair the algorithm's intended function. There has been significant research on how to stabilize videos using feature tracking to determine camera movement, which in turn is used to manipulate frames and stabilize the camera stream. We believe, however, that this process could be greatly simplified by using data from a UAV's on-board inertial measurement unit (IMU) to stabilize the camera feed. In this paper we present an algorithm for video stabilization based only on IMU data from a UAV platform. Our results show that our algorithm successfully stabilizes the camera stream with the added benefit of requiring less computational power.", "title": "" }, { "docid": "6f2aff1eb092fffc80aaf26e3c1877ca", "text": "With the advent of social networks and micro-blogging systems, the way of communicating with other people and spreading information has changed substantially. Persons with different backgrounds, age and education exchange information and opinions, spanning various domains and topics, and have now the possibility to directly interact with popular users and authoritative information sources usually unreachable before the advent of these environments. As a result, the mechanism of information propagation changed deeply, the study of which is indispensable for the sake of understanding the evolution of information networks. To cope up with this intention, in this paper, we propose a novel model which enables to delve into the spread of information over a social network along with the change in the user relationships with respect to the domain of discussion. For this, considering Twitter as a case study, we aim at analyzing the multiple paths the information follows over the network with the goal of understanding the dynamics of the information contagion with respect to the change of the topic of discussion. We then provide a method for estimating the influence among users by evaluating the nature of the relationship among them with respect to the topic of discussion they share. 
Using a vast sample of the Twitter network, we then present various experiments that illustrate our proposal and show the efficacy of the proposed approach in modeling this information spread.", "title": "" }, { "docid": "9c67049b5f934b47346592b73bc57dbe", "text": "In this paper, the problem of switching stabilization for a class of switched nonlinear systems is studied by using average dwell time (ADT) switching, where the subsystems are possibly all unstable. First, a new concept of ADT is given, which is different from the traditional definition of ADT. Based on the new proposed switching signals, a sufficient condition of stabilization for switched nonlinear systems with unstable subsystems is derived. Then, the T-S fuzzy modeling approach is applied to represent the underlying nonlinear system to make the obtained condition easily verified. A novel multiple quadratic Lyapunov function approach is also proposed, by which some conditions are provided in terms of a set of linear matrix inequalities to guarantee the derived T-S fuzzy system to be asymptotically stable. Finally, a numerical example is given to demonstrate the effectiveness of our developed results.", "title": "" }, { "docid": "77cea98467305b9b3b11de8d3cec6ec2", "text": "NoSQL and especially graph databases are constantly gaining popularity among developers of Web 2.0 applications as they promise to deliver superior performance when handling highly interconnected data compared to traditional relational databases. Apache Shindig is the reference implementation for OpenSocial with its highly interconnected data model. However, the default back-end is based on a relational database. In this paper we describe our experiences with a different back-end based on the graph database Neo4j and compare the alternatives for querying data with each other and the JPA-based sample back-end running on MySQL. Moreover, we analyze why the different approaches often may yield such diverging results concerning throughput. The results show that the graph-based back-end can match and even outperform the traditional JPA implementation and that Cypher is a promising candidate for a standard graph query language, but still leaves room for improvements.", "title": "" }, { "docid": "fe79c1c71112b3b40e047db6030aaff9", "text": "We are at a key juncture in history where biodiversity loss is occurring daily and accelerating in the face of population growth, climate change, and rampant development. Simultaneously, we are just beginning to appreciate the wealth of human health benefits that stem from experiencing nature and biodiversity. Here we assessed the state of knowledge on relationships between human health and nature and biodiversity, and prepared a comprehensive listing of reported health effects. We found strong evidence linking biodiversity with production of ecosystem services and between nature exposure and human health, but many of these studies were limited in rigor and often only correlative. Much less information is available to link biodiversity and health. However, some robust studies indicate that exposure to microbial biodiversity can improve health, specifically in reducing certain allergic and respiratory diseases. Overall, much more research is needed on mechanisms of causation. 
Also needed are a reenvisioning of land-use planning that places human well-being at the center and a new coalition of ecologists, health and social scientists and planners to conduct research and develop policies that promote human interaction with nature and biodiversity. Improvements in these areas should enhance human health and ecosystem, community, as well as human resilience. Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).", "title": "" }, { "docid": "bf499e8252cac48cdd406699c8413e16", "text": "Most research in reading comprehension has focused on answering questions based on individual documents or even single paragraphs. We introduce a method which integrates and reasons relying on information spread within documents and across multiple documents. We frame it as an inference problem on a graph. Mentions of entities are nodes of this graph where edges encode relations between different mentions (e.g., withinand cross-document co-references). Graph convolutional networks (GCNs) are applied to these graphs and trained to perform multi-step reasoning. Our Entity-GCN method is scalable and compact, and it achieves state-of-the-art results on the WIKIHOP dataset (Welbl et al., 2017).", "title": "" }, { "docid": "79685eeb67edbb3fbb6e6340fac420c3", "text": "Fatma Özcan IBM Almaden Research Center San Jose, CA fozcan@us.ibm.com Nesime Tatbul Intel Labs and MIT Cambridge, MA tatbul@csail.mit.edu Daniel J. Abadi Yale University New Haven, CT dna@cs.yale.edu Marcel Kornacker Cloudera San Francisco, CA marcel@cloudera.com C Mohan IBM Almaden Research Center San Jose, CA cmohan@us.ibm.com Karthik Ramasamy Twitter, Inc. San Francisco, CA karthik@twitter.com Janet Wiener Facebook, Inc. Menlo Park, CA jlw@fb.com", "title": "" }, { "docid": "a95b95792bf27000b64a5ef6546806d6", "text": "Overfitting is one of the most critical challenges in deep neural networks, and there are various types of regularization methods to improve generalization performance. Injecting noises to hidden units during training, e.g., dropout, is known as a successful regularizer, but it is still not clear enough why such training techniques work well in practice and how we can maximize their benefit in the presence of two conflicting objectives—optimizing to true data distribution and preventing overfitting by regularization. This paper addresses the above issues by 1) interpreting that the conventional training methods with regularization by noise injection optimize the lower bound of the true objective and 2) proposing a technique to achieve a tighter lower bound using multiple noise samples per training example in a stochastic gradient descent iteration. We demonstrate the effectiveness of our idea in several computer vision applications.", "title": "" }, { "docid": "3741bbaf5cb1b5be943a14eca49554fa", "text": "Code-mixing is a linguistic phenomenon where multiple languages are used in the same occurrence that is increasingly common in multilingual societies. Codemixed content on social media is also on the rise, prompting the need for tools to automatically understand such content. Automatic Parts-of-Speech (POS) tagging is an essential step in any Natural Language Processing (NLP) pipeline, but there is a lack of annotated data to train such models. In this work, we present a unique language tagged and POS-tagged dataset of code-mixed English-Hindi tweets related to five incidents in India that led to a lot of Twitter activity. 
Our dataset is unique in two dimensions: (i) it is larger than previous annotated datasets and (ii) it closely resembles typical real-world tweets. Additionally, we present a POS tagging model that is trained on this dataset to provide an example of how this dataset can be used. The model also shows the efficacy of our dataset in enabling the creation of codemixed social media POS taggers.", "title": "" }, { "docid": "29a13944cf4f43ef484512d978396c1e", "text": "The literature examining the relationship between cardiorespiratory fitness and the brain in older adults has increased rapidly, with 30 of 34 studies published since 2008. Here we review cross-sectional and exercise intervention studies in older adults examining the relationship between cardiorespiratory fitness and brain structure and function, typically assessed using Magnetic Resonance Imaging (MRI). Studies of patients with Alzheimer's disease are discussed when available. The structural MRI studies revealed a consistent positive relationship between cardiorespiratory fitness and brain volume in cortical regions including anterior cingulate, lateral prefrontal, and lateral parietal cortex. Support for a positive relationship between cardiorespiratory fitness and medial temporal lobe volume was less consistent, although evident when a region-of-interest approach was implemented. In fMRI studies, cardiorespiratory fitness in older adults was associated with activation in similar regions as those identified in the structural studies, including anterior cingulate, lateral prefrontal, and lateral parietal cortex, despite heterogeneity among the functional tasks implemented. This comprehensive review highlights the overlap in brain regions showing a positive relationship with cardiorespiratory fitness in both structural and functional imaging modalities. The findings suggest that aerobic exercise and cardiorespiratory fitness contribute to healthy brain aging, although additional studies in Alzheimer's disease are needed.", "title": "" }, { "docid": "1d964bb1b82e6de71a6407967a8d9fa0", "text": "Ensuring reliable access to clean and affordable water is one of the greatest global challenges of this century. As the world's population increases, water pollution becomes more complex and difficult to remove, and global climate change threatens to exacerbate water scarcity in many areas, the magnitude of this challenge is rapidly increasing. Wastewater reuse is becoming a common necessity, even as a source of potable water, but our separate wastewater collection and water supply systems are not designed to accommodate this pressing need. Furthermore, the aging centralized water and wastewater infrastructure in the developed world faces growing demands to produce higher quality water using less energy and with lower treatment costs. In addition, it is impractical to establish such massive systems in developing regions that currently lack water and wastewater infrastructure. These challenges underscore the need for technological innovation to transform the way we treat, distribute, use, and reuse water toward a distributed, differential water treatment and reuse paradigm (i.e., treat water and wastewater locally only to the required level dictated by the intended use). Nanotechnology offers opportunities to develop next-generation water supply systems. This Account reviews promising nanotechnology-enabled water treatment processes and provides a broad view on how they could transform our water supply and wastewater treatment systems. 
The extraordinary properties of nanomaterials, such as high surface area, photosensitivity, catalytic and antimicrobial activity, electrochemical, optical, and magnetic properties, and tunable pore size and surface chemistry, provide useful features for many applications. These applications include sensors for water quality monitoring, specialty adsorbents, solar disinfection/decontamination, and high performance membranes. More importantly, the modular, multifunctional and high-efficiency processes enabled by nanotechnology provide a promising route both to retrofit aging infrastructure and to develop high performance, low maintenance decentralized treatment systems including point-of-use devices. Broad implementation of nanotechnology in water treatment will require overcoming the relatively high costs of nanomaterials by enabling their reuse and mitigating risks to public and environmental health by minimizing potential exposure to nanoparticles and promoting their safer design. The development of nanotechnology must go hand in hand with environmental health and safety research to alleviate unintended consequences and contribute toward sustainable water management.", "title": "" }, { "docid": "a117e006785ab63ef391d882a097593f", "text": "An increasing interest in understanding human perception in social media has led to the study of the processes of personality self-presentation and impression formation based on user profiles and text blogs. However, despite the popularity of online video, we do not know of any attempt to study personality impressions that go beyond the use of text and still photos. In this paper, we analyze one facet of YouTube as a repository of brief behavioral slices in the form of personal conversational vlogs, which are a unique medium for selfpresentation and interpersonal perception. We investigate the use of nonverbal cues as descriptors of vloggers’ behavior and find significant associations between automatically extracted nonverbal cues for several personality judgments. As one notable result, audio and visual cues together can be used to predict 34% of the variance of the Extraversion trait of the Big Five model. In addition, we explore the associations between vloggers’ personality scores and the level of social attention that their videos received in YouTube. Our study is conducted on a dataset of 442 YouTube vlogs and 2,210 annotations collected using Amazon’s Mechanical Turk.", "title": "" }, { "docid": "d20154a6b20e07bc3e13cd74731c1b39", "text": "Stability in cluster analysis is strongly dependent on the data set, especially on how well separated and how homogeneous the clusters are. In the same clustering, some clusters may be very stable and others may be extremely unstable. The Jaccard coefficient, a similarity measure between sets, is used as a clusterwise measure of cluster stability, which is assessed by the bootstrap distribution of the Jaccard coefficient for every single cluster of a clustering compared to the most similar cluster in the bootstrapped data sets. This can be applied to very general cluster analysis methods. Some alternative resampling methods are investigated as well, namely subsetting, jittering the data points and replacing some data points by artificial noise points. The different methods are compared by means of a simulation study. 
A data example illustrates the use of the cluster-wise stability assessment to distinguish between meaningful stable and spurious clusters, but it is also shown that clusters are sometimes only stable because of the inflexibility of certain clustering methods.", "title": "" }, { "docid": "6008de061d02515a46b7ba924e5d5741", "text": "The purpose of this article is to introduce evidence-based concepts and demonstrate how to find valid evidence to answer clinical questions. Evidence-based decision making (EBDM) requires understanding new concepts and developing new skills including how to: ask good clinical questions, conduct a computerized search, critically appraise the evidence, apply the results in clinical practice, and evaluate the process. This approach recognizes that clinicians can never be completely current with all conditions, medications, materials, or available products. Thus EBDM provides a mechanism for addressing these gaps in knowledge in order to provide the best care possible. In Part 1, a case scenario demonstrates the application of the skills involved in structuring a clinical question and conducting an online search using PubMed. Practice tips are provided along with online resources related to the evidence-based process.", "title": "" } ]
scidocsrr
20cc5fe9f25a1d5e894095d8fb960111
Association between substandard classroom ventilation rates and students' academic achievement.
[ { "docid": "01bf087ff78fb76eab676507d762b80d", "text": "This meta-analysis reviewed the literature on socioeconomic status (SES) and academic achievement in journal articles published between 1990 and 2000. The sample included 101,157 students, 6,871 schools, and 128 school districts gathered from 74 independent samples. The results showed a medium to strong SES–achievement relation. This relation, however, is moderated by the unit, the source, the range of SES variable, and the type of SES–achievement measure. The relation is also contingent upon school level, minority status, and school location. The author conducted a replica of White’s (1982) meta-analysis to see whether the SES–achievement correlation had changed since White’s initial review was published. The results showed a slight decrease in the average correlation. Practical implications for future research and policy are discussed.", "title": "" } ]
[ { "docid": "c3fe8211d76c12fce10221f97f1028b3", "text": "Computer architects put significant efforts on the design space exploration of a new processor, as it determines the overall characteristics (e.g., performance, power, cost) of the final product. To thoroughly explore the space and achieve the best results, they need high design evaluation throughput – the ability to quickly assess a large number of designs with minimal costs. Unfortunately, the existing simulators and performance models are either too slow or too inaccurate to meet this demand. As a result, architects often sacrifice the design space coverage to end up with a sub-optimal product. To address this challenge, we propose RpStacks-MT, a methodology to evaluate multi-core processor designs with high throughput. First, we propose a graph-based multi-core performance model, which overcomes the limitations of the existing models to accurately describe a multi-core processor's key performance behaviors. Second, we propose a reuse distance-based memory system model and a dynamic scheduling reconstruction method, which help our graph model to quickly track the performance changes from processor design changes. Lastly, we combine these models with a state of the art design exploration idea to evaluate multiple processor designs in an efficient way. Our evaluations show that RpStacks-MT achieves extremely high design evaluation throughput – 88× higher versus a conventional cycle-level simulator and 18× higher versus an accelerated simulator (on average, for evaluating 10,000 designs) – while maintaining simulator-level accuracy.", "title": "" }, { "docid": "963e2e56265d07b33cfa009434bce943", "text": "In today’s modern communication industry, antennas are the most important components required to create a communication link. Microstrip antennas are the most suited for aerospace and mobile applications because of their low profile, light weight and low power handling capacity. They can be designed in a variety of shapes in order to obtain enhanced gain and bandwidth, dual band and circular polarization to even ultra wideband operation. The thesis provides a detailed study of the design of probe-fed Rectangular Microstrip Patch Antenna to facilitate dual polarized, dual band operation. The design parameters of the antenna have been calculated using the transmission line model and the cavity model. For the simulation process IE3D electromagnetic software which is based on method of moment (MOM) has been used. The effect of antenna dimensions and substrate parameters on the performance of antenna have been discussed. The antenna has been designed with embedded spur lines and integrated reactive loading for dual band operation with better impedance matching. The designed antenna can be operated at two frequency band with center frequencies 7.62 (with a bandwidth of 11.68%) and 9.37 GHz (with a bandwidth of 9.83%). A cross slot of unequal length has been inserted so as to have dual polarization. This results in a minor shift in the central frequencies of the two bands to 7.81 and 9.28 GHz. At a frequency of 9.16 GHz, circular polarization has been obtained. So the dual band and dual frequency operation has successfully incorporated into a single patch.", "title": "" }, { "docid": "7e8feb5f8d816a0c0626f6fdc4db7c04", "text": "In this paper, we analyze if cascade usage of the context encoder with increasing input can improve the results of the inpainting. 
For this purpose, we train context encoder for 64x64 pixels images in a standard way and use its resized output to fill in the missing input region of the 128x128 context encoder, both in training and evaluation phase. As the result, the inpainting is visibly more plausible. In order to thoroughly verify the results, we introduce normalized squared-distortion, a measure for quantitative inpainting evaluation, and we provide its mathematical explanation. This is the first attempt to formalize the inpainting measure, which is based on the properties of latent feature representation, instead of L2 reconstruction loss.", "title": "" }, { "docid": "610fd71d5e866ead56013642ec7ee69e", "text": "A constructive algorithm is proposed for feed-forward neural networks which uses node-splitting in the hidden layers to build large networks from smaller ones. The small network forms an approximate model of a set of training data, and the split creates a larger, more powerful network which is initialised with the approximate solution already found. The insufficiency of the smaller network in modelling the system which generated the data leads to oscillation in those hidden nodes whose weight vectors cover regions in the input space where more detail is required in the model. These nodes are identified and split in two using principal component analysis, allowing the new nodes to cover the two main modes of the oscillating vector. Nodes are selected for splitting using principal component analysis on the oscillating weight vectors, or by examining the Hessian matrix of second derivatives of the network error with respect to the weights.", "title": "" }, { "docid": "61f0e20762a8ce5c3c40ea200a32dd43", "text": "Online distance e-learning systems allow introducing innovative methods in pedagogy, along with studying their effectiveness. Assessing the system effectiveness is based on analyzing the log files to track the studying time, the number of connections, and earned game bonus points. This study is based on an example of the online application for practical foreign language speaking skills training between random users, which select the role of a teacher or a student on their own. The main features of the developed system include pre-defined synchronized teaching and learning materials displayed for both participants, along with user motivation by means of gamification. The actual percentage of successful connects between specifically unmotivated and unfamiliar with each other users was measured. The obtained result can be used for gauging the developed system success and the proposed teaching methodology in general. Keywords—elearning; gamification; marketing; monetization; viral marketing; virality", "title": "" }, { "docid": "00754b8714c81687afb450908d3a3ac1", "text": "Wearable smart devices are already amongst us. Currently, smartwatches are one of the key drivers of the wearable technology and are being used by a large population of consumers. This paper takes a first look at this increasingly popular technology with a systematic characterization of the smartwatch app markets. We conduct a large scale analysis of three popular smartwatch app markets: Android Wear, Samsung, and Apple, and characterize more than 14,000 smartwatch apps in multiple aspects such as prices, number of developers and categories. Our analysis shows that approximately 41% and 30% of the apps in Android Wear and Samsung app markets are Personalization apps that provide watch faces. 
Further, we provide a generic taxonomy for apps on all three platforms based on their packaging and modes of communication, that allow us to investigate apps with respect to privacy and security. Finally, we study the privacy risks associated with the app usage by identifying third party trackers integrated into these apps and personal information leakage through network traffic analysis. We show that a higher percentage of Apple apps (62%) are connected to third party trackers compared to Samsung (36%) and Android Wear (46%).", "title": "" }, { "docid": "4949c4698dc9ce7fcea196def92afd06", "text": "Argumentative text has been analyzed both theoretically and computationally in terms of argumentative structure that consists of argument components (e.g., claims, premises) and their argumentative relations (e.g., support, attack). Less emphasis has been placed on analyzing the semantic types of argument components. We propose a two-tiered annotation scheme to label claims and premises and their semantic types in an online persuasive forum, Change My View, with the long-term goal of understanding what makes a message persuasive. Premises are annotated with the three types of persuasive modes: ethos, logos, pathos, while claims are labeled as interpretation, evaluation, agreement, or disagreement, the latter two designed to account for the dialogical nature of our corpus. We aim to answer three questions: 1) can humans reliably annotate the semantic types of argument components? 2) are types of premises/claims positioned in recurrent orders? and 3) are certain types of claims and/or premises more likely to appear in persuasive messages than in nonpersuasive messages?", "title": "" }, { "docid": "5ce82b8c2cc87ae84026d230f3a97e06", "text": "This paper presents a new physically-based method for predicting natural hairstyles in the presence of gravity and collisions. The method is based upon a mechanically accurate model for static elastic rods (Kirchhoff model), which accounts for the natural curliness of hair, as well as for hair ellipticity. The equilibrium shape is computed in a stable and easy way by energy minimization. This yields various typical hair configurations that can be observed in the real world, such as ringlets. As our results show, the method can generate different hair types with a very few input parameters, and perform virtual hairdressing operations such as wetting, cutting and drying hair.", "title": "" }, { "docid": "720e417783f801e8f97531710b5eb779", "text": "In this article, a novel Vertical Take-Off and Landing (VTOL) Single Rotor Unmanned Aerial Vehicle (SR-UAV) will be presented. The SRUAV's design properties will be analysed in detail, with respect to technical novelties outlining the merits of such a conceptual approach. The system's model will be mathematically formulated, while a cascaded P-PI and PID-based control structure will be utilized in extensive simulation trials for the preliminary evaluation of the SR-UAV's attitude and translational performance.", "title": "" }, { "docid": "c20da8ccf60fbb753815d006627fa673", "text": "This paper presents LiteOS, a multi-threaded operating system that provides Unix-like abstractions for wireless sensor networks. 
Aiming to be an easy-to-use platform, LiteOS offers a number of novel features, including: (1) a hierarchical file system and a wireless shell interface for user interaction using UNIX-like commands; (2) kernel support for dynamic loading and native execution of multithreaded applications; and (3) online debugging, dynamic memory, and file system assisted communication stacks. LiteOS also supports software updates through a separation between the kernel and user applications, which are bridged through a suite of system calls. Besides the features that have been implemented, we also describe our perspective on LiteOS as an enabling platform. We evaluate the platform experimentally by measuring the performance of common tasks, and demonstrate its programmability through twenty-one example applications.", "title": "" }, { "docid": "54bae3ac2087dbc7dcba553ce9f2ef2e", "text": "The landscape of computing capabilities within the home has seen a recent shift from persistent desktops to mobile platforms, which has led to the use of the cloud as the primary computing platform implemented by developers today. Cloud computing platforms, such as Amazon EC2 and Google App Engine, are popular for many reasons including their reliable, always on, and robust nature. The capabilities that centralized computing platforms provide are inherent to their implementation, and unmatched by previous platforms (e.g., Desktop applications). Thus, third-party developers have come to rely on cloud computing platforms to provide high quality services to their end-users.", "title": "" }, { "docid": "8448f57118fb3db90a4f793cbebc1bc8", "text": "Motivated by increased concern over energy consumption in modern data centers, we propose a new, distributed computing platform called Nano Data Centers (NaDa). NaDa uses ISP-controlled home gateways to provide computing and storage services and adopts a managed peer-to-peer model to form a distributed data center infrastructure. To evaluate the potential for energy savings in NaDa platform we pick Video-on-Demand (VoD) services. We develop an energy consumption model for VoD in traditional and in NaDa data centers and evaluate this model using a large set of empirical VoD access data. We find that even under the most pessimistic scenarios, NaDa saves at least 20% to 30% of the energy compared to traditional data centers. These savings stem from energy-preserving properties inherent to NaDa such as the reuse of already committed baseline power on underutilized gateways, the avoidance of cooling costs, and the reduction of network energy consumption as a result of demand and service co-localization in NaDa.", "title": "" }, { "docid": "1538ff59f18c6e6bc98acedb08ab5f78", "text": "Radar theory and radar system have developed a lot for the last 50 years or so. Recently, a new concept in array radar has been introduced by the multiple-input multiple-output (MIMO) radar, which has the potential to dramatically improve the performance of radars in parameters estimation. While an earlier appeared concept, synthetic impulse and aperture radar (SIAR) is a typical kind of MIMO radar and probes a channel by transmitting multiple signals separated both spectrally and spatially. To the best knowledge of the authors, almost all the analyses available are based on the simple linear array while our SIAR system is based on a circular array. 
This paper first introduces the recent research and development in and the features of MIMO radars, then discusses our SIAR system as a specific example of MIMO system and finally the unique advantages of SIAR are listed", "title": "" }, { "docid": "9aa1e7c351129fa4a6adb3a8899e518f", "text": "Thousands of unique non-coding RNA (ncRNA) sequences exist within cells. Work from the past decade has altered our perception of ncRNAs from 'junk' transcriptional products to functional regulatory molecules that mediate cellular processes including chromatin remodelling, transcription, post-transcriptional modifications and signal transduction. The networks in which ncRNAs engage can influence numerous molecular targets to drive specific cell biological responses and fates. Consequently, ncRNAs act as key regulators of physiological programmes in developmental and disease contexts. Particularly relevant in cancer, ncRNAs have been identified as oncogenic drivers and tumour suppressors in every major cancer type. Thus, a deeper understanding of the complex networks of interactions that ncRNAs coordinate would provide a unique opportunity to design better therapeutic interventions.", "title": "" }, { "docid": "04f4058d37a33245abf8ed9acd0af35d", "text": "After being introduced in 2009, the first fully homomorphic encryption (FHE) scheme has created significant excitement in academia and industry. Despite rapid advances in the last 6 years, FHE schemes are still not ready for deployment due to an efficiency bottleneck. Here we introduce a custom hardware accelerator optimized for a class of reconfigurable logic to bring LTV based somewhat homomorphic encryption (SWHE) schemes one step closer to deployment in real-life applications. The accelerator we present is connected via a fast PCIe interface to a CPU platform to provide homomorphic evaluation services to any application that needs to support blinded computations. Specifically we introduce a number theoretical transform based multiplier architecture capable of efficiently handling very large polynomials. When synthesized for the Xilinx Virtex 7 family the presented architecture can compute the product of large polynomials in under 6.25 msec making it the fastest multiplier design of its kind currently available in the literature and is more than 102 times faster than a software implementation. Using this multiplier we can compute a relinearization operation in 526 msec. When used as an accelerator, for instance, to evaluate the AES block cipher, we estimate a per block homomorphic evaluation performance of 442 msec yielding performance gains of 28.5 and 17 times over similar CPU and GPU implementations, respectively.", "title": "" }, { "docid": "833095fbc8c06c5698521420e1aa6a3b", "text": "In the last two decades, Computer Aided Detection (CAD) systems were developed to help radiologists analyse screening mammograms, however benefits of current CAD technologies appear to be contradictory, therefore they should be improved to be ultimately considered useful. Since 2012, deep convolutional neural networks (CNN) have been a tremendous success in image recognition, reaching human performance. These methods have greatly surpassed the traditional approaches, which are similar to currently used CAD solutions. Deep CNN-s have the potential to revolutionize medical image analysis. We propose a CAD system based on one of the most successful object detection frameworks, Faster R-CNN. 
The system detects and classifies malignant or benign lesions on a mammogram without any human intervention. The proposed method sets the state of the art classification performance on the public INbreast database, AUC = 0.95. The approach described here has achieved 2nd place in the Digital Mammography DREAM Challenge with AUC = 0.85. When used as a detector, the system reaches high sensitivity with very few false positive marks per image on the INbreast dataset. Source code, the trained model and an OsiriX plugin are published online at https://github.com/riblidezso/frcnn_cad.", "title": "" }, { "docid": "cbb5d9269067ad2bbdb2c9823338d752", "text": "This Paper reveals the information about Deep Neural Network (DNN) and concept of deep learning in field of natural language processing i.e. machine translation. Now day's DNN is playing major role in machine leaning technics. Recursive recurrent neural network (R2NN) is a best technic for machine learning. It is the combination of recurrent neural network and recursive neural network (such as Recursive auto encoder). This paper presents how to train the recurrent neural network for reordering for source to target language by using Semi-supervised learning methods. Word2vec tool is required to generate word vectors of source language and Auto encoder helps us in reconstruction of the vectors for target language in tree structure. Results of word2vec play an important role in word alignment of the input vectors. RNN structure is very complicated and to train the large data file on word2vec is also a time-consuming task. Hence, a powerful hardware support (GPU) is required. GPU improves the system performance by decreasing training time period.", "title": "" }, { "docid": "ba0d63c3e6b8807e1a13b36bc30d5d72", "text": "Weighted median, in the form of either solver or filter, has been employed in a wide range of computer vision solutions for its beneficial properties in sparsity representation. But it is hard to be accelerated due to the spatially varying weight and the median property. We propose a few efficient schemes to reduce computation complexity from O(r2) to O(r) where r is the kernel size. Our contribution is on a new joint-histogram representation, median tracking, and a new data structure that enables fast data access. The effectiveness of these schemes is demonstrated on optical flow estimation, stereo matching, structure-texture separation, image filtering, to name a few. The running time is largely shortened from several minutes to less than 1 second. The source code is provided in the project website.", "title": "" }, { "docid": "fa320a8347093bca4817da2ed7c54e61", "text": "Gases for electrical insulation are essential for the operation of electric power equipment. This Review gives a brief history of gaseous insulation that involved the emergence of the most potent industrial greenhouse gas known today, namely sulfur hexafluoride. SF6 paved the way to space-saving equipment for the transmission and distribution of electrical energy. Its ever-rising usage in the electrical grid also played a decisive role in the continuous increase of atmospheric SF6 abundance over the last decades. This Review broadly covers the environmental concerns related to SF6 emissions and assesses the latest generation of eco-friendly replacement gases. They offer great potential for reducing greenhouse gas emissions from electrical equipment but at the same time involve technical trade-offs. 
The rumors of one or the other being superior seem premature, in particular because of the lack of dielectric, environmental, and chemical information for these relatively novel compounds and their dissociation products during operation.", "title": "" }, { "docid": "7e8976250bd67e07fb71c6dd8b5be414", "text": "With the rapid growth of product review forums, discussion groups, and Blogs, it is almost impossible for a customer to make an informed purchase decision. Different and possibly contradictory opinions written by different reviewers can even make customers more confused. In the last few years, mining customer reviews (opinion mining) has emerged as an interesting new research direction to address this need. One of the interesting problem in opinion mining is Opinion Question Answering (Opinion QA). While traditional QA can only answer factual questions, opinion QA aims to find the authors' sentimental opinions on a specific target. Current opinion QA systems suffers from several weaknesses. The main cause of these weaknesses is that these methods can only answer a question if they find a content similar to the given question in the given documents. As a result, they cannot answer majority questions like \"What is the best digital camera?\" nor comparative questions, e.g. \"Does SamsungY work better than CanonX?\". In this paper we address the problem of opinion question answering to answer opinion questions about products by using reviewers' opinions. Our proposed method, called Aspect-based Opinion Question Answering (AQA), support answering of opinion-based questions while improving the weaknesses of current techniques. AQA contains five phases: question analysis, question expansion, high quality review retrieval, subjective sentence extraction, and answer grouping. AQA adopts an opinion mining technique in the preprocessing phase to identify target aspects and estimate their quality. Target aspects are attributes or components of the target product that have been commented on in the review, e.g. 'zoom' and 'battery life' for a digital camera. We conduct experiments on a real life dataset, Epinions.com, demonstrating the improved effectiveness of the AQA in terms of the accuracy of the retrieved answers.", "title": "" } ]
scidocsrr
2e590cd3be228d7cf9aee71e74806c5e
Aerodynamic Loads on Tall Buildings: Interactive Database
[ { "docid": "c49ae120bca82ef0d9e94115ad7107f2", "text": "An evaluation and comparison of seven of the world’s major building codes and standards is conducted in this study, with specific discussion of their estimations of the alongwind, acrosswind, and torsional response, where applicable, for a given building. The codes and standards highlighted by this study are those of the United States, Japan, Australia, the United Kingdom, Canada, China, and Europe. In addition, the response predicted by using the measured power spectra of the alongwind, acrosswind, and torsional responses for several building shapes tested in a wind tunnel are presented, and a comparison between the response predicted by wind tunnel data and that estimated by some of the standards is conducted. This study serves not only as a comparison of the response estimates by international codes and standards, but also introduces a new set of wind tunnel data for validation of wind tunnel-based empirical expressions. 1.0 Introduction Under the influence of dynamic wind loads, typical high-rise buildings oscillate in the alongwind, acrosswind, and torsional directions. The alongwind motion primarily results from pressure fluctuations on the windward and leeward faces, which generally follows the fluctuations in the approach flow, at least in the low frequency range. Therefore, alongwind aerodynamic loads may be quantified analytically utilizing quasi-steady and strip theories, with dynamic effects customarily represented by a random-vibrationbased “Gust Factor Approach” (Davenport 1967, Vellozzi & Cohen 1968, Vickery 1970, Simiu 1976, Solari 1982, ESDU 1989, Gurley & Kareem 1993). However, the acrosswind motion is introduced by pressure fluctuations on the side faces which are influenced by fluctuations in the separated shear layers and wake dynamics (Kareem 1982). This renders the applicability of strip and quasi-steady theories rather doubtful. Similarly, the wind-induced torsional effects result from an imbalance in the instantaneous pressure distribution on the building surface. These load effects are further amplified in asymmetric buildings as a result of inertial coupling (Kareem 1985). Due to the complexity of the acrosswind and torsional responses, physical modeling of fluid-structure interactions remains the only viable means of obtaining information on wind loads, though recently, research in the area of computational fluid dynam1. Graduate Student & Corresponding Author, NatHaz Modeling Laboratory, Department of Civil Engineering and Geological Sciences, University of Notre Dame, Notre Dame, IN, 46556. e-mail: Tracy.L.Kijewski.1@nd.edu 2. Professor, NatHaz Modeling Laboratory, Department of Civil Engineering and Geological Sciences, University of Notre Dame, Notre Dame, IN, 46556", "title": "" }, { "docid": "6658b58ae09cfb3fbbafc77a87744e4f", "text": "Wind loads on structures under the buffeting action of wind gusts have traditionally been treated by the ‘‘gust loading factor’’ (GLF) method in most major codes and standards around the world. In this scheme, the equivalent-static wind loading used for design is equal to the mean wind force multiplied by the GLF. Although the traditional GLF method ensures an accurate estimation of the displacement response, it may fall short in providing a reliable estimate of other response components. To overcome this shortcoming, a more consistent procedure for determining design loads on tall structures is proposed. 
This paper highlights an alternative model, in which the GLF is based on the base bending moment rather than the displacement. The expected extreme base moment is computed by multiplying the mean base moment by the proposed GLF. The base moment is then distributed to each floor in terms of the floor load in a format that is very similar to the one used to distribute the base shear in earthquake engineering practice. In addition, a simple relationship between the proposed base moment GLF and the traditional GLF is derived, which makes it convenient to employ the proposed approach while utilizing the existing background information. Numerical examples are presented to demonstrate the efficacy of the proposed procedure in light of the traditional approach. This paper also extends the new framework for the formulation of wind load effects in the acrosswind and torsional directions along the ‘‘GLF’’ format that has generally been used for the alongwind response. A 3D GLF concept is advanced, which draws upon a database of aerodynamic wind loads on typical tall buildings, a mode shape correction procedure and a more realistic formulation of the equivalent-static wind loads and their effects. A numerical example is presented to demonstrate the efficacy of the proposed procedure in light of the traditional approach. It is envisaged that the proposed formulation will be most appropriate for inclusion in codes and standards. r 2003 Elsevier Ltd. All rights reserved.", "title": "" } ]
[ { "docid": "198967b505c9ded9255bff7b82fb2781", "text": "Generative adversarial nets (GANs) have been successfully applied to the artificial generation of image data. In terms of text data, much has been done on the artificial generation of natural language from a single corpus. We consider multiple text corpora as the input data, for which there can be two applications of GANs: (1) the creation of consistent cross-corpus word embeddings given different word embeddings per corpus; (2) the generation of robust bag-of-words document embeddings for each corpora. We demonstrate our GAN models on real-world text data sets from different corpora, and show that embeddings from both models lead to improvements in supervised learning problems.", "title": "" }, { "docid": "8e3b1f49ca8a5afe20a9b66e0088a56a", "text": "Describing the contents of images is a challenging task for machines to achieve. It requires not only accurate recognition of objects and humans, but also their attributes and relationships as well as scene information. It would be even more challenging to extend this process to identify falls and hazardous objects to aid elderly or users in need of care. This research makes initial attempts to deal with the above challenges to produce multi-sentence natural language description of image contents. It employs a local region based approach to extract regional image details and combines multiple techniques including deep learning and attribute learning through the use of machine learned features to create high level labels that can generate detailed description of real-world images. The system contains the core functions of scene classification, object detection and classification, attribute learning, relationship detection and sentence generation. We have also further extended this process to deal with open-ended fall detection and hazard identification. In comparison to state-of-the-art related research, our system shows superior robustness and flexibility in dealing with test images from new, unrelated domains, which poses great challenges to many existing methods. Our system is evaluated on a subset from Flickr8k and Pascal VOC 2012 and achieves an impressive average BLEU score of 46 and outperforms related research by a significant margin of 10 BLEU score when evaluated with a small dataset of images containing falls and hazardous objects. It also shows impressive performance when evaluated using a subset of IAPR TC-12 dataset.", "title": "" }, { "docid": "b66a2ce976a145827b5b9a5dd2ad2495", "text": "Compared to previous head-mounted displays, the compact and low-cost Oculus Rift has claimed to offer improved virtual reality experiences. However, how and what kinds of user experiences are encountered by people when using the Rift in actual gameplay has not been examined. We present an exploration of 10 participants' experiences of playing a first-person shooter game using the Rift. Despite cybersickness and a lack of control, participants experienced heightened experiences, a richer engagement with passive game elements, a higher degree of flow and a deeper immersion on the Rift than on a desktop setup. Overly demanding movements, such as the large range of head motion required to navigate the game environment were found to adversely affect gaming experiences. 
Based on these and other findings, we also present some insights for designing games for the Rift.", "title": "" }, { "docid": "5f5d4ea7915639ca401d2354bfdb0704", "text": "In next generation cellular networks, cloud computing will have profound impacts on mobile wireless communications. On the one hand, the integration of cloud computing into the mobile environment enables MCC systems. On the other hand, the powerful computing platforms in the cloud for radio access networks lead to a novel concept of C-RAN. In this article we study the topology configuration and rate allocation problem in C-RAN with the objective of optimizing the end-to-end performance of MCC users in next generation cellular networks. We use a decision theoretical approach to tackle the delayed channel state information problem in C-RAN. Simulation results show that the design and operation of future mobile wireless networks can be significantly affected by cloud computing, and the proposed scheme is capable of achieving substantial performance gains over existing schemes.", "title": "" }, { "docid": "af97cf19ca86e1d66b8a81c4b71ff763", "text": "The mechanisms of anterior cruciate ligament (ACL) injuries are still inconclusive from an epidemiological standpoint. An epidemiological approach in a large sample group over an appropriate period of years will be necessary to enhance the current knowledge of the ACL injury mechanism. The objective of the study was to investigate the ACL injury occurrence in a large sample over twenty years and demonstrate the relationships between the ACL injury occurrence and the dynamic knee alignment at the time of the injury. We investigated the activity, the injury mechanism, and the dynamic knee alignment at the time of the injury in 1,718 patients diagnosed as having the ACL injuries. Regarding the activity at the time of the injury, \"competition\" was the most common, accounting for about half of all the injuries. The current result also showed that the noncontact injury was the most common, which was observed especially in many female athletes. Finally, the dynamic alignment of \"Knee-in & Toe-out\" (i.e. dynamic knee valgus) was the most common, accounting for about half. These results enhance our understanding of the ACL injury mechanism and may be used to guide future injury prevention strategies. Key points: We investigated the situation of ACL injury occurrence, especially dynamic alignments at the time of injury, in 1,718 patients who had visited our institution for surgery and physical therapy for twenty years. Our epidemiological study of the large patient group revealed that \"knee-in & toe-out\" alignment was the most frequently seen at the time of the ACL injury. From an epidemiological standpoint, we need to pay much attention to avoiding \"Knee-in & Toe-out\" alignment during sports activities.", "title": "" }, { "docid": "8326f993dbb83e631d2e6892e03520e7", "text": "Within NASA, there is an increasing awareness that software is of growing importance to the success of missions. Much data has been collected, and many theories have been advanced on how to reduce or eliminate errors in code. However, learning requires experience. This article documents a new NASA initiative to build a centralized repository of software defect data; in particular, it documents one specific case study on software metrics. Software metrics are used as a basis for prediction of errors in code modules, but there are many different metrics available. 
McCabe is one of the more popular tools used to produce metrics, but, as will be shown in this paper, other metrics can be more significant.", "title": "" }, { "docid": "04231c12db08408b7207b751a4ad7420", "text": "The fabrication of digital Integrated Circuits (ICs) is increasingly outsourced. Given this trend, security is recognized as an important issue. The threat agent is an attacker at the IC foundry that has information about the circuit and inserts covert, malicious circuitry. The use of 3D IC technology has been suggested as a possible technique to counter this threat. However, to our knowledge, there is no prior work on how such technology can be used effectively. We propose a way to use 3D IC technology for security in this context. Specifically, we obfuscate the circuit by lifting wires to a trusted tier, which is fabricated separately. This is referred to as split manufacturing. For this setting, we provide a precise notion of security, that we call k-security, and a characterization of the underlying computational problems and their complexity. We further propose a concrete approach for identifying sets of wires to be lifted, and the corresponding security they provide. We conclude with a comprehensive empirical assessment with benchmark circuits that highlights the security versus cost trade-offs introduced by 3D IC based circuit obfuscation.", "title": "" }, { "docid": "52ce8c1259050f403723ec38782898f1", "text": "Indian population is growing very fast and is responsible for posing various environmental risks like traffic noise which is the primitive contributor to the overall noise pollution in urban environment. So, an attempt has been made to develop a web enabled application for spatio-temporal semantic analysis of traffic noise of one of the urban road segments in India. Initially, a traffic noise model was proposed for the study area based on the Calixto model. Later, a City Geographic Markup Language (CityGML) model, which is an OGC encoding standard for 3D data representation, was developed and stored into PostGIS. A web GIS framework was implemented for simulation of traffic noise level mapped on building walls using the data from PostGIS. Finally, spatio-temporal semantic analysis to quantify the effects in terms of threshold noise level, number of walls and roofs affected from start to the end of the day, was performed.", "title": "" }, { "docid": "7d42d3d197a4d62e1b4c0f3c08be14a9", "text": "Links between issue reports and their corresponding commits in version control systems are often missing. However, these links are important for measuring the quality of a software system, predicting defects, and many other tasks. Several approaches have been designed to solve this problem by automatically linking bug reports to source code commits via comparison of textual information in commit messages and bug reports. Yet, the effectiveness of these techniques is oftentimes suboptimal when commit messages are empty or contain minimum information; this particular problem makes the process of recovering traceability links between commits and bug reports particularly challenging. In this work, we aim at improving the effectiveness of existing bug linking techniques by utilizing rich contextual information. We rely on a recently proposed approach, namely ChangeScribe, which generates commit messages containing rich contextual information by using code summarization techniques. 
Our approach then extracts features from these automatically generated commit messages and bug reports, and inputs them into a classification technique that creates a discriminative model used to predict if a link exists between a commit message and a bug report. We compared our approach, coined as RCLinker (Rich Context Linker), to MLink, which is an existing state-of-the-art bug linking approach. Our experiment results on bug reports from six software projects show that RCLinker outperforms MLink in terms of F-measure by 138.66%.", "title": "" }, { "docid": "21c1be0458cc6908c3f7feb6591af841", "text": "Initial work on automatic emotion recognition concentrates mainly on audio-based emotion classification. Speech is the most important channel for the communication between humans and it may be expected that emotional states are transferred through content, prosody or paralinguistic cues. Besides the audio modality, with the rapidly developing computer hardware and video-processing devices, researchers start exploring the video modality. Visual-based emotion recognition works focus mainly on the extraction and recognition of emotional information from the facial expressions. There are also attempts to classify emotional states from body or head gestures and to combine different visual modalities, for instance facial expressions and body gesture captured by two separate cameras [3]. Emotion recognition from psycho-physiological measurements, such as skin conductance, respiration, electro-cardiogram (ECG), electromyography (EMG), electroencephalography (EEG) is another attempt. In contrast to speech, gestures or facial expressions these biopotentials are the result of the autonomic nervous system and cannot be imitated [4]. Research activities in facial expression and speech based emotion recognition [6] are usually performed independently from each other. But in almost all practical applications people speak and exhibit facial expressions at the same time, and consequently both modalities should be used in order to perform robust affect recognition. Therefore, multimodal, and in particular audiovisual emotion recognition has been emerging in recent times [11], for example multiple classifier systems have been widely investigated for the classification of human emotions [1, 9, 12, 14]. Combining classifiers is a promising approach to improve the overall classifier performance [13, 8]. In multiple classifier systems (MCS) it is assumed that the raw data X originates from an underlying source, but each classifier receives different subsets F_j(X) of the same raw input data X. Feature vectors F_j(X) are used as the input to the j-th classifier, computing an estimate y_j of the class membership of F_j(X). This output y_j might be a crisp class label or a vector of class memberships, e.g. estimates of a posteriori probabilities. Based on the multiple classifier outputs y_1, ..., y_N the combiner produces the final decision y. Combiners used in this study are fixed transformations of the multiple classifier outputs y_1, ..., y_N. Examples of such combining rules are Voting, (weighted) Averaging, and Multiplying, just to mention the most popular types. In addition to a priori fixed combination rules the combiner can be a …", "title": "" }, { "docid": "3c1db6405945425c61495dd578afd83f", "text": "This paper describes a novel driver-support system that helps to maintain the correct speed and headway (distance) with respect to lane curvature and other vehicles ahead. 
The system has been developed as part of the Integrating Project PReVENT under the European Framework Programme 6, which is named SAfe SPEed and safe distaNCE (SASPENCE). The application uses a detailed description of the situation ahead of the vehicle. Many sensors [radar, video camera, Global Positioning System (GPS) and accelerometers, digital maps, and vehicle-to-vehicle wireless local area network (WLAN) connections] are used, and state-of-the-art data fusion provides a model of the environment. The system then computes a feasible maneuver and compares it with the driver's behavior to detect possible mistakes. The warning strategies are based on this comparison. The system “talks” to the driver mainly via a haptic pedal or seat belt and “listens” to the driver mainly via the vehicle acceleration. This kind of operation, i.e., the comparison between what the system thinks is possible and what the driver appears to be doing, and the consequent dialog can be regarded as simple implementations of the rider-horse metaphor (H-metaphor). The system has been tested in several situations (driving simulator, hardware in the loop, and real road tests). Objective and subjective data have been collected, revealing good acceptance and effectiveness, particularly in awakening distracted drivers. The system intervenes only when a problem is actually detected in the headway and/or speed (approaching curves or objects) and has been shown to cause prompt reactions and significant speed correction before getting into really dangerous situations.", "title": "" }, { "docid": "ac040c0c04351ea6487ea6663688ebd6", "text": "This paper presents the conceptual design, detailed development and flight testing of AtlantikSolar, a 5.6m-wingspan solar-powered Low-Altitude Long-Endurance (LALE) Unmanned Aerial Vehicle (UAV) designed and built at ETH Zurich. The UAV is required to provide perpetual endurance at a geographic latitude of 45°N in a 4-month window centered around June 21st. An improved conceptual design method is presented and applied to maximize the perpetual flight robustness with respect to local meteorological disturbances such as clouds or winds. Airframe, avionics hardware, state estimation and control method development for autonomous flight operations are described. Flight test results include a 12-hour flight relying solely on batteries to replicate night-flight conditions. In addition, we present flight results from Search-And-Rescue field trials where a camera and processing pod were mounted on the aircraft to create high-fidelity 3D-maps of a simulated disaster area.", "title": "" }, { "docid": "00f4af13461c5f6d15d6883afc50c1d1", "text": "In order to solve the problem that the long cycle and the repetitive work in the process of designing the industrial robot, a modular manipulator system developed for general industrial applications is introduced in this paper. When the application scene is changed, the corresponding robotic modules can be selected to assemble a new robot configuration that meets the requirements. The modules can be divided into two categories: joint modules and link modules. Joint modules consist of three types of revolute joint modules with different torque, and link modules mainly contain T link module and L link module. By connection of different types of modules, various of configurations can be achieved. 
Considering the traditional 6-DoF manipulators are difficult to meet the needs of the unstructured industrial applications, a 7-DoF redundant manipulator prototype is designed on the basis of the robotic modules.", "title": "" }, { "docid": "a9a3d46bd6f5df951957ddc57d3d390d", "text": "In this paper, we propose a low-power level shifter (LS) capable of converting extremely low-input voltage into high-output voltage. The proposed LS consists of a pre-amplifier with a logic error correction circuit and an output latch stage. The pre-amplifier generates complementary amplified signals, and the latch stage converts them into full-swing output signals. Simulated results demonstrated that the proposed LS in a 0.18-μm CMOS process can convert a 0.19-V input into 1.8-V output correctly. The energy and the delay time of the proposed LS were 0.24 pJ and 21.4 ns when the low supply voltage, high supply voltage, and the input pulse frequency, were 0.4, 1.8 V, and 100 kHz, respectively.", "title": "" }, { "docid": "7974d3e3e9c431256ee35c3032288bd1", "text": "Nowadays, the usage of mobile device among the community worldwide has been tremendously increased. With this proliferation of mobile devices, more users are able to access the internet for variety of online application and services. As the use of mobile devices and applications grows, the rate of vulnerabilities exploitation and sophistication of attack towards the mobile user are increasing as well. To date, Google's Android Operating System (OS) are among the widely used OS for the mobile devices, the openness design and ease of use have made them popular among developer and user. Despite the advantages the android-based mobile devices have, it also invited the malware author to exploit the mobile application on the market. Prior to this matter, this research focused on investigating the behaviour of mobile malware through hybrid approach. The hybrid approach correlates and reconstructs the result from the static and dynamic malware analysis in producing a trace of malicious event. Based on the finding, this research proposed a general mobile malware behaviour model that can contribute in identifying the key features in detecting mobile malware on an Android Platform device.", "title": "" }, { "docid": "73dcb2e355679f2e466029fbbb24a726", "text": "Many of the world's most popular websites catalyze their growth through invitations from existing members. New members can then in turn issue invitations, and so on, creating cascades of member signups that can spread on a global scale. Although these diffusive invitation processes are critical to the popularity and growth of many websites, they have rarely been studied, and their properties remain elusive. For instance, it is not known how viral these cascades structures are, how cascades grow over time, or how diffusive growth affects the resulting distribution of member characteristics present on the site. In this paper, we study the diffusion of LinkedIn, an online professional network comprising over 332 million members, a large fraction of whom joined the site as part of a signup cascade. First we analyze the structural patterns of these signup cascades, and find them to be qualitatively different from previously studied information diffusion cascades. We also examine how signup cascades grow over time, and observe that diffusion via invitations on LinkedIn occurs over much longer timescales than are typically associated with other types of online diffusion. 
Finally, we connect the cascade structures with rich individual-level attribute data to investigate the interplay between the two. Using novel techniques to study the role of homophily in diffusion, we find striking differences between the local, edge-wise homophily and the global, cascade-level homophily we observe in our data, suggesting that signup cascades form surprisingly coherent groups of members.", "title": "" }, { "docid": "7bc06c5b4fbdbd996f580b8c87b0b949", "text": "Video streaming over HTTP is becoming the de facto dominating paradigm for today's video applications. HTTP as an over-the-top (OTT) protocol has been leveraged for quality video traversal over the Internet. High user-received quality-of-experience (QoE) is driven not only by the new technology, but also by a wide range of user demands. Given the limitation of a traditional TCP/IP network for supporting video transmission, the typical on-off transfer pattern is inevitable. Dynamic adaptive streaming over HTTP (DASH) establishes a simple architecture and enables new video applications to fully utilize the exiting physical network infrastructure. By deploying robust adaptive algorithms at the client side, DASH can provide a smooth streaming experience. We propose a dynamic adaptive algorithm in order to keep a high QoE for the average user's experience. We formulated our QoE optimization in a set of key factors. The results obtained by our empirical network traces show that our approach not only achieves a high average QoE but it also works stably under different network conditions.", "title": "" }, { "docid": "d2c42797307ca5d8e1c706afe510f316", "text": "The continued amalgamation of cloud technologies into all aspects of our daily lives and the technologies we use (i.e. cloud-of-things) creates business opportunities, security and privacy risks, and investigative challenges (in the event of a cybersecurity incident). This study examines the extent to which data acquisition fromWindows phone, a common cloud-of-thing device, is supported by three popular mobile forensics tools. The effect of device settings modification (i.e. enabling screen lock and device reset operations) and alternative acquisition processes (i.e. individual and combined acquisition) on the extraction results are also examined. Our results show that current mobile forensic tool support for Windows Phone 8 remains limited. The results also showed that logical acquisition support was more complete in comparison to physical acquisition support. In one example, the tool was able to complete a physical acquisition of a Nokia Lumia 625, but its deleted contacts and SMSs could not be recovered/extracted. In addition we found that separate acquisition is needed for device removable media to maximize acquisition results, particularly when trying to recover deleted data. Furthermore, enabling flight-mode and disabling location services are highly recommended to eliminate the potential for data alteration during the acquisition process. These results should provide practitioners with an overview of the current capability of mobile forensic tools and the challenges in successfully extracting evidence from the Windows phone platform. Copyright © 2016 John Wiley & Sons, Ltd.", "title": "" }, { "docid": "95df9ceddf114060d981415c0b1d6125", "text": "This paper presents a comparative study of different neural network models for forecasting the weather of Vancouver, British Columbia, Canada. 
For developing the models, we used one year’s data comprising of daily maximum and minimum temperature, and wind-speed. We used Multi-Layered Perceptron (MLP) and an Elman Recurrent Neural Network (ERNN), which were trained using the one-step-secant and LevenbergMarquardt algorithms. To ensure the effectiveness of neurocomputing techniques, we also tested the different connectionist models using a different training and test data set. Our goal is to develop an accurate and reliable predictive model for weather analysis. Radial Basis Function Network (RBFN) exhibits a good universal approximation capability and high learning convergence rate of weights in the hidden and output layers. Experimental results obtained have shown RBFN produced the most accurate forecast model as compared to ERNN and MLP networks.", "title": "" }, { "docid": "c25b4015787e56f241cabf5e76cb3cc6", "text": "Clients with generalized anxiety disorder (GAD) received either (a) applied relaxation and self-control desensitization, (b) cognitive therapy, or (c) a combination of these methods. Treatment resulted in significant improvement in anxiety and depression that was maintained for 2 years. The large majority no longer met diagnostic criteria; a minority sought further treatment during follow-up. No differences in outcome were found between conditions; review of the GAD therapy literature suggested that this may have been due to strong effects generated by each component condition. Finally, interpersonal difficulties remaining at posttherapy, measured by the Inventory of Interpersonal Problems Circumplex Scales (L. E. Alden, J. S. Wiggins, & A. L. Pincus, 1990) in a subset of clients, were negatively associated with posttherapy and follow-up improvement, suggesting the possible utility of adding interpersonal treatment to cognitive-behavioral therapy to increase therapeutic effectiveness.", "title": "" } ]
scidocsrr
f9f0451cc4a70707c49c6cdcb6508136
Patient outcome prediction via convolutional neural networks based on multi-granularity medical concept embedding
[ { "docid": "897a6d208785b144b5d59e4f346134cd", "text": "Secondary use of electronic health records (EHRs) promises to advance clinical research and better inform clinical decision making. Challenges in summarizing and representing patient data prevent widespread practice of predictive modeling using EHRs. Here we present a novel unsupervised deep feature learning method to derive a general-purpose patient representation from EHR data that facilitates clinical predictive modeling. In particular, a three-layer stack of denoising autoencoders was used to capture hierarchical regularities and dependencies in the aggregated EHRs of about 700,000 patients from the Mount Sinai data warehouse. The result is a representation we name \"deep patient\". We evaluated this representation as broadly predictive of health states by assessing the probability of patients to develop various diseases. We performed evaluation using 76,214 test patients comprising 78 diseases from diverse clinical domains and temporal windows. Our results significantly outperformed those achieved using representations based on raw EHR data and alternative feature learning strategies. Prediction performance for severe diabetes, schizophrenia, and various cancers were among the top performing. These findings indicate that deep learning applied to EHRs can derive patient representations that offer improved clinical predictions, and could provide a machine learning framework for augmenting clinical decision systems.", "title": "" }, { "docid": "42c890832d861ad2854fd1f56b13eb45", "text": "We apply deep learning to the problem of discovery and detection of characteristic patterns of physiology in clinical time series data. We propose two novel modifications to standard neural net training that address challenges and exploit properties that are peculiar, if not exclusive, to medical data. First, we examine a general framework for using prior knowledge to regularize parameters in the topmost layers. This framework can leverage priors of any form, ranging from formal ontologies (e.g., ICD9 codes) to data-derived similarity. Second, we describe a scalable procedure for training a collection of neural networks of different sizes but with partially shared architectures. Both of these innovations are well-suited to medical applications, where available data are not yet Internet scale and have many sparse outputs (e.g., rare diagnoses) but which have exploitable structure (e.g., temporal order and relationships between labels). However, both techniques are sufficiently general to be applied to other problems and domains. We demonstrate the empirical efficacy of both techniques on two real-world hospital data sets and show that the resulting neural nets learn interpretable and clinically relevant features.", "title": "" }, { "docid": "062c970a14ac0715ccf96cee464a4fec", "text": "A goal of statistical language modeling is to learn the joint probability function of sequences of words in a language. This is intrinsically difficult because of the curse of dimensionality: a word sequence on which the model will be tested is likely to be different from all the word sequences seen during training. Traditional but very successful approaches based on n-grams obtain generalization by concatenating very short overlapping sequences seen in the training set. 
We propose to fight the curse of dimensionality by learning a distributed representation for words which allows each training sentence to inform the model about an exponential number of semantically neighboring sentences. The model learns simultaneously (1) a distributed representation for each word along with (2) the probability function for word sequences, expressed in terms of these representations. Generalization is obtained because a sequence of words that has never been seen before gets high probability if it is made of words that are similar (in the sense of having a nearby representation) to words forming an already seen sentence. Training such large models (with millions of parameters) within a reasonable time is itself a significant challenge. We report on experiments using neural networks for the probability function, showing on two text corpora that the proposed approach significantly improves on state-of-the-art n-gram models, and that the proposed approach allows to take advantage of longer contexts.", "title": "" } ]
[ { "docid": "6ec3c98e78e78303a0dc0068ab90a17d", "text": "INTRODUCTION\nIn this study we report a large series of patients with unilateral winged scapula (WS), with special attention to long thoracic nerve (LTN) palsy.\n\n\nMETHODS\nClinical and electrodiagnostic data were collected from 128 patients over a 25-year period.\n\n\nRESULTS\nCauses of unilateral WS were LTN palsy (n = 70), spinal accessory nerve (SAN) palsy (n = 39), both LTN and SAN palsy (n = 5), facioscapulohumeral dystrophy (FSH) (n = 5), orthopedic causes (n = 11), voluntary WS (n = 6), and no definite cause (n = 2). LTN palsy was related to neuralgic amyotrophy (NA) in 61 patients and involved the right side in 62 patients.\n\n\nDISCUSSION\nClinical data allow for identifying 2 main clinical patterns for LTN and SAN palsy. Electrodiagnostic examination should consider bilateral nerve conduction studies of the LTN and SAN, and needle electromyography of their target muscles. LTN palsy is the most frequent cause of unilateral WS and is usually related to NA. Voluntary WS and FSH must be considered in young patients. Muscle Nerve 57: 913-920, 2018.", "title": "" }, { "docid": "a986826041730d953dfbf9fbc1b115a6", "text": "This article presents a general class of associative reinforcement learning algorithms for connectionist networks containing stochastic units. These algorithms, called REINFORCE algorithms, are shown to make weight adjustments in a direction that lies along the gradient of expected reinforcement in both immediate-reinforcement tasks and certain limited forms of delayed-reinforcement tasks, and they do this without explicitly computing gradient estimates or even storing information from which such estimates could be computed. Specific examples of such algorithms are presented, some of which bear a close relationship to certain existing algorithms while others are novel but potentially interesting in their own right. Also given are results that show how such algorithms can be naturally integrated with backpropagation. We close with a brief discussion of a number of additional issues surrounding the use of such algorithms, including what is known about their limiting behaviors as well as further considerations that might be used to help develop similar but potentially more powerful reinforcement learning algorithms.", "title": "" }, { "docid": "49fbe9ddc3087c26ecc373c6731fca77", "text": "Alarm correlation plays an important role in improving the service and reliability in modern telecommunication networks. Most previous research of alarm correlation didn’t consider the effects of noise data in the database. This paper focuses on the method of discovering alarm correlation rules from the database containing noise data. We firstly define two parameters Win_freq and Win_add as the measures of noise data and then present the Robust_search algorithm to solve the problem. At different size of Win_freq and Win_add, the experiments on alarm database containing noise data show that the Robust_search Algorithm can discover more rules with the bigger size of Win_add. We also compare two different interestingness measures of confidence and correlation by experiments.", "title": "" }, { "docid": "c29a2429d6dd7bef7761daf96a29daaf", "text": "In this meta-analysis, we synthesized data from published journal articles that investigated viewers’ enjoyment of fright and violence. 
Given the limited research on this topic, this analysis was primarily a way of summarizing the current state of knowledge and developing directions for future research. The studies selected (a) examined frightening or violent media content; (b) used self-report measures of enjoyment or preference for such content (the dependent variable); and (c) included independent variables that were given theoretical consideration in the literature. The independent variables examined were negative affect and arousal during viewing, empathy, sensation seeking, aggressiveness, and the respondents’ gender and age. The analysis confirmed that male viewers, individuals lower in empathy, and those higher in sensation seeking and aggressiveness reported more enjoyment of fright and violence. Some support emerged for Zillmann’s (1980, 1996) model of suspense enjoyment. Overall, the results demonstrate the importance of considering how viewers interpret or appraise their reactions to fright and violence. However, the studies were so diverse in design and measurement methods that it was difficult to identify the underlying processes. Suggestions are proposed for future research that will move toward the integration of separate lines of inquiry in a unified approach to understanding entertainment.", "title": "" }
All of this is available at: aima.cs.berkeley.edu.", "title": "" }, { "docid": "4e263764fd14f643f7b414bc12615565", "text": "We present a superpixel method for full spatial phase and amplitude control of a light beam using a digital micromirror device (DMD) combined with a spatial filter. We combine square regions of nearby micromirrors into superpixels by low pass filtering in a Fourier plane of the DMD. At each superpixel we are able to independently modulate the phase and the amplitude of light, while retaining a high resolution and the very high speed of a DMD. The method achieves a measured fidelity F = 0.98 for a target field with fully independent phase and amplitude at a resolution of 8 × 8 pixels per diffraction limited spot. For the LG10 orbital angular momentum mode the calculated fidelity is F = 0.99993, using 768 × 768 DMD pixels. The superpixel method reduces the errors when compared to the state of the art Lee holography method for these test fields by 50% and 18%, with a comparable light efficiency of around 5%. Our control software is publicly available.", "title": "" }, { "docid": "7afa24cc5aa346b79436c1b9b7b15b23", "text": "Humans demonstrate remarkable abilities to predict physical events in complex scenes. Two classes of models for physical scene understanding have recently been proposed: “Intuitive Physics Engines”, or IPEs, which posit that people make predictions by running approximate probabilistic simulations in causal mental models similar in nature to video-game physics engines, and memory-based models, which make judgments based on analogies to stored experiences of previously encountered scenes and physical outcomes. Versions of the latter have recently been instantiated in convolutional neural network (CNN) architectures. Here we report four experiments that, to our knowledge, are the first rigorous comparisons of simulation-based and CNN-based models, where both approaches are concretely instantiated in algorithms that can run on raw image inputs and produce as outputs physical judgments such as whether a stack of blocks will fall. Both approaches can achieve super-human accuracy levels and can quantitatively predict human judgments to a similar degree, but only the simulation-based models generalize to novel situations in ways that people do, and are qualitatively consistent with systematic perceptual illusions and judgment asymmetries that people show.", "title": "" }, { "docid": "f7d728041dacdd701d2e9700864121ae", "text": "This article analyzes late-life depression, looking carefully at what defines a person as elderly, the incidence of late-life depression, complications and differences in symptoms between young and old patients with depression, subsyndromal depression, bipolar depression in the elderly, the relationship between grief and depression, along with sleep disturbances and suicidal ideation.", "title": "" }, { "docid": "b8322d65e61be7fb252b2e418df85d3e", "text": "Algorithms of filtering, edge detection, and extraction of details and their implementation using cellular neural networks (CNN) are developed in this paper. The theory of CNN based on universal binary neurons (UBN) is also developed. A new learning algorithm for this type of neurons is carried out. Implementation of low-pass filtering algorithms using CNN is considered. Separate processing of the binary planes of gray-scale images is proposed. 

Algorithms of edge detection and impulsive noise filtering based on this approach and their implementation using CNN-UBN are presented. Algorithms of frequency correction reduced to filtering in the spatial domain are considered. These algorithms make it possible to extract details of given sizes. Implementation of such algorithms using CNN is presented. Finally, a general strategy of gray-scale image processing using CNN is considered. © 1997 SPIE and IS&T. [S1017-9909(97)00703-4]", "title": "" }, { "docid": "d646a27556108caebd7ee5691c98d642", "text": "■ Abstract Theory and research on small group performance and decision making is reviewed. Recent trends in group performance research have found that process gains as well as losses are possible, and both are frequently explained by situational and procedural contexts that differentially affect motivation and resource coordination. Research has continued on classic topics (e.g., brainstorming, group goal setting, stress, and group performance) and relatively new areas (e.g., collective induction). Group decision making research has focused on preference combination for continuous response distributions and group information processing. New approaches (e.g., group-level signal detection) and traditional topics (e.g., groupthink) are discussed. New directions, such as nonlinear dynamic systems, evolutionary adaptation, and technological advances, should keep small group research vigorous well into the future.", "title": "" }, { "docid": "67974bd363f89a9da77b2e09851905d3", "text": "Traditional approaches to the task of ACE event detection primarily regard multiple events in one sentence as independent ones and recognize them separately by using sentence-level information. However, events in one sentence are usually interdependent and sentence-level information is often insufficient to resolve ambiguities for some types of events. This paper proposes a novel framework dubbed as Hierarchical and Bias Tagging Networks with Gated Multi-level Attention Mechanisms (HBTNGMA) to solve the two problems simultaneously. Firstly, we propose a hierarchical and bias tagging networks to detect multiple events in one sentence collectively. Then, we devise a gated multi-level attention to automatically extract and dynamically fuse the sentence-level and document-level information. The experimental results on the widely used ACE 2005 dataset show that our approach significantly outperforms other state-of-the-art methods.", "title": "" }, { "docid": "66dda817ec57dfe5b2acb611fdb0101c", "text": "Magnetometers and accelerometers are sensors that are now integrated in objects of everyday life like automotive applications, mobile phones and so on. Some applications need information of acceleration and attitude with a high accuracy. For example, MEMS magnetometers and accelerometers can be integrated in embedded like mobile phones and GPS receivers. The parameters of such sensors must be precisely estimated to avoid drift and biased values. Thus, calibration is an important step to correctly use these sensors and get the expected measurements. This paper presents the theoretical and experimental steps of a method to compute gains, bias and non orthogonality factors of magnetometer and accelerometer sensors. This method of calibration can be used for automatic calibration in embedded systems. 
The calibration procedure involves arbitrary rotations of the sensors platform and a visual 2D projection of measurements.", "title": "" }, { "docid": "57502ae793808fded7d446a3bb82ca74", "text": "Over the last decade, the “digitization” of the electron enterprise has grown at exponential rates. Utility, industrial, commercial, and even residential consumers are transforming all aspects of their lives into the digital domain. Moving forward, it is expected that every piece of equipment, every receptacle, every switch, and even every light bulb will possess some type of setting, monitoring and/or control. In order to be able to manage the large number of devices and to enable the various devices to communicate with one another, a new communication model was needed. That model has been developed and standardized as IEC61850 – Communication Networks and Systems in Substations. This paper looks at the needs of next generation communication systems and provides an overview of the IEC61850 protocol and how it meets these needs. I. Communication System Needs Communication has always played a critical role in the real-time operation of the power system. In the beginning, the telephone was used to communicate line loadings back to the control center as well as to dispatch operators to perform switching operations at substations. Telephoneswitching based remote control units were available as early as the 1930’s and were able to provide status and control for a few points. As digital communications became a viable option in the 1960’s, data acquisition systems (DAS) were installed to automatically collect measurement data from the substations. Since bandwidth was limited, DAS communication protocols were optimized to operate over low-bandwidth communication channels. The “cost” of this optimization was the time it took to configure, map, and document the location of the various data bits received by the protocol. As we move into the digital age, literally thousands of analog and digital data points are available in a single Intelligent Electronic Device (IED) and communication bandwidth is no longer a limiting factor. Substation to master communication data paths operating at 64,000 bits per second are becoming commonplace with an obvious migration path to much high rates. With this migration in technology, the “cost” component of a data acquisition system has now become the configuration and documentation component. Consequently, a key component of a communication system is the ability to describe themselves from both a data and services (communication functions that an IED performs) perspective. Other “key” requirements include: • High-speed IED to IED communication", "title": "" }, { "docid": "cc6c485fdd8d4d61c7b68bfd94639047", "text": "Passive geolocaton of communication emitters provides great benefits to military and civilian surveillance and security operations. Time Difference of Arrival (TDOA) and Frequency Difference of Arrival (FDOA) measurement combination for stationary emitters may be obtained by sensors mounted on mobile platforms, for example on a pair of UAVs. Complex Ambiguity Function (CAF) of received complex signals can be efficiently calculated to provide required TDOA / FDOA measurement combination. TDOA and FDOA measurements are nonlinear in the sense that the emitter uncertainty given measurements in the Cartesian domain is non-Gaussian. Multiple non-linear measurements of emitter location need to be fused to provide the geolocation estimates. 
Gaussian Mixture Measurement (GMM) filter fuses nonlinear measurements as long as the uncertainty of each measurement in the surveillance (Cartesian) space is modeled by a Gaussian Mixture. Simulation results confirm this approach and compare it with geolocation using Bearings Only (BO) measurements.", "title": "" }, { "docid": "0c9bbeaa783b2d6270c735f004ecc47f", "text": "This paper pulls together existing theory and evidence to assess whether international financial liberalization, by improving the functioning of domestic financial markets and banks, accelerates economic growth. The analysis suggests that the answer is yes. First, liberalizing restrictions on international portfolio flows tends to enhance stock market liquidity. In turn, enhanced stock market liquidity accelerates economic growth primarily by boosting productivity growth. Second, allowing greater foreign bank presence tends to enhance the efficiency of the domestic banking system. In turn, better-developed banks spur economic growth primarily by accelerating productivity growth. Thus, international financial integration can promote economic development by encouraging improvements in the domestic financial system. *Levine: Finance Department, Carlson School of Management, University of Minnesota, 321 19 Avenue South, Minneapolis, MN 55455. Tel: 612-624-9551, Fax: 612-626-1335, E-mail: rlevine@csom.umn.edu. I thank, without implicating, Maria Carkovic and two anonymous referees for very helpful comments. JEL Classification Numbers: F3, G2, O4 Abbreviations: GDP, TFP Number of Figures: 0 Number of Tables: 2 Date: September 5, 2000 Address of Contact Author: Ross Levine, Finance Department, Carlson School of Management, University of Minnesota, 321 19 Avenue South, Minneapolis, MN 55455. Tel: 612-624-9551, Fax: 612-626-1335, E-mail: rlevine@csom.umn.edu.", "title": "" }, { "docid": "2793f528a9b29345b1ee8ce1202933e3", "text": "Neural Networks are prevalent in todays NLP research. Despite their success for different tasks, training time is relatively long. We use Hogwild! to counteract this phenomenon and show that it is a suitable method to speed up training Neural Networks of different architectures and complexity. For POS tagging and translation we report considerable speedups of training, especially for the latter. We show that Hogwild! can be an important tool for training complex NLP architectures.", "title": "" }, { "docid": "f5e934d65fa436cdb8e5cfa81ea29028", "text": "Recently, there has been substantial research on augmenting aggregate forecasts with individual consumer data from internet platforms, such as search traffic or social network shares. Although the majority of studies report increased accuracy, many exhibit design weaknesses including lack of adequate benchmarks or rigorous evaluation. Furthermore, their usefulness over the product life-cycle has not been investigated, which may change, as initially, consumers may search for pre-purchase information, but later for after-sales support. In this study, we first review the relevant literature and then attempt to support the key findings using two forecasting case studies. Our findings are in stark contrast to the literature, and we find that established univariate forecasting benchmarks, such as exponential smoothing, consistently perform better than when online information is included. 
Our research underlines the need for thorough forecast evaluation and argues that online platform data may be of limited use for supporting operational decisions.", "title": "" }, { "docid": "3b302ce4b5b8b42a61c7c4c25c0f3cbf", "text": "This paper describes quorum leases, a new technique that allows Paxos-based systems to perform reads with high throughput and low latency. Quorum leases do not sacrifice consistency and have only a small impact on system availability and write latency. Quorum leases allow a majority of replicas to perform strongly consistent local reads, which substantially reduces read latency at those replicas (e.g., by two orders of magnitude in wide-area scenarios). Previous techniques for performing local reads in Paxos systems either (a) sacrifice consistency; (b) allow only one replica to read locally; or (c) decrease the availability of the system and increase the latency of all updates by requiring all replicas to be notified synchronously. We describe the design of quorum leases and evaluate their benefits compared to previous approaches through an implementation running in five geo-distributed Amazon EC2 datacenters.", "title": "" }, { "docid": "842ee1e812d408df7e6f7dfd95e32a36", "text": "Abstract Phase segregation, the process by which the components of a binary mixture spontaneously separate, is a key process in the evolution and design of many chemical, mechanical, and biological systems. In this work, we present a data-driven approach for the learning, modeling, and prediction of phase segregation. A direct mapping between an initially dispersed, immiscible binary fluid and the equilibrium concentration field is learned by conditional generative convolutional neural networks. Concentration field predictions by the deep learning model conserve phase fraction, correctly predict phase transition, and reproduce area, perimeter, and total free energy distributions up to 98% accuracy.", "title": "" } ]
scidocsrr
0f1ae26827d07ebe752c0a88308a6659
A Measure for Objective Evaluation of Image Segmentation Algorithms
[ { "docid": "db8325925cb9fd1ebdcf7480735f5448", "text": "A general nonparametric technique is proposed for the analysis of a complex multimodal feature space and to delineate arbitrarily shaped clusters in it. The basic computational module of the technique is an old pattern recognition procedure, the mean shift. We prove for discrete data the convergence of a recursive mean shift procedure to the nearest stationary point of the underlying density function and thus its utility in detecting the modes of the density. The equivalence of the mean shift procedure to the Nadaraya–Watson estimator from kernel regression and the robust M-estimators of location is also established. Algorithms for two low-level vision tasks, discontinuity preserving smoothing and image segmentation are described as applications. In these algorithms the only user set parameter is the resolution of the analysis, and either gray level or color images are accepted as input. Extensive experimental results illustrate their excellent performance.", "title": "" } ]
[ { "docid": "ccc3c2ee7a08eb239443d5773707d782", "text": "We introduce an iterative normalization and clustering method for single-cell gene expression data. The emerging technology of single-cell RNA-seq gives access to gene expression measurements for thousands of cells, allowing discovery and characterization of cell types. However, the data is confounded by technical variation emanating from experimental errors and cell type-specific biases. Current approaches perform a global normalization prior to analyzing biological signals, which does not resolve missing data or variation dependent on latent cell types. Our model is formulated as a hierarchical Bayesian mixture model with cell-specific scalings that aid the iterative normalization and clustering of cells, teasing apart technical variation from biological signals. We demonstrate that this approach is superior to global normalization followed by clustering. We show identifiability and weak convergence guarantees of our method and present a scalable Gibbs inference algorithm. This method improves cluster inference in both synthetic and real single-cell data compared with previous methods, and allows easy interpretation and recovery of the underlying structure and cell types.", "title": "" }, { "docid": "e5874c373f9bc4565249f335560023ff", "text": "We propose a multi-wing harmonium model for mining multimedia data that extends and improves on earlier models based on two-layer random fields, which capture bidirectional dependencies between hidden topic aspects and observed inputs. This model can be viewed as an undirected counterpart of the two-layer directed models such as LDA for similar tasks, but bears significant difference in inference/learning cost tradeoffs, latent topic representations, and topic mixing mechanisms. In particular, our model facilitates efficient inference and robust topic mixing, and potentially provides high flexibilities in modeling the latent topic spaces. A contrastive divergence and a variational algorithm are derived for learning. We specialized our model to a dual-wing harmonium for captioned images, incorporating a multivariate Poisson for word-counts and a multivariate Gaussian for color histogram. We present empirical results on the applications of this model to classification, retrieval and image annotation on news video collections, and we report an extensive comparison with various extant models.", "title": "" }, { "docid": "dd545adf1fba52e794af4ee8de34fc60", "text": "We propose solving continuous parametric simulation optimizations using a deterministic nonlinear optimization algorithm and sample-path simulations. The optimization problem is written in a modeling language with a simulation module accessed with an external function call. Since we allow no changes to the simulation code at all, we propose using a quadratic approximation of the simulation function to obtain derivatives. Results on three different queueing models are presented that show our method to be effective on a variety of practical problems.", "title": "" }, { "docid": "ad9536f85fd5996bd6457b8ed40e11d7", "text": "Unlike standard cameras that send intensity images at a constant frame rate, event-driven cameras asynchronously report pixel-level brightness changes, offering low latency and high temporal resolution (both in the order of micro-seconds). As such, they have great potential for fast and low power vision algorithms for robots. 
Visual tracking, for example, is easily achieved even for very fast stimuli, as only moving objects cause brightness changes. However, cameras mounted on a moving robot are typically non-stationary and the same tracking problem becomes confounded by background clutter events due to the robot ego-motion. In this paper, we propose a method for segmenting the motion of an independently moving object for event-driven cameras. Our method detects and tracks corners in the event stream and learns the statistics of their motion as a function of the robot's joint velocities when no independently moving objects are present. During robot operation, independently moving objects are identified by discrepancies between the predicted corner velocities from ego-motion and the measured corner velocities. We validate the algorithm on data collected from the neuromorphic iCub robot. We achieve a precision of ∼ 90% and show that the method is robust to changes in speed of both the head and the target.", "title": "" }, { "docid": "d319a17ad2fa46e0278e0b0f51832f4b", "text": "Automatic Essay Assessor (AEA) is a system that utilizes information retrieval techniques such as Latent Semantic Analysis (LSA), Probabilistic Latent Semantic Analysis (PLSA), and Latent Dirichlet Allocation (LDA) for automatic essay grading. The system uses learning materials and relatively few teacher-graded essays for calibrating the scoring mechanism before grading. We performed a series of experiments using LSA, PLSA and LDA for document comparisons in AEA. In addition to comparing the methods on a theoretical level, we compared the applicability of LSA, PLSA, and LDA to essay grading with empirical data. The results show that the use of learning materials as training data for the grading model outperforms the k-NN-based grading methods. In addition to this, we found that using LSA yielded slightly more accurate grading than PLSA and LDA. We also found that the division of the learning materials in the training data is crucial. It is better to divide learning materials into sentences than paragraphs.", "title": "" }, { "docid": "06c839f10b3d561c3a327bb67aa8ec10", "text": "A great deal of research exists on the neural basis of theory-of-mind (ToM) or mentalizing. Qualitative reviews on this topic have identified a mentalizing network composed of the medial prefrontal cortex, posterior cingulate/precuneus, and bilateral temporal parietal junction. These conclusions, however, are not based on a quantitative and systematic approach. The current review presents a quantitative meta-analysis of neuroimaging studies pertaining to ToM, using the activation-likelihood estimation (ALE) approach. Separate ALE meta-analyses are presented for story-based and nonstory-based studies of ToM. The conjunction of these two meta-analyses reveals a core mentalizing network that includes areas not typically noted by previous reviews. A third ALE meta-analysis was conducted with respect to story comprehension in order to examine the relation between ToM and stories. Story processing overlapped with many regions of the core mentalizing network, and these shared regions bear some resemblance to a network implicated by a number of other processes.", "title": "" }, { "docid": "cbf5c00229e9ac591183f4877006cf2b", "text": "OBJECTIVE\nTo statistically analyze the long-term results of alar base reduction after rhinoplasty.\n\n\nMETHODS\nAmong a consecutive series of 100 rhinoplasty cases, 19 patients required alar base reduction. 
The mean (SD) follow-up time was 11 (9) months (range, 2 months to 3 years). Using preoperative and postoperative photographs, comparisons were made of the change in the base width (width of base between left and right alar-facial junctions), flare width (width on base view between points of widest alar flare), base height (distance from base to nasal tip on base view), nostril height (distance from base to anterior edge of nostril), and vertical flare (vertical distance from base to the widest alar flare). Notching at the nasal sill was recorded as none, minimal, mild, moderate, and severe.\n\n\nRESULTS\nChanges in vertical flare (P<.05) and nostril height (P<.05) were the only significant differences seen in the patients who required alar reduction. No significant change was seen in base width (P=.92), flare width (P=.41), or base height (P=.22). No notching was noted.\n\n\nCONCLUSIONS\nIt would have been preferable to study patients undergoing alar reduction without concomitant rhinoplasty procedures, but this approach is not practical. To our knowledge, the present study represents the most extensive attempt in the literature to characterize and quantify the postoperative effects of alar base reduction.", "title": "" }, { "docid": "2e3c1fc6daa33ee3a4dc3fe1e11a3c21", "text": "Cloud computing technologies have matured enough that the service providers are compelled to migrate their services to virtualized infrastructure in cloud data centers. However, moving the computation and network to shared physical infrastructure poses a multitude of questions, both for service providers and for data center owners. In this work, we propose HyViDE - a framework for optimal placement of multiple virtual data center networks on a physical data center network. HyViDE preselects a subset of virtual data center network requests and uses a hybrid strategy for embedding them on the physical data center. Coordinated static and dynamic embedding algorithms are used in this hybrid framework to minimize the rejection of requests and fulfill QoS demands of the embedded networks. HyViDE can employ suitable static and dynamic strategies to meet the objectives of data center owners and customers. Experimental evaluation of our algorithms on HyViDE shows that, the acceptance rate is high with faster servicing of requests.", "title": "" }, { "docid": "0cd5813a069c8955871784cd3e63aa83", "text": "Fundamental observations and principles derived from traditional physiological studies of multisensory integration have been difficult to reconcile with computational and psychophysical studies that share the foundation of probabilistic (Bayesian) inference. We review recent work on multisensory integration, focusing on experiments that bridge single-cell electrophysiology, psychophysics, and computational principles. These studies show that multisensory (visual-vestibular) neurons can account for near-optimal cue integration during the perception of self-motion. Unlike the nonlinear (superadditive) interactions emphasized in some previous studies, visual-vestibular neurons accomplish near-optimal cue integration through subadditive linear summation of their inputs, consistent with recent computational theories. 
Important issues remain to be resolved, including the observation that variations in cue reliability appear to change the weights that neurons apply to their different sensory inputs.", "title": "" }, { "docid": "08fee0a21076c8a1d65eb7fc0f88610f", "text": "We propose Smells Phishy?, a board game that contributes to raising users' awareness of online phishing scams. We designed and developed the board game and conducted user testing with 21 participants. The results showed that after playing the game, participants had better understanding of phishing scams and learnt how to better protect themselves. Participants enjoyed playing the game and said that it was a fun and exciting experience. The game increased knowledge and awareness, and encouraged discussion.", "title": "" }, { "docid": "8b1276b7d74230748bdb60930dbc45a5", "text": "The debate continues around transconjunctival versus transcutaneous approaches. Despite the perceived safety of the former, many experienced surgeons continue to advocate the latter. This review aims to present a balanced view of each approach. It will first address the anatomic basis of lower lid aging and then organize recent literature and associated discussion into the transconjunctival and transcutaneous approaches. The integrated algorithm employed by the senior author will be presented. Finally this review will describe less mainstream suture techniques for lower lid rejuvenation and lower lid blepharoplasty complications with a focus upon lower lid malposition.", "title": "" }, { "docid": "49ef68eabca989e07f420a3a88386c77", "text": "Identifying the language used will typically be the first step in most natural language processing tasks. Among the wide variety of language identification methods discussed in the literature, the ones employing the Cavnar and Trenkle (1994) approach to text categorization based on character n-gram frequencies have been particularly successful. This paper presents the R extension package textcat for n-gram based text categorization which implements both the Cavnar and Trenkle approach as well as a reduced n-gram approach designed to remove redundancies of the original approach. A multi-lingual corpus obtained from the Wikipedia pages available on a selection of topics is used to illustrate the functionality of the package and the performance of the provided language identification methods.", "title": "" }, { "docid": "e75df6ff31c9840712cf1a4d7f6582cd", "text": "Endotoxin, a constituent of Gram-negative bacteria, stimulates macrophages to release large quantities of tumor necrosis factor (TNF) and interleukin-1 (IL-1), which can precipitate tissue injury and lethal shock (endotoxemia). Antagonists of TNF and IL-1 have shown limited efficacy in clinical trials, possibly because these cytokines are early mediators in pathogenesis. Here a potential late mediator of lethality is identified and characterized in a mouse model. High mobility group-1 (HMG-1) protein was found to be released by cultured macrophages more than 8 hours after stimulation with endotoxin, TNF, or IL-1. Mice showed increased serum levels of HMG-1 from 8 to 32 hours after endotoxin exposure. Delayed administration of antibodies to HMG-1 attenuated endotoxin lethality in mice, and administration of HMG-1 itself was lethal. 
Septic patients who succumbed to infection had increased serum HMG-1 levels, suggesting that this protein warrants investigation as a therapeutic target.", "title": "" }, { "docid": "3f68334f7f315921390d385ad45d8aaf", "text": "UNLABELLED\nAcarbose is an α-glucosidase inhibitor produced by Actinoplanes sp. SE50/110 that is medically important due to its application in the treatment of type2 diabetes. In this work, a comprehensive proteome analysis of Actinoplanes sp. SE50/110 was carried out to determine the location of proteins of the acarbose (acb) and the putative pyochelin (pch) biosynthesis gene cluster. Therefore, a comprehensive state-of-the-art proteomics approach combining subcellular fractionation, shotgun proteomics and spectral counting to assess the relative abundance of proteins within fractions was applied. The analysis of four different proteome fractions (cytosolic, enriched membrane, membrane shaving and extracellular fraction) resulted in the identification of 1582 of the 8270 predicted proteins. All 22 Acb-proteins and 21 of the 23 Pch-proteins were detected. Predicted membrane-associated, integral membrane or extracellular proteins of the pch and the acb gene cluster were found among the most abundant proteins in corresponding fractions. Intracellular biosynthetic proteins of both gene clusters were not only detected in the cytosolic, but also in the enriched membrane fraction, indicating that the biosynthesis of acarbose and putative pyochelin metabolites takes place at the inner membrane.\n\n\nBIOLOGICAL SIGNIFICANCE\nActinoplanes sp. SE50/110 is a natural producer of the α-glucosidase inhibitor acarbose, a bacterial secondary metabolite that is used as a drug for the treatment of type 2 diabetes, a disease which is a global pandemic that currently affects 387 million people and accounts for 11% of worldwide healthcare expenditures (www.idf.org). The work presented here is the first comprehensive investigation of protein localization and abundance in Actinoplanes sp. SE50/110 and provides an extensive source of information for the selection of genes for future mutational analysis and other hypothesis driven experiments. The conclusion that acarbose or pyochelin family siderophores are synthesized at the inner side of the cytoplasmic membrane determined from this work, indicates that studying corresponding intermediates will be challenging. In addition to previous studies on the genome and transcriptome, the work presented here demonstrates that the next omic level, the proteome, is now accessible for detailed physiological analysis of Actinoplanes sp. SE50/110, as well as mutants derived from this and related species.", "title": "" }, { "docid": "86af81e39bce547a3f29b4851d033356", "text": "Empirical studies largely support the continuity hypothesis of dreaming. Despite of previous research efforts, the exact formulation of the continuity hypothesis remains vague. The present paper focuses on two aspects: (1) the differential incorporation rate of different waking-life activities and (2) the magnitude of which interindividual differences in waking-life activities are reflected in corresponding differences in dream content. Using a correlational design, a positive, non-zero correlation coefficient will support the continuity hypothesis. Although many researchers stress the importance of emotional involvement on the incorporation rate of waking-life experiences into dreams, formulated the hypothesis that highly focused cognitive processes such as reading, writing, etc. 
are rarely found in dreams due to the cholinergic activation of the brain during dreaming. The present findings based on dream diaries and the exact measurement of waking activities replicated two recent questionnaire studies. These findings indicate that it will be necessary to specify the continuity hypothesis more fully and include factors (e.g., type of waking-life experience, emotional involvement) which modulate the incorporation rate of waking-life experiences into dreams. Whether the cholinergic state of the brain during REM sleep or other alterations of brain physiology (e.g., down-regulation of the dorsolateral prefrontal cortex) are the underlying factors of the rare occurrence of highly focused cognitive processes in dreaming remains an open question. Although continuity between waking life and dreaming has been demonstrated, i.e., interindividual differences in the amount of time spent with specific waking-life activities are reflected in dream content, methodological issues (averaging over a two-week period, small number of dreams) have limited the capacity for detecting substantial relationships in all areas. Nevertheless, it might be concluded that the continuity hypothesis in its present general form is not valid and should be elaborated and tested in a more specific way.", "title": "" }, { "docid": "1e320f6c5ce9240f580aeb32a47619a1", "text": "The human gut is populated with as many as 100 trillion cells, whose collective genome, the microbiome, is a reflection of evolutionary selection pressures acting at the level of the host and at the level of the microbial cell. The ecological rules that govern the shape of microbial diversity in the gut apply to mutualists and pathogens alike.", "title": "" }, { "docid": "cb641fc639b86abadec4f85efc226c14", "text": "The modernization of the US electric power infrastructure, especially in light of its aging, overstressed networks; shifts in social, energy and environmental policies; and new vulnerabilities, is a national concern. Our systems are required to be more adaptive and secure than ever before. Consumers are also demanding increased power quality and reliability of supply and delivery. As such, power industries, government and national laboratories and consortia have developed increased interest in what is now called the Smart Grid of the future. The paper outlines Smart Grid intelligent functions that advance interactions of agents such as telecommunication, control, and optimization to achieve adaptability, self-healing, efficiency and reliability of power systems. The author also presents a special case for the development of Dynamic Stochastic Optimal Power Flow (DSOPF) technology as a tool needed in Smart Grid design. The integration of DSOPF to achieve the design goals with advanced DMS capabilities is discussed herein. This reference paper also outlines a research focus for developing the next generation of advanced tools for efficient and flexible power systems operation and control.", "title": "" }, { "docid": "15d3618efa3413456c6aebf474b18c92", "text": "The aim of this paper is to elucidate the implications of quantum computing in present cryptography and to introduce the reader to basic post-quantum algorithms. 

In particular the reader can delve into the following subjects: present cryptographic schemes (symmetric and asymmetric), differences between quantum and classical computing, challenges in quantum computing, quantum algorithms (Shor’s and Grover’s), public key encryption schemes affected, symmetric schemes affected, the impact on hash functions, and post quantum cryptography. Specifically, the section of Post-Quantum Cryptography deals with different quantum key distribution methods and mathematical-based solutions, such as the BB84 protocol, lattice-based cryptography, multivariate-based cryptography, hash-based signatures and code-based cryptography. Keywords—quantum computers; post-quantum cryptography; Shor’s algorithm; Grover’s algorithm; asymmetric cryptography; symmetric cryptography", "title": "" }, { "docid": "57d6a2056453baf04aae577e4a2c048a", "text": "Community detection is an important issue in social network analysis. Most existing methods detect communities through analyzing the linkage of the network. The drawback is that each community identified by those methods can only reflect the strength of connections, but it cannot reflect the semantics such as the interesting topics shared by people. To address this problem, we propose a topic oriented community detection approach which combines both social objects clustering and link analysis. We first use a subspace clustering algorithm to group all the social objects into topics. Then we divide the members that are involved in those social objects into topical clusters, each corresponding to a distinct topic. In order to differentiate the strength of connections, we perform a link analysis on each topical cluster to detect the topical communities. Experiments on real data sets have shown that our approach was able to identify more meaningful communities. The quantitative evaluation indicated that our approach can achieve a better performance when the topics are at least as important as the links to the analysis.", "title": "" }, { "docid": "2b2c30fa2dc19ef7c16cf951a3805242", "text": "A standard approach to estimating online click-based metrics of a ranking function is to run it in a controlled experiment on live users. While reliable and popular in practice, configuring and running an online experiment is cumbersome and time-intensive. In this work, inspired by recent successes of offline evaluation techniques for recommender systems, we study an alternative that uses historical search log to reliably predict online click-based metrics of a new ranking function, without actually running it on live users. To tackle novel challenges encountered in Web search, variations of the basic techniques are proposed. The first is to take advantage of diversified behavior of a search engine over a long period of time to simulate randomized data collection, so that our approach can be used at very low cost. The second is to replace exact matching (of recommended items in previous work) by fuzzy matching (of search result pages) to increase data efficiency, via a better trade-off of bias and variance. 

Extensive experimental results based on large-scale real search data from a major commercial search engine in the US market demonstrate our approach is promising and has potential for wide use in Web search.", "title": "" } ]
scidocsrr
93c558a7adca8ac67221fda4bf4d8a89
Common Elements Wideband MIMO Antenna System for WiFi/LTE Access-Point Applications
[ { "docid": "38f6aaf5844ddb6e4ed0665559b7f813", "text": "A novel dual-broadband multiple-input-multiple-output (MIMO) antenna system is developed. The MIMO antenna system consists of two dual-broadband antenna elements, each of which comprises two opened loops: an outer loop and an inner loop. The opened outer loop acts as a half-wave dipole and is excited by electromagnetic coupling from the inner loop, leading to a broadband performance for the lower band. The opened inner loop serves as two monopoles. A combination of the two monopoles and the higher modes from the outer loop results in a broadband performance for the upper band. The bandwidths (return loss >10 dB) achieved for the dual-broadband antenna element are 1.5-2.8 GHz (~ 60%) for the lower band and 4.7-8.5 GHz (~ 58%) for the upper band. Two U-shaped slots are introduced to reduce the coupling between the two dual-broadband antenna elements. The isolation achieved is higher than 15 dB in the lower band and 20 dB in the upper band, leading to an envelope correlation coefficient of less than 0.01. The dual-broadband MIMO antenna system has a compact volume of 50×17×0.8 mm³, suitable for GSM/UMTS/LTE and WLAN communication handsets.", "title": "" }, { "docid": "ecfd9b38cc68c4af9addb4915424d6d0", "text": "The conditions for antenna diversity action are investigated. In terms of the fields, a condition is shown to be that the incident field and the far field of the diversity antenna should obey (or nearly obey) an orthogonality relationship. The role of mutual coupling is central, and it is different from that in a conventional array antenna. In terms of antenna parameters, a sufficient condition for diversity action for a certain class of high gain antennas at the mobile, which approximates most practical mobile antennas, is shown to be zero (or low) mutual resistance between elements. This is not the case at the base station, where the condition is necessary only. The mutual resistance condition offers a powerful design tool, and examples of new mobile diversity antennas are discussed along with some existing designs.", "title": "" } ]
[ { "docid": "b56d144f1cda6378367ea21e9c76a39e", "text": "The main objective of our work has been to develop and then propose a new and unique methodology useful in developing the various features of heart rate variability (HRV) and carotid arterial wall thickness helpful in diagnosing cardiovascular disease. We also propose a suitable prediction model to enhance the reliability of medical examinations and treatments for cardiovascular disease. We analyzed HRV for three recumbent postures. The interaction effects between the recumbent postures and groups of normal people and heart patients were observed based on HRV indexes. We also measured intima-media of carotid arteries and used measurements of arterial wall thickness as other features. Patients underwent carotid artery scanning using high-resolution ultrasound devised in a previous study. In order to extract various features, we tested six classification methods. As a result, CPAR and SVM (with about 85%-90% goodness of fit) outperformed the other classifiers.", "title": "" }, { "docid": "c4df4e0f9a77328ed5c81c124dbe643b", "text": "In this paper, the bridgeless interleaved boost topology is proposed for plug-in hybrid electric vehicle and electric vehicle battery chargers to achieve high efficiency, which is critical to minimize the charger size, charging time and the amount and cost of electricity drawn from the utility. An analytical model for this topology is developed, enabling the calculation of power losses and efficiency. Experimental and simulation results of prototype units converting the universal AC input voltage to 400 V DC at 3.4 kW are given to verify the proof of concept and the analytical work reported in this paper.", "title": "" }, { "docid": "4a5cfc32cccc96c49739cc49f311ddb4", "text": "We present an approach for creating realistic synthetic views of existing architectural scenes from a sparse set of still photographs. Our approach, which combines both geometry-based and image-based modeling and rendering techniques, has two components. The first component is an easy-to-use photogrammetric modeling system which facilitates the recovery of a basic geometric model of the photographed scene. The modeling system is effective and robust because it exploits the constraints that are characteristic of architectural scenes. The second component is a model-based stereo algorithm, which recovers how the real scene deviates from the basic model. By making use of the model, our stereo approach can robustly recover accurate depth from image pairs with large baselines. Consequently, our approach can model large architectural environments with far fewer photographs than current image-based modeling approaches. As an intermediate result, we present view-dependent texture mapping, a method of better simulating geometric detail on basic models. Our approach can recover models for use in either geometry-based or image-based rendering systems. We present results that demonstrate our approach's ability to create realistic renderings of architectural scenes from viewpoints far from the original photographs.", "title": "" }, { "docid": "1c075aac5462cf6c6251d6c9c1a679c0", "text": "Why You Can’t Find a Taxi in the Rain and Other Labor Supply Lessons from Cab Drivers In a seminal paper, Camerer, Babcock, Loewenstein, and Thaler (1997) find that the wage elasticity of daily hours of work of New York City (NYC) taxi drivers is negative and conclude that their labor supply behavior is consistent with target earning (having reference dependent preferences). 

I replicate and extend the CBLT analysis using data from all trips taken in all taxi cabs in NYC for the five years from 2009-2013. Using the model of expectations-based reference points of Koszegi and Rabin (2006), I distinguish between anticipated and unanticipated daily wage variation and present evidence that only a small fraction of wage variation (about 1/8) is unanticipated so that reference dependence (which is relevant only in response to unanticipated variation) can, at best, play a limited role in determining labor supply. The overall pattern in my data is clear: drivers tend to respond positively to unanticipated as well as anticipated increases in earnings opportunities. This is consistent with the neoclassical optimizing model of labor supply and does not support the reference dependent preferences model. I explore heterogeneity across drivers in their labor supply elasticities and consider whether new drivers differ from more experienced drivers in their behavior. I find substantial heterogeneity across drivers in their elasticities, but the estimated elasticities are generally positive and only rarely substantially negative. I also find that new drivers with smaller elasticities are more likely to exit the industry while drivers who remain learn quickly to be better optimizers (have positive labor supply elasticities that grow with experience). JEL Classification: J22, D01, D03", "title": "" }, { "docid": "7b02c36cef0c195d755b6cc1c7fbda2e", "text": "Content based object retrieval across large scale surveillance video dataset is a significant and challenging task, in which learning an effective compact object descriptor plays a critical role. In this paper, we propose an efficient deep compact descriptor with bagging auto-encoders. Specifically, we take advantage of discriminative CNN to extract efficient deep features, which not only involve rich semantic information but also can filter background noise. Besides, to boost the retrieval speed, auto-encoders are used to map the high-dimensional real-valued CNN features into short binary codes. Considering the instability of auto-encoder, we adopt a bagging strategy to fuse multiple auto-encoders to reduce the generalization error, thus further improving the retrieval accuracy. In addition, bagging is easy for parallel computing, so retrieval efficiency can be guaranteed. Retrieval experimental results on the dataset of 100k visual objects extracted from multi-camera surveillance videos demonstrate the effectiveness of the proposed deep compact descriptor.", "title": "" }, { "docid": "574282b45a87abf6e8478886c0400244", "text": "A mobile wireless sensor network owes its name to the presence of mobile sink or sensor nodes within the network. The advantages of mobile WSN over static WSN are better energy efficiency, improved coverage, enhanced target tracking and superior channel capacity. In this paper we present and discuss hierarchical multi-tiered architecture for mobile wireless sensor network. This architecture is proposed for the future pervasive computing age. We also elaborate on the impact of mobility on different performance metrics in mobile WSN. A study of some of the possible application scenarios for pervasive computing involving mobile WSN is also presented. These application scenarios will be discussed in their implementation context. While discussing the possible applications, we also study related technologies that appear promising to be integrated with mobile WSN in the ubiquitous computing. 
With an enormous growth in the number of cellular subscribers, we therefore place the mobile phone as the key element in future ubiquitous wireless networks. With the powerful computing, communicating and storage capacities of these mobile devices, the network performance can benefit from the architecture in terms of scalability, energy efficiency and packet delay, etc.", "title": "" }, { "docid": "aaba5dc8efc9b6a62255139965b6f98d", "text": "The interaction of an autonomous mobile robot with the real world critically depends on the robot's morphology and on its environment. Building a model of these aspects is extremely complex, making simulation insufficient for accurate validation of control algorithms. While simulation environments are often very efficient, the tools for experimenting with real robots are often inadequate. The traditional programming languages and tools seldom provide enough support for real-time experiments, thus hindering the understanding of the control algorithms and making the experimentation complex and time-consuming. A miniature robot is presented: it has a cylindrical shape measuring 55 mm in diameter and 30 mm in height. Due to its small size, experiments can be performed quickly and cost-effectively in a small working area. Small peripherals can be designed and connected to the basic module and can take advantage of a versatile communication scheme. A serial-link is provided to run control algorithms on a workstation during debugging, thereby giving the user the opportunity of employing all available graphical tools. Once debugged, the algorithm can be downloaded to the robot and run on its own processor. Experimentation with groups of robots is hardly possible with commercially available hardware. The size and the price of the described robot open the way to cost-effective investigations into collective behaviour. This aspect of research drives the design of the robot described in this paper. Experiments with some twenty units are planned for the near future.", "title": "" }, { "docid": "65a8c1faa262cd428045854ffcae3fae", "text": "Extracting named entities in text and linking extracted names to a given knowledge base are fundamental tasks in applications for text understanding. Existing systems typically run a named entity recognition (NER) model to extract entity names first, then run an entity linking model to link extracted names to a knowledge base. NER and linking models are usually trained separately, and the mutual dependency between the two tasks is ignored. We propose JERL, Joint Entity Recognition and Linking, to jointly model NER and linking tasks and capture the mutual dependency between them. It allows the information from each task to improve the performance of the other. To the best of our knowledge, JERL is the first model to jointly optimize NER and linking tasks together completely. In experiments on the CoNLL’03/AIDA data set, JERL outperforms state-of-the-art NER and linking systems, and we find improvements of 0.4% absolute F1 for NER on CoNLL’03, and 0.36% absolute precision@1 for linking on AIDA.", "title": "" }, { "docid": "9f87424062c624bc417f848cc2f33bf3", "text": "Sentiment mining is a fast-growing topic of both academic research and commercial applications, especially with the widespread use of short-text applications on the Web. A fundamental problem that confronts sentiment mining is the automation and correctness of mined sentiment. 

This paper proposes a DLDA (Double Latent Dirichlet Allocation) model to analyze sentiment for short texts based on topic models. Central to DLDA is adding sentiment to the topic model and treating sentiment as equal to, but independent of, topic. DLDA actually comprises two methods, DLDA I and its improvement DLDA II. Compared to the single topic-word LDA, the double LDA (DLDA I) designs an additional sentiment-word LDA. Both LDAs are independent of each other, but they combine to influence the selected words in short texts. DLDA II is an improvement of DLDA I. It employs an entropy formula to assign weights to words in the Gibbs sampling, based on the idea that words with stronger sentiment orientation should be assigned higher weights. Experiments show that, compared with other traditional topic methods, both DLDA I and II can achieve higher accuracy with less manual effort.", "title": "" }, { "docid": "815215b56160ab38745fded16edd31d6", "text": "Object detection in videos has drawn increasing attention recently with the introduction of the large-scale ImageNet VID dataset. Different from object detection in static images, temporal information in videos is vital for object detection. To fully utilize temporal information, state-of-the-art methods [15, 14] are based on spatiotemporal tubelets, which are essentially sequences of associated bounding boxes across time. However, the existing methods have major limitations in generating tubelets in terms of quality and efficiency. Motion-based [14] methods are able to obtain dense tubelets efficiently, but the lengths are generally only several frames, which is not optimal for incorporating long-term temporal information. Appearance-based [15] methods, usually involving generic object tracking, could generate long tubelets, but are usually computationally expensive. In this work, we propose a framework for object detection in videos, which consists of a novel tubelet proposal network to efficiently generate spatiotemporal proposals, and a Long Short-term Memory (LSTM) network that incorporates temporal information from tubelet proposals for achieving high object detection accuracy in videos. Experiments on the large-scale ImageNet VID dataset demonstrate the effectiveness of the proposed framework for object detection in videos.", "title": "" }, { "docid": "72c054c955a34fbac8e798665ece8f57", "text": "In this paper, we propose and empirically validate a suite of hotspot patterns: recurring architecture problems that occur in most complex systems and incur high maintenance costs. In particular, we introduce two novel hotspot patterns, Unstable Interface and Implicit Cross-module Dependency. These patterns are defined based on Baldwin and Clark's design rule theory, and detected by the combination of history and architecture information. Through our tool-supported evaluations, we show that these patterns not only identify the most error-prone and change-prone files, they also pinpoint specific architecture problems that may be the root causes of bug-proneness and change-proneness. Significantly, we show that 1) these structure-history integrated patterns contribute more to error- and change-proneness than other hotspot patterns, and 2) the more hotspot patterns a file is involved in, the more error- and change-prone it is. Finally, we report on an industrial case study to demonstrate the practicality of these hotspot patterns.
The architect and developers confirmed that our hotspot detector discovered the majority of the architecture problems causing maintenance pain, and they have started to improve the system's maintainability by refactoring and fixing the identified architecture issues.", "title": "" }, { "docid": "1cf73c4949ad0c610e90a172b02803e4", "text": "BACKGROUND\nTo date the manner in which information reaches the nucleus on that part within the three-dimensional structure where specific restorative processes of structural components of the cell are required is unknown. The soluble signalling molecules generated in the course of destructive and restorative processes communicate only as needed.\n\n\nHYPOTHESIS\nAll molecules show temperature-dependent molecular vibration creating a radiation in the infrared region. Each molecule species has in its turn a specific frequency pattern under given specific conditions. Changes in their structural composition result in modified frequency patterns of the molecules in question. The main structural elements of the cell membrane, of the endoplasmic reticulum, of the Golgi apparatus, and of the different microsomes representing the great variety of polar lipids show characteristic frequency patterns with peaks in the region characterised by low water absorption. These structural elements are very dynamic, mainly caused by the creation of signal molecules and transport containers. By means of the characteristic radiation, the area where repair or substitution services are needed could be identified; this spatial information complements the signalling of the soluble signal molecules. Based on their resonance properties receptors located on the outer leaflet of the nuclear envelope should be able to read typical frequencies and pass them into the nucleus. Clearly this physical signalling must be blocked by the cell membrane to obviate the flow of information into adjacent cells.\n\n\nCONCLUSION\nIf the hypothesis can be proved experimentally, it should be possible to identify and verify characteristic infrared frequency patterns. The application of these signal frequencies onto cells would open entirely new possibilities in medicine and all biological disciplines specifically to influence cell growth and metabolism. Similar to this intracellular system, an extracellular signalling system with many new therapeutic options has to be discussed.", "title": "" }, { "docid": "53c0564d82737d51ca9b7ea96a624be4", "text": "In part 1 of this article, an occupational therapy model of practice for children with attention deficit hyperactivity disorder (ADHD) was described (Chu and Reynolds 2007). It addressed some specific areas of human functioning related to children with ADHD in order to guide the practice of occupational therapy. The model provides an approach to identifying and communicating occupational performance difficulties in relation to the interaction between the child, the environment and the demands of the task. A family-centred occupational therapy assessment and treatment package based on the model was outlined. The delivery of the package was underpinned by the principles of the family-centred care approach. Part 2 of this two-part article reports on a multicentre study, which was designed to evaluate the effectiveness and acceptability of the proposed assessment and treatment package and thereby to offer some validation of the delineation model. 
It is important to note that no treatment has yet been proved to ‘cure’ the condition of ADHD or to produce any enduring effects in affected children once the treatment is withdrawn. So far, the only empirically validated treatments for children with ADHD with substantial research evidence are psychostimulant medication, behavioural and educational management, and combined medication and behavioural management (DuPaul and Barkley 1993, A family-centred occupational therapy assessment and treatment package for children with attention deficit hyperactivity disorder (ADHD) was evaluated. The package involves a multidimensional evaluation and a multifaceted intervention, which are aimed at achieving a goodness-of-fit between the child, the task demands and the environment in which the child carries out the task. The package lasts for 3 months, with 12 weekly contacts with the child, parents and teacher. A multicentre study was carried out, with 20 occupational therapists participating. Following a 3-day training course, they implemented the package and supplied the data that they had collected from 20 children. The outcomes were assessed using the ADHD Rating Scales, pre-intervention and post-intervention. The results showed behavioural improvement in the majority of the children. The Measure of Processes of Care – 20-item version (MPOC-20) provided data on the parents’ perceptions of the family-centredness of the package and also showed positive ratings. The results offer some support for the package and the guiding model of practice, but caution should be exercised in generalising the results because of the small sample size, lack of randomisation, absence of a control group and potential experimenter effects from the research therapists. A larger-scale randomised controlled trial should be carried out to evaluate the efficacy of an improved package.", "title": "" }, { "docid": "7d03c3e0e20b825809bebb5b2da1baed", "text": "Flexoelectricity and the concomitant emergence of electromechanical size-effects at the nanoscale have been recently exploited to propose tantalizing concepts such as the creation of “apparently piezoelectric” materials without piezoelectric materials, e.g. graphene, emergence of “giant” piezoelectricity at the nanoscale, enhanced energy harvesting, among others. The aforementioned developments pertain primarily to hard ceramic crystals. In this work, we develop a nonlinear theoretical framework for flexoelectricity in soft materials. Using the concept of soft electret materials, we illustrate an interesting nonlinear interplay between the so-called Maxwell stress effect and flexoelectricity, and propose the design of a novel class of apparently piezoelectric materials whose constituents are intrinsically non-piezoelectric. In particular, we show that the electret-Maxwell stress based mechanism can be combined with flexoelectricity to achieve unprecedentedly high values of electromechanical coupling. Flexoelectricity is also important for a special class of soft materials: biological membranes. In this context, flexoelectricity manifests itself as the development of polarization upon changes in curvature. Flexoelectricity is found to be important in a number of biological functions including hearing, ion transport and in some situations where mechanotransduction is necessary. In this work, we present a simple linearized theory of flexoelectricity in biological membranes and some illustrative examples. & 2013 Elsevier Ltd. 
All rights reserved.", "title": "" }, { "docid": "5116079b69aeb1858177429fabd10f80", "text": "Deep convolutional neural networks (CNN) have shown their promise as a universal representation for recognition. However, global CNN activations at present lack geometric invariance, which limits their robustness for tasks such as classification and matching of highly variable scenes. To improve the invariance of CNN activations without degrading their discriminative power, this paper presents a simple but effective scheme called multi-scale orderless pooling (or MOP-CNN for short). This approach works by extracting CNN activations for local patches at multiple scales, followed by orderless VLAD pooling of these activations at each scale level and concatenating the result. This feature representation decisively outperforms global CNN activations and achieves state-of-the-art performance for scene classification on such challenging benchmarks as SUN397, MIT Indoor Scenes, and ILSVRC2012, as well as for instance-level retrieval on the Holidays dataset.", "title": "" }, { "docid": "cb6c4f97fcefa003e890c8c4a97ff34b", "text": "When interacting and communicating with virtual agents in immersive environments, the agents’ behavior should be believable and authentic. Thereby, one important aspect is a convincing auralization of their speech. In this work-in-progress paper a study design to evaluate the effect of adding directivity to speech sound source on the perceived social presence of a virtual agent is presented. Therefore, we describe the study design and discuss first results of a prestudy as well as consequential improvements of the design.", "title": "" }, { "docid": "f68161697aed6d12598b0b9e34aeae68", "text": "Automation in agriculture comes into play to increase productivity, quality and economic growth of the country. Fruit grading is an important process for producers which affects the fruits quality evaluation and export market. Although the grading and sorting can be done by the human, but it is slow, labor intensive, error prone and tedious. Hence, there is a need of an intelligent fruit grading system. In recent years, researchers had developed numerous algorithms for fruit sorting using computer vision. Color, textural and morphological features are the most commonly used to identify the diseases, maturity and class of the fruits. Subsequently, these features are used to train soft computing technique network. In this paper, use of image processing in agriculture has been reviewed so as to provide an insight to the use of vision based systems highlighting their advantages and disadvantages.", "title": "" }, { "docid": "f5fdc2aac2caa3f8ac4648ebe599d707", "text": "This paper describes a Genetic Algorithms approach to a manpower-scheduling problem arising at a major UK hospital. Although Genetic Algorithms have been successfully used for similar problems in the past, they always had to overcome the limitations of the classical Genetic Algorithms paradigm in handling the conflict between objectives and constraints. The approach taken here is to use an indirect coding based on permutations of the nurses, and a heuristic decoder that builds schedules from these permutations. Computational experiments based on 52 weeks of live data are used to evaluate three different decoders with varying levels of intelligence, and four well-known crossover operators. 
Results are further enhanced by introducing a hybrid crossover operator and by making use of simple bounds to reduce the size of the solution space. The results reveal that the proposed algorithm is able to find high quality solutions and is both faster and more flexible than a recently published Tabu Search approach.", "title": "" }, { "docid": "21528ffae0a6e4bd4fe9acfce5660473", "text": "Ultrasound image quality is related to the receive beamformer’s ability. Delay and sum (DAS), a conventional beamformer, is combined with the coherence factor (CF) technique to suppress side lobe levels. The purpose of this study is to improve these beamformer’s abilities. It has been shown that extension of the receive aperture can improve the receive beamformer’s ability in radar studies. This paper shows that the focusing quality of CF and CF+DAS in medical ultrasound can be increased by extension of the receive aperture’s length in phased synthetic aperture (PSA) imaging. The 3-dB width of the main lobe in the receive beam related to CF focusing decreased to 0.55 mm using the proposed PSA compared to the conventional phased array (PHA) imaging, whose FWHM is about 0.9 mm. The clutter-to-total-energy ratio (CTR) represented by R20 dB showed an improvement of 50 and 33% for CF and CF+DAS beamformers, respectively, with PSA as compared to PHA. In addition, simulation results validated the effectiveness of PSA versus PHA. In applications where there are no important limitations on the SNR, PSA imaging is recommended as it increases the ability of the receive beamformer for better focusing.", "title": "" } ]
scidocsrr
06bb094ed964bfe2f811b3f64da3a733
Evaluating the robustness of repeated measures analyses: the case of small sample sizes and nonnormal data.
[ { "docid": "9e8e57ef22d3dfe139f4b9c9992b0884", "text": "It has been suggested that when the variance assumptions of a repeated measures ANOVA are not met, the df of the mean square ratio should be adjusted by the sample estimate of the Box correction factor, ε. This procedure works well when ε is low, but the estimate is seriously biased when this is not the case. An alternate estimate is proposed which is shown by Monte Carlo methods to be less biased for moderately large ε.", "title": "" } ]
[ { "docid": "10d01b461ed80fbca4340a193fe47701", "text": "Flight delays have a significant impact on the nation's economy. Taxi-out delays in particular constitute a significant portion of the block time of a flight. In the future, it can be expected that accurate predictions of 'wheels-off' time may be used in determining whether an aircraft can meet its allocated slot time, thereby fitting into an en-route traffic flow. Without an accurate taxi-out time prediction for departures, there is no way to effectively manage fuel consumption, emissions, or cost. Dynamically changing operations at the airport make it difficult to accurately predict taxi-out time. This paper describes a method for estimating average taxi-out times at the airport in 15-minute intervals of the day and at least 15 minutes in advance of an aircraft's scheduled gate push-back time. A probabilistic framework of stochastic dynamic programming with a learning-based solution strategy called Reinforcement Learning (RL) has been applied. Historic data from the Federal Aviation Administration's (FAA) Aviation System Performance Metrics (ASPM) database were used to train and test the algorithm. The algorithm was tested on John F. Kennedy International airport (JFK), one of the busiest, most challenging, and most difficult-to-predict airports in the United States, and one that significantly influences operations across the entire National Airspace System (NAS). Due to the nature of departure operations at JFK, the prediction accuracy of the algorithm for a given day was analyzed in two separate time periods: (1) before 4:00 P.M. and (2) after 4:00 P.M. On average across 15 days, the predicted average taxi-out times matched the actual average taxi-out times within ±5 minutes for about 65% of the time (for the period before 4:00 P.M.) and 53% of the time (for the period after 4:00 P.M.). The prediction accuracy over the entire day within the ±5-minute range was about 60%. Further, application of the RL algorithm to estimate taxi-out times at airports with multi-dependent static surface surveillance data will likely improve the accuracy of prediction. The implications of these results for airline operations and network flow planning are discussed.", "title": "" }, { "docid": "65e273d046a8120532d8cd04bcadca56", "text": "This paper explores the relationship between domain scheduling in a virtual machine monitor (VMM) and I/O performance. Traditionally, VMM schedulers have focused on fairly sharing the processor resources among domains while leaving the scheduling of I/O resources as a secondary concern. However, this can result in poor and/or unpredictable application performance, making virtualization less desirable for applications that require efficient and consistent I/O behavior.\n This paper is the first to study the impact of the VMM scheduler on performance using multiple guest domains concurrently running different types of applications. In particular, different combinations of processor-intensive, bandwidth-intensive, and latency-sensitive applications are run concurrently to quantify the impacts of different scheduler configurations on processor and I/O performance. These applications are evaluated on 11 different scheduler configurations within the Xen VMM. These configurations include a variety of scheduler extensions aimed at improving I/O performance.
This cross product of scheduler configurations and application types offers insight into the key problems in VMM scheduling for I/O and motivates future innovation in this area.", "title": "" }, { "docid": "7c4768707a3efd3791520576a8a78e23", "text": "The aim of this paper is to research the effectiveness of SMS verification by understanding the correlation between notification and verification of flood early warning messages. This study contributes to the design of the dissemination techniques for SMS as an early warning messages. The metrics used in this study are using user perceptions of tasks, which include the ease of use (EOU) perception for using SMS and confidence with SMS skills perception, as well as, the users' positive perceptions, which include users' perception of usefulness and satisfaction perception towards using SMS as an early warning messages for floods. Experiments and surveys were conducted in flood-prone areas in Semarang, Indonesia. The results showed that the correlation is in users' perceptions of tasks for the confidence with skill.", "title": "" }, { "docid": "f074965ee3a1d6122f1e68f49fd11d84", "text": "Data mining is the extraction of knowledge from large databases. One of the popular data mining techniques is Classification in which different objects are classified into different classes depending on the common properties among them. Decision Trees are widely used in Classification. This paper proposes a tool which applies an enhanced Decision Tree Algorithm to detect the suspicious e-mails about the criminal activities. An improved ID3 Algorithm with enhanced feature selection method and attribute- importance factor is applied to generate a better and faster Decision Tree. The objective is to detect the suspicious criminal activities and minimize them. That's why the tool is named as “Z-Crime” depicting the “Zero Crime” in the society. This paper aims at highlighting the importance of data mining technology to design proactive application to detect the suspicious criminal activities.", "title": "" }, { "docid": "ad06ed03454635bf390ea14847fcf4a2", "text": "Mitochondria are important cellular organelles in most metabolic processes and have a highly dynamic nature, undergoing frequent fission and fusion. The dynamic balance between fission and fusion plays critical roles in mitochondrial functions. In recent studies, several large GTPases have been identified as key molecular factors in mitochondrial fission and fusion. Moreover, the posttranslational modifications of these large GTPases, including phosphorylation, ubiquitination and SUMOylation, have been shown to be involved in the regulation of mitochondrial dynamics. Neurons are particularly sensitive and vulnerable to any abnormalities in mitochondrial dynamics, due to their large energy demand and long extended processes. Emerging evidences have thus indicated a strong linkage between mitochondria and neurodegenerative diseases, including Alzheimer's disease, Parkinson's disease and Huntington's disease. In this review, we will describe the regulation of mitochondrial dynamics and its role in neurodegenerative diseases.", "title": "" }, { "docid": "28d739449d55d77e54571edb3c4ec4ad", "text": "Immunologic checkpoint blockade with antibodies that target cytotoxic T lymphocyte-associated antigen 4 (CTLA-4) and the programmed cell death protein 1 pathway (PD-1/PD-L1) have demonstrated promise in a variety of malignancies. 
Ipilimumab (CTLA-4) and pembrolizumab (PD-1) are approved by the US Food and Drug Administration for the treatment of advanced melanoma, and additional regulatory approvals are expected across the oncologic spectrum for a variety of other agents that target these pathways. Treatment with both CTLA-4 and PD-1/PD-L1 blockade is associated with a unique pattern of adverse events called immune-related adverse events, and occasionally, unusual kinetics of tumor response are seen. Combination approaches involving CTLA-4 and PD-1/PD-L1 blockade are being investigated to determine whether they enhance the efficacy of either approach alone. Principles learned during the development of CTLA-4 and PD-1/PD-L1 approaches will likely be used as new immunologic checkpoint blocking antibodies begin clinical investigation.", "title": "" }, { "docid": "5a8649a0418dbeb68cc5bfb7f98f28fe", "text": "Predictive microbiology is the area of food microbiology that attempts to forecast the quantitative evolution of microbial populations over time. This is achieved to a great extent through models that include the mechanisms governing population dynamics. Traditionally, the models used in predictive microbiology are whole-system continuous models that describe population dynamics by means of equations applied to extensive or averaged variables of the whole system. Many existing models can be classified by specific criteria. We can distinguish between survival and growth models by seeing whether they tackle mortality or cell duplication. We can distinguish between empirical (phenomenological) models, which mathematically describe specific behaviour, and theoretical (mechanistic) models with a biological basis, which search for the underlying mechanisms driving already observed phenomena. We can also distinguish between primary, secondary and tertiary models, by examining their treatment of the effects of external factors and constraints on the microbial community. Recently, the use of spatially explicit Individual-based Models (IbMs) has spread through predictive microbiology, due to the current technological capacity of performing measurements on single individual cells and thanks to the consolidation of computational modelling. Spatially explicit IbMs are bottom-up approaches to microbial communities that build bridges between the description of micro-organisms at the cell level and macroscopic observations at the population level. They provide greater insight into the mesoscale phenomena that link unicellular and population levels. Every model is built in response to a particular question and with different aims. Even so, in this research we conducted a SWOT (Strength, Weaknesses, Opportunities and Threats) analysis of the different approaches (population continuous modelling and Individual-based Modelling), which we hope will be helpful for current and future researchers.", "title": "" }, { "docid": "75b654084c7205b209d41a33b9bc03b9", "text": "The aims of the study were to evaluate the per- and post-operative complications and outcomes after cystocele repair with transobturator mesh. A retrospective continuous series study was conducted over a period of 3 years. Clinical evaluation was up to 1 year with additional telephonic interview performed after 34 months on average. When stress urinary incontinence (SUI) was associated with the cystocele, it was treated with the same mesh. One hundred twenty-three patients were treated for cystocele. Per-operative complications occurred in six patients. 
After 1 year, erosion rate was 6.5%, and only three cystoceles recurred. After treatment of SUI with the same mesh, 87.7% restored continence. Overall patient’s satisfaction rate was 93.5%. Treatment of cystocele using transobturator four arms mesh appears to reduce the risk of recurrence at 1 year, along with high rate of patient’s satisfaction. The transobturator path of the prosthesis arms seems devoid of serious per- and post-operative risks and allows restoring continence when SUI is present.", "title": "" }, { "docid": "b1ae4cfe9ce7a88eb0a503bfafe9606d", "text": "The aim of Chapter 2 is to give an overview of the GPR basic principles and technology. A lot of definitions and often-used terms that will be used throughout the whole work will be explained here. Readers who are familiar with GPR and the demining application can skip parts of this chapter. Section 2.2.4 however can be interesting since a description of the hardware and the design parameters of a time domain GPR are given there. The description is far from complete, but it gives a good overview of the technological difficulties encountered in GPR systems.", "title": "" }, { "docid": "1700821e3c9ec22ec151d151f3ac7925", "text": "This review provides a comprehensive examination of the literature surrounding the current state of K–12 distance education. The growth in K–12 distance education follows in the footsteps of expanded learning opportunities at all levels of public education and training in corporate environments. Implementation has been accomplished with a limited research base, often drawing from studies in adult distance education and policies adapted from traditional learning environments. This review of literature provides an overview of the field of distance education with a focus on the research conducted in K–12 distance education environments. (", "title": "" }, { "docid": "913478fa2a53363c4d8dc6212c960cbf", "text": "The rapidly growing world energy use has already raised concerns over supply difficulties, exhaustion of energy resources and heavy environmental impacts (ozone layer depletion, global warming, climate change, etc.). The global contribution from buildings towards energy consumption, both residential and commercial, has steadily increased reaching figures between 20% and 40% in developed countries, and has exceeded the other major sectors: industrial and transportation. Growth in population, increasing demand for building services and comfort levels, together with the rise in time spent inside buildings, assure the upward trend in energy demand will continue in the future. For this reason, energy efficiency in buildings is today a prime objective for energy policy at regional, national and international levels. Among building services, the growth in HVAC systems energy use is particularly significant (50% of building consumption and 20% of total consumption in the USA). This paper analyses available information concerning energy consumption in buildings, and particularly related to HVAC systems. Many questions arise: Is the necessary information available? Which are the main building types? What end uses should be considered in the breakdown? Comparisons between different countries are presented specially for commercial buildings. The case of offices is analysed in deeper detail. # 2007 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "fb0648489dcf41e98ad617657725a66e", "text": "In this paper, a triple active bridge converter is proposed. 
The topology is capable of achieving ZVS across the full load range with wide input voltage while minimizing heavy load conduction losses to increase overall efficiency. This topology comprises three full bridges coupled by a three-winding transformer. At light load, by adjusting the phase shift between two input bridges, all switching devices can maintain ZVS due to a controlled circulating current. At heavy load, the two input bridges work in parallel to reduce conduction loss. The operation principles of this topology are introduced and the ZVS boundaries are derived. Based on analytical models of power loss, a 200W laboratory prototype has been built to verify theoretical considerations.", "title": "" }, { "docid": "f2fed9066ac945ae517aef8ec5bb5c61", "text": "BACKGROUND\nThe aging of society is a global trend, and care of older adults with dementia is an urgent challenge. As dementia progresses, patients exhibit negative emotions, memory disorders, sleep disorders, and agitated behavior. Agitated behavior is one of the most difficult problems for family caregivers and healthcare providers to handle when caring for older adults with dementia.\n\n\nPURPOSE\nThe aim of this study was to investigate the effectiveness of white noise in improving agitated behavior, mental status, and activities of daily living in older adults with dementia.\n\n\nMETHODS\nAn experimental research design was used to study elderly participants two times (pretest and posttest). Six dementia care centers in central and southern Taiwan were targeted to recruit participants. There were 63 participants: 28 were in the experimental group, and 35 were in the comparison group. Experimental group participants received 20 minutes of white noise consisting of ocean, rain, wind, and running water sounds between 4 and 5 P.M. daily over a period of 4 weeks. The comparison group received routine care. Questionnaires were completed, and observations of agitated behaviors were collected before and after the intervention.\n\n\nRESULTS\nAgitated behavior in the experimental group improved significantly between pretest and posttest. Furthermore, posttest scores on the Mini-Mental Status Examination and Barthel Index were slightly better for this group than at pretest. However, the experimental group registered no significant difference in mental status or activities of daily living at posttest. For the comparison group, agitated behavior was unchanged between pretest and posttest.\n\n\nCONCLUSIONS\nThe results of this study support white noise as a simple, convenient, and noninvasive intervention that improves agitated behavior in older adults with dementia. These results may provide a reference for related healthcare providers, educators, and administrators who care for older adults with dementia.", "title": "" }, { "docid": "139a89ce2fcdfb987aa3476d3618b919", "text": "Automating the development of construction schedules has been an interesting topic for researchers around the world for almost three decades. Researchers have approached solving scheduling problems with different tools and techniques. Whenever a new artificial intelligence or optimization tool has been introduced, researchers in the construction field have tried to use it to find the answer to one of their key problems—the “better” construction schedule. Each researcher defines this “better” slightly different. This article reviews the research on automation in construction scheduling from 1985 to 2014. 
It also covers the topic using different approaches, including case-based reasoning, knowledge-based approaches, model-based approaches, genetic algorithms, expert systems, neural networks, and other methods. The synthesis of the results highlights the share of the aforementioned methods in tackling the scheduling challenge, with genetic algorithms shown to be the most dominant approach. Although the synthesis reveals the high applicability of genetic algorithms to the different aspects of managing a project, including schedule, cost, and quality, it exposed a more limited project management application for the other methods.", "title": "" }, { "docid": "7755e8c9234f950d0d5449602269e34b", "text": "In this paper we describe a privacy-preserving method for commissioning an IoT device into a cloud ecosystem. The commissioning consists of the device proving its manufacturing provenance in an anonymous fashion without reliance on a trusted third party, and for the device to be anonymously registered through the use of a blockchain system. We introduce the ChainAnchor architecture that provides device commissioning in a privacy-preserving fashion. The goal of ChainAnchor is (i) to support anonymous device commissioning, (ii) to support device-owners being remunerated for selling their device sensor-data to service providers, and (iii) to incentivize device-owners and service providers to share sensor-data in a privacy-preserving manner.", "title": "" }, { "docid": "3257f01d96bd126bd7e3d6f447e0326d", "text": "Voice SMS is an application developed in this work that allows a user to record and convert spoken messages into SMS text message. User can send messages to the entered phone number or the number of contact from the phonebook. Speech recognition is done via the Internet, connecting to Google's server. The application is adapted to input messages in English. Used tools are Android SDK and the installation is done on mobile phone with Android operating system. In this article we will give basic features of the speech recognition and used algorithm. Speech recognition for Voice SMS uses a technique based on hidden Markov models (HMM - Hidden Markov Model). It is currently the most successful and most flexible approach to speech recognition.", "title": "" }, { "docid": "4899e13d5c85b63a823db9c4340824e7", "text": "With the prevalence of server blades and systems-on-a-chip (SoCs), interconnection networks are becoming an important part of the microprocessor landscape. However, there is limited tool support available for their design. While performance simulators have been built that enable performance estimation while varying network parameters, these cover only one metric of interest in modern designs. System power consumption is increasingly becoming equally, if not more important than performance. It is now critical to get detailed power-performance tradeoff information early in the microarchitectural design cycle. This is especially so as interconnection networks consume a significant fraction of total system power. It is exactly this gap that the work presented in this paper aims to fill.We present Orion, a power-performance interconnection network simulator that is capable of providing detailed power characteristics, in addition to performance characteristics, to enable rapid power-performance trade-offs at the architectural-level. This capability is provided within a general framework that builds a simulator starting from a microarchitectural specification of the interconnection network. 
A key component of this construction is the architectural-level parameterized power models that we have derived as part of this effort. Using component power models and a synthesized efficient power (and performance) simulator, a microarchitect can rapidly explore the design space. As case studies, we demonstrate the use of Orion in determining optimal system parameters, in examining the effect of diverse traffic conditions, as well as evaluating new network microarchitectures. In each of the above, the ability to simultaneously monitor power and performance is key in determining suitable microarchitectures.", "title": "" }, { "docid": "b5b73560481ad29bed07ddf156531561", "text": "IQ heritability, the portion of a population's IQ variability attributable to the effects of genes, has been investigated for nearly a century, yet it remains controversial. Covariance between relatives may be due not only to genes, but also to shared environments, and most previous models have assumed different degrees of similarity induced by environments specific to twins, to non-twin siblings (henceforth siblings), and to parents and offspring. We now evaluate an alternative model that replaces these three environments by two maternal womb environments, one for twins and another for siblings, along with a common home environment. Meta-analysis of 212 previous studies shows that our ‘maternal-effects’ model fits the data better than the ‘family-environments’ model. Maternal effects, often assumed to be negligible, account for 20% of covariance between twins and 5% between siblings, and the effects of genes are correspondingly reduced, with two measures of heritability being less than 50%. The shared maternal environment may explain the striking correlation between the IQs of twins, especially those of adult twins that were reared apart. IQ heritability increases during early childhood, but whether it stabilizes thereafter remains unclear. A recent study of octogenarians, for instance, suggests that IQ heritability either remains constant through adolescence and adulthood, or continues to increase with age. Although the latter hypothesis has recently been endorsed, it gathers only modest statistical support in our analysis when compared to the maternal-effects hypothesis. Our analysis suggests that it will be important to understand the basis for these maternal effects if ways in which IQ might be increased are to be identified.", "title": "" }, { "docid": "67ae045b8b9a8e181ed0a33b204528cf", "text": "We report four experiments examining effects of instance similarity on the application of simple explicit rules. We found effects of similarity to illustrative exemplars in error patterns and reaction times. These effects arose even though participants were given perfectly predictive rules, the similarity manipulation depended entirely on rule-irrelevant features, and attention to exemplar similarity was detrimental to task performance. Comparison of results across studies suggests that the effects are mandatory, non-strategic and not subject to conscious control, and as a result, should be pervasive throughout categorization.", "title": "" }, { "docid": "e992ffd4ebbf9d096de092caf476e37d", "text": "If self-regulation conforms to an energy or strength model, then self-control should be impaired by prior exertion. In Study 1, trying to regulate one's emotional response to an upsetting movie was followed by a decrease in physical stamina. 
In Study 2, suppressing forbidden thoughts led to a subsequent tendency to give up quickly on unsolvable anagrams. In Study 3, suppressing thoughts impaired subsequent efforts to control the expression of amusement and enjoyment. In Study 4, autobiographical accounts of successful versus failed emotional control linked prior regulatory demands and fatigue to self-regulatory failure. A strength model of self-regulation fits the data better than activation, priming, skill, or constant capacity models of self-regulation.", "title": "" } ]
scidocsrr
1ef17b08bba3731e8b0724c26e87707e
A Fine-Grained Performance Model of Cloud Computing Centers
[ { "docid": "807cd6adc45a2adb7943c5a0fb5baa94", "text": "Reliable performance evaluations require the use of representative workloads. This is no easy task because modern computer systems and their workloads are complex, with many interrelated attributes and complicated structures. Experts often use sophisticated mathematics to analyze and describe workload models, making these models difficult for practitioners to grasp. This book aims to close this gap by emphasizing the intuition and the reasoning behind the definitions and derivations related to the workload models. It provides numerous examples from real production systems, with hundreds of graphs. Using this book, readers will be able to analyze collected workload data and clean it if necessary, derive statistical models that include skewed marginal distributions and correlations, and consider the need for generative models and feedback from the system. The descriptive statistics techniques covered are also useful for other domains.", "title": "" } ]
[ { "docid": "7ca908e7896afc49a0641218e1c4febf", "text": "Timely and accurate classification and interpretation of high-resolution images are very important for urban planning and disaster rescue. However, as spatial resolution gets finer, it is increasingly difficult to recognize complex patterns in high-resolution remote sensing images. Deep learning offers an efficient strategy to fill the gap between complex image patterns and their semantic labels. However, due to the hierarchical abstract nature of deep learning methods, it is difficult to capture the precise outline of different objects at the pixel level. To further reduce this problem, we propose an object-based deep learning method to accurately classify the high-resolution imagery without intensive human involvement. In this study, high-resolution images were used to accurately classify three different urban scenes: Beijing (China), Pavia (Italy), and Vaihingen (Germany). The proposed method is built on a combination of a deep feature learning strategy and an object-based classification for the interpretation of high-resolution images. Specifically, high-level feature representations extracted through the convolutional neural networks framework have been systematically investigated over five different layer configurations. Furthermore, to improve the classification accuracy, an object-based classification method also has been integrated with the deep learning strategy for more efficient image classification. Experimental results indicate that with the combination of deep learning and object-based classification, it is possible to discriminate different building types in Beijing Scene, such as commercial buildings and residential buildings with classification accuracies above 90%.", "title": "" }, { "docid": "8abd03202f496de4bec6270946d53a9c", "text": "In this paper, we use time-series modeling to forecast taxi travel demand, in the context of a mobile application-based taxi hailing service. In particular, we model the passenger demand density at various locations in the city of Bengaluru, India. Using the data, we first shortlist time-series models that suit our application. We then analyse the performance of these models by using Mean Absolute Percentage Error (MAPE) as the performance metric. In order to improve the model performance, we employ a multi-level clustering technique where we aggregate demand over neighboring cells/geohashes. We observe that the improved model based on clustering leads to a forecast accuracy of 80% per km2. In addition, our technique obtains an accuracy of 89% per km2 for the most frequently occurring use case.", "title": "" }, { "docid": "6d8e78d8c48aab17aef0b9e608f13b99", "text": "Optimal real-time distributed V2G and G2V management of electric vehicles Sonja Stüdli, Emanuele Crisostomi, Richard Middleton & Robert Shorten a Centre for Complex Dynamic Systems and Control, The University of Newcastle, New South Wales, Australia b Department of Energy, Systems, Territory and Constructions, University of Pisa, Pisa, Italy c IBM Research, Dublin, Ireland Accepted author version posted online: 10 Dec 2013.Published online: 05 Feb 2014.", "title": "" }, { "docid": "383e569dcd1f0c648ad2274588f76961", "text": "BACKGROUND\nOutcomes are poor for patients with previously treated, advanced or metastatic non-small-cell lung cancer (NSCLC). 
The anti-programmed death ligand 1 (PD-L1) antibody atezolizumab is clinically active against cancer, including NSCLC, especially cancers expressing PD-L1 on tumour cells, tumour-infiltrating immune cells, or both. We assessed efficacy and safety of atezolizumab versus docetaxel in previously treated NSCLC, analysed by PD-L1 expression levels on tumour cells and tumour-infiltrating immune cells and in the intention-to-treat population.\n\n\nMETHODS\nIn this open-label, phase 2 randomised controlled trial, patients with NSCLC who progressed on post-platinum chemotherapy were recruited in 61 academic medical centres and community oncology practices across 13 countries in Europe and North America. Key inclusion criteria were Eastern Cooperative Oncology Group performance status 0 or 1, measurable disease by Response Evaluation Criteria In Solid Tumors version 1.1 (RECIST v1.1), and adequate haematological and end-organ function. Patients were stratified by PD-L1 tumour-infiltrating immune cell status, histology, and previous lines of therapy, and randomly assigned (1:1) by permuted block randomisation (with a block size of four) using an interactive voice or web system to receive intravenous atezolizumab 1200 mg or docetaxel 75 mg/m(2) once every 3 weeks. Baseline PD-L1 expression was scored by immunohistochemistry in tumour cells (as percentage of PD-L1-expressing tumour cells TC3≥50%, TC2≥5% and <50%, TC1≥1% and <5%, and TC0<1%) and tumour-infiltrating immune cells (as percentage of tumour area: IC3≥10%, IC2≥5% and <10%, IC1≥1% and <5%, and IC0<1%). The primary endpoint was overall survival in the intention-to-treat population and PD-L1 subgroups at 173 deaths. Biomarkers were assessed in an exploratory analysis. We assessed safety in all patients who received at least one dose of study drug. This study is registered with ClinicalTrials.gov, number NCT01903993.\n\n\nFINDINGS\nPatients were enrolled between Aug 5, 2013, and March 31, 2014. 144 patients were randomly allocated to the atezolizumab group, and 143 to the docetaxel group. 142 patients received at least one dose of atezolizumab and 135 received docetaxel. Overall survival in the intention-to-treat population was 12·6 months (95% CI 9·7-16·4) for atezolizumab versus 9·7 months (8·6-12·0) for docetaxel (hazard ratio [HR] 0·73 [95% CI 0·53-0·99]; p=0·04). Increasing improvement in overall survival was associated with increasing PD-L1 expression (TC3 or IC3 HR 0·49 [0·22-1·07; p=0·068], TC2/3 or IC2/3 HR 0·54 [0·33-0·89; p=0·014], TC1/2/3 or IC1/2/3 HR 0·59 [0·40-0·85; p=0·005], TC0 and IC0 HR 1·04 [0·62-1·75; p=0·871]). In our exploratory analysis, patients with pre-existing immunity, defined by high T-effector-interferon-γ-associated gene expression, had improved overall survival with atezolizumab. 11 (8%) patients in the atezolizumab group discontinued because of adverse events versus 30 (22%) patients in the docetaxel group. 16 (11%) patients in the atezolizumab group versus 52 (39%) patients in the docetaxel group had treatment-related grade 3-4 adverse events, and one (<1%) patient in the atezolizumab group versus three (2%) patients in the docetaxel group died from a treatment-related adverse event.\n\n\nINTERPRETATION\nAtezolizumab significantly improved survival compared with docetaxel in patients with previously treated NSCLC. Improvement correlated with PD-L1 immunohistochemistry expression on tumour cells and tumour-infiltrating immune cells, suggesting that PD-L1 expression is predictive for atezolizumab benefit. 
Atezolizumab was well tolerated, with a safety profile distinct from chemotherapy.\n\n\nFUNDING\nF Hoffmann-La Roche/Genentech Inc.", "title": "" }, { "docid": "e632dfe8a37846339ceb44ae4f406a1a", "text": "Search engines are increasingly relying on large knowledge bases of facts to provide direct answers to users’ queries. However, the construction of these knowledge bases is largely manual and does not scale to the long and heavy tail of facts. Open information extraction tries to address this challenge, but typically assumes that facts are expressed with verb phrases, and therefore has had difficulty extracting facts for noun-based relations. We describe ReNoun, an open information extraction system that complements previous efforts by focusing on nominal attributes and on the long tail. ReNoun’s approach is based on leveraging a large ontology of noun attributes mined from a text corpus and from user queries. ReNoun creates a seed set of training data by using specialized patterns and requiring that the facts mention an attribute in the ontology. ReNoun then generalizes from this seed set to produce a much larger set of extractions that are then scored. We describe experiments that show that we extract facts with high precision and for attributes that cannot be extracted with verb-based techniques.", "title": "" }, { "docid": "1ae3eb81ae75f6abfad4963ee0056be5", "text": "Due to the shared responsibility model of clouds, tenants have to manage the security of their workloads and data. Developing security solutions using VMs or containers creates further problems as these resources also need to be secured. In this paper, we advocate for taking a serverless approach by proposing six serverless design patterns to build security services in the cloud. For each design pattern, we describe the key advantages and present applications and services utilizing the pattern. Using the proposed patterns as building blocks, we introduce a threat-intelligence platform that collects logs from various sources, alerts malicious activities, and takes actions against such behaviors. We also discuss the limitations of serverless design and how future implementations can overcome those limitations.", "title": "" }, { "docid": "0e9c280e39dbad16cf7bbf961ed4bdb1", "text": "This paper reviews the state-of-the-art research on multi-robot systems, with a focus on multi-robot cooperation and coordination. By primarily classifying multi-robot systems into active and passive cooperative systems, three main research topics of multi-robot systems are focused on: task allocation, multi-sensor fusion and localization. In addition, formation control and coordination methods for multi-robots are reviewed.", "title": "" }, { "docid": "26c58183e71f916f37d67f1cf848f021", "text": "With the increasing popularity of herbomineral preparations in healthcare, a new proprietary herbomineral formulation was formulated with ashwagandha root extract and three minerals viz. zinc, magnesium, and selenium. The aim of the study was to evaluate the immunomodulatory potential of Biofield Energy Healing (The Trivedi Effect ® ) on the herbomineral formulation using murine splenocyte cells. The test formulation was divided into two parts. One was the control without the Biofield Energy Treatment. The other part was labelled the Biofield Energy Treated sample, which received the Biofield Energy Healing Treatment remotely by twenty renowned Biofield Energy Healers. 
Through MTT assay, all the test formulation concentrations from 0.00001053 to 10.53 μg/mL were found to be safe with cell viability ranging from 102.61% to 194.57% using splenocyte cells. The Biofield Treated test formulation showed a significant (p≤0.01) inhibition of TNF-α expression by 15.87%, 20.64%, 18.65%, and 20.34% at 0.00001053, 0.0001053, 0.01053, and 0.1053, μg/mL, respectively as compared to the vehicle control (VC) group. The level of TNF-α was reduced by 8.73%, 19.54%, and 14.19% at 0.001053, 0.01053, and 0.1053 μg/mL, respectively in the Biofield Treated test formulation compared to the untreated test formulation. The expression of IL-1β reduced by 22.08%, 23.69%, 23.00%, 16.33%, 25.76%, 16.10%, and 23.69% at 0.00001053, 0.0001053, 0.001053, 0.01053, 0.1053, 1.053 and 10.53 μg/mL, respectively compared to the VC. Additionally, the expression of MIP-1α significantly (p≤0.001) reduced by 13.35%, 22.96%, 25.11%, 22.71%, and 21.83% at 0.00001053, 0.0001053, 0.01053, 1.053, and 10.53 μg/mL, respectively in the Biofield Treated test formulation compared to the VC. The Biofield Treated test formulation significantly down-regulated the MIP-1α expression by 10.75%, 9.53%, 9.57%, and 10.87% at 0.00001053, 0.01053, 0.1053 and 1.053 μg/mL, respectively compared to the untreated test formulation. The results showed the IFN-γ expression was also significantly (p≤0.001) reduced by 39.16%, 40.34%, 27.57%, 26.06%, 42.53%, and 48.91% at 0.0001053, 0.001053, 0.01053, 0.1053, 1.053, and 10.53 μg/mL, respectively in the Biofield Treated test formulation compared to the VC. The Biofield Treated test formulation showed better suppression of IFN-γ expression by 15.46%, 13.78%, International Journal of Biomedical Engineering and Clinical Science 2016; 2(1): 8-17 9 17.14%, and 13.11% at concentrations 0.001053, 0.01053, 0.1053, and 10.53 μg/mL, respectively compared to the untreated test formulation. Overall, the results demonstrated that The Trivedi Effect ® Biofield Energy Healing (TEBEH) has the capacity to potentiate the immunomodulatory and anti-inflammatory activity of the test formulation. Biofield Energy may also be useful in organ transplants, anti-aging, and stress management by improving overall health and quality of life.", "title": "" }, { "docid": "1d3b0669eda182f312a0a77d4bccf373", "text": "CONTEXT\nMedical issues are widely reported in the mass media. These reports influence the general public, policy makers and health-care professionals. This information should be valid, but is often criticized for being speculative, inaccurate and misleading. An understanding of the obstacles medical reporters meet in their work can guide strategies for improving the informative value of medical journalism.\n\n\nOBJECTIVE\nTo investigate constraints on improving the informative value of medical reports in the mass media and elucidate possible strategies for addressing these.\n\n\nDESIGN\nWe reviewed the literature and organized focus groups, a survey of medical journalists in 37 countries, and semi-structured telephone interviews.\n\n\nRESULTS\nWe identified nine barriers to improving the informative value of medical journalism: lack of time, space and knowledge; competition for space and audience; difficulties with terminology; problems finding and using sources; problems with editors and commercialism. Lack of time, space and knowledge were the most common obstacles. The importance of different obstacles varied with the type of media and experience. 
Many health reporters feel that it is difficult to find independent experts willing to assist journalists, and also think that editors need more education in critical appraisal of medical news. Almost all of the respondents agreed that the informative value of their reporting is important. Nearly everyone wanted access to short, reliable and up-to-date background information on various topics available on the Internet. A majority (79%) was interested in participating in a trial to evaluate strategies to overcome identified constraints.\n\n\nCONCLUSIONS\nMedical journalists agree that the validity of medical reporting in the mass media is important. A majority acknowledge many constraints. Mutual efforts of health-care professionals and journalists employing a variety of strategies will be needed to address these constraints.", "title": "" }, { "docid": "c1978e4936ed5bda4e51863dea7e93ee", "text": "In needle-based medical procedures, beveled-tip flexible needles are steered inside soft tissue with the aim of reaching pre-defined target locations. The efficiency of needle-based interventions depends on accurate control of the needle tip. This paper presents a comprehensive mechanics-based model for simulation of planar needle insertion in soft tissue. The proposed model for needle deflection is based on beam theory, works in real-time, and accepts the insertion velocity as an input that can later be used as a control command for needle steering. The model takes into account the effects of tissue deformation, needle-tissue friction, tissue cutting force, and needle bevel angle on needle deflection. Using a robot that inserts a flexible needle into a phantom tissue, various experiments are conducted to separately identify different subsets of the model parameters. The validity of the proposed model is verified by comparing the simulation results to the empirical data. The results demonstrate the accuracy of the proposed model in predicting the needle tip deflection for different insertion velocities.", "title": "" }, { "docid": "99efebd647fa083fab4e0f091b0b471b", "text": "This paper proposes a novel method to detect fire and/or flames in real-time by processing the video data generated by an ordinary camera monitoring a scene. In addition to ordinary motion and color clues, flame and fire flicker is detected by analyzing the video in the wavelet domain. Quasi-periodic behavior in flame boundaries is detected by performing temporal wavelet transform. Color variations in flame regions are detected by computing the spatial wavelet transform of moving fire-colored regions. Another clue used in the fire detection algorithm is the irregularity of the boundary of the fire-colored region. All of the above clues are combined to reach a final decision. Experimental results show that the proposed method is very successful in detecting fire and/or flames. In addition, it drastically reduces the false alarms issued to ordinary fire-colored moving objects as compared to the methods using only motion and color clues. 2005 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "f90fcd27a0ac4a22dc5f229f826d64bf", "text": "While deep reinforcement learning (deep RL) agents are effective at maximizing rewards, it is often unclear what strategies they use to do so. In this paper, we take a step toward explaining deep RL agents through a case study using Atari 2600 environments. In particular, we focus on using saliency maps to understand how an agent learns and executes a policy. 
We introduce a method for generating useful saliency maps and use it to show 1) what strong agents attend to, 2) whether agents are making decisions for the right or wrong reasons, and 3) how agents evolve during learning. We also test our method on non-expert human subjects and find that it improves their ability to reason about these agents. Overall, our results show that saliency information can provide significant insight into an RL agent’s decisions and learning behavior.", "title": "" }, { "docid": "b12f1b1ff7618c1f54462c18c768dae8", "text": "Retrieval is the key process for understanding learning and for promoting learning, yet retrieval is not often granted the central role it deserves. Learning is typically identified with the encoding or construction of knowledge, and retrieval is considered merely the assessment of learning that occurred in a prior experience. The retrieval-based learning perspective outlined here is grounded in the fact that all expressions of knowledge involve retrieval and depend on the retrieval cues available in a given context. Further, every time a person retrieves knowledge, that knowledge is changed, because retrieving knowledge improves one’s ability to retrieve it again in the future. Practicing retrieval does not merely produce rote, transient learning; it produces meaningful, long-term learning. Yet retrieval practice is a tool many students lack metacognitive awareness of and do not use as often as they should. Active retrieval is an effective but undervalued strategy for promoting meaningful learning.", "title": "" }, { "docid": "9b1cf7cb855ba95693b90efacc34ac6d", "text": "Cellular electron cryo-tomography enables the 3D visualization of cellular organization in the near-native state and at submolecular resolution. However, the contents of cellular tomograms are often complex, making it difficult to automatically isolate different in situ cellular components. In this paper, we propose a convolutional autoencoder-based unsupervised approach to provide a coarse grouping of 3D small subvolumes extracted from tomograms. We demonstrate that the autoencoder can be used for efficient and coarse characterization of features of macromolecular complexes and surfaces, such as membranes. In addition, the autoencoder can be used to detect non-cellular features related to sample preparation and data collection, such as carbon edges from the grid and tomogram boundaries. The autoencoder is also able to detect patterns that may indicate spatial interactions between cellular components. Furthermore, we demonstrate that our autoencoder can be used for weakly supervised semantic segmentation of cellular components, requiring a very small amount of manual annotation.", "title": "" }, { "docid": "0d4ab4099b3293286cafaf260d5a8114", "text": "This exploratory research investigates how students and professionals use social network sites (SNSs) in the setting of developing and emerging countries. Data collection included focus groups consisting of medical students and faculty as well as the analysis of a Facebook site centred on medical and clinical topics. The findings show how users, both students and professionals, appropriate social network sites from their mobile phones as rich educational tools in informal learning contexts. 
First, unlike in previous studies, the analysis revealed explicit forms of educational content embedded in Facebook, such as quizzes and case presentations and associated deliberate (e-)learning practices which are typically found in (more) formal educational settings. Second, from a socio-cultural learning perspective, it is shown how the participation in such virtual professional communities across national boundaries permits the announcement and negotiation of occupational status and professional identities. Introduction and background Technologies for development and health in \"resource-limited\" environments Technological innovations have given hope that new ICT tools will result in the overall progress and well-being of developing countries, in particular with respect to health and education services. Great expectations are attached to the spread of mobile communication technologies. The number of mobile cellular subscriptions worldwide is currently 4.7 billion and increasing. This includes people in remote and rural areas and \"resource-limited\" settings (The World Bank, 2011). To a much lesser extent there is also a discussion on affordances of social network sites (SNSs) in such contexts (Marcelo, Adejumo, & Luna, 2011). Discourses and projects on ICT(4)D (information technology for development) or mHealth (mobile technology for health) tend to be based on techno-centric and deterministic approaches where learning materials, either software or hardware, are distributed by central authorities or knowledge is \"delivered\" according to \"push-strategies\"; or, using the words of Traxler, information is pumped through the infrastructure, often in \"educationally naïve\" ways (in press). Similarly, the main direction of techno-centric and transmissional approaches appears to be from developed to \"developing\" countries, respectively from experts to novices. In spite of all efforts the situation is still problematic and ambitious visions have been only realised to a limited extent. For example, the goal of providing every person worldwide with access to an informed and educated healthcare provider by 2015 is unlikely to be realised. In particular, little progress has been made in meeting the information needs of frontline healthcare providers and ordinary citizens in low resource settings (Smith & Koehlmoos, 2011). Very often it is basic knowledge that is needed, related for example to the treatment of childhood pneumonia or diarrhoea, which cannot be accessed by healthcare providers such as family caregivers or health workers (HIFA Report, 2010). With this research we attempt to shed light on aspects of technology use, such as engagement with SNSs and mobile phones, in the context of health education in developing countries which, we would argue, have been widely neglected. In doing so, we hope to contribute to the academic discourses on SNSs and mobile learning. Since our approach follows the principles of case study research, the remainder of this paper is structured as follows. We continue with a brief and, admittedly, selective characterization of two underlying academic discourses that can inform this research, namely mobile learning and research on SNSs. After presenting our methodological approach and results we discuss the findings in the light of multiple theoretical concepts and empirical studies from these fields. We conclude with some practical considerations, limitations and directions for further research. 
Educational discourses on mobile learning and social network sites In the field of mobile learning, a small, yet rapidly growing research community, recent work has considered the (educational) use of mobile phones as an appropriation of cultural resources (Pachler, Cook, & Bachmair, 2010). In contrast to the classical binary and quantitative model of adoption, appropriation is centred on the question of how people use mobile phones once they have adopted them (Wirth, Von Pape, & Karnowski, 2008). Researchers define appropriation as the emerging \"processes of the internalization of the pre-given world of cultural products\" by the engagement of learners in the form of social practices with particular settings inside and outside of formal educational settings (Pachler, et al., 2010). While mobile learning research tend to focus on learning in schools, universities, workplaces or on life-long learning in industrialised countries (Frohberg, Göth, & Schwabe, 2009; Pachler, Pimmer, & Seipold, 2011; Pimmer, Pachler, & Attwell, 2010), some attention has also been paid to developing countries (see for example Traxler & Kukulska-Hulme, 2005). Research on SNSs is becoming increasingly popular not only in industrialised nations (boyd & Ellison, 2007) but, to a lesser extent, also in developing countries (Kolko, Rose, & Johnson, 2007). Increasing importance is attached to educational aspects of SNSs (Selwyn, 2009), though there is relatively little theoretical and empirical attention paid by social researchers to the form and nature of that learning in general (Merchant, 2011). Socio-cultural approaches to learning in general, and to social networks and mobile learning in particular are based on the notions of participation, belonging, communities and identity construction. It was suggested, for example, that such networks create a \"sense of place in a social world\" (Merchant, 2011) and can be considered as \"multi-audience identity production sites\" (Zhao, Grasmuck, & Martin, 2008). By documenting daily episodes by means of mobiles and social networks, such tools are said to contribute to the formation of (multiple) identities related to the live-worlds of users. In this sense, learning is considered as situated meaning-making and identity formation (Pachler, et al., 2010). The influence of SNSs on community practices was also discussed. An empirical study suggested, for example, that social network sites helped maintain relations as people move across different offline communities (Ellison, Steinfield, & Lampe, 2007). Also in formal educational environments, when social networks were deliberately used in order to support classroom-based teaching and learning, (unintended) community building was observed (Arnold & Paulus, 2010). However, research has little to say with respect to vocational and professional aspects of the use of SNSs. One study reported that a company's internal social network site supported professionals in building stronger relations with their weak ties and in getting in touch with professionals they did not know before (DiMicco et al., 2008). 
Another study that observed the use of mobiles and social software for the compilation of e-portfolios witnessed influences on identity trajectory according to the concepts of belonging to a workplace, becoming and then being a professional (Chan, 2011).", "title": "" }, { "docid": "c56831d181d70ad663a5430092ee8978", "text": "1Student, Department of Computer Science & Engineering, G.H.Raisoni Institute of Engineering & Management, Jalgaon, Maharashtra, India. 2Assistant Professor, Department of Information and Technology , G.H.Raisoni Institute of Engineering & Management, Jalgaon, Maharashtra, India. ---------------------------------------------------------------------***--------------------------------------------------------------------Abstract As deep web grows at a very fast pace, there has been increased interest in techniques that help efficiently locate deep-web interfaces. However, due to the large volume of web resources and the dynamic nature of deep web, achieving wide coverage and high efficiency is a challenging issue. Therefore a two-stage enhanced web crawler framework is proposed for efficiently harvesting deep web interfaces. The proposed enhanced web crawler is divided into two stages. In the first stage, site locating is performed by using reverse searching which finds relevant content. In the second stage, enhanced web crawler achieves fast in site searching by excavating most relevant links of site. It uses a novel deep web crawling framework based on reinforcement learning which is effective for crawling the deep web. The experimental results show that the method outperforms the state of art methods in terms of crawling capability and achieves higher harvest rates than other crawlers.", "title": "" }, { "docid": "1ca801ec3c0f5c0cbda2061ecd3cbfc0", "text": "One objective of the French-funded (ANR-2006-SECU-006) ISyCri Project (ISyCri stands for Interoperability of Systems in Crisis situation) is to provide the crisis cell in charge of the situation management with an information system (IS) able to support the interoperability of partners involved in this collaborative situation. Such a system is called Mediation Information System (MIS). This system must be in charge of (i) information exchange, (ii) services sharing and (iii) behavior orchestration. This paper presents the first step of the MIS engineering, the deduction of a collaborative process used to coordinate actors of the crisis cell. Especially, this paper give a formal definition of the deduction rules used to deduce the collaborative process.", "title": "" }, { "docid": "4f52223cb3150b1b7a7079147bcb3bc2", "text": "MAX NEUENDORF,1 AES Member, MARKUS MULTRUS,1 AES Member, NIKOLAUS RETTELBACH1, GUILLAUME FUCHS1, JULIEN ROBILLIARD1, JÉRÉMIE LECOMTE1, STEPHAN WILDE1, STEFAN BAYER,10 AES Member, SASCHA DISCH1, CHRISTIAN HELMRICH10, ROCH LEFEBVRE,2 AES Member, PHILIPPE GOURNAY2, BRUNO BESSETTE2, JIMMY LAPIERRE,2 AES Student Member, KRISTOFER KJÖRLING3, HEIKO PURNHAGEN,3 AES Member, LARS VILLEMOES,3 AES Associate Member, WERNER OOMEN,4 AES Member, ERIK SCHUIJERS4, KEI KIKUIRI5, TORU CHINEN6, TAKESHI NORIMATSU1, KOK SENG CHONG7, EUNMI OH,8 AES Member, MIYOUNG KIM8, SCHUYLER QUACKENBUSH,9 AES Fellow, AND BERNHARD GRILL1", "title": "" }, { "docid": "fc2a0f6979c2520cee8f6e75c39790a8", "text": "In this paper, we propose an effective face completion algorithm using a deep generative model. 
Different from well-studied background completion, the face completion task is more challenging as it often requires to generate semantically new pixels for the missing key components (e.g., eyes and mouths) that contain large appearance variations. Unlike existing nonparametric algorithms that search for patches to synthesize, our algorithm directly generates contents for missing regions based on a neural network. The model is trained with a combination of a reconstruction loss, two adversarial losses and a semantic parsing loss, which ensures pixel faithfulness and local-global contents consistency. With extensive experimental results, we demonstrate qualitatively and quantitatively that our model is able to deal with a large area of missing pixels in arbitrary shapes and generate realistic face completion results.", "title": "" } ]
scidocsrr
3a679b1cf471a4c3223668d27ae4f340
Understanding the requirements for developing open source software systems
[ { "docid": "c63d32013627d0bcea22e1ad62419e62", "text": "According to its proponents, open source style software development has the capacity to compete successfully, and perhaps in many cases displace, traditional commercial development methods. In order to begin investigating such claims, we examine the development process of a major open source application, the Apache web server. By using email archives of source code change history and problem reports we quantify aspects of developer participation, core team size, code ownership, productivity, defect density, and problem resolution interval for this OSS project. This analysis reveals a unique process, which performs well on important measures. We conclude that hybrid forms of development that borrow the most effective techniques from both the OSS and commercial worlds may lead to high performance software processes.", "title": "" } ]
[ { "docid": "f944f5e334a127cd50ab3ec0d3c2b603", "text": "First-order methods play a central role in large-scale machine learning. Even though many variations exist, each suited to a particular problem, almost all such methods fundamentally rely on two types of algorithmic steps: gradient descent, which yields primal progress, and mirror descent, which yields dual progress. We observe that the performances of gradient and mirror descent are complementary, so that faster algorithms can be designed by linearly coupling the two. We show how to reconstruct Nesterov’s accelerated gradient methods using linear coupling, which gives a cleaner interpretation than Nesterov’s original proofs. We also discuss the power of linear coupling by extending it to many other settings that Nesterov’s methods cannot apply to. 1998 ACM Subject Classification G.1.6 Optimization, F.2 Analysis of Algorithms and Problem Complexity", "title": "" }, { "docid": "2ddf013dc4e0fc5e35823e0485777066", "text": "The aim of this work is to design a SLAM algorithm for localization and mapping of aerial platform for ocean observation. The aim is to determine the direction of travel, given that the aerial platform flies over the water surface and in an environment with few static features and dynamic background. This approach is inspired by the bird techniques which use landmarks as navigation direction. In this case, the blimp is chosen as the platform, therefore the payload is the most important concern in the design so that the desired lift can be achieved. The results show the improved SLAM is were able to achieve the desired waypoint.", "title": "" }, { "docid": "934532bd18f37112c7362db0fffa89a0", "text": "Combination therapies exploit the chances for better efficacy, decreased toxicity, and reduced development of drug resistance and owing to these advantages, have become a standard for the treatment of several diseases and continue to represent a promising approach in indications of unmet medical need. In this context, studying the effects of a combination of drugs in order to provide evidence of a significant superiority compared to the single agents is of particular interest. Research in this field has resulted in a large number of papers and revealed several issues. Here, we propose an overview of the current methodological landscape concerning the study of combination effects. First, we aim to provide the minimal set of mathematical and pharmacological concepts necessary to understand the most commonly used approaches, divided into effect-based approaches and dose-effect-based approaches, and introduced in light of their respective practical advantages and limitations. Then, we discuss six main common methodological issues that scientists have to face at each step of the development of new combination therapies. In particular, in the absence of a reference methodology suitable for all biomedical situations, the analysis of drug combinations should benefit from a collective, appropriate, and rigorous application of the concepts and methods reviewed here.", "title": "" }, { "docid": "fd45363f75f9206aa13e139d784e5d52", "text": "Multivariate pattern recognition methods are increasingly being used to identify multiregional brain activity patterns that collectively discriminate one cognitive condition or experimental group from another, using fMRI data. 
The performance of these methods is often limited because the number of regions considered in the analysis of fMRI data is large compared to the number of observations (trials or participants). Existing methods that aim to tackle this dimensionality problem are less than optimal because they either over-fit the data or are computationally intractable. Here, we describe a novel method based on logistic regression using a combination of L1 and L2 norm regularization that more accurately estimates discriminative brain regions across multiple conditions or groups. The L1 norm, computed using a fast estimation procedure, ensures a fast, sparse and generalizable solution; the L2 norm ensures that correlated brain regions are included in the resulting solution, a critical aspect of fMRI data analysis often overlooked by existing methods. We first evaluate the performance of our method on simulated data and then examine its effectiveness in discriminating between well-matched music and speech stimuli. We also compared our procedures with other methods which use either L1-norm regularization alone or support vector machine-based feature elimination. On simulated data, our methods performed significantly better than existing methods across a wide range of contrast-to-noise ratios and feature prevalence rates. On experimental fMRI data, our methods were more effective in selectively isolating a distributed fronto-temporal network that distinguished between brain regions known to be involved in speech and music processing. These findings suggest that our method is not only computationally efficient, but it also achieves the twin objectives of identifying relevant discriminative brain regions and accurately classifying fMRI data.", "title": "" }, { "docid": "3380a9a220e553d9f7358739e3f28264", "text": "We present a multi-instance object segmentation algorithm to tackle occlusions. As an object is split into two parts by an occluder, it is nearly impossible to group the two separate regions into an instance by purely bottomup schemes. To address this problem, we propose to incorporate top-down category specific reasoning and shape prediction through exemplars into an intuitive energy minimization framework. We perform extensive evaluations of our method on the challenging PASCAL VOC 2012 segmentation set. The proposed algorithm achieves favorable results on the joint detection and segmentation task against the state-of-the-art method both quantitatively and qualitatively.", "title": "" }, { "docid": "c4062390a6598f4e9407d29e52c1a3ed", "text": "We have conducted a comprehensive search for conserved elements in vertebrate genomes, using genome-wide multiple alignments of five vertebrate species (human, mouse, rat, chicken, and Fugu rubripes). Parallel searches have been performed with multiple alignments of four insect species (three species of Drosophila and Anopheles gambiae), two species of Caenorhabditis, and seven species of Saccharomyces. Conserved elements were identified with a computer program called phastCons, which is based on a two-state phylogenetic hidden Markov model (phylo-HMM). PhastCons works by fitting a phylo-HMM to the data by maximum likelihood, subject to constraints designed to calibrate the model across species groups, and then predicting conserved elements based on this model. 
The predicted elements cover roughly 3%-8% of the human genome (depending on the details of the calibration procedure) and substantially higher fractions of the more compact Drosophila melanogaster (37%-53%), Caenorhabditis elegans (18%-37%), and Saccharomyces cerevisiae (47%-68%) genomes. From yeasts to vertebrates, in order of increasing genome size and general biological complexity, increasing fractions of conserved bases are found to lie outside of the exons of known protein-coding genes. In all groups, the most highly conserved elements (HCEs), by log-odds score, are hundreds or thousands of bases long. These elements share certain properties with ultraconserved elements, but they tend to be longer and less perfectly conserved, and they overlap genes of somewhat different functional categories. In vertebrates, HCEs are associated with the 3' UTRs of regulatory genes, stable gene deserts, and megabase-sized regions rich in moderately conserved noncoding sequences. Noncoding HCEs also show strong statistical evidence of an enrichment for RNA secondary structure.", "title": "" }, { "docid": "fc509e8f8c0076ad80df5ff6ee6b6f1e", "text": "The purposes of this study are to construct an instrument to evaluate service quality of mobile value-added services and have a further discussion of the relationships among service quality, perceived value, customer satisfaction, and post-purchase intention. Structural equation modeling and multiple regression analysis were used to analyze the data collected from college and graduate students of fifteen major universities in Taiwan. The main findings are as follows: (1) service quality positively influences both perceived value and customer satisfaction; (2) perceived value positively influences both customer satisfaction and post-purchase intention; (3) customer satisfaction positively influences post-purchase intention; (4) service quality has an indirect positive influence on post-purchase intention through customer satisfaction or perceived value; (5) among the dimensions of service quality, “customer service and system reliability” is most influential on perceived value and customer satisfaction, and the influence of “content quality” ranks second; (6) the proposed model is proven effective in explaining the relationships among service quality, perceived value, customer satisfaction, and post-purchase intention in mobile value-added services.", "title": "" }, { "docid": "60664c058868f08a67d14172d87a4756", "text": "The design of legged robots is often inspired by animals evolved to excel at different tasks. However, while mimicking morphological features seen in nature can be very powerful, robots may need to perform motor tasks that their living counterparts do not. In the absence of designs that can be mimicked, an alternative is to resort to mathematical models that allow the relationship between a robot's form and function to be explored. In this paper, we propose such a model to co-design the motion and leg configurations of a robot such that a measure of performance is optimized. The framework begins by planning trajectories for a simplified model consisting of the center of mass and feet. The framework then optimizes the length of each leg link while solving for associated full-body motions. Our model was successfully used to find optimized designs for legged robots performing tasks that include jumping, walking, and climbing up a step. ␊
Although our results are preliminary and our analysis makes a number of simplifying assumptions, our findings indicate that the cost function, the sum of squared joint torques over the duration of a task, varies substantially as the design parameters change.", "title": "" }, { "docid": "98df4ff146fe0067c87a3b5514ea0934", "text": "The problem of domain generalization is to learn from multiple training domains, and extract a domain-agnostic model that can then be applied to an unseen domain. Domain generalization (DG) has a clear motivation in contexts where there are target domains with distinct characteristics, yet sparse data for training. For example recognition in sketch images, which are distinctly more abstract and rarer than photos. Nevertheless, DG methods have primarily been evaluated on photo-only benchmarks focusing on alleviating the dataset bias where both problems of domain distinctiveness and data sparsity can be minimal. We argue that these benchmarks are overly straightforward, and show that simple deep learning baselines perform surprisingly well on them. In this paper, we make two main contributions: Firstly, we build upon the favorable domain shift-robust properties of deep learning methods, and develop a low-rank parameterized CNN model for end-to-end DG learning. Secondly, we develop a DG benchmark dataset covering photo, sketch, cartoon and painting domains. This is both more practically relevant, and harder (bigger domain shift) than existing benchmarks. The results show that our method outperforms existing DG alternatives, and our dataset provides a more significant DG challenge to drive future research.", "title": "" }, { "docid": "9afc0411331ac43bc54df639760813af", "text": "Ontology provides a shared and reusable piece of knowledge about a specific domain, and has been applied in many fields, such as semantic Web, e-commerce and information retrieval, etc. However, building ontology by hand is a very hard and error-prone task. Learning ontology from existing resources is a good solution. Because relational database is widely used for storing data and OWL is the latest standard recommended by W3C, this paper proposes an approach of learning OWL ontology from data in relational database. Compared with existing methods, the approach can acquire ontology from relational database automatically by using a group of learning rules instead of using a middle model. In addition, it can obtain OWL ontology, including the classes, properties, properties characteristics, cardinality and instances, while none of existing methods can acquire all of them. The proposed learning rules have been proven to be correct by practice.", "title": "" }, { "docid": "cbfffcdb150143ccacaf3700aadea59e", "text": "Recurrent Neural Networks (RNNs), and specifically a variant with Long ShortTerm Memory (LSTM), are enjoying renewed interest as a result of successful applications in a wide range of machine learning problems that involve sequential data. However, while LSTMs provide exceptional results in practice, the source of their performance and their limitations remain rather poorly understood. Using character-level language models as an interpretable testbed, we aim to bridge this gap by providing an analysis of their representations, predictions and error types. In particular, our experiments reveal the existence of interpretable cells that keep track of long-range dependencies such as line lengths, quotes and brackets. 
Moreover, our comparative analysis with finite horizon n-gram models traces the source of the LSTM improvements to long-range structural dependencies. Finally, we provide analysis of the remaining errors and suggests areas for further study.", "title": "" }, { "docid": "6f05e76961d4ef5fc173bafd5578081f", "text": "Edmodo is simply a controlled online networking application that can be used by teachers and students to communicate and remain connected. This paper explores the experiences from a group of students who were using Edmodo platform in their course work. It attempts to use the SAMR (Substitution, Augmentation, Modification and Redefinition) framework of technology integration in education to access and evaluate technology use in the classroom. The respondents were a group of 62 university students from a Kenyan University whose lecturer had created an Edmodo account and introduced the students to participate in their course work during the September to December 2015 semester. More than 82% of the students found that they had a personal stake in the quality of work presented through the platforms and that they were able to take on different subtopics and collaborate to create one final product. This underscores the importance of Edmodo as an environment with skills already in the hands of the students that we can use to integrate technology in the classroom.", "title": "" }, { "docid": "e4e0e01b3af99dfd88ff03a1057b40d3", "text": "There is a tension between user and author control of narratives in multimedia systems and virtual environments. Reducing the interactivity gives the author more control over when and how users experience key events in a narrative, but may lead to less immersion and engagement. Allowing the user to freely explore the virtual space introduces the risk that important narrative events will never be experienced. One approach to striking a balance between user freedom and author control is adaptation of narrative event presentation (i.e. changing the time, location, or method of presentation of a particular event in order to better communicate with the user). In this paper, we describe the architecture of a system capable of dynamically supporting narrative event adaptation. We also report results from two studies comparing adapted narrative presentation with two other forms of unadapted presentation - events with author selected views (movie), and events with user selected views (traditional VE). An analysis of user performance and feedback offers support for the hypothesis that adaptation can improve comprehension of narrative events in virtual environments while maintaining a sense of user control.", "title": "" }, { "docid": "7bfd3237b1a4c3c651b4c5389019f190", "text": "Recent developments in web technologies including evolution of web standards, improvements in browser performance, and the emergence of free and open-source software (FOSS) libraries are driving a general shift from server-side to client-side web applications where a greater share of the computational load is transferred to the browser. Modern client-side approaches allow for improved user interfaces that rival traditional desktop software, as well as the ability to perform simulations and visualizations within the browser. We demonstrate the use of client-side technologies to create an interactive web application for a simulation model of biochemical oxygen demand and dissolved oxygen in rivers called the Webbased Interactive River Model (WIRM). 
We discuss the benefits, limitations and potential uses of client-side web applications, and provide suggestions for future research using new and upcoming web technologies such as offline access and local data storage to create more advanced client-side web applications for environmental simulation modeling. 2014 Elsevier Ltd. All rights reserved. Software availability Product Title: Web-based Interactive River Model (WIRM) Developer: Jeffrey D. Walker Contact Address: Dept. of Civil and Environmental Engineering, Tufts University, 200 College Ave, Medford, MA 02155 Contact E-mail: jeffrey.walker@tufts.edu Available Since: 2013 Programming Language: JavaScript, Python Availability: http://wirm.walkerjeff.com/ Cost: Free", "title": "" }, { "docid": "1616d9fb3fb2b2a3c97f0bf1d36d8b79", "text": "Platt’s probabilistic outputs for Support Vector Machines (Platt, J. in Smola, A., et al. (eds.) Advances in large margin classifiers. Cambridge, 2000) has been popular for applications that require posterior class probabilities. In this note, we propose an improved algorithm that theoretically converges and avoids numerical difficulties. A simple and ready-to-use pseudo code is included.", "title": "" }, { "docid": "bcdb0e6dcbab8fcccfea15edad00a761", "text": "This article presents the 1:4 wideband balun based on transmission lines that was awarded the first prize in the Wideband Baluns Student Design Competition. The competition was held during the 2014 IEEE Microwave Theory and Techniques Society (MTT-S) International Microwave Symposium (IMS2014). It was initiated in 2011 and is sponsored by the MTT-17 Technical Coordinating Committee. The winner must implement and measure a wideband balun of his or her own design and achieve the highest possible operational frequency from at least 1 MHz (or below) while meeting the following conditions: ? female subminiature version A (SMA) connectors are used to terminate all ports ? a minimum impedance transformation ratio of two ? a maximum voltage standing wave ratio (VSWR) of 2:1 at all ports ? an insertion loss of less than 1 dB ? a common-mode rejection ratio (CMRR) of more than 25 dB ? imbalance of less than 1 dB and 2.5?.", "title": "" }, { "docid": "aad2d6385cb8c698a521caea00fe56d2", "text": "With respect to the \" influence on the development and practice of science and engineering in the 20th century \" , Krylov space methods are considered as one of the ten most important classes of numerical methods [1]. Large sparse linear systems of equations or large sparse matrix eigenvalue problems appear in most applications of scientific computing. Sparsity means that most elements of the matrix involved are zero. In particular, discretization of PDEs with the finite element method (FEM) or with the finite difference method (FDM) leads to such problems. In case the original problem is nonlinear, linearization by Newton's method or a Newton-type method leads again to a linear problem. We will treat here systems of equations only, but many of the numerical methods for large eigenvalue problems are based on similar ideas as the related solvers for equations. Sparse linear systems of equations can be solved by either so-called sparse direct solvers, which are clever variations of Gauss elimination, or by iterative methods. 
In the last thirty years, sparse direct solvers have been tuned to perfection: on the one hand by finding strategies for permuting equations and unknowns to guarantee a stable LU decomposition and small fill-in in the triangular factors, and on the other hand by organizing the computation so that optimal use is made of the hardware, which nowadays often consists of parallel computers whose architecture favors block operations with data that are locally stored or cached. The iterative methods that are today applied for solving large-scale linear systems are mostly preconditioned Krylov (sub)space solvers. Classical methods that do not belong to this class, like the successive overrelaxation (SOR) method, are no longer competitive. However, some of the classical matrix splittings, e.g. the one of SSOR (the symmetric version of SOR), are still used for preconditioning. Multigrid is in theory a very effective iterative method, but normally it is now applied as an inner iteration with a Krylov space solver as outer iteration; then, it can also be considered as a preconditioner. In the past, Krylov space solvers were referred to also by other names such as semi-iterative methods and polynomial acceleration methods. Some", "title": "" }, { "docid": "5392e45840929b05b549a64a250774e5", "text": "Faces in natural images are often occluded by a variety of objects. We propose a fully automated, probabilistic and occlusion-aware 3D morphable face model adaptation framework following an analysis-by-synthesis setup. The key idea is to segment the image into regions explained by separate models. Our framework includes a 3D morphable face model, a prototype-based beard model and a simple model for occlusions and background regions. The segmentation and all the model parameters have to be inferred from the single target image. Face model adaptation and segmentation are solved jointly using an expectation–maximization-like procedure. During the E-step, we update the segmentation and in the M-step the face model parameters are updated. For face model adaptation we apply a stochastic sampling strategy based on the Metropolis–Hastings algorithm. For segmentation, we apply loopy belief propagation for inference in a Markov random field. Illumination estimation is critical for occlusion handling. Our combined segmentation and model adaptation needs a proper initialization of the illumination parameters. We propose a RANSAC-based robust illumination estimation technique. By applying this method to a large face image database we obtain a first empirical distribution of real-world illumination conditions. The obtained empirical distribution is made publicly available and can be used as prior in probabilistic frameworks, for regularization or to synthesize data for deep learning methods.", "title": "" }, { "docid": "1e80f38e3ccc1047f7ee7c2b458c0beb", "text": "This thesis presents an approach to robot arm control exploiting natural dynamics. The approach consists of using a compliant arm whose joints are controlled with simple non-linear oscillators. The arm has special actuators which makes it robust to collisions and gives it a smooth compliant, motion. The oscillators produce rhythmic commands of the joints of the arm, and feedback of the joint motions is used to modify the oscillator behavior. The oscillators enable the resonant properties of the arm to be exploited to perform a variety of rhythmic and discrete tasks. 
These tasks include tuning into the resonant frequencies of the arm itself, juggling, turning cranks, playing with a Slinky toy, sawing wood, throwing balls, hammering nails and drumming. For most of these tasks, the controllers at each joint are completely independent, being coupled by mechanical coupling through the physical arm of the robot. The thesis shows that this mechanical coupling allows the oscillators to automatically adjust their commands to be appropriate for the arm dynamics and the task. This coordination is robust to large changes in the oscillator parameters, and large changes in the dynamic properties of the arm. As well as providing a wealth of experimental data to support this approach, the thesis also provides a range of analysis tools, both approximate and exact. These can be used to understand and predict the behavior of current implementations, and design new ones. These analysis techniques improve the value of oscillator solutions. The results in the thesis suggest that the general approach of exploiting natural dynamics is a powerful method for obtaining coordinated dynamic behavior of robot arms. Thesis Supervisor: Rodney A. Brooks Title: Professor of Electrical Engineering and Computer Science, MIT", "title": "" }, { "docid": "a987f009509e9c4f5c29b27275713eac", "text": "PURPOSE\nThis article provides a critical overview of problem-based learning (PBL), its effectiveness for knowledge acquisition and clinical performance, and the underlying educational theory. The focus of the paper is on (1) the credibility of claims (both empirical and theoretical) about the ties between PBL and educational outcomes and (2) the magnitude of the effects.\n\n\nMETHOD\nThe author reviewed the medical education literature, starting with three reviews published in 1993 and moving on to research published from 1992 through 1998 in the primary sources for research in medical education. For each study the author wrote a summary, which included study design, outcome measures, effect sizes, and any other information relevant to the research conclusion.\n\n\nRESULTS AND CONCLUSION\nThe review of the literature revealed no convincing evidence that PBL improves knowledge base and clinical performance, at least not of the magnitude that would be expected given the resources required for a PBL curriculum. The results were considered in light of the educational theory that underlies PBL and its basic research. The author concludes that the ties between educational theory and research (both basic and applied) are loose at best.", "title": "" } ]
scidocsrr
fa29448fa3f997481548cc9c99abf421
Similarity by Composition
[ { "docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.", "title": "" } ]
[ { "docid": "6a8a849bc8272a7b73259e732e3be81b", "text": "Northrop Grumman is developing an atom-based magnetometer technology that has the potential for providing a global position reference independent of GPS. The NAV-CAM sensor is a direct outgrowth of the Nuclear Magnetic Resonance Gyro under development by the same technical team. It is capable of providing simultaneous measurements of all 3 orthogonal axes of magnetic vector field components using a single compact vapor cell. The vector sum determination of the whole-field scalar measurement achieves similar precision to the individual vector components. By using a single sensitive element (vapor cell) this approach eliminates many of the problems encountered when using physically separate sensors or sensing elements.", "title": "" }, { "docid": "b5a5c48f998f77a56821d03c7f8ad64e", "text": "A microwave sensor having features useful for the noninvasive determination of blood glucose levels is described. The sensor output is an amplitude only measurement of the standing wave versus frequency sampled at a fixed point on an open-terminated spiral-shaped microstrip line. Test subjects press their thumb against the line and apply contact pressure sufficient to fall within a narrow pressure range. Data are reported for test subjects whose blood glucose is independently measured using a commercial glucometer.", "title": "" }, { "docid": "cebd2d1ae41ea1179256b885cbd13d3d", "text": "The unconstrained acquisition of facial data in real-world conditions may result in face images with significant pose variations, illumination changes, and occlusions, affecting the performance of facial landmark localization and recognition methods. In this paper, a novel method, robust to pose, illumination variations, and occlusions is proposed for joint face frontalization and landmark localization. Unlike the state-of-the-art methods for landmark localization and pose correction, where large amount of manually annotated images or 3D facial models are required, the proposed method relies on a small set of frontal images only. By observing that the frontal facial image of both humans and animals, is the one having the minimum rank of all different poses, a model which is able to jointly recover the frontalized version of the face as well as the facial landmarks is devised. To this end, a suitable optimization problem is solved, concerning minimization of the nuclear norm (convex surrogate of the rank function) and the matrix $$\\ell _1$$ ℓ 1 norm accounting for occlusions. The proposed method is assessed in frontal view reconstruction of human and animal faces, landmark localization, pose-invariant face recognition, face verification in unconstrained conditions, and video inpainting by conducting experiment on 9 databases. The experimental results demonstrate the effectiveness of the proposed method in comparison to the state-of-the-art methods for the target problems.", "title": "" }, { "docid": "842740ba02fd8d4a515dad3a4acc0c55", "text": "In this paper we present a multivariate analysis of evoked hemodynamic responses and their spatiotemporal dynamics as measured with fast fMRI. This analysis uses standard multivariate statistics (MANCOVA) and the general linear model to make inferences about effects of interest and canonical variates analysis (CVA) to describe the important features of these effects. We have used these techniques to characterize the form of hemodynamic transients that are evoked during a cognitive or sensorimotor task. 
In particular we do not assume that the neural or hemodynamic response reaches some \"steady state\" but acknowledge that these physiological changes could show profound task-dependent adaptation and time-dependent changes during the task. To address this issue we have modeled hemodynamic responses using appropriate temporal basis functions and estimated their exact form within the general linear model using MANCOVA. We do not propose that this analysis is a particularly powerful way to make inferences about functional specialization (or more generally functional anatomy) because it only provides statistical inferences about the distributed (whole brain) responses evoked by different conditions. However, its application to characterizing the temporal aspects of evoked hemodynamic responses reveals some compelling and somewhat unexpected perspectives on transient but stereotyped responses to changes in cognitive or sensorimotor processing. The most remarkable observation is that these responses can be biphasic and show profound differences in their form depending on the extant task or condition. Furthermore these differences can be seen in the absence of changes in mean signal.", "title": "" }, { "docid": "22e7479c10d7b963e9dd2cd3aeee6706", "text": "In this paper, we present an onboard monocular vision system for autonomous takeoff, hovering and landing of a Micro Aerial Vehicle (MAV). Since pose information with metric scale is critical for autonomous flight of a MAV, we present a novel solution to six degrees of freedom (DOF) pose estimation. It is based on a single image of a typical landing pad which consists of the letter “H” surrounded by a circle. A vision algorithm for robust and real-time landing pad recognition is implemented. Then the 5 DOF pose is estimated from the elliptic projection of the circle by using projective geometry. The remaining geometric ambiguity is resolved by incorporating the gravity vector estimated by the inertial measurement unit (IMU). The last degree of freedom pose, yaw angle of the MAV, is estimated from the ellipse fitted from the letter “H”. The efficiency of the presented vision system is demonstrated comprehensively by comparing it to ground truth data provided by a tracking system and by using its pose estimates as control inputs to autonomous flights of a quadrotor.", "title": "" }, { "docid": "0452e261fcd1a18b49e037493abda496", "text": "Joint torque sensory feedback is an effective technique for achieving high-performance robot force and motion control. However, most robots are not equipped with joint torque sensors, and it is difficult to add them without changing the joint's mechanical structure. A method for estimating joint torque that exploits the existing structural elasticity of robotic joints with harmonic drive transmission is proposed in this paper. In the presented joint torque estimation method, motor-side and link-side position measurements along with a proposed harmonic drive compliance model, are used to realize stiff and sensitive joint torque estimation, without the need for adding an additional elastic body and using strain gauges to measure the joint torque. The proposed method has been experimentally studied and its performance is compared with measurements of a commercial torque sensor. 
The results have attested the effectiveness of the proposed torque estimation method.", "title": "" }, { "docid": "28c82ece7caa6e07bf31a143c2d3adbd", "text": "We develop a novel method for training of GANs for unsupervised and class conditional generation of images, called Linear Discriminant GAN (LD-GAN). The discriminator of an LD-GAN is trained to maximize the linear separability between distributions of hidden representations of generated and targeted samples, while the generator is updated based on the decision hyper-planes computed by performing LDA over the hidden representations. LD-GAN provides a concrete metric of separation capacity for the discriminator, and we experimentally show that it is possible to stabilize the training of LD-GAN simply by calibrating the update frequencies between generators and discriminators in the unsupervised case, without employment of normalization methods and constraints on weights. In the class conditional generation tasks, the proposed method shows improved training stability together with better generalization performance compared to WGAN (Arjovsky et al. 2017) that employs an auxiliary classifier.", "title": "" }, { "docid": "5e7acc47170cbe30d330096b8aa87956", "text": "For years we have known that cortical neurons collectively have synchronous or oscillatory patterns of activity, the frequencies and temporal dynamics of which are associated with distinct behavioural states. Although the function of these oscillations has remained obscure, recent experimental and theoretical results indicate that correlated fluctuations might be important for cortical processes, such as attention, that control the flow of information in the brain.", "title": "" }, { "docid": "c91c74a262669d0539a37fa7b51938aa", "text": "BACKGROUND\nBioengineered hyaluronic acid derivatives are currently available that provide for safe and effective soft-tissue augmentation in the comprehensive approach to nonsurgical facial rejuvenation. Current hyaluronic acid fillers do not require preinjection skin testing and produce reproducible, longer-lasting, nonpermanent results compared with other fillers, such as collagen.\n\n\nMETHODS\nA review of the authors' extensive experience at the University of Texas Southwestern Medical Center was conducted to formulate the salient requirements for successful utilization of hyaluronic acid fillers. Indications, technical refinements, and key components for optimized product administration categorized by anatomical location are described. The efficacy and longevity of results are also discussed.\n\n\nRESULTS\nBioengineered hyaluronic acid fillers allow for safe and effective augmentation of selected anatomical regions of the face, when properly administered. Combined treatment with botulinum toxin type A can enhance the effects and longevity by as much as 50 percent. Key components to optimal filler administration include proper anatomical evaluation, changing or combining various fillers based on particle size, altering the depth of injection, using different injection techniques, and coadministration of botulinum toxin type A when indicated. Concomitant administration of hyaluronic acid fillers along with surgical methods of facial rejuvenation can serve as a powerful tool in maximizing a comprehensive treatment plan.\n\n\nCONCLUSIONS\nCurrent techniques in nonsurgical facial rejuvenation and shaping with hyaluronic acid fillers are safe, effective, and long-lasting. 
Combination regimens that include surgical facial rejuvenation techniques and/or coadministration of botulinum toxin type A further optimize results, leading to greater patient satisfaction.", "title": "" }, { "docid": "35dd6675e287b5e364998ee138677032", "text": "Focussed structured document retrieval employs the concept of best entry points (BEPs), which are intended to provide optimal starting-points from which users can browse to relevant document components. This paper describes two small-scale studies, using experimental data from the Shakespeare user study, which developed and evaluated different approaches to the problem of automatic identification of BEPs.", "title": "" }, { "docid": "b6f0d75d0bd8c050c391e148367829a4", "text": "Insufficient supply of animal protein is a major problem in developing countries including Nigeria. Rabbits are adjudged to be a convenient source of palatable and nutritious meat, high in protein, and contain low fat and cholesterol. A doe can produce more than 15 times her own weight in offspring in a year. However, its productivity may be limited by inadequate nutrition. The objective of this study was to determine the effect of probiotic (Saccharomyces cerevisiae) supplementation on growth performance and some hematological parameters of rabbit. The appropriate level of the probiotic inclusion for excellent health status and optimum productivity was also determined. A total of 40 male rabbits were randomly divided into four groups (A–D) of ten rabbits each. Each group was subdivided into two replicates of five rabbits each. They were fed pelleted grower mash ad libitum. The feed for groups A to C was supplemented with bioactive yeast (probiotic) at inclusion levels of 0.08, 0.12, and 0.16 g yeast/kg diet, respectively. Group D had no yeast (control). Daily feed intake was determined. The rabbits were weighed weekly. The packed cell volume (PCV), hemoglobin concentration, white blood cell total, and differential counts were determined at the 8th week, 16th week, and 22nd week following standard procedures. The three results which did not have any significant difference were pooled together. Group A which had 0.08 g yeast/kg of diet had a significantly lower (P ≤ 0.05) PCV than groups B (which had 0.12 g yeast/kg of diet) and C (which had 0.16 g yeast/kg of diet) as well as D (the control). Total WBC counts for groups B and C (14.35 ± 0.100 × 10³/μl and 14.65 ± 0.786 × 10³/μl, respectively) were significantly higher (P ≤ 0.05) than groups A and D (6.33 ± 0.335 × 10³/μl and 10.40 ± 0.296 × 10³/μl, respectively). Also, the absolute neutrophil and lymphocyte counts were significantly higher (P ≤ 0.05) in groups B and C than in groups A and D. Group B had significantly higher (P ≤ 0.05) weight gain (1.025 ± 0.006 kg/rabbit) followed by group A (0.950 ± 0.092 kg/rabbit). The control (group D) had the least weight gain of 0.623 ± 0.099 kg/rabbit. These results showed that like most probiotics, bioactive yeast at an appropriate level of inclusion had a significant beneficial effect on health status and growth rate of rabbit. ␊
Probiotic supplementation level of 0.12 g yeast/kg of diet was recommended for optimum rabbit production.", "title": "" }, { "docid": "8240e0ebc13c75d774f7cc8576f78bfc", "text": "We have built an anatomically correct testbed (ACT) hand with the purpose of understanding the intrinsic biomechanical and control features in human hands that are critical for achieving robust, versatile, and dexterous movements, as well as rich object and world exploration. By mimicking the underlying mechanics and controls of the human hand in a hardware platform, our goal is to achieve previously unmatched grasping and manipulation skills. In this paper, the novel constituting mechanisms, unique muscle to joint relationships, and movement demonstrations of the thumb, index finger, middle finger, and wrist of the ACT Hand are presented. The grasping and manipulation abilities of the ACT Hand are also illustrated. The fully functional ACT Hand platform allows for the possibility to design and experiment with novel control algorithms leading to a deeper understanding of human dexterity.", "title": "" }, { "docid": "eac322eae08da165b436308336aac37a", "text": "The potential of BIM is generally recognized in the construction industry, but the practical application of BIM for management purposes is, however, still limited among contractors. The objective of this study is to review the current scheduling process of construction in light of BIM-based scheduling, and to identify how it should be incorporated into current practice. The analysis of the current scheduling processes identifies significant discrepancies between the overall and the detailed levels of scheduling. The overall scheduling process is described as an individual endeavor with limited and unsystematic sharing of knowledge within and between projects. Thus, the reuse of scheduling data and experiences are inadequate, preventing continuous improvements of the overall schedules. Besides, the overall scheduling process suffers from lack of information, caused by uncoordinated and unsynchronized overlap of the design and construction processes. Consequently, the overall scheduling is primarily based on intuition and personal experiences, rather than well founded figures of the specific project. Finally, the overall schedule is comprehensive and complex, and consequently, difficult to overview and communicate. Scheduling on the detailed level, on the other hand, follows a stipulated approach to scheduling, i.e. the Last Planner System (LPS), which is characterized by involvement of all actors in the construction phase. Thus, the major challenge when implementing BIM-based scheduling is to improve overall scheduling, which in turn, can secure a better starting point of the LPS. The study points to the necessity of involving subcontractors and manufactures in the earliest phases of the project in order to create project specific information for the overall schedule. In addition, the design process should be prioritized and coordinated with each craft, a process library should be introduced to promote transfer of knowledge and continuous improvements, and information flow between design and scheduling processes must change from push to pull.", "title": "" }, { "docid": "01835769f2dc9391051869374e200a6a", "text": "Finding sparse approximate solutions to large underdetermined linear systems of equations is a common problem in signal/image processing and statistics. 
Basis pursuit, the least absolute shrinkage and selection operator (LASSO), wavelet-based deconvolution and reconstruction, and compressed sensing (CS) are a few well-known areas in which problems of this type appear. One standard approach is to minimize an objective function that includes a quadratic (ℓ2) error term added to a sparsity-inducing (usually ℓ1) regularizer. We present an algorithmic framework for the more general problem of minimizing the sum of a smooth convex function and a nonsmooth, possibly nonconvex regularizer. We propose iterative methods in which each step is obtained by solving an optimization subproblem involving a quadratic term with diagonal Hessian (i.e., separable in the unknowns) plus the original sparsity-inducing regularizer; our approach is suitable for cases in which this subproblem can be solved much more rapidly than the original problem. Under mild conditions (namely convexity of the regularizer), we prove convergence of the proposed iterative algorithm to a minimum of the objective function. In addition to solving the standard ℓ2-ℓ1 case, our framework yields efficient solution techniques for other regularizers, such as an ℓ∞ norm and group-separable regularizers. It also generalizes immediately to the case in which the data is complex rather than real. Experiments with CS problems show that our approach is competitive with the fastest known methods for the standard ℓ2-ℓ1 problem, as well as being efficient on problems with other separable regularization terms.", "title": "" }, { "docid": "fcd98a7540dd59e74ea71b589c255adb", "text": "Current Domain Adaptation (DA) methods based on deep architectures assume that the source samples arise from a single distribution. However, in practice most datasets can be regarded as mixtures of multiple domains. In these cases exploiting single-source DA methods for learning target classifiers may lead to sub-optimal, if not poor, results. In addition, in many applications it is difficult to manually provide the domain labels for all source data points, i.e. latent domains should be automatically discovered. This paper introduces a novel Convolutional Neural Network (CNN) architecture which (i) automatically discovers latent domains in visual datasets and (ii) exploits this information to learn robust target classifiers. Our approach is based on the introduction of two main components, which can be embedded into any existing CNN architecture: (i) a side branch that automatically computes the assignment of a source sample to a latent domain and (ii) novel layers that exploit domain membership information to appropriately align the distribution of the CNN internal feature representations to a reference distribution. We test our approach on publicly-available datasets, showing that it outperforms state-of-the-art multi-source DA methods by a large margin.", "title": "" }, { "docid": "292db0e308281a3c1c9be44f76eacc93", "text": "This paper proposes steganalysis methods for extensions of least-significant bit (LSB) overwriting to both of the two lowest bit planes in digital images: there are two distinct embedding paradigms. The author investigates how detectors for standard LSB replacement can be adapted to such embedding, and how the methods of "structural steganalysis", which gives the most sensitive detectors for standard LSB replacement, may be extended and applied to make more sensitive purpose-built detectors for two bit plane steganography. 
The literature contains only one other detector specialized to detect replacement multiple bits, and those presented here are substantially more sensitive. The author also compares the detectability of standard LSB embedding with the two methods of embedding in the lower two bit planes: although the novel detectors have a high accuracy from the steganographer's point of view, the empirical results indicate that embedding in the two lowest bit planes is preferable (in some cases, highly preferable) to embedding in one", "title": "" }, { "docid": "f9b56de3658ef90b611c78bdb787d85b", "text": "Time series prediction techniques have been used in many real-world applications such as financial market prediction, electric utility load forecasting , weather and environmental state prediction, and reliability forecasting. The underlying system models and time series data generating processes are generally complex for these applications and the models for these systems are usually not known a priori. Accurate and unbiased estimation of the time series data produced by these systems cannot always be achieved using well known linear techniques, and thus the estimation process requires more advanced time series prediction algorithms. This paper provides a survey of time series prediction applications using a novel machine learning approach: support vector machines (SVM). The underlying motivation for using SVMs is the ability of this methodology to accurately forecast time series data when the underlying system processes are typically nonlinear, non-stationary and not defined a-priori. SVMs have also been proven to outperform other non-linear techniques including neural-network based non-linear prediction techniques such as multi-layer perceptrons.The ultimate goal is to provide the reader with insight into the applications using SVM for time series prediction, to give a brief tutorial on SVMs for time series prediction, to outline some of the advantages and challenges in using SVMs for time series prediction, and to provide a source for the reader to locate books, technical journals, and other online SVM research resources.", "title": "" }, { "docid": "c79c4bdf28ca638161cb82ac9991d5e9", "text": "This letter proposes a novel wideband circularly polarized magnetoelectric dipole antenna. In the proposed antenna, a pair of rotationally symmetric horizontal patches functions as an electric dipole, and two vertical patches with the ground act as an equivalent magnetic dipole. A Γ-shaped probe is used to excite the antenna, and a metallic cavity with two gaps is designed for wideband and good performance in radiation. A prototype was fabricated and measured. The experimental results show that the proposed antenna has an impedance bandwidth of 65% for SWR≤2 from 1.76 to 3.46 GHz, a 3-dB axial-ratio bandwidth of 71.5% from 1.68 to 3.55 GHz, and a stable gain of 8 ± 1 dBi. Good unidirectional radiation characteristic and low back-lobe level are achieved over the whole operating frequency band.", "title": "" }, { "docid": "5b62ac3acefed74bf82f2c375b10c9e2", "text": "P2P lending is a new form of lending where in the lenders and borrowers can meet at a common platform like Prosper and ZOPA and strike a best deal. While the borrower looks for a lender who offers the fund at a cheaper interest rate, the lender looks for a borrower whose probability of default is nil or minimal. 
The peer to peer lending sites can help the lenders judge the borrower by allowing the analysis of the social structures like friendship networks and group affiliations. A particular user can be judged based on his profile and on the information extracted from his social network like borrower's friend's profile and activities (like lending, borrowing and bidding activities). We are using classification algorithm to classify good and bad borrowers, where the input attributes consists of both core credit and social network information. Most of these algorithms only take a single table as input, whereas in the real world most data are stored in multiple tables and managed by relational database systems. Transferring data from multiple tables into a single table, especially merging the social network data causes problems like high redundancy. A simple classifier performs well on real single table data but when applied in a multi-relational (Multi table) setting; its accuracy suffers from the altered statistical information of individual attributes during “join”. Therefore we are using a multi relational Bayesian classification method to predict the default probabilities of borrowers.", "title": "" }, { "docid": "2da9ad29e0b10a8dc8b01a8faf35bb1a", "text": "Face recognition is challenge task which involves determining the identity of facial images. With availability of a massive amount of labeled facial images gathered from Internet, deep convolution neural networks(DCNNs) have achieved great success in face recognition tasks. Those images are gathered from unconstrain environment, which contain people with different ethnicity, age, gender and so on. However, in the actual application scenario, the target face database may be gathered under different conditions compered with source training dataset, e.g. different ethnicity, different age distribution, disparate shooting environment. These factors increase domain discrepancy between source training database and target application database which makes the learnt model degenerate in target database. Meanwhile, for the target database where labeled data are lacking or unavailable, directly using target data to fine-tune pre-learnt model becomes intractable and impractical. In this paper, we adopt unsupervised transfer learning methods to address this issue. To alleviate the discrepancy between source and target face database and ensure the generalization ability of the model, we constrain the maximum mean discrepancy (MMD) between source database and target database and utilize the massive amount of labeled facial images of source database to training the deep neural network at the same time. We evaluate our method on two face recognition benchmarks and significantly enhance the performance without utilizing the target label.", "title": "" } ]
scidocsrr
8890d941123da99a28bbdfe2b12638ca
QoE and power efficiency tradeoff for fog computing networks with fog node cooperation
[ { "docid": "37be9e992a6a99af165f7c6ddbbed36d", "text": "The past 15 years have seen the rise of the Cloud, along with rapid increase in Internet backbone traffic and more sophisticated cellular core networks. There are three different types of “Clouds:” (1) data center, (2) backbone IP network and (3) cellular core network, responsible for computation, storage, communication and network management. Now the functions of these three types of Clouds are “descending” to be among or near the end users, i.e., to the edge of networks, as “Fog.”", "title": "" }, { "docid": "ae19bd4334434cfb8c5ac015dc8d3bd4", "text": "Soldiers and front-line personnel operating in tactical environments increasingly make use of handheld devices to help with tasks such as face recognition, language translation, decision-making, and mission planning. These resource constrained edge environments are characterized by dynamic context, limited computing resources, high levels of stress, and intermittent network connectivity. Cyber-foraging is the leverage of external resource-rich surrogates to augment the capabilities of resource-limited devices. In cloudlet-based cyber-foraging, resource-intensive computation and data is offloaded to cloudlets. Forward-deployed, discoverable, virtual-machine-based tactical cloudlets can be hosted on vehicles or other platforms to provide infrastructure to offload computation, provide forward data staging for a mission, perform data filtering to remove unnecessary data from streams intended for dismounted users, and serve as collection points for data heading for enterprise repositories. This paper describes tactical cloudlets and presents experimentation results for five different cloudlet provisioning mechanisms. The goal is to demonstrate that cyber-foraging in tactical environments is possible by moving cloud computing concepts and technologies closer to the edge so that tactical cloudlets, even if disconnected from the enterprise, can provide capabilities that can lead to enhanced situational awareness and decision making at the edge.", "title": "" }, { "docid": "9e4417a0ea21de3ffffb9017f0bad705", "text": "Distributed optimization algorithms are highly attractive for solving big data problems. In particular, many machine learning problems can be formulated as the global consensus optimization problem, which can then be solved in a distributed manner by the alternating direction method of multipliers (ADMM) algorithm. However, this suffers from the straggler problem as its updates have to be synchronized. In this paper, we propose an asynchronous ADMM algorithm by using two conditions to control the asynchrony: partial barrier and bounded delay. The proposed algorithm has a simple structure and good convergence guarantees (its convergence rate can be reduced to that of its synchronous counterpart). Experiments on different distributed ADMM applications show that asynchrony reduces the time on network waiting, and achieves faster convergence than its synchronous counterpart in terms of the wall clock time.", "title": "" } ]
[ { "docid": "0a7558a172509707b33fcdfaafe0b732", "text": "Cloud computing has established itself as an alternative IT infrastructure and service model. However, as with all logically centralized resource and service provisioning infrastructures, cloud does not handle well local issues involving a large number of networked elements (IoTs) and it is not responsive enough for many applications that require immediate attention of a local controller. Fog computing preserves many benefits of cloud computing and it is also in a good position to address these local and performance issues because its resources and specific services are virtualized and located at the edge of the customer premise. However, data security is a critical challenge in fog computing especially when fog nodes and their data move frequently in its environment. This paper addresses the data protection and the performance issues by 1) proposing a Region-Based Trust-Aware (RBTA) model for trust translation among fog nodes of regions, 2) introducing a Fog-based Privacy-aware Role Based Access Control (FPRBAC) for access control at fog nodes, and 3) developing a mobility management service to handle changes of users and fog devices' locations. The implementation results demonstrate the feasibility and the efficiency of our proposed framework.", "title": "" }, { "docid": "4bd161b3e91dea05b728a72ade72e106", "text": "Julio Rodriguez∗ Faculté des Sciences et Techniques de l’Ingénieur (STI), Institut de Microtechnique (IMT), Laboratoire de Production Microtechnique (LPM), Ecole Polytechnique Fédérale de Lausanne (EPFL), CH-1015 Lausanne, Switzerland and Fakultät für Physik, Universität Bielefeld, D-33501 Bielefeld, Germany ∗Corresponding author. Email: julio.rodriguez@epfl.ch and jrodrigu@physik.uni-bielefeld.de", "title": "" }, { "docid": "84d2cb7c4b8e0f835dab1cd3971b60c5", "text": "Ambient intelligence (AmI) deals with a new world of ubiquitous computing devices, where physical environments interact intelligently and unobtrusively with people. These environments should be aware of people's needs, customizing requirements and forecasting behaviors. AmI environments can be diverse, such as homes, offices, meeting rooms, schools, hospitals, control centers, vehicles, tourist attractions, stores, sports facilities, and music devices. Artificial intelligence research aims to include more intelligence in AmI environments, allowing better support for humans and access to the essential knowledge for making better decisions when interacting with these environments. This article, which introduces a special issue on AmI, views the area from an artificial intelligence perspective.", "title": "" }, { "docid": "88128ec1201e2202f13f2c09da0f07f2", "text": "A new mechanism is proposed for exciting the magnetic state of a ferromagnet. Assuming ballistic conditions and using WKB wave functions, we predict that a transfer of vectorial spin accompanies an electric current flowing perpendicular to two parallel magnetic films connected by a normal metallic spacer. This spin transfer drives motions of the two magnetization vectors within their instantaneously common plane. Consequent new mesoscopic precession and switching phenomena with potential applications are predicted. PACS: 75.50.Rr; 75.70.Cn A magnetic multilayer (MML) is composed of alternating ferromagnetic and paramagnetic sublayers whose thicknesses usually range between 1 and l0 nm. 
The discovery in 1988 of giant magnetoresistance (GMR) in such multilayers stimulates much current research [1]. Although the initial reports dealt with currents flowing in the layer planes (CIP), the magnetoresistive phenomenon is known to be even stronger for currents flowing perpendicular to the plane (CPP) [2]. We predict here that the spin-polarized nature of such a perpendicular current generally creates a mutual transference of spin angular momentum between the magnetic sublayers which is manifested in their dynamic response. This response, which occurs only for CPP geometry, we propose to characterize as spin transfer. It can dominate the Larmor response to the magnetic field induced by the current when the magnetic sublayer thickness is about 1 nm and the smaller of its other two dimensions is less than 10² to 10³ nm. On this mesoscopic scale, two new phenomena become possible: a steady precession driven by a constant current, and alternatively a novel form of switching driven by a pulsed current. Other forms of current-driven magnetic response without the use of any electromagnetically induced magnetic field are already known. Reports of both theory and experiments show how the exchange effect of external current flowing through a ferromagnetic domain wall causes it to move [3]. Even closer to the present subject is the magnetic response to tunneling current in the case of the sandwich structure ferromagnet/insulator/ferromagnet (F/I/F) predicted previously [4]. Unfortunately, theoretical relations indicated that the dissipation of energy, and therefore temperature rise, needed to produce more than barely observable spin-transfer through a tunneling barrier is prohibitively large. However, the advent of multilayers incorporating very thin paramagnetic metallic spacers, rather than a barrier, places the realization of spin transfer in a different light. In the first place, the metallic spacer implies a low resistance and therefore low Ohmic dissipation for a given current, to which spin-transfer effects are proportional. Secondly, numerous experiments [5] and theories [6] show that the fundamental interlayer exchange coupling of RKKY type diminishes in strength and varies in sign as spacer thickness increases. Indeed, there exist experimental spacers which are thick enough (e.g. 4 nm) for the exchange coupling to be negligible even though spin relaxation is too weak to significantly diminish the GMR effect which relies on preservation of spin direction during electron transit across the spacer. Moreover, the same fact of long spin relaxation time in magnetic multilayers is illustrated on an even larger distance scale, an order of magnitude greater than the circa 10 nm electron mean free path, by spin injection experiments [7]. It follows, as we show below, that interesting current-driven spin-transfer effects are expected under laboratory conditions involving very small distance scales. We begin with simple arguments to explain current-driven spin transfer and establish its physical scale. We then sketch a detailed treatment and summarize its results. Finally, we predict two spin-transfer phenomena: steady magnetic precession driven by a constant current and a novel form of magnetic switching. 
We consider the five metallic regions represented schematically in Fig. 1. Layers A, B, and C are paramagnetic, whilst F1 and F2 are ferromagnetic. The instantaneous macroscopic vectors ℏS1 and ℏS2 forming the included angle θ represent the respective total spin momenta per unit area of the ferromagnets. Now consider a flow of electrons moving rightward through the sandwich. The works on spin injection [7] show that if the thickness of spacer B is less than the spin-diffusion length, usually at least 100 nm, then some degree of spin polarization along the instantaneous axis parallel to the vector S1 of local ferromagnetic polarization in F1 will be present in the electrons impinging on F2. This leads us to consider a three-layer (B, F2, C in Fig. 1) model in which an electron with initial spin state along the direction S1 is incident from", "title": "" }, { "docid": "7161122eaa9c9766e9914ba0f2ee66ef", "text": "Cross-linguistically consistent annotation is necessary for sound comparative evaluation and cross-lingual learning experiments. It is also useful for multilingual system development and comparative linguistic studies. Universal Dependencies is an open community effort to create cross-linguistically consistent treebank annotation for many languages within a dependency-based lexicalist framework. In this paper, we describe v1 of the universal guidelines, the underlying design principles, and the currently available treebanks for 33 languages.", "title": "" }, { "docid": "b741698d7e4d15cb7f4e203f2ddbce1d", "text": "This study examined the process of how socioeconomic status, specifically parents' education and income, indirectly relates to children's academic achievement through parents' beliefs and behaviors. Data from a national, cross-sectional study of children were used for this study. The subjects were 868 8-12-year-olds, divided approximately equally across gender (436 females, 433 males). This sample was 49% non-Hispanic European American and 47% African American. Using structural equation modeling techniques, the author found that the socioeconomic factors were related indirectly to children's academic achievement through parents' beliefs and behaviors but that the process of these relations was different by racial group. Parents' years of schooling also was found to be an important socioeconomic factor to take into consideration in both policy and research when looking at school-age children.", "title": "" }, { "docid": "f35007fdca9c35b4c243cb58bd6ede7a", "text": "Photovoltaic Thermal Collector (PVT) is a hybrid generator which converts solar radiation into useful electric and thermal energies simultaneously. This paper gathers all PVT sub-models in order to form a unique dynamic model that reveals PVT parameters interactions. As PVT is a multi-input/output/output system, a state space model based on energy balance equations is developed in order to analyze and assess the parameters behaviors and correlations of PVT constituents. The model simulation is performed using LabVIEW Software. The simulation shows the impact of the fluid flow rate variation on the collector efficiencies (thermal and electrical).", "title": "" }, { "docid": "957170b015e5acd4ab7ce076f5a4c900", "text": "On many social networking web sites such as Facebook and Twitter, resharing or reposting functionality allows users to share others' content with their own friends or followers. 
As content is reshared from user to user, large cascades of reshares can form. While a growing body of research has focused on analyzing and characterizing such cascades, a recent, parallel line of work has argued that the future trajectory of a cascade may be inherently unpredictable. In this work, we develop a framework for addressing cascade prediction problems. On a large sample of photo reshare cascades on Facebook, we find strong performance in predicting whether a cascade will continue to grow in the future. We find that the relative growth of a cascade becomes more predictable as we observe more of its reshares, that temporal and structural features are key predictors of cascade size, and that initially, breadth, rather than depth in a cascade is a better indicator of larger cascades. This prediction performance is robust in the sense that multiple distinct classes of features all achieve similar performance. We also discover that temporal features are predictive of a cascade's eventual shape. Observing independent cascades of the same content, we find that while these cascades differ greatly in size, we are still able to predict which ends up the largest.", "title": "" }, { "docid": "d30343a3a888139eb239c6605ccb0f41", "text": "Low-power wireless networks are quickly becoming a critical part of our everyday infrastructure. Power consumption is a critical concern, but power measurement and estimation is a challenge. We present Powertrace, which to the best of our knowledge is the first system for network-level power profiling of low-power wireless systems. Powertrace uses power state tracking to estimate system power consumption and a structure called energy capsules to attribute energy consumption to activities such as packet transmissions and receptions. With Powertrace, the power consumption of a system can be broken down into individual activities which allows us to answer questions such as “How much energy is spent forwarding packets for node X?”, “How much energy is spent on control traffic and how much on critical data?”, and “How much energy does application X account for?”. Experiments show that Powertrace is accurate to 94% of the energy consumption of a device. To demonstrate the usefulness of Powertrace, we use it to experimentally analyze the power behavior of the proposed IETF standard IPv6 RPL routing protocol and a sensor network data collection protocol. Through using Powertrace, we find the highest power consumers and are able to reduce the power consumption of data collection with 24%. It is our hope that Powertrace will help the community to make empirical energy evaluation a widely used tool in the low-power wireless research community toolbox.", "title": "" }, { "docid": "70b325c1767e9977ac27894cfa051fab", "text": "BACKGROUND\nDecreased systolic function is central to the pathogenesis of heart failure in millions of patients worldwide, but mechanism-related adverse effects restrict existing inotropic treatments. This study tested the hypothesis that omecamtiv mecarbil, a selective cardiac myosin activator, will augment cardiac function in human beings.\n\n\nMETHODS\nIn this dose-escalating, crossover study, 34 healthy men received a 6-h double-blind intravenous infusion of omecamtiv mecarbil or placebo once a week for 4 weeks. Each sequence consisted of three ascending omecamtiv mecarbil doses (ranging from 0·005 to 1·0 mg/kg per h) with a placebo infusion randomised into the sequence. 
Vital signs, blood samples, electrocardiographs (ECGs), and echocardiograms were obtained before, during, and after each infusion. The primary aim was to establish maximum tolerated dose (the highest infusion rate tolerated by at least eight participants) and plasma concentrations of omecamtiv mecarbil; secondary aims were evaluation of pharmacodynamic and pharmacokinetic characteristics, safety, and tolerability. This study is registered at ClinicalTrials.gov, number NCT01380223.\n\n\nFINDINGS\nThe maximum tolerated dose of omecamtiv mecarbil was 0·5 mg/kg per h. Omecamtiv mecarbil infusion resulted in dose-related and concentration-related increases in systolic ejection time (mean increase from baseline at maximum tolerated dose, 85 [SD 5] ms), the most sensitive indicator of drug effect (r(2)=0·99 by dose), associated with increases in stroke volume (15 [2] mL), fractional shortening (8% [1]), and ejection fraction (7% [1]; all p<0·0001). Omecamtiv mecarbil increased atrial contractile function, and there were no clinically relevant changes in diastolic function. There were no clinically significant dose-related adverse effects on vital signs, serum chemistries, ECGs, or adverse events up to a dose of 0·625 mg/kg per h. The dose-limiting toxic effect was myocardial ischaemia due to excessive prolongation of systolic ejection time.\n\n\nINTERPRETATION\nThese first-in-man data show highly dose-dependent augmentation of left ventricular systolic function in response to omecamtiv mecarbil and support potential clinical use of the drug in patients with heart failure.\n\n\nFUNDING\nCytokinetics Inc.", "title": "" }, { "docid": "b5ecd3e4e14cae137b88de8bd4c92c5d", "text": "Design and analysis of ultrahigh-frequency (UHF) micropower rectifiers based on a diode-connected dynamic threshold MOSFET (DTMOST) is discussed. An analytical design model for DTMOST rectifiers is derived based on curve-fitted diode equation parameters. Several DTMOST six-stage charge-pump rectifiers were designed and fabricated using a CMOS 0.18-mum process with deep n-well isolation. Measured results verified the design model with average accuracy of 10.85% for an input power level between -4 and 0 dBm. At the same time, three other rectifiers based on various types of transistors were fabricated on the same chip. The measured results are compared with a Schottky diode solution.", "title": "" }, { "docid": "bde70da078bba2a63899cc7eb2a9aaf9", "text": "In the past few years, cloud computing develops very quickly. A large amount of data are uploaded and stored in remote public cloud servers which cannot fully be trusted by users. Especially, more and more enterprises would like to manage their data by the aid of the cloud servers. However, when the data outsourced in the cloud are sensitive, the challenges of security and privacy becomes urgent for wide deployment of the cloud systems. This paper proposes a secure data sharing scheme to ensure the privacy of data owner and the security of the outsourced cloud data. The proposed scheme provides flexible utility of data while solving the privacy and security challenges for data sharing. The security and efficiency analysis demonstrate that the designed scheme is feasible and efficient. At last, we discuss its application in electronic health record.", "title": "" }, { "docid": "6883add239f58223ef1941d5044d4aa8", "text": "A novel jitter equalization circuit is presented that addresses crosstalk-induced jitter in high-speed serial links. 
A simple model of electromagnetic coupling demonstrates the generation of crosstalk-induced jitter. The analysis highlights unique aspects of crosstalk-induced jitter that differ from far-end crosstalk. The model is used to predict the crosstalk-induced jitter in 2-PAM and 4-PAM, which is compared to measurement. Furthermore, the model suggests an equalizer that compensates for the data-induced electromagnetic coupling between adjacent links and is suitable for pre- or post-emphasis schemes. The circuits are implemented using 130-nm MOSFETs and operate at 5-10 Gb/s. The results demonstrate reduced deterministic jitter and lower bit-error rate (BER). At 10 Gb/s, the crosstalk-induced jitter equalizer opens the eye at 10/sup -12/ BER from 17 to 45 ps and lowers the rms jitter from 8.7 to 6.3 ps.", "title": "" }, { "docid": "ba9030da218e0ba5d4369758d80be5b9", "text": "Generative adversarial networks (GANs) can implicitly learn rich distributions over images, audio, and data which are hard to model with an explicit likelihood. We present a practical Bayesian formulation for unsupervised and semi-supervised learning with GANs, in conjunction with stochastic gradient Hamiltonian Monte Carlo to marginalize the weights of the generator and discriminator networks. The resulting approach is straightforward and obtains good performance without any standard interventions such as feature matching, or mini-batch discrimination. By exploring an expressive posterior over the parameters of the generator, the Bayesian GAN avoids mode-collapse, produces interpretable candidate samples with notable variability, and in particular provides state-of-the-art quantitative results for semi-supervised learning on benchmarks including SVHN, CelebA, and CIFAR-10, outperforming DCGAN, Wasserstein GANs, and DCGAN ensembles.", "title": "" }, { "docid": "5cfef434d0d33ac5859bcdb77227d7b7", "text": "The prevalence of mobile phones, the internet-of-things technology, and networks of sensors has led to an enormous and ever increasing amount of data that are now more commonly available in a streaming fashion [1]-[5]. Often, it is assumed - either implicitly or explicitly - that the process generating such a stream of data is stationary, that is, the data are drawn from a fixed, albeit unknown probability distribution. In many real-world scenarios, however, such an assumption is simply not true, and the underlying process generating the data stream is characterized by an intrinsic nonstationary (or evolving or drifting) phenomenon. The nonstationarity can be due, for example, to seasonality or periodicity effects, changes in the users' habits or preferences, hardware or software faults affecting a cyber-physical system, thermal drifts or aging effects in sensors. In such nonstationary environments, where the probabilistic properties of the data change over time, a non-adaptive model trained under the false stationarity assumption is bound to become obsolete in time, and perform sub-optimally at best, or fail catastrophically at worst.", "title": "" }, { "docid": "16546193b0096392d4f5ebf6ad7d35a8", "text": "According to the ways to see the real environments, mirror metaphor augmented reality systems can be classified into video see-through virtual mirror displays and reflective half-mirror displays. The two systems have distinctive characteristics and application fields with different types of complexity. 
In this paper, we introduce a system configuration to implement a prototype of a reflective half-mirror display-based augmented reality system. We also present a two-phase calibration method using an extra camera for the system. Finally, we describe three error sources in the proposed system and show the results of an analysis of these errors with several experiments.", "title": "" }, { "docid": "bbea93884f1f0189be1061939783a1c0", "text": "Severe adolescent female stress urinary incontinence (SAFSUI) can be defined as female adolescents between the ages of 12 and 17 years complaining of involuntary loss of urine multiple times each day during normal activities or sneezing or coughing rather than during sporting activities. An updated review of its likely prevalence, etiology, and management is required. The case of a 15-year-old female adolescent presenting with a 7-year history of SUI resistant to antimuscarinic medications and 18 months of intensive physiotherapy prompted this review. Issues of performing physical and urodynamic assessment at this young age were overcome in order to achieve the diagnosis of urodynamic stress incontinence (USI). Failed use of tampons was followed by the insertion of (retropubic) suburethral synthetic tape (SUST) under assisted local anesthetic into tissues deemed softer than the equivalent for an adult female. Whereas occasional urinary incontinence can occur in between 6% and 45% of nulliparous adolescents, the prevalence of non-neurogenic SAFSUI is uncertain but more likely rare. Risk factors for the occurrence of more severe AFSUI include obesity, athletic activities or high-impact training, and lung diseases such as cystic fibrosis (CF). This first reported use of a SUST in a patient with SAFSUI proved safe and completely curative. Artificial urinary sphincters, periurethral injectables and pubovaginal slings have been tried previously in equivalent patients. SAFSUI is a relatively rare but physically and emotionally disabling presentation. Multiple conservative options may fail, necessitating surgical management; SUST can prove safe and effective.", "title": "" }, { "docid": "cac556bfbdf64e655766da2404cb24c2", "text": "How can we learn a classifier that is “fair” for a protected or sensitive group, when we do not know if the input to the classifier belongs to the protected group? How can we train such a classifier when data on the protected group is difficult to attain? In many settings, finding out the sensitive attribute can be prohibitively expensive even during model training, and sometimes impossible during model serving. For example, in recommender systems, if we want to predict if a user will click on a given recommendation, we often do not know many attributes of the user, e.g., race or age, and many attributes of the content are hard to determine, e.g., the language or topic. Thus, it is not feasible to use a different classifier calibrated based on knowledge of the sensitive attribute. Here, we use an adversarial training procedure to remove information about the sensitive attribute from the latent representation learned by a neural network. In particular, we study how the choice of data for the adversarial training affects the resulting fairness properties. We find two interesting results: a small amount of data is needed to train these adversarial models, and the data distribution empirically drives the adversary’s notion of fairness. ACM Reference format: Alex Beutel, Jilin Chen, Zhe Zhao, Ed H. Chi. 2017. 
Data Decisions and Theoretical Implications when Adversarially Learning Fair Representations. In Proceedings of 2017 Workshop on Fairness, Accountability, and Transparency in Machine Learning, Halifax, Canada, August 2017 (FAT/ML ’17), 5 pages.", "title": "" } ]
scidocsrr
ee3f6043d2b4fc2c1ab7bf983cd18563
Performance analysis of data security algorithms used in the railway traffic control systems
[ { "docid": "34ceb0e84b4e000b721f87bcbec21094", "text": "The principal goal guiding the design of any encryption algorithm must be security against unauthorized attacks. However, for all practical applications, performance and the cost of implementation are also important concerns. A data encryption algorithm would not be of much use if it is secure enough but slow in performance because it is a common practice to embed encryption algorithms in other applications such as e-commerce, banking, and online transaction processing applications. Embedding of encryption algorithms in other applications also precludes a hardware implementation, and is thus a major cause of degraded overall performance of the system. In this paper, the four of the popular secret key encryption algorithms, i.e., DES, 3DES, AES (Rijndael), and the Blowfish have been implemented, and their performance is compared by encrypting input files of varying contents and sizes, on different Hardware platforms. The algorithms have been implemented in a uniform language, using their standard specifications, to allow a fair comparison of execution speeds. The performance results have been summarized and a conclusion has been presented. Based on the experiments, it has been concluded that the Blowfish is the best performing algorithm among the algorithms chosen for implementation.", "title": "" } ]
[ { "docid": "debb7f6f8e00b536dd823c4b513f5950", "text": "It is known that in the Tower of Ha noi graphs there are at most two different shortest paths between any fixed pair of vertices. A formula is given that counts, for a given vertex v, thenumber of verticesu such that there are two shortest u, v-paths. The formul a is expressed in terms of Stern’s diatomic sequenceb(n) (n ≥ 0) and implies that only for vertices of degree two this number is zero. Plane embeddings of the Tower of Hanoi graphs are also presented that provide an explicit description ofb(n) as the number of elements of the sets of vertices of the Tower of Hanoi graphs intersected by certain lines in the plane. © 2004 Elsevier Ltd. All rights reserved. MSC (2000):05A15; 05C12; 11B83; 51M15", "title": "" }, { "docid": "1145d2375414afbdd5f1e6e703638028", "text": "Content addressable memories (CAMs) are very attractive for high-speed table lookups in modern network systems. This paper presents a low-power dual match line (ML) ternary CAM (TCAM) to address the power consumption issue of CAMs. The highly capacitive ML is divided into two segments to reduce the active capacitance and hence the power. We analyze possible cases of mismatches and demonstrate a significant reduction in power (up to 43%) for a small penalty in search speed (4%).", "title": "" }, { "docid": "def621d47a8ead24754b1eebe590314a", "text": "Existing social-aware routing protocols for packet switched networks make use of the information about the social structure of the network deduced by state information of nodes (e.g., history of past encounters) to optimize routing. Although these approaches are shown to have superior performance to social-oblivious, stateless routing protocols (BinarySW, Epidemic), the improvement comes at the cost of considerable storage overhead required on the nodes. In this paper we present SANE, the first routing mechanism that combines the advantages of both social-aware and stateless approaches. SANE is based on the observation - that we validate on a real-world trace - that individuals with similar interests tend to meet more often. In SANE, individuals (network members) are characterized by their interest profile, a compact representation of their interests. By implementing a simple routing rule based on interest profile similarity, SANE is free of network state information, thus overcoming the storage capacity problem with existing social-aware approaches. Through thorough experiments, we show the superiority of SANE over existing approaches, both stateful, social-aware and stateless, social-oblivious. We discuss the statelessness of our approach in the supplementary file, which can be found on the Computer Society Digital Library at http://doi.ieeecomputersociety.org/10.1109/TPDS.2014.2307857, of this manuscript. Our interest-based approach easily enables innovative networking services, such as interest-casting. An interest-casting protocol is also introduced in this paper, and evaluated through experiments based on both real-world and synthetic mobility traces.", "title": "" }, { "docid": "6dd81725ffdb5a90c9f02c4faca784a3", "text": "In 1989 the IT function of the exploration and production division of British Petroleum Company set out to transform itself in response to a severe economic environment and poor internal perceptions of IT performance. This case study traces and analyzes the changes made over six years. 
The authors derive a model of the transformed IT organization comprising seven components which they suggest can guide IT departments in general as they seek to reform themselves in the late 1990's. This model is seen to fit well with recent thinking on general management in that the seven components of change can be reclassified into the Bartlett and Ghoshal (1994) framework of Purpose, Process and People. Some suggestions are made on how to apply the model in other organizations.", "title": "" }, { "docid": "eee51fc5cd3bee512b01193fa396e19a", "text": "Croston’s method is a widely used to predict inventory demand when it is inter­ mittent. However, it is an ad hoc method with no properly formulated underlying stochastic model. In this paper, we explore possible models underlying Croston’s method and three related methods, and we show that any underlying model will be inconsistent with the prop­ erties of intermittent demand data. However, we find that the point forecasts and prediction intervals based on such underlying models may still be useful. [JEL: C53, C22, C51]", "title": "" }, { "docid": "87785a3cd233389e23f4773f24c17d1d", "text": "Modern processors use high-performance cache replacement policies that outperform traditional alternatives like least-recently used (LRU). Unfortunately, current cache models do not capture these high-performance policies as most use stack distances, which are inherently tied to LRU or its variants. Accurate predictions of cache performance enable many optimizations in multicore systems. For example, cache partitioning uses these predictions to divide capacity among applications in order to maximize performance, guarantee quality of service, or achieve other system objectives. Without an accurate model for high-performance replacement policies, these optimizations are unavailable to modern processors. We present a new probabilistic cache model designed for high-performance replacement policies. It uses absolute reuse distances instead of stack distances, and models replacement policies as abstract ranking functions. These innovations let us model arbitrary age-based replacement policies. Our model achieves median error of less than 1% across several high-performance policies on both synthetic and SPEC CPU2006 benchmarks. Finally, we present a case study showing how to use the model to improve shared cache performance.", "title": "" }, { "docid": "88785ff4fe8ff37edebbf8c74f8e2465", "text": "We propose a data-driven method for automatic deception detection in real-life trial data using visual and verbal cues. Using OpenFace with facial action unit recognition, we analyze the movement of facial features of the witness when posed with questions and the acoustic patterns using OpenSmile. We then perform a lexical analysis on the spoken words, emphasizing the use of pauses and utterance breaks, feeding that to a Support Vector Machine to test deceit or truth prediction. We then try out a method to incorporate utterance-based fusion of visual and lexical analysis, using string based matching.", "title": "" }, { "docid": "8ba7352e7726f47be779a699a422ecb5", "text": "Autonomous driving has attracted tremendous attention especially in the past few years. The key techniques for a self-driving car include solving tasks like 3D map construction, self-localization, parsing the driving road and understanding objects, which enable vehicles to reason and act. 
However, large scale data set for training and system evaluation is still a bottleneck for developing robust perception models. In this paper, we present the ApolloScape dataset [1] and its applications for autonomous driving. Compared with existing public datasets from real scenes, e.g. KITTI [2] or Cityscapes [3], ApolloScape contains much large and richer labelling including holistic semantic dense point cloud for each site, stereo, per-pixel semantic labelling, lanemark labelling, instance segmentation, 3D car instance, high accurate location for every frame in various driving videos from multiple sites, cities and daytimes. For each task, it contains at lease 15x larger amount of images than SOTA datasets. To label such a complete dataset, we develop various tools and algorithms specified for each task to accelerate the labelling process, such as 3D-2D segment labeling tools, active labelling in videos etc. Depend on ApolloScape, we are able to develop algorithms jointly consider the learning and inference of multiple tasks. In this paper, we provide a sensor fusion scheme integrating camera videos, consumer-grade motion sensors (GPS/IMU), and a 3D semantic map in order to achieve robust self-localization and semantic segmentation for autonomous driving. We show that practically, sensor fusion and joint learning of multiple tasks are beneficial to achieve a more robust and accurate system. We expect our dataset and proposed relevant algorithms can support and motivate researchers for further development of multi-sensor fusion and multi-task learning in the field of computer vision.", "title": "" }, { "docid": "89263084f29469d1c363da55c600a971", "text": "Today when there are more than 1 billion Android users all over the world, it shows that its popularity has no equal. These days mobile phones have become so intrusive in our daily lives that when they needed can give huge amount of information to forensic examiners. Till the date of writing this paper there are many papers citing the need of mobile device forensic and ways of getting the vital artifacts through mobile devices for different purposes. With vast options of popular and less popular forensic tools and techniques available today, this papers aims to bring them together under a comparative study so that this paper could serve as a starting point for several android users, future forensic examiners and investigators. During our survey we found scarcity for papers on tools for android forensic. In this paper we have analyzed different tools and techniques used in android forensic and at the end tabulated the results and findings.", "title": "" }, { "docid": "4adfc2bf6907305fc4da20a5b753c2b1", "text": "Book recommendation systems can benefit commercial websites, social media sites, and digital libraries, to name a few, by alleviating the knowledge acquisition process of users who look for books that are appealing to them. Even though existing book recommenders, which are based on either collaborative filtering, text content, or the hybrid approach, aid users in locating books (among the millions available), their recommendations are not personalized enough to meet users’ expectations due to their collective assumption on group preference and/or exact content matching, which is a failure. To address this problem, we have developed PBRecS, a book recommendation system that is based on social interactions and personal interests to suggest books appealing to users. 
PBRecS relies on the friendships established on a social networking site, such as LibraryThing, to generate more personalized suggestions by including in the recommendations solely books that belong to a user’s friends who share common interests with the user, in addition to applying word-correlation factors for partially matching book tags to disclose books similar in contents. The conducted empirical study on data extracted from LibraryThing has verified (i) the effectiveness of PBRecS using social-media data to improve the quality of book recommendations and (ii) that PBRecS outperforms the recommenders employed by Amazon and LibraryThing.", "title": "" }, { "docid": "64fbffe75209359b540617fac4930c44", "text": "Recent developments in information technology have enabled collection and processing of vast amounts of personal data, such as criminal records, shopping habits, credit and medical history, and driving records. This information is undoubtedly very useful in many areas, including medical research, law enforcement and national security. However, there is an increasing public concern about the individuals' privacy. Privacy is commonly seen as the right of individuals to control information about themselves. The appearance of technology for Knowledge Discovery and Data Mining (KDDM) has revitalized concern about the following general privacy issues: • secondary use of the personal information, • handling misinformation, and • granulated access to personal information. They demonstrate that existing privacy laws and policies are well behind the developments in technology, and no longer offer adequate protection. We also discuss new privacy threats posed KDDM, which includes massive data collection, data warehouses, statistical analysis and deductive learning techniques. KDDM uses vast amounts of data to generate hypotheses and discover general patterns. KDDM poses the following new challenges to privacy.", "title": "" }, { "docid": "238b49907eb577647354e4145f4b1e7e", "text": "The work here presented contributes to the development of ground target tracking control systems for fixed wing unmanned aerial vehicles (UAVs). The control laws are derived at the kinematic level, relying on a commercial inner loop controller onboard that accepts commands in indicated air speed and bank, and appropriately sets the control surface deflections and thrust in order to follow those references in the presence of unknown wind. Position and velocity of the target on the ground is assumed to be known. The algorithm proposed derives from a path following control law that enables the UAV to converge to a circumference centered at the target and moving with it, thus keeping the UAV in the vicinity of the target even if the target moves at a velocity lower than the UAV stall speed. If the target speed is close to the UAV speed, the control law behaves similar to a controller that tracks a particular T. Oliveira Science Laboratory, Portuguese Air Force Academy, Sintra, 2715-021, Portugal e-mail: tmoliveira@academiafa.edu.pt P. Encarnação (B) Faculty of Engineering, Catholic University of Portugal, Rio de Mouro, 2635-631, Portugal e-mail: pme@fe.lisboa.ucp.pt point on the circumference centered at the target position. Real flight tests results show the good performance of the control scheme presented.", "title": "" }, { "docid": "4d5461e076839bf2364a190808959acb", "text": "environment, are becoming increasingly prevalent. 
However, if agents are to behave intelligently in complex, dynamic, and noisy environments, we believe that they must be able to learn and adapt. The reinforcement learning (RL) paradigm is a popular way for such agents to learn from experience with minimal feedback. One of the central questions in RL is how best to generalize knowledge to successfully learn and adapt. In reinforcement learning problems, agents sequentially observe their state and execute actions. The goal is to maximize a real-valued reward signal, which may be time delayed. For example, an agent could learn to play a game by being told what the state of the board is, what the legal actions are, and then whether it wins or loses at the end of the game. However, unlike in supervised learning scenarios, the agent is never provided the “correct” action. Instead, the agent can only gather data by interacting with an environment, receiving information about the results, its actions, and the reward signal. RL is often used because of the framework’s flexibility and due to the development of increasingly data-efficient algorithms. RL agents learn by interacting with the environment, gathering data. If the agent is virtual and acts in a simulated environment, training data can be collected at the expense of computer time. However, if the agent is physical, or the agent must act on a “real-world” problem where the online reward is critical, such data can be expensive. For instance, a physical robot will degrade over time and must be replaced, and an agent learning to automate a company’s operations may lose money while training. When RL agents begin learning tabula rasa, mastering difficult tasks may be infeasible, as they require significant amounts of data even when using state-of-the-art RL approaches. There are many contemporary approaches to speed up “vanilla” RL methods. Transfer learning (TL) is one such technique. Transfer learning is an umbrella term used when knowledge is Articles", "title": "" }, { "docid": "e84e83443d65498a7ea37669122389e5", "text": "In many scientific and engineering applications, we are tasked with the optimisation of an expensive to evaluate black box function f . Traditional methods for this problem assume just the availability of this single function. However, in many cases, cheap approximations to f may be obtainable. For example, the expensive real world behaviour of a robot can be approximated by a cheap computer simulation. We can use these approximations to eliminate low function value regions cheaply and use the expensive evaluations of f in a small but promising region and speedily identify the optimum. We formalise this task as a multi-fidelity bandit problem where the target function and its approximations are sampled from a Gaussian process. We develop MF-GP-UCB, a novel method based on upper confidence bound techniques. In our theoretical analysis we demonstrate that it exhibits precisely the above behaviour, and achieves better regret than strategies which ignore multi-fidelity information. MF-GP-UCB outperforms such naive strategies and other multi-fidelity methods on several synthetic and real experiments.", "title": "" }, { "docid": "28ece47474132a3f8df9aa39be02d194", "text": "The degree of heavy metal (Hg, Cr, Cd, and Pb) pollution in honeybees (Apis mellifera) was investigated in several sampling sites around central Italy including both polluted and wildlife areas. 
The honeybee readily inhabits all environmental compartments, such as soil, vegetation, air, and water, and actively forages the area around the hive. Therefore, if it functions in a polluted environment, plant products used by bees may also be contaminated, and as a result, also a part of these pollutants will accumulate in the organism. The bees, foragers in particular, are good biological indicators that quickly detect the chemical impairment of the environment by the high mortality and the presence of pollutants in their body or in beehive products. The experiment was carried out using 24 colonies of honeybees bred in hives dislocated whether within urban areas or in wide countryside areas. Metals were analyzed on the foragers during all spring and summer seasons, when the bees were active. Results showed no presence of mercury in all samples analyzed, but honeybees accumulated several amounts of lead, chromium, and cadmium. Pb reported a statistically significant difference among the stations located in urban areas and those in the natural reserves, showing the highest values in honeybees collected from hives located in Ciampino area (Rome), next to the airport. The mean value for this sampling station was 0.52 mg kg−1, and July and September were characterized by the highest concentrations of Pb. Cd also showed statistically significant differences among areas, while for Cr no statistically significant differences were found.", "title": "" }, { "docid": "3e83dd048f23e63982c5766690661fe9", "text": "The Reactor design pattern handles service requests that are delivered concurrently to an application by one or more clients. Each service in an application may consist of serveral methods and is represented by a separate event handler that is responsible for dispatching service-specific requests. Dispatching of event handlers is performed by an initiation dispatcher, which manages the registered event handlers. Demultiplexing of service requests is performed by a synchronous event demultiplexer.", "title": "" }, { "docid": "bb2c7c7d064eebcef527efe93a7c873b", "text": "We have proposed and verified an efficient architecture for a high-speed I/O transceiver design that implements far-end crosstalk (FEXT) cancellation. In this design, TX pre-emphasis, used traditionally to reduce ISI, is combined with FEXT cancellation at the transmitter to remove crosstalk-induced jitter and interference. The architecture has been verified via simulation models based on channel measurement. A prototype implementation of a 12.8Gbps source-synchronous serial link transmitter has been developed in TSMC's 0.18mum CMOS technology. The proposed design consists of three 12.8Gbps data lines that uses a half-rate PLL clock of 6.4GHz. The chip includes a PRBS generator to simplify multi-lane testing. Simulation results show that, even with a 2times reduction in line separation, FEXT cancellation can successfully reduce jitter by 51.2 %UI and widen the eye by 14.5%. The 2.5 times 1.5 mm2 core consumes 630mW per lane at 12.8Gbps with a 1.8V supply", "title": "" }, { "docid": "dfd5de557cbd3338aa2321e4f7aeca1c", "text": "N Engl J Med 2005;353:1387-94. Copyright © 2005 Massachusetts Medical Society. A 56-year-old man was referred to the transplantation infectious-disease clinic because of a low-grade fever and left axillary lymphadenopathy. The patient had received a cadaveric kidney transplant five years earlier for polycystic kidney disease. 
He had been in his usual state of health until three weeks before the referral to the infectious-disease clinic, when he discovered palpable, tender lymph nodes in the left epitrochlear region and axilla. Ten days later a low-grade fever, dry cough, nasal congestion, and night sweats developed, for which trimethoprim–sulfamethoxazole was prescribed, without benefit. He was referred to a specialist in infectious diseases. The patient did not have headache, sore throat, chest or abdominal pain, dyspnea, diarrhea, or dysuria. He had hypertension, gout, nephrolithiasis, gastroesophageal reflux disease, and prostate cancer, which had been treated with radiation therapy two years earlier. He was a policeman who worked in an office. He had not traveled outside of the United States recently. He had acquired a kitten several months earlier and recalled receiving multiple scratches on his hands when he played with it. His medications were cyclosporine (325 mg daily), mycophenolate mofetil (2 g daily), amlodipine, furosemide, colchicine, doxazosin, and pravastatin. Prednisone had been discontinued one year previously. He reported no allergies to medications. The temperature was 36.0°C and the blood pressure 105/75 mm Hg. On physical examination, the patient appeared well. The head, neck, lungs, heart, and abdomen were unremarkable. On the dorsum of the left hand was a single, violaceous nodule with a flat, necrotic eschar on top (Fig. 1); there was no erythema, fluctuance, pus, or other drainage, and there was no sinus tract. The patient said that this lesion had nearly healed, but that he had been scratching it and thought that this irritation prevented it from healing. There was a tender left epitrochlear lymph node, 2 cm by 2 cm, and a mass of matted, tender lymph nodes, 5 cm in diameter, in the left axilla. There was no lymphangitic streaking or cellulitis. The results of a complete blood count revealed no abnormalities (Table 1). Additional laboratory studies were obtained, and clarithromycin (500 mg, twice a day) was prescribed. Within a day of starting treatment, the patient’s temperature rose to 39.4°C, and the fever was accompanied by shaking chills. He was admitted to the hospital. The temperature was 38.6°C, the pulse was 78 beats per minute, and the blood pressure was 100/60 mm Hg. The results of a physical examination were unchanged presentation of case", "title": "" }, { "docid": "1152fde10a30dc0d28838988d5207a34", "text": "The ability to write diverse poems in different styles under the same poetic imagery is an important characteristic of human poetry writing. Most previous works on automatic Chinese poetry generation focused on improving the coherency among lines. Some work explored style transfer but suffered from expensive expert labeling of poem styles. In this paper, we target on stylistic poetry generation in a fully unsupervised manner for the first time. We propose a novel model which requires no supervised style labeling by incorporating mutual information, a concept in information theory, into modeling. Experimental results show that our model is able to generate stylistic poems without losing fluency and coherency.", "title": "" }, { "docid": "a1b387e3199aa1c70fa07196426af256", "text": "Hyperbolic embeddings offer excellent quality with few dimensions when embedding hierarchical data structures. We give a combinatorial construction that embeds trees into hyperbolic space with arbitrarily low distortion without optimization. 
On WordNet, this algorithm obtains a mean average precision of 0.989 with only two dimensions, outperforming existing work by 0.11 points. We provide bounds characterizing the precision-dimensionality tradeoff inherent in any hyperbolic embedding. To embed general metric spaces, we propose a hyperbolic generalization of multidimensional scaling (h-MDS). We show how to perform exact recovery of hyperbolic points from distances, provide a perturbation analysis, and give a recovery result that enables us to reduce dimensionality. Finally, we extract lessons from the algorithms and theory above to design a scalable PyTorch-based implementation that can handle incomplete information.", "title": "" } ]
scidocsrr
c14512660c09c02d1faa4b6688ef42f5
Diverse synaptic plasticity mechanisms orchestrated to form and retrieve memories in spiking neural networks
[ { "docid": "ffeb8ab86966a7ac9b8c66bdec7bfc32", "text": "Electrophysiological connectivity patterns in cortex often have a few strong connections, which are sometimes bidirectional, among a lot of weak connections. To explain these connectivity patterns, we created a model of spike timing–dependent plasticity (STDP) in which synaptic changes depend on presynaptic spike arrival and the postsynaptic membrane potential, filtered with two different time constants. Our model describes several nonlinear effects that are observed in STDP experiments, as well as the voltage dependence of plasticity. We found that, in a simulated recurrent network of spiking neurons, our plasticity rule led not only to development of localized receptive fields but also to connectivity patterns that reflect the neural code. For temporal coding procedures with spatio-temporal input correlations, strong connections were predominantly unidirectional, whereas they were bidirectional under rate-coded input with spatial correlations only. Thus, variable connectivity patterns in the brain could reflect different coding principles across brain areas; moreover, our simulations suggested that plasticity is fast.", "title": "" } ]
[ { "docid": "675795d2799838f72898afcfcbd77370", "text": "Data-driven techniques for interactive narrative generation are the subject of growing interest. Reinforcement learning (RL) offers significant potential for devising data-driven interactive narrative generators that tailor players’ story experiences by inducing policies from player interaction logs. A key open question in RL-based interactive narrative generation is how to model complex player interaction patterns to learn effective policies. In this paper we present a deep RL-based interactive narrative generation framework that leverages synthetic data produced by a bipartite simulated player model. Specifically, the framework involves training a set of Q-networks to control adaptable narrative event sequences with long short-term memory network-based simulated players. We investigate the deep RL framework’s performance with an educational interactive narrative, CRYSTAL ISLAND. Results suggest that the deep RL-based narrative generation framework yields effective personalized interactive narratives.", "title": "" }, { "docid": "537cf2257d1ca9ef49f023dbdc109e0b", "text": "0950-7051/$ see front matter 2010 Elsevier B.V. A doi:10.1016/j.knosys.2010.07.006 * Corresponding author. Tel.: +886 3 5712121x573 E-mail addresses: bill.net.tw@yahoo.com.tw (Y.-S (L.-I. Tong). The autoregressive integrated moving average (ARIMA), which is a conventional statistical method, is employed in many fields to construct models for forecasting time series. Although ARIMA can be adopted to obtain a highly accurate linear forecasting model, it cannot accurately forecast nonlinear time series. Artificial neural network (ANN) can be utilized to construct more accurate forecasting model than ARIMA for nonlinear time series, but explaining the meaning of the hidden layers of ANN is difficult and, moreover, it does not yield a mathematical equation. This study proposes a hybrid forecasting model for nonlinear time series by combining ARIMA with genetic programming (GP) to improve upon both the ANN and the ARIMA forecasting models. Finally, some real data sets are adopted to demonstrate the effectiveness of the proposed forecasting model. 2010 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "54c9c1323a03f0ef3af5eea204fd51ce", "text": "The fabrication and characterization of magnetic sensors consisting of double magnetic layers are described. Both thin film based material and wire based materials were used for the double layers. The sensor elements were fabricated by patterning NiFe/CoFe multilayer thin films. This thin film based sensor exhibited a constant output voltage per excitation magnetic field at frequencies down to 0.1 Hz. The magnetic sensor using a twisted FeCoV wire, the conventional material for the Wiegand effect, had the disadvantage of an asymmetric output voltage generated by an alternating magnetic field. It was found that the magnetic wire whose ends were both slightly etched exhibited a symmetric output voltage.", "title": "" }, { "docid": "f917a32b3bfed48dfe14c05d248ef53f", "text": "Recently Adleman has shown that a small traveling salesman problem can be solved by molecular operations. In this paper we show how the same principles can be applied to breaking the Data Encryption Standard (DES). We describe in detail a library of operations which are useful when working with a molecular computer. We estimate that given one arbitrary (plain-text, cipher-text) pair, one can recover the DES key in about 4 months of work. 
Furthermore, we show that under chosen plain-text attack it is possible to recover the DES key in one day using some preprocessing. Our method can be generalized to break any cryptosystem which uses keys of length less than 64 bits.", "title": "" }, { "docid": "1315349a48c402398c7c4164c92e95bf", "text": "Over the past years, the computing industry has started various initiatives announced to increase computer security by means of new hardware architectures. The most notable effort is the Trusted Computing Group (TCG) and the Next-Generation Secure Computing Base (NGSCB). This technology offers useful new functionalities as the possibility to verify the integrity of a platform (attestation) or binding quantities on a specific platform (sealing).In this paper, we point out the deficiencies of the attestation and sealing functionalities proposed by the existing specification of the TCG: we show that these mechanisms can be misused to discriminate certain platforms, i.e., their operating systems and consequently the corresponding vendors. A particular problem in this context is that of managing the multitude of possible configurations. Moreover, we highlight other shortcomings related to the attestation, namely system updates and backup. Clearly, the consequences caused by these problems lead to an unsatisfactory situation both for the private and business branch, and to an unbalanced market when such platforms are in wide use.To overcome these problems generally, we propose a completely new approach: the attestation of a platform should not depend on the specific software or/and hardware (configuration) as it is today's practice but only on the \"properties\" that the platform offers. Thus, a property-based attestation should only verify whether these properties are sufficient to fulfill certain (security) requirements of the party who asks for attestation. We propose and discuss a variety of solutions based on the existing Trusted Computing (TC) functionality. We also demonstrate, how a property-based attestation protocol can be realized based on the existing TC hardware such as a Trusted Platform Module (TPM).", "title": "" }, { "docid": "70bce8834a23bc84bea7804c58bcdefe", "text": "This study presents novel coplanar waveguide (CPW) power splitters comprising a CPW T-junction with outputs attached to phase-adjusting circuits, i.e., the composite right/left-handed (CRLH) CPW and the conventional CPW, to achieve a constant phase difference with arbitrary value over a wide bandwidth. To demonstrate the proposed technique, a 180/spl deg/ CRLH CPW power splitter with a phase error of less than 10/spl deg/ and a magnitude difference of below 1.5 dB within 2.4 to 5.22 GHz is experimentally demonstrated. Compared with the conventional 180/spl deg/ delay-line power splitter, the proposed structure possesses not only superior phase and magnitude performances but also a 37% size reduction. The equivalent circuit of the CRLH CPW, which represents the left-handed (LH), right-handed (RH), and lossy characteristics, is constructed and the results obtained are in good agreement with the full-wave simulation and measurement. Applications involving the wideband coplanar waveguide-to-coplanar stripline (CPW-to-CPS) transition and the tapered loop antenna are presented to stress the practicality of the 180/spl deg/ CRLH CPW power splitter. The 3-dB insertion loss bandwidth is measured as 98% for the case of a back-to-back CPW-to-CPS transition. 
The tapered loop antenna fed by the proposed transition achieves a measured 10-dB return loss bandwidth of 114%, and shows similar radiation patterns and 6-9 dBi antenna gain in its operating band.", "title": "" }, { "docid": "d318f73ccfd1069acbf7e95596fb1028", "text": "In this paper a novel application of multimodal emotion recognition algorithms in software engineering is described. Several application scenarios are proposed concerning program usability testing and software process improvement. Also a set of emotional states relevant in that application area is identified. The multimodal emotion recognition method that integrates video and depth channels, physiological signals and input devices usage patterns is proposed and some preliminary results on learning set creation are described.", "title": "" }, { "docid": "5aa20cb4100085a12d02c6789ad44097", "text": "The rapid progress in nanoelectronics showed an urgent need for microwave measurement of impedances extremely different from the 50Ω reference impedance of measurement instruments. In commonly used methods input impedance or admittance of a device under test (DUT) is derived from measured value of its reflection coefficient causing serious accuracy problems for very high and very low impedances due to insufficient sensitivity of the reflection coefficient to impedance of the DUT. This paper brings theoretical description and experimental verification of a method developed especially for measurement of extreme impedances. The method can significantly improve measurement sensitivity and reduce errors caused by the VNA. It is based on subtraction (or addition) of a reference reflection coefficient and the reflection coefficient of the DUT by a passive network, amplifying the resulting signal by an amplifier and measuring the amplified signal as a transmission coefficient by a common vector network analyzer (VNA). A suitable calibration technique is also presented.", "title": "" }, { "docid": "cf2e23cddb72b02d1cca83b4c3bf17a8", "text": "This article seeks to reconceptualize the relationship between flexibility and efficiency. Much organization theory argues that efficiency requires bureaucracy, that bureaucracy impedes flexibility, and that organizations therefore confront a tradeoff between efficiency and flexibility. Some researchers have challenged this line of reasoning, arguing that organizations can shift the efficiency/flexibility tradeoff to attain both superior efficiency and superior flexibility. Others have pointed out numerous obstacles to successfully shifting the tradeoff. Seeking to advance our understanding of these obstacles and how they might be overcome, we analyze an auto assembly plant that appears to be far above average industry performance in both efficiency and flexibility. NUMMI, a Toyota subsidiary located in Fremont, California, relied on a highly bureaucratic organization to achieve its high efficiency. Analyzing two recent major model changes, we find that NUMMI used four mechanisms to support its exceptional flexibility/efficiency combination. First, metaroutines (routines for changing other routines) facilitated the efficient performance of nonroutine tasks. Second, both workers and suppliers contributed to nonroutine tasks while they worked in routine production. Third, routine and nonroutine tasks were separated temporally, and workers switched sequentially between them. 
Finally, novel forms of organizational partitioning enabled differentiated subunits to work in parallel on routine and nonroutine tasks. NUMMI’s success with these four mechanisms depended on several features of the broader organizational context, most notably training, trust, and leadership. (Flexibility; Bureaucracy; Tradeoffs; Routines; Metaroutines; Ambidexterity; Switching; Partitioning; Trust) Introduction The postulate of a tradeoff between efficiency and flexibility is one of the more enduring ideas in organizational theory. Thompson (1967, p. 15) described it as a central “paradox of administration.” Managers must choose between organization designs suited to routine, repetitive tasks and those suited to nonroutine, innovative tasks. However, as competitive rivalry intensifies, a growing number of firms are trying to improve simultaneously in efficiency- and flexibility-related dimensions (de Meyer et al. 1989, Volberda 1996, Organization Science 1996). How can firms shift the terms of the efficiency-flexibility tradeoff? To explore how firms can create simultaneously superior efficiency and superior flexibility, we examine an exceptional auto assembly plant, NUMMI, a joint venture of Toyota and GM whose day-to-day operations were under Toyota control. Like other Japanese auto transplants in the U.S., NUMMI far outpaced its Big Three counterparts simultaneously in efficiency and quality and in model change flexibility (Womack et al. 1990, Business Week 1994). In the next section we set the theoretical stage by reviewing prior research on the efficiency/flexibility tradeoff. Prior research suggests four mechanisms by which organizations can shift the tradeoff as well as some potentially serious impediments to each mechanism. We then describe our research methods and the NUMMI organization. The following sections first outline in summary form the results of this investigation, then provide the supporting evidence in our analysis of two major model changeovers at NUMMI and how they differed from traditional U.S. Big Three practice. A discussion section identifies some conditions underlying NUMMI’s success in shifting the tradeoff and in overcoming the potential impediments to the four trade-off shifting mechanisms. Flexibility Versus Efficiency? There are many kinds of flexibility and indeed a sizable literature devoted to competing typologies of the various kinds of flexibility (see overview by Sethi and Sethi 1990). However, from an organizational point of view, all forms of flexibility present a common challenge: efficiency requires a bureaucratic form of organization with high levels of standardization, formalization, specialization, hierarchy, and staffs; but these features of bureaucracy impede the fluid process of mutual adjustment required for flexibility; and organizations therefore confront a tradeoff between efficiency and flexibility (Knott 1996, Kurke 1988). Contingency theory argues that organizations will be more effective if they are designed to fit the nature of their primary task. 
Specifically, organizations should adopt a mechanistic form if their task is simple and stable and their goal is efficiency, and they should adopt an organic form if their task is complex and changing and their goal is therefore flexibility (Burns and Stalker 1961). Organizational theory presents a string of contrasts reflecting this mechanistic/organic polarity: machine bureaucracies vs. adhocracies (Mintzberg 1979); adaptive learning based on formal rules and hierarchical controls versus generative learning relying on shared values, teams, and lateral communication (McGill et al. 1992); generalists who pursue opportunistic r-strategies and rely on excess capacity to do well in open environments versus specialists that are more likely to survive in competitive environments by pursuing k-strategies that trade less flexibility for greater efficiency (Hannan and Freeman 1977, 1989). March (1991) and Levinthal and March (1993) make the parallel argument that organizations must choose between structures that facilitate exploration—the search for new knowledge—and those that facilitate exploitation—the use of existing knowledge. Social-psychological theories provide a rationale for this polarization. Merton (1958) shows how goal displacement in bureaucratic organizations generates rigidity. Argyris and Schon (1978) show how defensiveness makes single-loop learning—focused on pursuing given goals more effectively (read: efficiency)—an impediment to double-loop learning—focused on defining new task goals (read: flexibility). Thus, argues Weick (1969), adaptation precludes adaptability. This tradeoff view has been echoed in other disciplines. Standard economic theory postulates a tradeoff between flexibility and average costs (e.g., Stigler 1939, Hart 1942). Further extending this line of thought, Klein (1984) contrasts static and dynamic efficiency. Operations management researchers have long argued that productivity and flexibility or innovation trade off against each other in manufacturing plant performance (Abernathy 1978; see reviews by Gerwin 1993, Suárez et al. 1996, Corrêa 1994). Hayes and Wheelwright’s (1984) product/process matrix postulates a close correspondence between product variety and process efficiency (see Safizadeh et al. 1996). Strategy researchers such as Ghemawat and Costa (1993) argue that firms must choose between a strategy of dynamic effectiveness through flexibility and static efficiency through more rigid discipline. In support of a key corollary of the tradeoff postulate articulated in the organization theory literature, they argue that in general the optimal choice is at one end or the other of the spectrum, since a firm pursuing both goals simultaneously would have to mix organizational elements appropriate to each strategy and thus lose the benefit of the complementarities that typically obtain between the various elements of each type of organization. They would thus be “stuck in the middle” (Porter 1980). Beyond the Tradeoff? Empirical evidence for the tradeoff postulate is, however, remarkably weak. Take, for example, product mix flexibility. On the one hand, Hayes and Wheelwright (1984) and Skinner (1985) provide anecdotal evidence that more focused factories—ones producing a narrower range of products—are more efficient. In their survey of plants across a range of manufacturing industries, Safizadeh et al. (1996) confirmed that in general more product variety was associated with reliance on job-shop rather than continuous processes.
On the other hand, Kekre and Srinivasan’s (1990) study of companies selling industrial products found that a broader product line was significantly associated with lower manufacturing costs. MacDuffie et al. (1996) found that greater product variety had no discernible effect on auto assembly plant productivity. Suárez et al. (1996) found that product mix flexibility had no discernible relationship to costs or quality in printed circuit board assembly. Brush and Karnani (1996) found only three out of 19 manufacturing industries showed statistically significant productivity returns to narrower product lines, while two industries showed significant returns to broader product lines. Research by Fleischman (1996) on employment flexibility revealed a similar pattern: within 2-digit SIC code industries that face relatively homogeneous levels of expected volatility of employment, the employment adjustment costs of the least flexible 4-digit industries were anywhere between 4 and 10 times greater than the adjustment costs found in the most flexible 4-digit industries. Some authors argue that the era of tradeoffs is behind us (Ferdows and de Meyer 1990). Hypercompetitive environments force firms to compete on several dimensions at once (Organization Science 1996), and flexible technologies enable firms to shift the tradeoff curve just as quickly as they could move to a different point on the existing tr", "title": "" }, { "docid": "329486a3e7f13f79c9b02365ff555fdf", "text": "A novel ultra-wideband (UWB) bandpass filter (BPF) with improved upper stopband performance using a defected ground structure (DGS) is presented in this letter. The proposed BPF is composed of seven DGSs that are positioned under the input and output microstrip line and coupled double step impedance resonator (CDSIR). By using CDSIR and open loop defected ground structure (OLDGS), we can achieve UWB BPF characteristics, and by using the conventional CDGSs under the input and output microstrip line, we can improve the upper stopband performance. Simulated and measured results are found in good agreement with each other, showing a wide passband from 3.4 to 10.9 GHz, minimum insertion loss of 0.61 dB at 7.02 GHz, a group delay variation of less than 0.4 ns in the operating band, and a wide upper stopband with more than 30 dB attenuation up to 20 GHz. In addition, the proposed UWB BPF has a compact size (0.27λg ~ 0.29λg, λg: guided wavelength at the central frequency of 6.85 GHz).", "title": "" }, { "docid": "4c004745828100f6ccc6fd660ee93125", "text": "Steganography has been proposed as a new alternative technique to enforce data security. Lately, novel and versatile audio steganographic methods have been proposed. A perfect audio steganographic technique aims at embedding data in an imperceptible, robust and secure way and then extracting it by authorized people. Hence, up to date the main challenge in digital audio steganography is to obtain robust high capacity steganographic systems. Leaning towards designing a system that ensures high capacity or robustness and security of embedded data has led to great diversity in the existing steganographic techniques. In this paper, we present a current state-of-the-art literature in digital audio steganographic techniques. 
We explore their potentials and limitations to ensure secure communication. A comparison and an evaluation of the reviewed techniques are also presented in this paper.", "title": "" }, { "docid": "36fb4d86453a2e73c2989c04286b2ee2", "text": "Video super-resolution (SR) aims to generate a high-resolution (HR) frame from multiple low-resolution (LR) frames in a local temporal window. The inter-frame temporal relation is as crucial as the intra-frame spatial relation for tackling this problem. However, how to utilize temporal information efficiently and effectively remains challenging since complex motion is difficult to model and can introduce adverse effects if not handled properly. We address this problem from two aspects. First, we propose a temporal adaptive neural network that can adaptively determine the optimal scale of temporal dependency. Filters on various temporal scales are applied to the input LR sequence before their responses are adaptively aggregated. Second, we reduce the complexity of motion between neighboring frames using a spatial alignment network which is much more robust and efficient than competing alignment methods and can be jointly trained with the temporal adaptive network in an end-to-end manner. Our proposed models with learned temporal dynamics are systematically evaluated on public video datasets and achieve state-of-the-art SR results compared with other recent video SR approaches. Both of the temporal adaptation and the spatial alignment modules are demonstrated to considerably improve SR quality over their plain counterparts.", "title": "" }, { "docid": "dbd06c81892bc0535e2648ee21cb00b4", "text": "This paper examines the causes of conflict in Burundi and discusses strategies for building peace. The analysis of the complex relationships between distribution and group dynamics reveals that these relationships are reciprocal, implying that distribution and group dynamics are endogenous. The nature of endogenously generated group dynamics determines the type of preferences (altruistic or exclusionist), which in turn determines the type of allocative institutions and policies that prevail in the political and economic system. While unequal distribution of resources may be socially inefficient, it nonetheless can be rational from the perspective of the ruling elite, especially because inequality perpetuates dominance. However, as the unequal distribution of resources generates conflict, maintaining a system based on inequality is difficult because it requires ever increasing investments in repression. It is therefore clear that if the new Burundian leadership is serious about building peace, it must engineer institutions that uproot the legacy of discrimination and promote equal opportunity for social mobility for all members of ethnic groups and regions.", "title": "" }, { "docid": "682fe9a6e4e30a38ce5c05ee1f809bd1", "text": "This chapter examines the effects of fiscal consolidation—tax hikes and government spending cuts—on economic activity. Based on a historical analysis of fiscal consolidation in advanced economies, and on simulations of the IMF's Global Integrated Monetary and Fiscal Model (GIMF), it finds that fiscal consolidation typically reduces output and raises unemployment in the short term. At the same time, interest rate cuts, a fall in the value of the currency, and a rise in net exports usually soften the contractionary impact. 
Consolidation is more painful when it relies primarily on tax hikes; this occurs largely because central banks typically provide less monetary stimulus during such episodes, particularly when they involve indirect tax hikes that raise inflation. Also, fiscal consolidation is more costly when the perceived risk of sovereign default is low. These findings suggest that budget deficit cuts are likely to be more painful if they occur simultaneously across many countries, and if monetary policy is not in a position to offset them. Over the long term, reducing government debt is likely to raise output, as real interest rates decline and the lighter burden of interest payments permits cuts to distortionary taxes. Budget deficits and government debt soared during the Great Recession. In 2009, the budget deficit averaged about 9 percent of GDP in advanced economies, up from only 1 percent of GDP in 2007. By the end of 2010, government debt is expected to reach about 100 percent of GDP—its highest level in 50 years. Looking ahead, population aging could create even more serious problems for public finances. In response to these worrisome developments, virtually all advanced economies will face the challenge of fiscal consolidation. Indeed, many governments are already undertaking or planning large spending cuts and tax hikes. An important and timely question is, therefore, whether fiscal retrenchment will hurt economic performance. Although there is widespread agreement that reducing debt has important long-term benefits, there is no consensus regarding the short-term effects of fiscal austerity. On the one hand, the conventional Keynesian view is that cutting spending or raising taxes reduces economic activity in the short term. On the other hand, a number of studies present evidence that cutting budget deficits can …", "title": "" }, { "docid": "b7bf3ae864ce774874041b0e5308323f", "text": "This paper examines factors that influence prices of the five most common cryptocurrencies, such as Bitcoin, Ethereum, Dash, Litecoin, and Monero, over 2010–2018 using weekly data. The study employs ARDL technique and documents several findings. First, cryptomarket-related factors such as market beta, trading volume, and volatility appear to be significant determinants for all five cryptocurrencies both in short- and long-run. Second, attractiveness of cryptocurrencies also matters in terms of their price determination, but only in long-run. This indicates that formation (recognition) of the attractiveness of cryptocurrencies is subject to a time factor. In other words, it travels slowly within the market. 
Third, the S&P 500 index seems to have a weak positive long-run impact on Bitcoin, Ethereum, and Litecoin, while its sign turns to negative losing significance in short-run, except Bitcoin that generates an estimate of -0.20 at 10% significance level. Lastly, error-correction models for Bitcoin, Ethereum, Dash, Litecoin, and Monero show that cointegrated series cannot drift too far apart, and converge to a long-run equilibrium at a speed of 23.68%, 12.76%, 10.20%, 22.91%, and 14.27% respectively.", "title": "" }, { "docid": "35ffdb3e5b2ac637f7e8d796c4cdc97e", "text": "Pedestrian detection in real world scenes is a challenging problem. In recent years a variety of approaches have been proposed, and impressive results have been reported on a variety of databases. This paper systematically evaluates (1) various local shape descriptors, namely Shape Context and Local Chamfer descriptor and (2) four different interest point detectors for the detection of pedestrians. Those results are compared to the standard global Chamfer matching approach. A main result of the paper is that Shape Context trained on real edge images rather than on clean pedestrian silhouettes combined with the Hessian-Laplace detector outperforms all other tested approaches.", "title": "" }, { "docid": "32b2cd6b63c6fc4de5b086772ef9d319", "text": "Link prediction for knowledge graphs is the task of predicting missing relationships between entities. Previous work on link prediction has focused on shallow, fast models which can scale to large knowledge graphs. However, these models learn less expressive features than deep, multi-layer models – which potentially limits performance. In this work we introduce ConvE, a multi-layer convolutional network model for link prediction, and report state-of-the-art results for several established datasets. We also show that the model is highly parameter efficient, yielding the same performance as DistMult and R-GCN with 8x and 17x fewer parameters. Analysis of our model suggests that it is particularly effective at modelling nodes with high indegree – which are common in highly connected, complex knowledge graphs such as Freebase and YAGO3. In addition, it has been noted that the WN18 and FB15k datasets suffer from test set leakage, due to inverse relations from the training set being present in the test set – however, the extent of this issue has so far not been quantified. We find this problem to be severe: a simple rule-based model can achieve state-of-the-art results on both WN18 and FB15k. To ensure that models are evaluated on datasets where simply exploiting inverse relations cannot yield competitive results, we investigate and validate several commonly used datasets – deriving robust variants where necessary. We then perform experiments on these robust datasets for our own and several previously proposed models, and find that ConvE achieves state-of-the-art Mean Reciprocal Rank across all datasets.", "title": "" }, { "docid": "f7239ce387f17b279263e6bdaff612d0", "text": "Purpose – This survey aims to study and analyze current techniques and methods for context-aware web service systems, to discuss future trends and propose further steps on making web services systems context-aware. Design/methodology/approach – The paper analyzes and compares existing context-aware web service-based systems based on techniques they support, such as context information modeling, context sensing, distribution, security and privacy, and adaptation techniques. 
Existing systems are also examined in terms of application domains, system type, mobility support, multi-organization support and level of web services implementation. Findings – Supporting context-aware web service-based systems is increasing. It is hard to find a truly context-aware web service-based system that is interoperable and secure, and operates on multi-organizational environments. Various issues, such as distributed context management, context-aware service modeling and engineering, context reasoning and quality of context, security and privacy issues have not been well addressed. Research limitations/implications – The number of systems analyzed is limited. Furthermore, the survey is based on published papers. Therefore, up-to-date information and development might not be taken into account. Originality/value – Existing surveys do not focus on context-awareness techniques for web services. This paper helps to understand the state of the art in context-aware techniques for web services that can be employed in the future of services which is built around, amongst others, mobile devices, web services, and pervasive environments.", "title": "" }, { "docid": "995ad137b6711f254c6b9852611242b5", "text": "In this paper, we study beam selection for millimeter-wave (mm-wave) multiuser multiple input multiple output (MIMO) systems where a base station (BS) and users are equipped with antenna arrays. Exploiting a certain sparsity of mm-wave channels, a low-complexity beam selection method for beamforming by low-cost analog beamformers is derived. It is shown that beam selection can be carried out without explicit channel estimation using the notion of compressive sensing (CS). Due to various reasons (e.g., the background noise and interference), some users may choose the same BS beam, which results in high inter-user interference. To overcome this problem, we further consider BS beam selection by users. Through simulations, we show that the performance gap between the proposed approach and the optimal beamforming approach, which requires full channel state information (CSI), becomes narrower for a larger number of users at a moderate/low signal-to-noise ratio (SNR). Since the optimal beamforming approach is difficult to be used due to prohibitively high computational complexity for large antenna arrays with a large number of users, the proposed approach becomes attractive for BSs and users in mm-wave systems where large antenna arrays can be employed.", "title": "" } ]
scidocsrr
71da5b2e542c147f90c0ceaa1a557ac5
Features for Masking-Based Monaural Speech Separation in Reverberant Conditions
[ { "docid": "44c9de5fbaac78125277a9995890b43c", "text": "In the real world, speech is usually distorted by both reverberation and background noise. In such conditions, speech intelligibility is degraded substantially, especially for hearing-impaired (HI) listeners. As a consequence, it is essential to enhance speech in the noisy and reverberant environment. Recently, deep neural networks have been introduced to learn a spectral mapping to enhance corrupted speech, and shown significant improvements in objective metrics and automatic speech recognition score. However, listening tests have not yet shown any speech intelligibility benefit. In this paper, we propose to enhance the noisy and reverberant speech by learning a mapping to reverberant target speech rather than anechoic target speech. A preliminary listening test was conducted, and the results show that the proposed algorithm is able to improve speech intelligibility of HI listeners in some conditions. Moreover, we develop a masking-based method for denoising and compare it with the spectral mapping method. Evaluation results show that the masking-based method outperforms the mapping-based method.", "title": "" } ]
[ { "docid": "75fcc3987407274148485394acf8856b", "text": "Here we critically review studies that used electroencephalography (EEG) or event-related potential (ERP) indices as a biomarker of Alzheimer's disease. In the first part we overview studies that relied on visual inspection of EEG traces and spectral characteristics of EEG. Second, we survey analysis methods motivated by dynamical systems theory (DST) as well as more recent network connectivity approaches. In the third part we review studies of sleep.  Next, we compare the utility of early and late ERP components in dementia research. In the section on mismatch negativity (MMN) studies we summarize their results and limitations and outline the emerging field of computational neurology. In the following we overview the use of EEG in the differential diagnosis of the most common neurocognitive disorders. Finally, we provide a summary of the state of the field and conclude that several promising EEG/ERP indices of synaptic neurotransmission are worth considering as potential biomarkers. Furthermore, we highlight some practical issues and discuss future challenges as well.", "title": "" }, { "docid": "eb12e9e10d379fcbc156e94c3b447ce1", "text": "Control-Flow Integrity (CFI) is an effective approach to mitigating control-flow hijacking attacks. Conventional CFI techniques statically extract a control-flow graph (CFG) from a program and instrument the program to enforce that CFG. The statically generated CFG includes all edges for all possible inputs; however, for a concrete input, the CFG may include many unnecessary edges.\n We present Per-Input Control-Flow Integrity (PICFI), which is a new CFI technique that can enforce a CFG computed for each concrete input. PICFI starts executing a program with the empty CFG and lets the program itself lazily add edges to the enforced CFG if such edges are required for the concrete input. The edge addition is performed by PICFI-inserted instrumentation code. To prevent attackers from arbitrarily adding edges, PICFI uses a statically computed all-input CFG to constrain what edges can be added at runtime. To minimize performance overhead, operations for adding edges are designed to be idempotent, so they can be patched to no-ops after their first execution. As our evaluation shows, PICFI provides better security than conventional fine-grained CFI with comparable performance overhead.", "title": "" }, { "docid": "792cb4f62ad83e0ee0c94b60626103b9", "text": "Microservices have become a popular pattern for deploying scale-out application logic and are used at companies like Netflix, IBM, and Google. An advantage of using microservices is their loose coupling, which leads to agile and rapid evolution, and continuous re-deployment. However, developers are tasked with managing this evolution and largely do so manually by continuously collecting and evaluating low-level service behaviors. This is tedious, error-prone, and slow. We argue for an approach based on service evolution modeling in which we combine static and dynamic information to generate an accurate representation of the evolving microservice-based system. 
We discuss how our approach can help engineers manage service upgrades, architectural evolution, and changing deployment trade-offs.", "title": "" }, { "docid": "8f601e751650b56be81b069c42089640", "text": "Inspired by the success of self attention mechanism and Transformer architecture in sequence transduction and image generation applications, we propose novel self attention-based architectures to improve the performance of adversarial latent codebased schemes in text generation. Adversarial latent code-based text generation has recently gained a lot of attention due to its promising results. In this paper, we take a step to fortify the architectures used in these setups, specifically AAE and ARAE. We benchmark two latent code-based methods (AAE and ARAE) designed based on adversarial setups. In our experiments, the Google sentence compression dataset is utilized to compare our method with these methods using various objective and subjective measures. The experiments demonstrate the proposed (self) attention-based models outperform the state-of-the-art in adversarial code-based text generation.", "title": "" }, { "docid": "59e3e0099e215000b34e32d90b0bd650", "text": "We present a method for learning discriminative filters using a shallow Convolutional Neural Network (CNN). We encode rotation invariance directly in the model by tying the weights of groups of filters to several rotated versions of the canonical filter in the group. These filters can be used to extract rotation invariant features well-suited for image classification. We test this learning procedure on a texture classification benchmark, where the orientations of the training images differ from those of the test images. We obtain results comparable to the state-of-the-art. Compared to standard shallow CNNs, the proposed method obtains higher classification performance while reducing by an order of magnitude the number of parameters to be learned.", "title": "" }, { "docid": "a25041f4b95b68d2b8b9356d2f383b69", "text": "The authors review evidence that self-control may consume a limited resource. Exerting self-control may consume self-control strength, reducing the amount of strength available for subsequent self-control efforts. Coping with stress, regulating negative affect, and resisting temptations require self-control, and after such self-control efforts, subsequent attempts at self-control are more likely to fail. Continuous self-control efforts, such as vigilance, also degrade over time. These decrements in self-control are probably not due to negative moods or learned helplessness produced by the initial self-control attempt. These decrements appear to be specific to behaviors that involve self-control; behaviors that do not require self-control neither consume nor require self-control strength. It is concluded that the executive component of the self--in particular, inhibition--relies on a limited, consumable resource.", "title": "" }, { "docid": "2e1a6dfb1208bc09a227c7e16ffc7b4f", "text": "Cannabis sativa L. (Cannabaceae) is an important medicinal plant well known for its pharmacologic and therapeutic potency. Because of allogamous nature of this species, it is difficult to maintain its potency and efficacy if grown from the seeds. Therefore, chemical profile-based screening, selection of high yielding elite clones and their propagation using biotechnological tools is the most suitable way to maintain their genetic lines. 
In this regard, we report a simple and efficient method for the in vitro propagation of a screened and selected high yielding drug type variety of Cannabis sativa, MX-1 using synthetic seed technology. Axillary buds of Cannabis sativa isolated from aseptic multiple shoot cultures were successfully encapsulated in calcium alginate beads. The best gel complexation was achieved using 5 % sodium alginate with 50 mM CaCl2.2H2O. Regrowth and conversion after encapsulation was evaluated both under in vitro and in vivo conditions on different planting substrates. The addition of antimicrobial substance — Plant Preservative Mixture (PPM) had a positive effect on overall plantlet development. Encapsulated explants exhibited the best regrowth and conversion frequency on Murashige and Skoog medium supplemented with thidiazuron (TDZ 0.5 μM) and PPM (0.075 %) under in vitro conditions. Under in vivo conditions, 100 % conversion of encapsulated explants was obtained on 1:1 potting mix- fertilome with coco natural growth medium, moistened with full strength MS medium without TDZ, supplemented with 3 % sucrose and 0.5 % PPM. Plantlets regenerated from the encapsulated explants were hardened off and successfully transferred to the soil. These plants are selected to be used in mass cultivation for the production of biomass as a starting material for the isolation of THC as a bulk active pharmaceutical.", "title": "" }, { "docid": "12ee85d0fa899e4e864bc1c30dedcd22", "text": "An object-oriented simulation (OOS) consists of a set of objects that interact with each other over time. This paper provides a thorough introduction to OOS, addresses the important issue of composition versus inheritance, describes frames and frameworks for OOS, and presents an example of a network simulation language as an illustration of OOS.", "title": "" }, { "docid": "5b9693b031e5fbea9afbc8c9f729829c", "text": "Block coordinate descent (BCD) methods are widely-used for large-scale numerical optimization because of their cheap iteration costs, low memory requirements, amenability to parallelization, and ability to exploit problem structure. Three main algorithmic choices influence the performance of BCD methods: the block partitioning strategy, the block selection rule, and the block update rule. In this paper we explore all three of these building blocks and propose variations for each that can lead to significantly faster BCD methods. We (i) propose new greedy block-selection strategies that guarantee more progress per iteration than the Gauss-Southwell rule; (ii) explore practical issues like how to implement the new rules when using “variable” blocks; (iii) explore the use of message-passing to compute matrix or Newton updates efficiently on huge blocks for problems with a sparse dependency between variables; and (iv) consider optimal active manifold identification, which leads to bounds on the “active-set complexity” of BCD methods and leads to superlinear convergence for certain problems with sparse solutions (and in some cases finite termination at an optimal solution). We support all of our findings with numerical results for the classic machine learning problems of least squares, logistic regression, multi-class logistic regression, label propagation, and L1-regularization.", "title": "" }, { "docid": "5598e6e1541e84924a56d3ac874dd19f", "text": "Online dating sites have become popular platforms for people to look for potential romantic partners. 
It is important to understand users' dating preferences in order to make better recommendations on potential dates. The message sending and replying actions of a user are strong indicators for what he/she is looking for in a potential date and reflect the user's actual dating preferences. We study how users' online dating behaviors correlate with various user attributes using a real-world dateset from a major online dating site in China. Our study provides a firsthand account of the user online dating behaviors in China, a country with a large population and unique culture. The results can provide valuable guidelines to the design of recommendation engine for potential dates.", "title": "" }, { "docid": "4e002bc3c0a42869c5c9eb4911c67ccf", "text": "Today's massively-sized datasets have made it necessary to often perform computations on them in a distributed manner. In principle, a computational task is divided into subtasks which are distributed over a cluster operated by a taskmaster. One issue faced in practice is the delay incurred due to the presence of slow machines, known as stragglers. Several schemes, including those based on replication, have been proposed in the literature to mitigate the effects of stragglers and more recently, those inspired by coding theory have begun to gain traction. In this work, we consider a distributed gradient descent setting suitable for a wide class of machine learning problems. We adopt the framework of Tandon et al. [1] and present a deterministic scheme that, for a prescribed per-machine computational effort, recovers the gradient from the least number of machines $f$ theoretically permissible, via an $O(f^{2})$ decoding algorithm. The idea is based on a suitably designed Reed-Solomon code that has a sparsest and balanced generator matrix. We also provide a theoretical delay model which can be used to minimize the expected waiting time per computation by optimally choosing the parameters of the scheme. Finally, we supplement our theoretical findings with numerical results that demonstrate the efficacy of the method and its advantages over competing schemes.", "title": "" }, { "docid": "3ba011d181a4644c8667b139c63f50ff", "text": "Recent studies have suggested that positron emission tomography (PET) imaging with 68Ga-labelled DOTA-somatostatin analogues (SST) like octreotide and octreotate is useful in diagnosing neuroendocrine tumours (NETs) and has superior value over both CT and planar and single photon emission computed tomography (SPECT) somatostatin receptor scintigraphy (SRS). The aim of the present study was to evaluate the role of 68Ga-DOTA-1-NaI3-octreotide (68Ga-DOTANOC) in patients with SST receptor-expressing tumours and to compare the results of 68Ga-DOTA-D-Phe1-Tyr3-octreotate (68Ga-DOTATATE) in the same patient population. Twenty SRS were included in the study. Patients’ age (n = 20) ranged from 25 to 75 years (mean 55.4 ± 12.7 years). There were eight patients with well-differentiated neuroendocrine tumour (WDNET) grade1, eight patients with WDNET grade 2, one patient with poorly differentiated neuroendocrine carcinoma (PDNEC) grade 3 and one patient with mixed adenoneuroendocrine tumour (MANEC). All patients had two consecutive PET studies with 68Ga-DOTATATE and 68Ga-DOTANOC. All images were evaluated visually and maximum standardized uptake values (SUVmax) were also calculated for quantitative evaluation. On visual evaluation both tracers produced equally excellent image quality and similar body distribution. 
The physiological uptake sites of pituitary and salivary glands showed higher uptake in 68Ga-DOTATATE images. Liver and spleen uptake values were evaluated as equal. Both 68Ga-DOTATATE and 68Ga-DOTANOC were negative in 6 (30 %) patients and positive in 14 (70 %) patients. In 68Ga-DOTANOC images only 116 of 130 (89 %) lesions could be defined and 14 lesions were missed because of lack of any uptake. SUVmax values of lesions were significantly higher on 68Ga-DOTATATE images. Our study demonstrated that the images obtained by 68Ga-DOTATATE and 68Ga-DOTANOC have comparable diagnostic accuracy. However, 68Ga-DOTATATE seems to have a higher lesion uptake and may have a potential advantage.", "title": "" }, { "docid": "38ca6f23f3910eac7085940240a92b03", "text": "Region growing and edge detection are two popular and common techniques used for image segmentation. Region growing is preferred over edge detection methods because it is more robust against low contrast problems and effectively addresses the connectivity issues faced by edge detectors. Edgebased techniques, on the other hand, can significantly reduce useless information while preserving the important structural properties in an image. Recent studies have shown that combining region growing and edge methods for segmentation will produce much better results. This paper proposed using edge information to automatically select seed pixels and guide the process of region growing in segmenting geometric objects from an image. The geometric objects are songket motifs from songket patterns. Songket motifs are the main elements that decorate songket pattern. The beauty of songket lies in the elaborate design of the patterns and combination of motifs that are intricately woven on the cloth. After experimenting on thirty songket pattern images, the proposed method achieved promising extraction of the songket motifs.", "title": "" }, { "docid": "842cd58edd776420db869e858be07de4", "text": "A nationwide interoperable public safety wireless broadband network is being planned by the First Responder Network Authority (FirstNet) under the auspices of the United States government. The public safety network shall provide the needed wireless coverage in the wake of an incident or a disaster. This paper proposes a drone-assisted multi-hop device-to-device (D2D) communication scheme as a means to extend the network coverage over regions where it is difficult to deploy a landbased relay. The resource are shared using either time division or frequency division scheme. Efficient algorithms are developed to compute the optimal position of the drone for maximizing the data rate, which are shown to be highly effective via simulations.", "title": "" }, { "docid": "1ca8130cf3f0f1788196bd4bc4ec45a0", "text": "PURPOSE\nTo examine the feasibility and preliminary benefits of an integrative cognitive behavioral therapy (CBT) with adolescents with inflammatory bowel disease and anxiety.\n\n\nDESIGN AND METHODS\nNine adolescents participated in a CBT program at their gastroenterologist's office. Structured diagnostic interviews, self-report measures of anxiety and pain, and physician-rated disease severity were collected pretreatment and post-treatment.\n\n\nRESULTS\nPostintervention, 88% of adolescents were treatment responders, and 50% no longer met criteria for their principal anxiety disorder. 
Decreases were demonstrated in anxiety, pain, and disease severity.\n\n\nPRACTICE IMPLICATIONS\nAnxiety screening and a mental health referral to professionals familiar with medical management issues are important.", "title": "" }, { "docid": "aa93e26585f7220c3d528328e5d35080", "text": "Sexual orientation is one of the largest sex differences in humans. The vast majority of the population is heterosexual, that is, they are attracted to members of the opposite sex. However, a small but significant proportion of people are bisexual or homosexual and experience attraction to members of the same sex. The origins of the phenomenon have long been the subject of scientific study. In this chapter, we will review the evidence that sexual orientation has biological underpinnings and consider the involvement of epigenetic mechanisms. We will first discuss studies that show that sexual orientation has a genetic component. These studies show that sexual orientation is more concordant in monozygotic twins than in dizygotic ones and that male sexual orientation is linked to several regions of the genome. We will then highlight findings that suggest a link between sexual orientation and epigenetic mechanisms. In particular, we will consider the case of women with congenital adrenal hyperplasia (CAH). These women were exposed to high levels of testosterone in utero and have much higher rates of nonheterosexual orientation compared to non-CAH women. Studies in animal models strongly suggest that the long-term effects of hormonal exposure (such as those experienced by CAH women) are mediated by epigenetic mechanisms. We conclude by describing a hypothetical framework that unifies genetic and epigenetic explanations of sexual orientation and the continued challenges facing sexual orientation research.", "title": "" }, { "docid": "61f079cb59505d9bf1de914330dd852e", "text": "Bayesian filters have now become the standard for spam filtering; unfortunately most Bayesian filters seem to reach a plateau of accuracy at 99.9 percent. We experimentally compare the training methods TEFT, TOE, and TUNE, as well as pure Bayesian, token-bag, token-sequence, SBPH, and Markovian discriminators. The results demonstrate that TUNE is indeed best for training, but computationally exorbitant, and that Markovian discrimination is considerably more accurate than Bayesian, but not sufficient to reach four-nines accuracy, and that other techniques such as inoculation are needed. The Spam-Filtering Accuracy Plateau at 99.9% Accuracy and How to Get Past It. William S.
Yerazunis, PhD* Presented at the 2004 MIT Spam Conference January 18, 2004 MIT, Cambridge, Massachusetts Abstract: Bayesian filters have now become the standard for spam filtering; unfortunately most Bayesian filters seem to reach a plateau of accuracy at 99.9%. We experimentally compare the training methods TEFT, TOE, and TUNE, as well as pure Bayesian, token-bag, token-sequence, SBPH, and Markovian discriminators. The results demonstrate that TUNE is indeed best for training, but computationally exorbitant, and that Markovian discrimination is considerably more accurate than Bayesian, but not sufficient to reach four-nines accuracy, and that other techniques such as inoculation are needed.", "title": "" }, { "docid": "05fae4c840b1ee242a16a9db5eee4fb5", "text": "Hardware technologies for trusted computing, or trusted execution environments (TEEs), have rapidly matured over the last decade. In fact, TEEs are at the brink of widespread commoditization with the recent introduction of Intel Software Guard Extensions (Intel SGX). Despite such rapid development of TEE, software technologies for TEE significantly lag behind their hardware counterpart, and currently only a select group of researchers have the privilege of accessing this technology. To address this problem, we develop an open source platform, called OpenSGX, that emulates Intel SGX hardware components at the instruction level and provides new system software components necessarily required for full TEE exploration. We expect that the OpenSGX framework can serve as an open platform for SGX research, with the following contributions. First, we develop a fully functional, instruction-compatible emulator of Intel SGX for enabling the exploration of software/hardware design space, and development of enclave programs. OpenSGX provides a platform for SGX development, meaning that it provides not just emulation but also operating system components, an enclave program loader/packager, an OpenSGX user library, debugging, and performance monitoring. Second, to show OpenSGX's use cases, we applied OpenSGX to protect sensitive information (e.g., directory) of Tor nodes and evaluated their potential performance impacts. Therefore, we believe OpenSGX has great potential for broader communities to spark new research on soon-to-becommodity Intel SGX.", "title": "" }, { "docid": "272be5fede7ede10ebfd368cabcd437b", "text": "Penetration testing is widely used to help ensure the security of web applications. Using penetration testing, testers discover vulnerabilities by simulating attacks on a target web application. To do this efficiently, testers rely on automated techniques that gather input vector information about the target web application and analyze the application's responses to determine whether an attack was successful. Techniques for performing these steps are often incomplete, which can leave parts of the web application untested and vulnerabilities undiscovered.
This paper proposes a new approach to penetration testing that addresses the limitations of current techniques. The approach incorporates two recently developed analysis techniques to improve input vector identification and detect when attacks have been successful against a web application. This paper compares the proposed approach against two popular penetration testing tools for a suite of web applications with known and unknown vulnerabilities. The evaluation results show that the proposed approach performs more thorough penetration testing and leads to the discovery of more vulnerabilities than both tools. Copyright © 2011 John Wiley & Sons, Ltd.", "title": "" }, { "docid": "4d12a470a2f678142091dd5232050235", "text": "Learning a deep model from small data is still an open and challenging problem. We focus on one-shot classification by a deep learning approach based on a small quantity of training samples. We propose a novel deep learning approach named Local Contrast Learning (LCL), based on a key insight about human cognitive behavior: humans recognize objects in a specific context by contrasting them with other objects in the context or in memory. LCL is used to train a deep model that can contrast the sample being recognized with a couple of contrastive samples randomly drawn and shuffled. On the one-shot classification task on Omniglot, the LCL-based deep model with 122 layers and 1.94 million parameters, which was trained on a tiny dataset with only 60 classes and 20 samples per class, achieved an accuracy of 97.99%, outperforming humans and the state of the art established by Bayesian Program Learning (BPL) trained on 964 classes. LCL is a fundamental idea that can be applied to alleviate the overfitting of parametric models caused by a lack of training samples.", "title": "" } ]
scidocsrr
ced2236fd03478cdab09c79e822799e3
What recommenders recommend: an analysis of recommendation biases and possible countermeasures
[ { "docid": "b41a8bbd52a0c6a25cb1a102eb5a2f8b", "text": "Although the broad social and business success of recommender systems has been achieved across several domains, there is still a long way to go in terms of user satisfaction. One of the key dimensions for significant improvement is the concept of unexpectedness. In this article, we propose a method to improve user satisfaction by generating unexpected recommendations based on the utility theory of economics. In particular, we propose a new concept of unexpectedness as recommending to users those items that depart from what they would expect from the system - the consideration set of each user. We define and formalize the concept of unexpectedness and discuss how it differs from the related notions of novelty, serendipity, and diversity. In addition, we suggest several mechanisms for specifying the users’ expectations and propose specific performance metrics to measure the unexpectedness of recommendation lists. We also take into consideration the quality of recommendations using certain utility functions and present an algorithm for providing users with unexpected recommendations of high quality that are hard to discover but fairly match their interests. Finally, we conduct several experiments on “real-world” datasets and compare our recommendation results with other methods. The proposed approach outperforms these baseline methods in terms of unexpectedness and other important metrics, such as coverage, aggregate diversity and dispersion, while avoiding any accuracy loss.", "title": "" }, { "docid": "e88ad42145c63dd2aeff6c1f64f4b4c7", "text": "Recommender systems are in the center of network science, and they are becoming increasingly important in individual businesses for providing efficient, personalized services and products to users. Previous research in the field of recommendation systems focused on improving the precision of the system through designing more accurate recommendation lists. Recently, the community has been paying attention to diversity and novelty of recommendation lists as key characteristics of modern recommender systems. In many cases, novelty and precision do not go hand in hand, and the accuracy--novelty dilemma is one of the challenging problems in recommender systems, which needs efforts in making a trade-off between them.\n In this work, we propose an algorithm for providing novel and accurate recommendation to users. We consider the standard definition of accuracy and an effective self-information--based measure to assess novelty of the recommendation list. The proposed algorithm is based on item popularity, which is defined as the number of votes received in a certain time interval. Wavelet transform is used for analyzing popularity time series and forecasting their trend in future timesteps. We introduce two filtering algorithms based on the information extracted from analyzing popularity time series of the items. The popularity-based filtering algorithm gives a higher chance to items that are predicted to be popular in future timesteps. The other algorithm, denoted as a novelty and population-based filtering algorithm, is to move toward items with low popularity in past timesteps that are predicted to become popular in the future. The introduced filters can be applied as adds-on to any recommendation algorithm. In this article, we use the proposed algorithms to improve the performance of classic recommenders, including item-based collaborative filtering and Markov-based recommender systems. 
The experiments show that the algorithms could significantly improve both the accuracy and effective novelty of the classic recommenders.", "title": "" } ]
[ { "docid": "8ae1ef032c0a949aa31b3ca8bc024cb5", "text": "Measuring intellectual capital is on the agenda of most 21st century organisations. This paper takes a knowledge-based view of the firm and discusses the importance of measuring organizational knowledge assets. Knowledge assets underpin capabilities and core competencies of any organisation. Therefore, they play a key strategic role and need to be measured. This paper reviews the existing approaches for measuring knowledge based assets and then introduces the knowledge asset map which integrates existing approaches in order to achieve comprehensiveness. The paper then introduces the knowledge asset dashboard to clarify the important actor/infrastructure relationship, which elucidates the dynamic nature of these assets. Finally, the paper suggests to visualise the value pathways of knowledge assets before designing strategic key performance indicators which can then be used to test the assumed causal relationships. This will enable organisations to manage and report these key value drivers in today’s economy. Introduction In the last decade management literature has paid significant attention to the role of knowledge for global competitiveness in the 21st century. It is recognised as a durable and more sustainable strategic resource to acquire and maintain competitive advantages (Barney, 1991a; Drucker, 1988; Grant, 1991a). Today’s business world is characterised by phenomena such as e-business, globalisation, higher degrees of competitiveness, fast evolution of new technology, rapidly changing client demands, as well as changing economic and political structures. In this new context companies need to develop clearly defined strategies that will give them a competitive advantage (Porter, 2001; Barney, 1991a). For this, organisations have to understand which capabilities they need in order to gain and maintain this competitive advantage (Barney, 1991a; Prahalad and Hamel, 1990). Organizational capabilities are based on knowledge. Thus, knowledge is a resource that forms the foundation of the company’s capabilities. Capabilities combine to The Emerald Research Register for this journal is available at The current issue and full text archive of this journal is available at www.emeraldinsight.com/researchregister www.emeraldinsight.com/1463-7154.htm The authors would like to thank, Göran Roos, Steven Pike, Oliver Gupta, as well as the two anonymous reviewers for their valuable comments which helped us to improve this paper. Intellectual capital", "title": "" }, { "docid": "6eed03674521ecf9a558ab0059fc167f", "text": "University professors traditionally struggle to incorporate software testing into their course curriculum. Worries include double-grading for correctness of both source and test code and finding time to teach testing as a topic. Test-driven development (TDD) has been suggested as a possible solution to improve student software testing skills and to realize the benefits of testing. According to most existing studies, TDD improves software quality and student productivity. This paper surveys the current state of TDD experiments conducted exclusively at universities. 
Similar surveys compare experiments in both the classroom and industry, but none have focused strictly on academia.", "title": "" }, { "docid": "23305a36194ad3c9b6b3f667c79bd273", "text": "Evidence used to reconstruct the morphology and function of the brain (and the rest of the central nervous system) in fossil hominin species comes from the fossil and archeological records. Although the details provided about human brain evolution are scarce, they benefit from interpretations informed by interspecific comparative studies and, in particular, human pathology studies. In recent years, new information has come to light about fossil DNA and ontogenetic trajectories, for which pathology research has significant implications. We briefly describe and summarize data from the paleoarcheological and paleoneurological records about the evolution of fossil hominin brains, including behavioral data most relevant to brain research. These findings are brought together to characterize fossil hominin taxa in terms of brain structure and function and to summarize brain evolution in the human lineage.", "title": "" }, { "docid": "e9796b98d8f0bc81e1720be5431d2024", "text": "Flexible structures may fall victim to excessive levels of vibration under the action of wind, adversely affecting serviceability and occupant comfort. To ensure the functional performance of flexible structures, various design modifications are possible, ranging from alternative structural systems to the utilization of passive and active control devices. This paper presents an overview of state-of-the-art measures to reduce structural response of buildings, including a summary of recent work in aerodynamic tailoring and a discussion of auxiliary damping devices for mitigating the wind-induced motion of structures. In addition, some discussion of the application of such devices to improve structural resistance to seismic events is also presented, concluding with detailed examples of the application of auxiliary damping devices in Australia, Canada, China, Japan, and the United States.", "title": "" }, { "docid": "70b1e0badf7505e480af00014572140c", "text": "Title of Dissertation: Simulation-Based Algorithms for Markov Decision Processes Ying He, Doctor of Philosophy, 2002 Dissertation directed by: Professor Steven I. Marcus Department of Electrical & Computer Engineering Professor Michael C. Fu Department of Decision & Information Technologies Problems of sequential decision making under uncertainty are common in manufacturing, computer and communication systems, and many such problems can be formulated as Markov Decision Processes (MDPs). Motivated by a capacity expansion and allocation problem in semiconductor manufacturing, we formulate a fab-level decision making problem using a finite-horizon transient MDP model that can integrate life cycle dynamics of the fab and provide a trade-off between immediate and future benefits and costs. However, for large and complicated systems formulated as MDPs, the classical methodology to compute optimal policies, dynamic programming, suffers from the so-called “curse of dimensionality” (computational requirement increases exponentially with number of states /controls) and “curse of modeling” (an explicit model for the cost structure and/or the transition probabilities is not available). In problem settings to which our approaches apply, instead of the explicit transition probabilities, outputs are available from either a simulation model or from the actual system. 
Our methodology is first to find the structure of optimal policies for some special cases, and then to use the structure to construct parameterized heuristic policies for more general cases and implement simulation-based algorithms to determine parameters of the heuristic policies. For the fab-level decision-making problem, we analyze the structure of the optimal policy for a special “one-machine, two-product” case, and discuss the applicability of simulation-based algorithms. We develop several simulation-based algorithms for MDPs to overcome the difficulties of “curse of dimensionality” and “curse of modeling”, considering both theoretical and practical issues. First, we develop a simulation-based policy iteration algorithm for average cost problems under a unichain assumption, relaxing the common recurrent state assumption. Second, for weighted cost problems, we develop a new two-timescale simulation-based gradient algorithm based on perturbation analysis, provide a theoretical convergence proof, and compare it with two recently proposed simulation-based gradient algorithms. Third, we propose two new Simultaneous Perturbation Stochastic Approximation (SPSA) algorithms for weighted cost problems and verify their effectiveness via simulation; then, we consider a general SPSA algorithm for function minimization and show its convergence under a weaker assumption: the function does not have to be differentiable.", "title": "" }, { "docid": "7639c7333339605c677da0a766618c1b", "text": "This paper presents a general theoretical framework for ensemble methods of constructing significantly improved regression estimates. Given a population of regression estimators, we construct a hybrid estimator which is as good or better in the MSE sense than any estimator in the population. We argue that the ensemble method presented has several properties: 1) It efficiently uses all the networks of a population; none of the networks need be discarded. 2) It efficiently uses all the available data for training without over-fitting. 3) It inherently performs regularization by smoothing in functional space which helps to avoid over-fitting. 4) It utilizes local minima to construct improved estimates whereas other neural network algorithms are hindered by local minima. 5) It is ideally suited for parallel computation. 6) It leads to a very useful and natural measure of the number of distinct estimators in a population. 7) The optimal parameters of the ensemble estimator are given in closed form. Experimental results are provided which show that the ensemble method dramatically improves neural network performance on difficult real-world optical character recognition tasks.", "title": "" }, { "docid": "c0204869607a36bf85452fad89153b9c", "text": "Weather factors such as temperature and rainfall in residential areas and tourist destinations affect traffic flow on the surrounding roads. In this study, we attempt to discover new knowledge about the relationship between traffic congestion and weather by using big data processing technology. Changes in traffic congestion due to the weather are evaluated by using multiple linear regression analysis to create a prediction model and forecast traffic congestion on a daily basis. For the regression analysis, we use 48 weather forecasting factors and six dummy variables to express the days of the week.
The final multiple linear regression model is then proposed based on the three analytical steps of (i) the creation of the full regression model, (ii) the removal of the variables, and (iii) residual analysis. We find that the R-squared value of the proposed model has an explanatory power of 0.6555. To verify its predictability, the proposed model then evaluates traffic congestion in July and August 2014 by comparing predicted traffic congestion with actual traffic congestion. By using the mean absolute percentage error valuation method, we show that the final multiple linear regression model has a prediction accuracy of 84.8%.", "title": "" }, { "docid": "e9474d646b9da5e611475f4cdfdfc30e", "text": "Wearable medical sensors (WMSs) are garnering ever-increasing attention from both the scientific community and the industry. Driven by technological advances in sensing, wireless communication, and machine learning, WMS-based systems have begun transforming our daily lives. Although WMSs were initially developed to enable low-cost solutions for continuous health monitoring, the applications of WMS-based systems now range far beyond health care. Several research efforts have proposed the use of such systems in diverse application domains, e.g., education, human-computer interaction, and security. Even though the number of such research studies has grown drastically in the last few years, the potential challenges associated with their design, development, and implementation are neither well-studied nor well-recognized. This article discusses various services, applications, and systems that have been developed based on WMSs and sheds light on their design goals and challenges. We first provide a brief history of WMSs and discuss how their market is growing. We then discuss the scope of applications of WMS-based systems. Next, we describe the architecture of a typical WMS-based system and the components that constitute such a system, and their limitations. Thereafter, we suggest a list of desirable design goals that WMS-based systems should satisfy. Finally, we discuss various research directions related to WMSs and how previous research studies have attempted to address the limitations of the components used in WMS-based systems and satisfy the desirable design goals.", "title": "" }, { "docid": "4fa73e04ccc8620c12aaea666ea366a6", "text": "The popularity of the Web and Internet commerce provides many extremely large datasets from which information can be gleaned by data mining. This book focuses on practical algorithms that have been used to solve key problems in data mining and can be used on even the largest datasets. It begins with a discussion of the map-reduce framework, an important tool for parallelizing algorithms automatically. The tricks of locality-sensitive hashing are explained. This body of knowledge, which deserves to be more widely known, is essential when seeking similar objects in a very large collection without having to compare each pair of objects. Stream processing algorithms for mining data that arrives too fast for exhaustive processing are also explained. The PageRank idea and related tricks for organizing the Web are covered next. Other chapters cover the problems of finding frequent itemsets and clustering, each from the point of view that the data is too large to fit in main memory, and two applications: recommendation systems and Web advertising, each vital in e-commerce. 
This second edition includes new and extended coverage on social networks, machine learning and dimensionality reduction. Written by leading authorities in database and web technologies, it is essential reading for students and practitioners alike", "title": "" }, { "docid": "342bcd2509b632480c4f4e8059cfa6a1", "text": "This paper introduces the design and development of a novel axial-flux permanent magnet generator (PMG) using a printed circuit board (PCB) stator winding. This design has the mechanical rigidity, high efficiency and zero cogging torque required for a low speed water current turbine. The PCB stator has simplified the design and construction and avoids any slip rings. The flexible PCB winding represents an ultra thin electromagnetic exciting source where coils are wound in a wedge shape. The proposed multi-poles generator can be used for various low speed applications especially in small marine current energy conversion systems.", "title": "" }, { "docid": "ae19bd4334434cfb8c5ac015dc8d3bd4", "text": "Soldiers and front-line personnel operating in tactical environments increasingly make use of handheld devices to help with tasks such as face recognition, language translation, decision-making, and mission planning. These resource constrained edge environments are characterized by dynamic context, limited computing resources, high levels of stress, and intermittent network connectivity. Cyber-foraging is the leverage of external resource-rich surrogates to augment the capabilities of resource-limited devices. In cloudlet-based cyber-foraging, resource-intensive computation and data is offloaded to cloudlets. Forward-deployed, discoverable, virtual-machine-based tactical cloudlets can be hosted on vehicles or other platforms to provide infrastructure to offload computation, provide forward data staging for a mission, perform data filtering to remove unnecessary data from streams intended for dismounted users, and serve as collection points for data heading for enterprise repositories. This paper describes tactical cloudlets and presents experimentation results for five different cloudlet provisioning mechanisms. The goal is to demonstrate that cyber-foraging in tactical environments is possible by moving cloud computing concepts and technologies closer to the edge so that tactical cloudlets, even if disconnected from the enterprise, can provide capabilities that can lead to enhanced situational awareness and decision making at the edge.", "title": "" }, { "docid": "934ca8aa2798afd6e7cd4acceeed839a", "text": "This paper begins with an argument that most measure development in the social sciences, with its reliance on correlational techniques as a tool, falls short of the requirements for constructing meaningful, unidimensional measures of human attributes. By demonstrating how rating scales are ordinal-level data, we argue the necessity of converting these to equal-interval units to develop a measure that is both qualitatively and quantitatively defensible. This requires that the empirical results and theoretical explanation are questioned and adjusted at each step of the process. In our response to the reviewers, we describe how this approach was used to develop the Game Engagement Questionnaire (GEQ), including its emphasis on examining a continuum of involvement in violent video games. 
The GEQ is an empirically sound measure focused on one player characteristic that may be important in determining game influence.", "title": "" }, { "docid": "9acc94dd0f1cb229f15b2f833965e197", "text": "Loitering is a suspicious behavior that often leads to criminal actions, such as pickpocketing and illegal entry. Tracking methods can determine suspicious behavior based on trajectory, but require continuous appearance and are difficult to scale up to multi-camera systems. Using the duration of appearance of features works on multiple cameras, but does not consider major aspects of loitering behavior, such as repeated appearance and trajectory of candidates. We introduce an entropy model that maps the location of a person's features on a heatmap. It can be used as an abstraction of trajectory tracking across multiple surveillance cameras. We evaluate our method over several datasets and compare it to other loitering detection methods. The results show that our approach has similar results to state of the art, but can provide additional interesting candidates.", "title": "" }, { "docid": "f700b168c98d235a7fb76581cc24717f", "text": "It is becoming increasingly easy to automatically replace a face of one person in a video with the face of another person by using a pre-trained generative adversarial network (GAN). Recent public scandals, e.g., the faces of celebrities being swapped onto pornographic videos, call for automated ways to detect these Deepfake videos. To help developing such methods, in this paper, we present the first publicly available set of Deepfake videos generated from videos of VidTIMIT database. We used open source software based on GANs to create the Deepfakes, and we emphasize that training and blending parameters can significantly impact the quality of the resulted videos. To demonstrate this impact, we generated videos with low and high visual quality (320 videos each) using differently tuned parameter sets. We showed that the state of the art face recognition systems based on VGG and Facenet neural networks are vulnerable to Deepfake videos, with 85.62% and 95.00% false acceptance rates (on high quality versions) respectively, which means methods for detecting Deepfake videos are necessary. By considering several baseline approaches, we found that audio-visual approach based on lipsync inconsistency detection was not able to distinguish Deepfake videos. The best performing method, which is based on visual quality metrics and is often used in presentation attack detection domain, resulted in 8.97% equal error rate on high quality Deepfakes. Our experiments demonstrate that GAN-generated Deepfake videos are challenging for both face recognition systems and existing detection methods, and the further development of face swapping technology will make it even more so.", "title": "" }, { "docid": "8a1b37cf4d0632270f83a0826535c38a", "text": "Magnetic resonance imaging (MRI) examinations provide high-resolution information about the anatomic structure of the kidneys and are used to measure total kidney volume (TKV) in patients with Autosomal Dominant Polycystic Kidney Disease (ADPKD). Height-adjusted TKV (HtTKV) has become the gold-standard imaging biomarker for ADPKD progression at early stages of the disease when estimated glomerular filtration rate (eGFR) is still normal. However, HtTKV does not take advantage of the wealth of information provided by MRI. 
Here we tested whether image texture features provide additional insights into the ADPKD kidney that may be used as complementary information to existing biomarkers. A retrospective cohort of 122 patients from the Consortium for Radiologic Imaging Studies of Polycystic Kidney Disease (CRISP) study was identified who had T2-weighted MRIs and eGFR values over 70 mL/min/1.73m2 at the time of their baseline scan. We computed nine distinct image texture features for each patient. The ability of each feature to predict subsequent progression to CKD stage 3A, 3B, and 30% reduction in eGFR at eight-year follow-up was assessed. A multiple linear regression model was developed incorporating age, baseline eGFR, HtTKV, and three image texture features identified by stability feature selection (Entropy, Correlation, and Energy). Including texture in a multiple linear regression model (predicting percent change in eGFR) improved Pearson correlation coefficient from -0.51 (using age, eGFR, and HtTKV) to -0.70 (adding texture). Thus, texture analysis offers an approach to refine ADPKD prognosis and should be further explored for its utility in individualized clinical decision making and outcome prediction.", "title": "" }, { "docid": "97adbe6b157cd5d411788d18520612a3", "text": "MicroProteins (miPs) are short, usually single-domain proteins that, in analogy to miRNAs, heterodimerize with their targets and exert a dominant-negative effect. Recent bioinformatic attempts to identify miPs have resulted in a list of potential miPs, many of which lack the defining characteristics of a miP. In this opinion article, we clearly state the characteristics of a miP as evidenced by known proteins that fit the definition; we explain why modulatory proteins misrepresented as miPs do not qualify as true miPs. We also discuss the evolutionary history of miPs, and how the miP concept can extend beyond transcription factors (TFs) to encompass different non-TF proteins that require dimerization for full function.", "title": "" }, { "docid": "c3a9ccc724f388399c25938a33123bd5", "text": "Using a unique high-frequency futures dataset, we characterize the response of U.S., German and British stock, bond and foreign exchange markets to real-time U.S. macroeconomic news. We find that news produces conditional mean jumps; hence high-frequency stock, bond and exchange rate dynamics are linked to fundamentals. Equity markets, moreover, react differently to news depending on the stage of the business cycle, which explains the low correlation between stock and bond returns when averaged over the cycle. Hence our results qualify earlier work suggesting that bond markets react most strongly to macroeconomic news; in particular, when conditioning on the state of the economy, the equity and foreign exchange markets appear equally responsive. Finally, we also document important contemporaneous links across all markets and countries, even after controlling for the effects of macroeconomic news.", "title": "" }, { "docid": "13476dc47793a50200c97ec896b92cf2", "text": "Many promising therapeutic agents are limited by their inability to reach the systemic circulation, due to the excellent barrier properties of biological membranes, such as the stratum corneum (SC) of the skin or the sclera/cornea of the eye and others. The outermost layer of the skin, the SC, is the principal barrier to topically-applied medications. The intact SC thus provides the main barrier to exogenous substances, including drugs. Only drugs with very specific physicochemical properties (molecular weight < 500 Da, adequate lipophilicity, and low melting point) can be successfully administered transdermally. Transdermal delivery of hydrophilic drugs and macromolecular agents of interest, including peptides, DNA, and small interfering RNA is problematic. Therefore, facilitation of drug penetration through the SC may involve by-pass or reversible disruption of SC molecular architecture. Microneedles (MNs), when used to puncture skin, will by-pass the SC and create transient aqueous transport pathways of micron dimensions and enhance the transdermal permeability. These micropores are orders of magnitude larger than molecular dimensions, and, therefore, should readily permit the transport of hydrophilic macromolecules. Various strategies have been employed by many research groups and pharmaceutical companies worldwide, for the fabrication of MNs. This review details various types of MNs, fabrication methods and, importantly, investigations of clinical safety of MN.", "title": "" }, { "docid": "7e647cac9417bf70acd8c0b4ee0faa9b", "text": "Global Navigation Satellite Systems (GNSS) are applicable to deliver train locations in real time. This train localization function should comply with railway functional safety standards; thus, the GNSS performance needs to be evaluated in consistent with railway EN 50126 standard [Reliability, Availability, Maintainability, and Safety (RAMS)]. This paper demonstrates the performance of the GNSS receiver for train localization. First, the GNSS performance and railway RAMS properties are compared by definitions. Second, the GNSS receiver measurements are categorized into three states (i.e., up, degraded, and faulty states).
The relations between the states are illustrated in a stochastic Petri net model. Finally, the performance properties are evaluated using real data collected on the railway track in the High Tatra Mountains in Slovakia. The property evaluation is based on the definitions represented by the modeled states.", "title": "" }, { "docid": "3ab8e2a7235d5100b8b65fbf9a088404", "text": "In multi-label classification in the big data age, the number of classes can be in the thousands, and obtaining sufficient training data for each class is infeasible. Zero-shot learning aims at predicting a large number of unseen classes using only labeled data from a small set of classes and external knowledge about class relations. However, previous zero-shot learning models passively accept labeled data collected beforehand, relinquishing the opportunity to select the proper set of classes to inquire labeled data and optimize the performance of unseen class prediction. To resolve this issue, we propose an active class selection strategy to intelligently query labeled data for a parsimonious set of informative classes. We demonstrate two desirable probabilistic properties of the proposed method that can facilitate unseen class prediction. Experiments on 4 text datasets demonstrate that the active zero-shot learning algorithm is superior to a wide spectrum of baselines. We indicate promising future directions at the end of this paper.", "title": "" } ]
scidocsrr
b3657ac03c5a8b7ff7c08e358d39c2c4
High-order Graph-based Neural Dependency Parsing
[ { "docid": "497088def9f5f03dcb32e33d1b6fcb64", "text": "In recent years, variants of a neural network architecture for statistical language modeling have been proposed and successfully applied, e.g. in the language modeling component of speech recognizers. The main advantage of these architectures is that they learn an embedding for words (or other symbols) in a continuous space that helps to smooth the language model and provide good generalization even when the number of training examples is insufficient. However, these models are extremely slow in comparison to the more commonly used n-gram models, both for training and recognition. As an alternative to an importance sampling method proposed to speed-up training, we introduce a hierarchical decomposition of the conditional probabilities that yields a speed-up of about 200 both during training and recognition. The hierarchical decomposition is a binary hierarchical clustering constrained by the prior knowledge extracted from the WordNet semantic hierarchy.", "title": "" }, { "docid": "c7fa2e7615a2767ca39d951f1ecf835e", "text": "We explore the application of neural language models to machine translation. We develop a new model that combines the neural probabilistic language model of Bengio et al., rectified linear units, and noise-contrastive estimation, and we incorporate it into a machine translation system both by reranking k-best lists and by direct integration into the decoder. Our large-scale, large-vocabulary experiments across four language pairs show that our neural language model improves translation quality by up to 1.1 Bleu.", "title": "" }, { "docid": "062c970a14ac0715ccf96cee464a4fec", "text": "A goal of statistical language modeling is to learn the joint probability function of sequences of words in a language. This is intrinsically difficult because of the curse of dimensionality: a word sequence on which the model will be tested is likely to be different from all the word sequences seen during training. Traditional but very successful approaches based on n-grams obtain generalization by concatenating very short overlapping sequences seen in the training set. We propose to fight the curse of dimensionality by learning a distributed representation for words which allows each training sentence to inform the model about an exponential number of semantically neighboring sentences. The model learns simultaneously (1) a distributed representation for each word along with (2) the probability function for word sequences, expressed in terms of these representations. Generalization is obtained because a sequence of words that has never been seen before gets high probability if it is made of words that are similar (in the sense of having a nearby representation) to words forming an already seen sentence. Training such large models (with millions of parameters) within a reasonable time is itself a significant challenge. We report on experiments using neural networks for the probability function, showing on two text corpora that the proposed approach significantly improves on state-of-the-art n-gram models, and that the proposed approach allows to take advantage of longer contexts.", "title": "" } ]
[ { "docid": "5ebb65f075fd00130e6684b86b9ab235", "text": "While machine learning systems have recently achieved impressive, (super)human-level performance in several tasks, they have often relied on unnatural amounts of supervision – e.g. large numbers of labeled images or continuous scores in video games. In contrast, human learning is largely unsupervised, driven by observation and interaction with the world. Emulating this type of learning in machines is an open challenge, and one that is critical for general artificial intelligence. Here, we explore prediction of future frames in video sequences as an unsupervised learning rule. A key insight here is that in order to be able to predict how the visual world will change over time, an agent must have at least some implicit model of object structure and the possible transformations objects can undergo. To this end, we have designed several models capable of accurate prediction in complex sequences. Our first model consists of a recurrent extension to the standard autoencoder framework. Trained end-to-end to predict the movement of synthetic stimuli, we find that the model learns a representation of the underlying latent parameters of the 3D objects themselves. Importantly, we find that this representation is naturally tolerant to object transformations, and generalizes well to new tasks, such as classification of static images. Similar models trained solely with a reconstruction loss fail to generalize as effectively. In addition, we explore the use of an adversarial loss, as in a Generative Adversarial Network, illustrating its complementary effects to traditional pixel losses for the task of next-frame prediction.", "title": "" }, { "docid": "5d775c669636860d7cbf987f1e998440", "text": "Recent changes in the Music Encoding Initiative (MEI) have transformed it into an extensible platform from which new notation encoding schemes can be produced. This paper introduces MEI as a document-encoding framework, and illustrates how it can be extended to encode new types of notation, eliminating the need for creating specialized and potentially incompatible notation encoding standards.", "title": "" }, { "docid": "d652a2ffb4708b76d8fa70d7a452ae9f", "text": "If we are to achieve natural human–robot interaction, we may need to complement current vision and speech interfaces. Touch may provide us with an extra tool in this quest. In this paper we demonstrate the role of touch in interaction between a robot and a human. We show how infrared sensors located on robots can be easily used to detect and distinguish human interaction, in this case interaction with individual children. This application of infrared sensors potentially has many uses; for example, in entertainment or service robotics. This system could also benefit therapy or rehabilitation, where the observation and recording of movement and interaction is important. In the long term, this technique might enable robots to adapt to individuals or individual types of user. c © 2006 Published by Elsevier B.V.", "title": "" }, { "docid": "3e128a5632f5ada623846f18e79444af", "text": "Given the resources needed to launch a retail store on the Internet or change an existing online storefront design, it is important to allocate product development resources to interface features that actually improve store traffic and sales. 
We identified features that impact store traffic and sales using regression models of 1996 store traffic and dollar sales as dependent variables and interface design features such as number of links into the store, hours of promotional ads, number of products, and store navigation features as the independent variables. Product list navigation features that reduce the time to purchase products online account for 61% of the variance in monthly sales. Other factors explaining the variance in monthly sales include: number of hyperlinks into the store (10%), hours of promotion (4%) and customer service feedback (1%). These findings demonstrate that the user interface is an essential link between the customer and the retail store in Web-based shopping environments.", "title": "" }, { "docid": "b2a8b979f4bd96a28746b090bca2a567", "text": "Gradient-based policy search is an alternative to value-function-based methods for reinforcement learning in non-Markovian domains. One apparent drawback of policy search is its requirement that all actions be \\on-policy\"; that is, that there be no explicit exploration. In this paper, we provide a method for using importance sampling to allow any well-behaved directed exploration policy during learning. We show both theoretically and experimentally that using this method can achieve dramatic performance improvements. During this work, Nicolas Meuleau was at the MIT Arti cial Intelligence laboratory, supported in part by a research grant from NTT; Leonid Peshkin by grants from NSF and NTT; and Kee-Eung Kim in part by AFOSR/RLF 30602-95-1-0020.", "title": "" }, { "docid": "1c6078d68891b6600727a82841812666", "text": "Network traffic prediction aims at predicting the subsequent network traffic by using the previous network traffic data. This can serve as a proactive approach for network management and planning tasks. The family of recurrent neural network (RNN) approaches is known for time series data modeling which aims to predict the future time series based on the past information with long time lags of unrevealed size. RNN contains different network architectures like simple RNN, long short term memory (LSTM), gated recurrent unit (GRU), identity recurrent unit (IRNN) which is capable to learn the temporal patterns and long range dependencies in large sequences of arbitrary length. To leverage the efficacy of RNN approaches towards traffic matrix estimation in large networks, we use various RNN networks. The performance of various RNN networks is evaluated on the real data from GÉANT backbone networks. To identify the optimal network parameters and network structure of RNN, various experiments are done. All experiments are run up to 200 epochs with learning rate in the range [0.01-0.5]. LSTM has performed well in comparison to the other RNN and classical methods. Moreover, the performance of various RNN methods is comparable to LSTM.", "title": "" }, { "docid": "6b4efbb3572eeb09536e2ec82825f2fb", "text": "Well-designed games are good motivators by nature, as they imbue players with clear goals and a sense of reward and fulfillment, thus encouraging them to persist and endure in their quests. Recently, this motivational power has started to be applied to non- game contexts, a practice known as Gamification. This adds gaming elements to non-game processes, motivating users to adopt new behaviors, such as improving their physical condition, working more, or learning something new. 
This paper describes an experiment in which game-like elements were used to improve the delivery of a Master's level College course, including scoring, levels, leaderboards, challenges and badges. To assess how gamification impacted the learning experience, we compare the gamified course to its non-gamified version from the previous year, using different performance measures. We also assessed student satisfaction as compared to other regular courses in the same academic context. Results were very encouraging, showing significant increases ranging from lecture attendance to online participation, proactive behaviors and perusing the course reference materials. Moreover, students considered the gamified instance to be more motivating, interesting and easier to learn as compared to other courses. We finalize by discussing the implications of these results on the design of future gamified learning experiences.", "title": "" }, { "docid": "d5665efd0e4a91e9be4c84fecd5fd4ad", "text": "Hardware accelerators are being increasingly deployed to boost the performance and energy efficiency of deep neural network (DNN) inference. In this paper we propose Thundervolt, a new framework that enables aggressive voltage underscaling of high-performance DNN accelerators without compromising classification accuracy even in the presence of high timing error rates. Using post-synthesis timing simulations of a DNN acceleratormodeled on theGoogle TPU,we show that Thundervolt enables between 34%-57% energy savings on stateof-the-art speech and image recognition benchmarks with less than 1% loss in classification accuracy and no performance loss. Further, we show that Thundervolt is synergistic with and can further increase the energy efficiency of commonly used run-timeDNNpruning techniques like Zero-Skip.", "title": "" }, { "docid": "4fbde6cd9d511072680a4f20f6674acf", "text": "A 50-year-old man developed numerous pustules and bullae on the trunk and limbs 15 days after anal fissure surgery. The clinicopathological diagnosis was iododerma induced by topical povidone-iodine sitz baths postoperatively. Complete resolution occurred within 3 weeks using systemic corticosteroids and forced diuresis.", "title": "" }, { "docid": "fddf2c0ce952f3889207c05026c086ed", "text": "How we design and evaluate for emotions depends crucially on what we take emotions to be. In affective computing, affect is often taken to be another kind of information discrete units or states internal to an individual that can be transmitted in a loss-free manner from people to computational systems and back. While affective computing explicitly challenges the primacy of rationality in cognitivist accounts of human activity, at a deeper level it often relies on and reproduces the same information-processing model of cognition. Drawing on cultural, social, and interactional critiques of cognition which have arisen in HCI, as well as anthropological and historical accounts of emotion, we explore an alternative perspective on emotion as interaction: dynamic, culturally mediated, and socially constructed and experienced. We demonstrate how this model leads to new goals for affective systems instead of sensing and transmitting emotion, systems should support human users in understanding, interpreting, and experiencing emotion in its full complexity and ambiguity. 
In developing from emotion as objective, externally measurable unit to emotion as experience, evaluation, too, alters focus from externally tracking the circulation of emotional information to co-interpreting emotions as they are made in interaction.", "title": "" }, { "docid": "e97247d7b42875782164719ddf202a3c", "text": "This work, set in the context of the apparel industry, proposes an action-oriented disclosure tool to help solve the sustainability challenges of complex fast-fashion supply chains (SCs). In a search for effective disclosure, it focusses on actions towards sustainability instead of the measurements and indicators of its impacts. We applied qualitative and quantitative content analysis to the sustainability reporting of the world’s two largest fast-fashion companies in three phases. First, we searched for the challenges that the organisations report they are currently facing. Second, we introduced the United Nations’ Sustainable Development Goals (SDGs) framework to overcome the voluntary reporting drawback of ‘choosing what to disclose’, and revealed orphan issues. This broadened the scope from internal corporate challenges to issues impacting the ecosystems in which companies operate. Third, we analysed the reported sustainability actions and decomposed them into topics, instruments, and actors. The results showed that fast-fashion reporting has a broadly developed analysis base, but lacks action orientation. This has led us to propose the ‘Fast-Fashion Sustainability Scorecard’ as a universal disclosure framework that shifts the focus from (i) reporting towards action; (ii) financial performance towards sustainable value creation; and (iii) corporate boundaries towards value creation for the broader SC ecosystem.", "title": "" }, { "docid": "74bcc177a94ff57a847fb1677da5f032", "text": "The resurgence of effort within computational semantics has led to increased interest in various types of relation extraction and semantic parsing. While various manually annotated resources exist for enabling this work, these materials have been developed with different standards and goals in mind. In an effort to develop better general understanding across these resources, we provide a summary overview of the standards underlying ACE, ERE, TAC-KBP Slot-filling, and FrameNet.", "title": "" }, { "docid": "8e4bd52e3b10ea019241679541c25c9d", "text": "Accurate project effort prediction is an important goal for the software engineering community. To date most work has focused upon building algorithmic models of effort, for example COCOMO. These can be calibrated to local environments. We describe an alternative approach to estimation based upon the use of analogies. The underlying principle is to characterize projects in terms of features (for example, the number of interfaces, the development method or the size of the functional requirements document). Completed projects are stored and then the problem becomes one of finding the most similar projects to the one for which a prediction is required. Similarity is defined as Euclidean distance in n-dimensional space where n is the number of project features. Each dimension is standardized so all dimensions have equal weight. The known effort values of the nearest neighbors to the new project are then used as the basis for the prediction. The process is automated using a PC-based tool known as ANGEL. 
The method is validated on nine different industrial datasets (a total of 275 projects) and in all cases analogy outperforms algorithmic models based upon stepwise regression. From this work we argue that estimation by analogy is a viable technique that, at the very least, can be used by project managers to complement current estimation techniques.", "title": "" }, { "docid": "b5c2e36e805f3ca96cde418137ed0239", "text": "PURPOSE\nTo report a novel method for measuring the degree of inferior oblique muscle overaction and to investigate the correlation with other factors.\n\n\nDESIGN\nCross-sectional diagnostic study.\n\n\nMETHODS\nOne hundred and forty-two eyes (120 patients) were enrolled in this study. Subjects underwent a full orthoptic examination and photographs were obtained in the cardinal positions of gaze. The images were processed using Photoshop and analyzed using the ImageJ program to measure the degree of inferior oblique muscle overaction. Reproducibility or interobserver variability was assessed by Bland-Altman plots and by calculation of the intraclass correlation coefficient (ICC). The correlation between the degree of inferior oblique muscle overaction and the associated factors was estimated with linear regression analysis.\n\n\nRESULTS\nThe mean angle of inferior oblique muscle overaction was 17.8 ± 10.1 degrees (range, 1.8-54.1 degrees). The 95% limit of agreement of interobserver variability for the degree of inferior oblique muscle overaction was ±1.76 degrees, and ICC was 0.98. The angle of inferior oblique muscle overaction showed significant correlation with the clinical grading scale (R = 0.549, P < .001) and with hypertropia in the adducted position (R = 0.300, P = .001). The mean angles of inferior oblique muscle overaction classified into grades 1, 2, 3, and 4 according to the clinical grading scale were 10.5 ± 9.1 degrees, 16.8 ± 7.8 degrees, 24.3 ± 8.8 degrees, and 40.0 ± 12.2 degrees, respectively (P < .001).\n\n\nCONCLUSIONS\nWe describe a new method for measuring the degree of inferior oblique muscle overaction using photographs of the cardinal positions. It has the potential to be a diagnostic tool that measures inferior oblique muscle overaction with minimal observer dependency.", "title": "" }, { "docid": "52ab79410044bd29c11cdd8352d10a6e", "text": "Fashion markets are synonymous with rapid change and, as a result, commercial success or failure in those markets is largely determined by the organisation’s flexibility and responsiveness. Responsiveness is characterised by short time-to-market, the ability to scale up (or down) quickly and the rapid incorporation of consumer preferences into the design process. In this paper it is argued that conventional organisational structures and forecast-driven supply chains are not adequate to meet the challenges of volatile and turbulent demand which typify fashion markets today. Instead, the requirement is for the creation of an agile organisation embedded within an agile supply chain INTRODUCTION Fashion markets have long attracted the interest of researchers. More often the focus of their work was the psychology and sociology of fashion and with the process by which fashions were adopted across populations (see for example Wills and Midgley, 1973). In parallel with this, a body of work has developed seeking to identify cycles in fashions (e.g. Carman, 1966). Much of this earlier work was intended to create insights and even tools to help improve the demand forecasting of fashion products. 
However, the reality that is now gradually being accepted both by those who work in the industry and those who study it, is that the demand for fashion products cannot be forecast. Instead, we need to recognise that fashion markets are complex open systems that frequently demonstrate high levels of ‘chaos’. In such conditions managerial effort may be better expended on devising strategies", "title": "" }, { "docid": "9b71d11e2096008bc3603c62d89e452e", "text": "Abstract In the present study biodiesel was synthesized from Waste Cook Oil (WCO) by three-step method and regressive analyzes of the process was done. The raw oil, containing 1.9wt% Free Fatty Acid (FFA) and viscosity was 47.6mm/s. WCO was collected from local restaurant of Sylhet city in Bangladesh. Transesterification method gives lower yield than three-step method. In the three-step method, the first step is saponification of the oil followed by acidification to produce FFA and finally esterification of FFA to produce biodiesel. In the saponification reaction, various reaction parameters such as oil to sodium hydroxide molar ratio and reaction time were optimized and the oil to NaOH molar ratio was 1:2, In the esterification reaction, the reaction parameters such as methanol to FFA molar ratio, catalyst concentration and reaction temperature were optimized. Silica gel was used during esterification reaction to adsorb water produced in the reaction. Hence the reaction rate was increased and finally the FFA was reduced to 0.52wt%. A factorial design was studied for esterification reaction based on yield of biodiesel. Finally various properties of biodiesel such as FFA, viscosity, specific gravity, cetane index, pour point, flash point etc. were measured and compared with biodiesel and petro-diesel standard. The reaction yield was 79%.", "title": "" }, { "docid": "645a1ad9ab07eee096180e08e6f1fdff", "text": "In the light of evidence from about 200 studies showing gender symmetry in perpetration of partner assault, research can now focus on why gender symmetry is predominant and on the implications of symmetry for primary prevention and treatment of partner violence. Progress in such research is handicapped by a number of problems: (1) Insufficient empirical research and a surplus of discussion and theory, (2) Blinders imposed by commitment to a single causal factor theory-patriarchy and male dominance-in the face of overwhelming evidence that this is only one of a multitude of causes, (3) Research purporting to investigate gender differences but which obtains data on only one gender, (4) Denial of research grants to projects that do not assume most partner violence is by male perpetrators, (5) Failure to investigate primary prevention and treatment programs for female offenders, and (6) Suppression of evidence on female perpetration by both researchers and agencies.", "title": "" }, { "docid": "780095276d7ac3cae1b95b7a1ceee8b3", "text": "This work presents a systematic study toward the design and first demonstration of high-performance n-type monolayer tungsten diselenide (WSe2) field effect transistors (FET) by selecting the contact metal based on understanding the physics of contact between metal and monolayer WSe2. Device measurements supported by ab initio density functional theory (DFT) calculations indicate that the d-orbitals of the contact metal play a key role in forming low resistance ohmic contacts with monolayer WSe2. 
On the basis of this understanding, indium (In) leads to small ohmic contact resistance with WSe2 and consequently, back-gated In-WSe2 FETs attained a record ON-current of 210 μA/μm, which is the highest value achieved in any monolayer transition-metal dichalcogenide- (TMD) based FET to date. An electron mobility of 142 cm(2)/V·s (with an ON/OFF current ratio exceeding 10(6)) is also achieved with In-WSe2 FETs at room temperature. This is the highest electron mobility reported for any back gated monolayer TMD material till date. The performance of n-type monolayer WSe2 FET was further improved by Al2O3 deposition on top of WSe2 to suppress the Coulomb scattering. Under the high-κ dielectric environment, electron mobility of Ag-WSe2 FET reached ~202 cm(2)/V·s with an ON/OFF ratio of over 10(6) and a high ON-current of 205 μA/μm. In tandem with a recent report of p-type monolayer WSe2 FET ( Fang , H . et al. Nano Lett. 2012 , 12 , ( 7 ), 3788 - 3792 ), this demonstration of a high-performance n-type monolayer WSe2 FET corroborates the superb potential of WSe2 for complementary digital logic applications.", "title": "" }, { "docid": "7fe99b63d2b3d94918e4b2f536053b1c", "text": "Delay Tolerant Networks (DTN) are networks of self-organizing wireless nodes, where end-to-end connectivity is intermittent. In these networks, forwarding decisions are made using locally collected knowledge about node behavior (e.g., past contacts between nodes) to predict which nodes are likely to deliver a content or bring it closer to the destination. One promising way of predicting future contact opportunities is to aggregate contacts seen in the past to a social graph and use metrics from complex network analysis (e.g., centrality and similarity) to assess the utility of a node to carry a piece of content. This aggregation presents an inherent tradeoff between the amount of time-related information lost during this mapping and the predictive capability of complex network analysis in this context. In this paper, we use two recent DTN routing algorithms that rely on such complex network analysis, to show that contact aggregation significantly affects the performance of these protocols. We then propose simple contact mapping algorithms that demonstrate improved performance up to a factor of 4 in delivery ratio, and robustness to various connectivity scenarios for both protocols.", "title": "" }, { "docid": "f5ce4a13a8d081243151e0b3f0362713", "text": "Despite the growing popularity of digital imaging devices, the problem of accurately estimating the spatial frequency response or optical transfer function (OTF) of these devices has been largely neglected. Traditional methods for estimating OTFs were designed for film cameras and other devices that form continuous images. These traditional techniques do not provide accurate OTF estimates for typical digital image acquisition devices because they do not account for the fixed sampling grids of digital devices . This paper describes a simple method for accurately estimating the OTF of a digital image acquisition device. The method extends the traditional knife-edge technique''3 to account for sampling. One of the principal motivations for digital imaging systems is the utility of digital image processing algorithms, many of which require an estimate of the OTF. Algorithms for enhancement, spatial registration, geometric transformations, and other purposes involve restoration—removing the effects of the image acquisition device. 
Nearly all restoration algorithms (e.g., the", "title": "" } ]
scidocsrr
861754719a5b8722c1e900ffcce1da5c
Fine-grained Opinion Mining with Recurrent Neural Networks and Word Embeddings
[ { "docid": "69d65a994d5b5c412ee6b8a266cb9b31", "text": "This paper describes our system used in the Aspect Based Sentiment Analysis Task 4 at the SemEval-2014. Our system consists of two components to address two of the subtasks respectively: a Conditional Random Field (CRF) based classifier for Aspect Term Extraction (ATE) and a linear classifier for Aspect Term Polarity Classification (ATP). For the ATE subtask, we implement a variety of lexicon, syntactic and semantic features, as well as cluster features induced from unlabeled data. Our system achieves state-of-the-art performances in ATE, ranking 1st (among 28 submissions) and 2nd (among 27 submissions) for the restaurant and laptop domain respectively.", "title": "" }, { "docid": "03b3d8220753570a6b2f21916fe4f423", "text": "Recent systems have been developed for sentiment classification, opinion recognition, and opinion analysis (e.g., detecting polarity and strength). We pursue another aspect of opinion analysis: identifying the sources of opinions, emotions, and sentiments. We view this problem as an information extraction task and adopt a hybrid approach that combines Conditional Random Fields (Lafferty et al., 2001) and a variation of AutoSlog (Riloff, 1996a). While CRFs model source identification as a sequence tagging task, AutoSlog learns extraction patterns. Our results show that the combination of these two methods performs better than either one alone. The resulting system identifies opinion sources with precision and recall using a head noun matching measure, and precision and recall using an overlap measure.", "title": "" }, { "docid": "09df260d26638f84ec3bd309786a8080", "text": "If we take an existing supervised NLP system, a simple and general way to improve accuracy is to use unsupervised word representations as extra word features. We evaluate Brown clusters, Collobert and Weston (2008) embeddings, and HLBL (Mnih & Hinton, 2009) embeddings of words on both NER and chunking. We use near state-of-the-art supervised baselines, and find that each of the three word representations improves the accuracy of these baselines. We find further improvements by combining different word representations. You can download our word features, for off-the-shelf use in existing NLP systems, as well as our code, here: http://metaoptimize.com/projects/wordreprs/", "title": "" } ]
[ { "docid": "7ccbb730f1ce8eca687875c632520545", "text": "Increasing cost of the fertilizers with lesser nutrient use efficiency necessitates alternate means to fertilizers. Soil is a storehouse of nutrients and energy for living organisms under the soil-plant-microorganism system. These rhizospheric microorganisms are crucial components of sustainable agricultural ecosystems. They are involved in sustaining soil as well as crop productivity under organic matter decomposition, nutrient transformations, and biological nutrient cycling. The rhizospheric microorganisms regulate the nutrient flow in the soil through assimilating nutrients, producing biomass, and converting organically bound forms of nutrients. Soil microorganisms play a significant role in a number of chemical transformations of soils and thus, influence the availability of macro- and micronutrients. Use of plant growth-promoting microorganisms (PGPMs) helps in increasing yields in addition to conventional plant protection. The most important PGPMs are Azospirillum, Azotobacter, Bacillus subtilis, B. mucilaginosus, B. edaphicus, B. circulans, Paenibacillus spp., Acidithiobacillus ferrooxidans, Pseudomonas, Burkholderia, potassium, phosphorous, zinc-solubilizing microorganisms, or SMART microbes; these are eco-friendly and environmentally safe. The rhizosphere is the important area of soil influenced by plant roots. It is composed of huge microbial populations that are somehow different from the rest of the soil population, generally denominated as the “rhizosphere effect.” The rhizosphere is the small region of soil that is immediately near to the root surface and also affected by root exudates.", "title": "" }, { "docid": "546ce79bcfa2c2c456036e864a7162f8", "text": "The estimation of effort involved in developing a software product plays an important role in determining the success or failure of the product. Project managers require a reliable approach for software effort estimation. It is especially important during the early stage of the software development life cycle. An accurate software effort estimation is a major concern in current industries. In this paper, the main goal is to estimate the effort required to develop various software projects using class point approach.
Then optimization of the effort parameters is achieved using adaptive regression based Multi-Layer Perceptron (ANN) technique to obtain better accuracy. Furthermore, a comparative analysis of software effort estimation using Multi-Layer Perceptron (ANN) and Radial Basis Function Network (RBFN) has been provided. By estimating the software projects accurately, we can have softwares with acceptable quality within budget and on planned schedules.", "title": "" }, { "docid": "b8ea508a39c9ff83cd663f4a0d68c283", "text": "For decades—even prior to its inception—AI has aroused both fear and excitement as humanity has contemplated creating machines like ourselves. Unfortunately, the misconception that “intelligent” artifacts should necessarily be human-like has largely blinded society to the fact that we have been achieving AI for some time. Although AI that surpasses human ability grabs headlines (think of Watson, Deep Mind, or alphaGo), AI has been a standard part of the industrial repertoire since at least the 1980s, with expert systems checking circuit boards and credit card transactions. Machine learning (ML) strategies for generating AI have also long been used, such as genetic algorithms for finding solutions to intractable computational problems like scheduling, and neural networks not only to model and understand human learning but also for basic industrial control, monitoring, and classification. In the 1990s, probabilistic and Bayesian methods revolutionized ML and opened the door to one of the most pervasive AI abilities now available: searching through massive troves of data. Innovations in AI and ML algorithms have extended our capacity to find information in texts, allowing us to search photographs as well as both recorded and live video and audio. We can translate, transcribe, read lips, read emotions (including lying), forge signatures and other handwriting, and forge video. Yet, the downside of these benefits is ever present. As we write this, allegations are circulating that the Standardizing Ethical Design for Artificial Intelligence and Autonomous Systems", "title": "" }, { "docid": "457f10c4c5d5b748a4f35abd89feb519", "text": "Document image binarization is an important step in the document image analysis and recognition pipeline. H-DIBCO 2014 is the International Document Image Binarization Competition which is dedicated to handwritten document images organized in conjunction with ICFHR 2014 conference. The objective of the contest is to identify current advances in handwritten document image binarization using meaningful evaluation performance measures. This paper reports on the contest details including the evaluation measures used as well as the performance of the 7 submitted methods along with a short description of each method.", "title": "" }, { "docid": "c64b13db5a4c35861b06ec53c5c73946", "text": "In this paper, we address the problem of searching for semantically similar images from a large database. We present a compact coding approach, supervised quantization. Our approach simultaneously learns feature selection that linearly transforms the database points into a low-dimensional discriminative subspace, and quantizes the data points in the transformed space.
The optimization criterion is that the quantized points not only approximate the transformed points accurately, but also are semantically separable: the points belonging to a class lie in a cluster that is not overlapped with other clusters corresponding to other classes, which is formulated as a classification problem. The experiments on several standard datasets show the superiority of our approach over the state-of-the art supervised hashing and unsupervised quantization algorithms.", "title": "" }, { "docid": "106f80b025d0f48cb80718bc82573961", "text": "The region-based Convolutional Neural Network (CNN) detectors such as Faster R-CNN or R-FCN have already shown promising results for object detection by combining the region proposal subnetwork and the classification subnetwork together. Although R-FCN has achieved higher detection speed while keeping the detection performance, the global structure information is ignored by the position-sensitive score maps. To fully explore the local and global properties, in this paper, we propose a novel fully convolutional network, named as CoupleNet, to couple the global structure with local parts for object detection. Specifically, the object proposals obtained by the Region Proposal Network (RPN) are fed into the the coupling module which consists of two branches. One branch adopts the position-sensitive RoI (PSRoI) pooling to capture the local part information of the object, while the other employs the RoI pooling to encode the global and context information. Next, we design different coupling strategies and normalization ways to make full use of the complementary advantages between the global and local branches. Extensive experiments demonstrate the effectiveness of our approach. We achieve state-of-the-art results on all three challenging datasets, i.e. a mAP of 82.7% on VOC07, 80.4% on VOC12, and 34.4% on COCO. Codes will be made publicly available1.", "title": "" }, { "docid": "f032d36e081d2b5a4b0408b8f9b77954", "text": "BACKGROUND\nMalnutrition is still highly prevalent in developing countries. Schoolchildren may also be at high nutritional risk, not only under-five children. However, their nutritional status is poorly documented, particularly in urban areas. The paucity of information hinders the development of relevant nutrition programs for schoolchildren. The aim of this study carried out in Ouagadougou was to assess the nutritional status of schoolchildren attending public and private schools.\n\n\nMETHODS\nThe study was carried out to provide baseline data for the implementation and evaluation of the Nutrition Friendly School Initiative of WHO. Six intervention schools and six matched control schools were selected and a sample of 649 schoolchildren (48% boys) aged 7-14 years old from 8 public and 4 private schools were studied. Anthropometric and haemoglobin measurements, along with thyroid palpation, were performed. Serum retinol was measured in a random sub-sample of children (N = 173). WHO criteria were used to assess nutritional status. Chi square and independent t-test were used for proportions and mean comparisons between groups.\n\n\nRESULTS\nMean age of the children (48% boys) was 11.5 ± 1.2 years. Micronutrient malnutrition was highly prevalent, with 38.7% low serum retinol and 40.4% anaemia. The prevalence of stunting was 8.8% and that of thinness, 13.7%. The prevalence of anaemia (p = 0.001) and vitamin A deficiency (p < 0.001) was significantly higher in public than private schools. Goitre was not detected. 
Overweight/obesity was low (2.3%) and affected significantly more children in private schools (p = 0.009) and younger children (7-9 y) (p < 0.05). Thinness and stunting were significantly higher in peri-urban compared to urban schools (p < 0.05 and p = 0.004 respectively). Almost 15% of the children presented at least two nutritional deficiencies.\n\n\nCONCLUSION\nThis study shows that malnutrition and micronutrient deficiencies are also widely prevalent in schoolchildren in cities, and it underlines the need for nutrition interventions to target them.", "title": "" }, { "docid": "a00201271997f398ec8e5eb4160fbe2e", "text": "We present a hybrid algorithm for detection and tracking of text in natural scenes that goes beyond the full-detection approaches in terms of time performance optimization. A state-of-the-art scene text detection module based on Maximally Stable Extremal Regions (MSER) is used to detect text asynchronously, while on a separate thread detected text objects are tracked by MSER propagation. The cooperation of these two modules yields real time video processing at high frame rates even on low-resource devices.", "title": "" }, { "docid": "5215c4302ac93191dca1e8993f2ceac9", "text": "This paper presents the results of the WMT10 and MetricsMATR10 shared tasks,1 which included a translation task, a system combination task, and an evaluation task to investigate new MT metrics. We conducted a large-scale manual evaluation of 104 machine translation systems and 41 system combination entries. We used the ranking of these systems to measure how strongly automatic metrics correlate with human judgments of translation quality for 26 metrics. This year we also investigated increasing the number of human judgments by hiring non-expert annotators through Amazon's Mechanical Turk.", "title": "" }, { "docid": "a5e4199c16668f66656474f4eeb5d663", "text": "Advances in information technology, particularly in the e-business arena, are enabling firms to rethink their supply chain strategies and explore new avenues for inter-organizational cooperation. However, an incomplete understanding of the value of information sharing and physical flow coordination hinder these efforts. This research attempts to help fill these gaps by surveying prior research in the area, categorized in terms of information sharing and flow coordination. We conclude by highlighting gaps in the current body of knowledge and identifying promising areas for future research. Subject Areas: e-Business, Inventory Management, Supply Chain Management, and Survey Research.", "title": "" }, { "docid": "afbd52acb39600e8a0804f2140ebf4fc", "text": "This paper presents the case study of a non-intrusive porting of a monolithic C++ library for real-time 3D hand tracking, to the domain of edge-based computation. Towards a proof of concept, the case study considers a pair of workstations, a computationally powerful and a computationally weak one. By wrapping the C++ library in Java container and by capitalizing on a Java-based offloading infrastructure that supports both CPU and GPGPU computations, we are able to establish automatically the required server-client workflow that best addresses the resource allocation problem in the effort to execute from the weak workstation. As a result, the weak workstation can perform well at the task, despite lacking the sufficient hardware to do the required computations locally.
This is achieved by offloading computations which rely on GPGPU, to the powerful workstation, across the network that connects them. We show the edge-based computation challenges associated with the information flow of the ported algorithm, demonstrate how we cope with them, and identify what needs to be improved for achieving even better performance.", "title": "" }, { "docid": "18aa08888e4b2b412f154e47891b034d", "text": "Roughly 1.3 billion people in developing countries still live without access to reliable electricity. As expanding access using current technologies will accelerate global climate change, there is a strong need for novel solutions that displace fossil fuels and are financially viable for developing regions. A novel DC microgrid solution that is geared at maximizing efficiency and reducing system installation cost is described in this paper. Relevant simulation and experimental results, as well as a proposal for undertaking field-testing of the technical and economic viability of the microgrid system are presented.", "title": "" }, { "docid": "863202feb1410b177c6bb10ccc1fa43d", "text": "Multimedia retrieval plays an indispensable role in big data utilization. Past efforts mainly focused on single-media retrieval. However, the requirements of users are highly flexible, such as retrieving the relevant audio clips with one query of image. So challenges stemming from the “media gap,” which means that representations of different media types are inconsistent, have attracted increasing attention. Cross-media retrieval is designed for the scenarios where the queries and retrieval results are of different media types. As a relatively new research topic, its concepts, methodologies, and benchmarks are still not clear in the literature. To address these issues, we review more than 100 references, give an overview including the concepts, methodologies, major challenges, and open issues, as well as build up the benchmarks, including data sets and experimental results. Researchers can directly adopt the benchmarks to promptly evaluate their proposed methods. This will help them to focus on algorithm design, rather than the time-consuming compared methods and results. It is noted that we have constructed a new data set XMedia, which is the first publicly available data set with up to five media types (text, image, video, audio, and 3-D model). We believe this overview will attract more researchers to focus on cross-media retrieval and be helpful to them.", "title": "" }, { "docid": "b81b29c232fb9cb5dcb2dd7e31003d77", "text": "Attendance and academic success are directly related in educational institutions. The continual absence of students in lecture, practical and tutorial is one of the major problems of decadence in the performance of academic. The authorized person needs to prohibit truancy for solving the problem. In existing system, the attendance is recorded by calling of the students’ name, signing on paper, using smart card and so on. These methods are easy to fake and to give proxy for the absence student. For solving inconvenience, fingerprint based attendance system with notification to guardian is proposed. The attendance is recorded using fingerprint module and stored it to the database via SD card. This system can calculate the percentage of attendance record monthly and store the attendance record in database for one year or more. 
In this system, attendance is recorded two times for one day and then it will also send alert message using GSM module if the attendance of students don’t have eight times for one week. By sending the alert message to the respective individuals every week, necessary actions can be done early. It can also reduce the cost of SMS charge and also have more attention for guardians. The main components of this system are Fingerprint module, Microcontroller, GSM module and SD card with SD card module. This system has been developed using Arduino IDE, Eclipse and MySQL Server.", "title": "" }, { "docid": "c28b48557a4eda0d29200170435f2935", "text": "An important role is reserved for nuclear imaging techniques in the imaging of neuroendocrine tumors (NETs). Somatostatin receptor scintigraphy (SRS) with (111)In-DTPA-octreotide is currently the most important tracer in the diagnosis, staging and selection for peptide receptor radionuclide therapy (PRRT). In the past decade, different positron-emitting tomography (PET) tracers have been developed. The largest group is the (68)Gallium-labeled somatostatin analogs ((68)Ga-SSA). Several studies have demonstrated their superiority compared to SRS in sensitivity and specificity. Furthermore, patient comfort and effective dose are favorable for (68)Ga-SSA. Other PET targets like β-[(11)C]-5-hydroxy-L-tryptophan ((11)C-5-HTP) and 6-(18)F-L-3,4-dihydroxyphenylalanine ((18)F-DOPA) were developed recently. For insulinomas, glucagon-like peptide-1 receptor imaging is a promising new technique. The evaluation of response after PRRT and other therapies is a challenge. Currently, the official follow-up is performed with radiological imaging techniques. The role of nuclear medicine may increase with the newest tracers for PET. In this review, the different nuclear imaging techniques and tracers for the imaging of NETs will be discussed.", "title": "" }, { "docid": "da2f41ac808a5092eddf5edbcc12b94f", "text": "The use of the social media sites are growing rapidly to interact with the communities and to share the ideas among others. It may happen that most of the people dislike the ideas of others person views and make the use of the offensive language in their posts. Due to these offensive terms, many people especially youth and teenagers try to adopt such language and spread over the social media sites which may significantly affect the others people innocent minds. As offensive terms increasingly use by the people in highly manner, it is difficult to find or classify such offensive terms in real day to day life. To overcome from these problem, the proposed system analyze the offensive language and can classify the offensive sentence on a particular topic discussion using the support vector machine (SVM) as supervised classification in the data mining. The proposed system also can find the potential user by means of whom the offensive language spread among others and define the comparative analysis of SVM with Naive Bayes technique.", "title": "" }, { "docid": "58a2d35904f92d880ce40abbb2474873", "text": "Recently, researchers in the artificial neural network field have focused their attention on connectionist models composed by several hidden layers. In fact, experimental results and heuristic considerations suggest that deep architectures are more suitable than shallow ones for modern applications, facing very complex problems, e.g., vision and human language understanding. However, the actual theoretical results supporting such a claim are still few and incomplete. 
In this paper, we propose a new approach to study how the depth of feedforward neural networks impacts on their ability in implementing high complexity functions. First, a new measure based on topological concepts is introduced, aimed at evaluating the complexity of the function implemented by a neural network, used for classification purposes. Then, deep and shallow neural architectures with common sigmoidal activation functions are compared, by deriving upper and lower bounds on their complexity, and studying how the complexity depends on the number of hidden units and the used activation function. The obtained results seem to support the idea that deep networks actually implement functions of higher complexity, so that they are able, with the same number of resources, to address more difficult problems.", "title": "" }, { "docid": "e00295dc86476d1d350d11068439fe87", "text": "A 10-bit LCD column driver, consisting of piecewise linear digital to analog converters (DACs), is proposed. Piecewise linear compensation is utilized to reduce the die area and to increase the effective color depth. The data conversion is carried out by a resistor string type DAC (R-DAC) and a charge sharing DAC, which are used for the most significant bit and least significant bit data conversions, respectively. Gamma correction voltages are applied to the R-DAC to fit the inverse of the liquid crystal transmittance-voltage characteristic. The gamma correction can also be digitally fine-tuned in the timing controller or column drivers. A prototype 10-bit LCD column driver implemented in a 0.35-μm CMOS technology demonstrates that the settling time is within 3 μs and the average die size per channel is 0.063 mm2, smaller than those of column drivers based exclusively on R-DACs.", "title": "" }, { "docid": "c0281e28801214c6f40ca46443f65c25", "text": "Smart homes have become increasingly popular for IoT products and services with a lot of promises for improving the quality of life of individuals. Nevertheless, the heterogeneous, dynamic, and Internet-connected nature of this environment adds new concerns as private data becomes accessible, often without the householders' awareness. This accessibility alongside with the rising risks of data security and privacy breaches, makes smart home security a critical topic that deserves scrutiny. In this paper, we present an overview of the privacy and security challenges directed towards the smart home domain. We also identify constraints, evaluate solutions, and discuss a number of challenges and research issues where further investigation is required.", "title": "" }, { "docid": "6275c7fcf34e7f596c8943330071369a", "text": "While such techniques1 form the foundation for many contemporary software engineering practices, requirements analysis has to involve more than understanding and modeling the functions, data, and interfaces for a new system. In addition, the requirements engineer needs to explore alternatives and evaluate their feasibility and desirability with respect to business goals. For instance, suppose your task is to build a system to schedule meetings. First, you might want to explore whether the system should do most of the scheduling work or only record meetings. Then you might want to evaluate these requirements with respect to technical objectives (such as response time) and business objectives (such as meeting effectiveness, low costs, or system usability).
Once you select an alternative to best meet overall objectives, you can further refine the meaning of terms such as “meeting,” “participant,” or “scheduling conflict.” You can also define the basic functions the system will support. The need to explore alternatives and evaluate them with respect to business objectives has led to research on goal-oriented analysis.2,3 We argue here that goal-oriented analysis complements and strengthens traditional requirements analysis techniques by offering a means for capturing and evaluating alternative ways of meeting business goals. The remainder of this article details the five main steps that comprise goal-oriented analysis. These steps include goal analysis, softgoal analysis, softgoal correlation analysis, goal correlation analysis, and evaluation of alterfeature", "title": "" } ]
scidocsrr
dc7825dc7a3d9da17b5958af4df5afda
Achieving Flexible and Self-Contained Data Protection in Cloud Computing
[ { "docid": "347c3929efc37dee3230189e576f14ab", "text": "Attribute-based encryption (ABE) is a vision of public key encryption that allows users to encrypt and decrypt messages based on user attributes. This functionality comes at a cost. In a typical implementation, the size of the ciphertext is proportional to the number of attributes associated with it and the decryption time is proportional to the number of attributes used during decryption. Specifically, many practical ABE implementations require one pairing operation per attribute used during decryption. This work focuses on designing ABE schemes with fast decryption algorithms. We restrict our attention to expressive systems without systemwide bounds or limitations, such as placing a limit on the number of attributes used in a ciphertext or a private key. In this setting, we present the first key-policy ABE system where ciphertexts can be decrypted with a constant number of pairings. We show that GPSW ciphertexts can be decrypted with only 2 pairings by increasing the private key size by a factor of |Γ |, where Γ is the set of distinct attributes that appear in the private key. We then present a generalized construction that allows each system user to independently tune various efficiency tradeoffs to their liking on a spectrum where the extremes are GPSW on one end and our very fast scheme on the other. This tuning requires no changes to the public parameters or the encryption algorithm. Strategies for choosing an individualized user optimization plan are discussed. Finally, we discuss how these ideas can be translated into the ciphertext-policy ABE setting at a higher cost.", "title": "" } ]
[ { "docid": "288bf12e3949a568b1f7f0aad1f2d365", "text": "Process mining can be seen as the “missing link” between data mining and business process management. The lion's share of process mining research has been devoted to the discovery of procedural process models from event logs. However, often there are predefined constraints that (partially) describe the normative or expected process, e.g., “activity A should be followed by B” or “activities A and B should never be both executed”. A collection of such constraints is called a declarative process model. Although it is possible to discover such models based on event data, this paper focuses on aligning event logs and predefined declarative process models. Discrepancies between log and model are mediated such that observed log traces are related to paths in the model. The resulting alignments provide sophisticated diagnostics that pinpoint where deviations occur and how severe they are. Moreover, selected parts of the declarative process model can be used to clean and repair the event log before applying other process mining techniques. Our alignment-based approach for preprocessing and conformance checking using declarative process models has been implemented in ProM and has been evaluated using both synthetic logs and real-life logs from a Dutch hospital. © 2013 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "c5b050d6fff4e5ce4d4d79c10625e33f", "text": "Quadratic differentials naturally define analytic orientation fields on planar surfaces. We propose to model orientation fields of fingerprints by specifying quadratic differentials. Models for all fingerprint classes such as arches, loops and whorls are laid out. These models are parametrised by few, geometrically interpretable parameters which are invariant under Euclidean motions. We demonstrate their ability in adapting to given, observed orientation fields, and we compare them to existing models using the fingerprint images of the NIST Special Database 4. We also illustrate that these models allow for extrapolation into unobserved regions. This goes beyond the scope of earlier models for the orientation field as those are restricted to the observed planar fingerprint region. Within the framework of quadratic differentials we are able to verify analytically Penrose's formula for the singularities on a palm [L. S. Penrose, \"Dermatoglyphics,\" Scientific American, vol. 221, no. 6, pp. 73-84, 1969]. Potential applications of these models are the use of their parameters as indices of large fingerprint databases, as well as the definition of intrinsic coordinates for single fingerprint images.", "title": "" }, { "docid": "f20391d5eb79b32f06d31d27ad51bb6c", "text": "Fanconi anemia (FA) is a recessively inherited disease characterized by multiple symptoms including growth retardation, skeletal abnormalities, and bone marrow failure. The FA diagnosis is complicated due to the fact that the clinical manifestations are both diverse and variable. A chromosomal breakage test using a DNA cross-linking agent, in which cells from an FA patient typically exhibit an extraordinarily sensitive response, has been considered the gold standard for the ultimate diagnosis of FA. In the majority of FA patients the test results are unambiguous, although in some cases the presence of hematopoietic mosaicism may complicate interpretation of the data. However, some diagnostic overlap with other syndromes has previously been noted in cases with Nijmegen breakage syndrome.
Here we present results showing that misdiagnosis may also occur with patients suffering from two of the three currently known cohesinopathies, that is, Roberts syndrome (RBS) and Warsaw breakage syndrome (WABS). This complication may be avoided by scoring metaphase chromosomes-in addition to chromosomal breakage-for spontaneously occurring premature centromere division, which is characteristic for RBS and WABS, but not for FA.", "title": "" }, { "docid": "893c7a1694596d0c8d58b819500ff9f9", "text": "A recently introduced deep neural network (DNN) has achieved some unprecedented gains in many challenging automatic speech recognition (ASR) tasks. In this paper deep neural network hidden Markov model (DNN-HMM) acoustic models is introduced to phonotactic language recognition and outperforms artificial neural network hidden Markov model (ANN-HMM) and Gaussian mixture model hidden Markov model (GMM-HMM) acoustic model. Experimental results have confirmed that phonotactic language recognition system using DNN-HMM acoustic model yields relative equal error rate reduction of 28.42%, 14.06%, 18.70% and 12.55%, 7.20%, 2.47% for 30s, 10s, 3s comparing with the ANN-HMM and GMM-HMM approaches respectively on National Institute of Standards and Technology language recognition evaluation (NIST LRE) 2009 tasks.", "title": "" }, { "docid": "a441f01dae68134b419aa33f1f9588a6", "text": "In this work we present a technique for using natural language to help reinforcement learning generalize to unseen environments using neural machine translation techniques. These techniques are then integrated into policy shaping to make it more effective at learning in unseen environments. We evaluate this technique using the popular arcade game, Frogger, and show that our modified policy shaping algorithm improves over a Q-learning agent as well as a baseline version of policy shaping.", "title": "" }, { "docid": "12915285ce8f1dd1f902562fd8c7500d", "text": "Expanding view of minimal invasive surgery horizon reveals new practice areas for surgeons and patients. Laparoscopic inguinal hernia repair is an example in progress wondered by many patients and surgeons. Advantages in laparoscopic repair motivate surgeons to discover this popular field. In addition, patients search the most convenient surgical method for themselves today. Laparoscopic approaches to inguinal hernia surgery have become popular as a result of the development of experience about different laparoscopic interventions, and these techniques are increasingly used these days. As other laparoscopic surgical methods, experience is the most important point in order to obtain good results. This chapter aims to show technical details, pitfalls and the literature results about two methods that are commonly used in laparoscopic inguinal hernia repair.", "title": "" }, { "docid": "0a34ed8b01c6c700e7bb8bb15644590f", "text": "Almost all automatic semantic role labeling (SRL) systems rely on a preliminary parsing step that derives a syntactic structure from the sentence being analyzed. This makes the choice of syntactic representation an essential design decision. In this paper, we study the influence of syntactic representation on the performance of SRL systems. Specifically, we compare constituent-based and dependencybased representations for SRL of English in the FrameNet paradigm. 
Contrary to previous claims, our results demonstrate that the systems based on dependencies perform roughly as well as those based on constituents: For the argument classification task, dependencybased systems perform slightly higher on average, while the opposite holds for the argument identification task. This is remarkable because dependency parsers are still in their infancy while constituent parsing is more mature. Furthermore, the results show that dependency-based semantic role classifiers rely less on lexicalized features, which makes them more robust to domain changes and makes them learn more efficiently with respect to the amount of training data.", "title": "" }, { "docid": "352c61af854ffc6dab438e7a1be56fcb", "text": "Question-answering (QA) on video contents is a significant challenge for achieving human-level intelligence as it involves both vision and language in real-world settings. Here we demonstrate the possibility of an AI agent performing video story QA by learning from a large amount of cartoon videos. We develop a video-story learning model, i.e. Deep Embedded Memory Networks (DEMN), to reconstruct stories from a joint scene-dialogue video stream using a latent embedding space of observed data. The video stories are stored in a long-term memory component. For a given question, an LSTM-based attention model uses the long-term memory to recall the best question-story-answer triplet by focusing on specific words containing key information. We trained the DEMN on a novel QA dataset of children’s cartoon video series, Pororo. The dataset contains 16,066 scene-dialogue pairs of 20.5-hour videos, 27,328 fine-grained sentences for scene description, and 8,913 story-related QA pairs. Our experimental results show that the DEMN outperforms other QA models. This is mainly due to 1) the reconstruction of video stories in a scene-dialogue combined form that utilize the latent embedding and 2) attention. DEMN also achieved state-of-the-art results on the MovieQA benchmark.", "title": "" }, { "docid": "18b32aa0ffd8a3a7b84f9768d57b5cde", "text": "In this paper we propose a recognition system of medical concepts from free text clinical reports. Our approach tries to recognize also concepts which are named with local terminology, with medical writing scripts, short words, abbreviations and even spelling mistakes. We consider a clinical terminology ontology (Snomed-CT), as a dictionary of concepts. In a first step we obtain an embedding model using word2vec methodology from a big corpus database of clinical reports. Word vectors are positioned in the vector space such that words that share common contexts in the corpus are located in close proximity to one another in the space, and so the geometrical similarity can be considered a measure of semantic relation. We have considered 615513 emergency clinical reports from the Hospital \"Rafael Méndez\" in Lorca, Murcia. In these reports there are a lot of local language of the emergency domain, medical writing scripts, short words, abbreviations and even spelling mistakes. With the model obtained we represent the words and sentences as vectors, and by applying cosine similarity we identify which concepts of the ontology are named in the text. Finally, we represent the clinical reports (EHR) like a bag of concepts, and use this representation to search similar documents. 
The paper illustrates 1) how we build the word2vec model from the free text clinical reports, 2) How we extend the embedding from words to sentences, and 3) how we use the cosine similarity to identify concepts. The experimentation, and expert human validation, shows that: a) the concepts named in the text with the ontology terminology are well recognized, and b) others concepts that are not named with the ontology terminology are also recognized, obtaining a high precision and recall measures.", "title": "" }, { "docid": "4080a61019e992a89b9120de611ee844", "text": "An emotional version of Sapir-Whorf hypothesis suggests that differences in language emotionalities influence differences among cultures no less than conceptual differences. Conceptual contents of languages and cultures to significant extent are determined by words and their semantic differences; these could be borrowed among languages and exchanged among cultures. Emotional differences, as suggested in the paper, are related to grammar and mostly cannot be borrowed. Conceptual and emotional mechanisms of languages are considered here along with their functions in the mind and cultural evolution. A fundamental contradiction in human mind is considered: language evolution requires reduced emotionality, but “too low” emotionality makes language “irrelevant to life,” disconnected from sensory-motor experience. Neural mechanisms of these processes are suggested as well as their mathematical models: the knowledge instinct, the language instinct, the dual model connecting language and cognition, dynamic logic, neural modeling fields. Mathematical results are related to cognitive science, linguistics, and psychology. Experimental evidence and theoretical arguments are discussed. Approximate equations for evolution of human minds and cultures are obtained. Their solutions identify three types of cultures: \"conceptual\"-pragmatic cultures, in which emotionality of language is reduced and differentiation overtakes synthesis resulting in fast evolution at the price of uncertainty of values, self doubts, and internal crises; “traditional-emotional” cultures where differentiation lags behind synthesis, resulting in cultural stability at the price of stagnation; and “multi-cultural” societies combining fast cultural evolution and stability. Unsolved problems and future theoretical and experimental directions are discussed.", "title": "" }, { "docid": "a62dc7e25b050addad1c27d92deee8b7", "text": "Potentially dangerous cryptography errors are well-documented in many applications. Conventional wisdom suggests that many of these errors are caused by cryptographic Application Programming Interfaces (APIs) that are too complicated, have insecure defaults, or are poorly documented. To address this problem, researchers have created several cryptographic libraries that they claim are more usable, however, none of these libraries have been empirically evaluated for their ability to promote more secure development. This paper is the first to examine both how and why the design and resulting usability of different cryptographic libraries affects the security of code written with them, with the goal of understanding how to build effective future libraries. We conducted a controlled experiment in which 256 Python developers recruited from GitHub attempt common tasks involving symmetric and asymmetric cryptography using one of five different APIs. 
We examine their resulting code for functional correctness and security, and compare their results to their self-reported sentiment about their assigned library. Our results suggest that while APIs designed for simplicity can provide security benefits – reducing the decision space, as expected, prevents choice of insecure parameters – simplicity is not enough. Poor documentation, missing code examples, and a lack of auxiliary features such as secure key storage, caused even participants assigned to simplified libraries to struggle with both basic functional correctness and security. Surprisingly, the availability of comprehensive documentation and easy-to-use code examples seems to compensate for more complicated APIs in terms of functionally correct results and participant reactions, however, this did not extend to security results. We find it particularly concerning that for about 20% of functionally correct tasks, across libraries, participants believed their code was secure when it was not. Our results suggest that while new cryptographic libraries that want to promote effective security should offer a simple, convenient interface, this is not enough: they should also, and perhaps more importantly, ensure support for a broad range of common tasks and provide accessible documentation with secure, easy-to-use code examples.", "title": "" }, { "docid": "2b8d90c11568bb8b172eca20a48fd712", "text": "INTRODUCTION\nCancer incidence and mortality estimates for 25 cancers are presented for the 40 countries in the four United Nations-defined areas of Europe and for the European Union (EU-27) for 2012.\n\n\nMETHODS\nWe used statistical models to estimate national incidence and mortality rates in 2012 from recently-published data, predicting incidence and mortality rates for the year 2012 from recent trends, wherever possible. The estimated rates in 2012 were applied to the corresponding population estimates to obtain the estimated numbers of new cancer cases and deaths in Europe in 2012.\n\n\nRESULTS\nThere were an estimated 3.45 million new cases of cancer (excluding non-melanoma skin cancer) and 1.75 million deaths from cancer in Europe in 2012. The most common cancer sites were cancers of the female breast (464,000 cases), followed by colorectal (447,000), prostate (417,000) and lung (410,000). These four cancers represent half of the overall burden of cancer in Europe. The most common causes of death from cancer were cancers of the lung (353,000 deaths), colorectal (215,000), breast (131,000) and stomach (107,000). In the European Union, the estimated numbers of new cases of cancer were approximately 1.4 million in males and 1.2 million in females, and around 707,000 men and 555,000 women died from cancer in the same year.\n\n\nCONCLUSION\nThese up-to-date estimates of the cancer burden in Europe alongside the description of the varying distribution of common cancers at both the regional and country level provide a basis for establishing priorities to cancer control actions in Europe. The important role of cancer registries in disease surveillance and in planning and evaluating national cancer plans is becoming increasingly recognised, but needs to be further advocated. 
The estimates and software tools for further analysis (EUCAN 2012) are available online as part of the European Cancer Observatory (ECO) (http://eco.iarc.fr).", "title": "" }, { "docid": "fcd98a7540dd59e74ea71b589c255adb", "text": "Current Domain Adaptation (DA) methods based on deep architectures assume that the source samples arise from a single distribution. However, in practice most datasets can be regarded as mixtures of multiple domains. In these cases exploiting single-source DA methods for learning target classifiers may lead to sub-optimal, if not poor, results. In addition, in many applications it is difficult to manually provide the domain labels for all source data points, i.e. latent domains should be automatically discovered. This paper introduces a novel Convolutional Neural Network (CNN) architecture which (i) automatically discovers latent domains in visual datasets and (ii) exploits this information to learn robust target classifiers. Our approach is based on the introduction of two main components, which can be embedded into any existing CNN architecture: (i) a side branch that automatically computes the assignment of a source sample to a latent domain and (ii) novel layers that exploit domain membership information to appropriately align the distribution of the CNN internal feature representations to a reference distribution. We test our approach on publicly-available datasets, showing that it outperforms state-of-the-art multi-source DA methods by a large margin.", "title": "" }, { "docid": "4a75586965854ba2cba2fed18528e72b", "text": "Although there have been some promising results in computer lipreading, there has been a paucity of data on which to train automatic systems. However the recent emergence of the TCDTIMIT corpus, with around 6000 words, 59 speakers and seven hours of recorded audio-visual speech, allows the deployment of more recent techniques in audio-speech such as Deep Neural Networks (DNNs) and sequence discriminative training. In this paper we combine the DNN with a Hidden Markov Model (HMM) to the, so called, hybrid DNN-HMM configuration which we train using a variety of sequence discriminative training methods. This is then followed with a weighted finite state transducer. The conclusion is that the DNN offers very substantial improvement over a conventional classifier which uses a Gaussian Mixture Model (GMM) to model the densities even when optimised with Speaker Adaptive Training. Sequence adaptive training offers further improvements depending on the precise variety employed but those improvements are of the order of 10% improvement in word accuracy. Putting these two results together implies that lipreading is moving from something of rather esoteric interest to becoming a practical reality in the foreseeable future.", "title": "" }, { "docid": "7438ff346fa26661822a3a96c13c6d6e", "text": "As in any new technology adoption in organizations, big data solutions (BDS) also presents some security threat and challenges, especially due to the characteristics of big data itself the volume, velocity and variety of data. Even though many security considerations associated to the adoption of BDS have been publicized, it remains unclear whether these publicized facts have any actual impact on the adoption of the solutions. 
Hence, it is the intent of this research-in-progress to examine the security determinants by focusing on the influence that various technological factors in security, organizational security view and security related environmental factors have on BDS adoption. One technology adoption framework, the TOE (technological-organizational-environmental) framework is adopted as the main conceptual research framework. This research will be conducted using a Sequential Explanatory Mixed Method approach. Quantitative method will be used for the first part of the research, specifically using an online questionnaire survey. The result of this first quantitative process will then be further explored and complemented with a case study. Results generated from both quantitative and qualitative phases will then be triangulated and a cross-study synthesis will be conducted to form the final result and discussion.", "title": "" }, { "docid": "c675a2f1fed4ccb5708be895190b02cd", "text": "Decompilation is important for many security applications; it facilitates the tedious task of manual malware reverse engineering and enables the use of source-based security tools on binary code. This includes tools to find vulnerabilities, discover bugs, and perform taint tracking. Recovering high-level control constructs is essential for decompilation in order to produce structured code that is suitable for human analysts and sourcebased program analysis techniques. State-of-the-art decompilers rely on structural analysis, a pattern-matching approach over the control flow graph, to recover control constructs from binary code. Whenever no match is found, they generate goto statements and thus produce unstructured decompiled output. Those statements are problematic because they make decompiled code harder to understand and less suitable for program analysis. In this paper, we present DREAM, the first decompiler to offer a goto-free output. DREAM uses a novel patternindependent control-flow structuring algorithm that can recover all control constructs in binary programs and produce structured decompiled code without any goto statement. We also present semantics-preserving transformations that can transform unstructured control flow graphs into structured graphs. We demonstrate the correctness of our algorithms and show that we outperform both the leading industry and academic decompilers: Hex-Rays and Phoenix. We use the GNU coreutils suite of utilities as a benchmark. Apart from reducing the number of goto statements to zero, DREAM also produced more compact code (less lines of code) for 72.7% of decompiled functions compared to Hex-Rays and 98.8% compared to Phoenix. We also present a comparison of Hex-Rays and DREAM when decompiling three samples from Cridex, ZeusP2P, and SpyEye malware families.", "title": "" }, { "docid": "e0f7c82754694084c6d05a2d37be3048", "text": "Introducing variability while maintaining coherence is a core task in learning to generate utterances in conversation. Standard neural encoder-decoder models and their extensions using conditional variational autoencoder often result in either trivial or digressive responses. To overcome this, we explore a novel approach that injects variability into neural encoder-decoder via the use of external memory as a mixture model, namely Variational Memory Encoder-Decoder (VMED). By associating each memory read with a mode in the latent mixture distribution at each timestep, our model can capture the variability observed in sequential data such as natural conversations. 
We empirically compare the proposed model against other recent approaches on various conversational datasets. The results show that VMED consistently achieves significant improvement over others in both metricbased and qualitative evaluations.", "title": "" }, { "docid": "1a732de3138d5771bea1590bb36f4db6", "text": "Implanted sensors and actuators in the human body promise in-situ health monitoring and rapid advancements in personalized medicine. We propose a new paradigm where such implants may communicate wirelessly through a technique called as galvanic coupling, which uses weak electrical signals and the conduction properties of body tissues. While galvanic coupling overcomes the problem of massive absorption of RF waves in the body, the unique intra-body channel raises several questions on the topology of the implants and the external (i.e., on skin) data collection nodes. This paper makes the first contributions towards (i) building an energy-efficient topology through optimal placement of data collection points/relays using measurement-driven tissue channel models, and (ii) balancing the energy consumption over the entire implant network so that the application needs are met. We achieve this via a two-phase iterative clustering algorithm for the implants and formulate an optimization problem that decides the position of external data-gathering points. Our theoretical results are validated via simulations and experimental studies on real tissues, with demonstrated increase in the network lifetime.", "title": "" }, { "docid": "d2d84d12216464e361f417c397212e63", "text": "Academic search engines and digital libraries provide convenient online search and access facilities for scientific publications. However, most existing systems do not include books in their collections although several books are freely available online. Academic books are different from papers in terms of their length, contents and structure. We argue that accounting for academic books is important in understanding and assessing scientific impact. We introduce an open-book search engine that extracts and indexes metadata, contents, and bibliography from online PDF book documents. To the best of our knowledge, no previous work gives a systematical study on building a search engine for books.\n We propose a hybrid approach for extracting title and authors from a book that combines results from CiteSeer, a rule based extractor, and a SVM based extractor, leveraging web knowledge. For \"table of contents\" recognition, we propose rules based on multiple regularities based on numbering and ordering. In addition, we study bibliography extraction and citation parsing for a large dataset of books. Finally, we use the multiple fields available in books to rank books in response to search queries. Our system can effectively extract metadata and contents from large collections of online books and provides efficient book search and retrieval facilities.", "title": "" } ]
scidocsrr
2ea568f59e106cacc0f641e706e5cbe4
An In-depth Comparison of Subgraph Isomorphism Algorithms in Graph Databases
[ { "docid": "b307d2577dcdd13236446c2938e36b73", "text": "We invesrigare new appmaches for frequent graph-based patrem mining in graph darasers andpmpose a novel ofgorirhm called gSpan (graph-based,Tubsmrure parrern mining), which discovers frequenr subsrrucrures z h o u r candidate generorion. &an builds a new lexicographic or. der among graphs, and maps each graph to a unique minimum DFS code as irs canonical label. Based on rhis lexicographic orde,: &an adopts rhe deprh-jrsr search srraregy ro mine frequenr cannecred subgraphs eflciently. Our performance study shows rhar gSpan subsianriolly outperforms previous algorithm, somerimes by an order of magnirude.", "title": "" } ]
[ { "docid": "a35aa35c57698d2518e3485ec7649c66", "text": "The review paper describes the application of various image processing techniques for automatic detection of glaucoma. Glaucoma is a neurodegenerative disorder of the optic nerve, which causes partial loss of vision. Large number of people suffers from eye diseases in rural and semi urban areas all over the world. Current diagnosis of retinal disease relies upon examining retinal fundus image using image processing. The key image processing techniques to detect eye diseases include image registration, image fusion, image segmentation, feature extraction, image enhancement, morphology, pattern matching, image classification, analysis and statistical measurements. KeywordsImage Registration; Fusion; Segmentation; Statistical measures; Morphological operation; Classification Full Text: http://www.ijcsmc.com/docs/papers/November2013/V2I11201336.pdf", "title": "" }, { "docid": "2943f1d374a6a63ef1b140a83e5a8caf", "text": "Gill morphometric and gill plasticity of the air-breathing striped catfish (Pangasianodon hypophthalmus) exposed to different temperatures (present day 27°C and future 33°C) and different air saturation levels (92% and 35%) during 6weeks were investigated using vertical sections to estimate the respiratory lamellae surface areas, harmonic mean barrier thicknesses, and gill component volumes. Gill respiratory surface area (SA) and harmonic mean water - blood barrier thicknesses (HM) of the fish were strongly affected by both environmental temperature and oxygen level. Thus initial values for 27°C normoxic fish (12.4±0.8g) were 211.8±21.6mm2g-1 and 1.67±0.12μm for SA and HM respectively. After 5weeks in same conditions or in the combinations of 33°C and/or PO2 of 55mmHg, this initial surface area scaled allometrically with size for the 33°C hypoxic group, whereas branchial SA was almost eliminated in the 27°C normoxic group, with other groups intermediate. In addition, elevated temperature had an astounding effect on growth with the 33°C group growing nearly 8-fold faster than the 27°C fish.", "title": "" }, { "docid": "d52bfde050e6535645c324e7006a50e7", "text": "Modern machine learning algorithms are increasingly computationally demanding, requiring specialized hardware and distributed computation to achieve high performance in a reasonable time frame. Many hyperparameter search algorithms have been proposed for improving the efficiency of model selection, however their adaptation to the distributed compute environment is often ad-hoc. We propose Tune, a unified framework for model selection and training that provides a narrow-waist interface between training scripts and search algorithms. We show that this interface meets the requirements for a broad range of hyperparameter search algorithms, allows straightforward scaling of search to large clusters, and simplifies algorithm implementation. We demonstrate the implementation of several state-of-the-art hyperparameter search algorithms in Tune. Tune is available at http://ray.readthedocs.io/en/latest/tune.html.", "title": "" }, { "docid": "fab399a613acab4965fc29dd178ecb80", "text": "Maritime transportation is accountable for 2.7% of the worlds CO emissions and the liner shipping industry is committed to a slow steaming policy to provide low cost and environmentally conscious global transport of goods without compromising the level of service. 
The potential for making cost effective and energy efficient liner shipping networks using operations research is huge and neglected. The implementation of logistic planning tools based upon operations research has enhanced performance of both airlines, railways and general transportation companies, but within the field of liner shipping very little operations research has been done. We believe that access to domain knowledge and data is an entry barrier for researchers to approach the important liner shipping network design problem. This paper presents a thorough description of the liner shipping domain applied to network design along with a rich integer programming model based on the services, that constitute the fixed schedule of a liner shipping company. The model may be relaxed as well as decomposed. The design of a benchmark suite of data instances to reflect the business structure of a global liner shipping network is discussed. The paper is motivated by providing easy access to the domain and the data sources of liner shipping for operations researchers in general. A set of data instances with offset in real world data is presented and made available upon request. Future work is to provide computational results for the instances.", "title": "" }, { "docid": "344be59c5bb605dec77e4d7bd105d899", "text": "Recently, style transfer has received a lot of attention. While much of this research has aimed at speeding up processing, the approaches are still lacking from a principled, art historical standpoint: a style is more than just a single image or an artist, but previous work is limited to only a single instance of a style or shows no benefit from more images. Moreover, previous work has relied on a direct comparison of art in the domain of RGB images or on CNNs pre-trained on ImageNet, which requires millions of labeled object bounding boxes and can introduce an extra bias, since it has been assembled without artistic consideration. To circumvent these issues, we propose a style-aware content loss, which is trained jointly with a deep encoder-decoder network for real-time, high-resolution stylization of images and videos. We propose a quantitative measure for evaluating the quality of a stylized image and also have art historians rank patches from our approach against those from previous work. These and our qualitative results ranging from small image patches to megapixel stylistic images and videos show that our approach better captures the subtle nature in which a style affects content.", "title": "" }, { "docid": "aaf69cb42fc9d17cf0ae3b80a55f12d6", "text": "Bringing Blockchain technology and business process management together, we follow the Design Science Research approach and design, implement, and evaluate a Blockchain prototype for crossorganizational workflow management together with a German bank. For the use case of a documentary letter of credit we describe the status quo of the process, identify areas of improvement, implement a Blockchain solution, and compare both workflows. The prototype illustrates that the process, as of today paper-based and with high manual effort, can be significantly improved. Our research reveals that a tamper-proof process history for improved auditability, automation of manual process steps and the decentralized nature of the system can be major advantages of a Blockchain solution for crossorganizational workflow management. 
Further, our research provides insights into how Blockchain technology can be used for business process management in general.", "title": "" }, { "docid": "adcf1d64887caa6c0811878460018a31", "text": "For many networking applications, recent data is more significant than older data, motivating the need for sliding window solutions. Various capabilities, such as DDoS detection and load balancing, require insights about multiple metrics including Bloom filters, per-flow counting, count distinct and entropy estimation. In this work, we present a unified construction that solves all the above problems in the sliding window model. Our single solution offers a better space to accuracy tradeoff than the state-of-the-art for each of these individual problems! We show this both analytically and by running multiple real Internet backbone and datacenter packet traces.", "title": "" }, { "docid": "4387549562fe2c0833b002d73d9a8330", "text": "Complex numbers have long been favoured for digital signal processing, yet complex representations rarely appear in deep learning architectures. RNNs, widely used to process time series and sequence information, could greatly benefit from complex representations. We present a novel complex gate recurrent cell. When used together with norm-preserving state transition matrices, our complex gated RNN exhibits excellent stability and convergence properties. We demonstrate competitive performance of our complex gated RNN on the synthetic memory and adding task, as well as on the real-world task of human motion prediction.", "title": "" }, { "docid": "1468c2bc1073f5f72226fddf4c3bc0ad", "text": "To maximize network lifetime in Wireless Sensor Networks (WSNs), the paths for data transfer are selected in such a way that the total energy consumed along the path is minimized. To support high scalability and better data aggregation, sensor nodes are often grouped into disjoint, non-overlapping subsets called clusters. Clusters create hierarchical WSNs which incorporate efficient utilization of the limited resources of sensor nodes and thus extend network lifetime. The objective of this paper is to present a state-of-the-art survey on clustering algorithms reported in the literature of WSNs. Our paper presents a taxonomy of energy-efficient clustering algorithms in WSNs. We also present a timeline and description of LEACH and its descendants in WSNs.", "title": "" }, { "docid": "5d3275250a345b5f8c8a14a394025a31", "text": "Railway infrastructure monitoring is a vital task to ensure rail transportation safety. A rail failure could result in not only a considerable impact on train delays and maintenance costs, but also on the safety of passengers. In this article, the aim is to assess the risk of a rail failure by analyzing a type of rail surface defect called squats that are detected automatically among the huge number of records from video cameras. We propose an image processing approach for automatic detection of squats, especially severe types that are prone to rail breaks. We measure the visual length of the squats and use them to model the failure risk. For the assessment of the rail failure risk, we estimate the probability of rail failure based on the growth of squats. Moreover, we perform severity and crack growth analyses to consider the impact of rail traffic loads on defects in three different growth scenarios. The failure risk estimations are provided for several samples of squats with different crack growth lengths on a busy rail track of the Dutch railway network. 
The results illustrate the practicality and efficiency of the proposed approach.", "title": "" }, { "docid": "cb62164bc5a582be0c45df28d8ebb797", "text": "Android rooting enables device owners to freely customize their own devices and run useful apps that require root privileges. While useful, rooting weakens the security of Android devices and opens the door for malware to obtain privileged access easily. Thus, several rooting prevention mechanisms have been introduced by vendors, and sensitive or high-value mobile apps perform rooting detection to mitigate potential security exposures on rooted devices. However, there is a lack of understanding whether existing rooting prevention and detection methods are effective. To fill this knowledge gap, we studied existing Android rooting methods and performed manual and dynamic analysis on 182 selected apps, in order to identify current rooting detection methods and evaluate their effectiveness. Our results suggest that these methods are ineffective. We conclude that reliable methods for detecting rooting must come from integrity-protected kernels or trusted execution environments, which are difficult to bypass.", "title": "" }, { "docid": "f4b271c7ee8bfd9f8aa4d4cf84c4efd4", "text": "Today, and possibly for a long time to come, the full driving task is too complex an activity to be fully formalized as a sensing-acting robotics system that can be explicitly solved through model-based and learning-based approaches in order to achieve full unconstrained vehicle autonomy. Localization, mapping, scene perception, vehicle control, trajectory optimization, and higher-level planning decisions associated with autonomous vehicle development remain full of open challenges. This is especially true for unconstrained, real-world operation where the margin of allowable error is extremely small and the number of edge-cases is extremely large. Until these problems are solved, human beings will remain an integral part of the driving task, monitoring the AI system as it performs anywhere from just over 0% to just under 100% of the driving. The governing objectives of the MIT Autonomous Vehicle Technology (MIT-AVT) study are to (1) undertake large-scale real-world driving data collection that includes high-definition video to fuel the development of deep learning based internal and external perception systems, (2) gain a holistic understanding of how human beings interact with vehicle automation technology by integrating video data with vehicle state data, driver characteristics, mental models, and self-reported experiences with technology, and (3) identify how technology and other factors related to automation adoption and use can be improved in ways that save lives. In pursuing these objectives, we have instrumented 21 Tesla Model S and Model X vehicles, 2 Volvo S90 vehicles, 2 Range Rover Evoque, and 2 Cadillac CT6 vehicles for both long-term (over a year per driver) and medium term (one month per driver) naturalistic driving data collection. Furthermore, we are continually developing new methods for analysis of the massive-scale dataset collected from the instrumented vehicle fleet. The recorded data streams include IMU, GPS, CAN messages, and high-definition video streams of the driver face, the driver cabin, the forward roadway, and the instrument cluster (on select vehicles). The study is on-going and growing. To date, we have 99 participants, 11,846 days of participation, 405,807 miles, and 5.5 billion video frames. 
This paper presents the design of the study, the data collection hardware, the processing of the data, and the computer vision algorithms currently being used to extract actionable knowledge from the data. MIT Autonomous Vehicle", "title": "" }, { "docid": "f3e4892a0cc4bfe895d4b3c26440ee9a", "text": "A compact dual band-notched ultra-wideband (UWB) multiple-input multiple-output (MIMO) antenna with high isolation is designed on a FR4 substrate (27 × 30 × 0.8 mm3). To improve the input impedance matching and increase the isolation for the frequencies ≥ 4.0 GHz, the two antenna elements with compact size of 5.5 × 11 mm2 are connected to the two protruded ground parts, respectively. A 1/3 λ rectangular metal strip producing a 1.0 λ loop path with the corresponding antenna element is used to obtain the notched frequency from 5.15 to 5.85 GHz. For the rejected band of 3.30-3.70 GHz, a 1/4 λ open slot is etched into the radiator. Moreover, the two protruded ground parts are connected by a compact metal strip to reduce the mutual coupling for the band of 3.0-4.0 GHz. The simulated and measured results show a bandwidth with |S11| ≤ -10 dB, |S21| ≤ -20 dB and frequency ranged from 3.0 to 11.0 GHz excluding the two rejected bands, is achieved, and all the measured and calculated results show the proposed UWB MIMO antenna is a good candidate for UWB MIMO systems.", "title": "" }, { "docid": "eac322eae08da165b436308336aac37a", "text": "The potential of BIM is generally recognized in the construction industry, but the practical application of BIM for management purposes is, however, still limited among contractors. The objective of this study is to review the current scheduling process of construction in light of BIM-based scheduling, and to identify how it should be incorporated into current practice. The analysis of the current scheduling processes identifies significant discrepancies between the overall and the detailed levels of scheduling. The overall scheduling process is described as an individual endeavor with limited and unsystematic sharing of knowledge within and between projects. Thus, the reuse of scheduling data and experiences are inadequate, preventing continuous improvements of the overall schedules. Besides, the overall scheduling process suffers from lack of information, caused by uncoordinated and unsynchronized overlap of the design and construction processes. Consequently, the overall scheduling is primarily based on intuition and personal experiences, rather than well founded figures of the specific project. Finally, the overall schedule is comprehensive and complex, and consequently, difficult to overview and communicate. Scheduling on the detailed level, on the other hand, follows a stipulated approach to scheduling, i.e. the Last Planner System (LPS), which is characterized by involvement of all actors in the construction phase. Thus, the major challenge when implementing BIM-based scheduling is to improve overall scheduling, which in turn, can secure a better starting point of the LPS. The study points to the necessity of involving subcontractors and manufactures in the earliest phases of the project in order to create project specific information for the overall schedule. 
In addition, the design process should be prioritized and coordinated with each craft, a process library should be introduced to promote transfer of knowledge and continuous improvements, and information flow between design and scheduling processes must change from push to pull.", "title": "" }, { "docid": "814c69ae155f69ee481255434039b00c", "text": "The introduction of semantics on the web will lead to a new generation of services based on content rather than on syntax. Search engines will provide topic-based searches, retrieving resources conceptually related to the user informational need. Queries will be expressed in several ways, and will be mapped on the semantic level defining topics that must be retrieved from the web. Moving towards this new Web era, effective semantic search engines will provide means for successful searches avoiding the heavy burden experimented by users in a classical query-string based search task. In this paper we propose a search engine based on web resource semantics. Resources to be retrieved are semantically annotated using an existing open semantic elaboration platform and an ontology is used to describe the knowledge domain into which perform queries. Ontology navigation provides semantic level reasoning in order to retrieve meaningful resources with respect to a given information request.", "title": "" }, { "docid": "8d7e8ee0f6305d50276d25ce28bcdf9c", "text": "The advancement of visual sensing has introduced better capturing of the discrete information from a complex, crowded scene for assisting in the analysis. However, after reviewing existing system, we find that majority of the work carried out till date is associated with significant problems in modeling event detection as well as reviewing abnormality of the given scene. Therefore, the proposed system introduces a model that is capable of identifying the degree of abnormality for an event captured on the crowded scene using unsupervised training methodology. The proposed system contributes to developing a novel region-wise repository to extract the contextual information about the discrete-event for a given scene. The study outcome shows highly improved the balance between the computational time and overall accuracy as compared to the majority of the standard research work emphasizing on event", "title": "" }, { "docid": "ecf7446713dc92394c16241aa31a8dba", "text": "Accelerated graphics cards, or Graphics Processing Units (GPUs), have become ubiquitous in recent years. On the right kinds of problems, GPUs greatly surpass CPUs in terms of raw performance. However, because they are difficult to program, GPUs are used only for a narrow class of special-purpose applications; the raw processing power made available by GPUs is unused most of the time.\n This paper presents an extension to a Java JIT compiler that executes suitable code on the GPU instead of the CPU. Both static and dynamic features are used to decide whether it is feasible and beneficial to off-load a piece of code on the GPU. The paper presents a cost model that balances the speedup available from the GPU against the cost of transferring input and output data between main memory and GPU memory. The cost model is parameterized so that it can be applied to different hardware combinations. 
The paper also presents ways to overcome several obstacles to parallelization inherent in the design of the Java bytecode language: unstructured control flow, the lack of multi-dimensional arrays, the precise exception semantics, and the proliferation of indirect references.", "title": "" }, { "docid": "21c84ab0fb698ad2619e0afc6db44e1a", "text": "Nanoscale windows in graphene (nanowindows) have the ability to switch between open and closed states, allowing them to become selective, fast, and energy-efficient membranes for molecular separations. These special pores, or nanowindows, are not electrically neutral due to passivation of the carbon edges under ambient conditions, becoming flexible atomic frameworks with functional groups along their rims. Through computer simulations of oxygen, nitrogen, and argon permeation, here we reveal the remarkable nanowindow behavior at the atomic scale: flexible nanowindows have a thousand times higher permeability than conventional membranes and at least twice their selectivity for oxygen/nitrogen separation. Also, weakly interacting functional groups open or close the nanowindow with their thermal vibrations to selectively control permeation. This selective fast permeation of oxygen, nitrogen, and argon in very restricted nanowindows suggests alternatives for future air separation membranes. Graphene with nanowindows can have 1000 times higher permeability and four times the selectivity for air separation than conventional membranes, Vallejos-Burgos et al. reveal by molecular simulation, due to flexibility at the nanoscale and thermal vibrations of the nanowindows' functional groups.", "title": "" }, { "docid": "93e43e11c10e39880c68d2fb0fccd634", "text": "In this paper, we present a novel probabilistic generative model for multi-object traffic scene understanding from movable platforms which reasons jointly about the 3D scene layout as well as the location and orientation of objects in the scene. In particular, the scene topology, geometry, and traffic activities are inferred from short video sequences. Inspired by the impressive driving capabilities of humans, our model does not rely on GPS, lidar, or map knowledge. Instead, it takes advantage of a diverse set of visual cues in the form of vehicle tracklets, vanishing points, semantic scene labels, scene flow, and occupancy grids. For each of these cues, we propose likelihood functions that are integrated into a probabilistic generative model. We learn all model parameters from training data using contrastive divergence. Experiments conducted on videos of 113 representative intersections show that our approach successfully infers the correct layout in a variety of very challenging scenarios. To evaluate the importance of each feature cue, experiments using different feature combinations are conducted. Furthermore, we show how by employing context derived from the proposed method we are able to improve over the state-of-the-art in terms of object detection and object orientation estimation in challenging and cluttered urban environments.", "title": "" }, { "docid": "651db77789c5f5edaa933534255c88d6", "text": "Abstract: Rapid increase in internet users along with growing power of online review sites and social media has given birth to Sentiment analysis or Opinion mining, which aims at determining what other people think and comment. Sentiments or Opinions contain public generated content about products, services, policies and politics. 
People are usually interested to seek positive and negative opinions containing likes and dislikes, shared by users for features of particular product or service. Therefore product features or aspects have got significant role in sentiment analysis. In addition to sufficient work being performed in text analytics, feature extraction in sentiment analysis is now becoming an active area of research. This review paper discusses existing techniques and approaches for feature extraction in sentiment analysis and opinion mining. In this review we have adopted a systematic literature review process to identify areas well focused by researchers, least addressed areas are also highlighted giving an opportunity to researchers for further work. We have also tried to identify most and least commonly used feature selection techniques to find research gaps for future work.", "title": "" } ]
scidocsrr
a54d1e9f745295cc76b789e03f97e8b6
The Demographics of Mail Search and their Application to Query Suggestion
[ { "docid": "99f93328d19ac240378c5cfe08cf9f9e", "text": "Email classification is still a mostly manual task. Consequently, most Web mail users never define a single folder. Recently however, automatic classification offering the same categories to all users has started to appear in some Web mail clients, such as AOL or Gmail. We adopt this approach, rather than previous (unsuccessful) personalized approaches because of the change in the nature of consumer email traffic, which is now dominated by (non-spam) machine-generated email. We propose here a novel approach for (1) automatically distinguishing between personal and machine-generated email and (2) classifying messages into latent categories, without requiring users to have defined any folder. We report how we have discovered that a set of 6 \"latent\" categories (one for human- and the others for machine-generated messages) can explain a significant portion of email traffic. We describe in details the steps involved in building a Web-scale email categorization system, from the collection of ground-truth labels, the selection of features to the training of models. Experimental evaluation was performed on more than 500 billion messages received during a period of six months by users of Yahoo mail service, who elected to be part of such research studies. Our system achieved precision and recall rates close to 90% and the latent categories we discovered were shown to cover 70% of both email traffic and email search queries. We believe that these results pave the way for a change of approach in the Web mail industry, and could support the invention of new large-scale email discovery paradigms that had not been possible before.", "title": "" }, { "docid": "57ba9e280303078261d4384dd9407f92", "text": "People often repeat Web searches, both to find new information on topics they have previously explored and to re-find information they have seen in the past. The query associated with a repeat search may differ from the initial query but can nonetheless lead to clicks on the same results. This paper explores repeat search behavior through the analysis of a one-year Web query log of 114 anonymous users and a separate controlled survey of an additional 119 volunteers. Our study demonstrates that as many as 40% of all queries are re-finding queries. Re-finding appears to be an important behavior for search engines to explicitly support, and we explore how this can be done. We demonstrate that changes to search engine results can hinder re-finding, and provide a way to automatically detect repeat searches and predict repeat clicks.", "title": "" } ]
[ { "docid": "cf8915016c6a6d6537fbd368238c81f3", "text": "A 5-year-old boy was followed up with migratory spermatic cord and a perineal tumour at the paediatric department after birth. He was born by Caesarean section at 38 weeks in viviparity. Weight at birth was 3650 g. Although a meningocele in the sacral region was found by MRI, there were no symptoms in particular and no other deformity was found. When he was 4 years old, he presented to our department with the perinal tumour. On examination, a slender scrotum-like tumour covering the centre of the perineal lesion, along with inflammation and ulceration around the skin of the anus, was observed. Both testes and scrotums were observed in front of the tumour (Figure 1a). An excision of the tumour and Z-plasty of the perineal lesion were performed. The subcutaneous tissue consisted of adipose tissue-like lipoma and was resected along with the tumour (Figure 1b). A Z-plasty was carefully performed in order to maintain the lefteright symmetry of the", "title": "" }, { "docid": "af9c94a8d4dcf1122f70f5d0b90a247f", "text": "New cloud services are being developed to support a wide variety of real-life applications. In this paper, we introduce a new cloud service: industrial automation, which includes different functionalities from feedback control and telemetry to plant optimization and enterprise management. We focus our study on the feedback control layer as the most time-critical and demanding functionality. Today's large-scale industrial automation projects are expensive and time-consuming. Hence, we propose a new cloud-based automation architecture, and we analyze cost and time savings under the proposed architecture. We show that significant cost and time savings can be achieved, mainly due to the virtualization of controllers and the reduction of hardware cost and associated labor. However, the major difficulties in providing cloud-based industrial automation systems are timeliness and reliability. Offering automation functionalities from the cloud over the Internet puts the controlled processes at risk due to varying communication delays and potential failure of virtual machines and/or links. Thus, we design an adaptive delay compensator and a distributed fault tolerance algorithm to mitigate delays and failures, respectively. We theoretically analyze the performance of the proposed architecture when compared to the traditional systems and prove zero or negligible change in performance. To experimentally evaluate our approach, we implement our controllers on commercial clouds and use them to control: (i) a physical model of a solar power plant, where we show that the fault-tolerance algorithm effectively makes the system unaware of faults, and (ii) industry-standard emulation with large injected delays and disturbances, where we show that the proposed cloud-based controllers perform indistinguishably from the best-known counterparts: local controllers.", "title": "" }, { "docid": "7d0ebf939deed43253d5360e325c3e8e", "text": "Roughly speaking, clustering evolving networks aims at detecting structurally dense subgroups in networks that evolve over time. This implies that the subgroups we seek for also evolve, which results in many additional tasks compared to clustering static networks. We discuss these additional tasks and difficulties resulting thereof and present an overview on current approaches to solve these problems. 
We focus on clustering approaches in online scenarios, i.e., approaches that incrementally use structural information from previous time steps in order to incorporate temporal smoothness or to achieve low running time. Moreover, we describe a collection of real world networks and generators for synthetic data that are often used for evaluation.", "title": "" }, { "docid": "53dc606897bd6388c729cc8138027b31", "text": "Abstract|This paper presents transient stability and power ow models of Thyristor Controlled Reactor (TCR) and Voltage Sourced Inverter (VSI) based Flexible AC Transmission System (FACTS) Controllers. Models of the Static VAr Compensator (SVC), the Thyristor Controlled Series Compensator (TCSC), the Static VAr Compensator (STATCOM), the Static Synchronous Source Series Compensator (SSSC), and the Uni ed Power Flow Controller (UPFC) appropriate for voltage and angle stability studies are discussed in detail. Validation procedures obtained for a test system with a detailed as well as a simpli ed UPFC model are also presented and brie y discussed.", "title": "" }, { "docid": "b1e4fb97e4b1d31e4064f174e50f17d3", "text": "We propose an inverse reinforcement learning (IRL) approach using Deep QNetworks to extract the rewards in problems with large state spaces. We evaluate the performance of this approach in a simulation-based autonomous driving scenario. Our results resemble the intuitive relation between the reward function and readings of distance sensors mounted at different poses on the car. We also show that, after a few learning rounds, our simulated agent generates collision-free motions and performs human-like lane change behaviour.", "title": "" }, { "docid": "58d19a5460ce1f830f7a5e2cb1c5ebca", "text": "In multi-source sequence-to-sequence tasks, the attention mechanism can be modeled in several ways. This topic has been thoroughly studied on recurrent architectures. In this paper, we extend the previous work to the encoder-decoder attention in the Transformer architecture. We propose four different input combination strategies for the encoderdecoder attention: serial, parallel, flat, and hierarchical. We evaluate our methods on tasks of multimodal translation and translation with multiple source languages. The experiments show that the models are able to use multiple sources and improve over single source baselines.", "title": "" }, { "docid": "a48a88e3e6e35779392f5dea132d49f2", "text": "Community detection emerged as an important exploratory task in complex networks analysis across many scientific domains. Many methods have been proposed to solve this problem, each one with its own mechanism and sometimes with a different notion of community. In this article, we bring most common methods in the literature together in a comparative approach and reveal their performances in both real-world networks and synthetic networks. Surprisingly, many of those methods discovered better communities than the declared ground-truth communities in terms of some topological goodness features, even on benchmarking networks with built-in communities. We illustrate different structural characteristics that these methods could identify in order to support users to choose an appropriate method according to their specific requirements on different structural qualities.", "title": "" }, { "docid": "d0ec144c5239b532987157a64d499f61", "text": "(1) Disregard pseudo-queries that do not retrieve their pseudo-relevant document in the top nrank. 
(2) Select the top nneg retrieved documents are negative training examples. General Approach: Generate mock interaction embeddings and filter training examples down to those the most nearly match a set of template query-document pairs (given a distance function). Since interaction embeddings specific to what a model “sees,” interaction filters are model-specific.", "title": "" }, { "docid": "37482eea1f087101011ba48ac8923ecb", "text": "Routers classify packets to determine which flow they belong to, and to decide what service they should receive. Classification may, in general, be based on an arbitrary number of fields in the packet header. Performing classification quickly on an arbitrary number of fields is known to be difficult, and has poor worst-case performance. In this paper, we consider a number of classifiers taken from real networks. We find that the classifiers contain considerable structure and redundancy that can be exploited by the classification algorithm. In particular, we find that a simple multi-stage classification algorithm, called RFC (recursive flow classification), can classify 30 million packets per second in pipelined hardware, or one million packets per second in software.", "title": "" }, { "docid": "f1f424a703eefaabe8c704bd07e21a21", "text": "It is more convincing for users to have their own 3-D body shapes in the virtual fitting room when they shop clothes online. However, existing methods are limited for ordinary users to efficiently and conveniently access their 3-D bodies. We propose an efficient data-driven approach and develop an android application for 3-D body customization. Users stand naturally and their photos are taken from front and side views with a handy phone camera. They can wear casual clothes like a short-sleeved/long-sleeved shirt and short/long pants. First, we develop a user-friendly interface to semi-automatically segment the human body from photos. Then, the segmented human contours are scaled and translated to the ones under our virtual camera configurations. Through this way, we only need one camera to take photos of human in two views and do not need to calibrate the camera, which satisfy the convenience requirement. Finally, we learn body parameters that determine the 3-D body from dressed-human silhouettes with cascaded regressors. The regressors are trained using a database containing 3-D naked and dressed body pairs. Body parameters regression only costs 1.26 s on an android phone, which ensures the efficiency of our method. We invited 12 volunteers for tests, and the mean absolute estimation error for chest/waist/hip size is 2.89/1.93/2.22 centimeters. We additionally use 637 synthetic data to evaluate the main procedures of our approach.", "title": "" }, { "docid": "b9dfc489ff1bf6907929a450ea614d0b", "text": "Internet of things (IoT) is going to be ubiquitous in the next few years. In the smart city initiative, millions of sensors will be deployed for the implementation of IoT related services. Even in the normal cellular architecture, IoT will be deployed as a value added service for several new applications. Such massive deployment of IoT sensors and devices would certainly cost a large sum of money. In addition to the cost of deployment, the running costs or the operational expenditure of the IoT networks will incur huge power bills and spectrum license charges. As IoT is going to be a pervasive technology, its sustainability and environmental effects too are important. 
Energy efficiency and overall resource optimization would make it the long term technology of the future. Therefore, green IoT is essential for the operators and the long term sustainability of IoT itself. In this article we consider the green initiatives being worked out for IoT. We also show that narrowband IoT as the greener version right now.", "title": "" }, { "docid": "3c5e3f2fe99cb8f5b26a880abfe388f8", "text": "Facial point detection is an active area in computer vision due to its relevance to many applications. It is a nontrivial task, since facial shapes vary significantly with facial expressions, poses or occlusion. In this paper, we address this problem by proposing a discriminative deep face shape model that is constructed based on an augmented factorized three-way Restricted Boltzmann Machines model. Specifically, the discriminative deep model combines the top-down information from the embedded face shape patterns and the bottom up measurements from local point detectors in a unified framework. In addition, along with the model, effective algorithms are proposed to perform model learning and to infer the true facial point locations from their measurements. Based on the discriminative deep face shape model, 68 facial points are detected on facial images in both controlled and “in-the-wild” conditions. Experiments on benchmark data sets show the effectiveness of the proposed facial point detection algorithm against state-of-the-art methods.", "title": "" }, { "docid": "0f2023682deaf2eb70c7becd8b3375dd", "text": "Generating answer with natural language sentence is very important in real-world question answering systems, which needs to obtain a right answer as well as a coherent natural response. In this paper, we propose an end-to-end question answering system called COREQA in sequence-to-sequence learning, which incorporates copying and retrieving mechanisms to generate natural answers within an encoder-decoder framework. Specifically, in COREQA, the semantic units (words, phrases and entities) in a natural answer are dynamically predicted from the vocabulary, copied from the given question and/or retrieved from the corresponding knowledge base jointly. Our empirical study on both synthetic and realworld datasets demonstrates the efficiency of COREQA, which is able to generate correct, coherent and natural answers for knowledge inquired questions.", "title": "" }, { "docid": "4653c085c5b91107b5eb637e45364943", "text": "Legged locomotion excels when terrains become too rough for wheeled systems or open-loop walking pattern generators to succeed, i.e., when accurate foot placement is of primary importance in successfully reaching the task goal. In this paper we address the scenario where the rough terrain is traversed with a static walking gait, and where for every foot placement of a leg, the location of the foot placement was selected irregularly by a planning algorithm. Our goal is to adjust a smooth walking pattern generator with the selection of every foot placement such that the COG of the robot follows a stable trajectory characterized by a stability margin relative to the current support triangle. We propose a novel parameterization of the COG trajectory based on the current position, velocity, and acceleration of the four legs of the robot. This COG trajectory has guaranteed continuous velocity and acceleration profiles, which leads to continuous velocity and acceleration profiles of the leg movement, which is ideally suited for advanced model-based controllers. 
Pitch, yaw, and ground clearance of the robot are easily adjusted automatically under any terrain situation. We evaluate our gait generation technique on the Little-Dog quadruped robot when traversing complex rocky and sloped terrains.", "title": "" }, { "docid": "8bda640f73c3941272739a57a5d02353", "text": "Researchers strive to understand eating behavior as a means to develop diets and interventions that can help people achieve and maintain a healthy weight, recover from eating disorders, or manage their diet and nutrition for personal wellness. A major challenge for eating-behavior research is to understand when, where, what, and how people eat. In this paper, we evaluate sensors and algorithms designed to detect eating activities, more specifically, when people eat. We compare two popular methods for eating recognition (based on acoustic and electromyography (EMG) sensors) individually and combined. We built a data-acquisition system using two off-the-shelf sensors and conducted a study with 20 participants. Our preliminary results show that the system we implemented can detect eating with an accuracy exceeding 90.9% while the crunchiness level of food varies. We are developing a wearable system that can capture, process, and classify sensor data to detect eating in real-time.", "title": "" }, { "docid": "23d26c14a9aa480b98bcaa633fc378e5", "text": "In this paper we present novel sensory feedbacks named ”King-Kong Effects” to enhance the sensation of walking in virtual environments. King Kong Effects are inspired by special effects in movies in which the incoming of a gigantic creature is suggested by adding visual vibrations/pulses to the camera at each of its steps. In this paper, we propose to add artificial visual or tactile vibrations (King-Kong Effects or KKE) at each footstep detected (or simulated) during the virtual walk of the user. The user can be seated, and our system proposes to use vibrotactile tiles located under his/her feet for tactile rendering, in addition to the visual display. We have designed different kinds of KKE based on vertical or lateral oscillations, physical or metaphorical patterns, and one or two peaks for heal-toe contacts simulation. We have conducted different experiments to evaluate the preferences of users navigating with or without the various KKE. Taken together, our results identify the best choices for future uses of visual and tactile KKE, and they suggest a preference for multisensory combinations. Our King-Kong effects could be used in a variety of VR applications targeting the immersion of a user walking in a 3D virtual scene.", "title": "" }, { "docid": "d0c8a1faccfa3f0469e6590cc26097c8", "text": "This paper introduces an automatic method for editing a portrait photo so that the subject appears to be wearing makeup in the style of another person in a reference photo. Our unsupervised learning approach relies on a new framework of cycle-consistent generative adversarial networks. Different from the image domain transfer problem, our style transfer problem involves two asymmetric functions: a forward function encodes example-based style transfer, whereas a backward function removes the style. We construct two coupled networks to implement these functions - one that transfers makeup style and a second that can remove makeup - such that the output of their successive application to an input photo will match the input. The learned style network can then quickly apply an arbitrary makeup style to an arbitrary photo. 
We demonstrate the effectiveness on a broad range of portraits and styles.", "title": "" }, { "docid": "2a0b81bbe867a5936dafc323d8563970", "text": "Social network analysis has gained significant attention in recent years, largely due to the success of online social networking and media-sharing sites, and the consequent availability of a wealth of social network data. In spite of the growing interest, however, there is little understanding of the potential business applications of mining social networks. While there is a large body of research on different problems and methods for social network mining, there is a gap between the techniques developed by the research community and their deployment in real-world applications. Therefore the potential business impact of these techniques is still largely unexplored.\n In this article we use a business process classification framework to put the research topics in a business context and provide an overview of what we consider key problems and techniques in social network analysis and mining from the perspective of business applications. In particular, we discuss data acquisition and preparation, trust, expertise, community structure, network dynamics, and information propagation. In each case we present a brief overview of the problem, describe state-of-the art approaches, discuss business application examples, and map each of the topics to a business process classification framework. In addition, we provide insights on prospective business applications, challenges, and future research directions. The main contribution of this article is to provide a state-of-the-art overview of current techniques while providing a critical perspective on business applications of social network analysis and mining.", "title": "" }, { "docid": "2faf7fedadfd8b24c4740f7100cf5fec", "text": "Lacking standardized extrinsic evaluation methods for vector representations of words, the NLP community has relied heavily onword similaritytasks as a proxy for intrinsic evaluation of word vectors. Word similarity evaluation, which correlates the distance between vectors and human judgments of “semantic similarity” is attractive, because it is computationally inexpensive and fast. In this paper we present several problems associated with the evaluation of word vectors on word similarity datasets, and summarize existing solutions. Our study suggests that the use of word similarity tasks for evaluation of word vectors is not sustainable and calls for further research on evaluation methods.", "title": "" } ]
scidocsrr
b490516e04fd2917c9498057d4e20ff7
Architectures for deep neural network based acoustic models defined over windowed speech waveforms
[ { "docid": "d12a47e1b72532a3c2c028620eba44d6", "text": "Mel-filter banks are commonly used in speech recognition, as they are motivated from theory related to speech production and perception. While features derived from mel-filter banks are quite popular, we argue that this filter bank is not really an appropriate choice as it is not learned for the objective at hand, i.e. speech recognition. In this paper, we explore replacing the filter bank with a filter bank layer that is learned jointly with the rest of a deep neural network. Thus, the filter bank is learned to minimize cross-entropy, which is more closely tied to the speech recognition objective. On a 50-hour English Broadcast News task, we show that we can achieve a 5% relative improvement in word error rate (WER) using the filter bank learning approach, compared to having a fixed set of filters.", "title": "" } ]
[ { "docid": "68118c94d8e00031a7c9996ab282881f", "text": "A cascadable power-on-reset (POR) delay element consuming nanowatt of peak power was developed to be used in very compact power-on-reset pulse generator (POR-PG) circuits. Operation principles and features of the POR delay element were presented in this paper. The delay element was designed, and fabricated in a 0.5µm 2P3M CMOS process. It was determined from simulation as well as measurement results that the delay element works wide supply voltage ranges between 1.8 volt and 5 volt and supply voltage rise times between 100nsec and 1msec allowing wide dynamic range POR-PG circuits. It also has very small silicon footprint. Layout size of a single POR delay element was 35µm x 55µm in 0.5µm CMOS process.", "title": "" }, { "docid": "f4cf5ac351005975bc8244497a45bc70", "text": "This paper demonstrates the co-optimization of all critical device parameters of perpendicular magnetic tunnel junctions (pMTJ) in 1 Gbit arrays with an equivalent bitcell size of 22 F2 at the 28 nm logic node for embedded STT-MRAM. Through thin-film tuning and advanced etching of sub-50 nm (diameter) pMTJ, high device performance and reliability were achieved simultaneously, including TMR = 150 %, Hc > 1350 Oe, Heff <; 100 Oe, Δ = 85, Ic (35 ns) = 94 μA, Vbreakdown = 1.5 V, and high endurance (> 1012 write cycles). Reliable switching with small temporal variations (<; 5 %) was obtained down to 10 ns. In addition, tunnel barrier integrity and high temperature device characteristics were investigated in order to ensure reliable STT-MRAM operation.", "title": "" }, { "docid": "596949afaabdbcc68cd8bda175400f30", "text": "We propose improved Deep Neural Network (DNN) training loss functions for more accurate single keyword spotting on resource-constrained embedded devices. The loss function modifications consist of a combination of multi-task training and weighted cross entropy. In the multi-task architecture, the keyword DNN acoustic model is trained with two tasks in parallel the main task of predicting the keyword-specific phone states, and an auxiliary task of predicting LVCSR senones. We show that multi-task learning leads to comparable accuracy over a previously proposed transfer learning approach where the keyword DNN training is initialized by an LVCSR DNN of the same input and hidden layer sizes. The combination of LVCSRinitialization and Multi-task training gives improved keyword detection accuracy compared to either technique alone. We also propose modifying the loss function to give a higher weight on input frames corresponding to keyword phone targets, with a motivation to balance the keyword and background training data. We show that weighted cross-entropy results in additional accuracy improvements. 
Finally, we show that the combination of 3 techniques LVCSR-initialization, multi-task training and weighted cross-entropy gives the best results, with significantly lower False Alarm Rate than the LVCSR-initialization technique alone, across a wide range of Miss Rates.", "title": "" }, { "docid": "308e06ce00b1dfaf731b1a91e7c56836", "text": "OBJECTIVE\nTo systematically review the literature regarding how statistical process control--with control charts as a core tool--has been applied to healthcare quality improvement, and to examine the benefits, limitations, barriers and facilitating factors related to such application.\n\n\nDATA SOURCES\nOriginal articles found in relevant databases, including Web of Science and Medline, covering the period 1966 to June 2004.\n\n\nSTUDY SELECTION\nFrom 311 articles, 57 empirical studies, published between 1990 and 2004, met the inclusion criteria.\n\n\nMETHODS\nA standardised data abstraction form was used for extracting data relevant to the review questions, and the data were analysed thematically.\n\n\nRESULTS\nStatistical process control was applied in a wide range of settings and specialties, at diverse levels of organisation and directly by patients, using 97 different variables. The review revealed 12 categories of benefits, 6 categories of limitations, 10 categories of barriers, and 23 factors that facilitate its application and all are fully referenced in this report. Statistical process control helped different actors manage change and improve healthcare processes. It also enabled patients with, for example asthma or diabetes mellitus, to manage their own health, and thus has therapeutic qualities. Its power hinges on correct and smart application, which is not necessarily a trivial task. This review catalogs 11 approaches to such smart application, including risk adjustment and data stratification.\n\n\nCONCLUSION\nStatistical process control is a versatile tool which can help diverse stakeholders to manage change in healthcare and improve patients' health.", "title": "" }, { "docid": "232a9a83cea93e5d8cdfb6eff0c1c256", "text": "We present a method for performing hierarchical object detection in images guided by a deep reinforcement learning agent. The key idea is to focus on those parts of the image that contain richer information and zoom on them. We train an intelligent agent that, given an image window, is capable of deciding where to focus the attention among five different predefined region candidates (smaller windows). This procedure is iterated providing a hierarchical image analysis.We compare two different candidate proposal strategies to guide the object search: with and without overlap. Moreover, our work compares two different strategies to extract features from a convolutional neural network for each region proposal: a first one that computes new feature maps for each region proposal, and a second one that computes the feature maps for the whole image to later generate crops for each region proposal. Experiments indicate better results for the overlapping candidate proposal strategy and a loss of performance for the cropped image features due to the loss of spatial resolution. We argue that, while this loss seems unavoidable when working with large amounts of object candidates, the much more reduced amount of region proposals generated by our reinforcement learning agent allows considering to extract features for each location without sharing convolutional computation among regions. 
Source code and models are available at https://imatge-upc.github.io/detection-2016-nipsws/.", "title": "" }, { "docid": "eec15a5d14082d625824452bd070ec38", "text": "Food waste is a major environmental issue. Expired products are thrown away, implying that too much food is ordered compared to what is sold and that a more accurate prediction model is required within grocery stores. In this study the two prediction models Long Short-Term Memory (LSTM) and Autoregressive Integrated Moving Average (ARIMA) were compared on their prediction accuracy in two scenarios, given sales data for different products, to observe if LSTM is a model that can compete against the ARIMA model in the field of sales forecasting in retail. In the first scenario the models predict sales for one day ahead using given data, while they in the second scenario predict each day for a week ahead. Using the evaluation measures RMSE and MAE together with a t-test the results show that the difference between the LSTM and ARIMA model is not of statistical significance in the scenario of predicting one day ahead. However when predicting seven days ahead, the results show that there is a statistical significance in the difference indicating that the LSTM model has higher accuracy. This study therefore concludes that the LSTM model is promising in the field of sales forecasting in retail and able to compete against the ARIMA model.", "title": "" }, { "docid": "0f111ec5556abf9bfbfcaeefaab61da1", "text": "The rise of Natural Language Processing (NLP) opened new possibilities for various applications that were not applicable before. A morphological-rich language such as Arabic introduces a set of features, such as roots, that would assist the progress of NLP. Many tools were developed to capture the process of root extraction (stemming). Stemmers have improved many NLP tasks without explicit knowledge about its stemming accuracy. In this paper, a study is conducted to evaluate various Arabic stemmers. The study is done as a series of comparisons using a manually annotated dataset, which shows the efficiency of Arabic stemmers, and points out potential improvements to existing stemmers. The paper also presents enhanced root extractors by using light stemmers as a preprocessing phase.", "title": "" }, { "docid": "e458ba119fe15f17aa658c5b42a21e2b", "text": "In this paper, with the help of controllable active near-infrared (NIR) lights, we construct near-infrared differential (NIRD) images. Based on reflection model, NIRD image is believed to contain the lighting difference between images with and without active NIR lights. Two main characteristics based on NIRD images are exploited to conduct spoofing detection. Firstly, there exist obviously spoofing media around the faces in most conditions, which reflect incident lights in almost the same way as the face areas do. We analyze the pixel consistency between face and non-face areas and employ context clues to distinguish the spoofing images. Then, lighting feature, extracted only from face areas, is utilized to detect spoofing attacks of deliberately cropped medium. Merging the two features, we present a face spoofing detection system. In several experiments on self collected datasets with different spoofing media, we demonstrate the excellent results and robustness of proposed method.", "title": "" }, { "docid": "0efecea75d3821a5710f3de91986f119", "text": "Atherosclerosis is a chronic inflammatory disease, and is the primary cause of heart disease and stroke in Western countries. 
Derivatives of cannabinoids such as delta-9-tetrahydrocannabinol (THC) modulate immune functions and therefore have potential for the treatment of inflammatory diseases. We investigated the effects of THC in a murine model of established atherosclerosis. Oral administration of THC (1 mg kg-1 per day) resulted in significant inhibition of disease progression. This effective dose is lower than the dose usually associated with psychotropic effects of THC. Furthermore, we detected the CB2 receptor (the main cannabinoid receptor expressed on immune cells) in both human and mouse atherosclerotic plaques. Lymphoid cells isolated from THC-treated mice showed diminished proliferation capacity and decreased interferon-γ secretion. Macrophage chemotaxis, which is a crucial step for the development of atherosclerosis, was also inhibited in vitro by THC. All these effects were completely blocked by a specific CB2 receptor antagonist. Our data demonstrate that oral treatment with a low dose of THC inhibits atherosclerosis progression in the apolipoprotein E knockout mouse model, through pleiotropic immunomodulatory effects on lymphoid and myeloid cells. Thus, THC or cannabinoids with activity at the CB2 receptor may be valuable targets for treating atherosclerosis.", "title": "" }, { "docid": "00ec0bc711e38e6e5a3281dbd71d02f9", "text": "The magnitude of recent combat blast injuries sustained by forces fighting in Afghanistan has escalated to new levels with more troops surviving higher-energy trauma. The most complex and challenging injury pattern is the emerging frequency of high-energy IED casualties presenting in extremis with traumatic bilateral lower extremity amputations with and without pelvic and perineal blast involvement. These patients require a coordinated effort of advanced trauma and surgical care from the point of injury through definitive management. Early survival is predicated upon a balance of life-saving damage control surgery and haemostatic resuscitation. Emergent operative intervention is critical with timely surgical hemostasis, adequate wound decontamination, revision amputations, and pelvic fracture stabilization. Efficient index surgical management is paramount to prevent further physiologic insult, and a team of orthopaedic and general surgeons operating concurrently may effectively achieve this. Despite the extent and complexity, these are survivable injuries but long-term followup is necessary.", "title": "" }, { "docid": "3e177f8b02a5d67c7f4d93ce601c4539", "text": "This research proposes an approach for text classification that uses a simple neural network called Dynamic Text Classifier Neural Network (DTCNN). The neural network uses as input vectors of words with variable dimension without information loss called Dynamic Token Vectors (DTV). The proposed neural network is designed for the classification of large and short text into categories. The learning process combines competitive and Hebbian learning. Due to the combination of these learning rules the neural network is able to work in a supervised or semi-supervised mode. In addition, it provides transparency in the classification. The network used in this paper is quite simple, and that is what makes enough for its task. 
The results of evaluating the proposed method show an improvement in the text classification problem using the DTCNN compared to baseline approaches.", "title": "" }, { "docid": "9de4cfbd662dc9ba2621722b7aef7bac", "text": "The centromere position is an important feature in analyzing chromosomes and to make karyogram. In the field of chromosome analysis the accurate determination of the centromere from the segmented chromosome image is a challenging task. Karyogram is an arrangement of 46 chromosomes, for finding out many genetic disorders, various abnormalities and cancers. There exist so many algorithms to detect centromere positions, but most of the algorithms cannot be applied to all chromosomes because of their orientation in metaphase. Here we propose a novel algorithm that associates with some rules based on morphological features of chromosome, a GLM mask and rotation procedure. The algorithm is tested on publicly available database (LK1) and images collected from RCC Trivandrum.", "title": "" }, { "docid": "fdab4af34adebd0d682134f3cf13d794", "text": "Threat evaluation (TE) is a process used to assess the threat values (TVs) of air-breathing threats (ABTs), such as air fighters, that are approaching defended assets (DAs). This study proposes an automatic method for conducting TE using radar information when ABTs infiltrate into territory where DAs are located. The method consists of target asset (TA) prediction and TE. We divide a friendly territory into discrete cells based on the effective range of anti-aircraft missiles. The TA prediction identifies the TA of each ABT by predicting the ABT’s movement through cells in the territory via a Markov chain, and the cell transition is modeled by neural networks. We calculate the TVs of the ABTs based on the TA prediction results. A simulation-based experiment revealed that the proposed method outperformed TE based on the closest point of approach or the radial speed vector methods.", "title": "" }, { "docid": "4e7443088eedf5e6199959a06ebc420c", "text": "The development of computational-intelligence based strategies for electronic markets has been the focus of intense research. In order to be able to design efficient and effective automated trading strategies, one first needs to understand the workings of the market, the strategies that traders use and their interactions as well as the patterns emerging as a result of these interactions. In this paper, we develop an agent-based model of the FX market which is the market for the buying and selling of currencies. Our agent-based model of the FX market (ABFXM) comprises heterogeneous trading agents which employ a strategy that identifies and responds to periodic patterns in the price time series. We use the ABFXM to undertake a systematic exploration of its constituent elements and their impact on the stylized facts (statistical patterns) of transactions data. This enables us to identify a set of sufficient conditions which result in the emergence of the stylized facts similar to the real market data, and formulate a model which closely approximates the stylized facts. 
We use a unique high frequency dataset of historical transactions data which enables us to run multiple simulation runs and validate our approach and draw comparisons and conclusions for each market setting.", "title": "" }, { "docid": "7115c9872b05a20efeaafaaed7c2e173", "text": "Today, bibliographic digital libraries play an important role in helping members of academic community search for novel research. In particular, author disambiguation for citations is a major problem during the data integration and cleaning process, since author names are usually very ambiguous. For solving this problem, we proposed two kinds of correlations between citations, namely, Topic Correlation and Web Correlation, to exploit relationships between citations, in order to identify whether two citations with the same author name refer to the same individual. The topic correlation measures the similarity between research topics of two citations; while the Web correlation measures the number of co-occurrence in web pages. We employ a pair-wise grouping algorithm to group citations into clusters. The results of experiments show that the disambiguation accuracy has great improvement when using topic correlation and Web correlation, and Web correlation provides stronger evidences about the authors of citations.", "title": "" }, { "docid": "785b42fe7765d415dcfef09a6142aa6f", "text": "In this paper a first approach for digital media forensics is presented to determine the used microphones and the environments of recorded digital audio samples by using known audio steganalysis features. Our first evaluation is based on a limited exemplary test set of 10 different audio reference signals recorded as mono audio data by four microphones in 10 different rooms with 44.1 kHz sampling rate and 16 bit quantisation. Note that, of course, a generalisation of the results cannot be achieved. Motivated by the syntactical and semantical analysis of information and in particular by known audio steganalysis approaches, a first set of specific features are selected for classification to evaluate, whether this first feature set can support correct classifications. The idea was mainly driven by the existing steganalysis features and the question of applicability within a first and limited test set. In the tests presented in this paper, an inter-device analysis with different device characteristics is performed while intra-device evaluations (identical microphone models of the same manufacturer) are not considered. For classification the data mining tool WEKA with K-means as a clustering and Naive Bayes as a classification technique are applied with the goal to evaluate their classification in regard to the classification accuracy on known audio steganalysis features. Our results show, that for our test set, the used classification techniques and selected steganalysis features, microphones can be better classified than environments. These first tests show promising results but of course are based on a limited test and training set as well a specific test set generation. Therefore additional and enhanced features with different test set generation strategies are necessary to generalise the findings.", "title": "" }, { "docid": "4551c05bbf8969d310d548d5a773f584", "text": "Optical testing of advanced CMOS circuits successfully exploits the near-infrared photon emission by hot-carriers in transistor channels (see EMMI (Ng et al., 1999) and PICA (Kash and Tsang, 1997) (Song et al., 2005) techniques). 
However, due to the continuous scaling of features size and supply voltage, spontaneous emission is becoming fainter and optical circuit diagnostics becomes more challenging. Here we present the experimental characterization of hot-carrier luminescence emitted by transistors in four CMOS technologies from two different manufacturers. Aim of the research is to gain a better perspective on emission trends and dependences on technological parameters. In particular, we identify luminescence changes due to short-channel effects (SCE) and we ascertain that, for each technology node, there are two operating regions, for short- and long-channels. We highlight the emission reduction of p-FETs compared to n-FETs, due to a \"red-shift\" (lower energy) of the hot-carrier distribution. Eventually, we give perspectives about emission trends in actual and future technology nodes, showing that luminescence dramatically decreases with voltage, but it recovers strength when moving from older to more advanced technology generations. Such results extend the applicability of optical testing techniques, based on present single-photon detectors, to future low-voltage chips", "title": "" }, { "docid": "c76f44cd62651b068de9bdb5eec80f23", "text": "Currently, audience measurement reports of television programs are only available after a significant period of time, for example as a daily report. This paper proposes an architecture for real time measurement of television audience. Real time measurement can give channel owners and advertisers important information that can positively impact their business. We show that television viewership can be captured by set top box devices which detect the channel logo and transmit the viewership data to a server over internet. The server processes the viewership data and displays it in real time on a web-based dashboard. In addition, it has facility to display charts of hourly and location-wise viewership trends and online TRP (Television Rating Points) reports. The server infrastructure consists of in-memory database, reporting and charting libraries and J2EE based application server.", "title": "" }, { "docid": "a1f4b4c6e98e6b5e8b7f939318a5e808", "text": "A new hardware scheme for computing the transition and control matrix of a parallel cyclic redundancy checksum is proposed. This opens possibilities for parallel high-speed cyclic redundancy checksum circuits that reconfigure very rapidly to new polynomials. The area requirements are lower than those for a realization storing a precomputed matrix. An additional simplification arises as only the polynomial needs to be supplied. The derived equations allow the width of the data to be processed in parallel to be selected independently of the degree of the polynomial. The new design has been simulated and outperforms a recently proposed architecture significantly in speed, area, and energy efficiency.", "title": "" }, { "docid": "77f83ada0854e34ac60c725c21671434", "text": "OBJECTIVES\nThis subanalysis of the TNT (Treating to New Targets) study investigates the effects of intensive lipid lowering with atorvastatin in patients with coronary heart disease (CHD) with and without pre-existing chronic kidney disease (CKD).\n\n\nBACKGROUND\nCardiovascular disease is a major cause of morbidity and mortality in patients with CKD.\n\n\nMETHODS\nA total of 10,001 patients with CHD were randomized to double-blind therapy with atorvastatin 80 mg/day or 10 mg/day. 
Patients with CKD were identified at baseline on the basis of an estimated glomerular filtration rate (eGFR) <60 ml/min/1.73 m(2) using the Modification of Diet in Renal Disease equation. The primary efficacy outcome was time to first major cardiovascular event.\n\n\nRESULTS\nOf 9,656 patients with complete renal data, 3,107 had CKD at baseline and demonstrated greater cardiovascular comorbidity than those with normal eGFR (n = 6,549). After a median follow-up of 5.0 years, 351 patients with CKD (11.3%) experienced a major cardiovascular event, compared with 561 patients with normal eGFR (8.6%) (hazard ratio [HR] = 1.35; 95% confidence interval [CI] 1.18 to 1.54; p < 0.0001). Compared with atorvastatin 10 mg, atorvastatin 80 mg reduced the relative risk of major cardiovascular events by 32% in patients with CKD (HR = 0.68; 95% CI 0.55 to 0.84; p = 0.0003) and 15% in patients with normal eGFR (HR = 0.85; 95% CI 0.72 to 1.00; p = 0.049). Both doses of atorvastatin were well tolerated in patients with CKD.\n\n\nCONCLUSIONS\nAggressive lipid lowering with atorvastatin 80 mg was both safe and effective in reducing the excess of cardiovascular events in a high-risk population with CKD and CHD.", "title": "" } ]
scidocsrr
07191f5cf39dd695b5e3a2c034217899
Ontologies in Ubiquitous Computing
[ { "docid": "a172c51270d6e334b50dcc6233c54877", "text": "m U biquitous computing enhances computer use by making many computers available throughout the physical environment, while making them effectively invisible to the user. This article explains what is new and different about the computer science involved in ubiquitous computing. First, it provides a brief overview of ubiquitous computing, then elaborates through a series of examples drawn from various subdisciplines of computer science: hardware components (e.g., chips), network protocols, interaction substrates (e.g., software for screens and pens), applications, privacy, and computational methods. Ubiquitous computing offers a framework for new and exciting research across the spectrum of computer science. Since we started this work at Xerox Palo Alto Research Center (PARC) in 1988 a few places have begun work on this possible next-generation computing environment in which each person is continually interacting with hundreds of nearby wirelessly interconnected computers. The goal is to achieve the most effective kind of technology, that which is essentially invisible to the user. To bring computers to this point while retaining their power will require radically new kinds of computers of all sizes and shapes to be available to each person. I call this future world \"Ubiquitous Comput ing\" (Ubicomp) [27]. The research method for ubiquitous computing is standard experimental computer science: the construction of working prototypes of the necessai-y infrastructure in sufficient quantity to debug the viability of the systems in everyday use; ourselves and a few colleagues serving as guinea pigs. This is", "title": "" } ]
[ { "docid": "a5ed1ebf973e3ed7ea106e55795e3249", "text": "The variable reluctance (VR) resolver is generally used instead of an optical encoder as a position sensor on motors for hybrid electric vehicles or electric vehicles owing to its reliability, low cost, and ease of installation. The commonly used conventional winding method for the VR resolver has disadvantages, such as complicated winding and unsuitability for mass production. This paper proposes an improved winding method that leads to simpler winding and better suitability for mass production than the conventional method. In this paper, through the design and finite element analysis for two types of output winding methods, the advantages and disadvantages of each method are presented, and the validity of the proposed winding method is verified. In addition, experiments with the VR resolver using the proposed winding method have been performed to verify its performance.", "title": "" }, { "docid": "4071b0a0f3887a5ad210509e6ad5498a", "text": "Nowadays, the IoT is largely dependent on sensors. The IoT devices are embedded with sensors and have the ability to communicate. A variety of sensors play a key role in networked devices in IoT. In order to facilitate the management of such sensors, this paper investigates how to use SNMP protocol, which is widely used in network device management, to implement sensors information management of IoT system. The principles and implement details to setup the MIB file, agent and manager application are discussed. A prototype system is setup to validate our methods. The test results show that because of its easy use and strong expansibility, SNMP is suitable and a bright way for sensors information management of IoT system.", "title": "" }, { "docid": "0ecaccc94977a15cbaee4aaa08509295", "text": "This paper reviews the use of socially interactive robots to assist in the therapy of children with autism. The extent to which the robots were successful in helping the children in their social, emotional, and communication deficits was investigated. Child-robot interactions were scrutinized with respect to the different target behaviours that are to be elicited from a child during therapy. These behaviours were thoroughly examined with respect to a child's development needs. Most importantly, experimental data from the surveyed works were extracted and analyzed in terms of the target behaviours and how each robot was used during a therapy session to achieve these behaviours. The study concludes by categorizing the different therapeutic roles that these robots were observed to play, and highlights the important design features that enable them to achieve high levels of effectiveness in autism therapy.", "title": "" }, { "docid": "d41694f90694df023e62f4f6777beadf", "text": "In many randomised trials researchers measure a continuous variable at baseline and again as an outcome assessed at follow up. Baseline measurements are common in trials of chronic conditions where researchers want to see whether a treatment can reduce pre-existing levels of pain, anxiety, hypertension, and the like. Statistical comparisons in such trials can be made in several ways. 
Comparison of follow up (posttreatment) scores will give a result such as “at the end of the trial, mean pain scores were 15 mm (95% confidence interval 10 to 20 mm) lower in the treatment group.” Alternatively a change score can be calculated by subtracting the follow up score from the baseline score, leading to a statement such as “pain reductions were 20 mm (16 to 24 mm) greater on treatment than control.” If the average baseline scores are the same in each group the estimated treatment effect will be the same using these two simple approaches. If the treatment is effective the statistical significance of the treatment effect by the two methods will depend on the correlation between baseline and follow up scores. If the correlation is low using the change score will add variation and the follow up score is more likely to show a significant result. Conversely, if the correlation is high using only the follow up score will lose information and the change score is more likely to be significant. It is incorrect, however, to choose whichever analysis gives a more significant finding. The method of analysis should be specified in the trial protocol. Some use change scores to take account of chance imbalances at baseline between the treatment groups. However, analysing change does not control for baseline imbalance because of regression to the mean: baseline values are negatively correlated with change because patients with low scores at baseline generally improve more than those with high scores. A better approach is to use analysis of covariance (ANCOVA), which, despite its name, is a regression method. In effect two parallel straight lines (linear regression) are obtained relating outcome score to baseline score in each group. They can be summarised as a single regression equation: follow up score = constant + a×baseline score + b×group where a and b are estimated coefficients and group is a binary variable coded 1 for treatment and 0 for control. The coefficient b is the effect of interest—the estimated difference between the two treatment groups. In effect an analysis of covariance adjusts each patient’s follow up score for his or her baseline score, but has the advantage of being unaffected by baseline differences. If, by chance, baseline scores are worse in the treatment group, the treatment effect will be underestimated by a follow up score analysis and overestimated by looking at change scores (because of regression to the mean). By contrast, analysis of covariance gives the same answer whether or not there is baseline imbalance. As an illustration, Kleinhenz et al randomised 52 patients with shoulder pain to either true or sham acupuncture. Patients were assessed before and after treatment using a 100 point rating scale of pain and function, with lower scores indicating poorer outcome. There was an imbalance between groups at baseline, with better scores in the acupuncture group (see table). Analysis of post-treatment scores is therefore biased. The authors analysed change scores, but as baseline and change scores are negatively correlated (about r = − 0.25 within groups) this analysis underestimates the effect of acupuncture. From analysis of covariance we get: follow up score = 24 + 0.71×baseline score + 12.7×group (see figure). The coefficient for group (b) has a useful interpretation: it is the difference between the mean change scores of each group. 
In the above example it can be interpreted as “pain and function score improved by an estimated 12.7 points more on average in the treatment group than in the control group.” A 95% confidence interval and P value can also be calculated for b (see table). The regression equation provides a means of prediction: a patient with a baseline score of 50, for example, would be predicted to have a follow up score of 72.2 on treatment and 59.5 on control. An additional advantage of analysis of covariance is that it generally has greater statistical power to detect a treatment effect than the other methods. For example, a trial with a correlation between baseline and follow", "title": "" }, { "docid": "5d37d539295ca48aed86853406aa9d71", "text": "Finger print recognition is a popular attendance system mostly used in many offices as it provides more accuracy. Machinery also system software based finger print recognition systems are mostly used. But its real time monitoring and remote intimation is not performed until now if wrong person is entering. Instant reporting to officer is necessary for maintaining absence/presence of staff members. This automatic reporting is necessary as officer may be remotely available. So, fingerprint identification based attendance system is proposed with real time remote monitoring. Proposed system requires Finger print sensor, data acquisition system for it, Processor (ARM 11), Ethernet/Wi-Fi Interface for Internet access and Smart phone for monitoring. WhatsApp is generally used by most of peoples and is easily accessible to all so generally preferred in this work. ARM 11 is necessary as it requires the Internet connection for WhatsApp data transfer.", "title": "" }, { "docid": "e4892dfe4da663c4044a78a8892010a8", "text": "Turkey has been undertaking many projects to integrate Information and Communication Technology (ICT) sources into practice in the teaching-learning process in educational institutions. This research study sheds light on the use of ICT tools in primary schools in the social studies subject area, by considering various variables which affect the success of the implementation of the use of these tools. A survey was completed by 326 teachers who teach fourth and fifth grade at primary level. The results showed that although teachers are willing to use ICT resources and are aware of the existing potential, they are facing problems in relation to accessibility to ICT resources and lack of in-service training opportunities.", "title": "" }, { "docid": "f1b96f805cbca7eaefdc1b5b0fa316c3", "text": "This paper presents a comprehensive overview of the literature on the types, effects, conditions and users of Open Government Data (OGD). The review analyses 101 academic studies about OGD which discuss at least one of the four factors of OGD utilization: the different types of utilization, the effects of utilization, the key conditions, and the different users. Our analysis shows that the majority of studies focus on the OGD provisions while assuming, but not empirically testing, various forms of utilization. The paper synthesizes the hypothesized relations in a multi-dimensional framework of OGD utilization. Based on the framework we suggest four future directions for research: 1) investigate the link between type of utilization and type of users (e.g. journalists, citizens) 2) investigate the link between type of user and type of effect (e.g. 
societal, economic and good governance benefits) 3) investigate the conditions that moderate OGD effects (e.g. policy, data quality) and 4) establish a causal link between utilization and OGD outcomes.", "title": "" }, { "docid": "365b95202095942c4b2b43a5e6f6e04e", "text": "In this paper we use the contraction mapping theorem to obtain asymptotic stability results of the zero solution of a nonlinear neutral Volterra integro-differential equation with variable delays. Some conditions which allow the coefficient functions to change sign and do not ask the boundedness of delays are given. An asymptotic stability theorem with a necessary and sufficient condition is proved, which improves and extends the results in the literature. Two examples are also given to illustrate this work.", "title": "" }, { "docid": "1969bf5a07349cc5a9b498e0437e41fe", "text": "In this work, we tackle the problem of instance segmentation, the task of simultaneously solving object detection and semantic segmentation. Towards this goal, we present a model, called MaskLab, which produces three outputs: box detection, semantic segmentation, and direction prediction. Building on top of the Faster-RCNN object detector, the predicted boxes provide accurate localization of object instances. Within each region of interest, MaskLab performs foreground/background segmentation by combining semantic and direction prediction. Semantic segmentation assists the model in distinguishing between objects of different semantic classes including background, while the direction prediction, estimating each pixel's direction towards its corresponding center, allows separating instances of the same semantic class. Moreover, we explore the effect of incorporating recent successful methods from both segmentation and detection (e.g., atrous convolution and hypercolumn). Our proposed model is evaluated on the COCO instance segmentation benchmark and shows comparable performance with other state-of-the-art models.", "title": "" }, { "docid": "798f8c412ac3fbe1ab1b867bc8ce68d0", "text": "We introduce a new mobile system framework, SenSec, which uses passive sensory data to ensure the security of applications and data on mobile devices. SenSec constantly collects sensory data from accelerometers, gyroscopes and magnetometers and constructs the gesture model of how a user uses the device. SenSec calculates the sureness that the mobile device is being used by its owner. Based on the sureness score, mobile devices can dynamically request the user to provide active authentication (such as a strong password), or disable certain features of the mobile devices to protect user's privacy and information security. In this paper, we model such gesture patterns through a continuous n-gram language model using a set of features constructed from these sensors. We built mobile application prototype based on this model and use it to perform both user classification and user authentication experiments. User studies show that SenSec can achieve 75% accuracy in identifying the users and 71.3% accuracy in detecting the non-owners with only 13.1% false alarms.", "title": "" }, { "docid": "0da78253d26ddba2b17dd76c4b4c697a", "text": "In this work, a portable real-time wireless health monitoring system is developed. The system is used for remote monitoring of patients' heart rate and oxygen saturation in blood. The system was designed and implemented using ZigBee wireless technologies. 
All pulse oximetry data are transferred within a group of wireless personal area network (WPAN) to database computer server. The sensor modules were designed for low power operation with a program that can adjust power management depending on scenarios of power source and current power operation. The sensor unit consists of (1) two types of LEDs and photodiode packed in Velcro strip that is facing to a patient's fingertip; (2) Microcontroller unit for interfacing with ZigBee module, processing pulse oximetry data and storing some data before sending to base PC; (3) ZigBee module for communicating the data of pulse oximetry, ZigBee module gets all commands from microcontroller unit and it has a complete ZigBee stack inside and (4) Base node for receiving and storing the data before sending to PC.", "title": "" }, { "docid": "ed63ebf895f1f37ba9b788c36b8e6cfc", "text": "Melanocyte stem cells (McSCs) and mouse models of hair graying serve as useful systems to uncover mechanisms involved in stem cell self-renewal and the maintenance of regenerating tissues. Interested in assessing genetic variants that influence McSC maintenance, we found previously that heterozygosity for the melanogenesis associated transcription factor, Mitf, exacerbates McSC differentiation and hair graying in mice that are predisposed for this phenotype. Based on transcriptome and molecular analyses of Mitfmi-vga9/+ mice, we report a novel role for MITF in the regulation of systemic innate immune gene expression. We also demonstrate that the viral mimic poly(I:C) is sufficient to expose genetic susceptibility to hair graying. These observations point to a critical suppressor of innate immunity, the consequences of innate immune dysregulation on pigmentation, both of which may have implications in the autoimmune, depigmenting disease, vitiligo.", "title": "" }, { "docid": "533b8bf523a1fb69d67939607814dc9c", "text": "Docker is an open platform for developers and system administrators to build, ship, and run distributed applications using Docker Engine, a portable, lightweight runtime and packaging tool, and Docker Hub, a cloud service for sharing applications and automating workflows. The main advantage is that, Docker can get code tested and deployed into production as fast as possible. Different applications can be run over Docker containers with language independency. In this paper the performance of these Docker containers are evaluated based on their system performance. That is based on system resource utilization. Different benchmarking tools are used for this. Performance based on file system is evaluated using Bonnie++. Other system resources such as CPU utilization, memory utilization etc. are evaluated based on the benchmarking code (using psutil) developed using python. Detail results obtained from all these tests are also included in this paper. The results include CPU utilization, memory utilization, CPU count, CPU times, Disk partition, network I/O counter etc.", "title": "" }, { "docid": "66e00cb4593c1bc97a10e0b80dcd6a8f", "text": "OBJECTIVE\nTo determine the probable factors responsible for stress among undergraduate medical students.\n\n\nMETHODS\nThe qualitative descriptive study was conducted at a public-sector medical college in Islamabad, Pakistan, from January to April 2014. 
Self-administered open-ended questionnaires were used to collect data from first year medical students in order to study the factors associated with the new environment.\n\n\nRESULTS\nThere were 115 students in the study with a mean age of 19±6.76 years. Overall, 35(30.4%) students had mild to moderate physical problems, 20(17.4%) had severe physical problems and 60(52.2%) did not have any physical problem. Average stress score was 19.6±6.76. Major elements responsible for stress identified were environmental factors, new college environment, student abuse, tough study routines and personal factors.\n\n\nCONCLUSIONS\nMajority of undergraduate students experienced stress due to both academic and emotional factors.", "title": "" }, { "docid": "f6553bf60969c422a07e1260a35b10c9", "text": "Twitter is a new web application playing dual roles of online social networking and microblogging. Users communicate with each other by publishing text-based posts. The popularity and open structure of Twitter have attracted a large number of automated programs, known as bots, which appear to be a double-edged sword to Twitter. Legitimate bots generate a large amount of benign tweets delivering news and updating feeds, while malicious bots spread spam or malicious contents. More interestingly, in the middle between human and bot, there has emerged cyborg referred to either bot-assisted human or human-assisted bot. To assist human users in identifying who they are interacting with, this paper focuses on the classification of human, bot, and cyborg accounts on Twitter. We first conduct a set of large-scale measurements with a collection of over 500,000 accounts. We observe the difference among human, bot, and cyborg in terms of tweeting behavior, tweet content, and account properties. Based on the measurement results, we propose a classification system that includes the following four parts: 1) an entropy-based component, 2) a spam detection component, 3) an account properties component, and 4) a decision maker. It uses the combination of features extracted from an unknown user to determine the likelihood of being a human, bot, or cyborg. Our experimental evaluation demonstrates the efficacy of the proposed classification system.", "title": "" }, { "docid": "dcf8cacaa3f64d30cd46de1da2e880b7", "text": "Here we discussed different dielectric substrate frequently used in microstrip patch antenna to enhance overall efficiency of antenna. Various substrates like foam, duroid, benzocyclobutane, roger 4350, epoxy, FR4, Duroid 6010 are in use to achieve better gain and bandwidth. A dielectric substrate is a insulator which is a main constituent of the microstrip structure, where a thicker substrate is considered because it has direct proportionality with bandwidth whereas dielectric constant is inversely proportional to bandwidth as lower the relative permittivity better the fringing is achieved. Another factor that impact directly is loss tangent it shows inverse relation with efficiency the dilemma is here is that substrate with lower loss tangent is costlier. A clear pros and cons are discussed here of different substrates for judicious selection. A substrate gives mechanical strength to the antenna.", "title": "" }, { "docid": "a2514f994292481d0fe6b37afe619cb5", "text": "The purpose of this tutorial is to present an overview of various information hiding techniques. A brief history of steganography is provided along with techniques that were used to hide information. 
Text, image and audio based information hiding techniques are discussed. This paper also provides a basic introduction to digital watermarking. 1. History of Information Hiding The idea of communicating secretly is as old as communication itself. In this section, we briefly discuss the historical development of information hiding techniques such as steganography/watermarking. Early steganography was messy. Before phones, before mail, before horses, messages were sent on foot. If you wanted to hide a message, you had two choices: have the messenger memorize it, or hide it on the messenger. While information hiding techniques have received tremendous attention recently, their application goes back to Greek times. According to Greek historian Herodotus, the famous Greek tyrant Histiaeus, while in prison, used an unusual method to send a message to his son-in-law. He shaved the head of a slave to tattoo a message on his scalp. Histiaeus then waited until the hair grew back on the slave’s head prior to sending him off to his son-in-law. The second story also came from Herodotus, which claims that a soldier named Demeratus needed to send a message to Sparta that Xerxes intended to invade Greece. Back then, the writing medium was a wax-covered tablet. Demeratus removed the wax from the tablet, wrote the secret message on the underlying wood, re-covered the tablet with wax to make it appear as a blank tablet and finally sent the document without being detected. Invisible inks have always been a popular method of steganography. Ancient Romans used to write between lines using invisible inks based on readily available substances such as fruit juices, urine and milk. When heated, the invisible inks would darken, and become legible. Ovid in his “Art of Love” suggests using milk to write invisibly. Later chemically affected sympathetic inks were developed. Invisible inks were used as recently as World War II. Modern invisible inks fluoresce under ultraviolet light and are used as anti-counterfeit devices. For example, \"VOID\" is printed on checks and other official documents in an ink that appears under the strong ultraviolet light used for photocopies. The monk Johannes Trithemius, considered one of the founders of modern cryptography, had ingenuity in spades. His three volume work Steganographia, written around 1500, describes an extensive system for concealing secret messages within innocuous texts. On its surface, the book seems to be a magical text, and the initial reaction in the 16th century was so strong that Steganographia was only circulated privately until publication in 1606. But less than five years ago, Jim Reeds of AT&T Labs deciphered mysterious codes in the third volume, showing that Trithemius' work is more a treatise on cryptology than demonology. Reeds' fascinating account of the code breaking process is quite readable. One of Trithemius' schemes was to conceal messages in long invocations of the names of angels, with the secret message appearing as a pattern of letters within the words. For example, as every other letter in every other word: padiel aporsy mesarpon omeuas peludyn malpreaxo which reveals \"prymus apex.\" Another clever invention in Steganographia was the \"Ave Maria\" cipher. The book contains a series of tables, each of which has a list of words, one per letter. To code a message, the message letters are replaced by the corresponding words. If the tables are used in order, one table per letter, then the coded message will appear to be an innocent prayer. 
The earliest actual book on steganography was a four hundred page work written by Gaspari Schott in 1665 and called Steganographica. Although most of the ideas came from Trithemius, it was a start. Further development in the field occurred in 1883, with the publication of Auguste Kerchoffs’ Cryptographie militaire. Although this work was mostly about cryptography, it describes some principles that are worth keeping in mind when designing a new steganographic system.", "title": "" }, { "docid": "436369a1187f436290ae9b61f3e9ee1e", "text": "In this paper we propose a sub-band energy based end-of-utterance algorithm that is capable of detecting the time instant when the user has stopped speaking. The proposed algorithm finds the time instant at which enough sub-band spectral energy trajectories fall and stay for a pre-defined fixed time below adaptive thresholds, i.e. a non-speech period is detected after the end of the utterance. With the proposed algorithm a practical speech recognition system can give timely feedback for the user, thereby making the behaviour of the speech recognition system more predictable and similar across different usage environments and noise conditions. The proposed algorithm is shown to be more accurate and noise robust than the previously proposed approaches. Experiments with both isolated command word recognition and continuous digit recognition in various noise conditions verify the viability of the proposed approach with an average proper end-of-utterance detection rate of around 94% in both cases, representing 43% error rate reduction over the most competitive previously published method.", "title": "" }, { "docid": "49f1d3ebaf3bb3e575ac3e40101494d9", "text": "This paper discusses the current status of research on fraud detection undertaken as part of the European Commission-funded ACTS ASPECT (Advanced Security for Personal Communications Technologies) project, by Royal Holloway University of London. Using a recurrent neural network technique, we uniformly distribute prototypes over Toll Tickets sampled from the U.K. network operator, Vodafone. The prototypes, which continue to adapt to cater for seasonal or long term trends, are used to classify incoming Toll Tickets to form statistical behaviour profiles covering both the short and long-term past. These behaviour profiles, maintained as probability distributions, comprise the input to a differential analysis utilising a measure known as the Hellinger distance [5] between them as an alarm criterion. Fine tuning the system to minimise the number of false alarms poses a significant task due to the low fraudulent/non fraudulent activity ratio. We benefit from using unsupervised learning in that no fraudulent examples are required for training. This is very relevant considering the currently secure nature of GSM where fraud scenarios, other than Subscription Fraud, have yet to manifest themselves. It is the aim of ASPECT to be prepared for the would-be fraudster for both GSM and UMTS. Introduction: When a mobile originated phone call is made or various inter-call criteria are met, the cells or switches that a mobile phone is communicating with produce information pertaining to the call attempt. These data records, for billing purposes, are referred to as Toll Tickets. Toll Tickets contain a wealth of information about the call so that charges can be made to the subscriber. By considering well studied fraud indicators these records can also be used to detect fraudulent activity. 
By this we mean interrogating a series of recent Toll Tickets and comparing a function of the various fields with fixed criteria, known as triggers. A trigger, if activated, raises an alert status which cumulatively would lead to investigation by the network operator. Some example fraud indicators are that of a new subscriber making long back-to-back international calls being indicative of direct call selling or short back-to-back calls to a single land number indicating an attack on a PABX system. Sometimes geographical information deduced from the cell sites visited in a call can indicate cloning. This can be detected through setting a velocity trap. Fixed trigger criteria can be set to catch such extremes of activity, but these absolute usage criteria cannot trap all types of fraud. An alternative approach to the problem is to perform a differential analysis. Here we develop behaviour profiles relating to the mobile phone’s activity and compare its most recent activities with a longer history of its usage. Techniques can then be derived to determine when the mobile phone’s behaviour changes significantly. One of the most common indicators of fraud is a significant change in behaviour. The performance expectations of such a system must be of prime concern when developing any fraud detection strategy. To implement a real time fraud detection tool on the Vodafone network in the U.K., it was estimated that, on average, the system would need to be able to process around 38 Toll Tickets per second. This figure varied with peak and off-peak usage and also had seasonal trends. The distribution of the times that calls are made and the duration of each call is highly skewed. Considering all calls that are made in the U.K., including the use of supplementary services, we found the average call duration to be less than eight seconds, hardly time to order a pizza. In this paper we present one of the methods developed under ASPECT that tackles the problem of skewed distributions and seasonal trends using a recurrent neural network technique that is based around unsupervised learning. We envisage this technique would form part of a larger fraud detection suite that also comprises a rule based fraud detection tool and a neural network fraud detection tool that uses supervised learning on a multi-layer perceptron. Each of the systems has its strengths and weaknesses but we anticipate that the hybrid system will combine their strengths.", "title": "" } ]
scidocsrr
24ace342e14da55eed4eaf17c8b148a7
Kinect v2 Sensor-Based Mobile Terrestrial Laser Scanner for Agricultural Outdoor Applications
[ { "docid": "5cd68b483657180231786dc5a3407c85", "text": "The ability of robotic systems to autonomously understand and/or navigate in uncertain environments is critically dependent on fairly accurate strategies, which are not always optimally achieved due to effectiveness, computational cost, and parameter settings. In this paper, we propose a novel and simple adaptive strategy to increase the efficiency and drastically reduce the computational effort in particle filters (PFs). The purpose of the adaptive approach (dispersion-based adaptive particle filter - DAPF) is to provide higher number of particles during the initial searching state (when the localization presents greater uncertainty) and fewer particles during the subsequent state (when the localization exhibits less uncertainty). With the aim of studying the dynamical PF behavior regarding others and putting the proposed algorithm into practice, we designed a methodology based on different target applications and a Kinect sensor. The various experiments conducted for both color tracking and mobile robot localization problems served to demonstrate that the DAPF algorithm can be further generalized. As a result, the DAPF approach significantly improved the computational performance over two well-known filtering strategies: 1) the classical PF with fixed particle set sizes, and 2) the adaptive technique named Kullback-Leiber distance.", "title": "" } ]
[ { "docid": "f0d3a2b2f3ca6223cab0e222da21fb54", "text": "We present a comprehensive study of evaluation methods for unsupervised embedding techniques that obtain meaningful representations of words from text. Different evaluations result in different orderings of embedding methods, calling into question the common assumption that there is one single optimal vector representation. We present new evaluation techniques that directly compare embeddings with respect to specific queries. These methods reduce bias, provide greater insight, and allow us to solicit data-driven relevance judgments rapidly and accurately through crowdsourcing.", "title": "" }, { "docid": "c3cc032538a10ab2f58ff45acb6d16d0", "text": "How does scientific research affect the world around us? Being able to answer this question is of great importance in order to appropriately channel efforts and resources in science. The impact by scientists in academia is currently measured by citation based metrics such as h-index, i-index and citation counts. These academic metrics aim to represent the dissemination of knowledge among scientists rather than the impact of the research on the wider world. In this work we are interested in measuring scientific impact beyond academia, on the economy, society, health and legislation (comprehensive impact). Indeed scientists are asked to demonstrate evidence of such comprehensive impact by authoring case studies in the context of the Research Excellence Framework (REF). We first investigate the extent to which existing citation based metrics can be indicative of comprehensive impact. We have collected all recent REF impact case studies from 2014 and we have linked these to papers in citation networks that we constructed and derived from CiteSeerX, arXiv and PubMed Central using a number of text processing and information retrieval techniques. We have demonstrated that existing citation-based metrics for impact measurement do not correlate well with REF impact results. We also consider metrics of online attention surrounding scientific works, such as those provided by the Altmetric API. We argue that in order to be able to evaluate wider non-academic impact we need to mine information from a much wider set of resources, including social media posts, press releases, news articles and political debates stemming from academic work. We also provide our data as a free and reusable collection for further analysis, including the PubMed citation network and the correspondence between REF case studies, grant applications and the academic literature.", "title": "" }, { "docid": "a38ccb15c9fed692ca72c162a5205694", "text": "In this paper, we utilize tags in Twitter (the hashtags) as an indicator of events. We first study the properties of hashtags for event detection. Based on several observations, we proposed three attributes of hashtags, including (1) instability for temporal analysis, (2) Twitter meme possibility to distinguish social events from virtual topics or memes, and (3) authorship entropy for mining the most contributed authors. Based on these attributes, breaking events are discovered with hashtags, which cover a wide range of social events among different languages in the real world.", "title": "" }, { "docid": "b269bb721ca2a75fd6291295493b7af8", "text": "This publication contains reprint articles for which IEEE does not hold copyright. 
Full text is not available on IEEE Xplore for these articles.", "title": "" }, { "docid": "09eb96a9be1c8ee56503881e0fd936d5", "text": "Essential oils are volatile, natural, complex mixtures of compounds characterized by a strong odour and formed by aromatic plants as secondary metabolites. The chemical composition of the essential oil obtained by hydrodistillation from the whole plant of Pulicaria inuloides grown in Yemen and collected at full flowering stage were analyzed by Gas chromatography-Mass spectrometry (GC-MS). Several oil components were identified based upon comparison of their mass spectral data with those of reference compounds. The main components identified in the oil were 47.34% of 2-Cyclohexen-1-one, 2-methyl-5-(1-methyl with Hexadecanoic acid (CAS) (12.82%) and Ethane, 1,2-diethoxy(9.613%). In this study, mineral contents of whole plant of P. inuloides were determined by atomic absorption spectroscopy. Highest level of K, Mg, Na, Fe and Ca of 159.5, 29.5, 14.2, 13.875 and 5.225 mg/100 g were found in P. inuloides.", "title": "" }, { "docid": "7b82678399bf90fd3b08e85f5a3fc39d", "text": "Language and vision provide complementary information. Integrating both modalities in a single multimodal representation is an unsolved problem with wide-reaching applications to both natural language processing and computer vision. In this paper, we present a simple and effective method that learns a language-to-vision mapping and uses its output visual predictions to build multimodal representations. In this sense, our method provides a cognitively plausible way of building representations, consistent with the inherently reconstructive and associative nature of human memory. Using seven benchmark concept similarity tests we show that the mapped (or imagined) vectors not only help to fuse multimodal information, but also outperform strong unimodal baselines and state-of-the-art multimodal methods, thus exhibiting more human-like judgments. Ultimately, the present work sheds light on fundamental questions of natural language understanding concerning the fusion of vision and language such as the plausibility of more associative and reconstructive approaches.", "title": "" }, { "docid": "350c899dbd0d9ded745b70b6f5e97d19", "text": "We propose an approach to include contextual features for labeling images, in which each pixel is assigned to one of a finite set of labels. The features are incorporated into a probabilistic framework, which combines the outputs of several components. Components differ in the information they encode. Some focus on the image-label mapping, while others focus solely on patterns within the label field. Components also differ in their scale, as some focus on fine-resolution patterns while others on coarser, more global structure. A supervised version of the contrastive divergence algorithm is applied to learn these features from labeled image data. We demonstrate performance on two real-world image databases and compare it to a classifier and a Markov random field.", "title": "" }, { "docid": "55054ba2753651c2f7fc93d1448e0cfe", "text": "There is an industry need for wideband baluns to operate across several decades of bandwidth covering the HF, VHF, and UHF spectrum. For readers unfamiliar with the term \"balun,\" it is a compound word that combines the terms balanced and unbalanced. This is in reference to the conversion between a balanced source and an unbalanced load, often requiring an impedance transformation of some type. 
It's common in literature to see the terms \"balanced\" and \"unbalanced\" used interchangeably with the terms \"differential\" and \"single-ended,\" and this article will also share this naming convention. These devices are particularly useful in network matching applications and can be constructed at low cost and a relatively small bill of materials. Wideband baluns first found widespread use converting the balanced load of a dipole antenna to the unbalanced output of a single-ended amplifier. These devices can also be found in solid-state differential circuits such as amplifiers and mixers where network matching is required to achieve the maximum power transfer to the load. In the design of RF power amplifiers, wideband baluns play a critical role in an amplifier's performance, including its input and output impedances, gain flatness, linearity, power efficiency, and many other performance characteristics. This article describes the theory of operation, design procedure, and measured results of the winning wideband balun presented at the 2013 IEEE Microwave Theory and Techniques Society (MTT-S) International Microwave Symposium (IMS2013), sponsored by the MTT-17 Technical Coordinating Committee on HF-VHF-UHF technology. The wideband balun was designed to deliver a 4:1 impedance transformation, converting a balanced 100 Ω source to an unbalanced 25 Ω load. It was constructed using a multiaperture ferrite core and a pair of bifilar wires with four parallel turns.", "title": "" }, { "docid": "cda00f4a71564c5dc1ebb99a26d41dbb", "text": "A new therapeutic approach to the rehabilitation of movement after stroke, termed constraint-induced (CI) movement therapy, has been derived from basic research with monkeys given somatosensory deafferentation. CI movement therapy consists of a family of therapies; their common element is that they induce stroke patients to greatly increase the use of an affected upper extremity for many hours a day over a period of 10 to 14 consecutive days. The signature intervention involves motor restriction of the contralateral upper extremity in a sling and training of the affected arm. The therapies result in large changes in amount of use of the affected arm in the activities of daily living outside of the clinic that have persisted for the 2 years measured to date. Patients who will benefit from CI therapy can be identified before the beginning of treatment.", "title": "" }, { "docid": "dc33d2edcfb124af607bcb817589f6e9", "text": "In this letter, a novel coaxial line to substrate integrated waveguide (SIW) broadband transition is presented. The transition is designed by connecting the inner conductor of a coaxial line to an open-circuited SIW. The configuration directly transforms the TEM mode of a coaxial line to the fundamental TE10 mode of the SIW. A prototype back-to-back transition is fabricated for X-band operation using a 0.508 mm thick RO 4003C substrate with dielectric constant 3.55. Comparison with other reported transitions shows that the present structure provides lower passband insertion loss, wider bandwidth, and the most compact size. The area of each transition is 0.08λg² where λg is the guided wavelength at passband center frequency of f0 = 10.5 GHz. Measured 15 dB and 20 dB matching bandwidths are over 48% and 20%, respectively, at f0.", "title": "" }, { "docid": "dd40063dd10027f827a65976261c8683", "text": "Many software process methods and tools presuppose the existence of a formal model of a process. 
Unfortunately, developing a formal model for an on-going, complex process can be difficult, costly, and error prone. This presents a practical barrier to the adoption of process technologies, which would be lowered by automated assistance in creating formal models. To this end, we have developed a data analysis technique that we term process discovery. Under this technique, data describing process events are first captured from an on-going process and then used to generate a formal model of the behavior of that process. In this article we describe a Markov method that we developed specifically for process discovery, as well as describe two additional methods that we adopted from other domains and augmented for our purposes. The three methods range from the purely algorithmic to the purely statistical. We compare the methods and discuss their application in an industrial case study.", "title": "" }, { "docid": "22348f1441faa116cce4b05c45848380", "text": "In this paper we propose a method for matching the scales of 3D point clouds. 3D point sets of the same scene obtained by 3D reconstruction techniques usually differ in scale. To match scales, we estimate the ratio of scales of two given 3D point clouds. By performing PCA of spin images over different scales of two point clouds, two sets of cumulative contribution rate curves are generated. Such sets of curves can be considered to characterize the scale of the given 3D point clouds. To find the scale ratio of two point clouds, we register the two sets of curves by using a variant of ICP that estimates the ratio of scales. Simulations with the Stanford bunny and experimental results with 3D reconstructions of artificial and real scenes demonstrate that the ratio of any 3D point clouds can be effectively used for scale matching.", "title": "" }, { "docid": "70a94ef8bf6750cdb4603b34f0f1f005", "text": "What does this paper demonstrate. We show that a very simple 2D architecture (in the sense that it does not make any assumption or reasoning about the 3D information of the object) generally used for object classification, if properly adapted to the specific task, can provide top performance also for pose estimation. More specifically, we demonstrate how a 1-vs-all classification framework based on a Fisher Vector (FV) [1] pyramid or convolutional neural network (CNN) based features [2] can be used for pose estimation. In addition, suppressing neighboring viewpoints during training seems key to get good results.", "title": "" }, { "docid": "cd98932832d8821a98032ae6bbef2576", "text": "An open-loop stereophonic acoustic echo suppression (SAES) method without preprocessing is presented for teleconferencing systems, where the Wiener filter in the short-time Fourier transform (STFT) domain is employed. Instead of identifying the echo path impulse responses with adaptive filters, the proposed algorithm estimates the echo spectra from the stereo signals using two weighting functions. The spectral modification technique originally proposed for noise reduction is adopted to remove the echo from the microphone signal. Moreover, a priori signal-to-echo ratio (SER) based Wiener filter is used as the gain function to achieve a trade-off between musical noise reduction and computational load for real-time operations. 
Computer simulation shows the effectiveness and the robustness of the proposed method in several different scenarios.", "title": "" }, { "docid": "4f059822d0da0ada039b11c1d65c7c32", "text": "Lead time reduction is a key concern of many industrial buyers of capital facilities given current economic conditions. Supply chain initiatives in manufacturing settings have led owners to expect that dramatic reductions in lead time are possible in all phases of their business, including the delivery of capital materials. Further, narrowing product delivery windows and increasing pressure to be first-to-market create significant external pressure to reduce lead time. In this paper, a case study is presented in which an owner entered the construction supply chain to procure and position key long-lead materials. The materials were held at a position in the supply chain selected to allow some flexibility for continued customization, but dramatic reduction in the time-to-site. Simulation was used as a tool to consider time-to-site tradeoffs for multiple inventory locations so as to better match the needs of the construction effort.", "title": "" }, { "docid": "156b2c39337f4fe0847b49fa86dc094b", "text": "The paper attempts to describe the space of possible mind designs by first equating all minds to software. Next it proves some properties of the mind design space such as infinitude of minds, size and representation complexity of minds. A survey of mind design taxonomies is followed by a proposal for a new field of investigation devoted to study of minds, intellectology.", "title": "" }, { "docid": "2d774ec62cdac08997cb8b86e73fe015", "text": "This paper focuses on modeling, resolving, and simulation of the inverse kinematics of an anthropomorphic redundant robotic structure with seven degrees of freedom and a workspace similar to human arm. Also the kinematical model and the kinematics equations of the robotic arm are presented. A method of resolving the redundancy of seven degrees of freedom robotic arm is presented using Fuzzy Logic toolbox from MATLAB®.", "title": "" }, { "docid": "5c96222feacb0454d353dcaa1f70fb83", "text": "Geographically dispersed teams are rarely 100% dispersed. However, by focusing on teams that are either fully dispersed or fully co-located, team research to date has lived on the ends of a spectrum at which relatively few teams may actually work. In this paper, we develop a more robust view of geographic dispersion in teams. Specifically, we focus on the spatial-temporal distances among team members and the configuration of team members across sites (independent of the spatial and temporal distances separating those sites). To better understand the nature of dispersion, we develop a series of five new measures and explore their relationships with communication frequency data from a sample of 182 teams (of varying degrees of dispersion) from a Fortune 500 telecommunications firm. We conclude with recommendations regarding the use of different measures and important questions that they could help address.", "title": "" }, { "docid": "7c6d2ede54f0445e852b8f9da95fca32", "text": "In this paper we apply Conformal Prediction (CP) to the k -Nearest Neighbours Regression (k -NNR) algorithm and propose ways of extending the typical nonconformity measure used for regression so far. Unlike traditional regression methods which produce point predictions, Conformal Predictors output predictive regions that satisfy a given confidence level. 
The regions produced by any Conformal Predictor are automatically valid, however their tightness and therefore usefulness depends on the nonconformity measure used by each CP. In effect a nonconformity measure evaluates how strange a given example is compared to a set of other examples based on some traditional machine learning algorithm. We define six novel nonconformity measures based on the k -Nearest Neighbours Regression algorithm and develop the corresponding CPs following both the original (transductive) and the inductive CP approaches. A comparison of the predictive regions produced by our measures with those of the typical regression measure suggests that a major improvement in terms of predictive region tightness is achieved by the new measures.", "title": "" }, { "docid": "006793685095c0772a1fe795d3ddbd76", "text": "Legislators, designers of legal information systems, as well as citizens face often problems due to the interdependence of the laws and the growing number of references needed to interpret them. In this paper, we introduce the ”Legislation Network” as a novel approach to address several quite challenging issues for identifying and quantifying the complexity inside the Legal Domain. We have collected an extensive data set of a more than 60-year old legislation corpus, as published in the Official Journal of the European Union, and we further analysed it as a complex network, thus gaining insight into its topological structure. Among other issues, we have performed a temporal analysis of the evolution of the Legislation Network, as well as a robust resilience test to assess its vulnerability under specific cases that may lead to possible breakdowns. Results are quite promising, showing that our approach can lead towards an enhanced explanation in respect to the structure and evolution of legislation properties.", "title": "" } ]
scidocsrr
d7ce4517a8cd27f74a65cfabfe120039
LightBox: SGX-assisted Secure Network Functions at Near-native Speed
[ { "docid": "2f2801e502492a648a0758b6e33fe19d", "text": "Intel is developing the Intel® Software Guard Extensions (Intel® SGX) technology, an extension to Intel® Architecture for generating protected software containers. The container is referred to as an enclave. Inside the enclave, software’s code, data, and stack are protected by hardware enforced access control policies that prevent attacks against the enclave’s content. In an era where software and services are deployed over the Internet, it is critical to be able to securely provision enclaves remotely, over the wire or air, to know with confidence that the secrets are protected and to be able to save secrets in non-volatile memory for future use. This paper describes the technology components that allow provisioning of secrets to an enclave. These components include a method to generate a hardware based attestation of the software running inside an enclave and a means for enclave software to seal secrets and export them outside of the enclave (for example store them in non-volatile memory) such that only the same enclave software would be able un-seal them back to their original form.", "title": "" }, { "docid": "25a28d9319013ef1a38823d273098ebb", "text": "Many systems run rich analytics on sensitive data in the cloud, but are prone to data breaches. Hardware enclaves promise data confidentiality and secure execution of arbitrary computation, yet still suffer from access pattern leakage. We propose Opaque, a distributed data analytics platform supporting a wide range of queries while providing strong security guarantees. Opaque introduces new distributed oblivious relational operators that hide access patterns, and new query planning techniques to optimize these new operators. Opaque is implemented on Spark SQL with few changes to the underlying system. Opaque provides data encryption, authentication and computation verification with a performance ranging from 52% faster to 3.3x slower as compared to vanilla Spark SQL; obliviousness comes with a 1.6–46x overhead. Opaque provides an improvement of three orders of magnitude over state-of-the-art oblivious protocols, and our query optimization techniques improve performance by 2–5x.", "title": "" } ]
[ { "docid": "502d31f5f473f3e93ee86bdfd79e0d75", "text": "The call-by-need lambda calculus provides an equational framework for reasoning syntactically about lazy evaluation. This paper examines its operational characteristics.\n By a series of reasoning steps, we systematically unpack the standard-order reduction relation of the calculus and discover a novel abstract machine definition which, like the calculus, goes \"under lambdas.\" We prove that machine evaluation is equivalent to standard-order evaluation.\n Unlike traditional abstract machines, delimited control plays a significant role in the machine's behavior. In particular, the machine replaces the manipulation of a heap using store-based effects with disciplined management of the evaluation stack using control-based effects. In short, state is replaced with control.\n To further articulate this observation, we present a simulation of call-by-need in a call-by-value language using delimited control operations.", "title": "" }, { "docid": "0cd96187b257ee09060768650432fe6d", "text": "Sustainable urban mobility is an important dimension in a Smart City, and one of the key issues for city sustainability. However, innovative and often costly mobility policies and solutions introduced by cities are liable to fail, if not combined with initiatives aimed at increasing the awareness of citizens, and promoting their behavioural change. This paper explores the potential of gamification mechanisms to incentivize voluntary behavioural changes towards sustainable mobility solutions. We present a service-based gamification framework, developed within the STREETLIFE EU Project, which can be used to develop games on top of existing services and systems within a Smart City, and discuss the empirical findings of an experiment conducted in the city of Rovereto on the effectiveness of gamification to promote sustainable urban mobility.", "title": "" }, { "docid": "69b5c883c7145d2184f77c92e61b2542", "text": "WiFi network traffics will be expected to increase sharply in the coming years, since WiFi network is commonly used for local area connectivity. Unfortunately, there are difficulties in WiFi network research beforehand, since there is no common dataset between researchers on this area. Recently, AWID dataset was published as a comprehensive WiFi network dataset, which derived from real WiFi traces. The previous work on this AWID dataset was unable to classify Impersonation Attack sufficiently. Hence, we focus on optimizing the Impersonation Attack detection. Feature selection can overcome this problem by selecting the most important features for detecting an arbitrary class. We leverage Artificial Neural Network (ANN) for the feature selection and apply Stacked Auto Encoder (SAE), a deep learning algorithm as a classifier for AWID Dataset. Our experiments show that the reduced input features have significantly improved to detect the Impersonation Attack.", "title": "" }, { "docid": "43f200b97e2b6cb9c62bbbe71bed72e3", "text": "We compare nonreturn-to-zero (NRZ) with return-to-zero (RZ) modulation format for wavelength-division-multiplexed systems operating at data rates up to 40 Gb/s. We find that in 10-40-Gb/s dispersion-managed systems (single-mode fiber alternating with dispersion compensating fiber), NRZ is more adversely affected by nonlinearities, whereas RZ is more affected by dispersion. In this dispersion map, 10- and 20-Gb/s systems operate better using RZ modulation format because nonlinearity dominates. 
However, 40-Gb/s systems favor the usage of NRZ because dispersion becomes the key limiting factor at 40 Gb/s.", "title": "" }, { "docid": "040c577ee6146a72edfd664b9d6aa1ae", "text": "We focus on the role that community plays in the continuum of disaster preparedness, response and recovery, and we explore where community fits in conceptual frameworks concerning disaster decision-making. We offer an overview of models developed in the literature as well as insights drawn from research related to Hurricane Katrina. Each model illustrates some aspect of the spectrum of disaster preparedness and recovery, beginning with risk perception and vulnerability assessments, and proceeding to notions of resiliency and capacity building. Concepts like social resilience are related to theories of ‘‘social capital,’’ which stress the importance of social networks, reciprocity, and interpersonal trust. These allow individuals and groups to accomplish greater things than they could by their isolated efforts. We trace two contrasting notions of community to Tocqueville. On the one hand, community is simply an aggregation of individual persons, that is, a population. As individuals, they have only limited capacity to act effectively or make decisions for themselves, and they are strongly subject to administrative decisions that authorities impose on them. On the other hand, community is an autonomous actor, with its own interests, preferences, resources, and capabilities. This definition of community has also been embraced by community-based participatory researchers and has been thought to offer an approach that is more active and advocacy oriented. We conclude with a discussion of the strengths and weaknesses of community in disaster response and in disaster research.", "title": "" }, { "docid": "0552c786fe0030df69b2095d78c20485", "text": "In recent years, real-time processing and analytics systems for big data--in the context of Business Intelligence (BI)--have received a growing attention. The traditional BI platforms that perform regular updates on daily, weekly or monthly basis are no longer adequate to satisfy the fast-changing business environments. However, due to the nature of big data, it has become a challenge to achieve the real-time capability using the traditional technologies. The recent distributed computing technology, MapReduce, provides off-the-shelf high scalability that can significantly shorten the processing time for big data; Its open-source implementation such as Hadoop has become the de-facto standard for processing big data, however, Hadoop has the limitation of supporting real-time updates. The improvements in Hadoop for the real-time capability, and the other alternative real-time frameworks have been emerging in recent years. This paper presents a survey of the open source technologies that support big data processing in a real-time/near real-time fashion, including their system architectures and platforms.", "title": "" }, { "docid": "dcf4278becbc530d9648b5df4a64ec53", "text": "Variable speed operation is essential for large wind turbines in order to optimize the energy capture under variable wind speed conditions. Variable speed wind turbines require a power electronic interface converter to permit connection with the grid. The power electronics can be either partially-rated or fully-rated [1]. A popular interface method for large wind turbines that is based on a partiallyrated interface is the doubly-fed induction generator (DFIG) system [2]. 
In the DFIG system, the power electronic interface controls the rotor currents in order to control the electrical torque and thus the rotational speed. Because the power electronics only process the rotor power, which is typically less than 25% of the overall output power, the DFIG offers the advantages of speed control for a reduction in cost and power losses. This report presents a DFIG wind turbine system that is modeled in PLECS and Simulink. A full electrical model that includes the switching converter implementation for the rotor-side power electronics and a dq model of the induction machine is given. The aerodynamics of the wind turbine and the mechanical dynamics of the induction machine are included to extend the use of the model to simulating system operation under variable wind speed conditions. For longer simulations that include these slower mechanical and wind dynamics, an averaged PWM converter model is presented. The averaged electrical model offers improved simulation speed at the expense of neglecting converter switching detail.", "title": "" }, { "docid": "28f1b7635b777cf278cc8d53a5afafb9", "text": "Visual Question Answering (VQA) is the task of taking as input an image and a free-form natural language question about the image, and producing an accurate answer. In this work we view VQA as a “feature extraction” module to extract image and caption representations. We employ these representations for the task of image-caption ranking. Each feature dimension captures (imagines) whether a fact (question-answer pair) could plausibly be true for the image and caption. This allows the model to interpret images and captions from a wide variety of perspectives. We propose score-level and representation-level fusion models to incorporate VQA knowledge in an existing state-of-the-art VQA-agnostic image-caption ranking model. We find that incorporating and reasoning about consistency between images and captions significantly improves performance. Concretely, our model improves state-of-the-art on caption retrieval by 7.1% and on image retrieval by 4.4% on the MSCOCO dataset.", "title": "" }, { "docid": "9514201894e516d888c593dbade709bc", "text": "Code obfuscation is a technique to transform a program into an equivalent one that is harder to be reverse engineered and understood. On Android, well-known obfuscation techniques are shrinking, optimization, renaming, string encryption, control flow transformation, etc. On the other hand, adversaries may also maliciously use obfuscation techniques to hide pirated or stolen software. If pirated software were obfuscated, it would be difficult to detect software theft. To detect illegal software transformed by code obfuscation, one possible approach is to measure software similarity between original and obfuscated programs and determine whether the obfuscated version is an illegal copy of the original version. In this paper, we analyze empirically the effects of code obfuscation on Android app similarity analysis. The empirical measurements were done on five different Android apps with DashO obfuscator. 
Experimental results show that similarity measures at bytecode level are more effective than those at source code level to analyze software similarity.", "title": "" }, { "docid": "674d347526e5ea2677eec2f2b816935b", "text": "PATIENT\nMale, 70 • Male, 84.\n\n\nFINAL DIAGNOSIS\nAppendiceal mucocele and pseudomyxoma peritonei.\n\n\nSYMPTOMS\n-.\n\n\nMEDICATION\n-.\n\n\nCLINICAL PROCEDURE\n-.\n\n\nSPECIALTY\nSurgery.\n\n\nOBJECTIVE\nRare disease.\n\n\nBACKGROUND\nMucocele of the appendix is an uncommon cystic lesion characterized by distension of the appendiceal lumen with mucus. Most commonly, it is the result of epithelial proliferation, but it can also be caused by inflammation or obstruction of the appendix. When an underlying mucinous cystadenocarcinoma exists, spontaneous or iatrogenic rupture of the mucocele can lead to mucinous intraperitoneal ascites, a syndrome known as pseudomyxoma peritonei.\n\n\nCASE REPORT\nWe report 2 cases that represent the clinical extremities of this heterogeneous disease; an asymptomatic mucocele of the appendix in a 70-year-old female and a case of pseudomyxoma peritonei in an 84-year-old male. Subsequently, we review the current literature focusing to the optimal management of both conditions.\n\n\nCONCLUSIONS\nMucocele of the appendix is a rare disease, usually diagnosed on histopathologic examination of appendectomized specimens. Due to the existing potential for malignant transformation and pseudomyxoma peritonei caused by rupture of the mucocele, extensive preoperative evaluation and thorough intraoperative gastrointestinal and peritoneal examination is required.", "title": "" }, { "docid": "f38530be19fc66121fbce56552ade0ea", "text": "A fully integrated low-dropout-regulated step-down multiphase-switched-capacitor DC-DC converter (a.k.a. charge pump, CP) with a fast-response adaptive-phase (Fast-RAP) digital controller is designed using a 65-nm CMOS process. Different from conventional designs, a low-dropout regulator (LDO) with an NMOS power stage is used without the need for an additional stepup CP for driving. A clock tripler and a pulse divider are proposed to enable the Fast-RAP control. As the Fast-RAP digital controller is designed to be able to respond faster than the cascaded linear regulator, transient response will not be affected by the adaptive scheme. Thus, light-load efficiency is improved without sacrificing the response time. When the CP operates at 90 MHz with 80.3% CP efficiency, only small ripples would appear on the CP output with the 18-phase interleaving scheme, and be further attenuated at VOUT by the 50-mV dropout regulator with only 4.1% efficiency overhead and 6.5% area overhead. The output ripple is less than 2 mV for a load current of 20 mA.", "title": "" }, { "docid": "f515695b3d404d29a12a5e8e58a91fc0", "text": "One area of positive psychology analyzes subjective well-being (SWB), people's cognitive and affective evaluations of their lives. Progress has been made in understanding the components of SWB, the importance of adaptation and goals to feelings of well-being, the temperament underpinnings of SWB, and the cultural influences on well-being. 
Representative selection of respondents, naturalistic experience sampling measures, and other methodological refinements are now used to study SWB and could be used to produce national indicators of happiness.", "title": "" }, { "docid": "1b5655b91ccd844b5925d329456e3de8", "text": "In this paper we address the problem of grounding distributional representations of lexical meaning. We introduce a new model which uses stacked autoencoders to learn higher-level embeddings from textual and visual input. The two modalities are encoded as vectors of attributes and are obtained automatically from text and images, respectively. We evaluate our model on its ability to simulate similarity judgments and concept categorization. On both tasks, our approach outperforms baselines and related models.", "title": "" }, { "docid": "f14f6d95f13ca6f92fe14c59e3ad0c81", "text": "The ever-increasing representativeness of software maintenance in the daily effort of software team requires initiatives for enhancing the activities accomplished to provide a good service for users who request a software improvement. This article presents a quantitative approach for evaluating software maintenance services based on cluster analysis techniques. The proposed approach provides a compact characterization of the services delivered by a maintenance organization, including characteristics such as service, waiting, and queue time. The ultimate goal is to help organizations to better understand, manage, and improve their current software maintenance process. We also report in this paper the usage of the proposed approach in a medium-sized organization throughout 2010. This case study shows that 72 software maintenance requests can be grouped in seven distinct clusters containing requests with similar characteristics. The in-depth analysis of the clusters found with our approach can foster the understanding of the nature of the requests and, consequently, it may improve the process followed by the software maintenance team.", "title": "" }, { "docid": "ac8a620e752144e3f4e20c16efb56ebc", "text": "or as ventricular fibrillation, the circulation must be restored promptly; otherwise anoxia will result in irreversible damage. There are two techniques that may be used to meet the emergency: one is to open the chest and massage the heart directly and the other is to accomplish the same end by a new method of closed-chest cardiac massage. The latter method is described in this communication. The closed-chest alternating current defibrillator ' that", "title": "" }, { "docid": "a387781a96a39448ca22b49154aaf80c", "text": "LEGO is a globally popular toy composed of colorful interlocking plastic bricks that can be assembled in many ways; however, this special feature makes designing a LEGO sculpture particularly challenging. Building a stable sculpture is not easy for a beginner; even an experienced user requires a good deal of time to build one. This paper provides a novel approach to creating a balanced LEGO sculpture for a 3D model in any pose, using centroid adjustment and inner engraving. First, the input 3D model is transformed into a voxel data structure. Next, the model’s centroid is adjusted to an appropriate position using inner engraving to ensure that the model stands stably. A model can stand stably without any struts when the center of mass is moved to the ideal position. Third, voxels are merged into layer-by-layer brick layout assembly instructions. 
Finally, users will be able to build a LEGO sculpture by following these instructions. The proposed method is demonstrated with a number of LEGO sculptures and the results of the physical experiments are presented.", "title": "" }, { "docid": "37af5d5ee2e4f6b94aa5c93d12f98017", "text": "This paper reviews prior research in management accounting innovations covering the period 1926-2008. Management accounting innovations refer to the adoption of “newer” or modern forms of management accounting systems such as activity-based costing, activity-based management, time-driven activity-based costing, target costing, and balanced scorecards. Although some prior reviews, covering the period until 2000, place emphasis on modern management accounting techniques, however, we believe that the time gap between 2000 and 2008 could entail many new or innovative accounting issues. We find that research in management accounting innovations has intensified during the period 2000-2008, with the main focus has been on explaining various factors associated with the implementation and the outcome of an innovation. In addition, research in management accounting innovations indicates the dominant use of sociological-based theories and increasing use of field studies. We suggest some directions for future research pertaining to management accounting innovations.", "title": "" }, { "docid": "0e514c165e362de91764f3ddd2a09e15", "text": "The authors examined how networks of teams integrate their efforts to succeed collectively. They proposed that integration processes used to align efforts among multiple teams are important predictors of multiteam performance. The authors used a multiteam system (MTS) simulation to assess how both cross-team and within-team processes relate to MTS performance over multiple performance episodes that differed in terms of required interdependence levels. They found that cross-team processes predicted MTS performance beyond that accounted for by within-team processes. Further, cross-team processes were more important for MTS effectiveness when there were high cross-team interdependence demands as compared with situations in which teams could work more independently. Results are discussed in terms of extending theory and applications from teams to multiteam systems.", "title": "" }, { "docid": "62cc85ab7517797f50ce5026fbc5617a", "text": "OBJECTIVE\nTo assess for the first time the morphology of the lymphatic system in patients with lipedema and lipo-lymphedema of the lower extremities by MR lymphangiography.\n\n\nMATERIALS AND METHODS\n26 lower extremities in 13 consecutive patients (5 lipedema, 8 lipo-lymphedema) were examined by MR lymphangiography. 18 mL of gadoteridol and 1 mL of mepivacainhydrochloride 1% were subdivided into 10 portions and injected intracutaneously in the forefoot. MR imaging was performed with a 1.5-T system equipped with high-performance gradients. For MR lymphangiography, a 3D-spoiled gradient-echo sequence was used. For evaluation of the lymphedema a heavily T2-weighted 3D-TSE sequence was performed.\n\n\nRESULTS\nIn all 16 lower extremities (100%) with lipo-lymphedema, high signal intensity areas in the epifascial region could be detected on the 3D-TSE sequence. In the 16 examined lower extremities with lipo-lymphedema, 8 lower legs and 3 upper legs demonstrated enlarged lymphatic vessels up to a diameter of 3 mm. In two lower legs with lipo-lymphedema, an area of dermal back-flow was seen, indicating lymphatic outflow obstruction. 
In the 10 examined lower extremities with clinically pure lipedema, 4 lower legs and 2 upper legs demonstrated enlarged lymphatic vessels up to a diameter of 2 mm, indicating a subclinical status of lymphedema. In all examined extremities, the inguinal lymph nodes demonstrated a contrast material enhancement in the first image acquisition 15 min after injection.\n\n\nCONCLUSION\nMR lymphangiography is a safe and accurate minimal-invasive imaging modality for the evaluation of the lymphatic circulation in patients with lipedema and lipo-lymphedema of the lower extremities. If the extent of lymphatic involvement is unclear at the initial clinical examination or requires a better definition for optimal therapeutic planning, MR lymphangiography is able to identify the anatomic and physiological derangements and to establish an objective baseline.", "title": "" }, { "docid": "2b1649b47d2615f3e33c9506dabdc6c6", "text": "In 1994, amongst a tide of popular books on virtual reality, Grigore Burdea and Philippe Coiffet published a well researched review of the field. Their book, “Virtual Reality Technology,” was notable because it was the first to contain detailed information on force and tactile feedback, areas in which both the authors have conducted extensive research. The book became a classic, and although not intended as such was adopted as the textbook of choice for many university classes in virtual reality. This was due in part to its broad review of the virtual reality technologies based on a strong engineering and scientific focus. Almost ten years later and Burdea and Coiffet have returned with a second edition that builds on the success of the first. While the content of the second edition is largely the same as the first, with almost identical chapter headings, there is a change in focus towards making this more of an educational tool. From their introduction on, it is clear that the authors intend for this to be used as a textbook. Each chapter is filled with definitions, graphs and equations, and ends with a set of review questions. More significantly the book has an accompanying CD which contains a number of excellent video clips and a complete laboratory manual with instruction on how to build desktop VR interfaces using VRML and Java 3D libraries. The manual is a 120 page book with 18 programming assignments and further homework questions. This book provides the instructor with almost all the material they might need for a course in virtual reality. The content itself is well written and researched. The authors have taken the material of the first book and updated much of it to reflect a decade of growth in the VR field. A strong theme running through the book is the rising dominance of PC-based virtual reality platforms, particularly in the chapter on computing architectures. Readers will be exposed to discussion on graphics rendering pipelines, PC graphics architecture, and clusters. In the fast changing world of PC hardware some of the hardware mentioned has already become dated, but the content still gives an essential grounding in the technological principles. Discussion of hardware architectures is also complemented by chapters on input and display devices, modeling, and programming toolkits. These were also in the original addition, but have been updated to reflect the invention of devices such as the Phantom force-feedback arm, or new software toolkits such as Java 3D. 
Interestingly, rather than having a whole chapter on force feedback, this now becomes part of a more general chapter on output devices. Burdea’s own work on the Rutgers Master glove with force feedback is barely mentioned at all. As with any book on a field as rich as virtual reality it is impossible to cover all possible topics in significant depth. The authors handle this by providing hundreds of references to the relevant technical literature, enabling readers to study topics in as much depth as they are interested in. In the first book a separate bibliography and list of VR companies and laboratories was provided at the end of the book. In the second edition, references are provided at the end of each chapter. This makes each chapter more self-contained and suitable for studying in almost any order, once the introduction has been read. In this way the book provides an ideal introduction to a student or researcher who will want to know where to find out more. Despite its considerable strengths there are a number of weaknesses the authors might want to address when they produce a third edition. Some of these are minor. For example, the first edition had a collection of color photographs showing a variety of VR technologies and environments. Unfortunately these are missing from the second edition, and although the many black and white pictures are excellent, there are aspects of the technology that can be best understood by seeing it in color. As a teaching tool, it would have been good for the authors to provide more code samples on the enclosed CD.", "title": "" } ]
scidocsrr
885084d8bfceb6c2ec9ab84e86f3b502
Online Controlled Experiments and A/B Tests
[ { "docid": "c2c056ae22c22e2a87b9eca39d125cc2", "text": "The web provides an unprecedented opportunity to evaluate ideas quickly using controlled experiments, also called randomized experiments, A/B tests (and their generalizations), split tests, Control/Treatment tests, MultiVariable Tests (MVT) and parallel flights. Controlled experiments embody the best scientific design for establishing a causal relationship between changes and their influence on user-observable behavior. We provide a practical guide to conducting online experiments, where end-users can help guide the development of features. Our experience indicates that significant learning and return-on-investment (ROI) are seen when development teams listen to their customers, not to the Highest Paid Person’s Opinion (HiPPO). We provide several examples of controlled experiments with surprising results. We review the important ingredients of running controlled experiments, and discuss their limitations (both technical and organizational). We focus on several areas that are critical to experimentation, including statistical power, sample size, and techniques for variance reduction. We describe common architectures for experimentation systems and analyze their advantages and disadvantages. We evaluate randomization and hashing techniques, which we show are not as simple in practice as is often assumed. Controlled experiments typically generate large amounts of data, which can be analyzed using data mining techniques to gain deeper understanding of the factors influencing the outcome of interest, leading to new hypotheses and creating a virtuous cycle of improvements. Organizations that embrace controlled experiments with clear evaluation criteria can evolve their systems with automated optimizations and real-time analyses. Based on our extensive practical experience with multiple systems and organizations, we share key lessons that will help practitioners in running trustworthy controlled experiments.", "title": "" } ]
[ { "docid": "8da8ecae2ae9f49135dd3480992069f0", "text": "In this paper, we investigate the use of decentralized blockchain mechanisms for delivering transparent, secure, reliable, and timely energy flexibility, under the form of adaptation of energy demand profiles of Distributed Energy Prosumers, to all the stakeholders involved in the flexibility markets (Distribution System Operators primarily, retailers, aggregators, etc.). In our approach, a blockchain based distributed ledger stores in a tamper proof manner the energy prosumption information collected from Internet of Things smart metering devices, while self-enforcing smart contracts programmatically define the expected energy flexibility at the level of each prosumer, the associated rewards or penalties, and the rules for balancing the energy demand with the energy production at grid level. Consensus based validation will be used for demand response programs validation and to activate the appropriate financial settlement for the flexibility providers. The approach was validated using a prototype implemented in an Ethereum platform using energy consumption and production traces of several buildings from literature data sets. The results show that our blockchain based distributed demand side management can be used for matching energy demand and production at smart grid level, the demand response signal being followed with high accuracy, while the amount of energy flexibility needed for convergence is reduced.", "title": "" }, { "docid": "528e16d5e3c4f5e7edc77d8e5960ba4f", "text": "Nowadays, a large amount of documents is generated daily. These documents may contain some spelling errors which should be detected and corrected by using a proofreading tool. Therefore, the existence of automatic writing assistance tools such as spell-checkers/correctors could help to improve their quality. Spelling errors could be categorized into five categories. One of them is real-word errors, which are misspelled words that have been wrongly converted into another word in the language. Detection of such errors requires discourse analysis rather than just checking the word in a dictionary. We propose a discourse-aware discriminative model to improve the results of context-sensitive spell-checkers by reranking their resulted n-best list. We augment the proposed reranker into two existing context-sensitive spell-checker systems; one of them is based on statistical machine translation and the other one is based on language model. We choose the keywords of the whole document as contextual features of the model and improve the results of both systems by employing the features in a log-linear reranker system. We evaluated the system on two different languages: English and Persian. The results of the experiments in English language on the Wall street journal test set show improvements of 4.5% and 5.2% in detection and correction recall, respectively, in comparison to the baseline method. The mentioned improvement on recall metric was achieved with comparable precision. We also achieve state-of-the-art performance on the Persian language. 
", "title": "" }, { "docid": "94784bc9f04dbe5b83c2a9f02e005825", "text": "The optical code division multiple access (OCDMA), the most advanced multiple access technology in optical communication has become significant and gaining popularity because of its asynchronous access capability, faster speed, efficiency, security and unlimited bandwidth. Many codes are developed in spectral amplitude coding optical code division multiple access (SAC-OCDMA) with zero or minimum cross-correlation properties to reduce the multiple access interference (MAI) and Phase Induced Intensity Noise (PIIN). This paper compares two novel SAC-OCDMA codes in terms of their performances such as bit error rate (BER), number of active users that is accommodated with minimum cross-correlation property, high data rate that is achievable and the minimum power that the OCDMA system supports to achieve a minimum BER value. One of the proposed novel codes referred in this work as modified random diagonal code (MRDC) possesses cross-correlation between zero to one and the second novel code referred in this work as modified new zero cross-correlation code (MNZCC) possesses cross-correlation zero to further minimize the multiple access interference, which are found to be more scalable compared to the other existing SAC-OCDMA codes. In this work, the proposed MRDC and MNZCC codes are implemented in an optical system using the optisystem version-12 software for the SAC-OCDMA scheme. Simulation results depict that the OCDMA system based on the proposed novel MNZCC code exhibits better performance compared to the MRDC code and former existing SAC-OCDMA codes. The proposed MNZCC code accommodates maximum number of simultaneous users with higher data rate transmission, lower BER and longer traveling distance without any signal quality degradation as compared to the former existing SAC-OCDMA codes.", "title": "" }, { "docid": "b414ed7d896bff259dc975bf16777fa7", "text": "We propose in this work a general procedure for efficient EM-based design of single-layer SIW interconnects, including their transitions to microstrip lines. Our starting point is developed by exploiting available empirical knowledge for SIW. We propose an efficient SIW surrogate model for direct EM design optimization in two stages: first optimizing the SIW width to achieve the specified low cutoff frequency, followed by the transition optimization to reduce reflections and extend the dominant mode bandwidth. Our procedure is illustrated by designing a SIW interconnect on a standard FR4-based substrate.", "title": "" }, { "docid": "fe70c7614c0414347ff3c8bce7da47e7", "text": "We explore a model of stress prediction in Russian using a combination of local contextual features and linguistically motivated features associated with the word’s stem and suffix. We frame this as a ranking problem, where the objective is to rank the pronunciation with the correct stress above those with incorrect stress. We train our models using a simple Maximum Entropy ranking framework allowing for efficient prediction. 
An empirical evaluation shows that a model combining the local contextual features and the linguistically-motivated non-local features performs best in identifying both primary and secondary stress.", "title": "" }, { "docid": "cd0f0c4e323a70596320cfa40178d469", "text": "In this paper we propose a novel, passive approach for detecting and tracking malicious flux service networks. Our detection system is based on passive analysis of recursive DNS (RDNS) traffic traces collected from multiple large networks. Contrary to previous work, our approach is not limited to the analysis of suspicious domain names extracted from spam emails or precompiled domain blacklists. Instead, our approach is able to detect malicious flux service networks in-the-wild, i.e., as they are accessed by users who fall victims of malicious content advertised through blog spam, instant messaging spam, social website spam, etc., beside email spam. We experiment with the RDNS traffic passively collected at two large ISP networks. Overall, our sensors monitored more than 2.5 billion DNS queries per day from millions of distinct source IPs for a period of 45 days. Our experimental results show that the proposed approach is able to accurately detect malicious flux service networks. Furthermore, we show how our passive detection and tracking of malicious flux service networks may benefit spam filtering applications.", "title": "" }, { "docid": "629b63889e43ee1fce3c6c850342428e", "text": "Purpose – This paper aims to survey the web sites of the academic libraries of the Association of Research Libraries (USA) regarding the adoption of Web 2.0 technologies. Design/methodology/approach – The websites of 100 member academic libraries of the Association of Research Libraries (USA) were surveyed. Findings – All libraries were found to be using various tools of Web 2.0. Blogs, microblogs, RSS, instant messaging, social networking sites, mashups, podcasts, and vodcasts were widely adopted, while wikis, photo sharing, presentation sharing, virtual worlds, customized webpage and vertical search engines were used less. Libraries were using these tools for sharing news, marketing their services, providing information literacy instruction, providing information about print and digital resources, and soliciting feedback of users. Originality/value – The paper is useful for future planning of Web 2.0 use in academic libraries.", "title": "" }, { "docid": "3d93c45e2374a7545c6dff7de0714352", "text": "Building an interest model is the key to realize personalized text recommendation. Previous interest models neglect the fact that a user may have multiple angles of interest. Different angles of interest provide different requests and criteria for text recommendation. This paper proposes an interest model that consists of two kinds of angles: persistence and pattern, which can be combined to form complex angles. The model uses a new method to represent the long-term interest and the short-term interest, and distinguishes the interest in object and the interest in the link structure of objects. Experiments with news-scale text data show that the interest in object and the interest in link structure have real requirements, and it is effective to recommend texts according to the angles. © 2016 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "51adc790a11769186958d08179f81ed6", "text": "Background: Breast cancer is a major public health problem globally. 
The ongoing epidemiological, socio-cultural\nand demographic transition by accentuating the associated risk factors has disproportionately increased the incidence\nof breast cancer cases and resulting mortality in developing countries like India. Early diagnosis with rapid initiation\nof treatment reduces breast cancer mortality. Therefore awareness of breast cancer risk and a willingness to undergo\nscreening are essential. The objective of the present study was to assess the knowledge and practices relating to screening\nfor breast cancer among women in Delhi. Methods: Data were obtained from 222 adult women using a pretested selfadministered\nquestionnaire. Results: Rates for knowledge of known risk factors of breast cancer were: family history\nof breast cancer, 59.5%; smoking, 57.7%; old age, 56.3%; lack of physical exercise, 51.9%; lack of breastfeeding,\n48.2%; late menopause, 37.4%; and early menarche, 34.7%. Women who were aged < 30 and those who were unmarried\nregistered significantly higher knowledge scores (p ≤ 0.01). Breast self-examination (BSE) was regularly practiced\nat-least once a month by 41.4% of the participants. Some 48% knew mammography has a role in the early detection\nof breast cancer. Since almost three-fourths of the participants believed BSE could help in early diagnosis of breast\ncancer, which is not supported by evidence, future studies should explore the consequences of promoting BSE at the\npotential expense of screening mammography. Conclusion: Our findings highlight the need for awareness generation\namong adult women regarding risk factors and methods for early detection of breast cancer.", "title": "" }, { "docid": "93c24024349853033a60ce06aa2b700e", "text": "Mines deployed in post-war countries pose severe threats to civilians and hamper the reconstruction effort in war hit societies. In the scope of the EU FP7 TIRAMISU Project, a toolbox for humanitarian demining missions is being developed by the consortium members. In this article we present the FSR Husky, an affordable, lightweight and autonomous all terrain robotic system, developed to assist human demining operation teams. Intended to be easily deployable on the field, our robotic solution has the ultimate goal of keeping humans away from the threat, safeguarding their lives. A detailed description of the modular robotic system architecture is presented, and several real world experiments are carried out to validate the robot’s functionalities and illustrate continuous work in progress on minefield coverage, mine detection, outdoor localization, navigation, and environment perception. © 2015 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "4aee0c91e48b9a34be4591d36103c622", "text": "We construct a polyhedron that is topologically convex (i.e., has the graph of a convex polyhedron) yet has no vertex unfolding: no matter how we cut along the edges and keep faces attached at vertices to form a connected (hinged) surface, the surface necessarily unfolds with overlap.", "title": "" }, { "docid": "c56d09b3c08f2cb9cc94ace3733b1c54", "text": "In this paper, we describe our microblog realtime filtering system developed and submitted for the Text Retrieval Conference (TREC 2015) microblog track. We submitted six runs for two tasks related to real-time filtering by using various Information Retrieval (IR), and Machine Learning (ML) techniques to analyze the Twitter sample live stream and match relevant tweets corresponding to specific user interest profiles. 
Evaluation results demonstrate the effectiveness of our approach as we achieved 3 of the top 7 best scores among automatic submissions across all participants and obtained the best (or close to best) scores in more than 25% of the evaluated topics for the real-time mobile push notification task.", "title": "" }, { "docid": "396f0c39b5afbf6bee2f7168f23ecccb", "text": "This work describes a method for real-time motion detection using an active camera mounted on a padtilt platform. Image mapping is used to align images of different viewpoints so that static camera motion detection can be applied. In the presence of camera position noise, the image mapping is inexact and compensation techniques fail. The use of morphological filtering of motion images is explored to desensitize the detection algorithm to inaccuracies in background compensation. Two motion detection techniques are examined, and experiments to verify the methods are presented. The system successfully extracts moving edges from dynamic images even when the pankilt angles between successive frames are as large as 3\".", "title": "" }, { "docid": "e3739a934ecd7b99f2d35a19f2aed5cf", "text": "We consider distributed algorithms for solving dynamic programming problems whereby several processors participate simultaneously in the computation while maintaining coordination by information exchange via communication links. A model of asynchronous distributed computation is developed which requires very weak assumptions on the ordering of computations, the timing of information exchange, the amount of local information needed at each computation node, and the initial conditions for the algorithm. The class of problems considered is very broad and includes shortest path problems, and finite and infinite horizon stochastic optimal control problems. When specialized to a shortest path problem the algorithm reduces to the algorithm originally implemented for routing of messages in the ARPANET.", "title": "" }, { "docid": "4f3177b303b559f341b7917683114257", "text": "We investigate the integration of a planning mechanism into sequence-to-sequence models using attention. We develop a model which can plan ahead in the future when it computes its alignments between input and output sequences, constructing a matrix of proposed future alignments and a commitment vector that governs whether to follow or recompute the plan. This mechanism is inspired by the recently proposed strategic attentive reader and writer (STRAW) model for Reinforcement Learning. Our proposed model is end-to-end trainable using primarily differentiable operations. We show that it outperforms a strong baseline on character-level translation tasks from WMT’15, the algorithmic task of finding Eulerian circuits of graphs, and question generation from the text. Our analysis demonstrates that the model computes qualitatively intuitive alignments, converges faster than the baselines, and achieves superior performance with fewer parameters.", "title": "" }, { "docid": "cb8ffb03187583308eb8409d75a54172", "text": "Active Traffic Management (ATM) systems have been introduced by transportation agencies to manage recurrent and non-recurrent congestion. ATM systems rely on the interconnectivity of components made possible by wired and/or wireless networks. Unfortunately, this connectivity that supports ATM systems also provides potential system access points that results in vulnerability to cyberattacks. 
This is becoming more pronounced as ATM systems begin to integrate internet of things (IoT) devices. Hence, there is a need to rigorously evaluate ATM systems for cyberattack vulnerabilities, and explore design concepts that provide stability and graceful degradation in the face of cyberattacks. In this research, a prototype ATM system along with a real-time cyberattack monitoring system were developed for a 1.5-mile section of I-66 in Northern Virginia. The monitoring system detects deviation from expected operation of an ATM system by comparing lane control states generated by the ATM system with lane control states deemed most likely by the monitoring system. This comparison provides the functionality to continuously monitor the system for abnormalities that would result from a cyberattack. In case of any deviation between two sets of states, the monitoring system displays the lane control states generated by the back-up data source. In a simulation experiment, the prototype ATM system and cyberattack monitoring system were subject to emulated cyberattacks. The evaluation results showed that the ATM system, when operating properly in the absence of attacks, improved average vehicle speed in the system to 60mph (a 13% increase compared to the baseline case without ATM). However, when subject to cyberattack, the mean speed reduced by 15% compared to the case with the ATM system and was similar to the baseline case. This illustrates that the effectiveness of the ATM system was negated by cyberattacks. The monitoring system however, allowed the ATM system to revert to an expected state with a mean speed of 59mph and reduced the negative impact of cyberattacks. These results illustrate the need to revisit ATM system design concepts as a means to protect against cyberattacks in addition to traditional system intrusion prevention approaches.", "title": "" }, { "docid": "9c507a2b1f57750d1b4ffeed6979a06f", "text": "Once considered provocative, the notion that the wisdom of the crowd is superior to any individual has become itself a piece of crowd wisdom, leading to speculation that online voting may soon put credentialed experts out of business. Recent applications include political and economic forecasting, evaluating nuclear safety, public policy, the quality of chemical probes, and possible responses to a restless volcano. Algorithms for extracting wisdom from the crowd are typically based on a democratic voting procedure. They are simple to apply and preserve the independence of personal judgment. However, democratic methods have serious limitations. They are biased for shallow, lowest common denominator information, at the expense of novel or specialized knowledge that is not widely shared. Adjustments based on measuring confidence do not solve this problem reliably. Here we propose the following alternative to a democratic vote: select the answer that is more popular than people predict. We show that this principle yields the best answer under reasonable assumptions about voter behaviour, while the standard ‘most popular’ or ‘most confident’ principles fail under exactly those same assumptions. Like traditional voting, the principle accepts unique problems, such as panel decisions about scientific or artistic merit, and legal or historical disputes. 
The potential application domain is thus broader than that covered by machine learning and psychometric methods, which require data across multiple questions.", "title": "" }, { "docid": "640ba15172b56373b3a6bdfe9f5f6cd4", "text": "This work considers the problem of learning cooperative policies in complex, partially observable domains without explicit communication. We extend three classes of single-agent deep reinforcement learning algorithms based on policy gradient, temporal-difference error, and actor-critic methods to cooperative multi-agent systems. To effectively scale these algorithms beyond a trivial number of agents, we combine them with a multi-agent variant of curriculum learning. The algorithms are benchmarked on a suite of cooperative control tasks, including tasks with discrete and continuous actions, as well as tasks with dozens of cooperating agents. We report the performance of the algorithms using different neural architectures, training procedures, and reward structures. We show that policy gradient methods tend to outperform both temporal-difference and actor-critic methods and that curriculum learning is vital to scaling reinforcement learning algorithms in complex multiagent domains.", "title": "" }, { "docid": "04cdcf2234ffaafbd24eb20fb584cf5d", "text": "Grice (1957) drew a famous distinction between natural(N) and non-natural(NN) meaning, where what is meant(NN) is broadly equivalent to what is intentionally communicated. This paper argues that Grice’s dichotomy overlooks the fact that spontaneously occurring natural signs may be intentionally shown , and hence used in intentional communication. It also argues that some naturally occurring behaviours have a signalling function, and that the existence of such natural codes provides further evidence that Grice’s original distinction was not exhaustive. The question of what kind of information, in cognitive terms, these signals encode is also examined.", "title": "" }, { "docid": "e7bf372840efea55c632afd96840212d", "text": "The purpose of this systematic analysis of nursing simulation literature between 2000 -2007 was to determine how learning theory was used to design and assess learning that occurs in simulations. Out of the 120 articles in which designing nursing simulations was reported, 16 referenced learning or developmental theory as the basis of how and why they set up the simulation. Of the 16 articles that used a learning type of foundation, only two considered learning as a cognitive task. More research is needed that investigates the efficacy of simulation for improving student learning. The study concludes that most nursing faculty approach simulation from a teaching paradigm rather than a learning paradigm. For simulation to foster student learning there must be a fundamental shift from a teaching paradigm to a learning paradigm and a foundational learning theory to design and evaluate simulation should be used. Examples of how to match simulation with learning theory are included.", "title": "" } ]
scidocsrr
d7cc6a11815526daa38bb207ae0bc575
Emotional disorders: cluster 4 of the proposed meta-structure for DSM-V and ICD-11.
[ { "docid": "32fbccbe3b8795c0d2e2934acbdfcc06", "text": "Epidemiologic studies indicate that children exposed to early adverse experiences are at increased risk for the development of depression, anxiety disorders, or both. Persistent sensitization of central nervous system (CNS) circuits as a consequence of early life stress, which are integrally involved in the regulation of stress and emotion, may represent the underlying biological substrate of an increased vulnerability to subsequent stress as well as to the development of depression and anxiety. A number of preclinical studies suggest that early life stress induces long-lived hyper(re)activity of corticotropin-releasing factor (CRF) systems as well as alterations in other neurotransmitter systems, resulting in increased stress responsiveness. Many of the findings from these preclinical studies are comparable to findings in adult patients with mood and anxiety disorders. Emerging evidence from clinical studies suggests that exposure to early life stress is associated with neurobiological changes in children and adults, which may underlie the increased risk of psychopathology. Current research is focused on strategies to prevent or reverse the detrimental effects of early life stress on the CNS. The identification of the neurobiological substrates of early adverse experience is of paramount importance for the development of novel treatments for children, adolescents, and adults.", "title": "" } ]
[ { "docid": "83cfa05fc29b4eb4eb7b954ba53498f5", "text": "Smartphones, the devices we carry everywhere with us, are being heavily tracked and have undoubtedly become a major threat to our privacy. As “Tracking the trackers” has become a necessity, various static and dynamic analysis tools have been developed in the past. However, today, we still lack suitable tools to detect, measure and compare the ongoing tracking across mobile OSs. To this end, we propose MobileAppScrutinator, based on a simple yet efficient dynamic analysis approach, that works on both Android and iOS (the two most popular OSs today). To demonstrate the current trend in tracking, we select 140 most representative Apps available on both Android and iOS AppStores and test them with MobileAppScrutinator. In fact, choosing the same set of apps on both Android and iOS also enables us to compare the ongoing tracking on these two OSs. Finally, we also discuss the effectiveness of privacy safeguards available on Android and iOS. We show that neither Android nor iOS privacy safeguards in their present state are completely satisfying.", "title": "" }, { "docid": "2477e41b180e29112e9d10cecd021034", "text": "OBJECTIVE\nResearch in both animals and humans indicates that cannabidiol (CBD) has antipsychotic properties. The authors assessed the safety and effectiveness of CBD in patients with schizophrenia.\n\n\nMETHOD\nIn an exploratory double-blind parallel-group trial, patients with schizophrenia were randomized in a 1:1 ratio to receive CBD (1000 mg/day; N=43) or placebo (N=45) alongside their existing antipsychotic medication. Participants were assessed before and after treatment using the Positive and Negative Syndrome Scale (PANSS), the Brief Assessment of Cognition in Schizophrenia (BACS), the Global Assessment of Functioning scale (GAF), and the improvement and severity scales of the Clinical Global Impressions Scale (CGI-I and CGI-S).\n\n\nRESULTS\nAfter 6 weeks of treatment, compared with the placebo group, the CBD group had lower levels of positive psychotic symptoms (PANSS: treatment difference=-1.4, 95% CI=-2.5, -0.2) and were more likely to have been rated as improved (CGI-I: treatment difference=-0.5, 95% CI=-0.8, -0.1) and as not severely unwell (CGI-S: treatment difference=-0.3, 95% CI=-0.5, 0.0) by the treating clinician. Patients who received CBD also showed greater improvements that fell short of statistical significance in cognitive performance (BACS: treatment difference=1.31, 95% CI=-0.10, 2.72) and in overall functioning (GAF: treatment difference=3.0, 95% CI=-0.4, 6.4). CBD was well tolerated, and rates of adverse events were similar between the CBD and placebo groups.\n\n\nCONCLUSIONS\nThese findings suggest that CBD has beneficial effects in patients with schizophrenia. As CBD's effects do not appear to depend on dopamine receptor antagonism, this agent may represent a new class of treatment for the disorder.", "title": "" }, { "docid": "55928e118303b080d49a399da1f9dba3", "text": "This paper describes a customized database and a comprehensive set of queries that can be used for systematic benchmarking of relational database systems. Designing this database and a set of carefully tuned benchmarks represents a first attempt in developing a scientific methodology for performance evaluation of database management systems. 
We have used this database to perform a comparative evaluation of the database machine DIRECT, the \"university\" and \"commercial\" versions of the INGRES database system, the relational database system ORACLE, and the IDM 500 database machine. We present a subset of our measurements (for the single user case only), that constitute a preliminary performance evaluation of these systems.", "title": "" }, { "docid": "63d26f3336960c1d92afbd3a61a9168c", "text": "The location-based social networks have been becoming flourishing in recent years. In this paper, we aim to estimate the similarity between users according to their physical location histories (represented by GPS trajectories). This similarity can be regarded as a potential social tie between users, thereby enabling friend and location recommendations. Different from previous work using social structures or directly matching users’ physical locations, this approach model a user’s GPS trajectories with a semantic location history (SLH), e.g., shopping malls ? restaurants ? cinemas. Then, we measure the similarity between different users’ SLHs by using our maximal travel match (MTM) algorithm. The advantage of our approach lies in two aspects. First, SLH carries more semantic meanings of a user’s interests beyond low-level geographic positions. Second, our approach can estimate the similarity between two users without overlaps in the geographic spaces, e.g., people living in different cities. When matching SLHs, we consider the sequential property, the granularity and the popularity of semantic locations. We evaluate our method based on a realworld GPS dataset collected by 109 users in a period of 1 year. The results show that SLH outperforms a physicallocation-based approach and MTM is more effective than several widely used sequence matching approaches given this application scenario.", "title": "" }, { "docid": "9c17dad32d130072b1d26b21b8c97896", "text": "A novel planar inverted-F antenna (PIFA) is designed in this paper. Compared to the previous PIFA, the proposed PIFA can enhance bandwidths and achieve multi-band which is loaded with a T-shaped ground plane and etched slots on ground plane and a rectangular patch. It covered 4 service bands, including GSM900, DCS1800, PCS1900 and ISM2450 under the criteria -7 dB return loss for the first band and -10 dB for the last bands. Process of designing and calculation of parameters are presented in detail. The simulation results showed that each band has good characteristics and the bandwidth has been greatly expanded.", "title": "" }, { "docid": "01f741144e6304915a6d086165bfe17d", "text": "The standardization and performance testing of analysis tools is a prerequisite to widespread adoption of genome-wide sequencing, particularly in the clinic. However, performance testing is currently complicated by the paucity of standards and comparison metrics, as well as by the heterogeneity in sequencing platforms, applications and protocols. Here we present the genome comparison and analytic testing (GCAT) platform to facilitate development of performance metrics and comparisons of analysis tools across these metrics. Performance is reported through interactive visualizations of benchmark and performance testing data, with support for data slicing and filtering. 
The platform is freely accessible at http://www.bioplanet.com/gcat.", "title": "" }, { "docid": "0dd4f05f9bd3d582b9fb9c64f00ed697", "text": "Today, among other challenges, teaching students how to write computer programs for the first time can be an important criterion for whether students in computing will remain in their program of study, i.e. Computer Science or Information Technology. Not learning to program a computer as a computer scientist or information technologist can be compared to a mathematician not learning algebra. For a mathematician this would be an extremely limiting situation. For a computer scientist, not learning to program imposes a similar severe limitation on the budding computer scientist. Therefore it is not a question as to whether programming should be taught rather it is a question of how to maximize aspects of teaching programming so that students are less likely to be discouraged when learning to program. Different criteria have been used to select first programming languages. Computer scientists have attempted to establish criteria for selecting the first programming language to teach a student. This paper examines the criteria used to select first programming languages and the issues that novices face when learning to program in an effort to create a more comprehensive model for selecting first programming languages.", "title": "" }, { "docid": "ade9860157680b2ca6820042f0cda302", "text": "This chapter has two main objectives: to review influential ideas and findings in the literature and to outline the organization and content of the volume. The first part of the chapter lays a conceptual and empirical foundation for other chapters in the volume. Specifically, the chapter defines and distinguishes the key concepts of prejudice, stereotypes, and discrimination, highlighting how bias can occur at individual, institutional, and cultural levels. We also review different theoretical perspectives on these phenomena, including individual differences, social cognition, functional relations between groups, and identity concerns. We offer a broad overview of the field, charting how this area has developed over previous decades and identify emerging trends and future directions. The second part of the chapter focuses specifically on the coverage of the area in the present volume. It explains the organization of the book and presents a brief synopsis of the chapters in the volume. Throughout psychology’s history, researchers have evinced strong interest in understanding prejudice, stereotyping, and discrimination (Brewer & Brown, 1998; Dovidio, 2001; Duckitt, 1992; Fiske, 1998), as well as the phenomenon of intergroup bias more generally (Hewstone, Rubin, & Willis, 2002). Intergroup bias generally refers to the systematic tendency to evaluate one’s own membership group (the ingroup) or its members more favorably than a non-membership group (the outgroup) or its members. These topics have a long history in the disciplines of anthropology and sociology (e.g., Sumner, 1906). However, social psychologists, building on the solid foundations of Gordon Allport’s (1954) masterly volume, The Nature of Prejudice, have developed a systematic and more nuanced analysis of bias and its associated phenomena. Interest in prejudice, stereotyping, and discrimination is currently shared by allied disciplines such as sociology and political science, and emerging disciplines such as neuroscience. 
The practical implications of this 4 OVERVIEW OF THE TOPIC large body of research are widely recognized in the law (Baldus, Woodworth, & Pulaski, 1990; Vidmar, 2003), medicine (Institute of Medicine, 2003), business (e.g., Brief, Dietz, Cohen, et al., 2000), the media, and education (e.g., Ben-Ari & Rich, 1997; Hagendoorn &", "title": "" }, { "docid": "57d0e046517cc669746d4ecda352dc3f", "text": "This paper is about understanding the nature of bug fixing by analyzing thousands of bug fix transactions of software repositories. It then places this learned knowledge in the context of automated program repair. We give extensive empirical results on the nature of human bug fixes at a large scale and a fine granularity with abstract syntax tree differencing. We set up mathematical reasoning on the search space of automated repair and the time to navigate through it. By applying our method on 14 repositories of Java software and 89,993 versioning transactions, we show that not all probabilistic repair models are equivalent.", "title": "" }, { "docid": "829b910e2c73ee15866fc59de4884200", "text": "Shared-memory multiprocessors are frequently used as compute servers with multiple parallel applications executing at the same time. In such environments, the efficiency of a parallel application can be significantly affected by the operating system scheduling policy. In this paper, we use detailed simulation studies to evaluate the performance of several different scheduling strategies, These include regular priority scheduling, coscheduling or gang scheduling, process control with processor partitioning, handoff scheduling, and affinity-based scheduling. We also explore tradeoffs between the use of busy-waiting and blocking synchronization primitives and their interactions with the scheduling strategies. Since effective use of caches is essential to achieving high performance, a key focus is on the impact of the scheduling strategies on the caching behavior of the applications.Our results show that in situations where the number of processes exceeds the number of processors, regular priority-based scheduling in conjunction with busy-waiting synchronization primitives results in extremely poor processor utilization. In such situations, use of blocking synchronization primitives can significantly improve performance. Process control and gang scheduling strategies are shown to offer the highest performance, and their performance is relatively independent of the synchronization method used. However, for applications that have sizable working sets that fit into the cache, process control performs better than gang scheduling. For the applications considered, the performance gains due to handoff scheduling and processor affinity are shown to be small.", "title": "" }, { "docid": "dfa51004b99bce29e644fbcca4b833a5", "text": "This paper presents Latent Sampling-based Motion Planning (L-SBMP), a methodology towards computing motion plans for complex robotic systems by learning a plannable latent representation. Recent works in control of robotic systems have effectively leveraged local, low-dimensional embeddings of high-dimensional dynamics. In this paper we combine these recent advances with techniques from samplingbased motion planning (SBMP) in order to design a methodology capable of planning for high-dimensional robotic systems beyond the reach of traditional approaches (e.g., humanoids, or even systems where planning occurs in the visual space). 
Specifically, the learned latent space is constructed through an autoencoding network, a dynamics network, and a collision checking network, which mirror the three main algorithmic primitives of SBMP, namely state sampling, local steering, and collision checking. Notably, these networks can be trained through only raw data of the system’s states and actions along with a supervising collision checker. Building upon these networks, an RRT-based algorithm is used to plan motions directly in the latent space – we refer to this exploration algorithm as Learned Latent RRT (L2RRT). This algorithm globally explores the latent space and is capable of generalizing to new environments. The overall methodology is demonstrated on two planning problems, namely a visual planning problem, whereby planning happens in the visual (pixel) space, and a humanoid robot planning problem.", "title": "" }, { "docid": "e742aa091dae6227994cffcdb5165769", "text": "In this paper, a new adaptive multi-batch experience replay scheme is proposed for proximal policy optimization (PPO) for continuous action control. On the contrary to original PPO, the proposed scheme uses the batch samples of past policies as well as the current policy for the update for the next policy, where the number of the used past batches is adaptively determined based on the oldness of the past batches measured by the average importance sampling (IS) weight. The new algorithm constructed by combining PPO with the proposed multi-batch experience replay scheme maintains the advantages of original PPO such as random minibatch sampling and small bias due to low IS weights by storing the pre-computed advantages and values and adaptively determining the mini-batch size. Numerical results show that the proposed method significantly increases the speed and stability of convergence on various continuous control tasks compared to original PPO.", "title": "" }, { "docid": "27381c67ea64e84846fb3ed156304288", "text": "The mapping of lab tests to the Laboratory Test Code controlled terminology in CDISC-SDTM § can be a challenge. One has to find candidates in the extensive controlled terminology list. Then there can be multiple lab tests that map to a single SDTM controlled term. This means additional variables must be used in order to produce a unique test definition (e.g. LBCAT, LBSPEC, LBMETHOD and/or LBELTM). Finally, it can occur that a controlled term is not available and a code needs to be defined in agreement with the rules for Lab tests. This paper describes my experience with the implementation of SDTM controlled terminology for lab tests during an SDTM conversion activity. In six clinical studies 124 lab tests were mapped to 101 SDTM controlled terms. The lab tests included routine lab parameters, coagulation parameters, hormones, glucose tolerance test and pregnancy test. INTRODUCTION This paper aims to give detailed examples of SDTM LB datasets that were created for six studies included in an FDA submission. Background information on the conversion project that formed the context of this work can be found in an earlier PhUSE contribution [1]. With the exception of part of the hormone data all laboratory data of these studies had been extracted from the Oracle Clinical TM NORMLAB2 system, which delivered complete and standardized lab data, i.e. standardized parameter (lab test) names, values, units and ranges. 
Subsequently, these NORMLAB2 extracts had been enriched with derived variables and records, following internal data standards and conventions, to form standardized analysis-ready datasets. These were the basis for conversion to SDTM LB datasets. The combined source datasets of the six studies held 124 distinct lab tests, which were mapped to 101 distinct lab controlled terms. Controlled terminology for lab tests is part of the SDTM terminology, which is published on the NCI EVS website [2]. New lab test terms have been released for public review through a series of packages [3], starting in 2007. Since version 3.1.2. of the SDTM Implementation Guide [4], the use of SDTM controlled terminology for lab tests is assumed for LBTESTCD and LBTEST (codelists C65047 and C67154). Table 1 provides an overview of the number of lab tests per study in the source data vs. the SDTM datasets (i.e. the number of LBTEST/LBTESTCD codes) and shows how these codes were distributed across different lab test categories. A set of 22 ‘routine safety parameters’ occurred in all four phase III studies (001-004), with 16 tests occurring in all six studies. § Clinical Data Interchange Standards Consortium Study Data Tabulation Model δ National Cancer Institute Enterprise Vocabulary Services", "title": "" }, { "docid": "a7c9d58c49f1802b94395c6f12c2d6dd", "text": "Signature-based network intrusion detection systems (NIDSs) have been widely deployed in current network security infrastructure. However, these detection systems suffer from some limitations such as network packet overload, expensive signature matching and massive false alarms in a large-scale network environment. In this paper, we aim to develop an enhanced filter mechanism (named EFM) to comprehensively mitigate these issues, which consists of three major components: a context-aware blacklist-based packet filter, an exclusive signature matching component and a KNN-based false alarm filter. The experiments, which were conducted with two data sets and in a network environment, demonstrate that our proposed EFM can overall enhance the performance of a signaturebased NIDS such as Snort in the aspects of packet filtration, signature matching improvement and false alarm reduction without affecting network security. a 2014 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "2d04a311815c8fef8728e4a992d3efac", "text": "The amidase activities of two Aminobacter sp. strains (DSM24754 and DSM24755) towards the aryl-substituted substrates phenylhydantoin, indolylmethyl hydantoin, D,L-6-phenyl-5,6-dihydrouracil (PheDU) and para-chloro-D,L-6-phenyl-5,6-dihydrouracil were compared. Both strains showed hydantoinase and dihydropyrimidinase activity by hydrolyzing all substrates to the corresponding N-carbamoyl-α- or N-carbamoyl-β-amino acids. However, carbamoylase activity and thus a further degradation of these products to α- and β-amino acids was not detected. Additionally, the genes coding for a dihydropyrimidinase and a carbamoylase of Aminobacter sp. DSM24754 were elucidated. For Aminobacter sp. DSM24755 a dihydropyrimidinase gene flanked by two genes coding for putative ABC transporter proteins was detected. The deduced amino acid sequences of both dihydropyrimidinases are highly similar to the well-studied dihydropyrimidinase of Sinorhizobium meliloti CECT4114. The latter enzyme is reported to accept substituted hydantoins and dihydropyrimidines as substrates. 
The deduced amino acid sequence of the carbamoylase gene shows a high similarity to the very thermostable enzyme of Pseudomonas sp. KNK003A.", "title": "" }, { "docid": "062f6ecc9d26310de82572f500cb5f05", "text": "The processes underlying environmental, economic, and social unsustainability derive in part from the food system. Building sustainable food systems has become a predominating endeavor aiming to redirect our food systems and policies towards better-adjusted goals and improved societal welfare. Food systems are complex social-ecological systems involving multiple interactions between human and natural components. Policy needs to encourage public perception of humanity and nature as interdependent and interacting. The systemic nature of these interdependencies and interactions calls for systems approaches and integrated assessment tools. Identifying and modeling the intrinsic properties of the food system that will ensure its essential outcomes are maintained or enhanced over time and across generations, will help organizations and governmental institutions to track progress towards sustainability, and set policies that encourage positive transformations. This paper proposes a conceptual model that articulates crucial vulnerability and resilience factors to global environmental and socio-economic changes, postulating specific food and nutrition security issues as priority outcomes of food systems. By acknowledging the systemic nature of sustainability, this approach allows consideration of causal factor dynamics. In a stepwise approach, a logical application is schematized for three Mediterranean countries, namely Spain, France, and Italy.", "title": "" }, { "docid": "05049ac85552c32f2c98d7249a038522", "text": "Remote sensing tools are increasingly being used to survey forest structure. Most current methods rely on GPS signals, which are available in above-canopy surveys or in below-canopy surveys of open forests, but may be absent in below-canopy environments of dense forests. We trialled a technology that facilitates mobile surveys in GPS-denied below-canopy forest environments. The platform consists of a battery-powered UAV mounted with a LiDAR. It lacks a GPS or any other localisation device. The vehicle is capable of an 8 min flight duration and autonomous operation but was remotely piloted in the present study. We flew the UAV around a 20 m × 20 m patch of roadside trees and developed postprocessing software to estimate the diameter-at-breast-height (DBH) of 12 trees that were detected by the LiDAR. The method detected 73% of trees greater than 200 mm DBH within 3 m of the flight path. Smaller and more distant trees could not be detected reliably. The UAV-based DBH estimates of detected trees were positively correlated with the humanbased estimates (R = 0.45, p = 0.017) with a median absolute error of 18.1%, a root-meansquare error of 25.1% and a bias of −1.2%. We summarise the main current limitations of this technology and outline potential solutions. The greatest gains in precision could be achieved through use of a localisation device. 
The long-term factor limiting the deployment of below-canopy UAV surveys is likely to be battery technology.", "title": "" }, { "docid": "a6f2cee851d2c22d471f473caf1710a1", "text": "One of the main reasons why Byzantine fault-tolerant (BFT) systems are currently not widely used lies in their high resource consumption: <inline-formula><tex-math notation=\"LaTeX\">$3f+1$</tex-math><alternatives> <inline-graphic xlink:type=\"simple\" xlink:href=\"distler-ieq1-2495213.gif\"/></alternatives></inline-formula> replicas are required to tolerate only <inline-formula><tex-math notation=\"LaTeX\">$f$</tex-math><alternatives> <inline-graphic xlink:type=\"simple\" xlink:href=\"distler-ieq2-2495213.gif\"/></alternatives></inline-formula> faults. Recent works have been able to reduce the minimum number of replicas to <inline-formula><tex-math notation=\"LaTeX\">$2f+1$</tex-math> <alternatives><inline-graphic xlink:type=\"simple\" xlink:href=\"distler-ieq3-2495213.gif\"/></alternatives></inline-formula> by relying on trusted subsystems that prevent a faulty replica from making conflicting statements to other replicas without being detected. Nevertheless, having been designed with the focus on fault handling, during normal-case operation these systems still use more resources than actually necessary to make progress in the absence of faults. This paper presents <italic>Resource-efficient Byzantine Fault Tolerance</italic> (<sc>ReBFT</sc>), an approach that minimizes the resource usage of a BFT system during normal-case operation by keeping <inline-formula> <tex-math notation=\"LaTeX\">$f$</tex-math><alternatives><inline-graphic xlink:type=\"simple\" xlink:href=\"distler-ieq4-2495213.gif\"/> </alternatives></inline-formula> replicas in a passive mode. In contrast to active replicas, passive replicas neither participate in the agreement protocol nor execute client requests; instead, they are brought up to speed by verified state updates provided by active replicas. In case of suspected or detected faults, passive replicas are activated in a consistent manner. To underline the flexibility of our approach, we apply <sc>ReBFT</sc> to two existing BFT systems: PBFT and MinBFT.", "title": "" }, { "docid": "40dc2dc28dca47137b973757cdf3bf34", "text": "In this paper we propose a new word-order based graph representation for text. In our graph representation vertices represent words or phrases and edges represent relations between contiguous words or phrases. The graph representation also includes dependency information. Our text representation is suitable for applications involving the identification of relevance or paraphrases across texts, where word-order information would be useful. We show that this word-order based graph representation performs better than a dependency tree representation while identifying the relevance of one piece of text to another.", "title": "" }, { "docid": "58d7e76a4b960e33fc7b541d04825dc9", "text": "The Internet of Things (IoT) is intended for ubiquitous connectivity among different entities or “things”. While its purpose is to provide effective and efficient solutions, security of the devices and network is a challenging issue. The number of devices connected along with the ad-hoc nature of the system further exacerbates the situation. Therefore, security and privacy has emerged as a significant challenge for the IoT. In this paper, we aim to provide a thorough survey related to the privacy and security challenges of the IoT. 
This document addresses these challenges from the perspective of the technologies and architectures used. This work also focuses on the intrinsic vulnerabilities of the IoT, as well as the security challenges of its various layers, based on the security principles of data confidentiality, integrity and availability. This survey analyzes articles published on the IoT to date and relates them to the current security landscape of the field and its projection into the future.", "title": "" } ]
scidocsrr
b9ccbb7e14686ad54dda551935532135
Energy Harvesting Using a Low-Cost Rectenna for Internet of Things (IoT) Applications
[ { "docid": "3d9fbf84b4a9d6524a3f87d0b6869b99", "text": "The idea of wireless power transfer (WPT) has been around since the inception of electricity. In the late 19th century, Nikola Tesla described the freedom to transfer energy between two points without the need for a physical connection to a power source as an \"all-surpassing importance to man\". A truly wireless device, capable of being remotely powered, not only allows the obvious freedom of movement but also enables devices to be more compact by removing the necessity of a large battery. Applications could leverage this reduction in size and weight to increase the feasibility of concepts such as paper-thin, flexible displays, contact-lens-based augmented reality, and smart dust, among traditional point-to-point power transfer applications. While several methods of wireless power have been introduced since Tesla's work, including near-field magnetic resonance and inductive coupling, laser-based optical power transmission, and far-field RF/microwave energy transmission, only RF/microwave and laser-based systems are truly long-range methods. While optical power transmission certainly has merit, its mechanisms are outside of the scope of this article and will not be discussed.", "title": "" }, { "docid": "c41efa28806b3ac3d2b23d9e52b85193", "text": "The Internet of Things (IoT) shall be able to incorporate transparently and seamlessly a large number of different and heterogeneous end systems, while providing open access to selected subsets of data for the development of a plethora of digital services. Building a general architecture for the IoT is hence a very complex task, mainly because of the extremely large variety of devices, link layer technologies, and services that may be involved in such a system. In this paper, we focus specifically to an urban IoT system that, while still being quite a broad category, are characterized by their specific application domain. Urban IoTs, in fact, are designed to support the Smart City vision, which aims at exploiting the most advanced communication technologies to support added-value services for the administration of the city and for the citizens. This paper hence provides a comprehensive survey of the enabling technologies, protocols, and architecture for an urban IoT. Furthermore, the paper will present and discuss the technical solutions and best-practice guidelines adopted in the Padova Smart City project, a proof-of-concept deployment of an IoT island in the city of Padova, Italy, performed in collaboration with the city municipality.", "title": "" } ]
[ { "docid": "d71faafdcf1b97951e979f13dbe91cb2", "text": "We explore six challenges for neural machine translation: domain mismatch, amount of training data, rare words, long sentences, word alignment, and beam search. We show both deficiencies and improvements over the quality of phrasebased statistical machine translation.", "title": "" }, { "docid": "7146615b79dd39e358dd148e57a01fdb", "text": "Graphs are one of the key data structures for many real-world computing applications and the importance of graph analytics is ever-growing. While existing software graph processing frameworks improve programmability of graph analytics, underlying general purpose processors still limit the performance and energy efficiency of graph analytics. We architect a domain-specific accelerator, Graphicionado, for high-performance, energy-efficient processing of graph analytics workloads. For efficient graph analytics processing, Graphicionado exploits not only data structure-centric datapath specialization, but also memory subsystem specialization, all the while taking advantage of the parallelism inherent in this domain. Graphicionado augments the vertex programming paradigm, allowing different graph analytics applications to be mapped to the same accelerator framework, while maintaining flexibility through a small set of reconfigurable blocks. This paper describes Graphicionado pipeline design choices in detail and gives insights on how Graphicionado combats application execution inefficiencies on general-purpose CPUs. Our results show that Graphicionado achieves a 1.76-6.54x speedup while consuming 50-100x less energy compared to a state-of-the-art software graph analytics processing framework executing 32 threads on a 16-core Haswell Xeon processor.", "title": "" }, { "docid": "863e71cf1c1eddf3c6ceac400670e6f7", "text": "This paper provides a brief overview to four major types of causal models for health-sciences research: Graphical models (causal diagrams), potential-outcome (counterfactual) models, sufficient-component cause models, and structural-equations models. The paper focuses on the logical connections among the different types of models and on the different strengths of each approach. Graphical models can illustrate qualitative population assumptions and sources of bias not easily seen with other approaches; sufficient-component cause models can illustrate specific hypotheses about mechanisms of action; and potential-outcome and structural-equations models provide a basis for quantitative analysis of effects. The different approaches provide complementary perspectives, and can be employed together to improve causal interpretations of conventional statistical results.", "title": "" }, { "docid": "afe4c8e46449bfa37a04e67595d4537b", "text": "Gamification is the use of game design elements in non-game settings to engage participants and encourage desired behaviors. It has been identified as a promising technique to improve students' engagement which could have a positive impact on learning. This study evaluated the learning effectiveness and engagement appeal of a gamified learning activity targeted at the learning of C-programming language. Furthermore, the study inquired into which gamified learning activities were more appealing to students. The study was conducted using the mixed-method sequential explanatory protocol. The data collected and analysed included logs, questionnaires, and pre- and post-tests. 
The results of the evaluation show positive effects on the engagement of students toward the gamified learning activities and a moderate improvement in learning outcomes. Students reported different motivations for continuing and stopping activities once they completed the mandatory assignment. The preferences for different gamified activities were also conditioned by academic milestones.", "title": "" }, { "docid": "6c4b9b5383269ed47d2077068652f0b7", "text": "Security issues in computer networks have focused on attacks on end systems and the control plane. An entirely new class of emerging network attacks aims at the data plane of the network. Data plane forwarding in network routers has traditionally been implemented with custom-logic hardware, but recent router designs increasingly use software-programmable network processors for packet forwarding. These general-purpose processing devices exhibit software vulnerabilities and are susceptible to attacks. We demonstrate-to our knowledge the first-practical attack that exploits a vulnerability in packet processing software to launch a devastating denial-of-service attack from within the network infrastructure. This attack uses only a single attack packet to consume the full link bandwidth of the router's outgoing link. We also present a hardware-based defense mechanism that can detect situations where malicious packets try to change the operation of the network processor. Using a hardware monitor, our NetFPGA-based prototype system checks every instruction executed by the network processor and can detect deviations from correct processing within four clock cycles. A recovery system can restore the network processor to a safe state within six cycles. This high-speed detection and recovery system can ensure that network processors can be protected effectively and efficiently from this new class of attacks.", "title": "" }, { "docid": "26a599c22c173f061b5d9579f90fd888", "text": "markov logic an interface layer for artificial markov logic an interface layer for artificial shinichi tsukada in size 22 syyjdjbook.buncivy yumina ooba in size 24 ajfy7sbook.ztoroy okimi in size 15 edemembookkey.16mb markov logic an interface layer for artificial intelligent systems (ai-2) ubc computer science interface layer for artificial intelligence daniel lowd essential principles for autonomous robotics markovlogic: aninterfacelayerfor arti?cialintelligence official encyclopaedia of sheffield united football club hot car hot car firext answers || 2007 acura tsx hitch manual course syllabus university of texas at dallas jump frog jump cafebr 1994 chevy silverado 1500 engine ekpbs readings in earth science alongs johnson owners manual pdf firext thomas rescues the diesels cafebr dead sea scrolls and the jewish origins of christianity install gimp help manual by iitsuka asao vox diccionario abreviado english spanis mdmtv nobutaka in size 26 bc13xqbookog.xxuz mechanisms in b cell neoplasia 1992 workshop at the spocks world diane duane nabbit treasury of saints fiores reasoning with probabilistic university of texas at austin gp1300r yamaha waverunner service manua by takisawa tomohide repair manual haier hpr10xc6 air conditioner birdz mexico icons mexico icons oobags asus z53 manual by hatsutori yoshino industrial level measurement by haruyuki morimoto", "title": "" }, { "docid": "cb1fc7a4769141429dc7b41a8d8b7cb8", "text": "Today, by integrating Near Field Communication (NFC) technology in smartphones, bank cards and payment terminals, a purchase transaction can be 
executed immediately without any physical contact, without entering a PIN code or a signature. Europay Mastercard Visa (EMV) is the standard dedicated for securing contactless-NFC payment transactions. However, it does not ensure two main security proprieties: (1) the authentication of the payment terminal to the client's payment device, (2) the confidentiality of personal banking data. In this paper, we first of all detail EMV standard and its security vulnerabilities. Then, we propose a solution that enhances the EMV protocol by adding a new security layer aiming to solve EMV weaknesses. We formally check the correctness of the proposal using a security verification tool called Scyther.", "title": "" }, { "docid": "ef4e7445ec9bbbfc8d25d92a16042f88", "text": "CONCRETE", "title": "" }, { "docid": "121a8470fcbf121e5f4c42594c6d24fe", "text": "Research has consistently found that school students who do not identify as self-declared completely heterosexual are at increased risk of victimization by bullying from peers. This study examined heterosexual and nonheterosexual university students' involvement in both traditional and cyber forms of bullying, as either bullies or victims. Five hundred twenty-eight first-year university students (M=19.52 years old) were surveyed about their sexual orientation and their bullying experiences over the previous 12 months. The results showed that nonheterosexual young people reported higher levels of involvement in traditional bullying, both as victims and perpetrators, in comparison to heterosexual students. In contrast, cyberbullying trends were generally found to be similar for heterosexual and nonheterosexual young people. Gender differences were also found. The implications of these results are discussed in terms of intervention and prevention of the victimization of nonheterosexual university students.", "title": "" }, { "docid": "4a6c2d388bb114751b2ce9c6df55beab", "text": "To support people trying to lose weight and stay healthy, more and more fitness apps have sprung up including the ability to track both calories intake and expenditure. Users of such apps are part of a wider \"quantified self\" movement and many opt-in to publicly share their logged data. In this paper, we use public food diaries of more than 4,000 long-term active MyFitnessPal users to study the characteristics of a (un-)successful diet. Concretely, we train a machine learning model to predict repeatedly being over or under self-set daily calories goals and then look at which features contribute to the model's prediction. Our findings include both expected results, such as the token \"mcdonalds\" or the category \"dessert\" being indicative for being over the calories goal, but also less obvious ones such as the difference between pork and poultry concerning dieting success, or the use of the \"quick added calories\" functionality being indicative of over-shooting calorie-wise. This study also hints at the feasibility of using such data for more in-depth data mining, e.g., looking at the interaction between consumed foods such as mixing protein- and carbohydrate-rich foods. To the best of our knowledge, this is the first systematic study of public food diaries.", "title": "" }, { "docid": "77d2255e0a2d77ea8b2682937b73cc7d", "text": "Recommendation plays an increasingly important role in our daily lives. Recommender systems automatically suggest to a user items that might be of interest to her. 
Recent studies demonstrate that information from social networks can be exploited to improve accuracy of recommendations. In this paper, we present a survey of collaborative filtering (CF) based social recommender systems. We provide a brief overview over the task of recommender systems and traditional approaches that do not use social network information. We then present how social network information can be adopted by recommender systems as additional input for improved accuracy. We classify CF-based social recommender systems into two categories: matrix factorization based social recommendation approaches and neighborhood based social recommendation approaches. For each category, we survey and compare several represen-", "title": "" }, { "docid": "b6df4868ee1496e581e8b76ca8fb165f", "text": "Through AspectJ, aspect-oriented programming (AOP) is becoming of increasing interest and availability to Java programmers as it matures as a methodology for improved software modularity via the separation of cross-cutting concerns. AOP proponents often advocate a development strategy where Java programmers write the main application, ignoring cross-cutting concerns, and then AspectJ programmers, domain experts in their specific concerns, weave in the logic for these more specialized cross-cutting concerns. However, several authors have recently debated the merits of this strategy by empirically showing certain drawbacks. The proposed solutions paint a different development strategy where base code and aspect programmers are aware of each other (to varying degrees) and interactions between cross-cutting concerns are planned for early on.\n Herein we explore new possibilities in the language design space that open up when the base code is aware of cross-cutting aspects. Using our insights from this exploration we concretize these new possibilities by extending AspectJ with concise yet powerful constructs, while maintaining full backwards compatibility. These new constructs allow base code and aspects to cooperate in ways that were previously not possible: arbitrary blocks of code can be advised, advice can be explicitly parameterized, base code can guide aspects in where to apply advice, and aspects can statically enforce new constraints upon the base code that they advise. These new techniques allow aspect modularity and program safety to increase. We illustrate the value of our extensions through an example based on transactions.", "title": "" }, { "docid": "8c232cd0cea7714dde71669024d3d811", "text": "This paper addresses the problem of finding the K closest pairs between two spatial data sets, where each set is stored in a structure belonging in the R-tree family. Five different algorithms (four recursive and one iterative) are presented for solving this problem. The case of 1 closest pair is treated as a special case. An extensive study, based on experiments performed with synthetic as well as with real point data sets, is presented. A wide range of values for the basic parameters affecting the performance of the algorithms, especially the effect of overlap between the two data sets, is explored. Moreover, an algorithmic as well as an experimental comparison with existing incremental algorithms addressing the same problem is presented. In most settings, the new algorithms proposed clearly outperform the existing ones.", "title": "" }, { "docid": "b31235bf87cc8ebd243fd8c52c63f8d4", "text": "The dual-polarized corporate-feed waveguide slot array antenna is designed for the 60 GHz band. 
Using the multi-layer structure, we have realized dual-polarization operation. Even though the gain is approximately 1 dB lower than the antenna for the single polarization due to the -15dB cross-polarization level in 8=58°, this antenna still shows very high gain over 32 dBi over the broad bandwidth. This antenna will be fabricated and measured in future.", "title": "" }, { "docid": "c05a32fdc2344cb4a6831f5cc033820f", "text": "We have constructed a wave-front sensor to measure the irregular as well as the classical aberrations of the eye, providing a more complete description of the eye's aberrations than has previously been possible. We show that the wave-front sensor provides repeatable and accurate measurements of the eye's wave aberration. The modulation transfer function of the eye computed from the wave-front sensor is in fair, though not complete, agreement with that obtained under similar conditions on the same observers by use of the double-pass and the interferometric techniques. Irregular aberrations, i.e., those beyond defocus, astigmatism, coma, and spherical aberration, do not have a large effect on retinal image quality in normal eyes when the pupil is small (3 mm). However, they play a substantial role when the pupil is large (7.3-mm), reducing visual performance and the resolution of images of the living retina. Although the pattern of aberrations varies from subject to subject, aberrations, including irregular ones, are correlated in left and right eyes of the same subject, indicating that they are not random defects.", "title": "" }, { "docid": "11c4f0610d701c08516899ebf14f14c4", "text": "Histone post-translational modifications impact many aspects of chromatin and nuclear function. Histone H4 Lys 20 methylation (H4K20me) has been implicated in regulating diverse processes ranging from the DNA damage response, mitotic condensation, and DNA replication to gene regulation. PR-Set7/Set8/KMT5a is the sole enzyme that catalyzes monomethylation of H4K20 (H4K20me1). It is required for maintenance of all levels of H4K20me, and, importantly, loss of PR-Set7 is catastrophic for the earliest stages of mouse embryonic development. These findings have placed PR-Set7, H4K20me, and proteins that recognize this modification as central nodes of many important pathways. In this review, we discuss the mechanisms required for regulation of PR-Set7 and H4K20me1 levels and attempt to unravel the many functions attributed to these proteins.", "title": "" }, { "docid": "e9c4877bca5f1bfe51f97818cc4714fa", "text": "INTRODUCTION Gamification refers to the application of game dynamics, mechanics, and frameworks into non-game settings. Many educators have attempted, with varying degrees of success, to effectively utilize game dynamics to increase student motivation and achievement in the classroom. In an effort to better understand how gamification can effectively be utilized to this end, presented here is a review of existing literature on the subject as well as a case study on three different applications of gamification in the post-secondary setting. This analysis reveals that the underlying dynamics that make games engaging are largely already recognized and utilized in modern pedagogical practices, although under different designations. This provides some legitimacy to a practice that is sometimes dismissed as superficial, and also provides a way of formulating useful guidelines for those wishing to utilize the power of games to motivate student achievement. 
RELATED WORK The first step of this study was to review literature related to the use of gamification in education. This was undertaken in order to inform the subsequent case studies. Several works were reviewed with the intention of finding specific game dynamics that were met with a certain degree of success across a number of circumstances. To begin, Jill Laster [10] provides a brief summary of the early findings of Lee Sheldon, an assistant professor at Indiana University at Bloomington and the author of The Multiplayer Classroom: Designing Coursework as a Game [16]. Here, Sheldon reports that the gamification of his class on multiplayer game design at Indiana University at Bloomington in 2010 was a success, with the average grade jumping a full letter grade from the previous year [10]. Sheldon gamified his class by renaming the performance of presentations as 'completing quests', taking tests as 'fighting monsters', writing papers as 'crafting', and receiving letter grades as 'gaining experience points'. In particular, he notes that changing the language around grades celebrates getting things right rather than punishing getting things wrong [10]. Although this is plausible, this example is included here first because it points to the common conception of what gamifying a classroom means: implementing game components by simply trading out the parlance of pedagogy for that of gaming culture. Although its intentions are good, it is this reduction of game design to its surface characteristics that Elizabeth Lawley warns is detrimental to the successful gamification of a classroom [5]. Lawley, a professor of interactive games and media at the Rochester Institute of Technology (RIT), notes that when implemented properly, \"gamification can help enrich educational experiences in a way that students will recognize and respond to\" [5]. However, she warns that reducing the complexity of well designed games to their surface elements (i.e. badges and experience points) falls short of engaging students. She continues further, suggesting that beyond failing to engage, limiting the implementation of game dynamics to just the surface characteristics can actually damage existing interest and engagement [5]. Lawley is not suggesting that game elements should be avoided, but rather she is stressing the importance of allowing them to surface as part of a deeper implementation that includes the underlying foundations of good game design. Upon reviewing the available literature, certain underlying dynamics and concepts found in game design are shown to be more consistently successful than others when applied to learning environments, these are: o Freedom to Fail o Rapid Feedback o Progression o Storytelling Freedom to Fail Game design often encourages players to experiment without fear of causing irreversible damage by giving them multiple lives, or allowing them to start again at the most recent 'checkpoint'. Incorporating this 'freedom to fail' into classroom design is noted to be an effective dynamic in increasing student engagement [7,9,11,15]. If students are encouraged to take risks and experiment, the focus is taken away from final results and re-centered on the process of learning instead. The effectiveness of this change in focus is recognized in modern pedagogy as shown in the increased use of formative assessment. 
Like the game dynamic of having the 'freedom to fail', formative assessment focuses on the process of learning rather than the end result by using assessment to inform subsequent lessons and separating assessment from grades whenever possible [17]. This can mean that the student is using ongoing self assessment, or that the teacher is using", "title": "" }, { "docid": "4f287c788c7e95bf350a998650ff6221", "text": "Wireless sensor networks have become an emerging technology due to their wide range of applications in object tracking and monitoring, military commands, smart homes, forest fire control, surveillance, etc. A wireless sensor network consists of thousands of miniature devices called sensors, but because it uses wireless media for communication, security is a major issue. There are a number of attacks on wireless sensor networks, of which the selective forwarding attack is one of the most harmful. This paper describes the selective forwarding attack and the detection techniques against selective forwarding attacks that have been proposed by different researchers. In selective forwarding attacks, malicious nodes act like normal nodes and selectively drop packets. The selective forwarding attack is a serious threat in WSNs. Identifying such attacks is very difficult and sometimes impossible. This paper also presents a qualitative analysis of detection techniques in tabular form. Keywords: wireless sensor network, attacks, selective forwarding attacks, malicious nodes.", "title": "" }, { "docid": "1195635049c88da8b37a66ca1e85090b", "text": "Temporal-difference (TD) learning can be used not just to predict rewards, as is commonly done in reinforcement learning, but also to predict states, i.e., to learn a model of the world's dynamics. We present theory and algorithms for intermixing TD models of the world at different levels of temporal abstraction within a single structure. Such multi-scale TD models can be used in model-based reinforcement-learning architectures and dynamic programming methods in place of conventional Markov models. This enables planning at higher and varied levels of abstraction, and, as such, may prove useful in formulating methods for hierarchical or multi-level planning and reinforcement learning. In this paper we treat only the prediction problem, that of learning a model and value function for the case of fixed agent behavior. Within this context, we establish the theoretical foundations of multi-scale models and derive TD algorithms for learning them. Two small computational experiments are presented to test and illustrate the theory. This work is an extension and generalization of the work of Singh (1992), Dayan (1993), and Sutton & Pinette (1985). 1 Multi-Scale Planning and Modeling Model-based reinforcement learning offers a potentially elegant solution to the problem of integrating planning into a real-time learning and decision-making agent (Sutton, 1990; Barto et al., 1995; Peng & Williams, 1993; Moore & Atkeson, 1994; Dean et al., in prep). However, most current reinforcement-learning systems assume a single, fixed time step: actions take one step to complete, and their immediate consequences become available after one step. This makes it difficult to learn and plan at different time scales. For example, commuting to work involves planning at a high level about which route to drive (or whether to take the train) and at a low level about how to steer, when to brake, etc. 
Planning is necessary at both levels in order to optimize precise low-level movements without becoming lost in a sea of detail when making decisions at a high level. Moreover, these levels cannot be kept totally distinct and separate. They must interrelate at least in the sense that the actions and plans at a high level must be turned into actual, moment-by-moment decisions at the lowest level. The need for hierarchical and abstract planning is a fundamental problem in AI whether or not one uses the reinforcement-learning framework (e.g., Fikes et al., 1972; Sacerdoti, 1977; Kuipers, 1979; Laird et al., 1986; Korf, 1985; Minton, 1988; Watkins, 1989; Drescher, 1991; Ring, 1991; Wixson, 1991; Schmidhuber, 1991; Tenenberg et al., 1992; Kaelbling, 1993; Lin, 1993; Dayan & Hinton, 1993; Dejong, 1994; Chrisman, 1994; Hansen, 1994; Dean & Lin, in prep). We do not propose to fully solve it in this paper. Rather, we develop an approach to multiple-time-scale modeling of the world that may eventually be useful in such a solution. Our approach is to extend temporal-difference (TD) methods, which are commonly used in reinforcement learning systems to learn value functions, such that they can be used to learn world models. When TD methods are used, the predictions of the models can naturally extend beyond a single time step. As we will show, they can even make predictions that are not specific to a single time scale, but intermix many such scales, with no loss of performance when the models are used. This approach is an extension of the ideas of Singh (1992), Dayan (1993), and Sutton & Pinette", "title": "" }, { "docid": "be3204a5a4430cc3150bf0368a972e38", "text": "Deep learning has exploded in the public consciousness, primarily as predictive and analytical products suffuse our world, in the form of numerous human-centered smart-world systems, including targeted advertisements, natural language assistants and interpreters, and prototype self-driving vehicle systems. Yet to most, the underlying mechanisms that enable such human-centered smart products remain obscure. In contrast, researchers across disciplines have been incorporating deep learning into their research to solve problems that could not have been approached before. In this paper, we seek to provide a thorough investigation of deep learning in its applications and mechanisms. Specifically, as a categorical collection of the state of the art in deep learning research, we hope to provide a broad reference for those seeking a primer on deep learning and its various implementations, platforms, algorithms, and uses in a variety of smart-world systems. Furthermore, we hope to outline recent key advancements in the technology, and provide insight into areas in which deep learning can improve investigation, as well as highlight new areas of research that have yet to see the application of deep learning, but could nonetheless benefit immensely. We hope this survey provides a valuable reference for new deep learning practitioners, as well as those seeking to innovate in the application of deep learning.", "title": "" } ]
scidocsrr
ed989dd8908467e1038ee95aa0392a27
STEM education K-12: perspectives on integration
[ { "docid": "aabed671a466730e273225d8ee572f73", "text": "It is essential to base instruction on a foundation of understanding of children’s thinking, but it is equally important to adopt the longer-term view that is needed to stretch these early competencies into forms of thinking that are complex, multifaceted, and subject to development over years, rather than weeks or months. We pursue this topic through our studies of model-based reasoning. We have identified four forms of models and related modeling practices that show promise for developing model-based reasoning. Models have the fortuitous feature of making forms of student reasoning public and inspectable—not only among the community of modelers, but also to teachers. Modeling provides feedback about student thinking that can guide teaching decisions, an important dividend for improving professional practice.", "title": "" } ]
[ { "docid": "5d447d516e8f2db2e9d9943972b4b0d1", "text": "Autonomous robot manipulation often involves both estimating the pose of the object to be manipulated and selecting a viable grasp point. Methods using RGB-D data have shown great success in solving these problems. However, there are situations where cost constraints or the working environment may limit the use of RGB-D sensors. When limited to monocular camera data only, both the problem of object pose estimation and of grasp point selection are very challenging. In the past, research has focused on solving these problems separately. In this work, we introduce a novel method called SilhoNet that bridges the gap between these two tasks. We use a Convolutional Neural Network (CNN) pipeline that takes in region of interest (ROI) proposals to simultaneously predict an intermediate silhouette representation for objects with an associated occlusion mask. The 3D pose is then regressed from the predicted silhouettes. Grasp points from a precomputed database are filtered by back-projecting them onto the occlusion mask to find which points are visible in the scene. We show that our method achieves better overall performance than the state-of-the art PoseCNN network for 3D pose estimation on the YCB-video dataset.", "title": "" }, { "docid": "3ccc5fd5bbf570a361b40afca37cec92", "text": "Face detection techniques have been developed for decades, and one of remaining open challenges is detecting small faces in unconstrained conditions. The reason is that tiny faces are often lacking detailed information and blurring. In this paper, we proposed an algorithm to directly generate a clear high-resolution face from a blurry small one by adopting a generative adversarial network (GAN). Toward this end, the basic GAN formulation achieves it by super-resolving and refining sequentially (e.g. SR-GAN and cycle-GAN). However, we design a novel network to address the problem of super-resolving and refining jointly. We also introduce new training losses to guide the generator network to recover fine details and to promote the discriminator network to distinguish real vs. fake and face vs. non-face simultaneously. Extensive experiments on the challenging dataset WIDER FACE demonstrate the effectiveness of our proposed method in restoring a clear high-resolution face from a blurry small one, and show that the detection performance outperforms other state-of-the-art methods.", "title": "" }, { "docid": "892f6150dc4eef8ffaa419cf0ca69532", "text": "Symmetric ankle propulsion is the cornerstone of efficient human walking. The ankle plantar flexors provide the majority of the mechanical work for the step-to-step transition and much of this work is delivered via elastic recoil from the Achilles' tendon — making it highly efficient. Even though the plantar flexors play a central role in propulsion, body-weight support and swing initiation during walking, very few assistive devices have focused on aiding ankle plantarflexion. Our goal was to develop a portable ankle exoskeleton taking inspiration from the passive elastic mechanisms at play in the human triceps surae-Achilles' tendon complex during walking. The challenge was to use parallel springs to provide ankle joint mechanical assistance during stance phase but allow free ankle rotation during swing phase. To do this we developed a novel ‘smart-clutch’ that can engage and disengage a parallel spring based only on ankle kinematic state. 
The system is purely passive — containing no motors, electronics or external power supply. This ‘energy-neutral’ ankle exoskeleton could be used to restore symmetry and reduce metabolic energy expenditure of walking in populations with weak ankle plantar flexors (e.g. stroke, spinal cord injury, normal aging).", "title": "" }, { "docid": "5507f3199296478abbc6e106943a53ba", "text": "Hiding a secret is needed in many situations. One might need to hide a password, an encryption key, a secret recipe, and etc. Information can be secured with encryption, but the need to secure the secret key used for such encryption is important too. Imagine you encrypt your important files with one secret key and if such a key is lost then all the important files will be inaccessible. Thus, secure and efficient key management mechanisms are required. One of them is secret sharing scheme (SSS) that lets you split your secret into several parts and distribute them among selected parties. The secret can be recovered once these parties collaborate in some way. This paper will study these schemes and explain the need for them and their security. Across the years, various schemes have been presented. This paper will survey some of them varying from trivial schemes to threshold based ones. Explanations on these schemes constructions are presented. The paper will also look at some applications of SSS.", "title": "" }, { "docid": "0b22284d575fb5674f61529c367bb724", "text": "The scapula fulfils many roles to facilitate optimal function of the shoulder. Normal function of the shoulder joint requires a scapula that can be properly aligned in multiple planes of motion of the upper extremity. Scapular dyskinesis, meaning abnormal motion of the scapula during shoulder movement, is a clinical finding commonly encountered by shoulder surgeons. It is best considered an impairment of optimal shoulder function. As such, it may be the underlying cause or the accompanying result of many forms of shoulder pain and dysfunction. The present review looks at the causes and treatment options for this indicator of shoulder pathology and aims to provide an overview of the management of disorders of the scapula.", "title": "" }, { "docid": "928f64f8ef9b3ea5e107ae9c49840b2c", "text": "Mass spectrometry-based proteomics has greatly benefitted from enormous advances in high resolution instrumentation in recent years. In particular, the combination of a linear ion trap with the Orbitrap analyzer has proven to be a popular instrument configuration. Complementing this hybrid trap-trap instrument, as well as the standalone Orbitrap analyzer termed Exactive, we here present coupling of a quadrupole mass filter to an Orbitrap analyzer. This \"Q Exactive\" instrument features high ion currents because of an S-lens, and fast high-energy collision-induced dissociation peptide fragmentation because of parallel filling and detection modes. The image current from the detector is processed by an \"enhanced Fourier Transformation\" algorithm, doubling mass spectrometric resolution. Together with almost instantaneous isolation and fragmentation, the instrument achieves overall cycle times of 1 s for a top 10 higher energy collisional dissociation method. More than 2500 proteins can be identified in standard 90-min gradients of tryptic digests of mammalian cell lysate- a significant improvement over previous Orbitrap mass spectrometers. Furthermore, the quadrupole Orbitrap analyzer combination enables multiplexed operation at the MS and tandem MS levels. 
This is demonstrated in a multiplexed single ion monitoring mode, in which the quadrupole rapidly switches among different narrow mass ranges that are analyzed in a single composite MS spectrum. Similarly, the quadrupole allows fragmentation of different precursor masses in rapid succession, followed by joint analysis of the higher energy collisional dissociation fragment ions in the Orbitrap analyzer. High performance in a robust benchtop format together with the ability to perform complex multiplexed scan modes make the Q Exactive an exciting new instrument for the proteomics and general analytical communities.", "title": "" }, { "docid": "12dd3762060fd2e85732cd1807c7e5dc", "text": "Context: Topic modeling finds human-readable structures in unstructured textual data. A widely used topic modeler is Latent Dirichlet allocation. When run on different datasets, LDA suffers from “order effects” i.e. different topics are generated if the order of training data is shuffled. Such order effects introduce a systematic error for any study. This error can relate to misleading results; specifically, inaccurate topic descriptions and a reduction in the efficacy of text mining classification results. Objective: To provide a method in which distributions generated by LDA are more stable and can be used for further analysis. Method: We use LDADE, a search-based software engineering tool that tunes LDA’s parameters using DE (Differential Evolution). LDADE is evaluated on data from a programmer information exchange site (Stackoverflow), title and abstract text of thousands of Software Engineering (SE) papers, and software defect reports from NASA. Results were collected across different implementations of LDA (Python+Scikit-Learn, Scala+Spark); across different platforms (Linux, Macintosh) and for different kinds of LDAs (VEM, or using Gibbs sampling). Results were scored via topic stability and text mining classification accuracy. Results: In all treatments: (i) standard LDA exhibits very large topic instability; (ii) LDADE’s tunings dramatically reduce cluster instability; (iii) LDADE also leads to improved performances for supervised as well as unsupervised learning. Conclusion: Due to topic instability, using standard LDA with its “off-the-shelf” settings should now be depreciated. Also, in future, we should require SE papers that use LDA to test and (if needed) mitigate LDA topic instability. Finally, LDADE is a candidate technology for effectively and efficiently reducing that instability.", "title": "" }, { "docid": "bd3b9d9e8a1dc39f384b073765175de6", "text": "We generalize the stochastic block model to the important case in which edges are annotated with weights drawn from an exponential family distribution. This generalization introduces several technical difficulties for model estimation, which we solve using a Bayesian approach. We introduce a variational algorithm that efficiently approximates the model’s posterior distribution for dense graphs. In specific numerical experiments on edge-weighted networks, this weighted stochastic block model outperforms the common approach of first applying a single threshold to all weights and then applying the classic stochastic block model, which can obscure latent block structure in networks. 
This model will enable the recovery of latent structure in a broader range of network data than was previously possible.", "title": "" }, { "docid": "286f7edf797040089d2adb667aaabc00", "text": "We describe and compare three predominant email sender authentication mechanisms based on DNS: SPF, DKIM and Sender-ID Framework (SIDF). These mechanisms are designed mainly to assist in filtering of undesirable email messages, in particular spam and phishing emails. We clarify the limitations of these mechanisms, identify risks, and make recommendations. In particular, we argue that, properly used, SPF and DKIM can both help improve the efficiency and accuracy of email filtering.", "title": "" }, { "docid": "683e496bd08fe3a55c63ba8788481184", "text": "Ubicomp products have become more important in providing emotional experiences as users increasingly assimilate these products into their everyday lives. In this paper, we explored a new design perspective by applying a pet dog analogy to support emotional experience with ubicomp products. We were inspired by pet dogs, which are already intimate companions to humans and serve essential emotional functions in daily live. Our studies involved four phases. First, through our literature review, we articulated the key characteristics of pet dogs that apply to ubicomp products. Secondly, we applied these characteristics to a design case, CAMY, a mixed media PC peripheral with a camera. Like a pet dog, it interacts emotionally with a user. Thirdly, we conducted a user study with CAMY, which showed the effects of pet-like characteristics on users' emotional experiences, specifically on intimacy, sympathy, and delightedness. Finally, we presented other design cases and discussed the implications of utilizing a pet dog analogy to advance ubicomp systems for improved user experiences.", "title": "" }, { "docid": "4db2110c6030c7d19e59dfe8d42cf8f1", "text": "Extracellular vesicles (EVs) are membrane-enclosed vesicles that are released into the extracellular environment by various cell types, which can be classified as apoptotic bodies, microvesicles and exosomes. EVs have been shown to carry DNA, small RNAs, proteins and membrane lipids which are derived from the parental cells. Recently, several studies have demonstrated that EVs can regulate many biological processes, such as cancer progression, the immune response, cell proliferation, cell migration and blood vessel tube formation. This regulation is achieved through the release and transport of EVs and the transfer of their parental cell-derived molecular cargo to recipient cells. This thereby influences various physiological and sometimes pathological functions within the target cells. While intensive investigation of EVs has focused on pathological processes, the involvement of EVs in normal wound healing is less clear; however, recent preliminarily investigations have produced some initial insights. 
This review will provide an overview of EVs and discuss the current literature regarding the role of EVs in wound healing, especially, their influence on coagulation, cell proliferation, migration, angiogenesis, collagen production and extracellular matrix remodelling.", "title": "" }, { "docid": "6ce94fa6f50d9ee27d9997abd7671e8a", "text": "STUDY DESIGN\nThis study used a prospective, single-group repeated-measures design to analyze differences between the electromyographic (EMG) amplitudes produced by exercises for the trapezius and serratus anterior muscles.\n\n\nOBJECTIVE\nTo identify high-intensity exercises that elicit the greatest level of EMG activity in the trapezius and serratus anterior muscles.\n\n\nBACKGROUND\nThe trapezius and serratus anterior muscles are considered to be the only upward rotators of the scapula and are important for normal shoulder function. Electromyographic studies have been performed for these muscles during active and low-intensity exercises, but they have not been analyzed during high intensity exercises.\n\n\nMETHODS AND MEASURES\nSurface electrodes recorded EMG activity of the upper, middle, and lower trapezius and serratus anterior muscles during 10 exercises in 30 healthy subjects.\n\n\nRESULTS\nThe unilateral shoulder shrug exercise was found to produce the greatest EMG activity in the upper trapezius. For the middle trapezius, the greatest EMG amplitudes were generated with 2 exercises: shoulder horizontal extension with external rotation and the overhead arm raise in line with the lower trapezius muscle in the prone position. The arm raise overhead exercise in the prone position produced the maximum EMG activity in the lower trapezius. The serratus anterior was activated maximally with exercises requiring a great amount of upward rotation of the scapula. The exercises were shoulder abduction in the plane of the scapula above 120 degrees and a diagonal exercise with a combination of shoulder flexion, horizontal flexion, and external rotation.\n\n\nCONCLUSION\nThis study identified exercises that maximally activate the trapezius and serratus anterior muscles. This information may be helpful for clinicians in developing exercise programs for these muscles.", "title": "" }, { "docid": "8bc7698e1c8e4ef835f76a7a22128d68", "text": "The parallel data accesses inherent to large-scale data-intensive scientific computing require that data servers handle very high I/O concurrency. Concurrent requests from different processes or programs to hard disk can cause disk head thrashing between different disk regions, resulting in unacceptably low I/O performance. Current storage systems either rely on the disk scheduler at each data server, or use SSD as storage, to minimize this negative performance effect. However, the ability of the scheduler to alleviate this problem by scheduling requests in memory is limited by concerns such as long disk access times, and potential loss of dirty data with system failure. Meanwhile, SSD is too expensive to be widely used as the major storage device in the HPC environment. We propose iTransformer, a scheme that employs a small SSD to schedule requests for the data on disk. Being less space constrained than with more expensive DRAM, iTransformer can buffer larger amounts of dirty data before writing it back to the disk, or prefetch a larger volume of data in a batch into the SSD. In both cases high disk efficiency can be maintained even for concurrent requests. 
Furthermore, the scheme allows the scheduling of requests in the background to hide the cost of random disk access behind serving process requests. Finally, as a non-volatile memory, concerns about the quantity of dirty data are obviated. We have implemented iTransformer in the Linux kernel and tested it on a large cluster running PVFS2. Our experiments show that iTransformer can improve the I/O throughput of the cluster by 35% on average for MPI/IO benchmarks of various data access patterns.", "title": "" }, { "docid": "01b1eaf090cf90f14266b1b2d3c6a462", "text": "Centrality is an important concept in the study of social network analysis (SNA), which is used to measure the importance of a node in a network. While many different centrality measures exist, most of them are proposed and applied to static networks. However, most types of networks are dynamic that their topology changes over time. A popular approach to represent such networks is to construct a sequence of time windows with a single aggregated static graph that aggregates all edges observed over some time period. In this paper, an approach which overcomes the limitation of this representation is proposed based on the notion of the time-ordered graph, to measure the communication centrality of a node in dynamic networks.", "title": "" }, { "docid": "6c64e7ca2e41a6eb70fe39747b706bf8", "text": "Network Functions Virtualization (NFV) has enabled operators to dynamically place and allocate resources for network services to match workload requirements. However, unbounded end-to-end (e2e) latency of Service Function Chains (SFCs) resulting from distributed Virtualized Network Function (VNF) deployments can severely degrade performance. In particular, SFC instantiations with inter-data center links can incur high e2e latencies and Service Level Agreement (SLA) violations. These latencies can trigger timeouts and protocol errors with latency-sensitive operations.\n Traditional solutions to reduce e2e latency involve physical deployment of service elements in close proximity. These solutions are, however, no longer viable in the NFV era. In this paper, we present our solution that bounds the e2e latency in SFCs and inter-VNF control message exchanges by creating micro-service aggregates based on the affinity between VNFs. Our system, Contain-ed, dynamically creates and manages affinity aggregates using light-weight virtualization technologies like containers, allowing them to be placed in close proximity and hence bounding the e2e latency. We have applied Contain-ed to the Clearwater IP Multimedia System and built a proof-of-concept. Our results demonstrate that, by utilizing application and protocol specific knowledge, affinity aggregates can effectively bound SFC delays and significantly reduce protocol errors and service disruptions.", "title": "" }, { "docid": "ba964bfa07eba81cbc9cdff1dbdac675", "text": "We present drawing on air, a haptic-aided input technique for drawing controlled 3D curves through space. Drawing on air addresses a control problem with current 3D modeling approaches based on sweeping movement of the hands through the air. Although artists praise the immediacy and intuitiveness of these systems, a lack of control makes it nearly impossible to create 3D forms beyond quick design sketches or gesture drawings. Drawing on air introduces two new strategies for more controlled 3D drawing: one-handed drag drawing and two-handed tape drawing. Both approaches have advantages for drawing certain types of curves. 
We describe a tangent preserving method for transitioning between the two techniques while drawing. Haptic-aided redrawing and line weight adjustment while drawing are also supported in both approaches. In a quantitative user study evaluation by illustrators, the one and two-handed techniques performed at roughly the same level and both significantly outperformed freehand drawing and freehand drawing augmented with a haptic friction effect. We present the design and results of this experiment, as well as user feedback from artists and 3D models created in a style of line illustration for challenging artistic and scientific subjects.", "title": "" }, { "docid": "c900e3dfacce7a37ce742b95a2bae675", "text": "Friction stir welding (FSW) is a relatively new joining process that has been used for high production since 1996. Because melting does not occur and joining takes place below the melting temperature of the material, a high-quality weld is created. In this paper working principle and various factor affecting friction stir welding is discussed.", "title": "" }, { "docid": "e769b1eab6d5ebf78bc5d2bb12f05607", "text": "This paper presents a review of state-of-the-art approaches to automatic extraction of biomolecular events from scientific texts. Events involving biomolecules such as genes, transcription factors, or enzymes, for example, have a central role in biological processes and functions and provide valuable information for describing physiological and pathogenesis mechanisms. Event extraction from biomedical literature has a broad range of applications, including support for information retrieval, knowledge summarization, and information extraction and discovery. However, automatic event extraction is a challenging task due to the ambiguity and diversity of natural language and higher-level linguistic phenomena, such as speculations and negations, which occur in biological texts and can lead to misunderstanding or incorrect interpretation. Many strategies have been proposed in the last decade, originating from different research areas such as natural language processing, machine learning, and statistics. This review summarizes the most representative approaches in biomolecular event extraction and presents an analysis of the current state of the art and of commonly used methods, features, and tools. Finally, current research trends and future perspectives are also discussed.", "title": "" }, { "docid": "c9d46300b513bca532ec080371511313", "text": "On a gambling task that models real-life decisions, patients with bilateral lesions of the ventromedial prefrontal cortex (VM) opt for choices that yield high immediate gains in spite of higher future losses. In this study, we addressed three possibilities that may account for this behaviour: (i) hypersensitivity to reward; (ii) insensitivity to punishment; and (iii) insensitivity to future consequences, such that behaviour is always guided by immediate prospects. For this purpose, we designed a variant of the original gambling task in which the advantageous decks yielded high immediate punishment but even higher future reward. The disadvantageous decks yielded low immediate punishment but even lower future reward. We measured the skin conductance responses (SCRs) of subjects after they had received a reward or punishment. Patients with VM lesions opted for the disadvantageous decks in both the original and variant versions of the gambling task. 
The SCRs of VM lesion patients after they had received a reward or punishment were not significantly different from those of controls. In a second experiment, we investigated whether increasing the delayed punishment in the disadvantageous decks of the original task or decreasing the delayed reward in the disadvantageous decks of the variant task would shift the behaviour of VM lesion patients towards an advantageous strategy. Both manipulations failed to shift the behaviour of VM lesion patients away from the disadvantageous decks. These results suggest that patients with VM lesions are insensitive to future consequences, positive or negative, and are primarily guided by immediate prospects. This 'myopia for the future' in VM lesion patients persists in the face of severe adverse consequences, i.e. rising future punishment or declining future reward.", "title": "" }, { "docid": "aebf72a8a624e0e7fa87f8e7eace9fae", "text": "A highly-efficient monopulse antenna system is proposed for radar tracking system application. In this study, a novel integrated front-end and back-end complicated three-dimensional (3-D) system is realized practically to achieve high-level of self-compactness. A wideband and compact monopulse comparator network is developed and integrated as the back-end circuit in the system. Performance of the complete monopulse system is verified together with the front-end antenna array. To ensure the system's electrical efficiency and mechanical strength, a 3-D metal-direct-printing technique is utilized to fabricate the complicated structure, avoiding drawbacks from conventional machining methods and assembly processes. Experimental results show the monopulse system can achieve a bandwidth of 12.9% with VSWR less than 1.5 in the Ku-band, and isolation is better than 30 dB. More than 31.5 dBi gain can be maintained in the sum-patterns of wide bandwidth. The amplitude imbalance is less than 0.2 dB and null-depths are lower than -30 dB in the difference-patterns. In particular, with the help of the metal-printing technique, more than 90% efficiency can be retained in the monopulse system. It is a great improvement compared with that obtained from traditional machining approaches, indicating that this technique is promising for realizing high-performance RF intricate systems electrically and mechanically.", "title": "" } ]
scidocsrr
2ca43e0cfb47fbd2b5f480a29feeab7a
Diet eyeglasses: Recognising food chewing using EMG and smart eyeglasses
[ { "docid": "634ded02136fef04ec8c64a819522e7b", "text": "Maintaining appropriate levels of food intake anddeveloping regularity in eating habits is crucial to weight lossand the preservation of a healthy lifestyle. Moreover, maintainingawareness of one's own eating habits is an important steptowards portion control and ultimately, weight loss. Though manysolutions have been proposed in the area of physical activitymonitoring, few works attempt to monitor an individual's foodintake by means of a noninvasive, wearable platform. In thispaper, we introduce a novel nutrition-intake monitoring systembased around a wearable, mobile, wireless-enabled necklacefeaturing an embedded piezoelectric sensor. We also propose aframework capable of estimating volume of meals, identifyinglong-term trends in eating habits, and providing classificationbetween solid foods and liquids with an F-Measure of 85% and86% respectively. The data is presented to the user in the formof a mobile application.", "title": "" } ]
[ { "docid": "ae59ef9772ea8f8277a2d91030bd6050", "text": "Modelling and exploiting teammates’ policies in cooperative multi-agent systems have long been an interest and also a big challenge for the reinforcement learning (RL) community. The interest lies in the fact that if the agent knows the teammates’ policies, it can adjust its own policy accordingly to arrive at proper cooperations; while the challenge is that the agents’ policies are changing continuously due to they are learning concurrently, which imposes difficulty to model the dynamic policies of teammates accurately. In this paper, we present ATTention Multi-Agent Deep Deterministic Policy Gradient (ATT-MADDPG) to address this challenge. ATT-MADDPG extends DDPG, a single-agent actor-critic RL method, with two special designs. First, in order to model the teammates’ policies, the agent should get access to the observations and actions of teammates. ATT-MADDPG adopts a centralized critic to collect such information. Second, to model the teammates’ policies using the collected information in an effective way, ATT-MADDPG enhances the centralized critic with an attention mechanism. This attention mechanism introduces a special structure to explicitly model the dynamic joint policy of teammates, making sure that the collected information can be processed efficiently. We evaluate ATT-MADDPG on both benchmark tasks and the real-world packet routing tasks. Experimental results show that it not only outperforms the state-of-the-art RL-based methods and rule-based methods by a large margin, but also achieves better performance in terms of scalability and robustness.", "title": "" }, { "docid": "bc5a3cd619be11132ea39907f732bf4c", "text": "A burgeoning interest in the intersection of neuroscience and architecture promises to offer biologically inspired insights into the design of spaces. The goal of such interdisciplinary approaches to architecture is to motivate construction of environments that would contribute to peoples' flourishing in behavior, health, and well-being. We suggest that this nascent field of neuroarchitecture is at a pivotal point in which neuroscience and architecture are poised to extend to a neuroscience of architecture. In such a research program, architectural experiences themselves are the target of neuroscientific inquiry. Here, we draw lessons from recent developments in neuroaesthetics to suggest how neuroarchitecture might mature into an experimental science. We review the extant literature and offer an initial framework from which to contextualize such research. Finally, we outline theoretical and technical challenges that lie ahead.", "title": "" }, { "docid": "983cae67894ae61b2301dc79713969c0", "text": "Although there is no analytical framework for assessing the organizational benefits of ERP systems, several researchers have indicated that the balanced scorecard (BSC) approach may be an appropriate technique for evaluating the performance of ERP systems. This paper fills this gap in the literature by providing a balanced-scorecard based framework for valuing the strategic contributions of an ERP system. Using a successful SAP implementation by a major international aircraft engine manufacturing and service organization as a case study, this paper illustrates that an ERP system does indeed impacts the business objectives of the firm and derives a new innovative ERP framework for valuing the strategic impacts of ERP systems. 
The ERP valuation framework, called here an ERP scorecard, integrates the four Kaplan and Norton’s balanced scorecard dimensions with Zuboff’s automate, informate and transformate goals of information systems to provide a practical approach for measuring the contributions and impacts of ERP systems on the strategic goals of the company. # 2005 Published by Elsevier B.V.", "title": "" }, { "docid": "14dec918e2b6b4678c38f533e0f1c9c1", "text": "A method is presented to assess stability changes in waves in early-stage ship design. The method is practical: the calculations can be completed quickly and can be applied as soon as lines are available. The intended use of the described method is for preliminary analysis. If stability changes that result in large roll motion are indicated early in the design process, this permits planning and budgeting for direct assessments using numerical simulations and/or model experiments. The main use of the proposed method is for the justification for hull form shape modification or for necessary additional analysis to better quantify potentially increased stability risk. The method is based on the evaluation of changing stability in irregular seas and can be applied to any type of ship. To demonstrate the robustness of the method, results for ten naval ship types are presented and discussed. The proposed method is shown to identify ships with known risk for large stability changes in waves.", "title": "" }, { "docid": "fe16f2d946b3ea7bc1169d5667365dbe", "text": "This study assessed embodied simulation via electromyography (EMG) as participants first encoded emotionally ambiguous faces with emotion concepts (i.e., \"angry,\"\"happy\") and later passively viewed the faces without the concepts. Memory for the faces was also measured. At initial encoding, participants displayed more smiling-related EMG activity in response to faces paired with \"happy\" than in response to faces paired with \"angry.\" Later, in the absence of concepts, participants remembered happiness-encoded faces as happier than anger-encoded faces. Further, during passive reexposure to the ambiguous faces, participants' EMG indicated spontaneous emotion-specific mimicry, which in turn predicted memory bias. No specific EMG activity was observed when participants encoded or viewed faces with non-emotion-related valenced concepts, or when participants encoded or viewed Chinese ideographs. From an embodiment perspective, emotion simulation is a measure of what is currently perceived. Thus, these findings provide evidence of genuine concept-driven changes in emotion perception. More generally, the findings highlight embodiment's role in the representation and processing of emotional information.", "title": "" }, { "docid": "8f930fc4f06f8b17e2826f0975af1fa1", "text": "Smart parking is a typical IoT application that can benefit from advances in sensor, actuator and RFID technologies to provide many services to its users and parking owners of a smart city. This paper considers a smart parking infrastructure where sensors are laid down on the parking spots to detect car presence and RFID readers are embedded into parking gates to identify cars and help in the billing of the smart parking. Both types of devices are endowed with wired and wireless communication capabilities for reporting to a gateway where the situation recognition is performed. 
The sensor devices are tasked to play one of the three roles: (1) slave sensor nodes located on the parking spot to detect car presence/absence; (2) master nodes located at one of the edges of a parking lot to detect presence and collect the sensor readings from the slave nodes; and (3) repeater sensor nodes, also called \"anchor\" nodes, located strategically at specific locations in the parking lot to increase the coverage and connectivity of the wireless sensor network. While slave and master nodes are placed based on geographic constraints, the optimal placement of the relay/anchor sensor nodes in smart parking is an important parameter upon which the cost and efficiency of the parking system depends. We formulate the optimal placement of sensors in smart parking as an integer linear programming multi-objective problem optimizing the sensor network engineering efficiency in terms of coverage and lifetime maximization, as well as its economic gain in terms of the number of sensors deployed for a specific coverage and lifetime. We propose an exact solution to the node placement problem using single-step and two-step solutions implemented in the Mosel language based on the Xpress-MP suite of libraries. Experimental results reveal the relative efficiency of the single-step compared to the two-step model on different performance parameters. These results are consolidated by simulation results, which reveal that our solution outperforms a random placement in terms of both energy consumption, delay and throughput achieved by a smart parking network.", "title": "" }, { "docid": "413d0b457cc1b96bf65d8a3e1c98ed41", "text": "Peer-to-peer (P2P) lending is a fast growing financial technology (FinTech) trend that is displacing traditional retail banking. Studies on P2P lending have focused on predicting individual interest rates or default probabilities. However, the relationship between aggregated P2P interest rates and the general economy will be of interest to investors and borrowers as the P2P credit market matures. We show that the variation in P2P interest rates across grade types are determined by three macroeconomic latent factors formed by Canonical Correlation Analysis (CCA) — macro default, investor uncertainty, and the fundamental value of the market. However, the variation in P2P interest rates across term types cannot be explained by the general economy.", "title": "" }, { "docid": "85c360e0354e5eab69dc26b7a2dd715e", "text": "Waste management is one of the primary problem that the world faces irrespective of the case of developed or developing country. The key issue in the waste management is that the garbage bin at public places gets overflowed well in advance before the commencement of the next cleaning process. It in turn leads to various hazards such as bad odor & ugliness to that place which may be the root cause for spread of various diseases. To avoid all such hazardous scenario and maintain public cleanliness and health this work is mounted on a smart garbage system. 
The main theme of the work is to develop a smart intelligent garbage alert system for proper garbage management. This paper proposes a smart alert system for garbage clearance by giving an alert signal to the municipal web server for instant cleaning of the dustbin, with proper verification based on the level of garbage filling. This process is aided by an ultrasonic sensor which is interfaced with an Arduino UNO to check the level of garbage filled in the dustbin and sends the alert to the municipal web server once the garbage is filled. After cleaning the dustbin, the driver confirms the task of emptying the garbage with the aid of an RFID tag. RFID is a computing technology that is used for the verification process and, in addition, it also enhances the smart garbage alert system by providing automatic identification of the garbage filled in the dustbin and sends the status of clean-up to the server, affirming that the work is done. The whole process is upheld by an embedded module integrated with RFID and IoT facilities. The real-time status of how waste collection is being done can be monitored and followed up by the municipal authority with the aid of this system. In addition, the necessary remedial or alternative measures can be adopted. An Android application is developed and linked to a web server to communicate the alerts from the microcontroller to the urban office and to perform remote monitoring of the cleaning process done by the workers, thereby reducing the manual process of monitoring and verification. The notifications are sent to the Android application using a Wi-Fi module.", "title": "" }, { "docid": "469e5c159900b9d6662a9bfe9e01fde7", "text": "In the research of rule extraction from neural networks, fidelity describes how well the rules mimic the behavior of a neural network while accuracy describes how well the rules can be generalized. This paper identifies the fidelity-accuracy dilemma. It argues for distinguishing rule extraction using neural networks and rule extraction for neural networks according to their different goals, where fidelity and accuracy should be excluded from the rule quality evaluation framework, respectively.", "title": "" }, { "docid": "dceef3bbc02b4c83918d87d56cad863e", "text": "In this paper we present an automated way of using spare CPU resources within a shared memory multi-processor or multi-core machine. Our approach is (i) to profile the execution of a program, (ii) from this to identify pieces of work which are promising sources of parallelism, (iii) recompile the program with this work being performed speculatively via a work-stealing system and then (iv) to detect at run-time any attempt to perform operations that would reveal the presence of speculation.\n We assess the practicality of the approach through an implementation based on GHC 6.6 along with a limit study based on the execution profiles we gathered. We support the full Concurrent Haskell language compiled with traditional optimizations and including I/O operations and synchronization as well as pure computation. We use 20 of the larger programs from the 'nofib' benchmark suite. The limit study shows that programs vary a lot in the parallelism we can identify: some have none, 16 have a potential 2x speed-up, 4 have 32x. In practice, on a 4-core processor, we get 10-80% speed-ups on 7 programs. 
This is mainly achieved at the addition of a second core rather than beyond this.\n This approach is therefore not a replacement for manual parallelization, but rather a way of squeezing extra performance out of the threads of an already-parallel program or out of a program that has not yet been parallelized.", "title": "" }, { "docid": "e8478d17694b39bd252175139a5ca14d", "text": "Building a computationally creative system is a challenging undertaking. While such systems are beginning to proliferate, and a good number of them have been reasonably well-documented, it may seem, especially to newcomers to the field, that each system is a bespoke design that bears little chance of revealing any general knowledge about CC system building. This paper seeks to dispel this concern by presenting an abstract CC system description, or, in other words a practical, general approach for constructing CC systems.", "title": "" }, { "docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.", "title": "" }, { "docid": "b63e88701018a80a7815ee43b62e90fd", "text": "Educational data mining and learning analytics promise better understanding of student behavior and knowledge, as well as new information on the tacit factors that contribute to student actions. This knowledge can be used to inform decisions related to course and tool design and pedagogy, and to further engage students and guide those at risk of failure. This working group report provides an overview of the body of knowledge regarding the use of educational data mining and learning analytics focused on the teaching and learning of programming. In a literature survey on mining students' programming processes for 2005-2015, we observe a significant increase in work related to the field. However, the majority of the studies focus on simplistic metric analysis and are conducted within a single institution and a single course. This indicates the existence of further avenues of research and a critical need for validation and replication to better understand the various contributing factors and the reasons why certain results occur. We introduce a novel taxonomy to analyse replicating studies and discuss the importance of replicating and reproducing previous work. We describe what is the state of the art in collecting and sharing programming data. To better understand the challenges involved in replicating or reproducing existing studies, we report our experiences from three case studies using programming data. Finally, we present a discussion of future directions for the education and research community.", "title": "" }, { "docid": "f3e63f3fb0ce0e74697e0a74867d9671", "text": "Convolutional Neural Networks (CNN) have been successfully applied to autonomous driving tasks, many in an end-to-end manner. Previous end-to-end steering control methods take an image or an image sequence as the input and directly predict the steering angle with CNN. Although single task learning on steering angles has reported good performances, the steering angle alone is not sufficient for vehicle control. 
In this work, we propose a multi-task learning framework to predict the steering angle and speed control simultaneously in an end-to-end manner. Since it is nontrivial to predict accurate speed values with only visual inputs, we first propose a network to predict discrete speed commands and steering angles with image sequences. Moreover, we propose a multi-modal multi-task network to predict speed values and steering angles by taking previous feedback speeds and visual recordings as inputs. Experiments are conducted on the public Udacity dataset and a newly collected SAIC dataset. Results show that the proposed model predicts steering angles and speed values accurately. Furthermore, we improve the failure data synthesis methods to solve the problem of error accumulation in real road tests.", "title": "" }, { "docid": "4765f21109d36fb2631325fd0442aeac", "text": "The functions of rewards are based primarily on their effects on behavior and are less directly governed by the physics and chemistry of input events as in sensory systems. Therefore, the investigation of neural mechanisms underlying reward functions requires behavioral theories that can conceptualize the different effects of rewards on behavior. The scientific investigation of behavioral processes by animal learning theory and economic utility theory has produced a theoretical framework that can help to elucidate the neural correlates for reward functions in learning, goal-directed approach behavior, and decision making under uncertainty. Individual neurons can be studied in the reward systems of the brain, including dopamine neurons, orbitofrontal cortex, and striatum. The neural activity can be related to basic theoretical terms of reward and uncertainty, such as contiguity, contingency, prediction error, magnitude, probability, expected value, and variance.", "title": "" }, { "docid": "faa6f6dff0ed9b8b6eba8991c93a25fc", "text": "We present a system for Answer Selection that integrates fine-grained Question Classification with a Deep Learning model designed for Answer Selection. We detail the necessary changes to the Question Classification taxonomy and system, the creation of a new Entity Identification system and methods of highlighting entities to achieve this objective. Our experiments show that Question Classes are a strong signal to Deep Learning models for Answer Selection, and enable us to outperform the current state of the art in all variations of our experiments except one. In the best configuration, our MRR and MAP scores outperform the current state of the art by between 3 and 5 points on both versions of the TREC Answer Selection test set, a standard dataset for this task.", "title": "" }, { "docid": "49cf26b6c6dde96df9009a68758ee506", "text": "Dynamic imaging is a recently proposed action description paradigm for simultaneously capturing motion and temporal evolution information, particularly in the context of deep convolutional neural networks (CNNs). Compared with optical flow for motion characterization, dynamic imaging exhibits superior efficiency and compactness. Inspired by the success of dynamic imaging in RGB video, this study extends it to the depth domain. To better exploit three-dimensional (3D) characteristics, multi-view dynamic images are proposed. In particular, the raw depth video is densely projected with
respect to different virtual imaging viewpoints by rotating the virtual camera within the 3D space. Subsequently, dynamic images are extracted from the obtained multi-view depth videos and multi-view dynamic images are thus constructed from these images. Accordingly, more view-tolerant visual cues can be involved. A novel CNN model is then proposed to perform feature learning on multi-view dynamic images. Particularly, the dynamic images from different views share the same convolutional layers but correspond to different fully connected layers. This is aimed at enhancing the tuning effectiveness on shallow convolutional layers by alleviating the gradient vanishing problem. Moreover, as the spatial occurrence variation of the actions may impair the CNN, an action proposal approach is also put forth. In experiments, the proposed approach can achieve state-of-the-art performance on three challenging datasets.", "title": "" }, { "docid": "ba8467f6b5a28a2b076f75ac353334a0", "text": "Progress in science has advanced the development of human society across history, with dramatic revolutions shaped by information theory, genetic cloning, and artificial intelligence, among the many scientific achievements produced in the 20th century. However, the way that science advances itself is much less well-understood. In this work, we study the evolution of scientific development over the past century by presenting an anatomy of 89 million digitalized papers published between 1900 and 2015. We find that science has benefited from the shift from individual work to collaborative effort, with over 90% of the world-leading innovations generated by collaborations in this century, nearly four times higher than they were in the 1900s. We discover that rather than the frequent myopic- and self-referencing that was common in the early 20th century, modern scientists instead tend to look for literature further back and farther around. Finally, we also observe the globalization of scientific development from 1900 to 2015, including 25-fold and 7-fold increases in international collaborations and citations, respectively, as well as a dramatic decline in the dominant accumulation of citations by the US, the UK, and Germany, from ~95% to ~50% over the same period. Our discoveries are meant to serve as a starter for exploring the visionary ways in which science has developed throughout the past century, generating insight into and an impact upon the current scientific innovations and funding policies.", "title": "" }, { "docid": "4ede3f2caa829e60e4f87a9b516e28bd", "text": "This report describes the difficulties of training neural networks and in particular deep neural networks. It then provides a literature review of training methods for deep neural networks, with a focus on pre-training. It focuses on Deep Belief Networks composed of Restricted Boltzmann Machines and Stacked Autoencoders and provides an outreach on further and alternative approaches. It also includes related practical recommendations from the literature on training them. In the second part, initial experiments using some of the covered methods are performed on two databases. 
In particular, experiments are performed on the MNIST hand-written digit dataset and on facial emotion data from a Kaggle competition. The results are discussed in the context of results reported in other research papers. An error rate lower than the best contribution to the Kaggle competition is achieved using an optimized Stacked Autoencoder.", "title": "" }, { "docid": "5898f4adaf86393972bcbf4c4ab91540", "text": "This paper presents a non-intrusive approach for monitoring driver drowsiness using the fusion of several optimized indicators based on driver physical and driving performance measures, obtained from ADAS (Advanced Driver Assistant Systems) in simulated conditions. The paper is focused on real-time drowsiness detection technology rather than on long-term sleep/awake regulation prediction technology. We have developed our own vision system in order to obtain robust and optimized driver indicators able to be used in simulators and future real environments. These indicators are principally based on driver physical and driving performance skills. The fusion of several indicators, proposed in the literature, is evaluated using a neural network and a stochastic optimization method to obtain the best combination. We propose a new method for ground-truth generation based on a supervised Karolinska Sleepiness Scale (KSS). An extensive evaluation of indicators, derived from trials over a third generation simulator with several test subjects during different driving sessions, was performed. The main conclusions about the performance of single indicators and the best combinations of them are included, as well as the future works derived from this study.", "title": "" } ]
scidocsrr
484b12bbed6ea301f2f8b5acb6e011dd
A big data architecture for managing oceans of data and maritime applications
[ { "docid": "ebd0d534a87c3cd25eb276ea81af1860", "text": "As the challenge of our time, Big Data still has many research hassles, especially the variety of data. The high diversity of data sources often results in information silos, a collection of non-integrated data management systems with heterogeneous schemas, query languages, and APIs. Data Lake systems have been proposed as a solution to this problem, by providing a schema-less repository for raw data with a common access interface. However, just dumping all data into a data lake without any metadata management, would only lead to a 'data swamp'. To avoid this, we propose Constance, a Data Lake system with sophisticated metadata management over raw data extracted from heterogeneous data sources. Constance discovers, extracts, and summarizes the structural metadata from the data sources, and annotates data and metadata with semantic information to avoid ambiguities. With embedded query rewriting engines supporting structured data and semi-structured data, Constance provides users a unified interface for query processing and data exploration. During the demo, we will walk through each functional component of Constance. Constance will be applied to two real-life use cases in order to show attendees the importance and usefulness of our generic and extensible data lake system.", "title": "" }, { "docid": "461ee7b6a61a6d375a3ea268081f80f5", "text": "In this paper, we review the background and state-of-the-art of big data. We first introduce the general background of big data and review related technologies, such as could computing, Internet of Things, data centers, and Hadoop. We then focus on the four phases of the value chain of big data, i.e., data generation, data acquisition, data storage, and data analysis. For each phase, we introduce the general background, discuss the technical challenges, and review the latest advances. We finally examine the several representative applications of big data, including enterprise management, Internet of Things, online social networks, medial applications, collective intelligence, and smart grid. These discussions aim to provide a comprehensive overview and big-picture to readers of this exciting area. This survey is concluded with a discussion of open problems and future directions.", "title": "" } ]
[ { "docid": "c0fd9b73e2af25591e3c939cdbed1c1a", "text": "We propose a new end-to-end single image dehazing method, called Densely Connected Pyramid Dehazing Network (DCPDN), which can jointly learn the transmission map, atmospheric light and dehazing all together. The end-to-end learning is achieved by directly embedding the atmospheric scattering model into the network, thereby ensuring that the proposed method strictly follows the physics-driven scattering model for dehazing. Inspired by the dense network that can maximize the information flow along features from different levels, we propose a new edge-preserving densely connected encoder-decoder structure with multi-level pyramid pooling module for estimating the transmission map. This network is optimized using a newly introduced edge-preserving loss function. To further incorporate the mutual structural information between the estimated transmission map and the dehazed result, we propose a joint-discriminator based on generative adversarial network framework to decide whether the corresponding dehazed image and the estimated transmission map are real or fake. An ablation study is conducted to demonstrate the effectiveness of each module evaluated at both estimated transmission map and dehazed result. Extensive experiments demonstrate that the proposed method achieves significant improvements over the state-of-the-art methods. Code and dataset is made available at: https://github.com/hezhangsprinter/DCPDN", "title": "" }, { "docid": "f9161b68fef96e0e3141e2d45effa33a", "text": "Water molecules can be affected by magnetic fields (MF) due to their bipolar characteristics. In the present study maize plants, from sowing to the end period of generative stage, were irrigated with magnetically treated water (MTW).Tap water was treated with MF by passing through a locally designed alternative magnetic field generating apparatus (110 mT). Irrigation with MTW increased the ear length and fresh weight, 100-grain fresh and dry weights, and water productivity (119.5%, 119.1%, 114.2%, 116.6% and 122.3%, respectively), compared with the control groups. Levels of photosynthetic pigments i.e. chlorophyll a and b, and the contents of anthocyanin and flavonoids of the leaves were increased compared to those of non-treated ones. Increase of the activity of superoxide dismutase (SOD) and ascorbate peroxidase (APX) in leaves of the treated plants efficiently scavenged active oxygen species and resulted in the maintenance of photosynthetic membranes and reduction of malondealdehyde. Total ferritin, sugar, iron and calcium contents of kernels of MTW-irrigated plants were respectively 122.9%, 167.4%, 235% and 185% of the control ones. From the results presented here it can be concluded that the influence of MF on living plant cells, at least in part, is mediated by water. The results also suggest that irrigation of maize plant with MTW can be applied as a useful method for improvement of quantity and quality of it.", "title": "" }, { "docid": "796ae2d702a66d7af19ac4bb6a52aa6b", "text": "Methods for embedding secret data are more sophisticated than their ancient predecessors, but the basic principles remain unchanged.", "title": "" }, { "docid": "f4380a5acaba5b534d13e1a4f09afe4f", "text": "Several approaches to automatic speech summarization are discussed below, using the ICSI Meetings corpus. We contrast feature-based approaches using prosodic and lexical features with maximal marginal relevance and latent semantic analysis approaches to summarization. 
While the latter two techniques are borrowed directly from the field of text summarization, feature-based approaches using prosodic information are able to utilize characteristics unique to speech data. We also investigate how the summarization results might deteriorate when carried out on ASR output as opposed to manual transcripts. All of the summaries are of an extractive variety, and are compared using the software ROUGE.", "title": "" }, { "docid": "6f1e71399e5786eb9c3923a1e967cd8f", "text": "This Working Paper should not be reported as representing the views of the IMF. The views expressed in this Working Paper are those of the author(s) and do not necessarily represent those of the IMF or IMF policy. Working Papers describe research in progress by the author(s) and are published to elicit comments and to further debate. Using a dataset which breaks down FDI flows into primary, secondary and tertiary sector investments and a GMM dynamic approach to address concerns about endogeneity, the paper analyzes various macroeconomic, developmental, and institutional/qualitative determinants of FDI in a sample of emerging market and developed economies. While FDI flows into the primary sector show little dependence on any of these variables, secondary and tertiary sector investments are affected in different ways by countries’ income levels and exchange rate valuation, as well as development indicators such as financial depth and school enrollment, and institutional factors such as judicial independence and labor market flexibility. Finally, we find that the effect of these factors often differs between advanced and emerging economies. JEL Classification Numbers: F21, F23", "title": "" }, { "docid": "7cf8e1e356c8e5d00bc975e001c40384", "text": "We present NeuroSAT, a message passing neural network that learns to solve SAT problems after only being trained as a classifier to predict satisfiability. Although it is not competitive with state-of-the-art SAT solvers, NeuroSAT can solve problems that are substantially larger and more difficult than it ever saw during training by simply running for more iterations. Moreover, NeuroSAT generalizes to novel distributions; after training only on random SAT problems, at test time it can solve SAT problems encoding graph coloring, clique detection, dominating set, and vertex cover problems, all on a range of distributions over small random graphs.", "title": "" }, { "docid": "60ad412d0d6557d2a06e9914bbf3c680", "text": "Helpfulness of online reviews is a multi-faceted concept that can be driven by several types of factors. This study was designed to extend existing research on online review helpfulness by looking at not just the quantitative factors (such as word count), but also qualitative aspects of reviewers (including reviewer experience, reviewer impact, reviewer cumulative helpfulness). This integrated view uncovers some insights that were not available before. Our findings suggest that word count has a threshold in its effects on review helpfulness. Beyond this threshold, its effect diminishes significantly or becomes near non-existent. Reviewer experience and their impact were not statistically significant predictors of helpfulness, but past helpfulness records tended to predict future helpfulness ratings. Review framing was also a strong predictor of helpfulness. As a result, characteristics of reviewers and review messages have a varying degree of impact on review helpfulness. Theoretical and practical implications are discussed. 
2015 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "60716b31303314598ac2f68d45c6cb51", "text": "Female genital cosmetic surgery procedures have gained popularity in the West in recent years. Marketing by surgeons promotes the surgeries, but professional organizations have started to question the promotion and practice of these procedures. Despite some surgeon claims of drastic transformations of psychological, emotional, and sexual life associated with the surgery, little reliable evidence of such effects exists. This article achieves two objectives. First, reviewing the published academic work on the topic, it identifies the current state of knowledge around female genital cosmetic procedures, as well as limitations in our knowledge. Second, examining a body of critical scholarship that raises sociological and psychological concerns not typically addressed in medical literature, it summarizes broader issues and debates. Overall, the article demonstrates a paucity of scientific knowledge and highlights a pressing need to consider the broader ramifications of surgical practices. \"Today we have a whole society held in thrall to the drastic plastic of labial rejuvenation.\"( 1 ) \"At the present time, the field of female cosmetic genital surgery is like the old Wild, Wild West: wide open and unregulated\"( 2 ).", "title": "" }, { "docid": "6ef6cbb60da56bfd53ae945480908d3c", "text": "OBJECTIVE\nIn multidisciplinary prenatal diagnosis centers, the search for a tetrasomy 12p mosaic is requested following the discovery of a diaphragmatic hernia in the antenatal period. Thus, the series of Pallister Killian syndromes (PKS: OMIM 601803) probably overestimate the prevalence of diaphragmatic hernia in this syndrome to the detriment of other morphological abnormalities.\n\n\nMETHODS\nA multicenter retrospective study was conducted with search for assistance from members of the French society for Fetal Pathology. For each identified case, we collected all antenatal and postnatal data. Antenatal data were compared with data from the clinicopathological examination to assess the adequacy of sonographic signs of PKS. A review of the literature on antenatal morphological anomalies in case of PKS completed the study.\n\n\nRESULTS\nTen cases were referred to us: 7 had cytogenetic confirmation and 6 had ultrasound screening. In the prenatal as well as post mortem period, the most common sign is facial dysmorphism (5 cases/6). A malformation of limbs is reported in half of the cases (3 out of 6). Ultrasound examination detected craniofacial dysmorphism in 5 cases out of 6. We found 1 case of left diaphragmatic hernia. Our results are in agreement with the malformation spectrum described in the literature.\n\n\nCONCLUSION\nSome malformation associations could evoke a SPK without classical diaphragmatic hernia.", "title": "" }, { "docid": "10ebcd3a97863037b5bdab03c06dd0e1", "text": "Nonlinear dynamical systems are ubiquitous in science and engineering, yet many issues still exist related to the analysis and prediction of these systems. Koopman theory circumvents these issues by transforming the finite-dimensional nonlinear dynamics to a linear dynamical system of functions in an infinite-dimensional Hilbert space of observables. The eigenfunctions of the Koopman operator evolve linearly in time and thus provide a natural coordinate system for simplifying the dynamical behaviors of the system. 
We consider a family of observable functions constructed by projecting the delay coordinates of the system onto the eigenvectors of the autocorrelation function, which can be regarded as continuous SVD basis vectors for time-delay observables. We observe that these functions are the most parsimonious basis of observables for a system with Koopman mode decomposition of order N , in the sense that the associated Koopman eigenfunctions are guaranteed to lie in the span of the first N of these coordinates. We conjecture and prove a number of theoretical results related to the quality of these approximations in the more general setting where the system has mixed spectra or the coordinates are otherwise insufficient to capture the full spectral information. We prove a related and very general result that the dynamics of the observables generated by projecting delay coordinates onto an arbitrary orthonormal basis are systemindependent and depend only on the choice of basis, which gives a highly efficient way of computing representations of the Koopman operator in these coordinates. We show that this formalism provides a theoretical underpinning for the empirical results in [8], which found that chaotic dynamical systems can be approximately factored into intermittently forced linear systems when viewed in delay coordinates. Finally, we compute these time delay observables for a number of example dynamical systems and show that empirical results match our theory.", "title": "" }, { "docid": "c45b962006b2bb13ab57fe5d643e2ca6", "text": "Physical activity has a positive impact on people's well-being, and it may also decrease the occurrence of chronic diseases. Activity recognition with wearable sensors can provide feedback to the user about his/her lifestyle regarding physical activity and sports, and thus, promote a more active lifestyle. So far, activity recognition has mostly been studied in supervised laboratory settings. The aim of this study was to examine how well the daily activities and sports performed by the subjects in unsupervised settings can be recognized compared to supervised settings. The activities were recognized by using a hybrid classifier combining a tree structure containing a priori knowledge and artificial neural networks, and also by using three reference classifiers. Activity data were collected for 68 h from 12 subjects, out of which the activity was supervised for 21 h and unsupervised for 47 h. Activities were recognized based on signal features from 3-D accelerometers on hip and wrist and GPS information. The activities included lying down, sitting and standing, walking, running, cycling with an exercise bike, rowing with a rowing machine, playing football, Nordic walking, and cycling with a regular bike. The total accuracy of the activity recognition using both supervised and unsupervised data was 89% that was only 1% unit lower than the accuracy of activity recognition using only supervised data. However, the accuracy decreased by 17% unit when only supervised data were used for training and only unsupervised data for validation, which emphasizes the need for out-of-laboratory data in the development of activity-recognition systems. 
The results support a vision of recognizing a wider spectrum, and more complex activities in real life settings.", "title": "" }, { "docid": "be593352763133428b837f1c593f30cf", "text": "Deep Learning’s recent successes have mostly relied on Convolutional Networks, which exploit fundamental statistical properties of images, sounds and video data: the local stationarity and multi-scale compositional structure, that allows expressing long range interactions in terms of shorter, localized interactions. However, there exist other important examples, such as text documents or bioinformatic data, that may lack some or all of these strong statistical regularities. In this paper we consider the general question of how to construct deep architectures with small learning complexity on general non-Euclidean domains, which are typically unknown and need to be estimated from the data. In particular, we develop an extension of Spectral Networks which incorporates a Graph Estimation procedure, that we test on large-scale classification problems, matching or improving over Dropout Networks with far less parameters to estimate.", "title": "" }, { "docid": "5d60a9e9475acda268fc8216a98e6162", "text": "Conventional topic modeling schemes, such as Latent Dirichlet Allocation, are known to perform inadequately when applied to tweets, due to the sparsity of short documents. To alleviate these disadvantages, we apply several pooling techniques, aggregating similar tweets into individual documents, and specifically study the aggregation of tweets sharing authors or hashtags. The results show that aggregating similar tweets into individual documents significantly increases topic coherence.", "title": "" }, { "docid": "faed829d4fc252159a0ed5e7ff1eea07", "text": "Modern cryptographic practice rests on the use of one-way functions, which are easy to evaluate but difficult to invert. Unfortunately, commonly used one-way functions are either based on unproven conjectures or have known vulnerabilities. We show that instead of relying on number theory, the mesoscopic physics of coherent transport through a disordered medium can be used to allocate and authenticate unique identifiers by physically reducing the medium's microstructure to a fixed-length string of binary digits. These physical one-way functions are inexpensive to fabricate, prohibitively difficult to duplicate, admit no compact mathematical representation, and are intrinsically tamper-resistant. We provide an authentication protocol based on the enormous address space that is a principal characteristic of physical one-way functions.", "title": "" }, { "docid": "bde4e8743d2146d3ee9af39f27d14b5a", "text": "For several decades now, there has been sporadic interest in automatically characterizing the speech impairment due to Parkinson's disease (PD). Most early studies were confined to quantifying a few speech features that were easy to compute. More recent studies have adopted a machine learning approach where a large number of potential features are extracted and the models are learned automatically from the data. In the same vein, here we characterize the disease using a relatively large cohort of 168 subjects, collected from multiple (three) clinics. We elicited speech using three tasks - the sustained phonation task, the diadochokinetic task and a reading task, all within a time budget of 4 minutes, prompted by a portable device. From these recordings, we extracted 1582 features for each subject using openSMILE, a standard feature extraction tool. 
We compared the effectiveness of three strategies for learning a regularized regression and find that ridge regression performs better than lasso and support vector regression for our task. We refine the feature extraction to capture pitch-related cues, including jitter and shimmer, more accurately using a time-varying harmonic model of speech. Our results show that the severity of the disease can be inferred from speech with a mean absolute error of about 5.5, explaining 61% of the variance and consistently well-above chance across all clinics. Of the three speech elicitation tasks, we find that the reading task is significantly better at capturing cues than diadochokinetic or sustained phonation task. In all, we have demonstrated that the data collection and inference can be fully automated, and the results show that speech-based assessment has promising practical application in PD. The techniques reported here are more widely applicable to other paralinguistic tasks in clinical domain.", "title": "" }, { "docid": "ca1d5c5da03fb9c3b6f7c023dc8f9e9c", "text": "Recent introduction of all-oral direct-acting antiviral (DAA) treatment has revolutionized care of patients with chronic hepatitis C virus infection. Because patients with different liver disease stages have been treated with great success including those awaiting liver transplantation, therapy has been extended to patients with hepatocellular carcinoma as well. From observational studies among compensated cirrhotic hepatitis C patients treated with interferon-containing regimens, it would have been expected that the rate of hepatocellular carcinoma occurrence is markedly decreased after a sustained virological response. However, recently 2 studies have been published reporting markedly increased rates of tumor recurrence and occurrence after viral clearance with DAA agents. Over the last decades, it has been established that chronic antigen stimulation during persistent infection with hepatitis C virus is associated with continuous activation and impaired function of several immune cell populations, such as natural killer cells and virus-specific T cells. This review therefore focuses on recent studies evaluating the restoration of adaptive and innate immune cell populations after DAA therapy in patients with chronic hepatitis C virus infection in the context of the immune responses in hepatocarcinogenesis.", "title": "" }, { "docid": "9a82781af933251208aef5e683839346", "text": "We present a comprehensive overview of the stereoscopic Intel RealSense RGBD imaging systems. We discuss these systems’ mode-of-operation, functional behavior and include models of their expected performance, shortcomings, and limitations. We provide information about the systems’ optical characteristics, their correlation algorithms, and how these properties can affect different applications, including 3D reconstruction and gesture recognition. Our discussion covers the Intel RealSense R200 and the Intel RealSense D400 (formally RS400).", "title": "" }, { "docid": "74beaea9eccab976dc1ee7b2ddf3e4ca", "text": "We develop theory that distinguishes trust among employees in typical task contexts (marked by low levels of situational unpredictability and danger) from trust in “highreliability” task contexts (those marked by high levels of situational unpredictability and danger). 
A study of firefighters showed that trust in high-reliability task contexts was based on coworkers’ integrity, whereas trust in typical task contexts was also based on benevolence and identification. Trust in high-reliability contexts predicted physical symptoms, whereas trust in typical contexts predicted withdrawal. Job demands moderated linkages with performance: trust in high-reliability task contexts was a more positive predictor of performance when unpredictable and dangerous calls were more frequent.", "title": "" }, { "docid": "bed6312dd677fa37c30e72d0383973ed", "text": " Fig. 1 shows an outline of mastery learning. First, the teacher needs to review the curriculum and teaching materials, because their concepts and ideas are important. Next comes the planning of formative assessment, that is, the teacher's diagnostic instruments and diagnostic processes; this is also the main support for planning the Corrective Activities used to remedy learning errors. Corrective Activities: corrective activities can take many forms, such as peer or cross-age tutoring and computer-assisted lessons. Enrichment Activities: special tutoring in problem-solving practice, offering stimulating and fruitful learning for advanced learners. Formative Assessment B: if the Corrective Activities have improved the learners, they demonstrate mastery on this second assessment. This second assessment shows learners that they have improved and become better learners, and it serves as a powerful motivational device. Finally, there is the development of a cumulative examination or assessment.", "title": "" }, { "docid": "54af3c39dba9aafd5b638d284fd04345", "text": "In this paper, Principal Component Analysis (PCA), Most Discriminant Features (MDF), and Regularized-Direct Linear Discriminant Analysis (RD-LDA)-based feature extraction approaches are tested and compared in an experimental personal recognition system. The system is multimodal and is based on features extracted from nine regions of an image of the palmar surface of the hand. For testing purposes, 10 gray-scale images of the right hand of 184 people were acquired. The experiments have shown that the best results are obtained with the RD-LDA-based feature extraction approach (100% correctness for 920 identification tests and EER = 0.01% for 64170 verification tests).", "title": "" } ]
scidocsrr
dbddb71ba5b69885d3284474a7414188
The influence of social media interactions on consumer–brand relationships: A three-country study of brand perceptions and marketing behaviors
[ { "docid": "0ee70b75cdcf22b8a22a1810227d401f", "text": "Traditionally, consumers used the Internet to simply expend content: they read it, they watched it, and they used it to buy products and services. Increasingly, however, consumers are utilizing platforms–—such as content sharing sites, blogs, social networking, and wikis–—to create, modify, share, and discuss Internet content. This represents the social media phenomenon, which can now significantly impact a firm’s reputation, sales, and even survival. Yet, many executives eschew or ignore this form of media because they don’t understand what it is, the various forms it can take, and how to engage with it and learn. In response, we present a framework that defines social media by using seven functional building blocks: identity, conversations, sharing, presence, relationships, reputation, and groups. As different social media activities are defined by the extent to which they focus on some or all of these blocks, we explain the implications that each block can have for how firms should engage with social media. To conclude, we present a number of recommendations regarding how firms should develop strategies for monitoring, understanding, and responding to different social media activities. final version published in Business Horizons (2011) v. 54 pp. 241-251. doi: 10.106/j.bushor.2011.01.005 1. Welcome to the jungle: The social media ecology Social media employ mobile and web-based technologies to create highly interactive platforms via which individuals and communities share, co-", "title": "" }, { "docid": "bf1ba6901d6c64a341ba1491c6c2c3c9", "text": "The present research proposes schema congruity as a theoretical basis for examining the effectiveness and consequences of product anthropomorphism. Results of two studies suggest that the ability of consumers to anthropomorphize a product and their consequent evaluation of that product depend on the extent to which that product is endowed with characteristics congruent with the proposed human schema. Furthermore, consumers’ perception of the product as human mediates the influence of feature type on product evaluation. Results of a third study, however, show that the affective tag attached to the specific human schema moderates the evaluation but not the successful anthropomorphizing of theproduct.", "title": "" }, { "docid": "e6034310ee28d8ed4cbd1ea4c71cd76b", "text": "This study emphasizes the need for standardized measurement tools for human robot interaction (HRI). If we are to make progress in this field then we must be able to compare the results from different studies. A literature review has been performed on the measurements of five key concepts in HRI: anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety. The results have been distilled into five consistent questionnaires using semantic differential scales. We report reliability and validity indicators based on several empirical studies that used these questionnaires. It is our hope that these questionnaires can be used by robot developers to monitor their progress. Psychologists are invited to further develop the questionnaires by adding new concepts, and to conduct further validations where it appears necessary. C. Bartneck ( ) Department of Industrial Design, Eindhoven University of Technology, Den Dolech 2, 5600 Eindhoven, The Netherlands e-mail: c.bartneck@tue.nl D. 
Kulić Nakamura & Yamane Lab, Department of Mechano-Informatics, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan e-mail: dana@ynl.t.u-tokyo.ac.jp E. Croft · S. Zoghbi Department of Mechanical Engineering, University of British Columbia, 6250 Applied Science Lane, Room 2054, Vancouver, V6T 1Z4, Canada E. Croft e-mail: ecroft@mech.ubc.ca S. Zoghbi e-mail: szoghbi@mech.ubc.ca", "title": "" }, { "docid": "6a27457b4d8efea03475f4d276a704c9", "text": "Why are certain pieces of online content more viral than others? This article takes a psychological approach to understanding diffusion. Using a unique dataset of all the New York Times articles published over a three month period, the authors examine how emotion shapes virality. Results indicate that positive content is more viral than negative content, but that the relationship between emotion and social transmission is more complex than valence alone. Virality is driven, in part, by physiological arousal. Content that evokes high-arousal positive (awe) or negative (anger or anxiety) emotions is more viral. Content that evokes low arousal, or deactivating emotions (e.g., sadness) is less viral. These results hold even controlling for how surprising, interesting, or practically useful content is (all of which are positively linked to virality), as well as external drivers of attention (e.g., how prominently content was featured). Experimental results further demonstrate the causal impact of specific emotion on transmission, and illustrate that it is driven by the level of activation induced. Taken together, these findings shed light on why people share content and provide insight into designing effective viral marketing", "title": "" }, { "docid": "3711e4c4feec68299f3f94858e7611f8", "text": "There is an ongoing debate over the activities of brands and companies in social media. Some researchers believe social media provide a unique opportunity for brands to foster their relationships with customers, while others believe the contrary. Taking the perspective of the brand community building plus the brand trust and loyalty literatures, our goal is to show how brand communities based on social media influence elements of the customer centric model (i.e., the relationships among focal customer and brand, product, company, and other customers) and brand loyalty. A survey-based empirical study with 441 respondents was conducted. The results of structural equation modeling show that brand communities established on social media have positive effects on customer/product, customer/brand, customer/company and customer/other customers relationships, which in turn have positive effects on brand trust, and trust has positive effects on brand loyalty. We find that brand trust has a fully mediating role in converting the effects of enhanced relationships in brand community to brand loyalty. The implications for marketing practice and future research are discussed. © 2012 Elsevier Ltd. All rights reserved.", "title": "" } ]
[ { "docid": "80ed0585f1b040f2af895f1067502899", "text": "In this paper, we present the concept of transmitting power without using wires i.e., transmitting power as microwaves from one place to another is in order to reduce the cost, transmission and distribution losses. This concept is known as Microwave Power transmission (MPT). We also discussed the technological developments in Wireless Power Transmission (WPT) which are required for the improment .The components which are requiredfor the development of Microwave Power transmission(MPT)are also mentioned along with the performance when they are connected to various devices at different frequency levels . The advantages, disadvantages, biological impacts and applications of WPT are also presented.", "title": "" }, { "docid": "759140ad09a5a8ce5c5e1ca78e238de1", "text": "Various issues make framework development harder than regular development. Building product lines and frameworks requires increased coordination and communication between stakeholders and across the organization.\n The difficulty of building the right abstractions ranges from understanding the domain models, selecting and evaluating the framework architecture, to designing the right interfaces, and adds to the complexity of a framework project.", "title": "" }, { "docid": "743aeaa668ba32e6561e9e62015e24cd", "text": "A smart city enables the effective utilization of resources and better quality of services to the citizens. To provide services such as air quality management, weather monitoring and automation of homes and buildings in a smart city, the basic parameters are temperature, humidity and CO2. This paper presents a customised design of an Internet of Things (IoT) enabled environment monitoring system to monitor temperature, humidity and CO2. In developed system, data is sent from the transmitter node to the receiver node. The data received at the receiver node is monitored and recorded in an excel sheet in a personal computer (PC) through a Graphical User Interface (GUI), made in LabVIEW. An Android application has also been developed through which data is transferred from LabVIEW to a smartphone, for monitoring data remotely. The results and the performance of the proposed system is discussed.", "title": "" }, { "docid": "06ef397d13383ff09f2f6741c0626192", "text": "A fully-integrated low-dropout regulator (LDO) with fast transient response and full spectrum power supply rejection (PSR) is proposed to provide a clean supply for noise-sensitive building blocks in wideband communication systems. With the proposed point-of-load LDO, chip-level high-frequency glitches are well attenuated, consequently the system performance is improved. A tri-loop LDO architecture is proposed and verified in a 65 nm CMOS process. In comparison to other fully-integrated designs, the output pole is set to be the dominant pole, and the internal poles are pushed to higher frequencies with only 50 μA of total quiescent current. For a 1.2 V input voltage and 1 V output voltage, the measured undershoot and overshoot is only 43 mV and 82 mV, respectively, for load transient of 0 μA to 10 mA within edge times of 200 ps. It achieves a transient response time of 1.15 ns and the figure-of-merit (FOM) of 5.74 ps. PSR is measured to be better than -12 dB over the whole spectrum (DC to 20 GHz tested). 
The prototype chip measures 260×90 μm2, including 140 pF of stacked on-chip capacitors.", "title": "" }, { "docid": "d2c4f17c9bb6ec2112fe39e95dfed94e", "text": "B loyalty and the more modern topics of computing customer lifetime value and structuring loyalty programs remain the focal point for a remarkable number of research articles. At first, this research appears consistent with firm practices. However, close scrutiny reveals disaffirming evidence. Many current so-called loyalty programs appear unrelated to the cultivation of customer brand loyalty and the creation of customer assets. True investments are up-front expenditures that produce much greater future returns. In contrast, many socalled loyalty programs are shams because they produce liabilities (e.g., promises of future rewards or deferred rebates) rather than assets. These programs produce short-term revenue from customers while producing substantial future obligations to those customers. Rather than showing trust by committing to the customer, the firm asks the customer to trust the firm—that is, trust that future rewards are indeed forthcoming. The entire idea is antithetical to the concept of a customer asset. Many modern loyalty programs resemble old-fashioned trading stamps or deferred rebates that promise future benefits for current patronage. A true loyalty program invests in the customer (e.g., provides free up-front training, allows familiarization or customization) with the expectation of greater future revenue. Alternative motives for extant programs are discussed.", "title": "" }, { "docid": "1969bf5a07349cc5a9b498e0437e41fe", "text": "In this work, we tackle the problem of instance segmentation, the task of simultaneously solving object detection and semantic segmentation. Towards this goal, we present a model, called MaskLab, which produces three outputs: box detection, semantic segmentation, and direction prediction. Building on top of the Faster-RCNN object detector, the predicted boxes provide accurate localization of object instances. Within each region of interest, MaskLab performs foreground/background segmentation by combining semantic and direction prediction. Semantic segmentation assists the model in distinguishing between objects of different semantic classes including background, while the direction prediction, estimating each pixel's direction towards its corresponding center, allows separating instances of the same semantic class. Moreover, we explore the effect of incorporating recent successful methods from both segmentation and detection (e.g., atrous convolution and hypercolumn). Our proposed model is evaluated on the COCO instance segmentation benchmark and shows comparable performance with other state-of-art models.", "title": "" }, { "docid": "6300234fd4ed55285459b8561b5c0ed0", "text": "In conventional power system operation, droop control methods are used to facilitate load sharing among different generation sources. This method compensates for both active and reactive power imbalances by adjusting the output voltage magnitude and frequency of the generating unit. Both P-ω and Q-V droops have been used in synchronous machines for decades. Similar droop controllers were used in this study to develop a control algorithm for a three-phase isolated (islanded) inverter. Controllers modeled in a synchronous dq reference frame were simulated in PLECS and validated with the hardware setup. A small-signal model based on an averaged model of the inverter was developed to study the system's dynamics. 
The accuracy of this mathematical model was then verified using the data obtained from the experimental and simulation results. This validated model is a useful tool for the further dynamic analysis of a microgrid.", "title": "" }, { "docid": "066b4130dbc9c36d244e5da88936dfc4", "text": "Real-time strategy (RTS) games have drawn great attention in the AI research community, for they offer a challenging and rich testbed for both machine learning and AI techniques. Due to their enormous state spaces and possible map configurations, learning good and generalizable representations for machine learning is crucial to build agents that can perform well in complex RTS games. In this paper we present a convolutional neural network approach to learn an evaluation function that focuses on learning general features that are independent of the map configuration or size. We first train and evaluate the network on a winner prediction task on a dataset collected with a small set of maps with a fixed size. Then we evaluate the network’s generalizability to three set of larger maps. by using it as an evaluation function in the context of Monte Carlo Tree Search. Our results show that the presented architecture can successfully capture general and map-independent features applicable to more complex RTS situations.", "title": "" }, { "docid": "5739713d17ec5cc6952832644b2a1386", "text": "Group Support Systems (GSS) can improve the productivity of Group Work by offering a variety of tools to assist a virtual group across geographical distances. Experience shows that the value of a GSS depends on how purposefully and skillfully it is used. We present a framework for a universal GSS based on a thinkLet- and thinXel-based Group Process Modeling Language (GPML). Our framework approach uses the GPML to describe different kinds of group processes in an unambiguous and compact representation and to guide the participants automatically through these processes. We assume that a GSS based on this GPML can provide the following advantages: to support the user by designing and executing a collaboration process and to increase the applicability of GSSs for different kinds of group processes. We will present a prototype and use different kinds of group processes to illustrate the application of a GPML for a universal GSS.", "title": "" }, { "docid": "fd8ac9c61b2146a27465e96b4f0eb5f6", "text": "In this paper performance of LQR and ANFIS control for a Double Inverted Pendulum system is compared. The double inverted pendulum system is highly unstable and nonlinear. Mathematical model is presented by linearizing the system about its vertical position. The analysis of the system is performed for its stability, controllability and observability. Furthermore, the LQR controller and ANFIS controller based on the state variable fusion is proposed for the control of the double inverted pendulum system and simulation results show that ANFIS controller has better tracking performance and disturbance rejecting performance as compared to LQR controller.", "title": "" }, { "docid": "2f01e912a6fbafca1e791ef18fb51ceb", "text": "Visualizing the result of users' opinion mining on twitter using social network graph can play a crucial role in decision-making. Available data visualizing tools, such as NodeXL, use a specific file format as an input to construct and visualize the social network graph. One of the main components of the input file is the sentimental score of the users' opinion. 
This motivates us to develop a free and open source system that can take the opinion of users in raw text format and produce easy-to-interpret visualization of opinion mining and sentiment analysis result on a social network. We use a public machine learning library called LingPipe Library to classify the sentiments of users' opinion into positive, negative and neutral classes. Our proposed system can be used to analyze and visualize users' opinion on the network level to determine sub-social structures (sub-groups). Moreover, the proposed system can also identify influential people in the social network by using node level metrics such as betweenness centrality. In addition to the network level and node level analysis, our proposed method also provides an efficient filtering mechanism by either time and date, or the sentiment score. We tested our proposed system using user opinions about different Samsung products and related issues that are collected from five official twitter accounts of Samsung Company. The test results show that our proposed system will be helpful to analyze and visualize the opinion of users at both network level and node level.", "title": "" }, { "docid": "b8700283c7fb65ba2e814adffdbd84f8", "text": "Human immunoglobulin preparations for intravenous or subcutaneous administration are the cornerstone of treatment in patients with primary immunodeficiency diseases affecting the humoral immune system. Intravenous preparations have a number of important uses in the treatment of other diseases in humans as well, some for which acceptable treatment alternatives do not exist. We provide an update of the evidence-based guideline on immunoglobulin therapy, last published in 2006. Given the potential risks and inherent scarcity of human immunoglobulin, careful consideration of its indications and administration is warranted.", "title": "" }, { "docid": "f74acc86ecbd8aa9678fbcb13559ae01", "text": "Strawberry and kiwi leathers were used to develop a new healthy and preservative-free fruit snack for new markets. Fruit puree was dehydrated at 60 °C for 20 h and subjected to accelerated storage. Soluble solids, titratable acidity, pH, water activity (aw ), total phenolic (TP), antioxidant activity (AOA) and capacity (ORAC), and color change (browning index) were measured in leathers, cooked, and fresh purees. An untrained panel was used to evaluate consumer acceptability. Soluble solids of fresh purees were 11.24 to 13.04 °Brix, whereas pH was 3.46 to 3.39. Leathers presented an aw of 0.59 to 0.67, and a moisture content of 21 kg water/100 kg. BI decreased in both leathers over accelerated storage period. TP and AOA were higher (P ≤ 0.05) in strawberry formulations. ORAC decreased 57% in strawberry and 65% in kiwi leathers when compared to fruit puree. TP and AOA increased in strawberries during storage. Strawberry and Kiwi leathers may be a feasible new, natural, high antioxidant, and healthy snack for the Chilean and other world markets, such as Europe, particularly the strawberry leather, which was preferred by untrained panelists.", "title": "" }, { "docid": "6ff51eea5a590996ed0219a4991d32f2", "text": "The number R(4, 3, 3) is often presented as the unknown Ramsey number with the best chances of being found “soon”. Yet, its precise value has remained unknown for almost 50 years. This paper presents a methodology based on abstraction and symmetry breaking that applies to solve hard graph edge-coloring problems. 
The utility of this methodology is demonstrated by using it to compute the value R(4, 3, 3) = 30. Along the way it is required to first compute the previously unknown set $\\mathcal{R}(3,3,3;13)$ consisting of 78,892 Ramsey colorings.", "title": "" }, { "docid": "9be50791156572e6e1a579952073d810", "text": "A synthetic aperture radar (SAR) raw data simulator is an important tool for testing the system parameters and the imaging algorithms. In this paper, a scene raw data simulator based on an inverse ω-k algorithm for bistatic SAR of a translational invariant case is proposed. The differences between simulations of monostatic and bistatic SAR are also described. The algorithm proposed has high precision and can be used in long-baseline configuration and for single-pass interferometry. Implementation details are described, and plenty of simulation results are provided to validate the algorithm.", "title": "" }, { "docid": "fcf01af44da0c796cdaf02c8e05a0fd3", "text": "As a promising paradigm to reduce both capital and operating expenditures, the cloud radio access network (C-RAN) has been shown to provide high spectral efficiency and energy efficiency. Motivated by its significant theoretical performance gains and potential advantages, C-RANs have been advocated by both the industry and research community. This paper comprehensively surveys the recent advances of C-RANs, including system architectures, key techniques, and open issues. The system architectures with different functional splits and the corresponding characteristics are comprehensively summarized and discussed. The state-of-the-art key techniques in C-RANs are classified as: the fronthaul compression, large-scale collaborative processing, and channel estimation in the physical layer; and the radio resource allocation and optimization in the upper layer. Additionally, given the extensiveness of the research area, open issues, and challenges are presented to spur future investigations, in which the involvement of edge cache, big data mining, social-aware device-to-device, cognitive radio, software defined network, and physical layer security for C-RANs are discussed, and the progress of testbed development and trial test is introduced as well.", "title": "" }, { "docid": "af105dd5dca0642d119ca20661d5f633", "text": "This paper derives the forward and inverse kinematics of a humanoid robot. The specific humanoid that the derivation is for is a robot with 27 degrees of freedom but the procedure can be easily applied to other similar humanoid platforms. First, the forward and inverse kinematics are derived for the arms and legs. Then, the kinematics for the torso and the head are solved. Finally, the forward and inverse kinematic solutions for the whole body are derived using the kinematics of arms, legs, torso, and head.", "title": "" }, { "docid": "e682f1b64d6eae69252ea2298f035ac6", "text": "Objective\nPatient notes in electronic health records (EHRs) may contain critical information for medical investigations. However, the vast majority of medical investigators can only access de-identified notes, in order to protect the confidentiality of patients. In the United States, the Health Insurance Portability and Accountability Act (HIPAA) defines 18 types of protected health information that needs to be removed to de-identify patient notes. 
Manual de-identification is impractical given the size of electronic health record databases, the limited number of researchers with access to non-de-identified notes, and the frequent mistakes of human annotators. A reliable automated de-identification system would consequently be of high value.\n\n\nMaterials and Methods\nWe introduce the first de-identification system based on artificial neural networks (ANNs), which requires no handcrafted features or rules, unlike existing systems. We compare the performance of the system with state-of-the-art systems on two datasets: the i2b2 2014 de-identification challenge dataset, which is the largest publicly available de-identification dataset, and the MIMIC de-identification dataset, which we assembled and is twice as large as the i2b2 2014 dataset.\n\n\nResults\nOur ANN model outperforms the state-of-the-art systems. It yields an F1-score of 97.85 on the i2b2 2014 dataset, with a recall of 97.38 and a precision of 98.32, and an F1-score of 99.23 on the MIMIC de-identification dataset, with a recall of 99.25 and a precision of 99.21.\n\n\nConclusion\nOur findings support the use of ANNs for de-identification of patient notes, as they show better performance than previously published systems while requiring no manual feature engineering.", "title": "" }, { "docid": "5404f89c379ffc79de345414baf1e084", "text": "OBJECTIVES\nTo describe pelvic organ prolapse surgical success rates using a variety of definitions with differing requirements for anatomic, symptomatic, or re-treatment outcomes.\n\n\nMETHODS\nEighteen different surgical success definitions were evaluated in participants who underwent abdominal sacrocolpopexy within the Colpopexy and Urinary Reduction Efforts trial. The participants' assessments of overall improvement and rating of treatment success were compared between surgical success and failure for each of the definitions studied. The Wilcoxon rank sum test was used to identify significant differences in outcomes between success and failure.\n\n\nRESULTS\nTreatment success varied widely depending on definition used (19.2-97.2%). Approximately 71% of the participants considered their surgery \"very successful,\" and 85.2% considered themselves \"much better\" than before surgery. Definitions of success requiring all anatomic support to be proximal to the hymen had the lowest treatment success (19.2-57.6%). Approximately 94% achieved surgical success when it was defined as the absence of prolapse beyond the hymen. Subjective cure (absence of bulge symptoms) occurred in 92.1% while absence of re-treatment occurred in 97.2% of participants. Subjective cure was associated with significant improvements in the patient's assessment of both treatment success and overall improvement, more so than any other definition considered (P<.001 and <.001, respectively). Similarly, the greatest difference in symptom burden and health-related quality of life as measured by the Pelvic Organ Prolapse Distress Inventory and Pelvic Organ Prolapse Impact Questionnaire scores between treatment successes and failures was noted when success was defined as subjective cure (P<.001).\n\n\nCONCLUSION\nThe definition of success substantially affects treatment success rates after pelvic organ prolapse surgery. 
The absence of vaginal bulge symptoms postoperatively has a significant relationship with a patient's assessment of overall improvement, while anatomic success alone does not.\n\n\nLEVEL OF EVIDENCE\nII.", "title": "" }, { "docid": "328052245c3a5144c492e761e7f51bae", "text": "The screening of novel materials with good performance and the modelling of quantitative structureactivity relationships (QSARs), among other issues, are hot topics in the field of materials science. Traditional experiments and computational modelling often consume tremendous time and resources and are limited by their experimental conditions and theoretical foundations. Thus, it is imperative to develop a new method of accelerating the discovery and design process for novel materials. Recently, materials discovery and design using machine learning have been receiving increasing attention and have achieved great improvements in both time efficiency and prediction accuracy. In this review, we first outline the typical mode of and basic procedures for applying machine learning in materials science, and we classify and compare the main algorithms. Then, the current research status is reviewed with regard to applications of machine learning in material property prediction, in new materials discovery and for other purposes. Finally, we discuss problems related to machine learning in materials science, propose possible solutions, and forecast potential directions of future research. By directly combining computational studies with experiments, we hope to provide insight into the parameters that affect the properties of materials, thereby enabling more efficient and target-oriented research on materials dis-", "title": "" } ]
scidocsrr
9a1fa0b7b8c2aef8ca0f36c7d5b5bc72
Insights into deep neural networks for speaker recognition
[ { "docid": "cd733cb756884a21cfcc9143e425f0f6", "text": "We propose a novel framework for speaker recognition in which extraction of sufficient statistics for the state-of-the-art i-vector model is driven by a deep neural network (DNN) trained for automatic speech recognition (ASR). Specifically, the DNN replaces the standard Gaussian mixture model (GMM) to produce frame alignments. The use of an ASR-DNN system in the speaker recognition pipeline is attractive as it integrates the information from speech content directly into the statistics, allowing the standard backends to remain unchanged. Improvement from the proposed framework compared to a state-of-the-art system are of 30% relative at the equal error rate when evaluated on the telephone conditions from the 2012 NIST speaker recognition evaluation (SRE). The proposed framework is a successful way to efficiently leverage transcribed data for speaker recognition, thus opening up a wide spectrum of research directions.", "title": "" }, { "docid": "e64f1f11ed113ca91094ef36eaf794a7", "text": "We describe the neural-network training framework used in the Kaldi speech recognition toolkit, which is geared towards training DNNs with large amounts of training data using multiple GPU-equipped or multicore machines. In order to be as hardwareagnostic as possible, we needed a way to use multiple machines without generating excessive network traffic. Our method is to average the neural network parameters periodically (typically every minute or two), and redistribute the averaged parameters to the machines for further training. Each machine sees different data. By itself, this method does not work very well. However, we have another method, an approximate and efficient implementation of Natural Gradient for Stochastic Gradient Descent (NG-SGD), which seems to allow our periodic-averaging method to work well, as well as substantially improving the convergence of SGD on a single machine.", "title": "" } ]
[ { "docid": "c14eca26d1dc76a5e533583a56e4bd5d", "text": "In restorative dentistry, the non-vital tooth and its restoration have been extensively studied from both its structural and esthetic aspects. The restoration of endodontically treated teeth has much in common with modern implantology: both must include multifaceted biological, biomechanical and esthetic considerations with a profound understanding of materials and techniques; both are technique sensitive and both require a multidisciplinary approach. And for both, two fundamental principles from team sports apply well: firstly, the weakest link determines the limits, and secondly, it is a very long way to the top, but a very short way to failure. Nevertheless, there is one major difference: if the tooth fails, there is the option of the implant, but if the implant fails, there is only another implant or nothing. The aim of this essay is to try to answer some clinically relevant conceptual questions and to give some clinical guidelines regarding the reconstructive aspects, based on scientific evidence and clinical expertise.", "title": "" }, { "docid": "c4d1d0d636e23c377473fe631022bef1", "text": "Electronic concept mapping tools provide a flexible vehicle for constructing concept maps, linking concept maps to other concept maps and related resources, and distributing concept maps to others. As electronic concept maps are constructed, it is often helpful for users to consult additional resources, in order to jog their memories or to locate resources to link to the map under construction. The World Wide Web provides a rich range of resources for these tasks—if the right resources can be found. This paper presents ongoing research on how to automatically generate Web queries from concept maps under construction, in order to proactively suggest related information to aid concept mapping. First, it examines how concept map structure and content can be exploited to automatically select terms to include in initial queries, based on studies of (1) how concept map structure influences human judgments of concept importance, and (2) the relative value of including information from concept labels and linking phrases. Second, it examines how a concept map can be used to refine future queries by reinforcing the weights of terms that have proven to be good discriminators for the topic of the concept map. The described methods are being applied to developing “intelligent suggesters” to support the concept mapping process.", "title": "" }, { "docid": "5a7b68c341e20d5d788e46c089cfd855", "text": "This study aims at investigating alcoholic inpatients' attachment system by combining a measurement of adult attachment style (AAQ, Hazan and Shaver, 1987. Journal of Personality and Social Psychology, 52(3): 511-524) and the degree of alexithymia (BVAQ, Bermond and Vorst, 1998. Bermond-Vorst Alexithymia Questionnaire, Unpublished data). Data were collected from 101 patients (71 men, 30 women) admitted to a psychiatric hospital in Belgium for alcohol use-related problems, between September 2003 and December 2004. To investigate the research question, cluster analyses and regression analyses are performed. We found that it makes sense to distinguish three subgroups of alcoholic inpatients with different degrees of impairment of the attachment system. 
Our results also reveal a pattern of correspondence between the severity of psychiatric symptoms-personality disorder traits (ADP-IV), anxiety (STAI), and depression (BDI-II-Nl)-and the severity of the attachment system's impairment. Limitations of the study and suggestions for further research are highlighted and implications for diagnosis and treatment are discussed.", "title": "" }, { "docid": "e85b761664a01273a10819566699bf4f", "text": "Julius Bernstein belonged to the Berlin school of “organic physicists” who played a prominent role in creating modern physiology and biophysics during the second half of the nineteenth century. He trained under du Bois-Reymond in Berlin, worked with von Helmholtz in Heidelberg, and finally became Professor of Physiology at the University of Halle. Nowadays his name is primarily associated with two discoveries: (1) The first accurate description of the action potential in 1868. He developed a new instrument, a differential rheotome (= current slicer) that allowed him to resolve the exact time course of electrical activity in nerve and muscle and to measure its conduction velocity. (2) His ‘Membrane Theory of Electrical Potentials’ in biological cells and tissues. This theory, published by Bernstein in 1902, provided the first plausible physico-chemical model of bioelectric events; its fundamental concepts remain valid to this day. Bernstein pursued an intense and long-range program of research in which he achieved a new level of precision and refinement by formulating quantitative theories supported by exact measurements. The innovative design and application of his electromechanical instruments were milestones in the development of biomedical engineering techniques. His seminal work prepared the ground for hypotheses and experiments on the conduction of the nervous impulse and ultimately the transmission of information in the nervous system. Shortly after his retirement, Bernstein (1912) summarized his electrophysiological work and extended his theoretical concepts in a book Elektrobiologie that became a classic in its field. The Bernstein Centers for Computational Neuroscience recently established at several universities in Germany were named to honor the person and his work.", "title": "" }, { "docid": "78d00cb1af094c91cc7877ba051f925e", "text": "Neuropathic pain refers to pain that originates from pathology of the nervous system. Diabetes, infection (herpes zoster), nerve compression, nerve trauma, \"channelopathies,\" and autoimmune disease are examples of diseases that may cause neuropathic pain. The development of both animal models and newer pharmacological strategies has led to an explosion of interest in the underlying mechanisms. Neuropathic pain reflects both peripheral and central sensitization mechanisms. Abnormal signals arise not only from injured axons but also from the intact nociceptors that share the innervation territory of the injured nerve. This review focuses on how both human studies and animal models are helping to elucidate the mechanisms underlying these surprisingly common disorders. The rapid gain in knowledge about abnormal signaling promises breakthroughs in the treatment of these often debilitating disorders.", "title": "" }, { "docid": "30e47a275e7e00f80c8f12061575ee82", "text": "Spliddit is a first-of-its-kind fair division website, which offers provably fair solutions for the division of rent, goods, and credit. 
In this note, we discuss Spliddit's goals, methods, and implementation.", "title": "" }, { "docid": "3a5d43d86d39966aca2d93d1cf66b13d", "text": "In the current context of increased surveillance and security, more sophisticated and robust surveillance systems are needed. One idea relies on the use of pairs of video (visible spectrum) and thermal infrared (IR) cameras located around premises of interest. To automate the system, a robust person detection algorithm and the development of an efficient technique enabling the fusion of the information provided by the two sensors becomes necessary and these are described in this chapter. Recently, multi-sensor based image fusion system is a challenging task and fundamental to several modern day image processing applications, such as security systems, defence applications, and intelligent machines. Image fusion techniques have been actively investigated and have wide application in various fields. It is often a vital pre-processing procedure to many computer vision and image processing tasks which are dependent on the acquisition of imaging data via sensors, such as IR and visible. One such task is that of human detection. To detect humans with an artificial system is difficult for a number of reasons as shown in Figure 1 (Gavrila, 2001). The main challenge for a vision-based pedestrian detector is the high degree of variability with the human appearance due to articulated motion, body size, partial occlusion, inconsistent cloth texture, highly cluttered backgrounds and changing lighting conditions.", "title": "" }, { "docid": "6a1fa32d9a716b57a321561dfce83879", "text": "Most successful computational approaches for protein function prediction integrate multiple genomics and proteomics data sources to make inferences about the function of unknown proteins. The most accurate of these algorithms have long running times, making them unsuitable for real-time protein function prediction in large genomes. As a result, the predictions of these algorithms are stored in static databases that can easily become outdated. We propose a new algorithm, GeneMANIA, that is as accurate as the leading methods, while capable of predicting protein function in real-time. We use a fast heuristic algorithm, derived from ridge regression, to integrate multiple functional association networks and predict gene function from a single process-specific network using label propagation. Our algorithm is efficient enough to be deployed on a modern webserver and is as accurate as, or more so than, the leading methods on the MouseFunc I benchmark and a new yeast function prediction benchmark; it is robust to redundant and irrelevant data and requires, on average, less than ten seconds of computation time on tasks from these benchmarks. GeneMANIA is fast enough to predict gene function on-the-fly while achieving state-of-the-art accuracy. A prototype version of a GeneMANIA-based webserver is available at http://morrislab.med.utoronto.ca/prototype .", "title": "" }, { "docid": "9d3e0a8af748c9addf598a27f414e0b2", "text": "Although insecticide resistance is a widespread problem for most insect pests, frequently the assessment of resistance occurs over a limited geographic range. Herein, we report the first widespread survey of insecticide resistance in the USA ever undertaken for the house fly, Musca domestica, a major pest in animal production facilities. 
The levels of resistance to six different insecticides were determined (using discriminating concentration bioassays) in 10 collections of house flies from dairies in nine different states. In addition, the frequencies of Vssc and CYP6D1 alleles that confer resistance to pyrethroid insecticides were determined for each fly population. Levels of resistance to the six insecticides varied among states and insecticides. Resistance to permethrin was highest overall and most consistent across the states. Resistance to methomyl was relatively consistent, with 65-91% survival in nine of the ten collections. In contrast, resistance to cyfluthrin and pyrethrins + piperonyl butoxide varied considerably (2.9-76% survival). Resistance to imidacloprid was overall modest and showed no signs of increasing relative to collections made in 2004, despite increasing use of this insecticide. The frequency of Vssc alleles that confer pyrethroid resistance was variable between locations. The highest frequencies of kdr, kdr-his and super-kdr were found in Minnesota, North Carolina and Kansas, respectively. In contrast, the New Mexico population had the highest frequency (0.67) of the susceptible allele. The implications of these results to resistance management and to the understanding of the evolution of insecticide resistance are discussed.", "title": "" }, { "docid": "5064d758b361171310ac31c323aa734b", "text": "The problem of assessing the significance of data mining results on high-dimensional 0-1 data sets has been studied extensively in the literature. For problems such as mining frequent sets and finding correlations, significance testing can be done by, e.g., chi-square tests, or many other methods. However, the results of such tests depend only on the specific attributes and not on the dataset as a whole. Moreover, the tests are more difficult to apply to sets of patterns or other complex results of data mining. In this paper, we consider a simple randomization technique that deals with this shortcoming. The approach consists of producing random datasets that have the same row and column margins with the given dataset, computing the results of interest on the randomized instances, and comparing them against the results on the actual data. This randomization technique can be used to assess the results of many different types of data mining algorithms, such as frequent sets, clustering, and rankings. To generate random datasets with given margins, we use variations of a Markov chain approach, which is based on a simple swap operation. We give theoretical results on the efficiency of different randomization methods, and apply the swap randomization method to several well-known datasets. Our results indicate that for some datasets the structure discovered by the data mining algorithms is a random artifact, while for other datasets the discovered structure conveys meaningful information.", "title": "" }, { "docid": "ffbab4b090448de06ff5237d43c5e293", "text": "Motivated by a project to create a system for people who are deaf or hard-of-hearing that would use automatic speech recognition (ASR) to produce real-time text captions of spoken English during in-person meetings with hearing individuals, we have augmented a transcript of the Switchboard conversational dialogue corpus with an overlay of word-importance annotations, with a numeric score for each word, to indicate its importance to the meaning of each dialogue turn. 
Further, we demonstrate the utility of this corpus by training an automatic word importance labeling model; our best performing model has an F-score of 0.60 in an ordinal 6-class word-importance classification task with an agreement (concordance correlation coefficient) of 0.839 with the human annotators (agreement score between annotators is 0.89). Finally, we discuss our intended future applications of this resource, particularly for the task of evaluating ASR performance, i.e. creating metrics that predict ASR-output caption text usability for DHH users better than Word Error Rate (WER).", "title": "" }, { "docid": "471db984564becfea70fb2946ef4871e", "text": "We propose a novel group regularization which we call exclusive lasso. Unlike the group lasso regularizer that assumes covarying variables in groups, the proposed exclusive lasso regularizer models the scenario when variables in the same group compete with each other. Analysis is presented to illustrate the properties of the proposed regularizer. We present a framework of kernel based multi-task feature selection algorithm based on the proposed exclusive lasso regularizer. An efficient algorithm is derived to solve the related optimization problem. Experiments with document categorization show that our approach outperforms state-of-theart algorithms for multi-task feature selection.", "title": "" }, { "docid": "9cdc0646b8c057ead7000ec14736fc12", "text": "This paper presents a multilayer aperture coupled microstrip antenna with a non symmetric U-shaped feed line. The antenna structure consists of a rectangular patch which is excited through two slots on the ground plane. A parametric study is presented on the effects of the position and dimensions of the slots. Results show that the antenna has VSWR < 2 from 2.6 GHz to 5.4 GHz (70%) and the gain of the structure is more than 7 dB from 2.7 GHz to 4.4 GHz (48%).", "title": "" }, { "docid": "f3f70e5ba87399e9d44bda293a231399", "text": "During natural disasters or crises, users on social media tend to easily believe contents of postings related to the events, and retweet the postings with hoping them to be reached to many other users. Unfortunately, there are malicious users who understand the tendency and post misinformation such as spam and fake messages with expecting wider propagation. To resolve the problem, in this paper we conduct a case study of 2013 Moore Tornado and Hurricane Sandy. Concretely, we (i) understand behaviors of these malicious users, (ii) analyze properties of spam, fake and legitimate messages, (iii) propose flat and hierarchical classification approaches, and (iv) detect both fake and spam messages with even distinguishing between them. Our experimental results show that our proposed approaches identify spam and fake messages with 96.43% accuracy and 0.961 F-measure.", "title": "" }, { "docid": "10ef865d0c70369d64c900fb46a1399d", "text": "This work introduces a set of scalable algorithms to identify patterns of human daily behaviors. These patterns are extracted from multivariate temporal data that have been collected from smartphones. We have exploited sensors that are available on these devices, and have identified frequent behavioral patterns with a temporal granularity, which has been inspired by the way individuals segment time into events. These patterns are helpful to both end-users and third parties who provide services based on this information. 
We have demonstrated our approach on two real-world datasets and showed that our pattern identification algorithms are scalable. This scalability makes analysis on resource constrained and small devices such as smartwatches feasible. Traditional data analysis systems are usually operated in a remote system outside the device. This is largely due to the lack of scalability originating from software and hardware restrictions of mobile/wearable devices. By analyzing the data on the device, the user has the control over the data, i.e., privacy, and the network costs will also be removed.", "title": "" }, { "docid": "c5f0155b2f6ce35a9cbfa38773042833", "text": "Leishmaniasis is caused by protozoa of the genus Leishmania, with the presentation restricted to the mucosa being infrequent. Although the nasal mucosa is the main site affected in this form of the disease, it is also possible the involvement of the lips, mouth, pharynx and larynx. The lesions are characteristically ulcerative-vegetative, with granulation tissue formation. Patients usually complain of pain, dysphagia and odynophagia. Differential diagnosis should include cancer, infectious diseases and granulomatous diseases. We present a case of a 64-year-old male patient, coming from an endemic area for American Tegumentary Leishmaniasis (ATL), with a chief complaint of persistent dysphagia and nasal obstruction for 6 months. The lesion was ulcerative with a purulent infiltration into the soft palate and uvula. After excluding other diseases, ATL was suggested as a hypothesis, having been requested serology and biopsy of the lesions. Was started the treatment with pentavalent antimony and the patient presented regression of the lesions in 30 days, with no other complications.", "title": "" }, { "docid": "362c41e8f90c097160c7785e8b4c9053", "text": "This paper focuses on biomimetic design in the field of technical textiles / smart fabrics. Biologically inspired design is a very promising approach that has provided many elegant solutions. Firstly, a few bio-inspired innovations are presented, followed the introduction of trans-disciplinary research as a useful tool for defining the design problem and giving solutions. Furthermore, the required methods for identifying and applying biological analogies are analysed. Finally, the bio-mimetic approach is questioned and the difficulties, limitations and errors that a designer might face when adopting it are discussed. Researchers and product developers that use this approach should also problematize on the role of biomimetic design: is it a practice that redirects us towards a new model of sustainable development or is it just another tool for generating product ideas in order to increase a company’s competitiveness in the global market? Author", "title": "" }, { "docid": "98e392ace28d496dafd83ec962ce00af", "text": "Continuous-time Markov chains (CTMCs) have been widely used to determine system performance and dependability characteristics. Their analysis most often concerns the computation of steady-state and transient-state probabilities. This paper introduces a branching temporal logic for expressing real-time probabilistic properties on CTMCs and presents approximate model checking algorithms for this logic. The logic, an extension of the continuous stochastic logic CSL of Aziz et al., contains a time-bounded until operator to express probabilistic timing properties over paths as well as an operator to express steady-state probabilities. 
We show that the model checking problem for this logic reduces to a system of linear equations (for unbounded until and the steady-state operator) and a Volterra integral equation system (for time-bounded until). We then show that the problem of model-checking timebounded until properties can be reduced to the problem of computing transient state probabilities for CTMCs. This allows the verification of probabilistic timing properties by efficient techniques for transient analysis for CTMCs such as uniformization. Finally, we show that a variant of lumping equivalence (bisimulation), a well-known notion for aggregating CTMCs, preserves the validity of all formulas in the logic.", "title": "" }, { "docid": "0512987d091d29681eb8ba38a1079cff", "text": "Deep convolutional neural networks (CNNs) have shown excellent performance in object recognition tasks and dense classification problems such as semantic segmentation. However, training deep neural networks on large and sparse datasets is still challenging and can require large amounts of computation and memory. In this work, we address the task of performing semantic segmentation on large data sets, such as three-dimensional medical images. We propose an adaptive sampling scheme that uses a-posterior error maps, generated throughout training, to focus sampling on difficult regions, resulting in improved learning. Our contribution is threefold: 1) We give a detailed description of the proposed sampling algorithm to speed up and improve learning performance on large images. 2) We propose a deep dual path CNN that captures information at fine and coarse scales, resulting in a network with a large field of view and high resolution outputs. 3) We show that our method is able to attain new state-of-the-art results on the VISCERAL Anatomy benchmark.", "title": "" } ]
scidocsrr
b2686fb00b3264a78e511ea71d26b947
Prenatal developmental origins of behavior and mental health: The influence of maternal stress in pregnancy
[ { "docid": "8980bdf92581e8a0816364362fec409b", "text": "OBJECTIVE\nPrenatal exposure to inappropriate levels of glucocorticoids (GCs) and maternal stress are putative mechanisms for the fetal programming of later health outcomes. The current investigation examined the influence of prenatal maternal cortisol and maternal psychosocial stress on infant physiological and behavioral responses to stress.\n\n\nMETHODS\nThe study sample comprised 116 women and their full term infants. Maternal plasma cortisol and report of stress, anxiety and depression were assessed at 15, 19, 25, 31 and 36 + weeks' gestational age. Infant cortisol and behavioral responses to the painful stress of a heel-stick blood draw were evaluated at 24 hours after birth. The association between prenatal maternal measures and infant cortisol and behavioral stress responses was examined using hierarchical linear growth curve modeling.\n\n\nRESULTS\nA larger infant cortisol response to the heel-stick procedure was associated with exposure to elevated concentrations of maternal cortisol during the late second and third trimesters. Additionally, a slower rate of behavioral recovery from the painful stress of a heel-stick blood draw was predicted by elevated levels of maternal cortisol early in pregnancy as well as prenatal maternal psychosocial stress throughout gestation. These associations could not be explained by mode of delivery, prenatal medical history, socioeconomic status or child race, sex or birth order.\n\n\nCONCLUSIONS\nThese data suggest that exposure to maternal cortisol and psychosocial stress exerts programming influences on the developing fetus with consequences for infant stress regulation.", "title": "" } ]
[ { "docid": "b3ea5290cad741aa7c3da97ab1c24ccd", "text": "Methods of alloplastic forehead augmentation using soft expanded polytetrafluoroethylene (ePTFE) and silicone implants are described. Soft ePTFE forehead implantation has the advantage of being technically simpler, with better fixation. The disadvantages are a limited degree of forehead augmentation and higher chance of infection. Properly fabricated soft silicone implants provide potential for larger degree of forehead silhouette augmentation with less risk of infection. The corrugated edge and central perforations of the implant minimize mobility and capsule contraction.", "title": "" }, { "docid": "b120095067684a67fe3327d18860e760", "text": "We present a flexible method for fusing information from optical and range sensors based on an accelerated high-dimensional filtering approach. Our system takes as input a sequence of monocular camera images as well as a stream of sparse range measurements as obtained from a laser or other sensor system. In contrast with existing approaches, we do not assume that the depth and color data streams have the same data rates or that the observed scene is fully static. Our method produces a dense, high-resolution depth map of the scene, automatically generating confidence values for every interpolated depth point. We describe how to integrate priors on object motion and appearance and how to achieve an efficient implementation using parallel processing hardware such as GPUs.", "title": "" }, { "docid": "dae877409dca88fc6fed5cf6536e65ad", "text": "My 1971 Turing Award Lecture was entitled \"Generality in Artificial Intelligence.\" The topic turned out to have been overambitious in that I discovered I was unable to put my thoughts on the subject in a satisfactory written form at that time. It would have been better to have reviewed my previous work rather than attempt something new, but such was not my custom at that time.\nI am grateful to ACM for the opportunity to try again. Unfortunately for our science, although perhaps fortunately for this project, the problem of generality in artificial intelligence (AI) is almost as unsolved as ever, although we now have many ideas not available in 1971. This paper relies heavily on such ideas, but it is far from a full 1987 survey of approaches for achieving generality. Ideas are therefore discussed at a length proportional to my familiarity with them rather than according to some objective criterion.\nIt was obvious in 1971 and even in 1958 that AI programs suffered from a lack of generality. It is still obvious; there are many more details. The first gross symptom is that a small addition to the idea of a program often involves a complete rewrite beginning with the data structures. Some progress has been made in modularizing data structures, but small modifications of the search strategies are even less likely to be accomplished without rewriting.\nAnother symptom is no one knows how to make a general database of commonsense knowledge that could be used by any program that needed the knowledge. Along with other information, such a database would contain what a robot would need to know about the effects of moving objects around, what a person can be expected to know about his family, and the facts about buying and selling. This does not depend on whether the knowledge is to be expressed in a logical language or in some other formalism. 
When we take the logic approach to AI, lack of generality shows up in that the axioms we devise to express commonsense knowledge are too restricted in their applicability for a general commonsense database. In my opinion, getting a language for expressing general commonsense knowledge for inclusion in a general database is the key problem of generality in AI.\nHere are some ideas for achieving generality proposed both before and after 1971. I repeat my disclaimer of comprehensiveness.", "title": "" }, { "docid": "7abdd1fc5f2a8c5b7b19a6a30eadad0a", "text": "This Paper investigate action recognition by using Extreme Gradient Boosting (XGBoost). XGBoost is a supervised classification technique using an ensemble of decision trees. In this study, we also compare the performance of Xboost using another machine learning techniques Support Vector Machine (SVM) and Naive Bayes (NB). The experimental study on the human action dataset shows that XGBoost better as compared to SVM and NB in classification accuracy. Although takes more computational time the XGBoost performs good classification on action recognition.", "title": "" }, { "docid": "3a8be402f75af666076f441c124ac911", "text": "This paper presents a large and systematic body of data on the relative effectiveness of mutation, crossover, and combinations of mutation and crossover in genetic programming (GP). The literature of traditional genetic algorithms contains related studies, but mutation and crossover in GP differ from their traditional counterparts in significant ways. In this paper we present the results from a very large experimental data set, the equivalent of approximately 12,000 typical runs of a GP system, systematically exploring a range of parameter settings. The resulting data may be useful not only for practitioners seeking to optimize parameters for GP runs, but also for theorists exploring issues such as the role of “building blocks” in GP.", "title": "" }, { "docid": "f23ff5a1275911d47459fa9304b4cf7f", "text": "The input to a neural sequence-tosequence model is often determined by an up-stream system, e.g. a word segmenter, part of speech tagger, or speech recognizer. These up-stream models are potentially error-prone. Representing inputs through word lattices allows making this uncertainty explicit by capturing alternative sequences and their posterior probabilities in a compact form. In this work, we extend the TreeLSTM (Tai et al., 2015) into a LatticeLSTM that is able to consume word lattices, and can be used as encoder in an attentional encoderdecoder model. We integrate lattice posterior scores into this architecture by extending the TreeLSTM’s child-sum and forget gates and introducing a bias term into the attention mechanism. We experiment with speech translation lattices and report consistent improvements over baselines that translate either the 1-best hypothesis or the lattice without posterior scores.", "title": "" }, { "docid": "9737e400108f6327be17d23db07b2e75", "text": "While recent deep monocular depth estimation approaches based on supervised regression have achieved remarkable performance, costly ground truth annotations are required during training. To cope with this issue, in this paper we present a novel unsupervised deep learning approach for predicting depth maps and show that the depth estimation task can be effectively tackled within an adversarial learning framework. Specifically, we propose a deep generative network that learns to predict the correspondence field (i.e. 
the disparity map) between two image views in a calibrated stereo camera setting. The proposed architecture consists of two generative sub-networks jointly trained with adversarial learning for reconstructing the disparity map and organized in a cycle such as to provide mutual constraints and supervision to each other. Extensive experiments on the publicly available datasets KITTI and Cityscapes demonstrate the effectiveness of the proposed model and competitive results with state of the art methods. The code is available at https://github.com/andrea-pilzer/unsup-stereo-depthGAN", "title": "" }, { "docid": "5519eea017d8f69804060f5e40748b1a", "text": "The nonlinear Fourier transform is a transmission and signal processing technique that makes positive use of the Kerr nonlinearity in optical fibre channels. I will overview recent advances and some of challenges in this field.", "title": "" }, { "docid": "69624d1ab7b438d5ff4b5192f492a11a", "text": "1. SLICED PROGRAMMABLE NETWORKS OpenFlow [4] has been demonstrated as a way for researchers to run networking experiments in their production network. Last year, we demonstrated how an OpenFlow controller running on NOX [3] could move VMs seamlessly around an OpenFlow network [1]. While OpenFlow has potential [2] to open control of the network, only one researcher can innovate on the network at a time. What is required is a way to divide, or slice, network resources so that researchers and network administrators can use them in parallel. Network slicing implies that actions in one slice do not negatively affect other slices, even if they share the same underlying physical hardware. A common network slicing technique is VLANs. With VLANs, the administrator partitions the network by switch port and all traffic is mapped to a VLAN by input port or explicit tag. This coarse-grained type of network slicing complicates more interesting experiments such as IP mobility or wireless handover. Here, we demonstrate FlowVisor, a special purpose OpenFlow controller that allows multiple researchers to run experiments safely and independently on the same production OpenFlow network. To motivate FlowVisor’s flexibility, we demonstrate four network slices running in parallel: one slice for the production network and three slices running experimental code (Figure 1). Our demonstration runs on real network hardware deployed on our production network at Stanford and a wide-area test-bed with a mix of wired and wireless technologies.", "title": "" }, { "docid": "25226432d192bf7192cf6d8dbee3cab7", "text": "According to the distributional inclusion hypothesis, entailment between words can be measured via the feature inclusions of their distributional vectors. In recent work, we showed how this hypothesis can be extended from words to phrases and sentences in the setting of compositional distributional semantics. This paper focuses on inclusion properties of tensors; its main contribution is a theoretical and experimental analysis of how feature inclusion works in different concrete models of verb tensors. We present results for relational, Frobenius, projective, and holistic methods and compare them to the simple vector addition, multiplication, min, and max models. The degrees of entailment thus obtained are evaluated via a variety of existing wordbased measures, such as Weed’s and Clarke’s, KL-divergence, APinc, balAPinc, and two of our previously proposed metrics at the phrase/sentence level. 
We perform experiments on three entailment datasets, investigating which version of tensor-based composition achieves the highest performance when combined with the sentence-level measures.", "title": "" }, { "docid": "af45d1bbdcbd94bbe5ae2cc0936f3650", "text": "Rationale: The imidazopyridine hypnotic zolpidem may produce less memory and cognitive impairment than classic benzodiazepines, due to its relatively low binding affinity for the benzodiazepine receptor subtypes found in areas of the brain which are involved in learning and memory. Objectives: The study was designed to compare the acute effects of single oral doses of zolpidem (5, 10, 20 mg/70 kg) and the benzodiazepine hypnotic triazolam (0.125, 0.25, and 0.5 mg/70 kg) on specific memory and attentional processes. Methods: Drug effects on memory for target (i.e., focal) information and contextual information (i.e., peripheral details surrounding a target stimulus presentation) were evaluated using a source monitoring paradigm, and drug effects on selective attention mechanisms were evaluated using a negative priming paradigm, in 18 healthy volunteers in a double-blind, placebo-controlled, crossover design. Results: Triazolam and zolpidem produced strikingly similar dose-related effects on memory for target information. Both triazolam and zolpidem impaired subjects’ ability to remember whether a word stimulus had been presented to them on the computer screen or whether they had been asked to generate the stimulus based on an antonym cue (memory for the origin of a stimulus, which is one type of contextual information). The results suggested that triazolam, but not zolpidem, impaired memory for the screen location of picture stimuli (spatial contextual information). Although both triazolam and zolpidem increased overall reaction time in the negative priming task, only triazolam increased the magnitude of negative priming relative to placebo. Conclusions: The observed differences between triazolam and zolpidem have implications for the cognitive and pharmacological mechanisms underlying drug-induced deficits in specific memory and attentional processes, as well for the cognitive and brain mechanisms underlying these processes.", "title": "" }, { "docid": "2c2dee4689e48f1a7c0061ac7d60a16b", "text": "Transfer learning algorithms are used when one has sufficient training data for one supervised learning task (the source task) but only very limited training data for a second task (the target task) that is similar but not identical to the first. These algorithms use varying assumptions about the similarity between the tasks to carry information from the source to the target task. Common assumptions are that only certain specific marginal or conditional distributions have changed while all else remains the same. Moreover, not much work on transfer learning has considered the case when a few labels in the test domain are available. Alternatively, if one has only the target task, but also has the ability to choose a limited amount of additional training data to collect, then active learning algorithms are used to make choices which will most improve performance on the target task. These algorithms may be combined into active transfer learning, but previous efforts have had to apply the two methods in sequence or use restrictive transfer assumptions. This thesis focuses on active transfer learning under the model shift assumption. 
We start by proposing two transfer learning algorithms that allow changes in all marginal and conditional distributions but assume the changes are smooth in order to achieve transfer between the tasks. We then propose an active learning algorithm for the second method that yields a combined active transfer learning algorithm. By analyzing the risk bounds for the proposed transfer learning algorithms, we show that when the conditional distribution changes, we are able to obtain a generalization error bound of O(1/(λ∗√nl)) with respect to the labeled target sample size nl, modified by the smoothness of the change (λ∗) across domains. Our analysis also sheds light on conditions when transfer learning works better than no-transfer learning (learning by labeled target data only). Furthermore, we consider a general case where both the support and the model change across domains. We transform both X (features) and Y (labels) by a parameterized-location-scale shift to achieve transfer between tasks. On the other hand, multi-task learning attempts to simultaneously leverage data from multiple domains in order to estimate related functions on each domain. Similar to transfer learning, multi-task problems are also solved by imposing some kind of “smooth” relationship among/between tasks. We study how different smoothness assumptions on task relations affect the upper bounds of algorithms proposed for these problems under different settings. Finally, we propose methods to predict the entire distribution P(Y) and P(Y|X) by transfer, while allowing both marginal and conditional distributions to change. Moreover, we extend this framework to multi-source distribution transfer. We demonstrate the effectiveness of our methods on both synthetic examples and real-world applications, including yield estimation on the grape image dataset, predicting air-quality from Weibo posts for cities, predicting whether a robot successfully climbs over an obstacle, examination score prediction for schools, and location prediction for taxis. Acknowledgments First and foremost, I would like to express my sincere gratitude to my advisor Jeff Schneider, who has been the biggest help during my whole PhD life. His brilliant insights have helped me formulate the problems of this thesis, brainstorm on new ideas and exciting algorithms. I have learnt many things about research from him, including how to organize ideas in a paper, how to design experiments, and how to give a good academic talk. This thesis would not have been possible without his guidance, advice, patience and encouragement. I would like to thank my thesis committee members Christos Faloutsos, Geoff Gordon and Jerry Zhu for providing great insights and feedbacks on my thesis. Christos has been very nice and he always finds time to talk to me even if he is very busy. Geoff has provided great insights on extending my work to classification and helped me clarified many notations/descriptions in my thesis. Jerry has been very helpful in extending my work on the text data and providing me the air quality dataset. I feel very fortunate to have them as my committee members. I am very grateful to many of the faculty members at Carnegie Mellon.
Eric Xing’s Machine Learning course has been my introduction course for Machine Learning at Carnegie Mellon and it has taught me a lot about the foundations of machine learning, including all the inspiring machine learning algorithms and the theories behind them. Larry Wasserman’s Intermediate Statistics and Statistical Machine Learning are both wonderful courses and have been keys to my understanding of the statistical perspective of many machine learning algorithms. Geoff Gordon and Ryan Tibshirani’s Convex Optimization course has been a great tutorial for me to develop all the efficient optimizing techniques for the algorithms I have proposed. Further I want to thank all my colleagues and friends at Carnegie Mellon, especially people from the Auton Lab and the Computer Science Department at CMU. I would like to thank Dougal Sutherland, Yifei Ma, Junier Oliva, Tzu-Kuo Huang for insightful discussions and advices for my research. I would also like to thank all my friends who have provided great support and help during my stay at Carnegie Mellon, and to name a few, Nan Li, Junchen Jiang, Guangyu Xia, Zi Yang, Yixin Luo, Lei Li, Lin Xiao, Liu Liu, Yi Zhang, Liang Xiong, Ligia Nistor, Kirthevasan Kandasamy, Madalina Fiterau, Donghan Wang, Yuandong Tian, Brian Coltin. I would also like to thank Prof. Alon Halevy, who has been a great mentor during my summer internship at google research and also has been a great help in my job searching process. Finally I would like to thank my family, my parents Sisi and Tiangui, for their unconditional love, endless support, and unwavering faith in me. I truly thank them for shaping who I am, for teaching me to be a person who would never lose hope and give up.", "title": "" }, { "docid": "7c3457a5ca761b501054e76965b41327", "text": "Background learning is a pre-processing of motion detection which is a basis step of video analysis. For the static background, many previous works have already achieved good performance. However, the results on learning dynamic background are still much to be improved. To address this challenge, in this paper, a novel and practical method is proposed based on deep auto-encoder networks. Firstly, dynamic background images are extracted through a deep auto-encoder network (called Background Extraction Network) from video frames containing motion objects. Then, a dynamic background model is learned by another deep auto-encoder network (called Background Learning Network) using the extracted background images as the input. To be more flexible, our background model can be updated on-line to absorb more training samples. Our main contributions are 1) a cascade of two deep auto-encoder networks which can deal with the separation of dynamic background and foregrounds very efficiently; 2) a method of online learning is adopted to accelerate the training of Background Extraction Network. Compared with previous algorithms, our approach obtains the best performance over six benchmark data sets. Especially, the experiments show that our algorithm can handle large variation background very well.", "title": "" }, { "docid": "463c1df3306820f92be1566c03a2b0f9", "text": "Precision and planning are key to reconstructive surgery. Augmented reality (AR) can bring the information within preoperative computed tomography angiography (CTA) imaging to life, allowing the surgeon to 'see through' the patient's skin and appreciate the underlying anatomy without making a single incision. 
This work has demonstrated that AR can assist the accurate identification, dissection and execution of vascular pedunculated flaps during reconstructive surgery. Separate volumes of osseous, vascular, skin, soft tissue structures and relevant vascular perforators were delineated from preoperative CTA scans to generate three-dimensional images using two complementary segmentation software packages. These were converted to polygonal models and rendered by means of a custom application within the HoloLens™ stereo head-mounted display. Intraoperatively, the models were registered manually to their respective subjects by the operating surgeon using a combination of tracked hand gestures and voice commands; AR was used to aid navigation and accurate dissection. Identification of the subsurface location of vascular perforators through AR overlay was compared to the positions obtained by audible Doppler ultrasound. Through a preliminary HoloLens-assisted case series, the operating surgeon was able to demonstrate precise and efficient localisation of perforating vessels.", "title": "" }, { "docid": "ff67f2bbf20f5ad2bef6641e8e7e3deb", "text": "An observation one can make when reviewing the literature on physical activity is that health-enhancing exercise habits tend to wear off as soon as individuals enter adolescence. Therefore, exercise habits should be promoted and preserved early in life. This article focuses on the formation of physical exercise habits. First, the literature on motivational determinants of habitual exercise and related behaviours is discussed, and the concept of habit is further explored. Based on this literature, a theoretical model of exercise habit formation is proposed. More specifically, expanding on the idea that habits are the result of automated cognitive processes, it is argued that physical exercise habits are capable of being automatically activated by the situational features that normally precede these behaviours. These habits may enhance health as a result of consistent performance over a long period of time. Subsequently, obstacles to the formation of exercise habits are discussed and interventions that may anticipate these obstacles are presented. Finally, implications for theory and practice are briefly discussed.", "title": "" }, { "docid": "62773348cf1d2cda966ec63f62f93efb", "text": "In 2003, psychology professor and sex researcher J. Michael Bailey published a book entitled The Man Who Would Be Queen: The Science of Gender-Bending and Transsexualism. The book's portrayal of male-to-female (MTF) transsexualism, based on a theory developed by sexologist Ray Blanchard, outraged some transgender activists. They believed the book to be typical of much of the biomedical literature on transsexuality-oppressive in both tone and claims, insulting to their senses of self, and damaging to their public identities. Some saw the book as especially dangerous because it claimed to be based on rigorous science, was published by an imprint of the National Academy of Sciences, and argued that MTF sex changes are motivated primarily by erotic interests and not by the problem of having the gender identity common to one sex in the body of the other. Dissatisfied with the option of merely criticizing the book, a small number of transwomen (particularly Lynn Conway, Andrea James, and Deirdre McCloskey) worked to try to ruin Bailey. Using published and unpublished sources as well as original interviews, this essay traces the history of the backlash against Bailey and his book. 
It also provides a thorough exegesis of the book's treatment of transsexuality and includes a comprehensive investigation of the merit of the charges made against Bailey that he had behaved unethically, immorally, and illegally in the production of his book. The essay closes with an epilogue that explores what has happened since 2003 to the central ideas and major players in the controversy.", "title": "" }, { "docid": "4e2c4b8fccda7f8c9ca7ffb6ced1ae5a", "text": "Fog/edge computing, function as a service, and programmable infrastructures, like software-defined networking or network function virtualisation, are becoming ubiquitously used in modern Information Technology infrastructures. These technologies change the characteristics and capabilities of the underlying computational substrate where services run (e.g. higher volatility, scarcer computational power, or programmability). As a consequence, the nature of the services that can be run on them changes too (smaller codebases, more fragmented state, etc.). These changes bring new requirements for service orchestrators, which need to evolve so as to support new scenarios where a close interaction between service and infrastructure becomes essential to deliver a seamless user experience. Here, we present the challenges brought forward by this new breed of technologies and where current orchestration techniques stand with regards to the new challenges. We also present a set of promising technologies that can help tame this brave new world.", "title": "" }, { "docid": "981cbb9140570a6a6f3d4f4f49cd3654", "text": "OBJECTIVES\nThe study sought to evaluate clinical outcomes in clinical practice with rhythm control versus rate control strategy for management of atrial fibrillation (AF).\n\n\nBACKGROUND\nRandomized trials have not demonstrated significant differences in stroke, heart failure, or mortality between rhythm and rate control strategies. The comparative outcomes in contemporary clinical practice are not well described.\n\n\nMETHODS\nPatients managed with a rhythm control strategy targeting maintenance of sinus rhythm were retrospectively compared with a strategy of rate control alone in a AF registry across various U.S. practice settings. Unadjusted and adjusted (inverse-propensity weighted) outcomes were estimated.\n\n\nRESULTS\nThe overall study population (N = 6,988) had a median of 74 (65 to 81) years of age, 56% were males, 77% had first detected or paroxysmal AF, and 68% had CHADS2 score ≥2. In unadjusted analyses, rhythm control was associated with lower all-cause death, cardiovascular death, first stroke/non-central nervous system systemic embolization/transient ischemic attack, or first major bleeding event (all p < 0.05); no difference in new onset heart failure (p = 0.28); and more frequent cardiovascular hospitalizations (p = 0.0006). There was no difference in the incidence of pacemaker, defibrillator, or cardiac resynchronization device implantations (p = 0.99). 
In adjusted analyses, there were no statistical differences in clinical outcomes between rhythm control and rate control treated patients (all p > 0.05); however, rhythm control was associated with more cardiovascular hospitalizations (hazard ratio: 1.24; 95% confidence interval: 1.10 to 1.39; p = 0.0003).\n\n\nCONCLUSIONS\nAmong patients with AF, rhythm control was not superior to rate control strategy for outcomes of stroke, heart failure, or mortality, but was associated with more cardiovascular hospitalizations.", "title": "" }, { "docid": "bb404a57964fcd5500006e039ba2b0dd", "text": "The needs of the child are paramount. The clinician’s first task is to diagnose the cause of symptoms and signs whether accidental, inflicted or the result of an underlying medical condition. Where abuse is diagnosed the task is to safeguard the child and treat the physical and psychological effects of maltreatment. A child is one who has not yet reached his or her 18th birthday. Child abuse is any action by another person that causes significant harm to a child or fails to meet a basic need. It involves acts of both commission and omission with effects on the child’s physical, developmental, and psychosocial well-being. The vast majority of carers from whatever walk of life, love, nurture and protect their children. A very few, in a momentary loss of control in an otherwise caring parent, cause much regretted injury. An even smaller number repeatedly maltreat their children in what becomes a pattern of abuse. One parent may harm, the other may fail to protect by omitting to seek help. Child abuse whether physical or psychological is unlawful.", "title": "" } ]
scidocsrr
289d04efc3d8f5819adf2c0de3e10913
An $X$-Band Lumped-Element Wilkinson Combiner With Embedded Impedance Transformation
[ { "docid": "e9c52fb24425bff6ed514de6b92e8ba2", "text": "This paper proposes a ultra compact Wilkinson power combiner (WPC) incorporating synthetic transmission lines at K-band in CMOS technology. The 50 % improvement on the size reduction can be achieved by increasing the slow-wave factor of synthetic transmission line. The presented Wilkinson power combiner design is analyzed and fabricated by using standard 0.18 µm 1P6M CMOS technology. The prototype has only a chip size of 480 µm × 90 µm, corresponding to 0.0002λ02 at 21.5 GHz. The measured insertion losses and return losses are less and higher than 4 dB and 17.5 dB from 16 GHz to 27 GHz, respectively. Furthermore, the proposed WPC is also integrated into the phase shifter to confirm its feasibility. The prototype of phase shifter shows 15 % size reduction and on-wafer measurements show good linearity of full 360-degree phase shifting from 21 GHz to 27 GHz.", "title": "" } ]
[ { "docid": "d9870dc31895226f60537b3e8591f9fd", "text": "This paper reports on the design of a low phase noise 76.8 MHz AlN-on-silicon reference oscillator using SiO2 as temperature compensation material. The paper presents profound theoretical optimization of all the important parameters for AlN-on-silicon width extensional mode resonators, filling into the knowledge gap targeting the tens of megahertz frequency range for this type of resonators. Low loading CMOS cross coupled series resonance oscillator is used to reach the-state-of-the-art LTE phase noise specifications. Phase noise of 123 dBc/Hz at 1 kHz, and 162 dBc/Hz at 1 MHz offset is achieved. The oscillator's integrated root mean square RMS jitter is 106 fs (10 kHz to 20 MHz), consuming 850 μA, with startup time of 250 μs, and a figure-of-merit FOM of 216 dB. This work offers a platform for high performance MEMS reference oscillators; where, it shows the applicability of replacing bulky quartz with MEMS resonators in cellular platforms. & 2015 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "5a583f5b67ceb7c59da2cef8201880df", "text": "This article presents two designs of power amplifiers to be used with piezo-electric actuators in diesel injectors. The topologies as well as the controller approach and implementation are discussed.", "title": "" }, { "docid": "deeb21277f4cdb637a44941794e03359", "text": "This paper introduces methods to compute impulse responses without specification and estimation of the underlying multivariate dynamic system. The central idea consists in estimating local projections at each period of interest rather than extrapolating into increasingly distant horizons from a given model, as it is done with vector autoregressions (VAR). The advantages of local projections are numerous: (1) they can be estimated by simple regression techniques with standard regression packages; (2) they are more robust to misspecification; (3) joint or point-wise analytic inference is simple; and (4) they easily accommodate experimentation with highly non-linear and flexible specifications that may be impractical in a multivariate context. Therefore, these methods are a natural alternative to estimating impulse responses from VARs. Monte Carlo evidence and an application to a simple, closed-economy, new-Keynesian model clarify these numerous advantages. •", "title": "" }, { "docid": "b324860905b6d8c4b4a8429d53f2543d", "text": "MicroRNAs (miRNAs) are endogenous approximately 22 nt RNAs that can play important regulatory roles in animals and plants by targeting mRNAs for cleavage or translational repression. Although they escaped notice until relatively recently, miRNAs comprise one of the more abundant classes of gene regulatory molecules in multicellular organisms and likely influence the output of many protein-coding genes.", "title": "" }, { "docid": "163d7e9a00649b3a6036507f6a725af8", "text": "In the last decades, a lot of 3D face recognition techniques have been proposed. They can be divided into three parts, holistic matching techniques, feature-based techniques and hybrid techniques. In this paper, a hybrid technique is used, where, a prototype of a new hybrid face recognition technique depends on 3D face scan images are designed, simulated and implemented. Some geometric rules are used for analyzing and mapping the face. 
Image processing is used to get the twodimensional values of predetermined and specific facial points, software programming is used to perform a three-dimensional coordinates of the predetermined points and to calculate several geometric parameter ratios and relations. Neural network technique is used for processing the calculated geometric parameters and then performing facial recognition. The new design is not affected by variant pose, illumination and expression and has high accurate level compared with the 2D analysis. Moreover, the proposed algorithm is of higher performance than latest’s published biometric recognition algorithms in terms of cost, confidentiality of results, and availability of design tools.", "title": "" }, { "docid": "ea544ffc7eeee772388541d0d01812a7", "text": "Despite the fact that MRI has evolved to become the standard method for diagnosis and monitoring of patients with brain tumours, conventional MRI sequences have two key limitations: the inability to show the full extent of the tumour and the inability to differentiate neoplastic tissue from nonspecific, treatment-related changes after surgery, radiotherapy, chemotherapy or immunotherapy. In the past decade, PET involving the use of radiolabelled amino acids has developed into an important diagnostic tool to overcome some of the shortcomings of conventional MRI. The Response Assessment in Neuro-Oncology working group — an international effort to develop new standardized response criteria for clinical trials in brain tumours — has recommended the additional use of amino acid PET imaging for brain tumour management. Concurrently, a number of advanced MRI techniques such as magnetic resonance spectroscopic imaging and perfusion weighted imaging are under clinical evaluation to target the same diagnostic problems. This Review summarizes the clinical role of amino acid PET in relation to advanced MRI techniques for differential diagnosis of brain tumours; delineation of tumour extent for treatment planning and biopsy guidance; post-treatment differentiation between tumour progression or recurrence versus treatment-related changes; and monitoring response to therapy. An outlook for future developments in PET and MRI techniques is also presented.", "title": "" }, { "docid": "0e4d0ecdc46b05c916b782a0594acd63", "text": "iii Acknowledgements iv Chapter", "title": "" }, { "docid": "72d863c7e323cd9b3ab4368a51743319", "text": "STUDY DESIGN\nThis study is a retrospective review of the initial enrollment data from a prospective multicentered study of adult spinal deformity.\n\n\nOBJECTIVES\nThe purpose of this study is to correlate radiographic measures of deformity with patient-based outcome measures in adult scoliosis.\n\n\nSUMMARY OF BACKGROUND DATA\nPrior studies of adult scoliosis have attempted to correlate radiographic appearance and clinical symptoms, but it has proven difficult to predict health status based on radiographic measures of deformity alone. The ability to correlate radiographic measures of deformity with symptoms would be useful for decision-making and surgical planning.\n\n\nMETHODS\nThe study correlates radiographic measures of deformity with scores on the Short Form-12, Scoliosis Research Society-29, and Oswestry profiles. Radiographic evaluation was performed according to an established positioning protocol for anteroposterior and lateral 36-inch standing radiographs. 
Radiographic parameters studied were curve type, curve location, curve magnitude, coronal balance, sagittal balance, apical rotation, and rotatory subluxation.\n\n\nRESULTS\nThe 298 patients studied include 172 with no prior surgery and 126 who had undergone prior spine fusion. Positive sagittal balance was the most reliable predictor of clinical symptoms in both patient groups. Thoracolumbar and lumbar curves generated less favorable scores than thoracic curves in both patient groups. Significant coronal imbalance of greater than 4 cm was associated with deterioration in pain and function scores for unoperated patients but not in patients with previous surgery.\n\n\nCONCLUSIONS\nThis study suggests that restoration of a more normal sagittal balance is the critical goal for any reconstructive spine surgery. The study suggests that magnitude of coronal deformity and extent of coronal correction are less critical parameters.", "title": "" }, { "docid": "8c8e9332a29edb7417ad47b045bf9de7", "text": "Knowledge and lessons from past accidental exposures in radiotherapy are very helpful in finding safety provisions to prevent recurrence. Disseminating lessons is necessary but not sufficient. There may be additional latent risks for other accidental exposures, which have not been reported or have not occurred, but are possible and may occur in the future if not identified, analyzed, and prevented by safety provisions. Proactive methods are available for anticipating and quantifying risk from potential event sequences. In this work, proactive methods, successfully used in industry, have been adapted and used in radiotherapy. Risk matrix is a tool that can be used in individual hospitals to classify event sequences in levels of risk. As with any anticipative method, the risk matrix involves a systematic search for potential risks; that is, any situation that can cause an accidental exposure. The method contributes new insights: The application of the risk matrix approach has identified that another group of less catastrophic but still severe single-patient events may have a higher probability, resulting in higher risk. The use of the risk matrix approach for safety assessment in individual hospitals would provide an opportunity for self-evaluation and managing the safety measures that are most suitable to the hospital's own conditions.", "title": "" }, { "docid": "3355c37593ee9ef1b2ab29823ca8c1d4", "text": "The paper overviews the 11th evaluation campaign organized by the IWSLT workshop. The 2014 evaluation offered multiple tracks on lecture transcription and translation based on the TED Talks corpus. In particular, this year IWSLT included three automatic speech recognition tracks, on English, German and Italian, five speech translation tracks, from English to French, English to German, German to English, English to Italian, and Italian to English, and five text translation track, also from English to French, English to German, German to English, English to Italian, and Italian to English. In addition to the official tracks, speech and text translation optional tracks were offered, globally involving 12 other languages: Arabic, Spanish, Portuguese (B), Hebrew, Chinese, Polish, Persian, Slovenian, Turkish, Dutch, Romanian, Russian. Overall, 21 teams participated in the evaluation, for a total of 76 primary runs submitted. Participants were also asked to submit runs on the 2013 test set (progress test set), in order to measure the progress of systems with respect to the previous year. 
All runs were evaluated with objective metrics, and submissions for two of the official text translation tracks were also evaluated with human post-editing.", "title": "" }, { "docid": "bf28cac251558f59aab6b49a373a8fba", "text": "Digital game play is becoming increasingly prevalent. Its participant-players number in the millions and its revenues are in billions of dollars. As they grow in popularity, digital games are also growing in complexity, depth and sophistication. This paper presents reasons why games and game play matter to the future of education. Drawing upon these works, the potential for instruction in digital games is recognised. Previous works in the area were also analysed with respect to their theoretical findings. We then propose a framework for digital Game-based Learning approach for adoption in education setting.", "title": "" }, { "docid": "4028f1eb3f14297fea30ae43fdf7fbb6", "text": "The optimisation of a tail-sitter UAV (Unmanned Aerial Vehicle) that uses a stall-tumble manoeuvre to transition from vertical to horizontal flight and a pull-up manoeuvre to regain the vertical is investigated. The tandem wing vehicle is controlled in the hover and vertical flight phases by prop-wash over wing mounted control surfaces. It represents an innovative and potentially simple solution to the dual requirements of VTOL (Vertical Take-off and Landing) and high speed forward flight by obviating the need for complex mechanical systems such as rotor heads or tilt-rotor systems.", "title": "" }, { "docid": "cb641fc639b86abadec4f85efc226c14", "text": "The modernization of the US electric power infrastructure, especially in lieu of its aging, overstressed networks; shifts in social, energy and environmental policies, and also new vulnerabilities, is a national concern. Our system are required to be more adaptive and secure more than every before. Consumers are also demanding increased power quality and reliability of supply and delivery. As such, power industries, government and national laboratories and consortia have developed increased interest in what is now called the Smart Grid of the future. The paper outlines Smart Grid intelligent functions that advance interactions of agents such as telecommunication, control, and optimization to achieve adaptability, self-healing, efficiency and reliability of power systems. The author also presents a special case for the development of Dynamic Stochastic Optimal Power Flow (DSOPF) technology as a tool needed in Smart Grid design. The integration of DSOPF to achieve the design goals with advanced DMS capabilities are discussed herein. This reference paper also outlines research focus for developing next generation of advance tools for efficient and flexible power systems operation and control.", "title": "" }, { "docid": "22a5aa4b9cbafa3cf63b6cf4aff60ba3", "text": "characteristics, burnout, and (other-ratings of) performance (N 146). We hypothesized that job demands (e.g., work pressure and emotional demands) would be the most important antecedents of the exhaustion component of burnout, which, in turn, would predict in-role performance (hypothesis 1). In contrast, job resources (e.g., autonomy and social support) were hypothesized to be the most important predictors of extra-role performance, through their relationship with the disengagement component of burnout (hypothesis 2). 
In addition, we predicted that job resources would buffer the relationship between job demands and exhaustion (hypothesis 3), and that exhaustion would be positively related to disengagement (hypothesis 4). The results of structural equation modeling analyses provided strong support for hypotheses 1, 2, and 4, but rejected hypothesis 3. These findings support the JD-R model’s claim that job demands and job resources initiate two psychological processes, which eventually affect organizational outcomes. © 2004 Wiley Periodicals, Inc.", "title": "" }, { "docid": "850a7daa56011e6c53b5f2f3e33d4c49", "text": "Multi-objective evolutionary algorithms (MOEAs) have achieved great progress in recent decades, but most of them are designed to solve unconstrained multi-objective optimization problems. In fact, many real-world multi-objective problems usually contain a number of constraints. To promote the research of constrained multi-objective optimization, we first propose three primary types of difficulty, which reflect the challenges in the real-world optimization problems, to characterize the constraint functions in CMOPs, including feasibility-hardness, convergencehardness and diversity-hardness. We then develop a general toolkit to construct difficulty adjustable and scalable constrained multi-objective optimization problems (CMOPs) with three types of parameterized constraint functions according to the proposed three primary types of difficulty. In fact, combination of the three primary constraint functions with different parameters can lead to construct a large variety of CMOPs, whose difficulty can be uniquely defined by a triplet with each of its parameter specifying the level of each primary difficulty type respectively. Furthermore, the number of objectives in this toolkit are able to scale to more than two. Based on this toolkit, we suggest nine difficulty adjustable and scalable CMOPs named DAS-CMOP1-9. To evaluate the proposed test problems, two popular CMOEAs MOEA/D-CDP and NSGA-II-CDP are adopted to test their performances on DAS-CMOP1-9 with different difficulty triplets. The experiment results demonstrate that none of them can solve these problems efficiently, which stimulate us to develop new constrained MOEAs to solve the suggested DAS-CMOPs.", "title": "" }, { "docid": "19a28d8bbb1f09c56f5c85be003a9586", "text": "ABSTRACT: Five questionnaires for assessing the usability of a website were compared in a study with 123 participants. The questionnaires studied were SUS, QUIS, CSUQ, a variant of Microsoft’s Product Reaction Cards, and one that we have used in our Usability Lab for several years. Each participant performed two tasks on each of two websites: finance.yahoo.com and kiplinger.com. All five questionnaires revealed that one site was significantly preferred over the other. The data were analyzed to determine what the results would have been at different sample sizes from 6 to 14. At a sample size of 6, only 30-40% of the samples would have identified that one of the sites was significantly preferred. Most of the data reach an apparent asymptote at a sample size of 12, where two of the questionnaires (SUS and CSUQ) yielded the same conclusion as the full dataset at least 90% of the time.", "title": "" }, { "docid": "c6954957e6629a32f9845df15c60be85", "text": "Some mathematical and natural objects (a random sequence, a sequence of zeros, a perfect crystal, a gas) are intuitively trivial, while others (e.g. 
the human body, the digits of π) contain internal evidence of a nontrivial causal history. We formalize this distinction by defining an object’s “logical depth” as the time required by a standard universal Turing machine to generate it from an input that is algorithmically random (i.e. Martin-Löf random). This definition of depth is shown to be reasonably machineindependent, as well as obeying a slow-growth law: deep objects cannot be quickly produced from shallow ones by any deterministic process, nor with much probability by a probabilistic process, but can be produced slowly. Next we apply depth to the physical problem of “self-organization,” inquiring in particular under what conditions (e.g. noise, irreversibility, spatial and other symmetries of the initial conditions and equations of motion) statistical-mechanical model systems can imitate computers well enough to undergo unbounded increase of depth in the limit of infinite space and time.", "title": "" }, { "docid": "1e3585a27b6373685544dc392140a4fb", "text": "When operating in partially-known environments, autonomous vehicles must constantly update their maps and plans based on new sensor information. Much focus has been placed on developing efficient incremental planning algorithms that are able to efficiently replan when the map and associated cost function changes. However, much less attention has been placed on efficiently updating the cost function used by these planners, which can represent a significant portion of the time spent replanning. In this paper, we present the Limited Incremental Distance Transform algorithm, which can be used to efficiently update the cost function used for planning when changes in the environment are observed. Using this algorithm it is possible to plan paths in a completely incremental way starting from a list of changed obstacle classifications. We present results comparing the algorithm to the Euclidean distance transform and a mask-based incremental distance transform algorithm. Computation time is reduced by an order of magnitude for a UAV application. We also provide example results from an autonomous micro aerial vehicle with on-board sensing and computing.", "title": "" }, { "docid": "7182c5b1fac4a4d0d43a15c1feb28be1", "text": "This paper provides an objective evaluation of the performance impacts of binary XML encodings, using a fast stream-based XQuery processor as our representative application. Instead of proposing one binary format and comparing it against standard XML parsers, we investigate the individual effects of several binary encoding techniques that are shared by many proposals. Our goal is to provide a deeper understanding of the performance impacts of binary XML encodings in order to clarify the ongoing and often contentious debate over their merits, particularly in the domain of high performance XML stream processing.", "title": "" } ]
scidocsrr
4b2afadf68808bec3edbb2144ea1b547
AGIL: Learning Attention from Human for Visuomotor Tasks
[ { "docid": "825b567c1a08d769aa334b707176f607", "text": "A critical function in both machine vision and biological vision systems is attentional selection of scene regions worthy of further analysis by higher-level processes such as object recognition. Here we present the first model of spatial attention that (1) can be applied to arbitrary static and dynamic image sequences with interactive tasks and (2) combines a general computational implementation of both bottom-up (BU) saliency and dynamic top-down (TD) task relevance; the claimed novelty lies in the combination of these elements and in the fully computational nature of the model. The BU component computes a saliency map from 12 low-level multi-scale visual features. The TD component computes a low-level signature of the entire image, and learns to associate different classes of signatures with the different gaze patterns recorded from human subjects performing a task of interest. We measured the ability of this model to predict the eye movements of people playing contemporary video games. We found that the TD model alone predicts where humans look about twice as well as does the BU model alone; in addition, a combined BU*TD model performs significantly better than either individual component. Qualitatively, the combined model predicts some easy-to-describe but hard-to-compute aspects of attentional selection, such as shifting attention leftward when approaching a left turn along a racing track. Thus, our study demonstrates the advantages of integrating BU factors derived from a saliency map and TD factors learned from image and task contexts in predicting where humans look while performing complex visually-guided behavior.", "title": "" }, { "docid": "24880289ca2b6c31810d28c8363473b3", "text": "Deep reinforcement learning (RL) has achieved several high profile successes in difficult decision-making problems. However, these algorithms typically require a huge amount of data before they reach reasonable performance. In fact, their performance during learning can be extremely poor. This may be acceptable for a simulator, but it severely limits the applicability of deep RL to many real-world tasks, where the agent must learn in the real environment. In this paper we study a setting where the agent may access data from previous control of the system. We present an algorithm, Deep Q-learning from Demonstrations (DQfD), that leverages small sets of demonstration data to massively accelerate the learning process even from relatively small amounts of demonstration data and is able to automatically assess the necessary ratio of demonstration data while learning thanks to a prioritized replay mechanism. DQfD works by combining temporal difference updates with supervised classification of the demonstrator’s actions. We show that DQfD has better initial performance than Prioritized Dueling Double Deep Q-Networks (PDD DQN) as it starts with better scores on the first million steps on 41 of 42 games and on average it takes PDD DQN 83 million steps to catch up to DQfD’s performance. DQfD learns to out-perform the best demonstration given in 14 of 42 games. In addition, DQfD leverages human demonstrations to achieve state-of-the-art results for 11 games. Finally, we show that DQfD performs better than three related algorithms for incorporating demonstration data into DQN.", "title": "" } ]
[ { "docid": "715d63ebb1316f7c35fd98871297b7d9", "text": "1. Associate Professor of Oncology of the State University of Ceará; Clinical Director of the Cancer Hospital of Ceará 2. Resident in Urology of Urology Department of the Federal University of Ceará 3. Associate Professor of Urology of the State University of Ceará; Assistant of the Division of Uro-Oncology, Cancer Hospital of Ceará 4. Professor of Urology Department of the Federal University of Ceará; Chief of Division of Uro-Oncology, Cancer Hospital of Ceará", "title": "" }, { "docid": "771611dc99e22b054b936fce49aea7fc", "text": "Count-based exploration algorithms are known to perform near-optimally when used in conjunction with tabular reinforcement learning (RL) methods for solving small discrete Markov decision processes (MDPs). It is generally thought that count-based methods cannot be applied in high-dimensional state spaces, since most states will only occur once. Recent deep RL exploration strategies are able to deal with high-dimensional continuous state spaces through complex heuristics, often relying on optimism in the face of uncertainty or intrinsic motivation. In this work, we describe a surprising finding: a simple generalization of the classic count-based approach can reach near state-of-the-art performance on various highdimensional and/or continuous deep RL benchmarks. States are mapped to hash codes, which allows to count their occurrences with a hash table. These counts are then used to compute a reward bonus according to the classic count-based exploration theory. We find that simple hash functions can achieve surprisingly good results on many challenging tasks. Furthermore, we show that a domaindependent learned hash code may further improve these results. Detailed analysis reveals important aspects of a good hash function: 1) having appropriate granularity and 2) encoding information relevant to solving the MDP. This exploration strategy achieves near state-of-the-art performance on both continuous control tasks and Atari 2600 games, hence providing a simple yet powerful baseline for solving MDPs that require considerable exploration.", "title": "" }, { "docid": "3e66d3e2674bdaa00787259ac99c3f68", "text": "Dempster-Shafer theory offers an alternative to traditional probabilistic theory for the mathematical representation of uncertainty. The significant innovation of this framework is that it allows for the allocation of a probability mass to sets or intervals. DempsterShafer theory does not require an assumption regarding the probability of the individual constituents of the set or interval. This is a potentially valuable tool for the evaluation of risk and reliability in engineering applications when it is not possible to obtain a precise measurement from experiments, or when knowledge is obtained from expert elicitation. An important aspect of this theory is the combination of evidence obtained from multiple sources and the modeling of conflict between them. This report surveys a number of possible combination rules for Dempster-Shafer structures and provides examples of the implementation of these rules for discrete and interval-valued data.", "title": "" }, { "docid": "b6f9d5015fddbf92ab44ae6ce2f7d613", "text": "Emojis are small images that are commonly included in social media text messages. The combination of visual and textual content in the same message builds up a modern way of communication, that automatic systems are not used to deal with. 
In this paper we extend recent advances in emoji prediction by putting forward a multimodal approach that is able to predict emojis in Instagram posts. Instagram posts are composed of pictures together with texts which sometimes include emojis. We show that these emojis can be predicted by using the text, but also using the picture. Our main finding is that incorporating the two synergistic modalities, in a combined model, improves accuracy in an emoji prediction task. This result demonstrates that these two modalities (text and images) encode different information on the use of emojis and therefore can complement each other.", "title": "" }, { "docid": "c2195ae053d1bbf712c96a442a911e31", "text": "This paper introduces a new method to solve the cross-domain recognition problem. Different from the traditional domain adaption methods which rely on a global domain shift for all classes between the source and target domains, the proposed method is more flexible to capture individual class variations across domains. By adopting a natural and widely used assumption that the data samples from the same class should lay on an intrinsic low-dimensional subspace, even if they come from different domains, the proposed method circumvents the limitation of the global domain shift, and solves the cross-domain recognition by finding the joint subspaces of the source and target domains. Specifically, given labeled samples in the source domain, we construct a subspace for each of the classes. Then we construct subspaces in the target domain, called anchor subspaces, by collecting unlabeled samples that are close to each other and are highly likely to belong to the same class. The corresponding class label is then assigned by minimizing a cost function which reflects the overlap and topological structure consistency between subspaces across the source and target domains, and within the anchor subspaces, respectively. We further combine the anchor subspaces to the corresponding source subspaces to construct the joint subspaces. Subsequently, one-versus-rest support vector machine classifiers are trained using the data samples belonging to the same joint subspaces and applied to unlabeled data in the target domain. We evaluate the proposed method on two widely used datasets: 1) object recognition dataset for computer vision tasks and 2) sentiment classification dataset for natural language processing tasks. Comparison results demonstrate that the proposed method outperforms the comparison methods on both datasets.", "title": "" }, { "docid": "a158bd5aaf6c1ea9ac2fcf5a77b24627", "text": "Adult knowledge of a language involves correctly balancing lexically-based and more language-general patterns. For example, verb argument structures may sometimes readily generalize to new verbs, yet with particular verbs may resist generalization. From the perspective of acquisition, this creates significant learnability problems, with some researchers claiming a crucial role for verb semantics in the determination of when generalization may and may not occur. Similarly, there has been debate regarding how verb-specific and more generalized constraints interact in sentence processing and on the role of semantics in this process. The current work explores these issues using artificial language learning. 
In three experiments using languages without semantic cues to verb distribution, we demonstrate that learners can acquire both verb-specific and verb-general patterns, based on distributional information in the linguistic input regarding each of the verbs as well as across the language as a whole. As with natural languages, these factors are shown to affect production, judgments and real-time processing. We demonstrate that learners apply a rational procedure in determining their usage of these different input statistics and conclude by suggesting that a Bayesian perspective on statistical learning may be an appropriate framework for capturing our findings.", "title": "" }, { "docid": "42c0f8504f26d46a4cc92d3c19eb900d", "text": "Research into suicide prevention has been hampered by methodological limitations such as low sample size and recall bias. Recently, Natural Language Processing (NLP) strategies have been used with Electronic Health Records to increase information extraction from free text notes as well as structured fields concerning suicidality and this allows access to much larger cohorts than previously possible. This paper presents two novel NLP approaches – a rule-based approach to classify the presence of suicide ideation and a hybrid machine learning and rule-based approach to identify suicide attempts in a psychiatric clinical database. Good performance of the two classifiers in the evaluation study suggest they can be used to accurately detect mentions of suicide ideation and attempt within free-text documents in this psychiatric database. The novelty of the two approaches lies in the malleability of each classifier if a need to refine performance, or meet alternate classification requirements arises. The algorithms can also be adapted to fit infrastructures of other clinical datasets given sufficient clinical recording practice knowledge, without dependency on medical codes or additional data extraction of known risk factors to predict suicidal behaviour.", "title": "" }, { "docid": "7440101e3a6ff726c5c7a40f83d25816", "text": "The polar format algorithm (PFA) for spotlight synthetic aperture radar (SAR) is based on a linear approximation for the differential range to a scatterer. We derive a second-order Taylor series approximation of the differential range. We provide a simple and concise derivation of both the far-field linear approximation of the differential range, which forms the basis of the PFA, and the corresponding approximation limits based on the second-order terms of the approximation.", "title": "" }, { "docid": "3d4afb9ed09fbb6200175e2440b56755", "text": "A brief account is given of the discovery of abscisic acid (ABA) in roots and root caps of higher plants as well as the techniques by which ABA may be demonstrated in these tissues. The remainder of the review is concerned with examining the rôle of ABA in the regulation of root growth. In this regard, it is well established that when ABA is supplied to roots their elongation is usually inhibited, although at low external concentrations a stimulation of growth may also be found. Fewer observations have been directed at exploring the connection between root growth and the level of naturally occurring, endogenous ABA. Nevertheless, the evidence here also suggests that ABA is an inhibitory regulator of root growth. Moreover, ABA appears to be involved in the differential growth that arises in response to a gravitational stimulus. 
Recent reports that deny a rôle for ABA in root gravitropism are considered inconclusive. The response of roots to osmotic stress and the changes in ABA levels which ensue, are summarised; so are the interrelations between ABA and other hormones, particularly auxin (e.g. indoleacetic acid); both are considered in the context of the root growth and development. Quantitative changes in auxin and ABA levels may together provide the root with a flexible means of regulating its growth.", "title": "" }, { "docid": "4d0b04f546ab5c0d79bb066b1431ff51", "text": "In this paper, we present an extraction and characterization methodology which allows for the determination, from S-parameter measurements, of the threshold voltage, the gain factor, and the mobility degradation factor, neither requiring data regressions involving multiple devices nor DC measurements. This methodology takes into account the substrate effects occurring in MOSFETs built in bulk technology so that physically meaningful parameters can be obtained. Furthermore, an analysis of the substrate impedance is presented, showing that this parasitic component not only degrades the performance of a microwave MOSFET, but may also lead to determining unrealistic values for the model parameters when not considered during a high-frequency characterization process. Measurements were made on transistors of different lengths, the shortest being 80 nm, in the 10 MHz to 40 GHz frequency range. 2010 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "2fea6378ac23711ffa492a4b9c7dac06", "text": "This paper proposes an acceleration-based robust controller for the motion control problem, i.e., position and force control problems, of a novel series elastic actuator (SEA). A variable stiffness SEA is designed by using soft and hard springs in series so as to relax the fundamental performance limitation of conventional SEAs. Although the proposed SEA intrinsically has several superiorities in force control, its motion control problem, especially position control problem, is harder than conventional stiff and SEAs due to its special mechanical structure. It is shown that the performance of the novel SEA is limited when conventional motion control methods are used. The performance of the steady-state response is significantly improved by using disturbance observer (DOb), i.e., improving the robustness; however, it degrades the transient response by increasing the vibration at tip point. The vibration of the novel SEA and external disturbances are suppressed by using resonance ratio control (RRC) and arm DOb, respectively. The proposed method can be used in the motion control problem of conventional SEAs as well. The intrinsically safe mechanical structure and high-performance motion control system provide several benefits in industrial applications, e.g., robots can perform dexterous and versatile industrial tasks alongside people in a factory setting. The experimental results show viability of the proposals.", "title": "" }, { "docid": "c95f7046c21eb185c2582a571ed7d6d4", "text": "In some people, problematic cell phone use can lead to situations in which they lose control, similar to those observed in other cases of addiction. Although different scales have been developed to assess its severity, we lack an instrument that is able to determine the desire or craving associated with it. Thus, with the objective of evaluating craving for cell phone use, in this study, we develop and present the Mobile Phone Addiction Craving Scale (MPACS). 
It consists of eight Likert-style items, with 10 response options, referring to possible situations in which the interviewee is asked to evaluate the degree of restlessness that he or she feels if the cell phone is unavailable at the moment. It can be self-administered or integrated in an interview when abuse or problems are suspected. With the existence of a single dimension, reflected in the exploratory factor analysis (EFA), the scale presents adequate reliability and internal consistency (α = 0.919). Simultaneously, we are able to show significantly increased correlations (r = 0.785, p = 0.000) with the Mobile Phone Problematic Use Scale (MPPUS) and state anxiety (r = 0.330, p = 0.000). We are also able to find associations with impulsivity, measured using the urgency, premeditation, perseverance, and sensation seeking scale, particularly in the dimensions of negative urgency (r = 0.303, p = 0.000) and positive urgency (r = 0.290, p = 0.000), which confirms its construct validity. The analysis of these results conveys important discriminant validity among the MPPUS user categories that are obtained using the criteria by Chow et al. (1). The MPACS demonstrates higher levels of craving in persons up to 35 years of age, reversing with age. In contrast, we do not find significant differences among the sexes. Finally, a receiver operating characteristic (ROC) analysis allows us to establish the scores from which we are able to determine the different levels of craving, from the absence of craving to that referred to as addiction. Based on these results, we can conclude that this scale is a reliable tool that complements ongoing studies on problematic cell phone use.", "title": "" }, { "docid": "b8d8785968023a38d742abc15c01ee28", "text": "Cryptocurrencies (or digital tokens, digital currencies, e.g., BTC, ETH, XRP, NEO) have been rapidly gaining ground in use, value, and understanding among the public, bringing astonishing profits to investors. Unlike other money and banking systems, most digital tokens do not require central authorities. Being decentralized poses significant challenges for credit rating. Most ICOs are currently not subject to government regulations, which makes a reliable credit rating system for ICO projects necessary and urgent. In this paper, we introduce ICORATING, the first learning–based cryptocurrency rating system. We exploit natural-language processing techniques to analyze various aspects of 2,251 digital currencies to date, such as white paper content, founding teams, Github repositories, websites, etc. Supervised learning models are used to correlate the life span and the price change of cryptocurrencies with these features. For the best setting, the proposed system is able to identify scam ICO projects with 0.83 precision. We hope this work will help investors identify scam ICOs and attract more efforts in automatically evaluating and analyzing ICO projects. 1 2 Author contributions: J. Li designed research; Z. Sun, Z. Deng, F. Li and P. Shi prepared the data; S. Bian and A. Yuan contributed analytic tools; P. Shi and Z. Deng labeled the dataset; J. Li, W. Monroe and W. Wang designed the experiments; J. Li, W. Wu, Z. Deng and T. Zhang performed the experiments; J. Li and T. Zhang wrote the paper; W. Monroe and A. Yuan proofread the paper. Author Contacts: Figure 1: Market capitalization v.s. time. Figure 2: The number of new ICO projects v.s. 
time.", "title": "" }, { "docid": "4b3d890a8891cd8c84713b1167383f6f", "text": "The present research tested the hypothesis that concepts of gratitude are prototypically organized and explored whether lay concepts of gratitude are broader than researchers' concepts of gratitude. In five studies, evidence was found that concepts of gratitude are indeed prototypically organized. In Study 1, participants listed features of gratitude. In Study 2, participants reliably rated the centrality of these features. In Studies 3a and 3b, participants perceived that a hypothetical other was experiencing more gratitude when they read a narrative containing central as opposed to peripheral features. In Study 4, participants remembered more central than peripheral features in gratitude narratives. In Study 5a, participants generated more central than peripheral features when they wrote narratives about a gratitude incident, and in Studies 5a and 5b, participants generated both more specific and more generalized types of gratitude in similar narratives. Throughout, evidence showed that lay conceptions of gratitude are broader than current research definitions.", "title": "" }, { "docid": "7a62e5e29b9450280391a95145216877", "text": "We propose a deep feed-forward neural network architecture for pixel-wise semantic scene labeling. It uses a novel recursive neural network architecture for context propagation, referred to as rCPN. It first maps the local visual features into a semantic space followed by a bottom-up aggregation of local information into a global representation of the entire image. Then a top-down propagation of the aggregated information takes place that enhances the contextual information of each local feature. Therefore, the information from every location in the image is propagated to every other location. Experimental results on Stanford background and SIFT Flow datasets show that the proposed method outperforms previous approaches. It is also orders of magnitude faster than previous methods and takes only 0.07 seconds on a GPU for pixel-wise labeling of a 256 x 256 image starting from raw RGB pixel values, given the super-pixel mask that takes an additional 0.3 seconds using an off-the-shelf implementation.", "title": "" }, { "docid": "4dc9360837b5793a7c322f5b549fdeb1", "text": "Today, event logs contain vast amounts of data that can easily overwhelm a human. Therefore, mining patterns from event logs is an important system management task. This paper presents a novel clustering algorithm for log file data sets which helps one to detect frequent patterns from log files, to build log file profiles, and to identify anomalous log file lines. Keywords—system monitoring, data mining, data clustering", "title": "" }, { "docid": "40d8c7f1d24ef74fa34be7e557dca920", "text": "the rapid changing Internet environment has formed a competitive business setting, which provides opportunities for conducting businesses online. Availability of online transaction systems enable users to buy and make payment for products and services using the Internet platform. Thus, customers’ involvements in online purchasing have become an important trend. However, since the market is comprised of many different people and cultures, with diverse viewpoints, e-commerce businesses are being challenged by the reality of complex behavior of consumers. Therefore, it is vital to identify the factors that affect consumers purchasing decision through e-commerce in respective cultures and societies. 
In response to this claim, the purpose of this study is to explore the factors affecting customers’ purchasing decision through e-commerce (online shopping). Several factors such as trust, satisfaction, return policy, cash on delivery, after sale service, cash back warranty, business reputation, social and individual attitude, are considered. At this stage, the factors mentioned above, which are commonly considered influencing purchasing decision through online shopping in literature, are hypothesized to measure the causal relationship within the framework.", "title": "" }, { "docid": "0048b244bd55a724f9bcf4dbf5e551a8", "text": "In the research reported here, we investigated the debiasing effect of mindfulness meditation on the sunk-cost bias. We conducted four studies (one correlational and three experimental); the results suggest that increased mindfulness reduces the tendency to allow unrecoverable prior costs to influence current decisions. Study 1 served as an initial correlational demonstration of the positive relationship between trait mindfulness and resistance to the sunk-cost bias. Studies 2a and 2b were laboratory experiments examining the effect of a mindfulness-meditation induction on increased resistance to the sunk-cost bias. In Study 3, we examined the mediating mechanisms of temporal focus and negative affect, and we found that the sunk-cost bias was attenuated by drawing one's temporal focus away from the future and past and by reducing state negative affect, both of which were accomplished through mindfulness meditation.", "title": "" }, { "docid": "eb7582d78766ce274ba899ad2219931f", "text": "BACKGROUND\nPrecise determination of breast volume facilitates reconstructive procedures and helps in the planning of tissue removal for breast reduction surgery. Various methods currently used to measure breast size are limited by technical drawbacks and unreliable volume determinations. The purpose of this study was to develop a formula to predict breast volume based on straightforward anthropomorphic measurements.\n\n\nMETHODS\nOne hundred one women participated in this study. Eleven anthropomorphic measurements were obtained on 202 breasts. Breast volumes were determined using a water displacement technique. Multiple stepwise linear regression was used to determine predictive variables and a unifying formula.\n\n\nRESULTS\nMean patient age was 37.7 years, with a mean body mass index of 31.8. Mean breast volumes on the right and left sides were 1328 and 1305 cc, respectively (range, 330 to 2600 cc). The final regression model incorporated the variables of breast base circumference in a standing position and a vertical measurement from the inframammary fold to a point representing the projection of the fold onto the anterior surface of the breast. The derived formula showed an adjusted R of 0.89, indicating that almost 90 percent of the variation in breast size was explained by the model.\n\n\nCONCLUSION\nSurgeons may find this formula a practical and relatively accurate method of determining breast volume.", "title": "" }, { "docid": "16cae1a2fe1c42b150b9bca8fd1a3289", "text": "Monte Carlo Tree Search (MCTS) has produced many recent breakthroughs in game AI research, particularly in computer Go. In this paper we consider how MCTS can be applied to create engaging AI for a popular commercial mobile phone game: Spades by AI Factory, which has been downloaded more than 2.5 million times. 
In particular, we show how MCTS can be integrated with knowledge-based methods to create an interesting, fun and strong player which makes far fewer plays that could be perceived by human observers as blunders than MCTS without the injection of knowledge. These blunders are particularly noticeable for Spades, where a human player must co-operate with an AI partner. MCTS gives objectively stronger play than the knowledge-based approach used in previous versions of the game and offers the flexibility to customise behaviour whilst maintaining a reusable core, with a reduced development cycle compared to purely knowledge-based techniques. Monte Carlo Tree Search (MCTS) is a family of game tree search algorithms that have advanced the state-of-theart in AI for a variety of challenging games, as surveyed in (Browne et al. 2012). Of particular note is the success of MCTS in the Chinese board game Go (Lee, Müller, and Teytaud 2010). MCTS has many appealing properties for decision making in games. It is an anytime algorithm that can effectively use whatever computation time is available. It also often performs well without any special knowledge or tuning for a particular game, although knowledge can be injected if desired to improve the AI’s strength or modify its playing style. These properties are attractive to a developer of a commercial game, where an AI that is perceived as high quality by players can be developed with significantly less effort than using purely knowledge-based AI methods. This paper presents findings from a collaboration between academic researchers and an independent game development company to integrate MCTS into a highly successful commercial version of the card game Spades for mobile devices running the Android operating system. Most previous work on MCTS uses win rate against a fixed AI opponent as the key metric of success. This is apCopyright c © 2013, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. propriate when the aim is to win tournaments or to demonstrate MCTS’s ability to approximate optimal play. However for a commercial game, actual win rate is less important than how engaging the AI is for the players. For example if the AI is generally strong but occasionally makes moves that appear weak to a competent player, then the player’s enjoyment of the game is diminished. This is particularly important for games such as Spades where the player must cooperate with an AI partner whose apparent errors result in losses for the human player. In this paper we combine MCTS with knowledge-based approaches with the goal of creating an AI player that is not only strong in objective terms but is also perceived as strong by players. AI Factory1 is an independent UK-based company, incorporated in April 2003. AI Factory has developed a successful implementation of the popular card game Spades, which to date has been downloaded more than 2.5 million times and has an average review score of 4.5/5 from more than 78 000 reviews on the Google Play store. The knowledge-based AI used in previous versions plays competitively and has been well reviewed by users. This AI was developed using expert knowledge of the game and contains a large number of heuristics developed and tested over a period of 10 years. Much of the decision making is governed by these heuristics which are used to decide bids, infer what cards other players may hold, predict what cards other players may be likely to play and to decide what card to play. 
In AI Factory Spades, players interact with two AI opponents and one AI partner. Players can select their partners and opponents from a number of AI characters, each with a strength rating from 1 to 5 stars. Gameplay data shows that relatively few players choose intermediate level opponents: occasional or beginning players tend to choose 1-star opponents, whereas those players who play the game most frequently play almost exclusively against 5-star opponents. Presumably these are experienced card game players seeking a challenge. However some have expressed disappointment with the 5-star AI: although strong overall, it occasionally makes apparently bad moves. Our work provides strong evidence for a belief commonly held amongst game developers: the objective measures of strength (such as win rate) often used in the academic study of AI do not nechttp://www.aifactory.co.uk essarily provide a good metric for quality from a commercial AI perspective. The moves chosen by the AI may or may not be suboptimal in a game theoretic sense, but it is clear from player feedback that humans apply some intuition about which moves are good or bad. It is an unsatisfying experience when the AI makes moves which violate this intuition, except possibly where violating this intuition is a correct play, but even then this appears to lead to player dissatisfaction. The primary motivation for this work is to improve the strongest levels of AI play to satisfy experienced players, both in terms of the objective strength of the AI and in how convincing the chosen moves appear. Previous work has adapted MCTS to games which, like Spades, involve hidden information. This has led to the development of the Information Set Monte Carlo Tree Search (ISMCTS) family of algorithms (Cowling, Powley, and Whitehouse 2012). ISMCTS achieves a higher win rate than a knowledge-based AI developed by AI Factory for the Chinese card game Dou Di Zhu, and also performs well in other domains. ISMCTS uses determinizations, randomisations of the current game state which correspond to guessing hidden information. Each determinization is a game state that could conceivably be the actual current state, given the AI player’s observations so far. In Spades, a determinization is generated by randomly distributing the unseen cards amongst the other players. Each ISMCTS iteration is restricted to a newly generated determinization, resulting in a single tree that collects statistics from many determinizations. We demonstrate that the ISMCTS algorithm provides strong levels of play for Spades. However, previous work on ISMCTS has not dealt with the requirements for a commercially viable AI. Consequently, further research and development was needed in order to ensure the AI is perceived to be high quality by users. However, the effort required to inject knowledge into MCTS was small compared to the work needed to develop a heuristic-based AI from scratch. MCTS therefore shows great promise as a reusable basis for AI in commercial games. The ISMCTS player described in this paper is used in the currently available version of AI Factory Spades for the 4and 5-star AI levels, and AI Factory have already begun using the same code and techniques in products under development. This paper is structured as follows. We begin by outlining the rules of Spades and describing the knowledge-based approach used in AI Factory Spades. 
We then discuss some of the issues encountered in integrating MCTS with an existing mature codebase, and in running MCTS on mobile platforms with limited processor power and memory. We assess our MCTS player in terms of both raw playing strength and player engagement. We conclude with some thoughts on the promise of MCTS for future commercial games.", "title": "" } ]
scidocsrr
8ea6c2e2d82663cb0a47e7863d07b2ae
Projective Feature Learning for 3D Shapes with Multi-View Depth Images
[ { "docid": "0964d1cc6584f2e20496c2f02952ba46", "text": "This paper proposes to learn a set of high-level feature representations through deep learning, referred to as Deep hidden IDentity features (DeepID), for face verification. We argue that DeepID can be effectively learned through challenging multi-class face identification tasks, whilst they can be generalized to other tasks (such as verification) and new identities unseen in the training set. Moreover, the generalization capability of DeepID increases as more face classes are to be predicted at training. DeepID features are taken from the last hidden layer neuron activations of deep convolutional networks (ConvNets). When learned as classifiers to recognize about 10, 000 face identities in the training set and configured to keep reducing the neuron numbers along the feature extraction hierarchy, these deep ConvNets gradually form compact identity-related features in the top layers with only a small number of hidden neurons. The proposed features are extracted from various face regions to form complementary and over-complete representations. Any state-of-the-art classifiers can be learned based on these high-level representations for face verification. 97:45% verification accuracy on LFW is achieved with only weakly aligned faces.", "title": "" } ]
[ { "docid": "614174e5e1dffe9824d7ef8fae6fb499", "text": "This paper starts with presenting a fundamental principle based on which the celebrated orthogonal frequency division multiplexing (OFDM) waveform is constructed. It then extends the same principle to construct the newly introduced generalized frequency division multiplexing (GFDM) signals. This novel derivation sheds light on some interesting properties of GFDM. In particular, our derivation seamlessly leads to an implementation of GFDM transmitter which has significantly lower complexity than what has been reported so far. Our derivation also facilitates a trivial understanding of how GFDM (similar to OFDM) can be applied in MIMO channels.", "title": "" }, { "docid": "0f5caf6bb5e0fdb99fba592fd34f1a8b", "text": "Lawrence Kohlberg (1958) agreed with Piaget's (1932) theory of moral development in principle but wanted to develop his ideas further. He used Piaget’s storytelling technique to tell people stories involving moral dilemmas. In each case, he presented a choice to be considered, for example, between the rights of some authority and the needs of some deserving individual who is being unfairly treated. One of the best known of Kohlberg’s (1958) stories concerns a man called Heinz who lived somewhere in Europe. Heinz’s wife was dying from a particular type of cancer. Doctors said a new drug might save her. The drug had been discovered by a local chemist, and the Heinz tried desperately to buy some, but the chemist was charging ten times the money it cost to make the drug, and this was much more than the Heinz could afford. Heinz could only raise half the money, even after help from family and friends. He explained to the chemist that his wife was dying and asked if he could have the drug cheaper or pay the rest of the money later. The chemist refused, saying that he had discovered the drug and was going to make money from it. The husband was desperate to save his wife, so later that night he broke into the chemist’s and stole the drug.", "title": "" }, { "docid": "61980865ef90d0236af464caf2005024", "text": "Driver fatigue has become one of the major causes of traffic accidents, and is a complicated physiological process. However, there is no effective method to detect driving fatigue. Electroencephalography (EEG) signals are complex, unstable, and non-linear; non-linear analysis methods, such as entropy, maybe more appropriate. This study evaluates a combined entropy-based processing method of EEG data to detect driver fatigue. In this paper, 12 subjects were selected to take part in an experiment, obeying driving training in a virtual environment under the instruction of the operator. Four types of enthrones (spectrum entropy, approximate entropy, sample entropy and fuzzy entropy) were used to extract features for the purpose of driver fatigue detection. Electrode selection process and a support vector machine (SVM) classification algorithm were also proposed. The average recognition accuracy was 98.75%. Retrospective analysis of the EEG showed that the extracted features from electrodes T5, TP7, TP8 and FP1 may yield better performance. SVM classification algorithm using radial basis function as kernel function obtained better results. A combined entropy-based method demonstrates good classification performance for studying driver fatigue detection.", "title": "" }, { "docid": "c4fef61aa26aa1d3ef693845b2ff3ee0", "text": "According to AV vendors malicious software has been growing exponentially last years. 
One of the main reasons for these high volumes is that in order to evade detection, malware authors started using polymorphic and metamorphic techniques. As a result, traditional signature-based approaches to detect malware are insufficient against new malware and the categorization of malware samples has become essential to know the basis of the behavior of malware and to fight back cybercriminals. During the last decade, solutions that fight against malicious software have begun using machine learning approaches. Unfortunately, there are few open-source datasets available for the academic community. One of the biggest datasets available was released last year in a competition hosted on Kaggle with data provided by Microsoft for the Big Data Innovators Gathering (BIG 2015). This thesis presents two novel and scalable approaches using Convolutional Neural Networks (CNNs) to assign malware to its corresponding family. On one hand, the first approach makes use of CNNs to learn a feature hierarchy to discriminate among samples of malware represented as gray-scale images. On the other hand, the second approach uses the CNN architecture introduced by Yoon Kim [12] to classify malware samples according to their x86 instructions. The proposed methods achieved an improvement of 93.86% and 98.56% with respect to the equal probability benchmark.", "title": "" }, { "docid": "dfc9099b1b31d5f214b341c65fbb8e92", "text": "In this communication, a dual-feed dual-polarized microstrip antenna with low cross polarization and high isolation is experimentally studied. Two different feed mechanisms are designed to excite a dual orthogonal linearly polarized mode from a single radiating patch. One of the two modes is excited by an aperture-coupled feed, which comprises a compact resonant annular-ring slot and a T-shaped microstrip feedline; while the other is excited by a pair of meandering strips with a 180$^{\\circ}$ phase difference. Both linearly polarized modes are designed to operate at 2400-MHz frequency band, and from the measured results, it is found that the isolation between the two feeding ports is less than 40 dB across a 10-dB input-impedance bandwidth of 14%. In addition, low cross polarization is observed from the radiation patterns of the two modes, especially at the broadside direction. Simulation analyses are also carried out to support the measured results.", "title": "" }, { "docid": "5e43dd30c8cf58fe1b79686b33a015b9", "text": "We review Boltzmann machines extended for time-series. These models often have recurrent structure, and back propagation through time (BPTT) is used to learn their parameters. The per-step computational complexity of BPTT in online learning, however, grows linearly with respect to the length of preceding time-series (i.e., learning rule is not local in time), which limits the applicability of BPTT in online learning. We then review dynamic Boltzmann machines (DyBMs), whose learning rule is local in time. DyBM’s learning rule relates to spike-timing dependent plasticity (STDP), which has been postulated and experimentally confirmed for biological neural networks.", "title": "" }, { "docid": "040f73fc915d3799193abf5e3a48e8f4", "text": "BACKGROUND\nDiphallia is a very rare anomaly and seen once in every 5.5 million live births. True diphallia with normal penile structures is extremely rare. 
Surgical management for patients with complete penile duplication without any penile or urethral pathology is challenging.\n\n\nCASE REPORT\nA 4-year-old boy presented with diphallia. Initial physical examination revealed complete penile duplication, urine flow from both penises, meconium flow from right urethra, and anal atresia. Further evaluations showed double colon and rectum, double bladder, and large recto-vesical fistula. Two cavernous bodies and one spongious body were detected in each penile body. Surgical treatment plan consisted of right total penectomy and end-to-side urethra-urethrostomy. No postoperative complications and no voiding dysfunction were detected during the 18 months follow-up.\n\n\nCONCLUSION\nPenile duplication is a rare anomaly, which presents differently in each patient. Because of this, the treatment should be individualized and end-to-side urethra-urethrostomy may be an alternative to removing posterior urethra. This approach eliminates the risk of damaging prostate gland and sphincter.", "title": "" }, { "docid": "48c4b2a708f2607a8d66b642e917433d", "text": "In this paper we present an approach to control a real car with brain signals. To achieve this, we use a brain computer interface (BCI) which is connected to our autonomous car. The car is equipped with a variety of sensors and can be controlled by a computer. We implemented two scenarios to test the usability of the BCI for controlling our car. In the first scenario our car is completely brain controlled, using four different brain patterns for steering and throttle/brake. We will describe the control interface which is necessary for a smooth, brain controlled driving. In a second scenario, decisions for path selection at intersections and forkings are made using the BCI. Between these points, the remaining autonomous functions (e.g. path following and obstacle avoidance) are still active. We evaluated our approach in a variety of experiments on a closed airfield and will present results on accuracy, reaction times and usability.", "title": "" }, { "docid": "b4cadd9179150203638ff9b045a4145d", "text": "Interpenetrating network (IPN) hydrogel membranes of sodium alginate (SA) and poly(vinyl alcohol) (PVA) were prepared by solvent casting method for transdermal delivery of an anti-hypertensive drug, prazosin hydrochloride. The prepared membranes were thin, flexible and smooth. The X-ray diffraction studies indicated the amorphous dispersion of drug in the membranes. Differential scanning calorimetric analysis confirmed the IPN formation and suggests that the membrane stiffness increases with increased concentration of glutaraldehyde (GA) in the membranes. All the membranes were permeable to water vapors depending upon the extent of cross-linking. The in vitro drug release study was performed through excised rat abdominal skin; drug release depends on the concentrations of GA in membranes. The IPN membranes extended drug release up to 24 h, while SA and PVA membranes discharged the drug quickly. The primary skin irritation and skin histopathology study indicated that the prepared IPN membranes were less irritant and safe for skin application.", "title": "" }, { "docid": "b123916f2795ab6810a773ac69bdf00b", "text": "The acceptance of open data practices by individuals and organizations has led to an enormous explosion in data production on the Internet. 
The access to a large number of these data is carried out through Web services, which provide a standard way to interact with data. This class of services is known as data services. In this context, users' queries often require the composition of multiple data services to be answered. On the other hand, the data returned by a data service is not always certain due to various reasons, e.g., the service accesses different data sources, privacy constraints, etc. In this paper, we study the basic activities of data services that are affected by the uncertainty of data, more specifically, modeling, invocation and composition. We propose a possibilistic approach that treats the uncertainty in all these activities.", "title": "" }, { "docid": "8fdfebc612ff46103281fcdd7c9d28c8", "text": "We develop a shortest augmenting path algorithm for the linear assignment problem. It contains new initialization routines and a special implementation of Dijkstra's shortest path method. For both dense and sparse problems computational experiments show this algorithm to be uniformly faster than the best algorithms from the literature. A Pascal implementation is presented.", "title": "" }, { "docid": "eb9b4bea2d1a6230f8fb9e742bb7bc23", "text": "Increasing the size of a neural network typically improves accuracy but also increases the memory and compute requirements for training the model. We introduce methodology for training deep neural networks using half-precision floating point numbers, without losing model accuracy or having to modify hyperparameters. This nearly halves memory requirements and, on recent GPUs, speeds up arithmetic. Weights, activations, and gradients are stored in IEEE half-precision format. Since this format has a narrower range than single-precision, we propose three techniques for preventing the loss of critical information. Firstly, we recommend maintaining a single-precision copy of weights that accumulates the gradients after each optimizer step (this copy is rounded to half-precision for the forward- and back-propagation). Secondly, we propose loss-scaling to preserve gradient values with small magnitudes. Thirdly, we use half-precision arithmetic that accumulates into single-precision outputs, which are converted to half-precision before storing to memory. We demonstrate that the proposed methodology works across a wide variety of tasks and modern large scale (exceeding 100 million parameters) model architectures, trained on large datasets.", "title": "" }, { "docid": "9c2e89bad3ca7b7416042f95bf4f4396", "text": "We present a simple and computationally efficient algorithm for approximating Catmull-Clark subdivision surfaces using a minimal set of bicubic patches. For each quadrilateral face of the control mesh, we construct a geometry patch and a pair of tangent patches. The geometry patches approximate the shape and silhouette of the Catmull-Clark surface and are smooth everywhere except along patch edges containing an extraordinary vertex where the patches are C0. 
To make the patch surface appear smooth, we provide a pair of tangent patches that approximate the tangent fields of the Catmull-Clark surface. These tangent patches are used to construct a continuous normal field (through their cross-product) for shading and displacement mapping. Using this bifurcated representation, we are able to define an accurate proxy for Catmull-Clark surfaces that is efficient to evaluate on next-generation GPU architectures that expose a programmable tessellation unit.", "title": "" }, { "docid": "3fa5de33e7ccd6c440a4a65a5681f8b8", "text": "Argumentation is the process by which arguments are constructed and handled. Argumentation constitutes a major component of human intelligence. The ability to engage in argumentation is essential for humans to understand new problems, to perform scientific reasoning, to express, to clarify and to defend their opinions in their daily lives. Argumentation mining aims to detect the arguments presented in a text document, the relations between them and the internal structure of each individual argument. In this paper we analyse the main research questions when dealing with argumentation mining and the different methods we have studied and developed in order to successfully confront the challenges of argumentation mining in legal texts.", "title": "" }, { "docid": "5793cf03753f498a649c417e410c325e", "text": "The paper investigates parameterized approximate message-passing schemes that are based on bounded inference and are inspired by Pearl's belief propagation algorithm (BP). We start with the bounded inference mini-clustering algorithm and then move to the iterative scheme called Iterative Join-Graph Propagation (IJGP), that combines both iteration and bounded inference. Algorithm IJGP belongs to the class of Generalized Belief Propagation algorithms, a framework that allowed connections with approximate algorithms from statistical physics and is shown empirically to surpass the performance of mini-clustering and belief propagation, as well as a number of other state-of-the-art algorithms on several classes of networks. We also provide insight into the accuracy of iterative BP and IJGP by relating these algorithms to well known classes of constraint propagation schemes.", "title": "" }, { "docid": "b1960cfe66e08bac1d4ff790ecfb0190", "text": "Cloud federations are a new collaboration paradigm where organizations share data across their private cloud infrastructures. However, the adoption of cloud federations is hindered by federated organizations' concerns on potential risks of data leakage and data misuse. For cloud federations to be viable, federated organizations' privacy concerns should be alleviated by providing mechanisms that allow organizations to control which users from other federated organizations can access which data. We propose a novel identity and access management system for cloud federations. The system allows federated organizations to enforce attribute-based access control policies on their data in a privacy-preserving fashion. Users are granted access to federated data when their identity attributes match the policies, but without revealing their attributes to the federated organization owning data. The system also guarantees the integrity of the policy evaluation process by using block chain technology and Intel SGX trusted hardware. 
It uses block chain to ensure that users identity attributes and access control policies cannot be modified by a malicious user, while Intel SGX protects the integrity and confidentiality of the policy enforcement process. We present the access control protocol, the system architecture and discuss future extensions.", "title": "" }, { "docid": "b7e78ca489cdfb8efad03961247e12f2", "text": "ASR short for Automatic Speech Recognition is the process of converting a spoken speech into text that can be manipulated by a computer. Although ASR has several applications, it is still erroneous and imprecise especially if used in a harsh surrounding wherein the input speech is of low quality. This paper proposes a post-editing ASR error correction method and algorithm based on Bing’s online spelling suggestion. In this approach, the ASR recognized output text is spell-checked using Bing’s spelling suggestion technology to detect and correct misrecognized words. More specifically, the proposed algorithm breaks down the ASR output text into several word-tokens that are submitted as search queries to Bing search engine. A returned spelling suggestion implies that a query is misspelled; and thus it is replaced by the suggested correction; otherwise, no correction is performed and the algorithm continues with the next token until all tokens get validated. Experiments carried out on various speeches in different languages indicated a successful decrease in the number of ASR errors and an improvement in the overall error correction rate. Future research can improve upon the proposed algorithm so much so that it can be parallelized to take advantage of multiprocessor computers. KeywordsSpeech Recognition; Error Correction; Bing Spelling", "title": "" }, { "docid": "7431ee071307189e58b5c7a9ce3a2189", "text": "Among tangible threats and vulnerabilities facing current biometric systems are spoofing attacks. A spoofing attack occurs when a person tries to masquerade as someone else by falsifying data and thereby gaining illegitimate access and advantages. Recently, an increasing attention has been given to this research problem. This can be attested by the growing number of articles and the various competitions that appear in major biometric forums. We have recently participated in a large consortium (TABULARASA) dealing with the vulnerabilities of existing biometric systems to spoofing attacks with the aim of assessing the impact of spoofing attacks, proposing new countermeasures, setting standards/protocols, and recording databases for the analysis of spoofing attacks to a wide range of biometrics including face, voice, gait, fingerprints, retina, iris, vein, electro-physiological signals (EEG and ECG). The goal of this position paper is to share the lessons learned about spoofing and anti-spoofing in face biometrics, and to highlight open issues and future directions.", "title": "" }, { "docid": "8a22660b73d11ee9c634579527049d43", "text": "Current unsupervised image-to-image translation techniques struggle to focus their attention on individual objects without altering the background or the way multiple objects interact within a scene. Motivated by the important role of attention in human perception, we tackle this limitation by introducing unsupervised attention mechanisms that are jointly adversarially trained with the generators and discriminators. 
We demonstrate qualitatively and quantitatively that our approach attends to relevant regions in the image without requiring supervision, which creates more realistic mappings when compared to those of recent approaches. Figure 1 (columns: Input, Ours, CycleGAN [1], RA [2], DiscoGAN [3], UNIT [4], DualGAN [5]): By explicitly modeling attention, our algorithm is able to better alter the object of interest in unsupervised image-to-image translation tasks, without changing the background at the same time.", "title": "" }, { "docid": "ec593c78e3b2bc8f9b8a657093daac49", "text": "Analyses of 3-D seismic data in predominantly basin-floor settings offshore Indonesia, Nigeria, and the Gulf of Mexico, reveal the extensive presence of gravity-flow depositional elements. Five key elements were observed: (1) turbidity-flow leveed channels, (2) channel-overbank sediment waves and levees, (3) frontal splays or distributary-channel complexes, (4) crevasse-splay complexes, and (5) debris-flow channels, lobes, and sheets. Each depositional element displays a unique morphology and seismic expression. The reservoir architecture of each of these depositional elements is a function of the interaction between sedimentary process, sea-floor morphology, and sediment grain-size distribution. (1) Turbidity-flow leveed-channel widths range from greater than 3 km to less than 200 m. Sinuosity ranges from moderate to high, and channel meanders in most instances migrate down-system. The high-amplitude reflection character that commonly characterizes these features suggests the presence of sand within the channels. In some instances, high-sinuosity channels are associated with (2) channel-overbank sediment-wave development in proximal overbank levee settings, especially in association with outer channel bends. These sediment waves reach heights of 20 m and spacings of 2–3 km. The crests of these sediment waves are oriented normal to the inferred transport direction of turbidity flows, and the waves have migrated in an upflow direction. Channel-margin levee thickness decreases systematically down-system. Where levee thickness can no longer be resolved seismically, high-sinuosity channels feed (3) frontal splays or low-sinuosity, distributary-channel complexes. Low-sinuosity distributary-channel complexes are expressed as lobate sheets up to 5–10 km wide and tens of kilometers long that extend to the distal edges of these systems. They likely comprise sheet-like sandstone units consisting of shallow channelized and associated sand-rich overbank deposits. Also observed are (4) crevasse-splay deposits, which form as a result of the breaching of levees, commonly at channel bends. Similar to frontal splays, but smaller in size, these deposits commonly are characterized by sheet-like turbidites. (5) Debris-flow deposits comprise low-sinuosity channel fills, narrow elongate lobes, and sheets and are characterized seismically by contorted, chaotic, low-amplitude reflection patterns. These deposits commonly overlie striated or grooved pavements that can be up to tens of kilometers long, 15 m deep, and 25 m wide. Where flows are unconfined, striation patterns suggest that divergent flow is common. Debris-flow deposits extend as far basinward as turbidites, and individual debris-flow units can reach 80 m in thickness and commonly are marked by steep edges. Transparent to chaotic seismic reflection character suggest that these deposits are mud-rich. 
Stratigraphically, deep-water basin-floor successions commonly are characterized by mass-transport deposits at the base, overlain by turbidite frontal-splay deposits and subsequently by leveed-channel deposits. Capping this succession is another mass-transport unit ultimately overlain and draped by condensed-section deposits. This succession can be related to a cycle of relative sea-level change and associated events at the corresponding shelf edge. Commonly, deposition of a deep-water sequence is initiated with the onset of relative sea-level fall and ends with subsequent rapid relative sea-level rise. INTRODUCTION The understanding of deep-water depositional systems has advanced significantly in recent years. In the past, much understanding of deep-water sedimentation came from studies of outcrops, recent fan systems, and 2D reflection seismic data (Bouma 1962; Mutti and Ricci Lucchi 1972; Normark 1970, 1978; Walker 1978; Posamentier et al. 1991; Weimer 1991; Mutti and Normark 1991). However, in recent years this knowledge has advanced significantly because of (1) the interest by petroleum companies in deep-water exploration (e.g., Pirmez et al. 2000), and the advent of widely available high-quality 3D seismic data across a broad range of deepwater environments (e.g., Beaubouef and Friedman 2000; Posamentier et al. 2000), (2) the recent drilling and coring of both near-surface and reservoir-level deep-water systems (e.g., Twichell et al. 1992), and (3) the increasing utilization of deep-tow side-scan sonar and other imaging devices (e.g., Twichell et al. 1992; Kenyon and Millington 1995). It is arguably the first factor that has had the most significant impact on our understanding of deep-water systems. Three-dimensional seismic data afford an unparalleled view of the deep-water depositional environment, in some instances with vertical resolution down to 2–3 m. Seismic time slices, horizon-datum time slices, and interval attributes provide images of deepwater depositional systems in map view that can then be analyzed from a geomorphologic perspective. Geomorphologic analyses lead to the identification of depositional elements, which, when integrated with seismic profiles, can yield significant stratigraphic insight. Finally, calibration by correlation with borehole data, including logs, conventional core, and biostratigraphic samples, can provide the interpreter with an improved understanding of the geology of deep-water systems. The focus of this study is the deep-water component of a depositional sequence. We describe and discuss only those elements and stratigraphic successions that are present in deep-water depositional environments. The examples shown in this study largely are Pleistocene in age and most are encountered within the uppermost 400 m of substrate. These relatively shallowly buried features represent the full range of lowstand deep-water depositional sequences from early and late lowstand through transgressive and highstand deposits. Because they are not buried deeply, these stratigraphic units commonly are well-imaged on 3D seismic data. It is also noteworthy that although the examples shown here largely are of Pleistocene age, the age of these deposits should not play a significant role in subsequent discussion. What determines the architecture of deep-water deposits are the controlling parameters of flow discharge, sand-to-mud ratio, slope length, slope gradient, and rugosity of the seafloor, and not the age of the deposits. 
It does not matter whether these deposits are Pleistocene, Carboniferous, or Precambrian; the physical “first principles” of sediment gravity flow apply without distinguishing between when these deposits formed. However, from the perspective of studying deep-water turbidites it is advantageous that the Pleistocene was such an active time in the deep-water environment, resulting in deposition of numerous shallowly buried, well-imaged, deep-water systems. Depositional Elements Approach This study is based on the grouping of similar geomorphic features referred to as depositional elements. Depositional elements are defined by Mutti and Normark (1991) as the basic mappable components of both modern and ancient turbidite systems and stages that can be recognized in marine, outcrop, and subsurface studies. These features are the building blocks of landscapes. The focus of this study is to use 3D seismic data to characterize the geomorphology and stratigraphy of deep-water depositional elements and infer process of deposition where appropriate. Depositional elements can vary from place to place and in the same place through time with changes of environmental parameters such as sand-to-mud ratio, flow discharge, and slope gradient. In some instances, systematic changes in these environmental parameters can be tied back to changes of relative sea level. The following depositional elements will be discussed: (1) turbidity-flow leveed channels, (2) overbank sediment waves and levees, (3) frontal splays or distributary-channel complexes, (4) crevasse-splay complexes, and (5) debris-flow channels, lobes, and sheets (Fig. 1, a schematic depiction of the principal depositional elements in deep-water settings). Each element is described and depositional processes are discussed. Finally, the exploration significance of each depositional element is reviewed. Examples are drawn from three deep-water slope and basin-floor settings: the Gulf of Mexico, offshore Nigeria, and offshore eastern Kalimantan, Indonesia. We utilized various visualization techniques, including 3D perspective views, horizon slices, and horizon and interval attribute displays, to bring out the detailed characteristics of depositional elements and their respective geologic settings. The deep-water depositional elements we present here are commonly characterized by peak seismic frequencies in excess of 100 Hz. The vertical resolution at these shallow depths of burial is in the range of 3–4 m, thus affording high-resolution images of depositional elements. We hope that our study, based on observations from the shallow subsurface, will provide general insights into the reservoir architecture of deep-water depositional elements, which can be extrapolated to more poorly resolved deep-water systems encountered at deeper exploration depths. DEPOSITIONAL ELEMENTS The following discussion focuses on five depositional elements in deep-water environments. These include turbidity-flow leveed channels, overbank or levee deposits, frontal splays or distributary-channel complexes, crevasse splays, and debris-flow sheets, lobes, and channels (Fig. 1). Turbidity-Flow Leveed Channels Leveed channels are common depositional elements in slope and basin-floor environments. Leveed channels observed in this study range in width from 3 km to less than 250 m and in sinuosity (i.e., the ratio of channel-axis length to channel-belt length) between 1.2 and 2.2. 
Some leveed channels are internally characterized by complex cut-and-fill architecture. Many leveed channels show evidence ", "title": "" } ]
scidocsrr
05c93893f503dc646716fb23d52ebad1
3D Printing Your Wireless Coverage
[ { "docid": "1f39815e008e895632403bbe9456acad", "text": "Information on site-specific spectrum characteristics is essential to evaluate and improve the performance of wireless networks. However, it is usually very costly to obtain accurate spectrum-condition information in heterogeneous wireless environments. This paper presents a novel spectrum-survey system, called Sybot (Spectrum survey robot), that guides network engineers to efficiently monitor the spectrum condition (e.g., RSS) of WiFi networks. Sybot effectively controls mobility and employs three disparate monitoring techniques - complete, selective, and diagnostic - that help produce and maintain an accurate spectrum-condition map for challenging indoor WiFi networks. By adaptively triggering the most suitable of the three techniques, Sybot captures spatio-temporal changes in spectrum condition. Moreover, based on the monitoring results, Sybot automatically determines several key survey parameters, such as site-specific measurement time and space granularities. Sybot has been prototyped with a commodity IEEE 802.11 router and Linux OS, and experimentally evaluated, demonstrating its ability to generate accurate spectrum-condition maps while reducing the measurement effort (space, time) by more than 56%.", "title": "" }, { "docid": "080dbf49eca85711f26d4e0d8386937a", "text": "In this work, we investigate the use of directional antennas and beam steering techniques to improve performance of 802.11 links in the context of communication between amoving vehicle and roadside APs. To this end, we develop a framework called MobiSteer that provides practical approaches to perform beam steering. MobiSteer can operate in two modes - cached mode - where it uses prior radiosurvey data collected during \"idle\" drives, and online mode, where it uses probing. The goal is to select the best AP and beam combination at each point along the drive given the available information, so that the throughput can be maximized. For the cached mode, an optimal algorithm for AP and beam selection is developed that factors in all overheads.\n We provide extensive experimental results using a commercially available eight element phased-array antenna. In the experiments, we use controlled scenarios with our own APs, in two different multipath environments, as well as in situ scenarios, where we use APs already deployed in an urban region - to demonstrate the performance advantage of using MobiSteer over using an equivalent omni-directional antenna. We show that MobiSteer improves the connectivity duration as well as PHY-layer data rate due to better SNR provisioning. In particular, MobiSteer improves the throughput in the controlled experiments by a factor of 2 - 4. In in situ experiments, it improves the connectivity duration by more than a factor of 2 and average SNR by about 15 dB.", "title": "" } ]
[ { "docid": "ff56bae298b25accf6cd8c2710160bad", "text": "An important difference between traditional AI systems and human intelligence is the human ability to harness commonsense knowledge gleaned from a lifetime of learning and experience to make informed decisions. This allows humans to adapt easily to novel situations where AI fails catastrophically due to a lack of situation-specific rules and generalization capabilities. Commonsense knowledge also provides background information that enables humans to successfully operate in social situations where such knowledge is typically assumed. Since commonsense consists of information that humans take for granted, gathering it is an extremely difficult task. Previous versions of SenticNet were focused on collecting this kind of knowledge for sentiment analysis but they were heavily limited by their inability to generalize. SenticNet 4 overcomes such limitations by leveraging on conceptual primitives automatically generated by means of hierarchical clustering and dimensionality reduction.", "title": "" }, { "docid": "b1d61ca503702f950ef1275b904850e7", "text": "Prior research has demonstrated a clear relationship between experiences of racial microaggressions and various indicators of psychological unwellness. One concern with these findings is that the role of negative affectivity, considered a marker of neuroticism, has not been considered. Negative affectivity has previously been correlated to experiences of racial discrimination and psychological unwellness and has been suggested as a cause of the observed relationship between microaggressions and psychopathology. We examined the relationships between self-reported frequency of experiences of microaggressions and several mental health outcomes (i.e., anxiety [Beck Anxiety Inventory], stress [General Ethnic and Discrimination Scale], and trauma symptoms [Trauma Symptoms of Discrimination Scale]) in 177 African American and European American college students, controlling for negative affectivity (the Positive and Negative Affect Schedule) and gender. Results indicated that African Americans experience more racial discrimination than European Americans. Negative affectivity in African Americans appears to be significantly related to some but not all perceptions of the experience of discrimination. A strong relationship between racial mistreatment and symptoms of psychopathology was evident, even after controlling for negative affectivity. In summary, African Americans experience clinically measurable anxiety, stress, and trauma symptoms as a result of racial mistreatment, which cannot be wholly explained by individual differences in negative affectivity. Future work should examine additional factors in these relationships, and targeted interventions should be developed to help those suffering as a result of racial mistreatment and to reduce microaggressions.", "title": "" }, { "docid": "9746a126b884fe5e542ebb31f814c281", "text": "LLC resonant DC/DC converters are becoming popular in computing applications, such as telecom, server systems. For these applications, it is required to meet the EMI standard. In this paper, novel EMI noise transferring path and EMI model for LLC resonant DC/DC converters are proposed. DM and CM noise of LLC resonant converter are analyzed. Several EMI noise reduction approaches are proposed. Shield layers are applied to reduce CM noise. By properly choosing the ground point of shield layer, significant noise reduction can be obtained. 
With extra EMI balance capacitor, CM noise can be reduced further. Two channel interleaving LLC resonant converters are proposed to cancel the CM current. Conceptually, when two channels operate with 180 degree phase shift, CM current can be canceled. Therefore, the significant EMI noise reduction can be achieved.", "title": "" }, { "docid": "7d1a7bc7809a578cd317dfb8ba5b7678", "text": "In this paper, we introduce a new technology, which allows people to share taste and smell sensations digitally with a remote person through existing networking technologies such as the Internet. By introducing this technology, we expect people to share their smell and taste experiences with their family and friends remotely. Sharing these senses are immensely beneficial since those are strongly associated with individual memories, emotions, and everyday experiences. As the initial step, we developed a control system, an actuator, which could digitally stimulate the sense of taste remotely. The system uses two approaches to stimulate taste sensations digitally: the electrical and thermal stimulations on tongue. Primary results suggested that sourness and saltiness are the main sensations that could be evoked through this device. Furthermore, this paper focuses on future aspects of such technology for remote smell actuation followed by applications and possibilities for further developments.", "title": "" }, { "docid": "a79424d0ec38c2355b288364f45f90de", "text": "This paper mainly deals with various classification algorithms namely, Bayes. NaiveBayes, Bayes. BayesNet, Bayes. NaiveBayesUpdatable, J48, Randomforest, and Multi Layer Perceptron. It analyzes the hepatitis patients from the UC Irvine machine learning repository. The results of the classification model are accuracy and time. Finally, it concludes that the Naive Bayes performance is better than other classification techniques for hepatitis patients.", "title": "" }, { "docid": "a04e2df0d6ca5eae1db6569b43b897bd", "text": "Workflow technologies have become a major vehicle for easy and efficient development of scientific applications. In the meantime, state-of-the-art resource provisioning technologies such as cloud computing enable users to acquire computing resources dynamically and elastically. A critical challenge in integrating workflow technologies with resource provisioning technologies is to determine the right amount of resources required for the execution of workflows in order to minimize the financial cost from the perspective of users and to maximize the resource utilization from the perspective of resource providers. This paper suggests an architecture for the automatic execution of large-scale workflow-based applications on dynamically and elastically provisioned computing resources. Especially, we focus on its core algorithm named PBTS (Partitioned Balanced Time Scheduling), which estimates the minimum number of computing hosts required to execute a workflow within a user-specified finish time. The PBTS algorithm is designed to fit both elastic resource provisioning models such as Amazon EC2 and malleable parallel application models such as MapReduce. The experimental results with a number of synthetic workflows and several real science workflows demonstrate that PBTS estimates the resource capacity close to the theoretical low bound. © 2011 Elsevier B.V. 
All rights reserved.", "title": "" }, { "docid": "a5e01cfeb798d091dd3f2af1a738885b", "text": "It is shown by an extensive benchmark on molecular energy data that the mathematical form of the damping function in DFT-D methods has only a minor impact on the quality of the results. For 12 different functionals, a standard \"zero-damping\" formula and rational damping to finite values for small interatomic distances according to Becke and Johnson (BJ-damping) has been tested. The same (DFT-D3) scheme for the computation of the dispersion coefficients is used. The BJ-damping requires one fit parameter more for each functional (three instead of two) but has the advantage of avoiding repulsive interatomic forces at shorter distances. With BJ-damping better results for nonbonded distances and more clear effects of intramolecular dispersion in four representative molecular structures are found. For the noncovalently-bonded structures in the S22 set, both schemes lead to very similar intermolecular distances. For noncovalent interaction energies BJ-damping performs slightly better but both variants can be recommended in general. The exception to this is Hartree-Fock that can be recommended only in the BJ-variant and which is then close to the accuracy of corrected GGAs for non-covalent interactions. According to the thermodynamic benchmarks BJ-damping is more accurate especially for medium-range electron correlation problems and only small and practically insignificant double-counting effects are observed. It seems to provide a physically correct short-range behavior of correlation/dispersion even with unmodified standard functionals. In any case, the differences between the two methods are much smaller than the overall dispersion effect and often also smaller than the influence of the underlying density functional.", "title": "" }, { "docid": "758978c4b8f3bdd0a57fe9865892fbc3", "text": "The foundation of a process model lies in its structural specifications. Using a generic process modeling language for workflows, we show how a structural specification may contain deadlock and lack of synchronization conflicts that could compromise the correct execution of workflows. In general, identification of such conflicts is a computationally complex problem and requires development of effective algorithms specific for the target modeling language. We present a visual verification approach and algorithm that employs a set of graph reduction rules to identify structural conflicts in process models for the given workflow modeling language. We also provide insights into the correctness and complexity of the reduction process. Finally, we show how the reduction algorithm may be used to count possible instance subgraphs of a correct process model. The main contribution of the paper is a new technique for satisfying well-defined correctness criteria in process models.", "title": "" }, { "docid": "12a5fb7867cddaca43c3508b0c1a1ed2", "text": "The class scheduling problem can be modeled by a graph where the vertices and edges represent the courses and the common students, respectively. The problem is to assign the courses a given number of time slots (colors), where each time slot can be used for a given number of class rooms. The Vertex Coloring (VC) algorithm is a polynomial time algorithm which produces a conflict free solution using the least number of colors [9]. 
However, the VC solution may not be implementable because it uses a number of time slots that exceed the available ones with unbalanced use of class rooms. We propose a heuristic approach VC* to (1) promote uniform distribution of courses over the colors and to (2) balance course load for each time slot over the available class rooms. The performance function represents the percentage of students in all courses that could not be mapped to time slots or to class rooms. A randomized simulation of registration of four departments with up to 1200 students is used to evaluate the performance of proposed heuristic.", "title": "" }, { "docid": "746f77aad26e3e3492ef021ac0d7da6a", "text": "The proliferation of mobile computing and smartphone technologies has resulted in an increasing number and range of services from myriad service providers. These mobile service providers support numerous emerging services with differing quality metrics but similar functionality. Facilitating an automated service workflow requires fast selection and composition of services from the services pool. The mobile environment is ambient and dynamic in nature, requiring more efficient techniques to deliver the required service composition promptly to users. Selecting the optimum required services in a minimal time from the numerous sets of dynamic services is a challenge. This work addresses the challenge as an optimization problem. An algorithm is developed by combining particle swarm optimization and k-means clustering. It runs in parallel using MapReduce in the Hadoop platform. By using parallel processing, the optimum service composition is obtained in significantly less time than alternative algorithms. This is essential for handling large amounts of heterogeneous data and services from various sources in the mobile environment. The suitability of this proposed approach for big data-driven service composition is validated through modeling and simulation.", "title": "" }, { "docid": "7ebbb9ebc94c72997895b4141de6f67a", "text": "Purpose – The purpose of this paper is to highlight the potential role that the so-called “toxic triangle” (Padilla et al., 2007) can play in undermining the processes around effectiveness. It is the interaction between leaders, organisational members, and the environmental context in which those interactions occur that has the potential to generate dysfunctional behaviours and processes. The paper seeks to set out a set of issues that would seem to be worthy of further consideration within the Journal and which deal with the relationships between organisational effectiveness and the threats from insiders. Design/methodology/approach – The paper adopts a systems approach to the threats from insiders and the manner in which it impacts on organisation effectiveness. The ultimate goal of the paper is to stimulate further debate and discussion around the issues. Findings – The paper adds to the discussions around effectiveness by highlighting how senior managers can create the conditions in which failure can occur through the erosion of controls, poor decision making, and the creation of a culture that has the potential to generate failure. Within this setting, insiders can serve to trigger a series of failures by their actions and for which the controls in place are either ineffective or have been by-passed as a result of insider knowledge. 
Research limitations/implications – The issues raised in this paper need to be tested empirically as a means of providing a clear evidence base in support of their relationships with the generation of organisational ineffectiveness. Practical implications – The paper aims to raise awareness and stimulate thinking by practising managers around the role that the “toxic triangle” of issues can play in creating the conditions by which organisations can incubate the potential for crisis. Originality/value – The paper seeks to bring together a disparate body of published work within the context of “organisational effectiveness” and sets out a series of dark characteristics that organisations need to consider if they are to avoid failure. The paper argues the case that effectiveness can be a fragile construct and that the mechanisms that generate failure also need to be actively considered when discussing what effectiveness means in practice.", "title": "" }, { "docid": "e36bc2b20c8fb5ba6d03672f7896a92c", "text": "We study the adaptation of convolutional neural networks to the complex temporal radio signal domain. We compare the efficacy of radio modulation classification using naively learned features against using expert features, which are currently used widely and well regarded in the field and we show significant performance improvements. We show that blind temporal learning on large and densely encoded time series using deep convolutional neural networks is viable and a strong candidate approach for this task.", "title": "" }, { "docid": "04ef2056dd9490820fd4309c906840aa", "text": "A millimeter-wave filtering monopulse antenna array based on substrate integrated waveguide (SIW) technology is proposed, manufactured, and tested in this communication. The proposed antenna array consists of a filter, a monopulse comparator, a feed network, and four antennas. A square dual-mode SIW cavity is designed to realize the monopulse comparator, in which internal coupling slots are located at its diagonal lines for the purpose of meeting the internal coupling coefficiencies in both sum and difference channels. Then, a four-output filter including the monopulse comparator is synthesized efficiently by modifying the coupling matrix of a single-ended filter. Finally, each SIW resonator coupled with those four outputs of the filter is replaced by a cavity-backed slot antenna so as to form the proposed filtering antenna array. A prototype is demonstrated at Ka band with a center frequency of 29.25 GHz and fractional bandwidth of 1.2%. Our measurement shows that, for the H-plane, the sidelobe levels of the sum pattern are less than -15 dB and the null depths of the difference pattern are less than -28 dB. The maximum measured gain of the sum beam at the center operating frequency is 8.1 dBi.", "title": "" }, { "docid": "8aca118a1171c2c3fd7057468adc84b2", "text": "Automatically constructing a complete documentary or educational film from scattered pieces of images and knowledge is a significant challenge. Even when this information is provided in an annotated format, the problems of ordering, structuring and animating sequences of images, and producing natural language descriptions that correspond to those images within multiple constraints, are each individually difficult tasks. 
This paper describes an approach for tackling these problems through a combination of rhetorical structures with narrative and film theory to produce movie-like visual animations from still images along with natural language generation techniques needed to produce text descriptions of what is being seen in the animations. The use of rhetorical structures from NLG is used to integrate separate components for video creation and script generation. We further describe an implementation, named GLAMOUR, that produces actual, short video documentaries, focusing on a cultural heritage domain, and that have been evaluated by professional filmmakers.  2005 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "0048b244bd55a724f9bcf4dbf5e551a8", "text": "In the research reported here, we investigated the debiasing effect of mindfulness meditation on the sunk-cost bias. We conducted four studies (one correlational and three experimental); the results suggest that increased mindfulness reduces the tendency to allow unrecoverable prior costs to influence current decisions. Study 1 served as an initial correlational demonstration of the positive relationship between trait mindfulness and resistance to the sunk-cost bias. Studies 2a and 2b were laboratory experiments examining the effect of a mindfulness-meditation induction on increased resistance to the sunk-cost bias. In Study 3, we examined the mediating mechanisms of temporal focus and negative affect, and we found that the sunk-cost bias was attenuated by drawing one's temporal focus away from the future and past and by reducing state negative affect, both of which were accomplished through mindfulness meditation.", "title": "" }, { "docid": "d22e8f2029e114b0c648a2cdfba4978a", "text": "This paper considers innovative marketing within the context of a micro firm, exploring how such firm’s marketing practices can take advantage of digital media. Factors that influence a micro firm’s innovative activities are examined and the development and implementation of digital media in the firm’s marketing practice is explored. Despite the significance of marketing and innovation to SMEs, a lack of literature and theory on innovation in marketing theory exists. Research suggests that small firms’ marketing practitioners and entrepreneurs have identified their marketing focus on the 4Is. This paper builds on knowledge in innovation and marketing and examines the process in a micro firm. A qualitative approach is applied using action research and case study approach. The relevant literature is reviewed as the starting point to diagnose problems and issues anticipated by business practitioners. A longitudinal study is used to illustrate the process of actions taken with evaluations and reflections presented. The exploration illustrates that in practice much of the marketing activities within micro firms are driven by incremental innovation. This research emphasises that integrating Information Communication Technologies (ICTs) successfully in marketing requires marketers to take an active managerial role far beyond their traditional areas of competence and authority.", "title": "" }, { "docid": "8a16fe77b90f86adcdaf87f873b59d44", "text": "As computational learning agents move into domains that incur real costs (e.g., autonomous driving or financial investment), it will be necessary to learn good policies without numerous high-cost learning trials. 
One promising approach to reducing sample complexity of learning a task is knowledge transfer from humans to agents. Ideally, methods of transfer should be accessible to anyone with task knowledge, regardless of that person's expertise in programming and AI. This paper focuses on allowing a human trainer to interactively shape an agent's policy via reinforcement signals. Specifically, the paper introduces \"Training an Agent Manually via Evaluative Reinforcement,\" or TAMER, a framework that enables such shaping. Differing from previous approaches to interactive shaping, a TAMER agent models the human's reinforcement and exploits its model by choosing actions expected to be most highly reinforced. Results from two domains demonstrate that lay users can train TAMER agents without defining an environmental reward function (as in an MDP) and indicate that human training within the TAMER framework can reduce sample complexity over autonomous learning algorithms.", "title": "" }, { "docid": "195f4ab1fe7950d011a9fd01a567128b", "text": "To bridge the gap between humans and machines in image understanding and describing, we need further insight into how people describe a perceived scene. In this paper, we study the agreement between bottom-up saliency-based visual attention and object referrals in scene description constructs. We investigate the properties of human-written descriptions and machine-generated ones. We then propose a saliency-boosted image captioning model in order to investigate benefits from low-level cues in language models. We learn that (1) humans mention more salient objects earlier than less salient ones in their descriptions, (2) the better a captioning model performs, the better attention agreement it has with human descriptions, (3) the proposed saliencyboosted model, compared to its baseline form, does not improve significantly on the MS COCO database, indicating explicit bottom-up boosting does not help when the task is well learnt and tuned on a data, (4) a better generalization ability is, however, observed for the saliency-boosted model on unseen data.", "title": "" }, { "docid": "95063d2a5b2df6c13c89ecfdceeb6c06", "text": "This paper proposes a novel reference signal generation method for the unified power quality conditioner (UPQC) adopted to compensate current and voltage-quality problems of sensitive loads. The UPQC consists of a shunt and series converter having a common dc link. The shunt converter eliminates current harmonics originating from the nonlinear load side and the series converter mitigates voltage sag/swell originating from the supply side. The developed controllers for shunt and series converters are based on an enhanced phase-locked loop and nonlinear adaptive filter. The dc link control strategy is based on the fuzzy-logic controller. A fast sag/swell detection method is also presented. The efficacy of the proposed system is tested through simulation studies using the Power System Computer Aided Design/Electromagnetic Transients dc analysis program. The proposed UPQC achieves superior capability of mitigating the effects of voltage sag/swell and suppressing the load current harmonics under distorted supply conditions.", "title": "" } ]
scidocsrr
945ba57676c8d5d5f087939aa6b5a6b5
Obstacle detection with ultrasonic sensors and signal analysis metrics
[ { "docid": "990c123bcc1bf3bbf2a42990ba724169", "text": "This paper demonstrates an innovative and simple solution for obstacle detection and collision avoidance of unmanned aerial vehicles (UAVs) optimized for and evaluated with quadrotors. The sensors exploited in this paper are low-cost ultrasonic and infrared range finders, which are much cheaper though noisier than more expensive sensors such as laser scanners. This needs to be taken into consideration for the design, implementation, and parametrization of the signal processing and control algorithm for such a system, which is the topic of this paper. For improved data fusion, inertial and optical flow sensors are used as a distance derivative for reference. As a result, a UAV is capable of distance controlled collision avoidance, which is more complex and powerful than comparable simple solutions. At the same time, the solution remains simple with a low computational burden. Thus, memory and time-consuming simultaneous localization and mapping is not required for collision avoidance.", "title": "" } ]
[ { "docid": "963f97c27adbc7d1136e713247e9a852", "text": "Scheduling in the context of parallel systems is often thought of in terms of assigning tasks in a program to processors, so as to minimize the makespan. This formulation assumes that the processors are dedicated to the program in question. But when the parallel system is shared by a number of users, this is not necessarily the case. In the context of multiprogrammed parallel machines, scheduling refers to the execution of threads from competing programs. This is an operating system issue, involved with resource allocation, not a program development issue. Scheduling schemes for multiprogrammed parallel systems can be classi ed as one or two leveled. Single-level scheduling combines the allocation of processing power with the decision of which thread will use it. Two level scheduling decouples the two issues: rst, processors are allocated to the job, and then the job's threads are scheduled using this pool of processors. The processors of a parallel system can be shared in two basic ways, which are relevant for both one-level and two-level scheduling. One approach is to use time slicing, e.g. when all the processors in the system (or all the processors in the pool) service a global queue of ready threads. The other approach is to use space slicing, and partition the processors statically or dynamically among the di erent jobs. As these approaches are orthogonal to each other, it is also possible to combine them in various ways; for example, this is often done in gang scheduling. Systems using the various approaches are described, and the implications of the di erent mechanisms are discussed. The goals of this survey are to describe the many di erent approaches within a uni ed framework based on the mechanisms used to achieve multiprogramming, and at the same time document commercial systems that have not been described in the open literature.", "title": "" }, { "docid": "add026119d82ec730038fcc3521304c5", "text": "Deep Learning has emerged as a new area in machine learning and is applied to a number of signal and image applications.The main purpose of the work presented in this paper, is to apply the concept of a Deep Learning algorithm namely, Convolutional neural networks (CNN) in image classification. The algorithm is tested on various standard datasets, like remote sensing data of aerial images (UC Merced Land Use Dataset) and scene images from SUN database. The performance of the algorithm is evaluated based on the quality metric known as Mean Squared Error (MSE) and classification accuracy. The graphical representation of the experimental results is given on the basis of MSE against the number of training epochs. The experimental result analysis based on the quality metrics and the graphical representation proves that the algorithm (CNN) gives fairly good classification accuracy for all the tested datasets.", "title": "" }, { "docid": "6e675e8a57574daf83ab78cea25688f5", "text": "Collecting quality data from software projects can be time-consuming and expensive. Hence, some researchers explore “unsupervised” approaches to quality prediction that does not require labelled data. An alternate technique is to use “supervised” approaches that learn models from project data labelled with, say, “defective” or “not-defective”. Most researchers use these supervised models since, it is argued, they can exploit more knowledge of the projects. \nAt FSE’16, Yang et al. 
reported startling results where unsupervised defect predictors outperformed supervised predictors for effort-aware just-in-time defect prediction. If confirmed, these results would lead to a dramatic simplification of a seemingly complex task (data mining) that is widely explored in the software engineering literature. \nThis paper repeats and refutes those results as follows. (1) There is much variability in the efficacy of the Yang et al. predictors so even with their approach, some supervised data is required to prune weaker predictors away. (2) Their findings were grouped across N projects. When we repeat their analysis on a project-by-project basis, supervised predictors are seen to work better. \nEven though this paper rejects the specific conclusions of Yang et al., we still endorse their general goal. In our experiments, supervised predictors did not perform outstandingly better than unsupervised ones for effort-aware just-in-time defect prediction. Hence, there may indeed be some combination of unsupervised learners that achieves comparable performance to supervised ones. We therefore encourage others to work in this promising area.", "title": "" }, { "docid": "bffddca72c7e9d6e5a8c760758a98de0", "text": "In this paper we present Sentimentor, a tool for sentiment analysis of Twitter data. Sentimentor utilises the naive Bayes Classifier to classify Tweets into positive, negative or objective sets. We present experimental evaluation of our dataset and classification results; our findings are not contradictory with existing work.", "title": "" }, { "docid": "848f8efe11785c00e8e8af737d173d44", "text": "Detecting frauds in credit card transactions is perhaps one of the best testbeds for computational intelligence algorithms. In fact, this problem involves a number of relevant challenges, namely: concept drift (customers’ habits evolve and fraudsters change their strategies over time), class imbalance (genuine transactions far outnumber frauds), and verification latency (only a small set of transactions are timely checked by investigators). However, the vast majority of learning algorithms that have been proposed for fraud detection rely on assumptions that hardly hold in a real-world fraud-detection system (FDS). This lack of realism concerns two main aspects: 1) the way and timing with which supervised information is provided and 2) the measures used to assess fraud-detection performance. This paper has three major contributions. First, we propose, with the help of our industrial partner, a formalization of the fraud-detection problem that realistically describes the operating conditions of FDSs that every day analyze massive streams of credit card transactions. We also illustrate the most appropriate performance measures to be used for fraud-detection purposes. Second, we design and assess a novel learning strategy that effectively addresses class imbalance, concept drift, and verification latency. Third, in our experiments, we demonstrate the impact of class imbalance and concept drift in a real-world data stream containing more than 75 million transactions, authorized over a time window of three years.", "title": "" }, { "docid": "b3235d925a1f452ee5ed97cac709b9d4", "text": "Xiaoming Zhai is a doctoral student in the Department of Physics, Beijing Normal University, and is a visiting scholar in the College of Education, University of Washington. His research interests include physics assessment and evaluation, as well as technology-supported physics instruction.
He has been a distinguished high school physics teacher who won numerous nationwide instructional awards. Meilan Zhang is an instructor in the Department of Teacher Education at University of Texas at El Paso. Her research focuses on improving student learning using mobile technology, understanding Internet use and the digital divide using big data from Internet search trends and Web analytics. Min Li is an Associate Professor in the College of Education, University of Washington. Her expertise is science assessment and evaluation, and quantitative methods. Address for correspondence: Xiaoming Zhai, Department of Physics, Beijing Normal University, Room A321, No. 19 Xinjiekouwai Street, Haidian District, Beijing 100875, China. Email: xiaomingzh@mail.bnu.edu.cn", "title": "" }, { "docid": "2b23723ab291aeff31781cba640b987b", "text": "As the urban population is increasing, more and more cars are circulating in the city to search for parking spaces which contributes to the global problem of traffic congestion. To alleviate the parking problems, smart parking systems must be implemented. In this paper, the background on parking problems is introduced and relevant algorithms, systems, and techniques behind the smart parking are reviewed and discussed. This paper provides a good insight into the guidance, monitoring and reservations components of the smart car parking and directions to the future development.", "title": "" }, { "docid": "4bd7a933cf0d54a84c106a1591452565", "text": "Face anti-spoofing (a.k.a. presentation attack detection) has recently emerged as an active topic with great significance for both academia and industry due to the rapidly increasing demand in user authentication on mobile phones, PCs, tablets, and so on. Recently, numerous face spoofing detection schemes have been proposed based on the assumption that training and testing samples are in the same domain in terms of the feature space and marginal probability distribution. However, due to unlimited variations of the dominant conditions (illumination, facial appearance, camera quality, and so on) in face acquisition, such single domain methods lack generalization capability, which further prevents them from being applied in practical applications. In light of this, we introduce an unsupervised domain adaptation face anti-spoofing scheme to address the real-world scenario that learns the classifier for the target domain based on training samples in a different source domain. In particular, an embedding function is first imposed based on source and target domain data, which maps the data to a new space where the distribution similarity can be measured. Subsequently, the Maximum Mean Discrepancy between the latent features in source and target domains is minimized such that a more generalized classifier can be learned. State-of-the-art representations including both hand-crafted and deep neural network learned features are further adopted into the framework to quest the capability of them in domain adaptation. Moreover, we introduce a new database for face spoofing detection, which contains more than 4000 face samples with a large variety of spoofing types, capture devices, illuminations, and so on. 
Extensive experiments on existing benchmark databases and the new database verify that the proposed approach can gain significantly better generalization capability in cross-domain scenarios by providing consistently better anti-spoofing performance.", "title": "" }, { "docid": "b56a6fe9c9d4b45e9d15054004fac918", "text": "Code-switching refers to the phenomena of mixing of words or phrases from foreign languages while communicating in a native language by the multilingual speakers. Codeswitching is a global phenomenon and is widely accepted in multilingual communities. However, for training the language model (LM) for such tasks, a very limited code-switched textual resources are available as yet. In this work, we present an approach to reduce the perplexity (PPL) of Hindi-English code-switched data when tested over the LM trained on purely native Hindi data. For this purpose, we propose a novel textual feature which allows the LM to predict the code-switching instances. The proposed feature is referred to as code-switching factor (CS-factor). Also, we developed a tagger that facilitates the automatic tagging of the code-switching instances. This tagger is trained on a development data and assigns an equivalent class of foreign (English) words to each of the potential native (Hindi) words. For this study, the textual resource has been created by crawling the blogs from a couple of websites educating about the usage of the Internet. In the context of recognition of the code-switching data, the proposed technique is found to yield a substantial improvement in terms of PPL.", "title": "" }, { "docid": "b54abd40f41235fa8e8cd4e9f42cd777", "text": "This paper presents a review of thermal energy storage system design methodologies and the factors to be considered at different hierarchical levels for concentrating solar power (CSP) plants. Thermal energy storage forms a key component of a power plant for improvement of its dispatchability. Though there have been many reviews of storage media, there are not many that focus on storage system design along with its integration into the power plant. This paper discusses the thermal energy storage system designs presented in the literature along with thermal and exergy efficiency analyses of various thermal energy storage systems integrated into the power plant. Economic aspects of these systems and the relevant publications in literature are also summarized in this effort. 2013 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "63da0b3d1bc7d6aedd5356b8cdf67b24", "text": "This paper concentrated on a new application of Deep Neural Network (DNN) approach. The DNN, also widely known as Deep Learning(DL), has been the most popular topic in research community recently. Through the DNN, the original data set can be represented in a new feature space with machine learning algorithms, and intelligence models may have the chance to obtain a better performance in the “learned” feature space. Scientists have achieved encouraging results by employing DNN in some research fields, including Computer Vision, Speech Recognition, Natural Linguistic Programming and Bioinformation Processing. However, as an approach mainly functioned for learning features, DNN is reasonably believed to be a more universal approach: it may have the potential in other data domains and provide better feature spaces for other type of problems. In this paper, we present some initial investigations on applying DNN to deal with the time series problem in meteorology field. 
In our research, we apply DNN to process the massive weather data involving millions of atmosphere records provided by The Hong Kong Observatory (HKO)1. The obtained features are employed to predict the weather change in the next 24 hours. The results show that the DNN is able to provide a better feature space for weather data sets, and DNN is also a potential tool for the feature fusion of time series problems.", "title": "" }, { "docid": "1fcd6f0c91522a91fa05b0d969f8eec1", "text": "Nonnegative matrix factorization (NMF) is a popular method for multivariate analysis of nonnegative data, the goal of which is to decompose a data matrix into a product of two factor matrices with all entries in factor matrices restricted to be nonnegative. NMF was shown to be useful in a task of clustering (especially document clustering), but in some cases NMF produces the results inappropriate to the clustering problems. In this paper, we present an algorithm for orthogonal nonnegative matrix factorization, where an orthogonality constraint is imposed on the nonnegative decomposition of a term-document matrix. The result of orthogonal NMF can be clearly interpreted for the clustering problems, and also the performance of clustering is usually better than that of the NMF. We develop multiplicative updates directly from true gradient on Stiefel manifold, whereas existing algorithms consider additive orthogonality constraints. Experiments on several different document data sets show our orthogonal NMF algorithms perform better in a task of clustering, compared to the standard NMF and an existing orthogonal NMF.", "title": "" }, { "docid": "e048d73b37168c7b7ed46915e11b1bf0", "text": "Creating graphic designs can be challenging for novice users. This paper presents DesignScape, a system which aids the design process by making interactive layout suggestions, i.e., changes in the position, scale, and alignment of elements. The system uses two distinct but complementary types of suggestions: refinement suggestions, which improve the current layout, and brainstorming suggestions, which change the style. We investigate two interfaces for interacting with suggestions. First, we develop a suggestive interface, where suggestions are previewed and can be accepted. Second, we develop an adaptive interface where elements move automatically to improve the layout. We compare both interfaces with a baseline without suggestions, and show that for novice designers, both interfaces produce significantly better layouts, as evaluated by other novices.", "title": "" }, { "docid": "01202e09e54a1fc9f5b36d67fbbf3870", "text": "This paper is intended to investigate the copper-graphene surface plasmon resonance (SPR)-based biosensor by considering the high adsorption efficiency of graphene. Copper (Cu) is used as a plasmonic material whereas graphene is used to prevent Cu from oxidation and enhance the reflectance intensity. Numerical investigation is performed using finite-difference-time-domain (FDTD) method by comparing the sensing performance such as reflectance intensity that explains the sensor sensitivity and the full-width-at-half-maximum (FWHM) of the spectrum for detection accuracy. The measurements were observed with various Cu thin film thicknesses ranging from 20nm to 80nm with 785nm operating wavelength. The proposed sensor shows that the 40nm-thick Cu-graphene (1 layer) SPR-based sensor gave better performance with narrower plasmonic spectrum line width (reflectance intensity of 91.2%) and better FWHM of 3.08°. 
The measured results also indicate that the Cu-graphene SPR-based sensor is suitable for detecting urea with refractive index of 1.49 in dielectric medium.", "title": "" }, { "docid": "609997fbec79d71daa7c63e6fbbc6cc4", "text": "Memory encoding occurs rapidly, but the consolidation of memory in the neocortex has long been held to be a more gradual process. We now report, however, that systems consolidation can occur extremely quickly if an associative \"schema\" into which new information is incorporated has previously been created. In experiments using a hippocampal-dependent paired-associate task for rats, the memory of flavor-place associations became persistent over time as a putative neocortical schema gradually developed. New traces, trained for only one trial, then became assimilated and rapidly hippocampal-independent. Schemas also played a causal role in the creation of lasting associative memory representations during one-trial learning. The concept of neocortical schemas may unite psychological accounts of knowledge structures with neurobiological theories of systems memory consolidation.", "title": "" }, { "docid": "3e8f290f9d19996feb6551cde8815307", "text": "Simplification of IT services is an imperative of the times we are in. Large legacy behemoths that exist at financial institutions are a result of years of patch work development on legacy landscapes that have developed in silos at various lines of businesses (LOBs). This increases costs -- for running financial services, changing the services as well as providing services to customers. We present here a basic guide to what constitutes complexity of IT landscape at financial institutions, what simplification means, and opportunities for simplification and how it can be carried out. We also explain a 4-phase approach to planning and executing Simplification of IT services at financial institutions.", "title": "" }, { "docid": "526e36dd9e3db50149687ea6358b4451", "text": "A query over RDF data is usually expressed in terms of matching between a graph representing the target and a huge graph representing the source. Unfortunately, graph matching is typically performed in terms of subgraph isomorphism, which makes semantic data querying a hard problem. In this paper we illustrate a novel technique for querying RDF data in which the answers are built by combining paths of the underlying data graph that align with paths specified by the query. The approach is approximate and generates the combinations of the paths that best align with the query. We show that, in this way, the complexity of the overall process is significantly reduced and verify experimentally that our framework exhibits an excellent behavior with respect to other approaches in terms of both efficiency and effectiveness.", "title": "" }, { "docid": "f45e43935de492d3598469cd24c48188", "text": "Given a task of predicting Y from X , a loss function L, and a set of probability distributions Γ on (X,Y ), what is the optimal decision rule minimizing the worstcase expected loss over Γ? In this paper, we address this question by introducing a generalization of the maximum entropy principle. Applying this principle to sets of distributions with marginal on X constrained to be the empirical marginal, we provide a minimax interpretation of the maximum likelihood problem over generalized linear models, which connects the minimax problem for each loss function to a generalized linear model. 
While in some cases such as quadratic and logarithmic loss functions we revisit well-known linear and logistic regression models, our approach reveals novel models for other loss functions. In particular, for the 0-1 loss we derive a classification approach which we call the minimax SVM. The minimax SVM minimizes the worst-case expected 0-1 loss over the proposed Γ by solving a tractable optimization problem. Moreover, applying the minimax approach to Brier loss function we derive a new classification model called the minimax Brier. The maximum likelihood problem for this model uses the Huber penalty function. We perform several numerical experiments to show the power of the minimax SVM and the minimax Brier.", "title": "" }, { "docid": "00a3504c21cf0a971a717ce676d76933", "text": "In recent years, researchers have proposed systems for running trusted code on an untrusted operating system. Protection mechanisms deployed by such systems keep a malicious kernel from directly manipulating a trusted application's state. Under such systems, the application and kernel are, conceptually, peers, and the system call API defines an RPC interface between them.\n We introduce Iago attacks, attacks that a malicious kernel can mount in this model. We show how a carefully chosen sequence of integer return values to Linux system calls can lead a supposedly protected process to act against its interests, and even to undertake arbitrary computation at the malicious kernel's behest.\n Iago attacks are evidence that protecting applications from malicious kernels is more difficult than previously realized.", "title": "" }, { "docid": "625002b73c5e386989ddd243a71a1b56", "text": "AutoTutor is a learning environment that tutors students by holding a conversation in natural language. AutoTutor has been developed for Newtonian qualitative physics and computer literacy. Its design was inspired by explanation-based constructivist theories of learning, intelligent tutoring systems that adaptively respond to student knowledge, and empirical research on dialogue patterns in tutorial discourse. AutoTutor presents challenging problems (formulated as questions) from a curriculum script and then engages in mixed initiative dialogue that guides the student in building an answer. It provides the student with positive, neutral, or negative feedback on the student's typed responses, pumps the student for more information, prompts the student to fill in missing words, gives hints, fills in missing information with assertions, identifies and corrects erroneous ideas, answers the student's questions, and summarizes answers. AutoTutor has produced learning gains of approximately .70 sigma for deep levels of comprehension.", "title": "" } ]
scidocsrr
ad11557e120de6ea0d14b61f7169719b
Learning-by-Synthesis for Appearance-Based 3D Gaze Estimation
[ { "docid": "6298ab25b566616b0f3c1f6ee8889d19", "text": "This paper addresses the problem of free gaze estimation under unrestricted head motion. More precisely, unlike previous approaches that mainly focus on estimating gaze towards a small planar screen, we propose a method to estimate the gaze direction in the 3D space. In this context the paper makes the following contributions: (i) leveraging on Kinect device, we propose a multimodal method that rely on depth sensing to obtain robust and accurate head pose tracking even under large head pose, and on the visual data to obtain the remaining eye-in-head gaze directional information from the eye image; (ii) a rectification scheme of the image that exploits the 3D mesh tracking, allowing to conduct a head pose free eye-in-head gaze directional estimation; (iii) a simple way of collecting ground truth data thanks to the Kinect device. Results on three users demonstrate the great potential of our approach.", "title": "" } ]
[ { "docid": "1f355bd6b46e16c025ba72aa9250c61d", "text": "Whole-cell biosensors have several advantages for the detection of biological substances and have proven to be useful analytical tools. However, several hurdles have limited whole-cell biosensor application in the clinic, primarily their unreliable operation in complex media and low signal-to-noise ratio. We report that bacterial biosensors with genetically encoded digital amplifying genetic switches can detect clinically relevant biomarkers in human urine and serum. These bactosensors perform signal digitization and amplification, multiplexed signal processing with the use of Boolean logic gates, and data storage. In addition, we provide a framework with which to quantify whole-cell biosensor robustness in clinical samples together with a method for easily reprogramming the sensor module for distinct medical detection agendas. Last, we demonstrate that bactosensors can be used to detect pathological glycosuria in urine from diabetic patients. These next-generation whole-cell biosensors with improved computing and amplification capacity could meet clinical requirements and should enable new approaches for medical diagnosis.", "title": "" }, { "docid": "36da2b6102762c80b3ae8068d764e220", "text": "Video games have become an essential part of the way people play and learn. While an increasing number of people are using games to learn in informal environments, their acceptance in the classroom as an instructional activity has been mixed. Successes in informal learning have caused supporters to falsely believe that implementing them into the classroom would be a relatively easy transition and have the potential to revolutionise the entire educational system. In spite of all the hype, many are puzzled as to why more teachers have not yet incorporated them into their teaching. The literature is littered with reports that point to a variety of reasons. One of the reasons, we believe, is that very little has been done to convince teachers that the effort to change their curriculum to integrate video games and other forms of technology is worthy of the effort. Not until policy makers realise the importance of professional British Journal of Educational Technology (2009) doi:10.1111/j.1467-8535.2009.01007.x © 2009 The Authors. Journal compilation © 2009 Becta. Published by Blackwell Publishing, 9600 Garsington Road, Oxford OX4 2DQ, UK and 350 Main Street, Malden, MA 02148, USA. development and training as an important use of funds will positive changes in thinking and perceptions come about, which will allow these various forms of technology to reach their potential. The authors have hypothesised that the major impediments to useful technology integration include the general lack of institutional infrastructure, poor teacher training, and overly-complicated technologies. Overcoming these obstacles requires both a top-down and a bottom-up approach. This paper presents the results of a pilot study with a group of preservice teachers to determine whether our hypotheses regarding potential negativity surrounding video games was valid and whether a wider scale study is warranted. The results of this study are discussed along with suggestions for further research and potential changes in teacher training programmes. Introduction Over the past 40 years, video games have become an increasingly popular way to play and learn. 
Those who play regularly often note that the major attraction is their ability to become quickly engaged and immersed in gameplay (Lenhart & Kayne, 2008). Many have taken notice of video games’ apparent effectiveness in teaching social interaction and critical thinking in informal learning environments. Beliefs about the effectiveness of video games in informal learning situations have been hyped to the extent that they are often described as the ‘holy grail’ that will revolutionise our entire educational system (Gee, 2003; Kirkley & Kirkley, 2004; Prensky, 2001; Sawyer, 2002). In spite of all the hype and promotion, many educators express puzzlement and disappointment that only a modest number of teachers have incorporated video games into their teaching (Egenfeldt-Nielsen, 2004; Pivec & Pivec, 2008). These results seem to mirror those reported on a general lack of successful integration on the part of teachers and educators of new technologies and media in general. The reasons reported in that research point to a varied and complex issue that involves dispelling preconceived notions, prejudices, and concerns (Kati, 2008; Kim & Baylor, 2008). It is our position that very little has been done to date to overcome these objections. We agree with Magliaro and Ezeife (2007) who posited that teachers can and do greatly influence the successes or failures of classroom interventions. Expenditures on media and technology alone do not guarantee their successful or productive use in the classroom. Policy makers need to realise that professional development and training is the most significant use of funds that will positively affect teaching styles and that will allow technology to reach its potential to change education. But as Cuban, Kirkpatrick and Peck (2001) noted, the practices of policy makers and administrators to increase the effective use of technologies in the classroom more often than not conflict with implementation. In their qualitative study of two Silicon Valley high schools, the authors found that despite ready access to computer technologies, only a handful of teachers actually changed their teaching practices (ie, moved from teacher-centered to student-centered pedagogies). Furthermore, the authors identified several barriers to technological innovation in the classroom, including most notably: a lack of preparation time, poor technical support, outdated technologies, and the inability to sustain interest in the particular lessons and a lack of opportunities for collaboration due to the rigid structure and short time periods allocated to instruction. The authors concluded by suggesting that the path for integrating technology would eventually flourish, but that it initially would be riddled with problems caused by impediments placed upon its success by a lack of institutional infrastructure, poor training, and overly-complicated technologies. We agree with those who suggest that any proposed classroom intervention correlates directly to the expectations and perceived value/benefit on the part of the integrating teachers, who largely control what and how their students learn (Hanusheck, Kain & Rivkin, 1998). Faced with these significant obstacles, it should not be surprising that video games, like other technologies, have been less than successful in transforming the classroom. We further suggest that overcoming these obstacles requires both a top-down and a bottom-up approach.
Policy makers carry the burden of correcting the infrastructural issues both for practical reasons as well as for creating optimism on the part of teachers to believe that their administrators actually support their decisions. On the other hand, anyone associated with educational systems for any length of time will agree that a top-down only approach is destined for failure. The successful adoption of any new classroom intervention is based, in larger part, on teachers’ investing in the belief that the experience is worth the effort. If a teacher sees little or no value in an intervention, or is unfamiliar with its use, then the chances that it will be properly implemented are minimised. In other words, a teacher’s adoption of any instructional strategy is directly correlated with his or her views, ideas, and expectations about what is possible, feasible, and useful. In their studies into the game playing habits of various college students, Shaffer, Squire and Gee (2005) alluded to the fact that of those that they interviewed, future teachers indicated that they did not play video games as often as those enrolled in other majors. Our review of these comments generated several additional research questions that we believe deserve further investigation. We began to hypothesise that if it were true that teachers, as a group, do not in fact play video games on a regular basis, it should not be surprising that they would have difficulty integrating games into their curriculum. They would not have sufficient basis to integrate the rules of gameplay with their instructional strategies, nor would they be able to make proper assessments as to which games might be the most effective. We understand that one does not have to actually like something or be good at something to appreciate its value. For example, one does not necessarily have to be a fan of rap music or have a knack for performing it to understand that it could be a useful teaching tool. But, on the other hand, we wondered whether the attitudes towards video games on the part of teachers were not merely neutral, but in fact actually negative, which would further undermine any attempts at successfully introducing games into their classrooms. This paper presents the results of a pilot study we conducted that utilised a group of preservice teachers to determine whether our hypothesis regarding potential negativity surrounding video games was valid and whether a wider scale study is warranted. In this examination, we utilised a preference survey to ask participants to reveal their impressions and expectancies about video games in general, their playing habits, and their personal assessments as to the potential role games might play in their future teaching strategies. We believe that the results we found are useful in determining ramifications for some potential changes in teacher preparation and professional development programmes. They provide more background on the kinds of learning that can take place, as described by Prensky (2001), Gee (2003) and others, they consider how to evaluate supposed educational games that exist in the market, and they suggest successful integration strategies.
Just as no one can assume that digital kids already have expertise in participatory learning simply because they are exposed to these experiences in their informal, outside of school activities, those responsible for teacher training cannot assume that just because up-and-coming teachers have been brought up in the digital age, they are automatically familiar with, disposed to using, and have positive ideas about how games can be integrated into their curriculum. As a case in point, we found that there exists a significant disconnect between teachers and their students regarding the value of gameplay, and whether one can efficiently and effectively learn from games. In this study, we also attempted to determine if there might be an interaction effect based on the type of console being used. We wanted to confirm Pearson and Bailey’s (2008) assertions that the Nintendo Wii (Nintendo Company, Ltd. 11-1 KamitobaHokodate-cho, Minami-ku, Kyoto 601-8501, Japan) consoles would not only promote improvements in physical move", "title": "" }, { "docid": "8e65001ed1e4a3994a95df2626ff4d89", "text": "The most popular metric distance used in iris code matching is Hamming distance. In this paper, we improve the performance of iris code matching stage by applying adaptive Hamming distance. Proposed method works with Hamming subsets with adaptive length. Based on density of masked bits in the Hamming subset, each subset is able to expand and adjoin to the right or left neighbouring bits. The adaptive behaviour of Hamming subsets increases the accuracy of Hamming distance computation and improves the performance of iris code matching. Results of applying proposed method on Chinese Academy of Science Institute of Automation, CASIA V3.3 shows performance of 99.96% and false rejection rate 0.06.", "title": "" }, { "docid": "868fe4091a136f16f6844e8739b65902", "text": "This paper uses an ant colony meta-heuristic optimization method to solve the redundancy allocation problem (RAP). The RAP is a well known NP-hard problem which has been the subject of much prior work, generally in a restricted form where each subsystem must consist of identical components in parallel to make computations tractable. Meta-heuristic methods overcome this limitation, and offer a practical way to solve large instances of the relaxed RAP where different components can be placed in parallel. The ant colony method has not yet been used in reliability design, yet it is a method that is expressly designed for combinatorial problems with a neighborhood structure, as in the case of the RAP. An ant colony optimization algorithm for the RAP is devised & tested on a well-known suite of problems from the literature. It is shown that the ant colony method performs with little variability over problem instance or random number seed. It is competitive with the best-known heuristics for redundancy allocation.", "title": "" }, { "docid": "ef3ac22e7d791113d08fd778a79008c3", "text": "Great efforts have been dedicated to harvesting knowledge bases from online encyclopedias. These knowledge bases play important roles in enabling machines to understand texts. However, most current knowledge bases are in English and non-English knowledge bases, especially Chinese ones, are still very rare. Many previous systems that extract knowledge from online encyclopedias, although are applicable for building a Chinese knowledge base, still suffer from two challenges. 
The first is that it requires great human efforts to construct an ontology and build a supervised knowledge extraction model. The second is that the update frequency of knowledge bases is very slow. To solve these challenges, we propose a never-ending Chinese Knowledge extraction system, CN-DBpedia, which can automatically generate a knowledge base that is of ever-increasing in size and constantly updated. Specially, we reduce the human costs by reusing the ontology of existing knowledge bases and building an end-to-end facts extraction model. We further propose a smart active update strategy to keep the freshness of our knowledge base with little human costs. The 164 million API calls of the published services justify the success of our system.", "title": "" }, { "docid": "bc4a72d96daf03f861b187fa73f57ff6", "text": "BACKGROUND\nShort-term preoperative radiotherapy and total mesorectal excision have each been shown to improve local control of disease in patients with resectable rectal cancer. We conducted a multicenter, randomized trial to determine whether the addition of preoperative radiotherapy increases the benefit of total mesorectal excision.\n\n\nMETHODS\nWe randomly assigned 1861 patients with resectable rectal cancer either to preoperative radiotherapy (5 Gy on each of five days) followed by total mesorectal excision (924 patients) or to total mesorectal excision alone (937 patients). The trial was conducted with the use of standardization and quality-control measures to ensure the consistency of the radiotherapy, surgery, and pathological techniques.\n\n\nRESULTS\nOf the 1861 patients randomly assigned to one of the two treatment groups, 1805 were eligible to participate. The overall rate of survival at two years among the eligible patients was 82.0 percent in the group assigned to both radiotherapy and surgery and 81.8 percent in the group assigned to surgery alone (P=0.84). Among the 1748 patients who underwent a macroscopically complete local resection, the rate of local recurrence at two years was 5.3 percent. The rate of local recurrence at two years was 2.4 percent in the radiotherapy-plus-surgery group and 8.2 percent in the surgery-only group (P<0.001).\n\n\nCONCLUSIONS\nShort-term preoperative radiotherapy reduces the risk of local recurrence in patients with rectal cancer who undergo a standardized total mesorectal excision.", "title": "" }, { "docid": "ad80f2e78e80397bd26dac5c0500266c", "text": "The group Lasso is an extension of the Lasso for feature selection on (predefined) nonoverlapping groups of features. The nonoverlapping group structure limits its applicability in practice. There have been several recent attempts to study a more general formulation where groups of features are given, potentially with overlaps between the groups. The resulting optimization is, however, much more challenging to solve due to the group overlaps. In this paper, we consider the efficient optimization of the overlapping group Lasso penalized problem. We reveal several key properties of the proximal operator associated with the overlapping group Lasso, and compute the proximal operator by solving the smooth and convex dual problem, which allows the use of the gradient descent type of algorithms for the optimization. Our methods and theoretical results are then generalized to tackle the general overlapping group Lasso formulation based on the eq norm. 
We further extend our algorithm to solve a nonconvex overlapping group Lasso formulation based on the capped norm regularization, which reduces the estimation bias introduced by the convex penalty. We have performed empirical evaluations using both a synthetic and the breast cancer gene expression dataset, which consists of 8,141 genes organized into (overlapping) gene sets. Experimental results show that the proposed algorithm is more efficient than existing state-of-the-art algorithms. Results also demonstrate the effectiveness of the nonconvex formulation for overlapping group Lasso.", "title": "" }, { "docid": "65a4197d7f12c320a34fdd7fcac556af", "text": "The article presents an overview of current specialized ontology engineering tools, as well as texts’ annotation tools based on ontologies. The main functions and features of these tools, their advantages and disadvantages are discussed. A systematic comparative analysis of means for engineering ontologies is presented. ACM Classification", "title": "" }, { "docid": "43a7e786704b5347f3b67c08ac9c4f70", "text": "Before beginning any robot task, users must position the robot's base, a task that now depends entirely on user intuition. While slight perturbation is tolerable for robots with moveable bases, correcting the problem is imperative for fixed- base robots if some essential task sections are out of reach. For mobile manipulation robots, it is necessary to decide on a specific base position before beginning manipulation tasks. This paper presents Reuleaux, an open source library for robot reachability analyses and base placement. It reduces the amount of extra repositioning and removes the manual work of identifying potential base locations. Based on the reachability map, base placement locations of a whole robot or only the arm can be efficiently determined. This can be applied to both statically mounted robots, where the position of the robot and workpiece ensure the maximum amount of work performed, and to mobile robots, where the maximum amount of workable area can be reached. The methods were tested on different robots of different specifications and evaluated for tasks in simulation and real world environment. Evaluation results indicate that Reuleaux had significantly improved performance than prior existing methods in terms of time-efficiency and range of applicability.", "title": "" }, { "docid": "0d25072b941ee3e8690d9bd274623055", "text": "The task of tracking multiple targets is often addressed with the so-called tracking-by-detection paradigm, where the first step is to obtain a set of target hypotheses for each frame independently. Tracking can then be regarded as solving two separate, but tightly coupled problems. The first is to carry out data association, i.e., to determine the origin of each of the available observations. The second problem is to reconstruct the actual trajectories that describe the spatio-temporal motion pattern of each individual target. The former is inherently a discrete problem, while the latter should intuitively be modeled in continuous space. Having to deal with an unknown number of targets, complex dependencies, and physical constraints, both are challenging tasks on their own and thus most previous work focuses on one of these subproblems. Here, we present a multi-target tracking approach that explicitly models both tasks as minimization of a unified discrete-continuous energy function. 
Trajectory properties are captured through global label costs, a recent concept from multi-model fitting, which we introduce to tracking. Specifically, label costs describe physical properties of individual tracks, e.g., linear and angular dynamics, or entry and exit points. We further introduce pairwise label costs to describe mutual interactions between targets in order to avoid collisions. By choosing appropriate forms for the individual energy components, powerful discrete optimization techniques can be leveraged to address data association, while the shapes of individual trajectories are updated by gradient-based continuous energy minimization. The proposed method achieves state-of-the-art results on diverse benchmark sequences.", "title": "" }, { "docid": "bdd1c64962bfb921762259cca4a23aff", "text": "Ever since the emergence of social networking sites (SNSs), it has remained a question without a conclusive answer whether SNSs make people more or less lonely. To achieve a better understanding, researchers need to move beyond studying overall SNS usage. In addition, it is necessary to attend to personal attributes as potential moderators. Given that SNSs provide rich opportunities for social comparison, one highly relevant personality trait would be social comparison orientation (SCO), and yet this personal attribute has been understudied in social media research. Drawing on literature of psychosocial implications of social media use and SCO, this study explored associations between loneliness and various Instagram activities and the role of SCO in this context. A total of 208 undergraduate students attending a U.S. mid-southern university completed a self-report survey (Mage = 19.43, SD = 1.35; 78 percent female; 57 percent White). Findings showed that Instagram interaction and Instagram browsing were both related to lower loneliness, whereas Instagram broadcasting was associated with higher loneliness. SCO moderated the relationship between Instagram use and loneliness such that Instagram interaction was related to lower loneliness only for low SCO users. The results revealed implications for healthy SNS use and the importance of including personality traits and specific SNS use patterns to disentangle the role of SNS use in psychological well-being.", "title": "" }, { "docid": "3072b7d80b0e9afffe6489996eca19aa", "text": "Brain tumors can appear anywhere in the brain and have vastly different sizes and morphology. Additionally, these tumors are often diffused and poorly contrasted. Consequently, the segmentation of brain tumor and intratumor subregions using magnetic resonance imaging (MRI) data with minimal human interventions remains a challenging task. In this paper, we present a novel fully automatic segmentation method from MRI data containing in vivo brain gliomas. This approach can not only localize the entire tumor region but can also accurately segment the intratumor structure. The proposed work was based on a cascaded deep learning convolutional neural network consisting of two subnetworks: (1) a tumor localization network (TLN) and (2) an intratumor classification network (ITCN). The TLN, a fully convolutional network (FCN) in conjunction with the transfer learning technology, was used to first process MRI data. The goal of the first subnetwork was to define the tumor region from an MRI slice. Then, the ITCN was used to label the defined tumor region into multiple subregions. 
Particularly, ITCN exploited a convolutional neural network (CNN) with deeper architecture and smaller kernel. The proposed approach was validated on multimodal brain tumor segmentation (BRATS 2015) datasets, which contain 220 high-grade glioma (HGG) and 54 low-grade glioma (LGG) cases. Dice similarity coefficient (DSC), positive predictive value (PPV), and sensitivity were used as evaluation metrics. Our experimental results indicated that our method could obtain the promising segmentation results and had a faster segmentation speed. More specifically, the proposed method obtained comparable and overall better DSC values (0.89, 0.77, and 0.80) on the combined (HGG + LGG) testing set, as compared to other methods reported in the literature. Additionally, the proposed approach was able to complete a segmentation task at a rate of 1.54 seconds per slice.", "title": "" }, { "docid": "8f1a5420deb75a2b664ceeaae8fc03f9", "text": "A stretchable and multiple-force-sensitive electronic fabric based on stretchable coaxial sensor electrodes is fabricated for artificial-skin application. This electronic fabric, with only one kind of sensor unit, can simultaneously map and quantify the mechanical stresses induced by normal pressure, lateral strain, and flexion.", "title": "" }, { "docid": "c2fc709aeb4c48a3bd2071b4693d4296", "text": "Following the success of deep convolutional networks, state-of-the-art methods for 3d human pose estimation have focused on deep end-to-end systems that predict 3d joint locations given raw image pixels. Despite their excellent performance, it is often not easy to understand whether their remaining error stems from a limited 2d pose (visual) understanding, or from a failure to map 2d poses into 3- dimensional positions.,,With the goal of understanding these sources of error, we set out to build a system that given 2d joint locations predicts 3d positions. Much to our surprise, we have found that, with current technology, \"lifting\" ground truth 2d joint locations to 3d space is a task that can be solved with a remarkably low error rate: a relatively simple deep feedforward network outperforms the best reported result by about 30% on Human3.6M, the largest publicly available 3d pose estimation benchmark. Furthermore, training our system on the output of an off-the-shelf state-of-the-art 2d detector (i.e., using images as input) yields state of the art results – this includes an array of systems that have been trained end-to-end specifically for this task. Our results indicate that a large portion of the error of modern deep 3d pose estimation systems stems from their visual analysis, and suggests directions to further advance the state of the art in 3d human pose estimation.", "title": "" }, { "docid": "a17818c54117d502c696abb823ba5a6b", "text": "The next generation of multimedia services have to be optimized in a personalized way, taking user factors into account for the evaluation of individual experience. Previous works have investigated the influence of user factors mostly in a controlled laboratory environment which often includes a limited number of users and fails to reflect real-life environment. Social media, especially Facebook, provide an interesting alternative for Internet-based subjective evaluation. In this article, we develop (and open-source) a Facebook application, named YouQ1, as an experimental platform for studying individual experience for videos. 
Our results show that subjective experiments based on YouQ can produce reliable results as compared to a controlled laboratory experiment. Additionally, YouQ has the ability to collect user information automatically from Facebook, which can be used for modeling individual experience.", "title": "" }, { "docid": "5d80fa7763fd815e4e9530bc1a99b5d0", "text": "This paper introduces a new email dataset, consisting of both single and thread emails, manually annotated with summaries and keywords. A total of 349 emails and threads have been annotated. The dataset is our first step toward developing automatic methods for summarization and keyword extraction from emails. We describe the email corpus, along with the annotation interface, annotator guidelines, and agreement studies.", "title": "" }, { "docid": "9a4dab93461185ea98ccea7733081f73", "text": "This article discusses two standards operating on principles of cognitive radio in television white space (TV WS) frequencies 802.22and 802.11af. The comparative analysis of these systems will be presented and the similarities as well as the differences among these two perspective standards will be discussed from the point of view of physical (PHY), medium access control (MAC) and cognitive layers.", "title": "" }, { "docid": "569fed958b7a471e06ce718102687a1e", "text": "The introduction of convolutional layers greatly advanced the performance of neural networks on image tasks due to innately capturing a way of encoding and learning translation-invariant operations, matching one of the underlying symmetries of the image domain. In comparison, there are a number of problems in which there are a number of different inputs which are all ’of the same type’ — multiple particles, multiple agents, multiple stock prices, etc. The corresponding symmetry to this is permutation symmetry, in that the algorithm should not depend on the specific ordering of the input data. We discuss a permutation-invariant neural network layer in analogy to convolutional layers, and show the ability of this architecture to learn to predict the motion of a variable number of interacting hard discs in 2D. In the same way that convolutional layers can generalize to different image sizes, the permutation layer we describe generalizes to different numbers of objects.", "title": "" }, { "docid": "48a0e75b97fdaa734f033c6b7791e81f", "text": "OBJECTIVE\nTo examine the role of physical activity, inactivity, and dietary patterns on annual weight changes among preadolescents and adolescents, taking growth and development into account.\n\n\nSTUDY DESIGN\nWe studied a cohort of 6149 girls and 4620 boys from all over the United States who were 9 to 14 years old in 1996. All returned questionnaires in the fall of 1996 and a year later in 1997. Each child provided his or her current height and weight and a detailed assessment of typical past-year dietary intakes, physical activities, and recreational inactivities (TV, videos/VCR, and video/computer games).\n\n\nMETHODS\nOur hypotheses were that physical activity and dietary fiber intake are negatively correlated with annual changes in adiposity and that recreational inactivity (TV/videos/games), caloric intake, and dietary fat intake are positively correlated with annual changes in adiposity. Separately for boys and girls, we performed regression analysis of 1-year change in body mass index (BMI; kg/m(2)). 
All hypothesized factors were in the model simultaneously with several adjustment factors.\n\n\nRESULTS\nLarger increases in BMI from 1996 to 1997 were among girls who reported higher caloric intakes (.0061 +/-.0026 kg/m(2) per 100 kcal/day; beta +/- standard error), less physical activity (-.0284 +/-.0142 kg/m(2)/hour/day) and more time with TV/videos/games (.0372 +/-.0106 kg/m(2)/hour/day) during the year between the 2 BMI assessments. Larger BMI increases were among boys who reported more time with TV/videos/games (.0384 +/-.0101) during the year. For both boys and girls, a larger rise in caloric intake from 1996 to 1997 predicted larger BMI increases (girls:.0059 +/-.0027 kg/m(2) per increase of 100 kcal/day; boys:.0082 +/-.0030). No significant associations were noted for energy-adjusted dietary fat or fiber.\n\n\nCONCLUSIONS\nFor both boys and girls, a 1-year increase in BMI was larger in those who reported more time with TV/videos/games during the year between the 2 BMI measurements, and in those who reported that their caloric intakes increased more from 1 year to the next. Larger year-to-year increases in BMI were also seen among girls who reported higher caloric intakes and less physical activity during the year between the 2 BMI measurements. Although the magnitudes of these estimated effects were small, their cumulative effects, year after year during adolescence, would produce substantial gains in body weight. Strategies to prevent excessive caloric intakes, to decrease time with TV/videos/games, and to increase physical activity would be promising as a means to prevent obesity.", "title": "" }, { "docid": "cf95d41dc5a2bcc31b691c04e3fb8b96", "text": "Resection of pancreas, in particular pancreaticoduodenectomy, is a complex procedure, commonly performed in appropriately selected patients with benign and malignant disease of the pancreas and periampullary region. Despite significant improvements in the safety and efficacy of pancreatic surgery, pancreaticoenteric anastomosis continues to be the \"Achilles heel\" of pancreaticoduodenectomy, due to its association with a measurable risk of leakage or failure of healing, leading to pancreatic fistula. The morbidity rate after pancreaticoduodenectomy remains high in the range of 30% to 65%, although the mortality has significantly dropped to below 5%. Most of these complications are related to pancreatic fistula, with serious complications of intra-abdominal abscess, postoperative bleeding, and multiorgan failure. Several pharmacological and technical interventions have been suggested to decrease the pancreatic fistula rate, but the results have been controversial. This paper considers definition and classification of pancreatic fistula, risk factors, and preventive approach and offers management strategy when they do occur.", "title": "" } ]
scidocsrr
65a1853af116c63a9854549e34fd9d75
Texture-aware ASCII art synthesis with proportional fonts
[ { "docid": "921b024ca0a99e3b7cd3a81154d70c66", "text": "Image quality assessment (IQA) aims to use computational models to measure the image quality consistently with subjective evaluations. The well-known structural similarity index brings IQA from pixel- to structure-based stage. In this paper, a novel feature similarity (FSIM) index for full reference IQA is proposed based on the fact that human visual system (HVS) understands an image mainly according to its low-level features. Specifically, the phase congruency (PC), which is a dimensionless measure of the significance of a local structure, is used as the primary feature in FSIM. Considering that PC is contrast invariant while the contrast information does affect HVS' perception of image quality, the image gradient magnitude (GM) is employed as the secondary feature in FSIM. PC and GM play complementary roles in characterizing the image local quality. After obtaining the local quality map, we use PC again as a weighting function to derive a single quality score. Extensive experiments performed on six benchmark IQA databases demonstrate that FSIM can achieve much higher consistency with the subjective evaluations than state-of-the-art IQA metrics.", "title": "" }, { "docid": "07a1d62b56bd1e2acf4282f69e85fb93", "text": "Many state-of-the-art perceptual image quality assessment (IQA) algorithms share a common two-stage structure: local quality/distortion measurement followed by pooling. While significant progress has been made in measuring local image quality/distortion, the pooling stage is often done in ad-hoc ways, lacking theoretical principles and reliable computational models. This paper aims to test the hypothesis that when viewing natural images, the optimal perceptual weights for pooling should be proportional to local information content, which can be estimated in units of bit using advanced statistical models of natural images. Our extensive studies based upon six publicly-available subject-rated image databases concluded with three useful findings. First, information content weighting leads to consistent improvement in the performance of IQA algorithms. Second, surprisingly, with information content weighting, even the widely criticized peak signal-to-noise-ratio can be converted to a competitive perceptual quality measure when compared with state-of-the-art algorithms. Third, the best overall performance is achieved by combining information content weighting with multiscale structural similarity measures.", "title": "" } ]
[ { "docid": "3d4cfb2d3ba1e70e5dd03060f5d5f663", "text": "BACKGROUND\nAlzheimer's disease (AD) causes considerable distress in caregivers who are continuously required to deal with requests from patients. Coping strategies play a fundamental role in modulating the psychologic impact of the disease, although their role is still debated. The present study aims to evaluate the burden and anxiety experienced by caregivers, the effectiveness of adopted coping strategies, and their relationships with burden and anxiety.\n\n\nMETHODS\nEighty-six caregivers received the Caregiver Burden Inventory (CBI) and the State-Trait Anxiety Inventory (STAI Y-1 and Y-2). The coping strategies were assessed by means of the Coping Inventory for Stressful Situations (CISS), according to the model proposed by Endler and Parker in 1990.\n\n\nRESULTS\nThe CBI scores (overall and single sections) were extremely high and correlated with dementia severity. Women, as well as older caregivers, showed higher scores. The trait anxiety (STAI-Y-2) correlated with the CBI overall score. The CISS showed that caregivers mainly adopted task-focused strategies. Women mainly adopted emotion-focused strategies and this style was related to a higher level of distress.\n\n\nCONCLUSION\nAD is associated with high distress among caregivers. The burden strongly correlates with dementia severity and is higher in women and in elderly subjects. Chronic anxiety affects caregivers who mainly rely on emotion-oriented coping strategies. The findings suggest providing support to families of patients with AD through tailored strategies aimed to reshape the dysfunctional coping styles.", "title": "" }, { "docid": "081da5941b0431d00b4058c26987d43f", "text": "Artificial bee colony algorithm simulating the intelligent foraging behavior of honey bee swarms is one of the most popular swarm based optimization algorithms. It has been introduced in 2005 and applied in several fields to solve different problems up to date. In this paper, an artificial bee colony algorithm, called as Artificial Bee Colony Programming (ABCP), is described for the first time as a new method on symbolic regression which is a very important practical problem. Symbolic regression is a process of obtaining a mathematical model using given finite sampling of values of independent variables and associated values of dependent variables. In this work, a set of symbolic regression benchmark problems are solved using artificial bee colony programming and then its performance is compared with the very well-known method evolving computer programs, genetic programming. The simulation results indicate that the proposed method is very feasible and robust on the considered test problems of symbolic regression. 2012 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "98e9d8fb4a04ad141b3a196fe0a9c08b", "text": "ÐGraphs are a powerful and universal data structure useful in various subfields of science and engineering. In this paper, we propose a new algorithm for subgraph isomorphism detection from a set of a priori known model graphs to an input graph that is given online. The new approach is based on a compact representation of the model graphs that is computed offline. Subgraphs that appear multiple times within the same or within different model graphs are represented only once, thus reducing the computational effort to detect them in an input graph. 
In the extreme case where all model graphs are highly similar, the run-time of the new algorithm becomes independent of the number of model graphs. Both a theoretical complexity analysis and practical experiments characterizing the performance of the new approach will be given. Index TermsÐGraph matching, graph isomorphism, subgraph isomorphism, preprocessing.", "title": "" }, { "docid": "f24f686a705a1546d211ac37d5cc2fdb", "text": "In commercial-off-the-shelf (COTS) multi-core systems, a task running on one core can be delayed by other tasks running simultaneously on other cores due to interference in the shared DRAM main memory. Such memory interference delay can be large and highly variable, thereby posing a significant challenge for the design of predictable real-time systems. In this paper, we present techniques to provide a tight upper bound on the worst-case memory interference in a COTS-based multi-core system. We explicitly model the major resources in the DRAM system, including banks, buses and the memory controller. By considering their timing characteristics, we analyze the worst-case memory interference delay imposed on a task by other tasks running in parallel. To the best of our knowledge, this is the first work bounding the request re-ordering effect of COTS memory controllers. Our work also enables the quantification of the extent by which memory interference can be reduced by partitioning DRAM banks. We evaluate our approach on a commodity multi-core platform running Linux/RK. Experimental results show that our approach provides an upper bound very close to our measured worst-case interference.", "title": "" }, { "docid": "894e4f975ce81a181025e65227e70b18", "text": "Gesturing and motion control have become common as interaction methods for video games since the advent of the Nintendo Wii game console. Despite the growing number of motion-based control platforms for video games, no set of shared design heuristics for motion control across the platforms has been published. Our approach in this paper combines analysis of player experiences across platforms. We work towards a collection of design heuristics for motion-based control by studying game reviews in two motion-based control platforms, Xbox 360 Kinect and PlayStation 3 Move. In this paper we present an analysis of player problems within 256 game reviews, on which we ground a set of heuristics for motion-controlled games.", "title": "" }, { "docid": "c89f44a3216a9411a42cb0a420f4b73b", "text": "Chemical fiber paper tubes are the essential spinning equipment on filament high-speed spinning and winding machine of the chemical fiber industry. The precision of its application directly impacts on the formation of the silk, determines the cost of the spinning industry. Due to the accuracy of its application requirements, the paper tubes with defects must be detected and removed. Traditional industrial defect detection methods are usually carried out using the target operator's characteristics, only to obtain surface information, not only the detection efficiency and accuracy is difficult to improve, due to human judgment, it's difficult to give effective algorithm for some targets. And the existing learning algorithms are also difficult to use the deep features, so they can not get good results. 
Based on the Faster-RCNN method in depth learning, this paper extracts the deep features of the defective target by Convolutional Neural Network (CNN), which effectively solves the internal joint defects that the traditional algorithm can not effectively detect. As to the external joints and damaged flaws that the traditional algorithm can detect, this algorithm has better results, the experimental accuracy rate can be raised up to 98.00%. At the same time, it can be applied to a variety of lighting conditions, reducing the pretreatment steps and improving efficiency. The experimental results show that the method is effective and worthy of further research.", "title": "" }, { "docid": "299e7f7d1c48d4a6a22c88dcf422f7a1", "text": "Due to the advantages of deep learning, in this paper, a regularized deep feature extraction (FE) method is presented for hyperspectral image (HSI) classification using a convolutional neural network (CNN). The proposed approach employs several convolutional and pooling layers to extract deep features from HSIs, which are nonlinear, discriminant, and invariant. These features are useful for image classification and target detection. Furthermore, in order to address the common issue of imbalance between high dimensionality and limited availability of training samples for the classification of HSI, a few strategies such as L2 regularization and dropout are investigated to avoid overfitting in class data modeling. More importantly, we propose a 3-D CNN-based FE model with combined regularization to extract effective spectral-spatial features of hyperspectral imagery. Finally, in order to further improve the performance, a virtual sample enhanced method is proposed. The proposed approaches are carried out on three widely used hyperspectral data sets: Indian Pines, University of Pavia, and Kennedy Space Center. The obtained results reveal that the proposed models with sparse constraints provide competitive results to state-of-the-art methods. In addition, the proposed deep FE opens a new window for further research.", "title": "" }, { "docid": "6bbc32ecaf54b9a51442f92edbc2604a", "text": "Artificial bee colony (ABC), an optimization algorithm is a recent addition to the family of population based search algorithm. ABC has taken its inspiration from the collective intelligent foraging behavior of honey bees. In this study we have incorporated golden section search mechanism in the structure of basic ABC to improve the global convergence and prevent to stick on a local solution. The proposed variant is termed as ILS-ABC. Comparative numerical results with the state-of-art algorithms show the performance of the proposal when applied to the set of unconstrained engineering design problems. The simulated results show that the proposed variant can be successfully applied to solve real life problems.", "title": "" }, { "docid": "407574abdcba82be2e9aea5a9b38c0a3", "text": "In this paper, we investigate resource block (RB) assignment and modulation-and-coding scheme (MCS) selection to maximize downlink throughput of long-term evolution (LTE) systems, where all RB's assigned to the same user in any given transmission time interval (TTI) must use the same MCS. We develop several effective MCS selection schemes by using the effective packet-level SINR based on exponential effective SINR mapping (EESM), arithmetic mean, geometric mean, and harmonic mean. 
From both analysis and simulation results, we show that the system throughput of all the proposed schemes are better than that of the scheme in [7]. Furthermore, the MCS selection scheme using harmonic mean based effective packet-level SINR almost reaches the optimal performance and significantly outperforms the other proposed schemes.", "title": "" }, { "docid": "1d51506f851a8b125edd7edcd8c6bd1b", "text": "A stress-detection system is proposed based on physiological signals. Concretely, galvanic skin response (GSR) and heart rate (HR) are proposed to provide information on the state of mind of an individual, due to their nonintrusiveness and noninvasiveness. Furthermore, specific psychological experiments were designed to induce properly stress on individuals in order to acquire a database for training, validating, and testing the proposed system. Such system is based on fuzzy logic, and it described the behavior of an individual under stressing stimuli in terms of HR and GSR. The stress-detection accuracy obtained is 99.5% by acquiring HR and GSR during a period of 10 s, and what is more, rates over 90% of success are achieved by decreasing that acquisition period to 3-5 s. Finally, this paper comes up with a proposal that an accurate stress detection only requires two physiological signals, namely, HR and GSR, and the fact that the proposed stress-detection system is suitable for real-time applications.", "title": "" }, { "docid": "a49c8e6f222b661447d1de32e29d0f16", "text": "The discovery of ammonia oxidation by mesophilic and thermophilic Crenarchaeota and the widespread distribution of these organisms in marine and terrestrial environments indicated an important role for them in the global nitrogen cycle. However, very little is known about their physiology or their contribution to nitrification. Here we report oligotrophic ammonia oxidation kinetics and cellular characteristics of the mesophilic crenarchaeon ‘Candidatus Nitrosopumilus maritimus’ strain SCM1. Unlike characterized ammonia-oxidizing bacteria, SCM1 is adapted to life under extreme nutrient limitation, sustaining high specific oxidation rates at ammonium concentrations found in open oceans. Its half-saturation constant (Km = 133 nM total ammonium) and substrate threshold (≤10 nM) closely resemble kinetics of in situ nitrification in marine systems and directly link ammonia-oxidizing Archaea to oligotrophic nitrification. The remarkably high specific affinity for reduced nitrogen (68,700 l per g cells per h) of SCM1 suggests that Nitrosopumilus-like ammonia-oxidizing Archaea could successfully compete with heterotrophic bacterioplankton and phytoplankton. Together these findings support the hypothesis that nitrification is more prevalent in the marine nitrogen cycle than accounted for in current biogeochemical models.", "title": "" }, { "docid": "703f0baf67a1de0dfb03b3192327c4cf", "text": "Fleet management systems are commonly used to coordinate mobility and delivery services in a broad variety of domains. However, their traditional top-down control architecture becomes a bottleneck in open and dynamic environments, where scalability, proactiveness, and autonomy are becoming key factors for their success. Here, the authors present an abstract event-based architecture for fleet management systems that supports tailoring dynamic control regimes for coordinating fleet vehicles, and illustrate it for the case of medical emergency management. 
Then, they go one step ahead in the transition toward automatic or driverless fleets, by conceiving fleet management systems in terms of cyber-physical systems, and putting forward the notion of cyber fleets.", "title": "" }, { "docid": "815feed9cce2344872c50da6ffb77093", "text": "Over the last decade blogs became an important part of the Web, where people can announce anything that is on their mind. Due to their high popularity blogs have great potential to mine public opinions regarding products. Such knowledge is very valuable as it could be used to adjust marketing campaigns or advertisement of products accordingly. In this paper we investigate how the blogosphere can be used to predict the success of products in the domain of music and movies. We analyze and characterize the blogging behavior in both domains particularly around product releases, propose different methods for extracting characteristic features from the blogosphere, and show that our predictions correspond to the real world measures Sales Rank and box office revenue respectively.", "title": "" }, { "docid": "d214ef50a5c26fb65d8c06ea7db3d07c", "text": "We introduce a method for learning to generate the surface of 3D shapes. Our approach represents a 3D shape as a collection of parametric surface elements and, in contrast to methods generating voxel grids or point clouds, naturally infers a surface representation of the shape. Beyond its novelty, our new shape generation framework, AtlasNet, comes with significant advantages, such as improved precision and generalization capabilities, and the possibility to generate a shape of arbitrary resolution without memory issues. We demonstrate these benefits and compare to strong baselines on the ShapeNet benchmark for two applications: (i) autoencoding shapes, and (ii) single-view reconstruction from a still image. We also provide results showing its potential for other applications, such as morphing, parametrization, super-resolution, matching, and co-segmentation.", "title": "" }, { "docid": "b7c0864be28d70d49ae4a28fb7d78f04", "text": "UNLABELLED\nThe replacement of crowns and bridges is a common procedure for many dental practitioners. When correctly planned and executed, fixed prostheses will provide predictable function, aesthetics and value for money. However, when done poorly, they are more likely to fail prematurely and lead to irreversible damage to the teeth and supporting structures beneath. Sound diagnosis, assessment and technical skills are essential when dealing with failed or failing fixed restorations. These skills are essential for the 21st century dentist. This paper, with treated clinical examples, illustrates the areas of technical skill and clinical decisions needed for this type of work. It also provides advice on how the risk of premature failure can, in general, be further reduced. The article also confirms the very real risk in the UK of dento-legal problems when patients experience unexpected problems with their crowns and bridges.\n\n\nCLINICAL RELEVANCE\nThis paper outlines clinical implications of failed fixed prosthodontics to the dental surgeon. It also discusses factors that we can all use to predict and reduce the risk of premature restoration failure. Restoration design, clinical execution and patient factors are the most frequent reasons for premature problems. 
It is worth remembering (and informing patients) that the health of the underlying supporting dental tissue is often irreversibly compromised at the time of fixed restoration failure.", "title": "" }, { "docid": "d5a9d2a212deee5057a0289f72b51d9b", "text": "Compared to supervised feature selection, unsupervised feature selection tends to be more challenging due to the lack of guidance from class labels. Along with the increasing variety of data sources, many datasets are also equipped with certain side information of heterogeneous structure. Such side information can be critical for feature selection when class labels are unavailable. In this paper, we propose a new feature selection method, SideFS, to exploit such rich side information. We model the complex side information as a heterogeneous network and derive instance correlations to guide subsequent feature selection. Representations are learned from the side information network and the feature selection is performed in a unified framework. Experimental results show that the proposed method can effectively enhance the quality of selected features by incorporating heterogeneous side information.", "title": "" }, { "docid": "3294f746432ba9746a8cc8082a1021f7", "text": "CRYPTONITE is a programmable processor tailored to the needs of crypto algorithms. The design of CRYPTONITE was based on an in-depth application analysis in which standard crypto algorithms (AES, DES, MD5, SHA-1, etc) were distilled down to their core functionality. We describe this methodology and use AES as a central example. Starting with a functional description of AES, we give a high level account of how to implement AES efficiently in hardware, and present several novel optimizations (which are independent of CRYPTONITE).We then describe the CRYPTONITE architecture, highlighting how AES implementation issues influenced the design of the processor and its instruction set. CRYPTONITE is designed to run at high clock rates and be easy to implement in silicon while providing a significantly better performance/area/power tradeoff than general purpose processors.", "title": "" }, { "docid": "f9765c97a101a163a486b18e270d67f5", "text": "We present a formulation of deep learning that aims at producing a large margin classifier. The notion of margin, minimum distance to a decision boundary, has served as the foundation of several theoretically profound and empirically successful results for both classification and regression tasks. However, most large margin algorithms are applicable only to shallow models with a preset feature representation; and conventional margin methods for neural networks only enforce margin at the output layer. Such methods are therefore not well suited for deep networks. In this work, we propose a novel loss function to impose a margin on any chosen set of layers of a deep network (including input and hidden layers). Our formulation allows choosing any lp norm (p ≥ 1) on the metric measuring the margin. We demonstrate that the decision boundary obtained by our loss has nice properties compared to standard classification loss functions. Specifically, we show improved empirical results on the MNIST, CIFAR-10 and ImageNet datasets on multiple tasks: generalization from small training sets, corrupted labels, and robustness against adversarial perturbations. The resulting loss is general and complementary to existing data augmentation (such as random/adversarial input transform) and regularization techniques such as weight decay, dropout, and batch norm. 
", "title": "" }, { "docid": "1ed9151f81e15db5bb08a7979d5eeddb", "text": "Deep learning has delivered its powerfulness in many application domains, especially in image and speech recognition. As the backbone of deep learning, deep neural networks (DNNs) consist of multiple layers of various types with hundreds to thousands of neurons. Embedded platforms are now becoming essential for deep learning deployment due to their portability, versatility, and energy efficiency. The large model size of DNNs, while providing excellent accuracy, also burdens the embedded platforms with intensive computation and storage. Researchers have investigated on reducing DNN model size with negligible accuracy loss. This work proposes a Fast Fourier Transform (FFT)-based DNN training and inference model suitable for embedded platforms with reduced asymptotic complexity of both computation and storage, making our approach distinguished from existing approaches. We develop the training and inference algorithms based on FFT as the computing kernel and deploy the FFT-based inference model on embedded platforms achieving extraordinary processing speed.", "title": "" }, { "docid": "808de7fe99686dabb5b1ea28187cd406", "text": "Automated Guided Vehicles (AGVs) are being increasingly used for intelligent transportation and distribution of materials in warehouses and auto-production lines. In this paper, a preliminary hazard analysis of an AGV’s critical components is conducted by the approach of Failure Modes Effects and Criticality Analysis (FMECA). To implement this research, a particular AGV transport system is modelled as a phased mission. Then, Fault Tree Analysis (FTA) is adopted to model the causes of phase failure, enabling the probability of success in each phase and hence mission success to be determined. Through this research, a promising technical approach is established, which allows the identification of the critical AGV components and crucial mission phases of AGVs at the design stage.", "title": "" } ]
scidocsrr
cfce53af4a6921ef254a17c119cbedf0
Extending the road beyond CMOS - IEEE Circuits and Devices Magazine
[ { "docid": "5706ae68d5e2b56679e0c89361fcc8b8", "text": "Quantum computers promise to exceed the computational efficiency of ordinary classical machines because quantum algorithms allow the execution of certain tasks in fewer steps. But practical implementation of these machines poses a formidable challenge. Here I present a scheme for implementing a quantum-mechanical computer. Information is encoded onto the nuclear spins of donor atoms in doped silicon electronic devices. Logical operations on individual spins are performed using externally applied electric fields, and spin measurements are made using currents of spin-polarized electrons. The realization of such a computer is dependent on future refinements of conventional silicon electronics.", "title": "" } ]
[ { "docid": "991e2e65cb6b47d8355e14d674272f2d", "text": "In this paper, we develop a cooperative mechanism, RELICS, to combat selfishness in DTNs. In DTNs, nodes belong to self-interested individuals. A node may be selfish in expending resources, such as energy, on forwarding messages from others, unless offered incentives. We devise a rewarding scheme that provides incentives to nodes in a physically realizable way in that the rewards are reflected into network operation. We call it in-network realization of incentives. We introduce explicit ranking of nodes depending on their transit behavior, and translate those ranks into message priority. Selfishness drives each node to set its energy depletion rate as low as possible while maintaining its own delivery ratio above some threshold. We show that our cooperative mechanism compels nodes to cooperate and also achieves higher energy-economy compared to other previous results.", "title": "" }, { "docid": "1c6677209ac3c37e4ac84b153321ab7c", "text": "BACKGROUND\nAsthma guidelines indicate that the goal of treatment should be optimum asthma control. In a busy clinic practice with limited time and resources, there is need for a simple method for assessing asthma control with or without lung function testing.\n\n\nOBJECTIVES\nThe objective of this article was to describe the development of the Asthma Control Test (ACT), a patient-based tool for identifying patients with poorly controlled asthma.\n\n\nMETHODS\nA 22-item survey was administered to 471 patients with asthma in the offices of asthma specialists. The specialist's rating of asthma control after spirometry was also collected. Stepwise regression methods were used to select a subset of items that showed the greatest discriminant validity in relation to the specialist's rating of asthma control. Internal consistency reliability was computed, and discriminant validity tests were conducted for ACT scale scores. The performance of ACT was investigated by using logistic regression methods and receiver operating characteristic analyses.\n\n\nRESULTS\nFive items were selected from regression analyses. The internal consistency reliability of the 5-item ACT scale was 0.84. ACT scale scores discriminated between groups of patients differing in the specialist's rating of asthma control (F = 34.5, P <.00001), the need for change in patient's therapy (F = 40.3, P <.00001), and percent predicted FEV(1) (F = 4.3, P =.0052). As a screening tool, the overall agreement between ACT and the specialist's rating ranged from 71% to 78% depending on the cut points used, and the area under the receiver operating characteristic curve was 0.77.\n\n\nCONCLUSION\nResults reinforce the usefulness of a brief, easy to administer, patient-based index of asthma control.", "title": "" }, { "docid": "486bd67781bb1067aa4ff6009cdeecb7", "text": "BACKGROUND\nThere was less than satisfactory progress, especially in sub-Saharan Africa, towards child and maternal mortality targets of Millennium Development Goals (MDGs) 4 and 5. The main aim of this study was to describe the prevalence and determinants of essential new newborn care practices in the Lawra District of Ghana.\n\n\nMETHODS\nA cross-sectional study was carried out in June 2014 on a sample of 422 lactating mothers and their children aged between 1 and 12 months. 
A systematic random sampling technique was used to select the study participants who attended post-natal clinic in the Lawra district hospital.\n\n\nRESULTS\nOf the 418 newborns, only 36.8% (154) was judged to have had safe cord care, 34.9% (146) optimal thermal care, and 73.7% (308) were considered to have had adequate neonatal feeding. The overall prevalence of adequate new born care comprising good cord care, optimal thermal care and good neonatal feeding practices was only 15.8%. Mothers who attained at least Senior High Secondary School were 20.5 times more likely to provide optimal thermal care [AOR 22.54; 95% CI (2.60-162.12)], compared to women had no formal education at all. Women who received adequate ANC services were 4.0 times (AOR  =  4.04 [CI: 1.53, 10.66]) and 1.9 times (AOR  =  1.90 [CI: 1.01, 3.61]) more likely to provide safe cord care and good neonatal feeding as compared to their counterparts who did not get adequate ANC. However, adequate ANC services was unrelated to optimal thermal care. Compared to women who delivered at home, women who delivered their index baby in a health facility were 5.6 times more likely of having safe cord care for their babies (AOR = 5.60, Cl: 1.19-23.30), p = 0.03.\n\n\nCONCLUSIONS\nThe coverage of essential newborn care practices was generally low. Essential newborn care practices were positively associated with high maternal educational attainment, adequate utilization of antenatal care services and high maternal knowledge of newborn danger signs. Therefore, greater improvement in essential newborn care practices could be attained through proven low-cost interventions such as effective ANC services, health and nutrition education that should span from community to health facility levels.", "title": "" }, { "docid": "a53225746b2b6dba6078a998031c2af6", "text": "Decision Tree induction is commonly used classification algorithm. One of the important problems is how to use records with unknown values from training as well as testing data. Many approaches have been proposed to address the impact of unknown values at training on accuracy of prediction. However, very few techniques are there to address the problem in testing data. In our earlier work, we discussed and summarized these strategies in details. In Lazy Decision Tree, the problem of unknown attribute values in test instance is completely eliminated by delaying the construction of tree till the classification time and using only known attributes for classification. In this paper we present novel algorithm ‘Eager Decision Tree’ which constructs a single prediction model at the time of training which considers all possibilities of unknown attribute values from testing data. It naturally removes the problem of handing unknown values in testing data in Decision Tree induction like Lazy Decision Tree.", "title": "" }, { "docid": "c9171bf5a2638b35ff7dc9c8e6104d30", "text": "Dimensionality reduction is an important aspect in the pattern classification literature, and linear discriminant analysis (LDA) is one of the most widely studied dimensionality reduction technique. The application of variants of LDA technique for solving small sample size (SSS) problem can be found in many research areas e.g. face recognition, bioinformatics, text recognition, etc. The improvement of the performance of variants of LDA technique has great potential in various fields of research. In this paper, we present an overview of these methods. 
We covered the type, characteristics and taxonomy of these methods which can overcome SSS problem. We have also highlighted some important datasets and software/ packages.", "title": "" }, { "docid": "ef3b9dd6b463940bc57cdf7605c24b1e", "text": "With the rapid development of cloud storage, data security in storage receives great attention and becomes the top concern to block the spread development of cloud service. In this paper, we systematically study the security researches in the storage systems. We first present the design criteria that are used to evaluate a secure storage system and summarize the widely adopted key technologies. Then, we further investigate the security research in cloud storage and conclude the new challenges in the cloud environment. Finally, we give a detailed comparison among the selected secure storage systems and draw the relationship between the key technologies and the design criteria.", "title": "" }, { "docid": "06326f180f768b01e13d764c1171bdf3", "text": "Recent advances in far-field fluorescence microscopy have led to substantial improvements in image resolution, achieving a near-molecular resolution of 20 to 30 nanometers in the two lateral dimensions. Three-dimensional (3D) nanoscale-resolution imaging, however, remains a challenge. We demonstrated 3D stochastic optical reconstruction microscopy (STORM) by using optical astigmatism to determine both axial and lateral positions of individual fluorophores with nanometer accuracy. Iterative, stochastic activation of photoswitchable probes enables high-precision 3D localization of each probe, and thus the construction of a 3D image, without scanning the sample. Using this approach, we achieved an image resolution of 20 to 30 nanometers in the lateral dimensions and 50 to 60 nanometers in the axial dimension. This development allowed us to resolve the 3D morphology of nanoscopic cellular structures.", "title": "" }, { "docid": "bce0f6f9ca0697cb85bd07a118598aea", "text": "The theory of embodied cognition can provide HCI practitioners and theorists with new ideas about interaction and new principles for better designs. I support this claim with four ideas about cognition: (1) interacting with tools changes the way we think and perceive -- tools, when manipulated, are soon absorbed into the body schema, and this absorption leads to fundamental changes in the way we perceive and conceive of our environments; (2) we think with our bodies not just with our brains; (3) we know more by doing than by seeing -- there are times when physically performing an activity is better than watching someone else perform the activity, even though our motor resonance system fires strongly during other person observation; (4) there are times when we literally think with things. These four ideas have major implications for interaction design, especially the design of tangible, physical, context aware, and telepresence systems.", "title": "" }, { "docid": "fb5a3c43655886c0387e63cd02fccd50", "text": "Android is the most widely used smartphone OS with 82.8% market share in 2015 (IDC, 2015). It is therefore the most widely targeted system by malware authors. Researchers rely on dynamic analysis to extract malware behaviors and often use emulators to do so. However, using emulators lead to new issues. Malware may detect emulation and as a result it does not execute the payload to prevent the analysis. Dealing with virtual device evasion is a never-ending war and comes with a non-negligible computation cost (Lindorfer et al., 2014). 
To overcome this state of affairs, we propose a system that does not use virtual devices for analysing malware behavior. Glassbox is a functional prototype for the dynamic analysis of malware applications. It executes applications on real devices in a monitored and controlled environment. It is a fully automated system that installs, tests and extracts features from the application for further analysis. We present the architecture of the platform and we compare it with existing Android dynamic analysis platforms. Lastly, we evaluate the capacity of Glassbox to trigger application behaviors by measuring the average coverage of basic blocks on the AndroCoverage dataset (AndroCoverage, 2016). We show that it executes on average 13.52% more basic blocks than the Monkey program.", "title": "" }, { "docid": "2f7d487059a77b582c3e0a33fd5d38af", "text": "Disturbance regimes are changing rapidly, and the consequences of such changes for ecosystems and linked social-ecological systems will be profound. This paper synthesizes current understanding of disturbance with an emphasis on fundamental contributions to contemporary landscape and ecosystem ecology, then identifies future research priorities. Studies of disturbance led to insights about heterogeneity, scale, and thresholds in space and time and catalyzed new paradigms in ecology. Because they create vegetation patterns, disturbances also establish spatial patterns of many ecosystem processes on the landscape. Drivers of global change will produce new spatial patterns, altered disturbance regimes, novel trajectories of change, and surprises. Future disturbances will continue to provide valuable opportunities for studying pattern-process interactions. Changing disturbance regimes will produce acute changes in ecosystems and ecosystem services over the short (years to decades) and long-term (centuries and beyond). Future research should address questions related to (1) disturbances as catalysts of rapid ecological change, (2) interactions among disturbances, (3) relationships between disturbance and society, especially the intersection of land use and disturbance, and (4) feedbacks from disturbance to other global drivers. Ecologists should make a renewed and concerted effort to understand and anticipate the causes and consequences of changing disturbance regimes.", "title": "" }, { "docid": "9ca12c5f314d077093753dc0f3ff9cd5", "text": "We introduce a general-purpose conditioning method for neural networks called FiLM: Feature-wise Linear Modulation. FiLM layers influence neural network computation via a simple, feature-wise affine transformation based on conditioning information. We show that FiLM layers are highly effective for visual reasoning — answering image-related questions which require a multi-step, high-level process — a task which has proven difficult for standard deep learning methods that do not explicitly model reasoning. Specifically, we show on visual reasoning tasks that FiLM layers 1) halve state-of-theart error for the CLEVR benchmark, 2) modulate features in a coherent manner, 3) are robust to ablations and architectural modifications, and 4) generalize well to challenging, new data from few examples or even zero-shot.", "title": "" }, { "docid": "ccd7e49646f1ef1d31f033f84c63c6e6", "text": "Language modeling is a prototypical unsupervised task of natural language processing (NLP). It has triggered the developments of essential bricks of models used in speech recognition, translation or summarization. 
More recently, language modeling has been shown to give a sensible loss function for learning high-quality unsupervised representations in tasks like text classification (Howard & Ruder, 2018), sentiment detection (Radford et al., 2017) or word vector learning (Peters et al., 2018) and there is thus a revived interest in developing better language models. More generally, improvement in sequential prediction models are believed to be beneficial for a wide range of applications like model-based planning or reinforcement learning whose models have to encode some form of memory.", "title": "" }, { "docid": "14276adf4f5b3538f95cfd10902825ef", "text": "Subband adaptive filtering (SAF) techniques play a prominent role in designing active noise control (ANC) systems. They reduce the computational complexity of ANC algorithms, particularly, when the acoustic noise is a broadband signal and the system models have long impulse responses. In the commonly used uniform-discrete Fourier transform (DFT)-modulated (UDFTM) filter banks, increasing the number of subbands decreases the computational burden but can introduce excessive distortion, degrading performance of the ANC system. In this paper, we propose a new UDFTM-based adaptive subband filtering method that alleviates the degrading effects of the delay and side-lobe distortion introduced by the prototype filter on the system performance. The delay in filter bank is reduced by prototype filter design and the side-lobe distortion is compensated for by oversampling and appropriate stacking of subband weights. Experimental results show the improvement of performance and computational complexity of the proposed method in comparison to two commonly used subband and block adaptive filtering algorithms.", "title": "" }, { "docid": "ceaa0ceb14034ecc2840425a627a3c71", "text": "In this article, we present a novel class of robots that are able to move by growing and building their own structure. In particular, taking inspiration by the growing abilities of plant roots, we designed and developed a plant root-like robot that creates its body through an additive manufacturing process. Each robotic root includes a tubular body, a growing head, and a sensorized tip that commands the robot behaviors. The growing head is a customized three-dimensional (3D) printer-like system that builds the tubular body of the root in the format of circular layers by fusing and depositing a thermoplastic material (i.e., polylactic acid [PLA] filament) at the tip level, thus obtaining movement by growing. A differential deposition of the material can create an asymmetry that results in curvature of the built structure, providing the possibility of root bending to follow or escape from a stimulus or to reach a desired point in space. Taking advantage of these characteristics, the robotic roots are able to move inside a medium by growing their body. In this article, we describe the design of the growing robot together with the modeling of the deposition process and the description of the implemented growing movement strategy. Experiments were performed in air and in an artificial medium to verify the functionalities and to evaluate the robot performance. 
The results showed that the robotic root, with a diameter of 50 mm, grows with a speed of up to 4 mm/min, overcoming medium pressure of up to 37 kPa (i.e., it is able to lift up to 6 kg) and bending with a minimum radius of 100 mm.", "title": "" }, { "docid": "26dc59c30371f1d0b2ff2e62a96f9b0f", "text": "Hindi is very complex language with large number of phonemes and being used with various ascents in different regions in India. In this manuscript, speaker dependent and independent isolated Hindi word recognizers using the Hidden Markov Model (HMM) is implemented, under noisy environment. For this study, a set of 10 Hindi names has been chosen as a test set for which the training and testing is performed. The scheme instigated here implements the Mel Frequency Cepstral Coefficients (MFCC) in order to compute the acoustic features of the speech signal. Then, K-means algorithm is used for the codebook generation by performing clustering over the obtained feature space. Baum Welch algorithm is used for re-estimating the parameters, and finally for deciding the recognized Hindi word whose model likelihood is highest, Viterbi algorithm has been implemented; for the given HMM. This work resulted in successful recognition with 98. 6% recognition rate for speaker dependent recognition, for total of 10 speakers (6 male, 4 female) and 97. 5% for speaker independent isolated word recognizer for 10 speakers (male).", "title": "" }, { "docid": "e252e35a2869cdd5c06d8ba31a525f6a", "text": "The conventional border patrol systems suffer from intensive human involvement. Recently, unmanned border patrol systems employ high-tech devices, such as unmanned aerial vehicles, unattended ground sensors, and surveillance towers equipped with camera sensors. However, any single technique encounters inextricable problems, such as high false alarm rate and line-of-sight-constraints. There lacks a coherent system that coordinates various technologies to improve the system accuracy. In this paper, the concept of BorderSense, a hybrid wireless sensor network architecture for border patrol systems, is introduced. BorderSense utilizes the most advanced sensor network technologies, including the wireless multimedia sensor networks and the wireless underground sensor networks. The framework to deploy and operate BorderSense is developed. Based on the framework, research challenges and open research issues are discussed. 2010 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "176d1eeb8dd1e366431d8ad4bb7734a1", "text": "Online, reverse auctions are increasingly being utilized in industrial sourcing activities. This phenomenon represents a novel, emerging area of inquiry with significant implications for sourcing strategies. However, there is little systematic thinking or empirical evidence on the topic. In this paper, the use of these auctions in sourcing activities is reviewed and four key aspects are highlighted: (i) the differences from physical auctions or those of the theoretical literature, (ii) the conditions for using online, reverse auctions, (iii) methods for structuring the auctions, and (iv) evaluations of auction performance. Some empirical evidence on these issues is also provided. 
ONLINE, REVERSE AUCTIONS: ISSUES, THEMES, AND PROSPECTS FOR THE FUTURE INTRODUCTION For nearly the past decade, managers, analysts, researchers, and the business press have been remarking that, “The Internet will change everything.” And since the advent of the Internet, we have seen it challenge nearly every aspect of marketing practice. This raises the obligation to consider the consequences of the Internet to management practices, the theme of this special issue. Yet, it may take decades to fully understand the impact of the Internet on marketing practice, in general. This paper is one step in that direction. Specifically, I consider the impact of the Internet in a business-to-business context, the sourcing of direct and indirect materials from a supply base. It has been predicted that the Internet will bring about $1 trillion in efficiencies to the annual $7 trillion that is spent on the procurement of goods and services worldwide (USA Today, 2/7/00, B1). How and when this will happen remains an open question. However, one trend that is showing increasing promise is the use of online, reverse auctions. Virtually every major industry has begun to use and adopt these auctions on a regular basis (Smith 2002). During the late 1990s, slow-growth, manufacturing firms such as Boeing, SPX/Eaton, United Technologies, and branches of the United States military, utilized these auctions. Since then, consumer product companies such as Emerson Electronics, Nestle, and Quaker have followed suit. Even high-tech firms such as Dell, Hewlett-Packard, Intel, and Sun Microsystems have increased their usage of auctions in sourcing activities. And the intention and potential for the use of these auctions to continue to grow in the future is clear. In their annual survey of purchasing managers, Purchasing magazine found that 25% of its respondents expected to use reverse auctions in their sourcing efforts. Currently, the annual throughput in these auctions is estimated to be $40 billion; however, the addressable spend of the Global 500 firms is potentially $6.3 trillion.", "title": "" }, { "docid": "6d066cec0c45a5504559ed40fc084d0e", "text": "The combination of visual and inertial sensors has proved to be very popular in robot navigation and, in particular, Micro Aerial Vehicle (MAV) navigation due the flexibility in weight, power consumption and low cost it offers. At the same time, coping with the big latency between inertial and visual measurements and processing images in real-time impose great research challenges. Most modern MAV navigation systems avoid to explicitly tackle this by employing a ground station for off-board processing. In this paper, we propose a navigation algorithm for MAVs equipped with a single camera and an Inertial Measurement Unit (IMU) which is able to run onboard and in real-time. The main focus here is on the proposed speed-estimation module which converts the camera into a metric body-speed sensor using IMU data within an EKF framework. We show how this module can be used for full self-calibration of the sensor suite in real-time. The module is then used both during initialization and as a fall-back solution at tracking failures of a keyframe-based VSLAM module. The latter is based on an existing high-performance algorithm, extended such that it achieves scalable 6DoF pose estimation at constant complexity. Fast onboard speed control is ensured by sole reliance on the optical flow of at least two features in two consecutive camera frames and the corresponding IMU readings. 
Our nonlinear observability analysis and our real experiments demonstrate that this approach can be used to control a MAV in speed, while we also show results of operation at 40Hz on an onboard Atom computer 1.6 GHz.", "title": "" }, { "docid": "ea278850f00c703bdd73957c3f7a71ce", "text": "In this paper, we consider the directional multigigabit (DMG) transmission problem in IEEE 802.11ad wireless local area networks (WLANs) and design a random-access-based medium access control (MAC) layer protocol incorporated with a directional antenna and cooperative communication techniques. A directional cooperative MAC protocol, namely, D-CoopMAC, is proposed to coordinate the uplink channel access among DMG stations (STAs) that operate in an IEEE 802.11ad WLAN. Using a 3-D Markov chain model with consideration of the directional hidden terminal problem, we develop a framework to analyze the performance of the D-CoopMAC protocol and derive a closed-form expression of saturated system throughput. Performance evaluations validate the accuracy of the theoretical analysis and show that the performance of D-CoopMAC varies with the number of DMG STAs or beam sectors. In addition, the D-CoopMAC protocol can significantly improve system performance, as compared with the traditional IEEE 802.11ad MAC protocol.", "title": "" }, { "docid": "954660a163fc8453368a6863d1c3fd85", "text": "The application potential of very high resolution (VHR) remote sensing imagery has been boosted by recent developments in the data acquisition and processing ability of aerial photogrammetry. However, shadows in images contribute to problems such as incomplete spectral information, lower intensity brightness, and fuzzy boundaries, which seriously affect the efficiency of the image interpretation. In this paper, to address these issues, a simple and automatic method of shadow detection is presented. The proposed method combines the advantages of the property-based and geometric-based methods to automatically detect the shadowed areas in VHR imagery. A geometric model of the scene and the solar position are used to delineate the shadowed and non-shadowed areas in the VHR image. A matting method is then applied to the image to refine the shadow mask. Different types of shadowed aerial orthoimages were used to verify the effectiveness of the proposed shadow detection method, and the results were compared with the results obtained by two state-of-the-art methods. The overall accuracy of the proposed method on the three tests was around 90%, confirming the effectiveness and robustness of the new method for detecting fine shadows, without any human input. The proposed method also performs better in detecting shadows in areas with water than the other two methods.", "title": "" } ]
scidocsrr
6201489b4c017a2e9d506a20358f5dc2
Meta-Unsupervised-Learning: A supervised approach to unsupervised learning
[ { "docid": "3bb905351ce1ea2150f37059ed256a90", "text": "A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much expensive data-labeling efforts. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research.", "title": "" }, { "docid": "e4890b63e9a51029484354535765801c", "text": "Many different machine learning algorithms exist; taking into account each algorithm's hyperparameters, there is a staggeringly large number of possible alternatives overall. We consider the problem of simultaneously selecting a learning algorithm and setting its hyperparameters, going beyond previous work that attacks these issues separately. We show that this problem can be addressed by a fully automated approach, leveraging recent innovations in Bayesian optimization. Specifically, we consider a wide range of feature selection techniques (combining 3 search and 8 evaluator methods) and all classification approaches implemented in WEKA's standard distribution, spanning 2 ensemble methods, 10 meta-methods, 27 base classifiers, and hyperparameter settings for each classifier. On each of 21 popular datasets from the UCI repository, the KDD Cup 09, variants of the MNIST dataset and CIFAR-10, we show classification performance often much better than using standard selection and hyperparameter optimization methods. We hope that our approach will help non-expert users to more effectively identify machine learning algorithms and hyperparameter settings appropriate to their applications, and hence to achieve improved performance.", "title": "" }, { "docid": "fa984593899ca62025f54a7b4e7019c8", "text": "Problems of clustering data from pairwise similarity information are ubiquitous in Computer Science. Theoretical treatments typically view the similarity information as ground-truth and then design algorithms to (approximately) optimize various graph-based objective functions. However, in most applications, this similarity information is merely based on some heuristic; the ground truth is really the unknown correct clustering of the data points and the real goal is to achieve low error on the data. In this work, we develop a theoretical approach to clustering from this perspective. 
In particular, motivated by recent work in learning theory that asks \"what natural properties of a similarity (or kernel) function are sufficient to be able to learn well?\" we ask \"what natural properties of a similarity function are sufficient to be able to cluster well?\"\n To study this question we develop a theoretical framework that can be viewed as an analog of the PAC learning model for clustering, where the object of study, rather than being a concept class, is a class of (concept, similarity function) pairs, or equivalently, a property the similarity function should satisfy with respect to the ground truth clustering. We then analyze both algorithmic and information theoretic issues in our model. While quite strong properties are needed if the goal is to produce a single approximately-correct clustering, we find that a number of reasonable properties are sufficient under two natural relaxations: (a) list clustering: analogous to the notion of list-decoding, the algorithm can produce a small list of clusterings (which a user can select from) and (b) hierarchical clustering: the algorithm's goal is to produce a hierarchy such that desired clustering is some pruning of this tree (which a user could navigate). We develop a notion of the clustering complexity of a given property (analogous to notions of capacity in learning theory), that characterizes its information-theoretic usefulness for clustering. We analyze this quantity for several natural game-theoretic and learning-theoretic properties, as well as design new efficient algorithms that are able to take advantage of them. Our algorithms for hierarchical clustering combine recent learning-theoretic approaches with linkage-style methods. We also show how our algorithms can be extended to the inductive case, i.e., by using just a constant-sized sample, as in property testing. The analysis here uses regularity-type results of [FK] and [AFKK].", "title": "" } ]
[ { "docid": "ae8fde6c520fb4d1e18c4ff19d59a8d8", "text": "Visual-to-auditory Sensory Substitution Devices (SSDs) are non-invasive sensory aids that provide visual information to the blind via their functioning senses, such as audition. For years SSDs have been confined to laboratory settings, but we believe the time has come to use them also for their original purpose of real-world practical visual rehabilitation. Here we demonstrate this potential by presenting for the first time new features of the EyeMusic SSD, which gives the user whole-scene shape, location & color information. These features include higher resolution and attempts to overcome previous stumbling blocks by being freely available to download and run from a smartphone platform. We demonstrate with use the EyeMusic the potential of SSDs in noisy real-world scenarios for tasks such as identifying and manipulating objects. We then discuss the neural basis of using SSDs, and conclude by discussing other steps-in-progress on the path to making their practical use more widespread.", "title": "" }, { "docid": "a7d25265e939e484533bfd380a18502c", "text": "Cloud computing is emerging as a viable platform for scientific exploration. Elastic and on-demand access to resources (and other services), the abstraction of “unlimited” resources, and attractive pricing models provide incentives for scientists to move their workflows into clouds. Generalizing these concepts beyond a single virtualized datacenter, it is possible to create federated marketplaces where different types of resources (e.g., clouds, HPC grids, supercomputers) that may be geographically distributed, are collectively exposed as a single elastic infrastructure. This presents opportunities for optimizing the execution of application workflows with heterogeneous and dynamic requirements, and tackling larger scale problems. In this paper, we introduce a framework to manage the end-to-end execution of data-intensive application workflows in dynamic software-defined resource federation. This framework enables the autonomic execution of workflows by elastically provisioning an appropriate set of resources that meet application requirements, and by adapting this set of resources at runtime as the requirements change. It also allows users to customize scheduling policies that drive the way resources federated and used. To demonstrate the benefits of our approach, we study the execution of two different data-intensive scientific workflows in a multi-cloud federation using different policies and objective functions.", "title": "" }, { "docid": "799ccd75d6781e38cf5e2faee5784cae", "text": "Recurrent neural networks (RNNs) form an important class of architectures among neural networks useful for language modeling and sequential prediction. However, optimizing RNNs is known to be harder compared to feed-forward neural networks. A number of techniques have been proposed in literature to address this problem. In this paper we propose a simple technique called fraternal dropout that takes advantage of dropout to achieve this goal. Specifically, we propose to train two identical copies of an RNN (that share parameters) with different dropout masks while minimizing the difference between their (pre-softmax) predictions. In this way our regularization encourages the representations of RNNs to be invariant to dropout mask, thus being robust. 
We show that our regularization term is upper bounded by the expectation-linear dropout objective which has been shown to address the gap due to the difference between the train and inference phases of dropout. We evaluate our model and achieve state-of-the-art results in sequence modeling tasks on two benchmark datasets – Penn Treebank and Wikitext-2. We also show that our approach leads to performance improvement by a significant margin in image captioning (Microsoft COCO) and semi-supervised (CIFAR-10) tasks.", "title": "" }, { "docid": "2fc1afae973ddd832afa92d27222ef09", "text": "In our 1990 paper, we showed that managers concerned with their reputations might choose to mimic the behavior of other managers and ignore their own information. We presented a model in which “smart” managers receive correlated, informative signals, whereas “dumb” managers receive independent, uninformative signals. Managers have an incentive to follow the herd to indicate to the labor market that they have received the same signal as others, and hence are likely to be smart. This model of reputational herding has subsequently found empirical support in a number of recent papers, including Judith A. Chevalier and Glenn D. Ellison’s (1999) study of mutual fund managers and Harrison G. Hong et al.’s (2000) study of equity analysts. We argued in our 1990 paper that reputational herding “requires smart managers’ prediction errors to be at least partially correlated with each other” (page 468). In their Comment, Marco Ottaviani and Peter Sørensen (hereafter, OS) take issue with this claim. They write: “correlation is not necessary for herding, other than in degenerate cases.” It turns out that the apparent disagreement hinges on how strict a definition of herding one adopts. In particular, we had defined a herding equilibrium as one in which agent B always ignores his own information and follows agent A. (See, e.g., our Propositions 1 and 2.) In contrast, OS say that there is herding when agent B sometimes ignores his own information and follows agent A. The OS conclusion is clearly correct given their weaker definition of herding. At the same time, however, it also seems that for the stricter definition that we adopted in our original paper, correlated errors on the part of smart managers are indeed necessary for a herding outcome—even when one considers the expanded parameter space that OS do. We will try to give some intuition for why the different definitions of herding lead to different conclusions about the necessity of correlated prediction errors. Along the way, we hope to convince the reader that our stricter definition is more appropriate for isolating the economic effects at work in the reputational herding model. An example is helpful in illustrating what is going on. Consider a simple case where the parameter values are as follows: p = 3⁄4; q = 1⁄4; z = 1⁄2, and u = 1⁄2. In our 1990 paper, we also imposed the constraint that z = ap + (1 − a)q, which further implies that a = 1⁄2. The heart of the OS Comment is the idea that this constraint should be disposed of—i.e., we should look at other values of a. Without loss of generality, we will consider values of a above 1⁄2, and distinguish two cases.", "title": "" }, { "docid": "c27aee0b72f3e8239915a8d33c060e96", "text": "Advances in artificial impedance surface conformal antennas are presented. A detailed conical impedance modulation is proposed for the first time. 
By coating an artificial impedance surface on a cone, we can control the conical surface wave radiating at the desired direction. The surface impedance is constructed by printing a dense texture of sub wavelength metal patches on a grounded dielectric slab. The effective surface impedance depends on the size of the patches, and can be varied as a function of position. The final devices are conical conformal antennas with simple layout and feeding. Simulated results are presented, and better aperture efficiency and lower side lobe level are obtained than our predecessors [2].", "title": "" }, { "docid": "1be5530691f5d0638a399adfc9b6bc36", "text": "Nontechnical losses, particularly due to electrical theft, have been a major concern in power system industries for a long time. Large-scale consumption of electricity in a fraudulent manner may imbalance the demand-supply gap to a great extent. Thus, there arises the need to develop a scheme that can detect these thefts precisely in the complex power networks. So, keeping focus on these points, this paper proposes a comprehensive top-down scheme based on decision tree (DT) and support vector machine (SVM). Unlike existing schemes, the proposed scheme is capable enough to precisely detect and locate real-time electricity theft at every level in power transmission and distribution (T&D). The proposed scheme is based on the combination of DT and SVM classifiers for rigorous analysis of gathered electricity consumption data. In other words, the proposed scheme can be viewed as a two-level data processing and analysis approach, since the data processed by DT are fed as an input to the SVM classifier. Furthermore, the obtained results indicate that the proposed scheme reduces false positives to a great extent and is practical enough to be implemented in real-time scenarios.", "title": "" }, { "docid": "955376cf6d04373c407987613d1c2bd1", "text": "Active learning (AL) is an increasingly popular strategy for mitigating the amount of labeled data required to train classifiers, thereby reducing annotator effort. We describe a real-world, deployed application of AL to the problem of biomedical citation screening for systematic reviews at the Tufts Medical Center's Evidence-based Practice Center. We propose a novel active learning strategy that exploits a priori domain knowledge provided by the expert (specifically, labeled features)and extend this model via a Linear Programming algorithm for situations where the expert can provide ranked labeled features. Our methods outperform existing AL strategies on three real-world systematic review datasets. We argue that evaluation must be specific to the scenario under consideration. To this end, we propose a new evaluation framework for finite-pool scenarios, wherein the primary aim is to label a fixed set of examples rather than to simply induce a good predictive model. We use a method from medical decision theory for eliciting the relative costs of false positives and false negatives from the domain expert, constructing a utility measure of classification performance that integrates the expert preferences. Our findings suggest that the expert can, and should, provide more information than instance labels alone. 
In addition to achieving strong empirical results on the citation screening problem, this work outlines many important steps for moving away from simulated active learning and toward deploying AL for real-world applications.", "title": "" }, { "docid": "b6fa1ee8c2f07b34768a78591c33bbbe", "text": "We prove that there are arbitrarily long arithmetic progressions of primes. There are three major ingredients. [. . . ] [. . . ] for all x ∈ ZN (here (m0, t0, L0) = (3, 2, 1)) and E( ν((x − y)/2) ν((x − y + h2)/2) ν(−y) ν(−y − h1) × ν((x − y′)/2) ν((x − y′ + h2)/2) ν(−y) ν(−y − h1) × ν(x) ν(x + h1) ν(x + h2) ν(x + h1 + h2) | x, h1, h2, y, y′ ∈ ZN ) = 1 + o(1) (0.1) (here (m0, t0, L0) = (12, 5, 2)). [. . . ] Proposition 0.1 (Generalised von Neumann). Suppose that ν is k-pseudorandom. Let f0, . . . , fk−1 ∈ L(ZN) be functions which are pointwise bounded by ν + νconst, or in other words |fj(x)| ≤ ν(x) + 1 for all x ∈ ZN, 0 ≤ j ≤ k − 1. (0.2) Let c0, . . . , ck−1 be a permutation of {0, 1, . . . , k − 1} (in practice we will take cj := j). Then E( ∏_{j=0}^{k−1} fj(x + cjr) | x, r ∈ ZN ) = O( inf_{0≤j≤k−1} ‖fj‖_{U^{k−1}} ) + o(1).", "title": "" }, { "docid": "36dde22c25339790e7c011ca5e8677e4", "text": "Land surface temperature and emissivity (LST&E) products are generated by the Moderate Resolution Imaging Spectroradiometer (MODIS) and Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) on the National Aeronautics and Space Administration's Terra satellite. These products are generated at different spatial, spectral, and temporal resolutions, resulting in discrepancies between them that are difficult to quantify, compounded by the fact that different retrieval algorithms are used to produce them. The highest spatial resolution MODIS emissivity product currently produced is from the day/night algorithm, which has a spatial resolution of 5 km. The lack of a high-spatial-resolution emissivity product from MODIS limits the usefulness of the data for a variety of applications and limits utilization with higher resolution products such as those from ASTER. This paper aims to address this problem by using the ASTER Temperature Emissivity Separation (TES) algorithm, combined with an improved atmospheric correction method, to generate the LST&E products for MODIS at 1-km spatial resolution and for ASTER in a consistent manner. The rms differences between the ASTER and MODIS emissivities generated from TES over the southwestern U.S. were 0.013 at 8.6 μm and 0.0096 at 11 μm, with good correlations of up to 0.83. The validation with laboratory-measured sand samples from the Algodones and Kelso Dunes in CA showed a good agreement in spectral shape and magnitude, with mean emissivity differences in all bands of 0.009 and 0.010 for MODIS and ASTER, respectively. These differences are equivalent to approximately 0.6 K in the LST for a material at 300 K and at 11 μm.", "title": "" }, { "docid": "be502c3ea5369f31293f691bca6df775", "text": "Projects in the area of architectural design and urban planning typically engage several architects as well as experts from other professions. While the design and review meetings thus often involve a large number of cooperating participants, the actual design is still done by the individuals in the time between those meetings using desktop PCs and CAD applications. A real collaborative approach to architectural design and urban planning is often limited to early paper-based sketches. 
In order to overcome these limitations we designed and realized the Augmented Round Table, a new approach to support complex design and planning decisions for architects. While AR has been applied to this area earlier, our approach does not try to replace the use of CAD systems but rather integrates them seamlessly into the collaborative AR environment. The approach is enhanced by intuitive interaction mechanisms that can be easily configured for different application scenarios.", "title": "" }, { "docid": "6ba91269b707f64d2a45729161f44807", "text": "The article is related to the development of techniques for automatic recognition of bird species by their sounds. It has been demonstrated earlier that a simple model of one time-varying sinusoid is very useful in classification and recognition of typical bird sounds. However, a large class of bird sounds are not pure sinusoids but have a clear harmonic spectrum structure. We introduce a way to classify bird syllables into four classes by their harmonic structure.", "title": "" }, { "docid": "11b05bd0c0b5b9319423d1ec0441e8a7", "text": "Today’s huge volumes of data, heterogeneous information and communication technologies, and borderless cyberinfrastructures create new challenges for security experts and law enforcement agencies investigating cybercrimes. The future of digital forensics is explored, with an emphasis on these challenges and the advancements needed to effectively protect modern societies and pursue cybercriminals.", "title": "" }, { "docid": "b51021e995fc4be50028a0a152db7e7a", "text": "Human pose estimation using deep neural networks aims to map input images with large variations into multiple body keypoints, which must satisfy a set of geometric constraints and interdependence imposed by the human body model. This is a very challenging nonlinear manifold learning process in a very high dimensional feature space. We believe that the deep neural network, which is inherently an algebraic computation system, is not the most efficient way to capture highly sophisticated human knowledge, for example those highly coupled geometric characteristics and interdependence between keypoints in human poses. In this work, we propose to explore how external knowledge can be effectively represented and injected into the deep neural networks to guide its training process using learned projections that impose proper prior. Specifically, we use the stacked hourglass design and inception-resnet module to construct a fractal network to regress human pose images into heatmaps with no explicit graphical modeling. We encode external knowledge with visual features, which are able to characterize the constraints of human body models and evaluate the fitness of intermediate network output. We then inject these external features into the neural network using a projection matrix learned using an auxiliary cost function. The effectiveness of the proposed inception-resnet module and the benefit in guided learning with knowledge projection is evaluated on two widely used human pose estimation benchmarks. Our approach achieves state-of-the-art performance on both datasets.", "title": "" }, { "docid": "41135401a2f04797ea2b4989065613bd", "text": "With the rapid expansion of new available information presented to us online on a daily basis, text classification becomes imperative in order to classify and maintain it. Word2vec offers a unique perspective to the text mining community. 
By converting words and phrases into a vector representation, word2vec takes an entirely new approach on text classification. Based on the assumption that word2vec brings extra semantic features that helps in text classification, our work demonstrates the effectiveness of word2vec by showing that tf-idf and word2vec combined can outperform tf-idf because word2vec provides complementary features (e.g. semantics that tf-idf can't capture) to tf-idf. Our results show that the combination of word2vec weighted by tf-idf and tf-idf does not outperform tf-idf consistently. It is consistent enough to say the combination of the two can outperform either individually.", "title": "" }, { "docid": "bfdf6e8e98793388dcf8f13b7147faf0", "text": "Recently, Long Term Evolution (LTE) has developed a femtocell for indoor coverage extension. However, interference problem between the femtocell and the macrocell should be solved in advance. In this paper, we propose an interference management scheme in the LTE femtocell systems using Fractional Frequency Reuse (FFR). Under the macrocell allocating frequency band by the FFR, the femtocell chooses sub-bands which are not used in the macrocell sub-area to avoid interference. Simulation results show that proposed scheme enhances total/edge throughputs and reduces the outage probability in overall network, especially for the cell edge users.", "title": "" }, { "docid": "4a098609770618240fbaebbbc891883d", "text": "We present CHARAGRAM embeddings, a simple approach for learning character-based compositional models to embed textual sequences. A word or sentence is represented using a character n-gram count vector, followed by a single nonlinear transformation to yield a low-dimensional embedding. We use three tasks for evaluation: word similarity, sentence similarity, and part-of-speech tagging. We demonstrate that CHARAGRAM embeddings outperform more complex architectures based on character-level recurrent and convolutional neural networks, achieving new state-of-the-art performance on several similarity tasks.", "title": "" }, { "docid": "0eb61ddeca941e34b40bfe3e58b70497", "text": "This article surveys the literature on analyses of mobile traffic collected by operators within their network infrastructure. This is a recently emerged research field, and, apart from a few outliers, relevant works cover the period from 2005 to date, with a sensible densification over the last three years. We provide a thorough review of the multidisciplinary activities that rely on mobile traffic datasets, identifying major categories and sub-categories in the literature, so as to outline a hierarchical classification of research lines. When detailing the works pertaining to each class, we balance a comprehensive view of state-of-the-art results with punctual focuses on the methodological aspects. Our approach provides a complete introductory guide to the research based on mobile traffic analysis. It allows summarizing the main findings of the current state-of-the-art, as well as pinpointing important open research directions.", "title": "" }, { "docid": "9e3263866208bbc6a9019b3c859d2a66", "text": "A residual network (or ResNet) is a standard deep neural net architecture, with state-of-the-art performance across numerous applications. The main premise of ResNets is that they allow the training of each layer to focus on fitting just the residual of the previous layer's output and the target output. 
Thus, we should expect that the trained network is no worse than what we can obtain if we remove the residual layers and train a shallower network instead. However, due to the non-convexity of the optimization problem, it is not at all clear that ResNets indeed achieve this behavior, rather than getting stuck at some arbitrarily poor local minimum. In this paper, we rigorously prove that arbitrarily deep, nonlinear residual units indeed exhibit this behavior, in the sense that the optimization landscape contains no local minima with value above what can be obtained with a linear predictor (namely a 1-layer network). Notably, we show this under minimal or no assumptions on the precise network architecture, data distribution, or loss function used. We also provide a quantitative analysis of approximate stationary points for this problem. Finally, we show that with a certain tweak to the architecture, training the network with standard stochastic gradient descent achieves an objective value close or better than any linear predictor.", "title": "" }, { "docid": "d1237eb5ebdfafac5a80215868dee206", "text": "Multipath is exploited to image targets that are hidden due to lack of line of sight (LOS) path in urban environments. Urban radar scenes include building walls, therefore creating reflections causing multipath returns. Conventional processing via synthetic aperture beamforming algorithms do not detect or localize the target at its true position. To remove these limitations, two multipath exploitation techniques to image a hidden target at its true location are presented under the assumptions that the locations of the reflecting walls are known and that the target multipath is resolvable and detectable. The first technique directly operates on the radar returns, whereas the second operates on the traditional beamformed image. Both these techniques mitigate the false alarms arising from the multipath while simultaneously permitting the shadowed target to be detected at its true location. While these techniques are general, they are examined for two important urban radar applications: detecting shadowed targets in an urban canyon, and detecting shadowed targets around corners.", "title": "" }, { "docid": "5b7930de475b6f83f8333439fd0f9c3b", "text": "Cloud applications are increasingly built from a mixture of runtime technologies. Hosted functions and service-oriented web hooks are among the most recent ones which are natively supported by cloud platforms. They are collectively referred to as serverless computing by application engineers due to the transparent on-demand instance activation and microbilling without the need to provision infrastructure explicitly. This half-day tutorial explains the use cases for serverless computing and the drivers and existing software solutions behind the programming and deployment model also known as Function-as-a-Service in the overall cloud computing stack. Furthermore, it presents practical open source tools for deriving functions from legacy code and for the management and execution of functions in private and public clouds.", "title": "" } ]
scidocsrr
eb1a80981b9b86b523dda13cfc2d674d
Japanese Society for Cancer of the Colon and Rectum (JSCCR) Guidelines 2014 for treatment of colorectal cancer
[ { "docid": "b966af7f15e104865944ac44fad23afc", "text": "Five cases are described where minute foci of adenocarcinoma have been demonstrated in the mesorectum several centimetres distal to the apparent lower edge of a rectal cancer. In 2 of these there was no other evidence of lymphatic spread of the tumour. In orthodox anterior resection much of this tissue remains in the pelvis, and its is suggested that these foci might lead to suture-line or pelvic recurrence. Total excision of the mesorectum has, therefore, been carried out as a part of over 100 consecutive anterior resections. Fifty of these, which were classified as 'curative' or 'conceivably curative' operations, have now been followed for over 2 years with no pelvic or staple-line recurrence.", "title": "" }, { "docid": "bc4a72d96daf03f861b187fa73f57ff6", "text": "BACKGROUND\nShort-term preoperative radiotherapy and total mesorectal excision have each been shown to improve local control of disease in patients with resectable rectal cancer. We conducted a multicenter, randomized trial to determine whether the addition of preoperative radiotherapy increases the benefit of total mesorectal excision.\n\n\nMETHODS\nWe randomly assigned 1861 patients with resectable rectal cancer either to preoperative radiotherapy (5 Gy on each of five days) followed by total mesorectal excision (924 patients) or to total mesorectal excision alone (937 patients). The trial was conducted with the use of standardization and quality-control measures to ensure the consistency of the radiotherapy, surgery, and pathological techniques.\n\n\nRESULTS\nOf the 1861 patients randomly assigned to one of the two treatment groups, 1805 were eligible to participate. The overall rate of survival at two years among the eligible patients was 82.0 percent in the group assigned to both radiotherapy and surgery and 81.8 percent in the group assigned to surgery alone (P=0.84). Among the 1748 patients who underwent a macroscopically complete local resection, the rate of local recurrence at two years was 5.3 percent. The rate of local recurrence at two years was 2.4 percent in the radiotherapy-plus-surgery group and 8.2 percent in the surgery-only group (P<0.001).\n\n\nCONCLUSIONS\nShort-term preoperative radiotherapy reduces the risk of local recurrence in patients with rectal cancer who undergo a standardized total mesorectal excision.", "title": "" } ]
[ { "docid": "29c8c8abf86b2d7358a1cd70751f3f93", "text": "Data domain description concerns the characterization of a data set. A good description covers all target data but includes no superfluous space. The boundary of a dataset can be used to detect novel data or outliers. We will present the Support Vector Data Description (SVDD) which is inspired by the Support Vector Classifier. It obtains a spherically shaped boundary around a dataset and analogous to the Support Vector Classifier it can be made flexible by using other kernel functions. The method is made robust against outliers in the training set and is capable of tightening the description by using negative examples. We show characteristics of the Support Vector Data Descriptions using artificial and real data.", "title": "" }, { "docid": "c4183c8b08da8d502d84a650d804cac8", "text": "A three-phase current source gate turn-off (GTO) thyristor rectifier is described with a high power factor, low line current distortion, and a simple main circuit. It adopts pulse-width modulation (PWM) control techniques obtained by analyzing the PWM patterns of three-phase current source rectifiers/inverters, and it uses a method of generating such patterns. In addition, by using an optimum set-up of the circuit constants, the GTO switching frequency is reduced to 500 Hz. This rectifier is suitable for large power conversion, because it can reduce GTO switching loss and its snubber loss.<<ETX>>", "title": "" }, { "docid": "381d42fca0f242c10d115113c7a33c67", "text": "Abstract. We present a detailed workload characterization of a multi-tiered system that hosts an e-commerce site. Using the TPC-W workload and via experimental measurements, we illustrate how workload characteristics affect system behavior and operation, focusing on the statistical properties of dynamic page generation. This analysis allows to identify bottlenecks and the system conditions under which there is degradation in performance. Consistent with the literature, we find that the distribution of the dynamic page generation is heavy-tailed, which is caused by the interaction of the database server with the storage system. Furthermore, by examining the queuing behavior at the database server, we present experimental evidence of the existence of statistical correlation in the distribution of dynamic page generation times, especially under high load conditions. We couple this observation with the existence (and switching) of bottlenecks in the system.", "title": "" }, { "docid": "dcc10f93667d23ed3af321086114f261", "text": "Background: Silver nanoparticles (SNPs) are used extensively in areas such as medicine, catalysis, electronics, environmental science, and biotechnology. Therefore, facile synthesis of SNPs from an eco-friendly, inexpensive source is a prerequisite. In the present study, fabrication of SNPs from the leaf extract of Butea monosperma (Flame of Forest) has been performed. SNPs were synthesized from 1% leaf extract solution and characterized by ultraviolet-visible (UV-vis) spectroscopy and transmission electron microscopy (TEM). The mechanism of SNP formation was studied by Fourier transform infrared (FTIR), and anti-algal properties of SNPs on selected toxic cyanobacteria were evaluated. Results: TEM analysis indicated that size distribution of SNPs was under 5 to 30 nm. FTIR analysis indicated the role of amide I and II linkages present in protein in the reduction of silver ions. SNPs showed potent anti-algal properties on two cyanobacteria, namely, Anabaena spp. 
and Cylindrospermum spp. At a concentration of 800 μg/ml of SNPs, maximum anti-algal activity was observed in both cyanobacteria. Conclusions: This study clearly demonstrates that small-sized, stable SNPs can be synthesized from the leaf extract of B. monosperma. SNPs can be effectively employed for removal of toxic cyanobacteria.", "title": "" }, { "docid": "9d33565dbd5148730094a165bb2e968f", "text": "The demand for greater battery life in low-power consumer electronics and implantable medical devices presents a need for improved energy efficiency in the management of small rechargeable cells. This paper describes an ultra-compact analog lithium-ion (Li-ion) battery charger with high energy efficiency. The charger presented here utilizes the tanh basis function of a subthreshold operational transconductance amplifier to smoothly transition between constant-current and constant-voltage charging regimes without the need for additional area- and power-consuming control circuitry. Current-domain circuitry for end-of-charge detection negates the need for precision-sense resistors in either the charging path or control loop. We show theoretically and experimentally that the low-frequency pole-zero nature of most battery impedances leads to inherent stability of the analog control loop. The circuit was fabricated in an AMI 0.5-μm complementary metal-oxide semiconductor process, and achieves 89.7% average power efficiency and an end voltage accuracy of 99.9% relative to the desired target 4.2 V, while consuming 0.16 mm2 of chip area. To date and to the best of our knowledge, this design represents the most area-efficient and most energy-efficient battery charger circuit reported in the literature.", "title": "" }, { "docid": "ba2cc10384c8be27ca0251c574998a1b", "text": "As the extension of Distributed Denial-of-Service (DDoS) attacks to application layer in recent years, researchers pay much interest in these new variants due to a low-volume and intermittent pattern with a higher level of stealthiness, invaliding the state-of-the-art DDoS detection/defense mechanisms. We describe a new type of low-volume application layer DDoS attack--Tail Attacks on Web Applications. Such attack exploits a newly identified system vulnerability of n-tier web applications (millibottlenecks with sub-second duration and resource contention with strong dependencies among distributed nodes) with the goal of causing the long-tail latency problem of the target web application (e.g., 95th percentile response time > 1 second) and damaging the long-term business of the service provider, while all the system resources are far from saturation, making it difficult to trace the cause of performance degradation.\n We present a modified queueing network model to analyze the impact of our attacks in n-tier architecture systems, and numerically solve the optimal attack parameters. We adopt a feedback control-theoretic (e.g., Kalman filter) framework that allows attackers to fit the dynamics of background requests or system state by dynamically adjusting attack parameters. To evaluate the practicality of such attacks, we conduct extensive validation through not only analytical, numerical, and simulation results but also real cloud production setting experiments via a representative benchmark website equipped with state-of-the-art DDoS defense tools. 
We further propose a solution to detect and defend against the proposed attacks, involving three stages: fine-grained monitoring, identifying bursts, and blocking bots.", "title": "" }, { "docid": "bf7b3cdb178fd1969257f56c0770b30b", "text": "Relation Extraction is an important subtask of Information Extraction which has the potential of employing deep learning (DL) models with the creation of large datasets using distant supervision. In this review, we compare the contributions and pitfalls of the various DL models that have been used for the task, to help guide the path ahead.", "title": "" }, { "docid": "e50d156bde3479c27119231073705f70", "text": "The economic theory of the consumer is a combination of positive and normative theories. Since it is based on a rational maximizing model it describes how consumers should choose, but it is alleged to also describe how they do choose. This paper argues that in certain well-defined situations many consumers act in a manner that is inconsistent with economic theory. In these situations economic theory will make systematic errors in predicting behavior. Kahneman and Tversky's prospect theory is proposed as the basis for an alternative descriptive theory. Topics discussed are: underweighting of opportunity costs, failure to ignore sunk costs, search behavior, choosing not to choose and regret, and precommitment and self-control.", "title": "" }, { "docid": "112f7444f0881bf940d056a96c6f5ee3", "text": "This paper describes our approach on “Information Extraction from Microblogs Posted during Disasters” as an attempt in the shared task of the Microblog Track at Forum for Information Retrieval Evaluation (FIRE) 2016 [2]. Our method uses vector space word embeddings to extract information from microblogs (tweets) related to disaster scenarios, and can be replicated across various domains. The system, which shows encouraging performance, was evaluated on the Twitter dataset provided by the FIRE 2016 shared task. CCS Concepts •Computing methodologies→Natural language processing; Information extraction;", "title": "" }, { "docid": "a9242c3fca5a8ffdf0e03776b8165074", "text": "This paper presents inexpensive computer vision techniques allowing to measure the texture characteristics of woven fabric, such as weave repeat and yarn counts, and the surface roughness. First, we discuss the automatic recognition of weave pattern and the accurate measurement of yarn counts by analyzing fabric sample images. We propose a surface roughness indicator FDFFT, which is the 3-D surface fractal dimension measurement calculated from the 2-D fast Fourier transform of high-resolution 3-D surface scan. The proposed weave pattern recognition method was validated by using computer-simulated woven samples and real woven fabric images. All weave patterns of the tested fabric samples were successfully recognized, and computed yarn counts were consistent with the manual counts. The rotation invariance and scale invariance of FDFFT were validated with fractal Brownian images. Moreover, to evaluate the correctness of FDFFT, we provide a method of calculating standard roughness parameters from the 3-D fabric surface. According to the test results, we demonstrated that FDFFT is a fast and reliable parameter for fabric roughness measurement based on 3-D surface data.", "title": "" }, { "docid": "237a88ea092d56c6511bb84604e6a7c7", "text": "A simple, low-cost, and compact printed dual-band fork-shaped monopole antenna for Bluetooth and ultrawideband (UWB) applications is proposed. 
Dual-band operation covering 2.4-2.484 GHz (Bluetooth) and 3.1-10.6 GHz (UWB) frequency bands are obtained by using a fork-shaped radiating patch and a rectangular ground patch. The proposed antenna is fed by a 50-Ω microstrip line and fabricated on a low-cost FR4 substrate having dimensions 42 (L_sub) × 24 (W_sub) × 1.6 (H) mm^3. The antenna structure is fabricated and tested. Measured S_11 is ≤ -10 dB over 2.3-2.5 and 3.1-12 GHz. The antenna shows acceptable gain flatness with nearly omnidirectional radiation patterns over both Bluetooth and UWB bands.", "title": "" }, { "docid": "5350ffea7a4187f0df11fd71562aba43", "text": "The presence of buried landmines is a serious threat in many areas around the World. Although various techniques have been proposed in the literature to detect and recognize buried objects, automatic and easy to use systems providing accurate performance are still under research. Given the incredible results achieved by deep learning in many detection tasks, in this paper we propose a pipeline for buried landmine detection based on convolutional neural networks (CNNs) applied to ground-penetrating radar (GPR) images. The proposed algorithm is capable of recognizing whether a B-scan profile obtained from GPR acquisitions contains traces of buried mines. Validation of the presented system is carried out on real GPR acquisitions, albeit system training can be performed simply relying on synthetically generated data. Results show that it is possible to reach 95% of detection accuracy without training in real acquisition of landmine profiles.", "title": "" }, { "docid": "7d9162b079a155f48688a1d70af5482a", "text": "Determination of microgram quantities of protein in the Bradford Coomassie brilliant blue assay is accomplished by measurement of absorbance at 590 nm. However, an intrinsic nonlinearity compromises the sensitivity and accuracy of this method. It is shown that under standard assay conditions, the ratio of the absorbances, 590 nm over 450 nm, is strictly linear with protein concentration. This simple procedure increases the accuracy and improves the sensitivity of the assay about 10-fold, permitting quantitation down to 50 ng of bovine serum albumin. Furthermore, protein assay in presence of up to 35-fold weight excess of sodium dodecyl sulfate (detergent) over bovine serum albumin (protein) can be performed. A linear equation that perfectly fits the experimental data is provided on the basis of mass action and Beer's law.", "title": "" }, { "docid": "867c8c0286c0fed4779f550f7483770d", "text": "Numerous studies report that standard volatility models have low explanatory power, leading some researchers to question whether these models have economic value. We examine this question by using conditional mean-variance analysis to assess the value of volatility timing to short-horizon investors. We find that the volatility timing strategies outperform the unconditionally efficient static portfolios that have the same target expected return and volatility. This finding is robust to estimation risk and transaction costs.", "title": "" }, { "docid": "348c62670a729da42654f0cf685bba53", "text": "The networks of intelligent buildings usually consist of a great number of smart devices. 
Since many smart devices only support on-site configuration and upgrade, and communication between devices could be observed and even altered by attackers, efficiency and security are two key concerns in maintaining and managing the devices used in intelligent building networks. In this paper, the authors apply the technology of software defined networking to satisfy the requirement for efficiency in intelligent building networks. More specific, a protocol stack in smart devices that support OpenFlow is designed. In addition, the authors designed the lightweight security mechanism with two foundation protocols and a full protocol that uses the foundation protocols as example. Performance and session key establishment for the security mechanism are also discussed.", "title": "" }, { "docid": "1a99b71b6c3c33d97c235a4d72013034", "text": "Crowdfunding systems are social media websites that allow people to donate small amounts of money that add up to fund valuable larger projects. These websites are structured around projects: finite campaigns with welldefined goals, end dates, and completion criteria. We use a dataset from an existing crowdfunding website — the school charity Donors Choose — to understand the value of completing projects. We find that completing a project is an important act that leads to larger donations (over twice as large), greater likelihood of returning to donate again, and few projects that expire close but not complete. A conservative estimate suggests that this completion bias led to over $15 million in increased donations to Donors Choose, representing approximately 16% of the total donations for the period under study. This bias suggests that structuring many types of collaborative work as a series of projects might increase contribution significantly. Many social media creators find it rather difficult to motivate users to actively participate and contribute their time, energy, or money to make a site valuable to others. The value in social media largely derives from interactions between and among people who are working together to achieve common goals. To encourage people to participate and contribute, social media creators regularly look for different ways of structuring participation. Some use a blog-type format, such as Facebook, Twitter, or Tumblr. Some use a collaborative document format like Wikipedia. And some use a project-based format. A project is a well-defined set of tasks that needs to be accomplished. Projects usually have a well-defined end goal — something that needs to be accomplished for the project to be considered a success — and an end date — a day by which the project needs to be completed. Much work in society is structured around projects; for example, Hollywood makes movies by organizing each movie’s production as a project, hiring a new crew for each movie. Construction companies organize their work as a sequence of projects. And projects are common in knowledge-work based businesses (?). Copyright c © 2013, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Another important place we see project-based organization is in crowdfunding websites. Crowdfunding is a relatively new phenomenon that merges modern social web technologies with project-based fundraising. It is a new form of social media that publicizes projects that need money, and allows the crowd to each make a small contribution toward the larger project. 
By aggregating many small donations, crowdfunding websites can fund large and interesting projects of all kinds. Kickstarter, for example, has raised over $400 million for over 35,000 creative projects, and Donors Choose has raised over $90 million for over 200,000 classroom projects. Additionally, crowdfunding websites represent potential new business models for a number of industries, including some struggling to find viable revenue streams: Sellaband has proven successful in helping musicians fund the creation and distribution of their music; and Spot.Us enables journalists to fund and publish investigative news. In this paper, I seek to understand why crowdfunding systems that are organized around projects are successful. Using a dataset from Donors Choose, a crowdfunding charity that funds classroom projects for K–12 school teachers, I find that completing a project is a powerful motivator that helps projects succeed in the presence of a crowd: donations that complete a project are over twice as large as normal donations. People who make these donations are more likely to return and donate in the future, and their future donations are larger. And few projects get close to completion but fail. Together, these results suggest that completing the funding for a project is an important act for the crowd, and structuring the fundraising around completable projects helps enable success. This also has implications for other types of collaborative technologies. Background and Related Ideas", "title": "" }, { "docid": "26052ad31f5ccf55398d6fe3b9850674", "text": "An electroneurographic study performed on the peripheral nerves of 25 patients with severe cirrhosis following viral hepatitis showed slight slowing (P > 0.05) of motor conduction velocity (CV) and significant diminution (P < 0.001) of sensory CV and mixed sensorimotor-evoked potentials, associated with a significant decrease in the amplitude of sensory evoked potentials. The slowing was about equal in the distal (digital) and in the proximal segments of the same nerve. A mixed axonal degeneration and segmental demyelination is presumed to explain these findings. The CV measurements proved helpful for an early diagnosis of hepatic polyneuropathy showing subjective symptoms in the subclinical stage. Electroneurographic examinations of the peripheral nerves in 25 patients with post-viral liver cirrhosis showed the following: a slight reduction (P > 0.05) of motor conduction velocity (CV) and a significantly slowed CV in sensory fibers (P < 0.001), in both proximal and distal fibers. In the mixed evoked potentials, a slowing of the CV was found, lying between the values of the motor and sensory fibers. At the same time, a reduction in the amplitude of the NAP was observed. These findings point to axonal degeneration and demyelination in most of the peripheral nerves examined.
Electroneurographic examinations made it possible to assess the functional state of the peripheral nerve and to detect certain changes already in the initial stage of the disease, when the patient does not yet show clinical signs of a peripheral neuropathy.", "title": "" }, { "docid": "709aa1bc4ace514e46f7edbb07fb03a9", "text": "Empirical scoring functions based on either molecular force fields or cheminformatics descriptors are widely used, in conjunction with molecular docking, during the early stages of drug discovery to predict potency and binding affinity of a drug-like molecule to a given target. These models require expert-level knowledge of physical chemistry and biology to be encoded as hand-tuned parameters or features rather than allowing the underlying model to select features in a data-driven procedure. Here, we develop a general 3-dimensional spatial convolution operation for learning atomic-level chemical interactions directly from atomic coordinates and demonstrate its application to structure-based bioactivity prediction. The atomic convolutional neural network is trained to predict the experimentally determined binding affinity of a protein-ligand complex by direct calculation of the energy associated with the complex, protein, and ligand given the crystal structure of the binding pose. Non-covalent interactions present in the complex that are absent in the protein-ligand sub-structures are identified and the model learns the interaction strength associated with these features. We test our model by predicting the binding free energy of a subset of protein-ligand complexes found in the PDBBind dataset and compare with state-of-the-art cheminformatics and machine learning-based approaches. We find that all methods achieve experimental accuracy (less than 1 kcal/mol mean absolute error) and that atomic convolutional networks either outperform or perform competitively with the cheminformatics based methods. Unlike all previous protein-ligand prediction systems, atomic convolutional networks are end-to-end and fully-differentiable. They represent a new data-driven, physics-based deep learning model paradigm that offers a strong foundation for future improvements in structure-based bioactivity prediction.", "title": "" }, { "docid": "8eb0f822b4e8288a6b78abf0bf3aecbb", "text": "Cloud computing enables access to the widespread services and resources in cloud datacenters for mitigating resource limitations in low-potential client devices. Computational cloud is an attractive platform for computational offloading due to the attributes of scalability and availability of resources. Therefore, mobile cloud computing (MCC) leverages the application processing services of computational clouds for enabling computational-intensive and ubiquitous mobile applications on smart mobile devices (SMDs). Computational offloading frameworks focus on offloading intensive mobile applications at different granularity levels which involve resource-intensive mechanism of application profiling and partitioning at runtime. As a result, the energy consumption cost (ECC) and turnaround time of the application is increased. This paper proposes an active service migration (ASM) framework for computational offloading to cloud datacenters, which employs lightweight procedure for the deployment of runtime distributed platform. The proposed framework employs coarse granularity level and simple developmental and deployment procedures for computational offloading in MCC. 
ASM is evaluated by benchmarking prototype application on the Android devices in the real MCC environment. It is found that the turnaround time of the application reduces up to 45 % and ECC of the application reduces up to 33 % in ASM-based computational offloading as compared to traditional offloading techniques which shows the lightweight nature of the proposed framework for computational offloading.", "title": "" }, { "docid": "9e6bfc7b5cc87f687a699c62da013083", "text": "In order to establish low-cost and strongly-immersive desktop virtual experiment system, a solution based on Kinect and Unity3D engine technology was herein proposed, with a view to applying Kinect gesture recognition and triggering more spontaneous human-computer interactions in three-dimensional virtual environment. A kind of algorithm tailored to the detection of concave-convex points of fingers is put forward to identify various gestures and interaction semantics. In the context of Unity3D, Finite-State Machine (FSM) programming was applied in intelligent management for experimental logic tasks. A “Virtual Experiment System for Electrician Training” was designed and put into practice by these methods. The applications of “Lighting Circuit” module prove that these methods can be satisfyingly helpful to complete virtual experimental tasks and improve user experience. Compared with traditional WIMP interaction, Kinect somatosensory interaction is combined with Unity3D so that three-dimensional virtual system with strong immersion can be established.", "title": "" } ]
scidocsrr
71c48aa46500ce1636999a2fd0180dab
Multi-Sentence Compression: Finding Shortest Paths in Word Graphs
[ { "docid": "fc164dc2d55cec2867a99436d37962a1", "text": "We address the text-to-text generation problem of sentence-level paraphrasing — a phenomenon distinct from and more difficult than wordor phrase-level paraphrasing. Our approach applies multiple-sequence alignment to sentences gathered from unannotated comparable corpora: it learns a set of paraphrasing patterns represented by word lattice pairs and automatically determines how to apply these patterns to rewrite new sentences. The results of our evaluation experiments show that the system derives accurate paraphrases, outperforming baseline systems.", "title": "" } ]
[ { "docid": "f041a02b565ca9100d20b479fb6951c8", "text": "Linear blending is a very popular skinning technique for virtual characters, even though it does not always generate realistic deformations. Recently, nonlinear blending techniques (such as dual quaternions) have been proposed in order to improve upon the deformation quality of linear skinning. The trade-off consists of the increased vertex deformation time and the necessity to redesign parts of the 3D engine. In this paper, we demonstrate that any nonlinear skinning technique can be approximated to an arbitrary degree of accuracy by linear skinning, using just a few samples of the nonlinear blending function (virtual bones). We propose an algorithm to compute this linear approximation in an automatic fashion, requiring little or no interaction with the user. This enables us to retain linear skinning at the core of our 3D engine without compromising the visual quality or character setup costs.", "title": "" }, { "docid": "da74e402f4542b6cbfb27f04c7640eb4", "text": "Hand-built verb clusters such as the widely used Levin classes (Levin, 1993) have proved useful, but have limited coverage. Verb classes automatically induced from corpus data such as those from VerbKB (Wijaya, 2016), on the other hand, can give clusters with much larger coverage, and can be adapted to specific corpora such as Twitter. We present a method for clustering the outputs of VerbKB: verbs with their multiple argument types, e.g.“marry(person, person)”, “feel(person, emotion).” We make use of a novel lowdimensional embedding of verbs and their arguments to produce high quality clusters in which the same verb can be in different clusters depending on its argument type. The resulting verb clusters do a better job than hand-built clusters of predicting sarcasm, sentiment, and locus of control in tweets.", "title": "" }, { "docid": "3d4633e9c26d46fb7ef1e5865835bde5", "text": "A multiple input, multiple output (MIMO) radar emits probings signals with multiple transmit antennas and records the reflections from targets with multiple receive antennas. Estimating the relative angles, delays, and Doppler shifts from the received signals allows to determine the locations and velocities of the targets. Standard approaches to MIMO radar based on digital matched filtering or compressed sensing only resolve the angle-delay-Doppler triplets on a (1/(NTNR), 1/B, 1/T ) grid, where NT and NR are the number of transmit and receive antennas, B is the bandwidth of the probing signals, and T is the length of the time interval over which the reflections are observed. In this work, we show that the continuous angle-delay-Doppler triplets and the corresponding attenuation factors can be recovered perfectly by solving a convex optimization problem. This result holds provided that the angle-delay-Doppler triplets are separated either by 10/(NTNR - 1) in angle, 10.01/B in delay, or 10.01/T in Doppler direction. Furthermore, this result is optimal (up to log factors) in the number of angle-delay-Doppler triplets that can be recovered.", "title": "" }, { "docid": "350cda71dae32245b45d96b5fdd37731", "text": "In this work, we focus on cyclic codes over the ring F2+uF2+vF2+uvF2, which is not a finite chain ring. We use ideas from group rings and works of AbuAlrub et al. in (Des Codes Crypt 42:273–287, 2007) to characterize the ring (F2 + uF2 + vF2 + uvF2)/(x − 1) and cyclic codes of odd length. 
Some good binary codes are obtained as the images of cyclic codes over F2+uF2+vF2+uvF2 under two Gray maps that are defined. We also characterize the binary images of cyclic codes over F2 + uF2 + vF2 + uvF2 in general.", "title": "" }, { "docid": "f918ca37dcf40512c4efa013567a126b", "text": "In the field of robots' obstacle avoidance and navigation, indirect contact sensors such as visual, ultrasonic and infrared detection are widely used. However, the performance of these sensors is always influenced by the severe environment, especially under the dark, dense fog, underwater conditions. The obstacle avoidance robot based on tactile sensor is proposed in this paper to realize the autonomous obstacle avoidance navigation by only using three dimensions force sensor. In addition, the mathematical model and algorithm are optimized to make up the deficiency of tactile sensor. Finally, the feasibility and reliability of this study are verified by the simulation results.", "title": "" }, { "docid": "40d4bd1bc3876a772cfbb2ed5b17052d", "text": "Adaptive cruise control is one of the most widely used vehicle driver assistance systems. However, uncertainty about drivers' lane change maneuvers in surrounding vehicles, such as unexpected cut-in, remains a challenge. We propose a novel adaptive cruise control framework combining convolution neural network (CNN)-based lane-change-intention inference and a predictive controller. We transform real-world driving data, collected on public roads with only standard production sensors, to a simplified bird's-eye view. This enables a CNN-based inference approach with low computational cost and robustness to noisy input. The predicted inference of traffic participants' lane change intention is utilized to improve safety and ride comfort with model predictive control. Simulation results based on driving scene reconstruction demonstrate the superior performance of inference using the proposed CNN-based approach, as well as enhanced safety and ride comfort.", "title": "" }, { "docid": "9ed2f6172271c6ccdba2ab16e2d6b3d6", "text": "An important problem in analyzing big data is subspace clustering, i.e., to represent a collection of points in a high-dimensional space via the union of low-dimensional subspaces. Sparse Subspace Clustering (SSC) and LowRank Representation (LRR) are the state-of-the-art methods for this task. These two methods are fundamentally similar in that both are based on convex optimization exploiting the intuition of “Self-Expressiveness”. The main difference is that SSC minimizes the vector `1 norm of the representation matrix to induce sparsity while LRR minimizes the nuclear norm (aka trace norm) to promote a low-rank structure. Because the representation matrix is often simultaneously sparse and low-rank, we propose a new algorithm, termed Low-Rank Sparse Subspace Clustering (LRSSC), by combining SSC and LRR, and develop theoretical guarantees of the success of the algorithm. The results reveal interesting insights into the strengths and weaknesses of SSC and LRR, and demonstrate how LRSSC can take advantage of both methods in preserving the “Self-Expressiveness Property” and “Graph Connectivity” at the same time. 
A byproduct of our analysis is that it also expands the theoretical guarantee of SSC to handle cases when the subspaces have arbitrarily small canonical angles but are “nearly independent”.", "title": "" }, { "docid": "5d85e552841fe415daa72dff2a5f9706", "text": "Many security faculty members and practitioners bemoan the lack of good books in the field. Those of us who teach often find ourselves forced to rely on collections of papers to fortify our courses. In the last few years, however, we've started to see the appearance of some high-quality books to support our endeavors. Matt Bishop's book—Computer Security: Art and Science—is definitely hefty and packed with lots of information. It's a large book (with more than 1,000 pages), and it covers most any computer security topic that might be of interest. The first section discusses basic security issues at the definitional level. The Policy section addresses the relationship between policy and security, examining several types of policies in the process. Implementation I covers cryptography and its role in security. Implementation II describes how to apply policy requirements in systems. The Assurance section, which Elisabeth Sullivan wrote, introduces assurance basics and formal methods. The Special Topics section discusses malicious logic, vulnerability analysis, auditing, and intrusion detection. Finally, the Practicum ties all the previously discussed material to real-world examples. A ninth additional section, called End Matter, discusses miscellaneous supporting mathematical topics and concludes with an example. At a publisher's list price of US$74.99, you'll want to know why you should consider buying such an expensive book. Several things set it apart from other, similar, offerings. Most importantly, the book provides numerous examples and, refreshingly, definitions. A vertical bar alongside the examples distinguishes them from other text, so picking them out is easy. The book also includes a bibliography of over 1,000 references. Additionally, each chapter includes a summary, suggestions for further reading, research issues, and practice exercises. The format and layout are good, and the fonts are readable. The book is aimed at several audiences, and the preface describes many roadmaps, one of which discusses dependencies among the various chapters. Instructors can use it at the advanced undergraduate level or for introductory graduate-level computer-security courses. The preface also includes a mapping of suggested topics for undergraduate and graduate courses, presuming a certain amount of math and theoretical computer-science background as prerequisites. Practitioners can use the book as a resource for information on specific topics; the examples in the Practicum are ideally suited for them. So, what's the final verdict? Practitioners will want to consider this book as a reference to add to their bookshelves. Teachers of advanced undergraduate or introductory …", "title": "" }, { "docid": "290796519b7757ce7ec0bf4d37290eed", "text": "A freely available English thesaurus of related words is presented that has been automatically compiled by analyzing the distributional similarities of words in the British National Corpus. The quality of the results has been evaluated by comparison with human judgments as obtained from non-native and native speakers of English who were asked to provide rankings of word similarities. 
According to this measure, the results generated by our system are better than the judgments of the non-native speakers and come close to the native speakers’ performance. An advantage of our approach is that it does not require syntactic parsing and therefore can be more easily adapted to other languages. As an example, a similar thesaurus for German has already been completed.", "title": "" }, { "docid": "f83f5eaa47f4634311297886b8e2228c", "text": "Purpose of this study is to determine whether cash flow impacts business failure prediction using the BP models (Altman z-score, or Neural Network, or any of the BP models which could be implemented having objective to predict the financial distress or more complex financial failure-bankruptcy of the banks or companies). Units of analysis are financial ratios derived from raw financial data: B/S, P&L statements (income statements) and cash flow statements of both failed and non-failed companies/corporates that have been collected from the auditing resources and reports performed. A number of these studies examined whether a cash flow improve the prediction of business failure. The authors would have the objective to show the evidence and usefulness and efficacy of statistical models such as Altman Z-score discriminant analysis bankruptcy predictive models to assess client on going concern status. Failed and non-failed companies were selected for analysis to determine whether the cash flow improves the business failure prediction aiming to proof that the cash flow certainly makes better financial distress and bankruptcy prediction possible. Key-Words: bankruptcy prediction, financial distress, financial crisis, transition economy, auditing statement, balance sheet, profit and loss accounts, income statements", "title": "" }, { "docid": "6ecc241a25fdbf30a0f6e31c4a6f3361", "text": "Widespread personalized computing systems play an already important and fast-growing role in diverse contexts, such as location-based services, recommenders, commercial Web-based services, and teaching systems. The personalization in these systems is driven by information about the user, a user model. Moreover, as computers become both ubiquitous and pervasive, personalization operates across the many devices and information stores that constitute the user's personal digital ecosystem. This enables personalization, and the user models driving it, to play an increasing role in people's everyday lives. This makes it critical to establish ways to address key problems of personalization related to privacy, invisibility of personalization, errors in user models, wasted user models, and the broad issue of enabling people to control their user models and associated personalization. We offer scrutable user models as a foundation for tackling these problems.\n This article argues the importance of scrutable user modeling and personalization, illustrating key elements in case studies from our work. We then identify the broad roles for scrutable user models. The article describes how to tackle the technical and interface challenges of designing and building scrutable user modeling systems, presenting design principles and showing how they were established over our twenty years of work on the Personis software framework. Our contributions are the set of principles for scrutable personalization linked to our experience from creating and evaluating frameworks and associated applications built upon them. 
These constitute a general approach to tackling problems of personalization by enabling users to scrutinize their user models as a basis for understanding and controlling personalization.", "title": "" }, { "docid": "ea49d288ffefd549f77519c90de51fbc", "text": "Text line detection is a prerequisite procedure of mathematical formula recognition, however, many incorrectly segmented text lines are often produced due to the two-dimensional structures of mathematics when using existing segmentation methods such as Projection Profiles Cutting or white space analysis. In consequence, mathematical formula recognition is adversely affected by these incorrectly detected text lines, with errors propagating through further processes. Aimed at mathematical formula recognition, we propose a text line detection method to produce reliable line segmentation. Based on the results produced by PPC, a learning based merging strategy is presented to combine incorrectly split text lines. In the merging strategy, the features of layout and text for a text line and those between successive lines are utilised to detect the incorrectly split text lines. Experimental results show that the proposed approach obtains good performance in detecting text lines from mathematical documents. Furthermore, the error rate in mathematical formula identification is reduced significantly through adopting the proposed text line detection method.", "title": "" }, { "docid": "05fc7d05e4ea933a47f5fe81d68cf876", "text": "The unprecedented success of deep learning is largely dependent on the availability of massive amount of training data. In many cases, these data are crowd-sourced and may contain sensitive and confidential information, therefore, pose privacy concerns. As a result, privacy-preserving deep learning has been gaining increasing focus nowadays. One of the promising approaches for privacy-preserving deep learning is to employ differential privacy during model training which aims to prevent the leakage of sensitive information about the training data via the trained model. While these models are considered to be immune to privacy attacks, with the advent of recent and sophisticated attack models, it is not clear how well these models trade-off utility for privacy. In this paper, we systematically study the impact of a sophisticated machine learning based privacy attack called the membership inference attack against a state-of-the-art differentially private deep model. More specifically, given a differentially private deep model with its associated utility, we investigate how much we can infer about the model’s training data. Our experimental results show that differentially private deep models may keep their promise to provide privacy protection against strong adversaries by only offering poor model utility, while exhibit moderate vulnerability to the membership inference attack when they offer an acceptable utility. For evaluating our experiments, we use the CIFAR-10 and MNIST datasets and the corresponding classification tasks.", "title": "" }, { "docid": "165fa890775b64cb923e959824f183f5", "text": "We offer a formal treatment of choice behavior based on the premise that agents minimize the expected free energy of future outcomes. Crucially, the negative free energy or quality of a policy can be decomposed into extrinsic and epistemic (or intrinsic) value. 
Minimizing expected free energy is therefore equivalent to maximizing extrinsic value or expected utility (defined in terms of prior preferences or goals), while maximizing information gain or intrinsic value (or reducing uncertainty about the causes of valuable outcomes). The resulting scheme resolves the exploration-exploitation dilemma: Epistemic value is maximized until there is no further information gain, after which exploitation is assured through maximization of extrinsic value. This is formally consistent with the Infomax principle, generalizing formulations of active vision based upon salience (Bayesian surprise) and optimal decisions based on expected utility and risk-sensitive (Kullback-Leibler) control. Furthermore, as with previous active inference formulations of discrete (Markovian) problems, ad hoc softmax parameters become the expected (Bayes-optimal) precision of beliefs about, or confidence in, policies. This article focuses on the basic theory, illustrating the ideas with simulations. A key aspect of these simulations is the similarity between precision updates and dopaminergic discharges observed in conditioning paradigms.", "title": "" }, { "docid": "c9be394df8b4827c57c5413fc28b47e8", "text": "An important prerequisite for successful usage of computer systems and other interactive technology is a basic understanding of the symbols and interaction patterns used in them. This aspect of the broader construct “computer literacy” is used as indicator in the computer literacy scale, which proved to be an economical, reliable and valid instrument for the assessment of computer literacy in older adults.", "title": "" }, { "docid": "164bedabbfcfba283ab26a01511e8777", "text": "The airline industry is undergoing a very difficult time and many companies are in search of service segmentation strategies that will satisfy different target market segments. This study attempts to identify the service dimensions that matter most to current airline passengers. The research measures and compares differences in passengers’ expectations of the desired airline service quality in terms of the dimensions of reliability; assurance; facilities; employees; flight patterns; customization and responsiveness. Primary data were collected from passengers departing Hong Kong airport. Regarding the service dimension expectations, differences analysis shows that there are no statistically significant differences between passengers who made their own airline choice (decision makers) and those who did not (non-decision makers). However, there are significant differences among passengers of different ethnic groups/nationalities as well as among passengers who travel for different purposes, such as business, holiday and visiting friends/relatives. The findings also indicate that passengers consistently rank ‘assurance’ as the most important service dimension. This indicates that passengers are concerned about the safety and security aspect and this may indicate why there has been such a downturn in demand as this study was conducted just prior to the World Trade Center incident on the 11th September 2001. r 2003 Elsevier Science Ltd. All rights reserved.", "title": "" }, { "docid": "d18faf207a0dbccc030e5dcc202949ab", "text": "This manuscript conducts a comparison on modern object detection systems in their ability to detect multiple maritime vessel classes. 
Three highly scoring algorithms from the Pascal VOC Challenge, Histogram of Oriented Gradients by Dalal and Triggs, Exemplar-SVM by Malisiewicz, and Latent-SVM with Deformable Part Models by Felzenszwalb, were compared to determine performance of recognition within a specific category rather than the general classes from the original challenge. In all cases, the histogram of oriented edges was used as the feature set and support vector machines were used for classification. A summary and comparison of the learning algorithms is presented and a new image corpus of maritime vessels was collected. Precision-recall results show improved recognition performance is achieved when accounting for vessel pose. In particular, the deformable part model has the best performance when considering the various components of a maritime vessel.", "title": "" }, { "docid": "0b024671e04090051292b5e76a4690ae", "text": "The brain has evolved in this multisensory context to perceive the world in an integrated fashion. Although there are good reasons to be skeptical of the influence of cognition on perception, here we argue that the study of sensory substitution devices might reveal that perception and cognition are not necessarily distinct, but rather continuous aspects of our information processing capacities.", "title": "" }, { "docid": "cd8c1c24d4996217c8927be18c48488f", "text": "Recurrent neural networks (RNNs), such as long short-term memory networks (LSTMs), serve as a fundamental building block for many sequence learning tasks, including machine translation, language modeling, and question answering. In this paper, we consider the specific problem of word-level language modeling and investigate strategies for regularizing and optimizing LSTMbased models. We propose the weight-dropped LSTM which uses DropConnect on hidden-tohidden weights as a form of recurrent regularization. Further, we introduce NT-ASGD, a variant of the averaged stochastic gradient method, wherein the averaging trigger is determined using a non-monotonic condition as opposed to being tuned by the user. Using these and other regularization strategies, we achieve state-of-the-art word level perplexities on two data sets: 57.3 on Penn Treebank and 65.8 on WikiText-2. In exploring the effectiveness of a neural cache in conjunction with our proposed model, we achieve an even lower state-of-the-art perplexity of 52.8 on Penn Treebank and 52.0 on WikiText-2.", "title": "" }, { "docid": "280d9caa58ec97e5b0866d90b22dd35a", "text": "Term structures of default probabilities are omnipresent in credit risk modeling: time-dynamic credit portfolio models, default times, and multi-year pricing models, they all need the time evolution of default probabilities as a basic model input. Although people tend to believe that from an economic point of view the Markov property as underlying model assumption is kind of questionable it seems to be common market practice to model PD term structures via Markov chain techniques. In this paper we illustrate that the Markov assumption carries us quite far if we allow for nonhomogeneous time behaviour of the Markov chain generating the PD term structures. As a ‘proof of concept’ we calibrate a nonhomogeneous continuous-time Markov chain (NHCTMC) to observed one-year rating migrations and multi-year default frequencies, hereby achieving convincing approximation quality. 1 Markov Chains in Credit Risk Modeling The probability of default (PD) for a client is a fundamental risk parameter in credit risk management. 
It is common practice to assign to every rating grade in a bank’s masterscale a one-year PD in line with regulatory requirements; see [1]. Table 1 shows an example for default frequencies (D) assigned to rating grades from Standard and Poor’s (S&P): AAA 0.00%, AA 0.01%, A 0.04%, BBB 0.29%, BB 1.28%, B 6.24%, CCC 32.35% (Table 1: One-year default frequencies (D) assigned to S&P ratings; see [17], Table 9). Moreover, credit risk modeling concepts like dependent default times, multi-year credit pricing, and multi-horizon economic capital require more than just one-year PDs. For multi-year credit risk modeling, banks need a whole term structure (p_t^R)_{t≥0} of (cumulative) PDs for every rating grade R; see, e.g., [2] for an introduction to PD term structures and [3] for their application to structured credit products. Every bank has its own (proprietary) way to calibrate PD term structures to bank-internal and external data. A look into the literature reveals that for the generation of PD term structures various Markov chain approaches, most often based on homogeneous chains, dominate current market practice. A landmarking paper in this direction is the work by Jarrow, Lando, and Turnbull [7]. Further research has been done by various authors, see, e.g., Kadam [8], Lando [10], Sarfaraz et al. [12], Schuermann and Jafry [14, 15], Trueck and Oezturkmen [18], just to mention a few examples. A new approach via Markov mixtures has been presented recently by Frydman and Schuermann [5]. In Markov chain theory (see [11]) one distinguishes between discrete-time and continuous-time chains. For instance, a discrete-time chain can be specified by a one-year migration or transition matrix. [Footnote 1: In the literature, PD term structures are sometimes called credit curves. Footnote 2: A Markov chain is called homogeneous if transition probabilities do not depend on time.]", "title": "" } ]
scidocsrr
c74a659d2827f50f182900e73c02ad44
Mindfulness-based stress reduction for stress management in healthy people: a review and meta-analysis.
[ { "docid": "b5360df245a0056de81c89945f581f14", "text": "The inability to cope successfully with the enormous stress of medical education may lead to a cascade of consequences at both a personal and professional level. The present study examined the short-term effects of an 8-week meditation-based stress reduction intervention on premedical and medical students using a well-controlled statistical design. Findings indicate that participation in the intervention can effectively (1) reduce self-reported state and trait anxiety, (2) reduce reports of overall psychological distress including depression, (3) increase scores on overall empathy levels, and (4) increase scores on a measure of spiritual experiences assessed at termination of intervention. These results (5) replicated in the wait-list control group, (6) held across different experiments, and (7) were observed during the exam period. Future research should address potential long-term effects of mindfulness training for medical and premedical students.", "title": "" }, { "docid": "6f0ffda347abfd11dc78c0b76ceb11f8", "text": "A previous study of 22 medical patients with DSM-III-R-defined anxiety disorders showed clinically and statistically significant improvements in subjective and objective symptoms of anxiety and panic following an 8-week outpatient physician-referred group stress reduction intervention based on mindfulness meditation. Twenty subjects demonstrated significant reductions in Hamilton and Beck Anxiety and Depression scores postintervention and at 3-month follow-up. In this study, 3-year follow-up data were obtained and analyzed on 18 of the original 22 subjects to probe long-term effects. Repeated measures analysis showed maintenance of the gains obtained in the original study on the Hamilton [F(2,32) = 13.22; p < 0.001] and Beck [F(2,32) = 9.83; p < 0.001] anxiety scales as well as on their respective depression scales, on the Hamilton panic score, the number and severity of panic attacks, and on the Mobility Index-Accompanied and the Fear Survey. A 3-year follow-up comparison of this cohort with a larger group of subjects from the intervention who had met criteria for screening for the original study suggests generalizability of the results obtained with the smaller, more intensively studied cohort. Ongoing compliance with the meditation practice was also demonstrated in the majority of subjects at 3 years. We conclude that an intensive but time-limited group stress reduction intervention based on mindfulness meditation can have long-term beneficial effects in the treatment of people diagnosed with anxiety disorders.", "title": "" }, { "docid": "58359b7b3198504fa2475cc0f20ccc2d", "text": "OBJECTIVES\nTo review and synthesize the state of research on a variety of meditation practices, including: the specific meditation practices examined; the research designs employed and the conditions and outcomes examined; the efficacy and effectiveness of different meditation practices for the three most studied conditions; the role of effect modifiers on outcomes; and the effects of meditation on physiological and neuropsychological outcomes.\n\n\nDATA SOURCES\nComprehensive searches were conducted in 17 electronic databases of medical and psychological literature up to September 2005. 
Other sources of potentially relevant studies included hand searches, reference tracking, contact with experts, and gray literature searches.\n\n\nREVIEW METHODS\nA Delphi method was used to develop a set of parameters to describe meditation practices. Included studies were comparative, on any meditation practice, had more than 10 adult participants, provided quantitative data on health-related outcomes, and published in English. Two independent reviewers assessed study relevance, extracted the data and assessed the methodological quality of the studies.\n\n\nRESULTS\nFive broad categories of meditation practices were identified (Mantra meditation, Mindfulness meditation, Yoga, Tai Chi, and Qi Gong). Characterization of the universal or supplemental components of meditation practices was precluded by the theoretical and terminological heterogeneity among practices. Evidence on the state of research in meditation practices was provided in 813 predominantly poor-quality studies. The three most studied conditions were hypertension, other cardiovascular diseases, and substance abuse. Sixty-five intervention studies examined the therapeutic effect of meditation practices for these conditions. Meta-analyses based on low-quality studies and small numbers of hypertensive participants showed that TM(R), Qi Gong and Zen Buddhist meditation significantly reduced blood pressure. Yoga helped reduce stress. Yoga was no better than Mindfulness-based Stress Reduction at reducing anxiety in patients with cardiovascular diseases. No results from substance abuse studies could be combined. The role of effect modifiers in meditation practices has been neglected in the scientific literature. The physiological and neuropsychological effects of meditation practices have been evaluated in 312 poor-quality studies. Meta-analyses of results from 55 studies indicated that some meditation practices produced significant changes in healthy participants.\n\n\nCONCLUSIONS\nMany uncertainties surround the practice of meditation. Scientific research on meditation practices does not appear to have a common theoretical perspective and is characterized by poor methodological quality. Firm conclusions on the effects of meditation practices in healthcare cannot be drawn based on the available evidence. Future research on meditation practices must be more rigorous in the design and execution of studies and in the analysis and reporting of results.", "title": "" } ]
[ { "docid": "ca6e39436be1b44ab0e20e0024cd0bbe", "text": "This paper introduces a new approach, named micro-crowdfunding, for motivating people to participate in achieving a sustainable society. Increasing people's awareness of how they participate in maintaining the sustainability of common resources, such as public sinks, toilets, shelves, and office areas, is central to achieving a sustainable society. Micro-crowdfunding, as proposed in the paper, is a new type of community-based crowdsourcing architecture that is based on the crowdfunding concept and uses the local currency idea as a tool for encouraging people who live in urban environments to increase their awareness of how important it is to sustain small, common resources through their minimum efforts. Because our approach is lightweight and uses a mobile phone, people can participate in micro-crowdfunding activities with little effort anytime and anywhere.\n We present the basic concept of micro-crowdfunding and a prototype system. We also describe our experimental results, which show how economic and social factors are effective in facilitating micro-crowdfunding. Our results show that micro-crowdfunding increases the awareness about social sustainability, and we believe that micro-crowdfunding makes it possible to motivate people for achieving a sustainable society.", "title": "" }, { "docid": "d0ec144c5239b532987157a64d499f61", "text": "(1) Disregard pseudo-queries that do not retrieve their pseudo-relevant document in the top nrank. (2) Select the top nneg retrieved documents are negative training examples. General Approach: Generate mock interaction embeddings and filter training examples down to those the most nearly match a set of template query-document pairs (given a distance function). Since interaction embeddings specific to what a model “sees,” interaction filters are model-specific.", "title": "" }, { "docid": "75d9b0e67b57a8be7675854b19b50915", "text": "In the paper, we describe analysis of Vivaldi antenna array aimed for microwave image application and SAR application operating at Ka band. The antenna array is fed by a SIW feed network for its low insertion loss and broadband performances in millimeter wave range. In our proposal we have replaced the large feed network by a simple relatively broadband network of compact size to reduce the losses in substrate integrated waveguide (SIW) and save space on PCB. The feed network is power 8-way divider fed by a wideband SIW-GCPW transition and directly connected to the antenna elements. The final antenna array will be designed, fabricated and obtained measured results will be compared with numerical ones.", "title": "" }, { "docid": "108e4cc0358076fac20d7f9395c9f1e3", "text": "This paper presents a novel algorithm aiming at analysis and identification of faces viewed from different poses and illumination conditions. Face analysis from a single image is performed by recovering the shape and textures parameters of a 3D Morphable Model in an analysis-by-synthesis fashion. The shape parameters are computed from a shape error estimated by optical flow and the texture parameters are obtained from a texture error. The algorithm uses linear equations to recover the shape and texture parameters irrespective of pose and lighting conditions of the face image. Identification experiments are reported on more than 5000 images from the publicly available CMU-PIE database which includes faces viewed from 13 different poses and under 22 different illuminations. 
Extensive identification results are available on our web page for future comparison with novel algorithms.", "title": "" }, { "docid": "cb4518f95b82e553b698ae136362bd59", "text": "Optimal control theory is a mature mathematical discipline with numerous applications in both science and engineering. It is emerging as the computational framework of choice for studying the neural control of movement, in much the same way that probabilistic inference is emerging as the computational framework of choice for studying sensory information processing. Despite the growing popularity of optimal control models, however, the elaborate mathematical machinery behind them is rarely exposed and the big picture is hard to grasp without reading a few technical books on the subject. While this chapter cannot replace such books, it aims to provide a self-contained mathematical introduction to optimal control theory that is sufficiently broad and yet sufficiently detailed when it comes to key concepts. The text is not tailored to the field of motor control (apart from the last section, and the overall emphasis on systems with continuous state) so it will hopefully be of interest to a wider audience. Of special interest in the context of this book is the material on the duality of optimal control and probabilistic inference; such duality suggests that neural information processing in sensory and motor areas may be more similar than currently thought. The chapter is organized in the following sections:", "title": "" }, { "docid": "919d1554ac7d18d5cb765c0ee808d3a6", "text": "Pythium species were isolated from seedlings of strawberry with root and crown rot. The isolates were identified as P. helicoides on the basis of morphological characteristics and sequences of the ribosomal DNA internal transcribed spacer regions. In pathogenicity tests, the isolates caused root and crown rot similar to the original disease symptoms. Multiplex PCR was used to survey pathogen occurrence in strawberry production areas of Japan. Pythium helicoides was detected in 11 of 82 fields. The pathogen is distributed over six prefectures.", "title": "" }, { "docid": "71b9722200c92901d8ec3c7e6195c931", "text": "Intrusive multi-step attacks, such as Advanced Persistent Threat (APT) attacks, have plagued enterprises with significant financial losses and are the top reason for enterprises to increase their security budgets. Since these attacks are sophisticated and stealthy, they can remain undetected for years if individual steps are buried in background \"noise.\" Thus, enterprises are seeking solutions to \"connect the suspicious dots\" across multiple activities. This requires ubiquitous system auditing for long periods of time, which in turn causes overwhelmingly large amount of system audit events. Given a limited system budget, how to efficiently handle ever-increasing system audit logs is a great challenge. This paper proposes a new approach that exploits the dependency among system events to reduce the number of log entries while still supporting high-quality forensic analysis. In particular, we first propose an aggregation algorithm that preserves the dependency of events during data reduction to ensure the high quality of forensic analysis. Then we propose an aggressive reduction algorithm and exploit domain knowledge for further data reduction. To validate the efficacy of our proposed approach, we conduct a comprehensive evaluation on real-world auditing systems using log traces of more than one month. 
Our evaluation results demonstrate that our approach can significantly reduce the size of system logs and improve the efficiency of forensic analysis without losing accuracy.", "title": "" }, { "docid": "d2b5f28a7f32de167ec4c907472af90b", "text": "Brain-computer interfacing (BCI) is a steadily growing area of research. While initially BCI research was focused on applications for paralyzed patients, increasingly more alternative applications in healthy human subjects are proposed and investigated. In particular, monitoring of mental states and decoding of covert user states have seen a strong rise of interest. Here, we present some examples of such novel applications which provide evidence for the promising potential of BCI technology for non-medical uses. Furthermore, we discuss distinct methodological improvements required to bring non-medical applications of BCI technology to a diversity of layperson target groups, e.g., ease of use, minimal training, general usability, short control latencies.", "title": "" }, { "docid": "fdc18ccdccefc1fd9c3f79daf549f015", "text": "An overview of the current design practices in the field of Renewable Energy (RE) is presented; also paper delineates the background to the development of unique and novel techniques for power generation using the kinetic energy of tidal streams and other marine currents. Also this study focuses only on vertical axis tidal turbine. Tidal stream devices have been developed as an alternative method of extracting the energy from the tides. This form of tidal power technology poses less threat to the environment and does not face the same limiting factors associated with tidal barrage schemes, therefore making it a more feasible method of electricity production. Large companies are taking interest in this new source of power. There is a rush to research and work with this new energy source. Marine scientists are looking into how much these will affect the environment, while engineers are developing turbines that are harmless for the environment. In addition, the progression of technological advancements tracing several decades of R & D efforts on vertical axis turbines is highlighted.", "title": "" }, { "docid": "44a5ea6fee136e66e1d89fb681f84805", "text": "The content of images users post to their social media is driven in part by personality. In this study, we analyze how Twitter profile images vary with the personality of the users posting them. In our main analysis, we use profile images from over 66,000 users whose personality we estimate based on their tweets. To facilitate interpretability, we focus our analysis on aesthetic and facial features and control for demographic variation in image features and personality. Our results show significant differences in profile picture choice between personality traits, and that these can be harnessed to predict personality traits with robust accuracy. For example, agreeable and conscientious users display more positive emotions in their profile pictures, while users high in openness prefer more aesthetic photos.", "title": "" }, { "docid": "9a1665cff530d93c84598e7df947099f", "text": "The algorithmic Markov condition states that the most likely causal direction between two random variables X and Y can be identified as the direction with the lowest Kolmogorov complexity. This notion is very powerful as it can detect any causal dependency that can be explained by a physical process. However, due to the halting problem, it is also not computable. 
In this paper we propose a computable instantiation that provably maintains the key aspects of the ideal. We propose to approximate Kolmogorov complexity via the Minimum Description Length (MDL) principle, using a score that is mini-max optimal with regard to the model class under consideration. This means that even in an adversarial setting, the score degrades gracefully, and we are still maximally able to detect dependencies between the marginal and the conditional distribution. As a proof of concept, we propose CISC, a linear-time algorithm for causal inference by stochastic complexity, for pairs of univariate discrete variables. Experiments show that CISC is highly accurate on synthetic, benchmark, as well as real-world data, outperforming the state of the art by a margin, and scales extremely well with regard to sample and domain sizes.", "title": "" }, { "docid": "2eefc7adc055f4fc1013199c38b0b91c", "text": "Parametric methods are commonly used despite evidence that model assumptions are often violated. Various statistical procedures have been suggested for analyzing data from multiple-group repeated measures (i.e., split-plot) designs when parametric model assumptions are violated (e.g., Akritas and Arnold (J. Amer. Statist. Assoc. 89 (1994) 336); Brunner and Langer (Biometrical J. 42 (2000) 663)), including the use of Friedman ranks. The effects of Friedman ranking on data and the resultant test statistics for single sample repeated measures designs have been examined (e.g., Harwell and Serlin (Comput. Statist. Data Anal. 17 (1994) 35; Comm. Statist. Simulation Comput. 26 (1997) 605); Zimmerman and Zumbo (J. Experiment. Educ. 62 (1993) 75)). However, there have been fewer investigations concerning Friedman ranks applied to multiple groups of repeated measures data (e.g., Beasley (J. Educ. Behav. Statist. 25 (2000) 20); Rasmussen (British J. Math. Statist. Psych. 42 (1989) 91)). We investigate the use of Friedman ranks for testing the interaction in a split-plot design as a robust alternative to parametric procedures. We demonstrated that the presence of a repeated measures main effect may reduce the power of interaction tests performed on Friedman ranks. Aligning the data before applying Friedman ranks was shown to produce more statistical power than simply analyzing Friedman ranks. Results from a simulation study showed that aligning the data (i.e., removing main effects) before applying Friedman ranks and then performing either a univariate or multivariate test can provide more statistical power than parametric tests if the error distributions are skewed. © 2002 Elsevier Science B.V. All rights reserved.", "title": "" }, { "docid": "3dfe5099c72f3ef3341c2d053ee0d2c2", "text": "In this paper, the authors introduce a type of transverse flux reluctance machines. These machines work without permanent magnets or electric rotor excitation and hold several advantages, including a high power density, high torque, and compact design. Disadvantages are a high fundamental frequency and a high torque ripple that complicates the control of the motor. The device uses soft magnetic composites (SMCs) for the magnetic circuit, which allows complex stator geometries with 3-D magnetic flux paths. The winding is made from hollow copper tubes, which also form the main heat sink of the machine by using water as a direct copper coolant. Models concerning the design and computation of the magnetic circuit, torque, and the power output are described. 
A crucial point in this paper is the determination of hysteresis and eddy-current losses in the SMC and the calculation of power losses and current displacement in the copper winding. These are calculated with models utilizing a combination of analytic approaches and finite-element method simulations. Finally, a thermal model based on lumped parameters is introduced, and calculated temperature rises are presented.", "title": "" }, { "docid": "b0fcde53d86560ce4d97145d2de2632d", "text": "Silicon carbide (SiC) power devices have been investigated extensively in the past two decades, and there are many devices commercially available now. Owing to the intrinsic material advantages of SiC over silicon (Si), SiC power devices can operate at higher voltage, higher switching frequency, and higher temperature. This paper reviews the technology progress of SiC power devices and their emerging applications. The design challenges and future trends are summarized at the end of the paper.", "title": "" }, { "docid": "279268e31da13abeed25b78062a71907", "text": "Ridesharing platforms match drivers and riders to trips, using dynamic prices to balance supply and demand. A challenge is to set prices that are appropriately smooth in space and time, so that drivers will choose to accept their dispatched trips, rather than drive to another area or wait for higher prices or a better trip. We work in a complete information, discrete time, multiperiod, multi-location model, and introduce the Spatio-Temporal Pricing (STP) mechanism. The mechanism is incentive-aligned, in that it is a subgame-perfect equilibrium for drivers to accept their dispatches. The mechanism is also welfare-optimal, envy-free, individually rational, budget balanced and core-selecting from any history onward. The proof of incentive alignment makes use of the M ♮ concavity of min-cost flow objectives. We also give an impossibility result, that there can be no dominant-strategy mechanism with the same economic properties. An empirical analysis conducted in simulation suggests that the STP mechanism can achieve significantly higher social welfare than a myopic pricing mechanism.", "title": "" }, { "docid": "1c7ca008292880e6f698d281a1f3d747", "text": "Experimental evidence has pointed toward a negative effect of violent video games on social behavior. Given that the availability and presence of video games is pervasive, negative effects from playing them have potentially large implications for public policy. It is, therefore, important that violent video game effects are thoroughly and experimentally explored, with the current experiment focusing on prosocial behavior. 120 undergraduate volunteers (Mage = 19.01, 87.5% male) played an ultra-violent, violent, or non-violent video game and were then assessed on two distinct measures of prosocial behavior: how much they donated to a charity and how difficult they set a task for an ostensible participant. It was hypothesized that participants playing the ultra-violent games would show the least prosocial behavior and those playing the non-violent game would show the most. These hypotheses were not supported, with participants responding in similar ways, regardless of the type of game played. 
While null effects are difficult to interpret, samples of this nature (undergraduate volunteers, high male skew) may be problematic, and participants were possibly sensitive to the hypothesis at some level, this experiment adds to the growing body of evidence suggesting that violent video game effects are less clear than initially", "title": "" }, { "docid": "109a84ad1c1a541e2a0b4972b21caca2", "text": "Our brain is a network. It consists of spatially distributed, but functionally linked regions that continuously share information with each other. Interestingly, recent advances in the acquisition and analysis of functional neuroimaging data have catalyzed the exploration of functional connectivity in the human brain. Functional connectivity is defined as the temporal dependency of neuronal activation patterns of anatomically separated brain regions and in the past years an increasing body of neuroimaging studies has started to explore functional connectivity by measuring the level of co-activation of resting-state fMRI time-series between brain regions. These studies have revealed interesting new findings about the functional connections of specific brain regions and local networks, as well as important new insights in the overall organization of functional communication in the brain network. Here we present an overview of these new methods and discuss how they have led to new insights in core aspects of the human brain, providing an overview of these novel imaging techniques and their implication to neuroscience. We discuss the use of spontaneous resting-state fMRI in determining functional connectivity, discuss suggested origins of these signals, how functional connections tend to be related to structural connections in the brain network and how functional brain communication may form a key role in cognitive performance. Furthermore, we will discuss the upcoming field of examining functional connectivity patterns using graph theory, focusing on the overall organization of the functional brain network. Specifically, we will discuss the value of these new functional connectivity tools in examining believed connectivity diseases, like Alzheimer's disease, dementia, schizophrenia and multiple sclerosis.", "title": "" }, { "docid": "06909d0ffbc52e14e0f6f1c9ffe29147", "text": "DistributedLog is a high performance, strictly ordered, durably replicated log. It is multi-tenant, designed with a layered architecture that allows reads and writes to be scaled independently and supports OLTP, stream processing and batch workloads. It also supports a globally synchronous consistent replicated log spanning multiple geographically separated regions. This paper describes how DistributedLog is structured, its components and the rationale underlying various design decisions. We have been using DistributedLog in production for several years, supporting applications ranging from transactional database journaling, real-time data ingestion, and analytics to general publish-subscribe messaging.", "title": "" }, { "docid": "9f6ab40fb1f1c331e72b275e3cf614e3", "text": "The Internet of things (IoT) is still in its infancy and has attracted much interest in many industrial sectors including medical fields, logistics tracking, smart cities and automobiles. However as a paradigm, it is susceptible to a range of significant intrusion threats. This paper presents a threat analysis of the IoT and uses an Artificial Neural Network (ANN) to combat these threats. 
A multi-level perceptron, a type of supervised ANN, is trained using internet packet traces, then is assessed on its ability to thwart Distributed Denial of Service (DDoS/DoS) attacks. This paper focuses on the classification of normal and threat patterns on an IoT Network. The ANN procedure is validated against a simulated IoT network. The experimental results demonstrate 99.4% accuracy and can successfully detect various DDoS/DoS attacks.", "title": "" } ]
scidocsrr
de79780405e5472df23ace00ec371380
A comprehensive study of the predictive accuracy of dynamic change-impact analysis
[ { "docid": "cc9686bac7de957afe52906763799554", "text": "A key issue in software evolution analysis is the identification of particular changes that occur across several versions of a program. We present change distilling, a tree differencing algorithm for fine-grained source code change extraction. For that, we have improved the existing algorithm by Chawathe et al. for extracting changes in hierarchically structured data. Our algorithm extracts changes by finding both a match between the nodes of the compared two abstract syntax trees and a minimum edit script that can transform one tree into the other given the computed matching. As a result, we can identify fine-grained change types between program versions according to our taxonomy of source code changes. We evaluated our change distilling algorithm with a benchmark that we developed, which consists of 1,064 manually classified changes in 219 revisions of eight methods from three different open source projects. We achieved significant improvements in extracting types of source code changes: Our algorithm approximates the minimum edit script 45 percent better than the original change extraction approach by Chawathe et al. We are able to find all occurring changes and almost reach the minimum conforming edit script, that is, we reach a mean absolute percentage error of 34 percent, compared to the 79 percent reached by the original algorithm. The paper describes both our change distilling algorithm and the results of our evolution.", "title": "" } ]
[ { "docid": "96051404d2ca32f67c86f0eb96a87f38", "text": "Male (N = 248) and female (N = 282) subjects were given the Personal Attributes Questionnaire consisting of 55 bipolar attributes drawn from the Sex Role Stereotype Questionnaire by Rosenkrantz, Vogel, Bee, Broverman, and Broverman and were asked to rate themselves and then to compare directly the typical male and female college student. Self-ratings were divided into male-valued (stereotypically masculine attributes judged more desirable for both sexes), female-valued, and sex-specific items. Also administered was the Attitudes Toward Women Scale and a measure of social self-esteem. Correlations of the self-ratings with stereotype scores and the Attitudes Toward Women Scale were low in magnitude, suggesting that sex role expectations do not distort self-concepts. For both men and women, \"femininity\" on the female-valued self items and \"masculinity\" on the male-valued items were positively correlated, and both significantly related to self-esteem. The implications of the results for a concept of masculinity and femininity as a duality, characteristic of all individuals, and the use of the self-rating scales for measuring masculinity, femininity, and androgyny were discussed.", "title": "" }, { "docid": "cc76afb929bdffe1b084843a6b267602", "text": "Software applications continue to grow in terms of the number of features they offer, making personalization increasingly important. Research has shown that most users prefer the control afforded by an adaptable approach to personalization rather than a system-controlled adaptive approach. Both types of approaches offer advantages and disadvantages. No study, however, has compared the efficiency of the two approaches. In two controlled lab studies, we measured the efficiency of static, adaptive and adaptable interfaces in the context of pull-down menus. These menu conditions were implemented as split menus, in which the top four items remained static, were adaptable by the subject, or adapted according to the subject’s frequently and recently used items. The results of Study 1 showed that a static split menu was significantly faster than an adaptive split menu. Also, when the adaptable split menu was not the first condition presented to subjects, it was significantly faster than the adaptive split menu, and not significantly different from the static split menu. The majority of users preferred the adaptable menu overall. Several implications for personalizing user interfaces based on these results are discussed. One question which arose after Study 1 was whether prior exposure to the menus and task has an effect on the efficiency of the adaptable menus. A second study was designed to follow-up on the theory that prior exposure to different types of menu layouts influences a user’s willingness to customize. Though the observed power of this study was low and no statistically significant effect of type of exposure was found, a possible trend arose: that exposure to an adaptive interface may have a positive impact on the user’s willingness to customize. This and other secondary results are discussed, along with several areas for future work. 
The research presented in this thesis should be seen as an initial step towards a more thorough comparison of adaptive and adaptable interfaces, and should provide motivation for further development of adaptable interaction techniques.", "title": "" }, { "docid": "4709a4e1165abb5d0018b74495218fc7", "text": "Network monitoring guides network operators in understanding the current behavior of a network. Therefore, accurate and efficient monitoring is vital to ensure that the network operates according to the intended behavior and then to troubleshoot any deviations. However, the current practice of network-monitoring largely depends on manual operations, and thus enterprises spend a significant portion of their budgets on the workforce that monitor their networks. We analyze present network-monitoring technologies, identify open problems, and suggest future directions. In particular, our findings are based on two different analyses. The first analysis assesses how well present technologies integrate with the entire cycle of network-management operations: design, deployment, and monitoring. Network operators first design network configurations, given a set of requirements, then they deploy the new design, and finally they verify it by continuously monitoring the network’s behavior. One of our observations is that the efficiency of this cycle can be greatly improved by automated deployment of pre-designed configurations, in response to changes in monitored network behavior. Our second analysis focuses on network-monitoring technologies and group issues in these technologies into five categories. Such grouping leads to the identification of major problem groups in network monitoring, e.g., efficient management of increasing amounts of measurements for storage, analysis, and presentation. We argue that continuous effort is needed in improving network-monitoring since the presented problems will become even more serious in the future, as networks grow in size and carry more data. 2014 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "a26d47a7d0330e6252986358bd2f41e0", "text": "The American College of Prosthodontists (ACP) has developed a classification system for partial edentulism based on diagnostic findings. This classification system is similar to the classification system for complete edentulism previously developed by the ACP. These guidelines are intended to help practitioners determine appropriate treatments for their patients. Four categories of partial edentulism are defined, Class I to Class IV, with Class I representing an uncomplicated clinical situation and class IV representing a complex clinical situation. Each class is differentiated by specific diagnostic criteria. This system is designed for use by dental professionals involved in the diagnosis and treatment of partially edentulous patients. Potential benefits of the system include (1) improved intraoperator consistency, (2) improved professional communication, (3) insurance reimbursement commensurate with complexity of care, (4) improved screening tool for dental school admission clinics, (5) standardized criteria for outcomes assessment and research, (6) enhanced diagnostic consistency, and (7) simplified aid in the decision to refer a patient.", "title": "" }, { "docid": "570fcf7ba739ffb6ea07e5c58c8154c7", "text": "E-learning is emerging as the new paradigm of modern education. Worldwide, the e-learning market has a growth rate of 35.6%, but failures exist. 
Little is known about why many users stop their online learning after their initial experience. Previous research done under different task environments has suggested a variety of factors affecting user satisfaction with e-Learning. This study developed an integrated model with six dimensions: learners, instructors, courses, technology, design, and environment. A survey was conducted to investigate the critical factors affecting learners’ satisfaction in e-Learning. The results revealed that learner computer anxiety, instructor attitude toward e-Learning, e-Learning course flexibility, e-Learning course quality, perceived usefulness, perceived ease of use, and diversity in assessments are the critical factors affecting learners’ perceived satisfaction. The results show institutions how to improve learner satisfaction and further strengthen their e-Learning implementation. 2006 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "5feea8e7bcb96c826bdf19922e47c922", "text": "This chapter is a review of conceptions of knowledge as they appear in selected bodies of research on teaching. Writing as a philosopher of education, my interest is in how notions of knowledge are used and analyzed in a number of research programs that study teachers and their teaching. Of particular interest is the growing research literature on the knowledge that teachers generate as a result of their experience as teachers, in contrast to the knowledge of teaching that is generated by those who specialize in research on teaching. This distinction, as will become apparent, is one that divides more conventional scientific approaches to the study of teaching from what might be thought of as alternative approaches.", "title": "" }, { "docid": "d0c5bb905973b3098b06f55232ed9c8f", "text": "In recent years, theoretical and computational linguistics has paid much attention to linguistic items that form scales. In NLP, much research has focused on ordering adjectives by intensity (tiny < small). Here, we address the task of automatically ordering English adverbs by their intensifying or diminishing effect on adjectives (e.g. extremely small < very small). We experiment with 4 different methods: 1) using the association strength between adverbs and adjectives; 2) exploiting scalar patterns (such as not only X but Y); 3) using the metadata of product reviews; 4) clustering. The method that performs best is based on the use of metadata and ranks adverbs by their scaling factor relative to unmodified adjectives.", "title": "" }, { "docid": "f8ac1e028ec61c8b1dcf8ce138ea1776", "text": "This paper presents power-control strategies of a grid-connected hybrid generation system with versatile power transfer. The hybrid system is the combination of photovoltaic (PV) array, wind turbine, and battery storage via a common dc bus. Versatile power transfer was defined as multimodes of operation, including normal operation without use of battery, power dispatching, and power averaging, which enables grid- or user-friendly operation. A supervisory control regulates power generation of the individual components so as to enable the hybrid system to operate in the proposed modes of operation. The concept and principle of the hybrid system and its control were described. A simple technique using a low-pass filter was introduced for power averaging. A modified hysteresis-control strategy was applied in the battery converter. Modeling and simulations were based on an electromagnetic-transient-analysis program. 
A 30-kW hybrid inverter and its control system were developed. The simulation and experimental results were presented to evaluate the dynamic performance of the hybrid system under the proposed modes of operation.", "title": "" }, { "docid": "f82a57baca9a0381c9b2af0368a5531e", "text": "We tested the hypothesis derived from eye blink literature that when liars experience cognitive demand, their lies would be associated with a decrease in eye blinks, directly followed by an increase in eye blinks when the demand has ceased after the lie is told. A total of 13 liars and 13 truth tellers lied or told the truth in a target period; liars and truth tellers both told the truth in two baseline periods. Their eye blinks during the target and baseline periods and directly after the target period (target offset period) were recorded. The predicted pattern (compared to the baseline periods, a decrease in eye blinks during the target period and an increase in eye blinks during the target offset period) was found in liars and was strikingly different from the pattern obtained in truth tellers. They showed an increase in eye blinks during the target period compared to the baseline periods, whereas their pattern of eye blinks in the target offset period did not differ from baseline periods. The implications for lie detection are discussed.", "title": "" }, { "docid": "4bb4bbd91925d2faafe5516519d6cc62", "text": "Cyclic GMP (cGMP) modulates important cerebral processes including some forms of learning and memory. cGMP pathways are strongly altered in hyperammonemia and hepatic encephalopathy (HE). Patients with liver cirrhosis show reduced intracellular cGMP in lymphocytes, increased cGMP in plasma and increased activation of soluble guanylate cyclase by nitric oxide (NO) in lymphocytes, which correlates with minimal HE assessed by psychometric tests. Activation of soluble guanylate cyclase by NO is also increased in cerebral cortex, but reduced in cerebellum, from patients who died with HE. This opposite alteration is reproduced in vivo in rats with chronic hyperammonemia or HE. A main pathway modulating cGMP levels in brain is the glutamate-NO-cGMP pathway. The function of this pathway is impaired both in cerebellum and cortex of rats with hyperammonemia or HE. Impairment of this pathway is responsible for reduced ability to learn some types of tasks. Restoring the pathway and cGMP levels in brain restores learning ability. This may be achieved by administering phosphodiesterase inhibitors (zaprinast, sildenafil), cGMP, anti-inflammatories (ibuprofen) or antagonists of GABAA receptors (bicuculline). These data support that increasing cGMP by safe pharmacological means may be a new therapeutic approach to improve cognitive function in patients with minimal or clinical HE.", "title": "" }, { "docid": "4c1798f0fd65b8d7e60a04a9a3df5201", "text": "This study examined linkages between divorce, depressive/withdrawn parenting, and child adjustment problems at home and school. Middle class divorced single mother families (n = 35) and 2-parent families (n = 174) with a child in the fourth grade participated. Mothers and teachers completed yearly questionnaires and children were interviewed when they were in the fourth, fifth, and sixth grades. 
Structural equation modeling suggested that the association between divorce and child externalizing and internalizing behavior was partially mediated by depressive/withdrawn parenting when the children were in the fourth and fifth grades.", "title": "" }, { "docid": "d735547a7b3a79f5935f15da3e51f361", "text": "We propose a new approach for locating forged regions in a video using correlation of noise residue. In our method, block-level correlation values of noise residual are extracted as a feature for classification. We model the distribution of correlation of temporal noise residue in a forged video as a Gaussian mixture model (GMM). We propose a two-step scheme to estimate the model parameters. Consequently, a Bayesian classifier is used to find the optimal threshold value based on the estimated parameters. Two video inpainting schemes are used to simulate two different types of forgery processes for performance evaluation. Simulation results show that our method achieves promising accuracy in video forgery detection.", "title": "" }, { "docid": "7bdebaf86fd679ae00520dc8f7ee3afa", "text": "Studies show that attractive women demonstrate stronger preferences for masculine men than relatively unattractive women do. Such condition-dependent preferences may occur because attractive women can more easily offset the costs associated with choosing a masculine partner, such as lack of commitment and less interest in parenting. Alternatively, if masculine men display negative characteristics less to attractive women than to unattractive women, attractive women may perceive masculine men to have more positive personality traits than relatively unattractive women do. We examined how two indices of women’s attractiveness, body mass index (BMI) and waist–hip ratio (WHR), relate to perceptions of both the attractiveness and trustworthiness of masculinized versus feminized male faces. Consistent with previous studies, women with a low (attractive) WHR had stronger preferences for masculine male faces than did women with a relatively high (unattractive) WHR. This relationship remained significant when controlling for possible effects of BMI. Neither WHR nor BMI predicted perceptions of trustworthiness. These findings present converging evidence for condition-dependent mate preferences in women and suggest that such preferences do not reflect individual differences in the extent to which pro-social traits are ascribed to feminine versus masculine men. 2009 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "73fb3c79018795777a0fca6d5e7d3ebe", "text": "Congruence, the state in which a software development organization harbors sufficient coordination capabilities to meet the coordination demands of the technical products under development, is increasingly recognized as critically important to the performance of an organization. To date, it has been shown that a variety of states of incongruence may exist in an organization, with possibly serious negative effects on product quality, development progress, cost, and so on. Exactly how to achieve congruence, or knowing what steps to take to achieve congruence, is less understood. In this paper, we introduce a series of key challenges that we believe must be comprehensively addressed in order for congruence research to result in wellunderstood approaches, tactics, and tools – so these can be infused in the day-to-day practices of development organizations to improve their coordination capabilities with better aligned social and technical structures. 
This effort is partially funded by the National Science Foundation under grant number IIS-0534775, IIS0329090, and the Software Industry Center and its sponsors, particularly the Alfred P. Sloan Foundation. Effort also supported by a 2007 Jazz Faculty Grant. The views and conclusions are those of the authors and do not reflect the opinions of any sponsoring organizations/agencies.", "title": "" }, { "docid": "2d02e5bc08c2b5d18c787880898e9af2", "text": "Speech recognition systems have used the concept of states as a way to decompose words into sub-word units for decades. As the number of such states now reaches the number of words used to train acoustic models, it is interesting to consider approaches that relax the assumption that words are made of states. We present here an alternative construction, where words are projected into a continuous embedding space where words that sound alike are nearby in the Euclidean sense. We show how embeddings can still allow to score words that were not in the training dictionary. Initial experiments using a lattice rescoring approach and model combination on a large realistic dataset show improvements in word error rate.", "title": "" }, { "docid": "36828667ce43ab5d489f74e112045639", "text": "Zero-shot learning has received increasing interest as a means to alleviate the often prohibitive expense of annotating training data for large scale recognition problems. These methods have achieved great success via learning intermediate semantic representations in the form of attributes and more recently, semantic word vectors. However, they have thus far been constrained to the single-label case, in contrast to the growing popularity and importance of more realistic multi-label data. In this paper, for the first time, we investigate and formalise a general framework for multi-label zero-shot learning, addressing the unique challenge therein: how to exploit multi-label correlation at test time with no training data for those classes? In particular, we propose (1) a multi-output deep regression model to project an image into a semantic word space, which explicitly exploits the correlations in the intermediate semantic layer of word vectors; (2) a novel zero-shot learning algorithm for multi-label data that exploits the unique compositionality property of semantic word vector representations; and (3) a transductive learning strategy to enable the regression model learned from seen classes to generalise well to unseen classes. Our zero-shot learning experiments on a number of standard multi-label datasets demonstrate that our method outperforms a variety of baselines.", "title": "" }, { "docid": "698dca642840f47081b1e9a54775c5cc", "text": "Background: Many popular educational programmes claim to be ‘brain-based’, despite pleas from the neuroscience community that these neuromyths do not have a basis in scientific evidence about the brain. Purpose: The main aim of this paper is to examine several of the most popular neuromyths in the light of the relevant neuroscientific and educational evidence. Examples of neuromyths include: 10% brain usage, leftand right-brained thinking, VAK learning styles and multiple intelligences Sources of evidence: The basis for the argument put forward includes a literature review of relevant cognitive neuroscientific studies, often involving neuroimaging, together with several comprehensive education reviews of the brain-based approaches under scrutiny. Main argument: The main elements of the argument are as follows. 
We use most of our brains most of the time, not some restricted 10% brain usage. This is because our brains are densely interconnected, and we exploit this interconnectivity to enable our primitively evolved primate brains to live in our complex modern human world. Although brain imaging delineates areas of higher (and lower) activation in response to particular tasks, thinking involves coordinated interconnectivity from both sides of the brain, not separate leftand right-brained thinking. High intelligence requires higher levels of inter-hemispheric and other connected activity. The brain’s interconnectivity includes the senses, especially vision and hearing. We do not learn by one sense alone, hence VAK learning styles do not reflect how our brains actually learn, nor the individual differences we observe in classrooms. Neuroimaging studies do not support multiple intelligences; in fact, the opposite is true. Through the activity of its frontal cortices, among other areas, the human brain seems to operate with general intelligence, applied to multiple areas of endeavour. Studies of educational effectiveness of applying any of these ideas in the classroom have failed to find any educational benefits. Conclusions: The main conclusions arising from the argument are that teachers should seek independent scientific validation before adopting brain-based products in their classrooms. A more sceptical approach to educational panaceas could contribute to an enhanced professionalism of the field.", "title": "" }, { "docid": "a52ac0402ca65a4e7a239c343f79df44", "text": "How does the brain cause positive affective reactions to sensory pleasure? An answer to pleasure causation requires knowing not only which brain systems are activated by pleasant stimuli, but also which systems actually cause their positive affective properties. This paper focuses on brain causation of behavioral positive affective reactions to pleasant sensations, such as sweet tastes. Its goal is to understand how brain systems generate 'liking,' the core process that underlies sensory pleasure and causes positive affective reactions. Evidence suggests activity in a subcortical network involving portions of the nucleus accumbens shell, ventral pallidum, and brainstem causes 'liking' and positive affective reactions to sweet tastes. Lesions of ventral pallidum also impair normal sensory pleasure. Recent findings regarding this subcortical network's causation of core 'liking' reactions help clarify how the essence of a pleasure gloss gets added to mere sensation. The same subcortical 'liking' network, via connection to brain systems involved in explicit cognitive representations, may also in turn cause conscious experiences of sensory pleasure.", "title": "" }, { "docid": "42cfbb2b2864e57d59a72ec91f4361ff", "text": "Objective. This prospective open trial aimed to evaluate the efficacy and safety of isotretinoin (13-cis-retinoic acid) in patients with Cushing's disease (CD). Methods. Sixteen patients with CD and persistent or recurrent hypercortisolism after transsphenoidal surgery were given isotretinoin orally for 6-12 months. The drug was started on 20 mg daily and the dosage was increased up to 80 mg daily if needed and tolerated. Clinical, biochemical, and hormonal parameters were evaluated at baseline and monthly for 6-12 months. Results. Of the 16 subjects, 4% (25%) persisted with normal urinary free cortisol (UFC) levels at the end of the study. UFC reductions of up to 52.1% were found in the rest. 
Only patients with UFC levels below 2.5-fold of the upper limit of normal achieved sustained UFC normalization. Improvements of clinical and biochemical parameters were also noted mostly in responsive patients. Typical isotretinoin side-effects were experienced by 7 patients (43.7%), though they were mild and mostly transient. We also observed that the combination of isotretinoin with cabergoline, in relatively low doses, may occasionally be more effective than either drug alone. Conclusions. Isotretinoin may be an effective and safe therapy for some CD patients, particularly those with mild hypercortisolism.", "title": "" }, { "docid": "fd1b82c69a3182ab7f8c0a7cf2030b6f", "text": "Lenz-Majewski hyperostotic dwarfism (LMHD) is an ultra-rare Mendelian craniotubular dysostosis that causes skeletal dysmorphism and widely distributed osteosclerosis. Biochemical and histopathological characterization of the bone disease is incomplete and nonexistent, respectively. In 2014, a publication concerning five unrelated patients with LMHD disclosed that all carried one of three heterozygous missense mutations in PTDSS1 encoding phosphatidylserine synthase 1 (PSS1). PSS1 promotes the biosynthesis of phosphatidylserine (PTDS), which is a functional constituent of lipid bilayers. In vitro, these PTDSS1 mutations were gain-of-function and increased PTDS production. Notably, PTDS binds calcium within matrix vesicles to engender hydroxyapatite crystal formation, and may enhance mesenchymal stem cell differentiation leading to osteogenesis. We report an infant girl with LMHD and a novel heterozygous missense mutation (c.829T>C, p.Trp277Arg) within PTDSS1. Bone turnover markers suggested that her osteosclerosis resulted from accelerated formation with an unremarkable rate of resorption. Urinary amino acid quantitation revealed a greater than sixfold elevation of phosphoserine. Our findings affirm that PTDSS1 defects cause LMHD and support enhanced biosynthesis of PTDS in the pathogenesis of LMHD.", "title": "" } ]
scidocsrr
66932f4285195f1694e5835e5f716cf9
BUP: A Bottom-Up parser embedded in Prolog
[ { "docid": "0b18f7966a57e266487023d3a2f3549d", "text": "A clear and powerful formalism for describing languages, both natural and artificial, follows from a method for expressing grammars in logic due to Colmerauer and Kowalski. This formalism, which is a natural extension of context-free grammars, we call \"definite clause grammars\" (DCGs). A DCG provides not only a description of a language, but also an effective means for analysing strings of that language, since the DCG, as it stands, is an executable program of the programming language Prolog. Using a standard Prolog compiler, the DCG can be compiled into efficient code, making it feasible to implement practical language analysers directly as DCGs. This paper compares DCGs with the successful and widely used augmented transition network (ATN) formalism, and indicates how ATNs can be translated into DCGs. It is argued that DCGs can be at least as efficient as ATNs, whilst the DCG formalism is clearer, more concise and in practice more powerful.", "title": "" } ]
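To make the DCG formalism in the passage above concrete, here is a minimal sketch of a definite clause grammar as executable Prolog. The grammar (words and rules) is an invented toy example, not taken from the paper or from the BUP parser named in the query; it only illustrates the DCG notation and how the standard phrase/2 predicate runs the grammar as a parser.

% Toy definite clause grammar (hypothetical example for illustration only).
sentence     --> noun_phrase, verb_phrase.
noun_phrase  --> determiner, noun.
verb_phrase  --> verb, noun_phrase.
determiner   --> [the].
noun         --> [cat].
noun         --> [mouse].
verb         --> [chases].

% Running the grammar as a parser with the built-in phrase/2 predicate:
% ?- phrase(sentence, [the, cat, chases, the, mouse]).
% true.

Each DCG clause is translated by the Prolog system into an ordinary predicate with two extra difference-list arguments, so the same rules can also be run in generation mode to enumerate the strings of the toy language, not just to parse them.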
[ { "docid": "c1a4da111d6e3496845b4726dfabcb5b", "text": "A growing number of information technology systems and services are being developed to change users’ attitudes or behavior or both. Despite the fact that attitudinal theories from social psychology have been quite extensively applied to the study of user intentions and behavior, these theories have been developed for predicting user acceptance of the information technology rather than for providing systematic analysis and design methods for developing persuasive software solutions. This article is conceptual and theory-creating by its nature, suggesting a framework for Persuasive Systems Design (PSD). It discusses the process of designing and evaluating persuasive systems and describes what kind of content and software functionality may be found in the final product. It also highlights seven underlying postulates behind persuasive systems and ways to analyze the persuasion context (the intent, the event, and the strategy). The article further lists 28 design principles for persuasive system content and functionality, describing example software requirements and implementations. Some of the design principles are novel. Moreover, a new categorization of these principles is proposed, consisting of the primary task, dialogue, system credibility, and social support categories.", "title": "" }, { "docid": "e42dece8d8870739249d19a5d84c6a79", "text": "In this paper, we propose a method for extracting travelrelated event information, such as an event name or a schedule from automatically identified newspaper articles, in which particular events are mentioned. We analyze news corpora using our method, extracting venue names from them. We then find web pages that refer to event schedules for these venues. To confirm the effectiveness of our method, we conducted several experiments. From the experimental results, we obtained a precision of 91.5% and a recall of 75.9% for the automatic extraction of event information from news articles, and a precision of 90.8% and a recall of 52.8% for the automatic identification of eventrelated web pages.", "title": "" }, { "docid": "56c0ce72f6672c6d0f6e37ddd019dd2a", "text": "We focus on the task of multi-hop reading comprehension where a system is required to reason over a chain of multiple facts, distributed across multiple passages, to answer a question. Inspired by graph-based reasoning, we present a path-based reasoning approach for textual reading comprehension. It operates by generating potential paths across multiple passages, extracting implicit relations along this path, and composing them to encode each path. The proposed model achieves a 2.3% gain on the WikiHop Dev set as compared to previous state-of-the-art and, as a side-effect, is also able to explain its reasoning through explicit paths of sentences.", "title": "" }, { "docid": "d580f60d48331b37c55f1e9634b48826", "text": "The fifth generation (5G) wireless network technology is to be standardized by 2020, where main goals are to improve capacity, reliability, and energy efficiency, while reducing latency and massively increasing connection density. An integral part of 5G is the capability to transmit touch perception type real-time communication empowered by applicable robotics and haptics equipment at the network edge. In this regard, we need drastic changes in network architecture including core and radio access network (RAN) for achieving end-to-end latency on the order of 1 ms. 
In this paper, we present a detailed survey on the emerging technologies to achieve low latency communications considering three different solution domains: 1) RAN; 2) core network; and 3) caching. We also present a general overview of major 5G cellular network elements such as software defined network, network function virtualization, caching, and mobile edge computing capable of meeting latency and other 5G requirements.", "title": "" }, { "docid": "029cca0b7e62f9b52e3d35422c11cea4", "text": "This letter presents the design of a novel wideband horizontally polarized omnidirectional printed loop antenna. The proposed antenna consists of a loop with periodical capacitive loading and a parallel stripline as an impedance transformer. Periodical capacitive loading is realized by adding interlaced coupling lines at the end of each section. Similarly to mu-zero resonance (MZR) antennas, the periodical capacitive loaded loop antenna proposed in this letter allows current along the loop to remain in phase and uniform. Therefore, it can achieve a horizontally polarized omnidirectional pattern in the far field, like a magnetic dipole antenna, even though the perimeter of the loop is comparable to the operating wavelength. Furthermore, the periodical capacitive loading is also useful to achieve a wide impedance bandwidth. A prototype of the proposed periodical capacitive loaded loop antenna is fabricated and measured. It can provide a wide impedance bandwidth of about 800 MHz (2170-2970 MHz, 31.2%) and a horizontally polarized omnidirectional pattern in the azimuth plane.", "title": "" }, { "docid": "5b579b0b46f94ecb3842dd5ca3130fd4", "text": "To assure high quality of database applications, testing database applications remains the most popularly used approach. In testing database applications, tests consist of both program inputs and database states. Assessing the adequacy of tests allows targeted generation of new tests for improving their adequacy (e.g., fault-detection capabilities). Comparing to code coverage criteria, mutation testing has been a stronger criterion for assessing the adequacy of tests. Mutation testing would produce a set of mutants (each being the software under test systematically seeded with a small fault) and then measure how high percentage of these mutants are killed (i.e., detected) by the tests under assessment. However, existing test-generation approaches for database applications do not provide sufficient support for killing mutants in database applications (in either program code or its embedded or resulted SQL queries). To address such issues, in this paper, we propose an approach called MutaGen that conducts test generation for mutation testing on database applications. In our approach, we first apply an existing approach that correlates various constraints within a database application through constructing synthesized database interactions and transforming the constraints from SQL queries into normal program code. Based on the transformed code, we generate program-code mutants and SQL-query mutants, and then derive and incorporate query-mutant-killing constraints into the transformed code. Then, we generate tests to satisfy query-mutant-killing constraints. 
Evaluation results show that MutaGen can effectively kill mutants in database applications, and MutaGen outperforms existing test-generation approaches for database applications in terms of strong mutant killing.", "title": "" }, { "docid": "d69b8c991e66ff274af63198dba2ee01", "text": "Nowadays, there are two significant tendencies, how to process the enormous amount of data, big data, and how to deal with the green issues related to sustainability and environmental concerns. An interesting question is whether there are inherent correlations between the two tendencies in general. To answer this question, this paper firstly makes a comprehensive literature survey on how to green big data systems in terms of the whole life cycle of big data processing, and then this paper studies the relevance between big data and green metrics and proposes two new metrics, effective energy efficiency and effective resource efficiency in order to bring new views and potentials of green metrics for the future times of big data.", "title": "" }, { "docid": "28facedbdc268f253ab8ace98f0902b2", "text": "OBJECTIVE\nA wide spectrum of space-occupying soft-tissue lesions may be discovered on MRI studies, either as incidental findings or as palpable or symptomatic masses. Characterization of a lesion as benign or indeterminate is the most important step toward optimal treatment and avoidance of unnecessary biopsy or surgical intervention.\n\n\nCONCLUSION\nThe systemic MRI interpretation approach presented in this article enables the identification of cases in which sarcoma can be excluded.", "title": "" }, { "docid": "a3e88345a2bcd07bf756ca02968082f6", "text": "Bi-directional LSTMs have emerged as a standard method for obtaining per-token vector representations serving as input to various token labeling tasks (whether followed by Viterbi prediction or independent classification). This paper proposes an alternative to Bi-LSTMs for this purpose: iterated dilated convolutional neural networks (ID-CNNs), which have better capacity than traditional CNNs for large context and structured prediction. We describe a distinct combination of network structure, parameter sharing and training procedures that is not only more accurate than Bi-LSTM-CRFs, but also 8x faster at test time on long sequences. Moreover, ID-CNNs with independent classification enable a dramatic 14x testtime speedup, while still attaining accuracy comparable to the Bi-LSTM-CRF. We further demonstrate the ability of IDCNNs to combine evidence over long sequences by demonstrating their improved accuracy on whole-document (rather than per-sentence) inference. Unlike LSTMs whose sequential processing on sentences of length N requires O(N) time even in the face of parallelism, IDCNNs permit fixed-depth convolutions to run in parallel across entire documents. Today when many companies run basic NLP on the entire web and large-volume traffic, faster methods are paramount to saving time and energy costs.", "title": "" }, { "docid": "dea4d96b7af9f3a2c6acb7ae38947954", "text": "The state-of-the-art object detection networks for natural images have recently demonstrated impressive performances. However the complexity of ship detection in high resolution satellite images exposes the limited capacity of these networks for strip-like rotated assembled object detection which are common in remote sensing images. 
In this paper, we embrace this observation and introduce the rotated region based CNN (RR-CNN), which can learn and accurately extract features of rotated regions and locate rotated objects precisely. RR-CNN has three important new components including a rotated region of interest (RRoI) pooling layer, a rotated bounding box regression model and a multi-task method for non-maximal suppression (NMS) between different classes. Experimental results on the public ship dataset HRSC2016 confirm that RR-CNN outperforms baselines by a large margin.", "title": "" }, { "docid": "024b739dc047e17310fe181591fcd335", "text": "In this paper, a Ka-Band patch sub-array structure for millimeter-wave phased array applications is demonstrated. The conventional corner truncated patch is modified to improve the impedance and CP bandwidth alignment. A new sub-array feed approach is introduced to reduce complexity of the feed line between elements and increase the radiation efficiency. A sub-array prototype is built and tested. Good agreement with the theoretical results is obtained.", "title": "" }, { "docid": "43398874a34c7346f41ca7a18261e878", "text": "This article investigates transitions at the level of societal functions (e.g., transport, communication, housing). Societal functions are fulfilled by sociotechnical systems, which consist of a cluster of aligned elements, e.g., artifacts, knowledge, markets, regulation, cultural meaning, infrastructure, maintenance networks and supply networks. Transitions are conceptualised as system innovations, i.e., a change from one sociotechnical system to another. The article describes a co-evolutionary multi-level perspective to understand how system innovations come about through the interplay between technology and society. The article makes a new step as it further refines the multi-level perspective by distinguishing characteristic patterns: (a) two transition routes, (b) fit–stretch pattern, and (c) patterns in breakthrough. D 2005 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "9b8072d38753fc64199693a44297a135", "text": "We propose a segmentation algorithm for the purposes of large-scale flower species recognition. Our approach is based on identifying potential object regions at the time of detection. We then apply a Laplacian-based segmentation, which is guided by these initially detected regions. More specifically, we show that 1) recognizing parts of the potential object helps the segmentation and makes it more robust to variabilities in both the background and the object appearances, 2) segmenting the object of interest at test time is beneficial for the subsequent recognition. Here we consider a large-scale dataset containing 578 flower species and 250,000 images. This dataset is developed by our team for the purposes of providing a flower recognition application for general use and is the largest in its scale and scope. We tested the proposed segmentation algorithm on the well-known 102 Oxford flowers benchmark [11] and on the new challenging large-scale 578 flower dataset, that we have collected. We observed about 4% improvements in the recognition performance on both datasets compared to the baseline. The algorithm also improves all other known results on the Oxford 102 flower benchmark dataset. Furthermore, our method is both simpler and faster than other related approaches, e.g. 
[3, 14], and can be potentially applicable to other subcategory recognition datasets.", "title": "" }, { "docid": "43bb109c93d7f259b11c42031cd93ad6", "text": "A compact rectangular slotted monopole antenna for ultra wideband (UWB) application is presented. The designed antenna has a simple structure and compact size of 25 × 26 mm2. This antenna consist of radiating patch with two steps and one slot introduced on it for bandwidth enhancement and a ground plane. Antenna is feed with 50Ω microstrip line. IE3D method of moments based simulation software is used for design and FR4 substrate of dielectric constant value 4.4 with loss tangent 0.02.", "title": "" }, { "docid": "c81e728d9d4c2f636f067f89cc14862c", "text": "2", "title": "" }, { "docid": "77273b82e31c0b0c361525f83814dd40", "text": "For a multiuser data communications system operating over a mutually cross-coupled linear channel with additive noise sources, we determine the following: (1) a linear cross-coupled receiver processor (filter) that yields the least-mean-squared error between the desired outputs and the actual outputs, and (2) a cross-coupled transmitting filter that optimally distributes the total available power among the different users, as well as the total available frequency spectrum. The structure of the optimizing filters is similar to the known 2 × 2 case encountered in problems associated with digital transmission over dually polarized radio channels.", "title": "" }, { "docid": "ac41c57bcb533ab5dabcc733dd69a705", "text": "In this paper we propose two ways to deal with the imbalanced data classification problem using random forest. One is based on cost sensitive learning, and the other is based on a sampling technique. Performance metrics such as precision and recall, false positive rate and false negative rate, F-measure and weighted accuracy are computed. Both methods are shown to improve the prediction accuracy of the minority class, and have favorable performance compared to the existing algorithms.", "title": "" }, { "docid": "c784bfbd522bb4c9908c3f90a31199fe", "text": "Vedolizumab (VDZ) inhibits α4β7 integrins and is used to target intestinal immune responses in patients with inflammatory bowel disease, which is considered to be relatively safe. Here we report on a fatal complication following VDZ administration. A 64-year-old female patient with ulcerative colitis (UC) refractory to tumor necrosis factor inhibitors was treated with VDZ. One week after the second VDZ infusion, she was admitted to hospital with severe diarrhea and systemic inflammatory response syndrome (SIRS). Blood stream infections were ruled out, and endoscopy revealed extensive ulcerations of the small intestine covered with pseudomembranes, reminiscent of invasive candidiasis or mesenteric ischemia. Histology confirmed subtotal destruction of small intestinal epithelia and colonization with Candida. Moreover, small mesenteric vessels were occluded by hyaline thrombi, likely as a result of SIRS, while perfusion of large mesenteric vessels was not compromised. Beta-D-glucan concentrations were highly elevated, and antimycotic therapy was initiated for suspected invasive candidiasis but did not result in any clinical benefit. Given the non-responsiveness to anti-infective therapies, an autoimmune phenomenon was suspected and immunosuppressive therapy was escalated. However, the patient eventually died from multi-organ failure. 
This case should raise the awareness for rare but severe complications related to immunosuppressive therapy, particularly in high risk patients.", "title": "" }, { "docid": "85d8c2190b2b999df30ee92244236805", "text": "Single document summarization is the task of producing a shorter version of a document while preserving its principal information content. In this paper we conceptualize extractive summarization as a sentence ranking task and propose a novel training algorithm which globally optimizes the ROUGE evaluation metric through a reinforcement learning objective. We use our algorithm to train a neural summarization model on the CNN and DailyMail datasets and demonstrate experimentally that it outperforms state-of-the-art extractive and abstractive systems when evaluated automatically and by humans.1", "title": "" }, { "docid": "937d93600ad3d19afda31ada11ea1460", "text": "Several new services incentivize clients to compete in solving large computation tasks in exchange for financial rewards. This model of competitive distributed computation enables every user connected to the Internet to participate in a game in which he splits his computational power among a set of competing pools -- the game is called a computational power splitting game. We formally model this game and show its utility in analyzing the security of pool protocols that dictate how financial rewards are shared among the members of a pool. As a case study, we analyze the Bitcoin crypto currency which attracts computing power roughly equivalent to billions of desktop machines, over 70% of which is organized into public pools. We show that existing pool reward sharing protocols are insecure in our game-theoretic analysis under an attack strategy called the \"block withholding attack\". This attack is a topic of debate, initially thought to be ill-incentivized in today's pool protocols: i.e., causing a net loss to the attacker, and later argued to be always profitable. Our analysis shows that the attack is always well-incentivized in the long-run, but may not be so for a short duration. This implies that existing pool protocols are insecure, and if the attack is conducted systematically, Bitcoin pools could lose millions of dollars worth in months. The equilibrium state is a mixed strategy -- that is -- in equilibrium all clients are incentivized to probabilistically attack to maximize their payoffs rather than participate honestly. As a result, the Bitcoin network is incentivized to waste a part of its resources simply to compete.", "title": "" } ]
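The extractive-summarization passage above trains a sentence ranker by optimizing the ROUGE metric directly through a reinforcement-learning objective. As a rough sketch, such objectives are usually written in the generic REINFORCE (policy-gradient) form below; this is illustrative notation, not necessarily the exact formulation used in that paper:

\[
L(\theta) = -\,\mathbb{E}_{\hat{y} \sim p_\theta(\cdot \mid x)}\big[\, r(\hat{y}) \,\big],
\qquad
\nabla_\theta L(\theta) \approx -\big(r(\hat{y}) - b\big)\,\nabla_\theta \log p_\theta(\hat{y} \mid x),
\]

where x is the input document, \hat{y} is a sampled set of extracted sentences, r(\hat{y}) is its ROUGE score against the reference summary, and b is a baseline that reduces the variance of the single-sample gradient estimate.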
scidocsrr
f19ff2d7314f21753f9d3d73491716a5
Bringing Deep Learning at the Edge of Information-Centric Internet of Things
[ { "docid": "2c4babb483ddd52c9f1333cbe71a3c78", "text": "The proliferation of Internet of Things (IoT) and the success of rich cloud services have pushed the horizon of a new computing paradigm, edge computing, which calls for processing the data at the edge of the network. Edge computing has the potential to address the concerns of response time requirement, battery life constraint, bandwidth cost saving, as well as data safety and privacy. In this paper, we introduce the definition of edge computing, followed by several case studies, ranging from cloud offloading to smart home and city, as well as collaborative edge to materialize the concept of edge computing. Finally, we present several challenges and opportunities in the field of edge computing, and hope this paper will gain attention from the community and inspire more research in this direction.", "title": "" }, { "docid": "08d1bc0a4e2caba4a399434f6600534c", "text": "In view of evolving the Internet infrastructure, ICN is promoting a communication model that is fundamentally different from the traditional IP address-centric model. The ICN approach consists of the retrieval of content by (unique) names, regardless of origin server location (i.e., IP address), application, and distribution channel, thus enabling in-network caching/replication and content-based security. The expected benefits in terms of improved data dissemination efficiency and robustness in challenging communication scenarios indicate the high potential of ICN as an innovative networking paradigm in the IoT domain. IoT is a challenging environment, mainly due to the high number of heterogeneous and potentially constrained networked devices, and unique and heavy traffic patterns. The application of ICN principles in such a context opens new opportunities, while requiring careful design choices. This article critically discusses potential ways toward this goal by surveying the current literature after presenting several possible motivations for the introduction of ICN in the context of IoT. Major challenges and opportunities are also highlighted, serving as guidelines for progress beyond the state of the art in this timely and increasingly relevant topic.", "title": "" }, { "docid": "1e4a86dcc05ff3d593a4bf7b88f8b23a", "text": "Fog/edge computing has been proposed to be integrated with Internet of Things (IoT) to enable computing services devices deployed at network edge, aiming to improve the user’s experience and resilience of the services in case of failures. With the advantage of distributed architecture and close to end-users, fog/edge computing can provide faster response and greater quality of service for IoT applications. Thus, fog/edge computing-based IoT becomes future infrastructure on IoT development. To develop fog/edge computing-based IoT infrastructure, the architecture, enabling techniques, and issues related to IoT should be investigated first, and then the integration of fog/edge computing and IoT should be explored. To this end, this paper conducts a comprehensive overview of IoT with respect to system architecture, enabling technologies, security and privacy issues, and present the integration of fog/edge computing and IoT, and applications. Particularly, this paper first explores the relationship between cyber-physical systems and IoT, both of which play important roles in realizing an intelligent cyber-physical world. 
Then, existing architectures, enabling technologies, and security and privacy issues in IoT are presented to enhance the understanding of state-of-the-art IoT development. To investigate fog/edge computing-based IoT, this paper also investigates the relationship between IoT and fog/edge computing, and discusses issues in fog/edge computing-based IoT. Finally, several applications, including the smart grid, smart transportation, and smart cities, are presented to demonstrate how fog/edge computing-based IoT can be implemented in real-world applications.", "title": "" } ]
[ { "docid": "55631b81d46fc3dcaad8375176cb1c68", "text": "UNLABELLED\nThe need for long-term retention to prevent post-treatment tooth movement is now widely accepted by orthodontists. This may be achieved with removable retainers or permanent bonded retainers. This article aims to provide simple guidance for the dentist on how to maintain and repair both removable and fixed retainers.\n\n\nCLINICAL RELEVANCE\nThe general dental practitioner is more likely to review patients over time and needs to be aware of the need for long-term retention and how to maintain and repair the retainers.", "title": "" }, { "docid": "8ae1ef032c0a949aa31b3ca8bc024cb5", "text": "Measuring intellectual capital is on the agenda of most 21st century organisations. This paper takes a knowledge-based view of the firm and discusses the importance of measuring organizational knowledge assets. Knowledge assets underpin capabilities and core competencies of any organisation. Therefore, they play a key strategic role and need to be measured. This paper reviews the existing approaches for measuring knowledge based assets and then introduces the knowledge asset map which integrates existing approaches in order to achieve comprehensiveness. The paper then introduces the knowledge asset dashboard to clarify the important actor/infrastructure relationship, which elucidates the dynamic nature of these assets. Finally, the paper suggests to visualise the value pathways of knowledge assets before designing strategic key performance indicators which can then be used to test the assumed causal relationships. This will enable organisations to manage and report these key value drivers in today’s economy. Introduction In the last decade management literature has paid significant attention to the role of knowledge for global competitiveness in the 21st century. It is recognised as a durable and more sustainable strategic resource to acquire and maintain competitive advantages (Barney, 1991a; Drucker, 1988; Grant, 1991a). Today’s business world is characterised by phenomena such as e-business, globalisation, higher degrees of competitiveness, fast evolution of new technology, rapidly changing client demands, as well as changing economic and political structures. In this new context companies need to develop clearly defined strategies that will give them a competitive advantage (Porter, 2001; Barney, 1991a). For this, organisations have to understand which capabilities they need in order to gain and maintain this competitive advantage (Barney, 1991a; Prahalad and Hamel, 1990). Organizational capabilities are based on knowledge. Thus, knowledge is a resource that forms the foundation of the company’s capabilities. Capabilities combine to The Emerald Research Register for this journal is available at The current issue and full text archive of this journal is available at www.emeraldinsight.com/researchregister www.emeraldinsight.com/1463-7154.htm The authors would like to thank, Göran Roos, Steven Pike, Oliver Gupta, as well as the two anonymous reviewers for their valuable comments which helped us to improve this paper. Intellectual capital", "title": "" }, { "docid": "c13cbc9d7b4098cb392ba8293b692a37", "text": "This paper introduces the first stiffness controller for continuum robots. The control law is based on an accurate approximation of a continuum robot's coupled kinematic and static force model. 
To implement a desired tip stiffness, the controller drives the actuators to positions corresponding to a deflected robot configuration that produces the required tip force for the measured tip position. This approach provides several important advantages. First, it enables the use of robot deflection sensing as a means to both sense and control tip forces. Second, it enables stiffness control to be implemented by modification of existing continuum robot position controllers. The proposed controller is demonstrated experimentally in the context of a concentric tube robot. Results show that the stiffness controller achieves the desired stiffness in steady state, provides good dynamic performance, and exhibits stability during contact transitions.", "title": "" }, { "docid": "cd224f035982a669dcd8eb0c086a1be0", "text": "In this paper we integrate a humanoid robot with a powered wheelchair with the aim of lowering the cognitive requirements needed for powered mobility. We propose two roles for this companion: pointing out obstacles and giving directions. We show that children enjoyed driving with the humanoid companion by their side during a field-trial in an uncontrolled environment. Moreover, we present the results of a driving experiment for adults where the companion acted as a driving aid and conclude that participants preferred the humanoid companion to a simulated companion. Our results suggest that people will welcome a humanoid companion for their wheelchairs.", "title": "" }, { "docid": "3ca057959a24245764953a6aa1b2ed84", "text": "Distant supervision for relation extraction is an efficient method to scale relation extraction to very large corpora which contains thousands of relations. However, the existing approaches have flaws on selecting valid instances and lack of background knowledge about the entities. In this paper, we propose a sentence-level attention model to select the valid instances, which makes full use of the supervision information from knowledge bases. And we extract entity descriptions from Freebase and Wikipedia pages to supplement background knowledge for our task. The background knowledge not only provides more information for predicting relations, but also brings better entity representations for the attention module. We conduct three experiments on a widely used dataset and the experimental results show that our approach outperforms all the baseline systems significantly.", "title": "" }, { "docid": "636be5d5a0cc7dc4ab1906548cb53b31", "text": "Feature selection is one of the techniques in machine learning for selecting a subset of relevant features namely variables for the construction of models. The feature selection technique aims at removing the redundant or irrelevant features or features which are strongly correlated in the data without much loss of information. It is broadly used for making the model much easier to interpret and increase generalization by reducing the variance. Regression analysis plays a vital role in statistical modeling and in turn for performing machine learning tasks. The traditional procedures such as Ordinary Least Squares (OLS) regression, Stepwise regression and partial least squares regression are very sensitive to random errors. Many alternatives have been established in the literature during the past few decades such as Ridge regression and LASSO and its variants. This paper explores the features of the popular regression methods, OLS regression, ridge regression and the LASSO regression. 
The performance of these procedures has been studied in terms of model fitting and prediction accuracy using real data and simulated environment with the help of R package.", "title": "" }, { "docid": "c15bc15643075d75e24d81b237ed3f4c", "text": "User authentication is a crucial service in wireless sensor networks (WSNs) that is becoming increasingly common in WSNs because wireless sensor nodes are typically deployed in an unattended environment, leaving them open to possible hostile network attack. Because wireless sensor nodes are limited in computing power, data storage and communication capabilities, any user authentication protocol must be designed to operate efficiently in a resource constrained environment. In this paper, we review several proposed WSN user authentication protocols, with a detailed review of the M.L Das protocol and a cryptanalysis of Das' protocol that shows several security weaknesses. Furthermore, this paper proposes an ECC-based user authentication protocol that resolves these weaknesses. According to our analysis of security of the ECC-based protocol, it is suitable for applications with higher security requirements. Finally, we present a comparison of security, computation, and communication costs and performances for the proposed protocols. The ECC-based protocol is shown to be suitable for higher security WSNs.", "title": "" }, { "docid": "f925550d3830944b8649266292eae3fd", "text": "In the recent years antenna design appears as a mature field of research. It really is not the fact because as the technology grows with new ideas, fitting expectations in the antenna design are always coming up. A Ku-band patch antenna loaded with notches and slit has been designed and simulated using Ansoft HFSS 3D electromagnetic simulation tool. Multi-frequency band operation is obtained from the proposed microstrip antenna. The design was carried out using Glass PTFE as the substrate and copper as antenna material. The designed antennas resonate at 15GHz with return loss over 50dB & VSWR less than 1, on implementing different slots in the radiating patch multiple frequencies resonate at 12.2GHz & 15.00GHz (Return Loss -27.5, -37.73 respectively & VSWR 0.89, 0.24 respectively) and another resonate at 11.16 GHz, 15.64GHz & 17.73 GHz with return loss -18.99, -23.026, -18.156 dB respectively and VSWR 1.95, 1.22 & 2.1 respectively. All the above designed band are used in the satellite application for non-geostationary orbit (NGSO) and fixed-satellite services (FSS) providers to operate in various segments of the Ku-band.", "title": "" }, { "docid": "c2816721fa6ccb0d676f7fdce3b880d4", "text": "Due to the achievements in the Internet of Things (IoT) field, Smart Objects are often involved in business processes. However, the integration of IoT with Business Process Management (BPM) is far from mature: problems related to process compliance and Smart Objects configuration with respect to the process requirements have not been fully addressed yet; also, the interaction of Smart Objects with multiple business processes that belong to different stakeholders is still under investigation. My PhD thesis aims to fill this gap by extending the BPM lifecycle, with particular focus on the design and analysis phase, in order to explicitly support IoT and its requirements.", "title": "" }, { "docid": "d4e5a5aa65017360db9a87590a728892", "text": "This work presents a chaotic path planning generator which is used in autonomous mobile robots, in order to cover a terrain. 
The proposed generator is based on a nonlinear circuit, which shows chaotic behavior. The bit sequence, produced by the chaotic generator, is converted to a sequence of planned positions, which satisfies the requirements for unpredictability and fast scanning of the entire terrain. The nonlinear circuit and the trajectory-planner are described thoroughly. Simulation tests confirm that with the proposed path planning generator better results can be obtained with regard to previous works. © 2012 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "e15405f1c0fb52be154e79a2976fbb6d", "text": "The generalized Poisson regression model has been used to model dispersed count data. It is a good competitor to the negative binomial regression model when the count data is over-dispersed. Zero-inflated Poisson and zero-inflated negative binomial regression models have been proposed for the situations where the data generating process results into too many zeros. In this paper, we propose a zero-inflated generalized Poisson (ZIGP) regression model to model domestic violence data with too many zeros. Estimation of the model parameters using the method of maximum likelihood is provided. A score test is presented to test whether the number of zeros is too large for the generalized Poisson model to adequately fit the domestic violence data.", "title": "" }, { "docid": "c283e7b1133fe0898e5d953c751d6d85", "text": "Fasting has been practiced for millennia, but, only recently, studies have shed light on its role in adaptive cellular responses that reduce oxidative damage and inflammation, optimize energy metabolism, and bolster cellular protection. In lower eukaryotes, chronic fasting extends longevity, in part, by reprogramming metabolic and stress resistance pathways. In rodents intermittent or periodic fasting protects against diabetes, cancers, heart disease, and neurodegeneration, while in humans it helps reduce obesity, hypertension, asthma, and rheumatoid arthritis. Thus, fasting has the potential to delay aging and help prevent and treat diseases while minimizing the side effects caused by chronic dietary interventions.", "title": "" }, { "docid": "8adb07a99940383139f0d4ed32f68f7c", "text": "The gene ASPM (abnormal spindle-like microcephaly associated) is a specific regulator of brain size, and its evolution in the lineage leading to Homo sapiens was driven by strong positive selection. Here, we show that one genetic variant of ASPM in humans arose merely about 5800 years ago and has since swept to high frequency under strong positive selection. These findings, especially the remarkably young age of the positively selected variant, suggest that the human brain is still undergoing rapid adaptive evolution.", "title": "" }, { "docid": "f81723af1cb8bf52b1348fe1f4d91d90", "text": "The backpropagation (BP) algorithm is often thought to be biologically implausible in the brain. One of the main reasons is that BP requires symmetric weight matrices in the feedforward and feedback pathways. To address this “weight transport problem” (Grossberg, 1987), two more biologically plausible algorithms, proposed by Liao et al. (2016) and Lillicrap et al. (2016), relax BP’s weight symmetry requirements and demonstrate comparable learning capabilities to that of BP on small datasets. However, a recent study by Bartunov et al. 
(2018) evaluates variants of target-propagation (TP) and feedback alignment (FA) on MNIST, CIFAR, and ImageNet datasets, and finds that although many of the proposed algorithms perform well on MNIST and CIFAR, they perform significantly worse than BP on ImageNet. Here, we additionally evaluate the sign-symmetry algorithm (Liao et al., 2016), which differs from both BP and FA in that the feedback and feedforward weights share signs but not magnitudes. We examine the performance of sign-symmetry and feedback alignment on ImageNet and MS COCO datasets using different network architectures (ResNet-18 and AlexNet for ImageNet, RetinaNet for MS COCO). Surprisingly, networks trained with sign-symmetry can attain classification performance approaching that of BP-trained networks. These results complement the study by Bartunov et al. (2018), and establish a new benchmark for future biologically plausible learning algorithms on more difficult datasets and more complex architectures. This material is based upon work supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF-1231216. arXiv:1811.03567v2 [cs.LG] 25 Nov 2018 BIOLOGICALLY-PLAUSIBLE LEARNING ALGORITHMS CAN SCALE TO LARGE DATASETS", "title": "" }, { "docid": "2e6623aa13ca5a047d888612c9a8e22a", "text": "We present a hydro-elastic actuator that has a linear spring intentionally placed in series between the hydraulic piston and actuator output. The spring strain is measured to get an accurate estimate of force. This measurement alone is used in PI feedback to control the force in the actuator. The spring allows for high force fidelity, good force control, minimum impedance, and large dynamic range. A third order linear actuator model is broken into two fundamental cases: fixed load – high force (forward transfer function), and free load – zero force (impedance). These two equations completely describe the linear characteristics of the actuator. This model is presented with dimensional analysis to allow for generalization. A prototype actuator that demonstrates force control and low impedance is also presented. Dynamic analysis of the prototype actuator correlates well with the linear mathematical model. This work done with hydraulics is an extension from previous work done with electro-mechanical actuators. Keywords— Series Elastic Actuator, Force Control, Hydraulic Force Control, Biomimetic Robots", "title": "" }, { "docid": "cf5e440f064656488506d90285c7885d", "text": "A key issue in delay tolerant networks (DTN) is to find the right node to store and relay messages. We consider messages annotated with the unique keywords describing the message subject, and nodes also add keywords to describe their mission interests, priority and their transient social relationship (TSR). To offset resource costs, an incentive mechanism is developed over transient social relationships which enrich en-route message content and motivate better semantically related nodes to carry and forward messages. The incentive mechanism ensures avoidance of congestion due to uncooperative or selfish behavior of nodes.", "title": "" }, { "docid": "6c08b5b172d2d322734bab615b005ab4", "text": "Inelastic collisions between the galactic cosmic rays (GCRs) and the interstellar medium (ISM) are responsible for producing essentially all of the light elements Li, Be, and B (LiBeB) observed in the cosmic rays. 
Previous calculations (e.g., [1]) have shown that GCR fragmentation can explain the bulk of the existing LiBeB abundance in the present-day Galaxy. However, elemental abundances of LiBeB in old halo stars indicate inconsistencies with this explanation. We have used a simple leaky-box model to predict the cosmic-ray elemental and isotopic abundances of LiBeB in the present epoch. We conducted a survey of recent scientific literature on fragmentation cross sections and have calculated the amount of uncertainty they introduce into our model. The predicted particle intensities of this model were compared with high energy (E ≈ 200-500 MeV/nucleon) cosmic-ray data from the Cosmic Ray Isotope Spectrometer (CRIS), which indicates fairly good agreement with absolute fluxes for Z ≥ 5 and relative isotopic abundances for all LiBeB species.", "title": "" }, { "docid": "5cb8b8d4c228d0f75543ae1b4d5a0e5c", "text": "Clustering is an important data mining task for exploration and visualization of different data types like news stories, scientific publications, weblogs, etc. Due to the evolving nature of these data, evolutionary clustering, also known as dynamic clustering, has recently emerged to cope with the challenges of mining temporally smooth clusters over time. A good evolutionary clustering algorithm should be able to fit the data well at each time epoch, and at the same time result in a smooth cluster evolution that provides the data analyst with a coherent and easily interpretable model. In this paper we introduce the temporal Dirichlet process mixture model (TDPM) as a framework for evolutionary clustering. TDPM is a generalization of the DPM framework for clustering that automatically grows the number of clusters with the data. In our framework, the data is divided into epochs; all data points inside the same epoch are assumed to be fully exchangeable, whereas the temporal order is maintained across epochs. Moreover, the number of clusters in each epoch is unbounded: the clusters can retain, die out or emerge over time, and the actual parameterization of each cluster can also evolve over time in a Markovian fashion. We give a detailed and intuitive construction of this framework using the recurrent Chinese restaurant process (RCRP) metaphor, as well as a Gibbs sampling algorithm to carry out posterior inference in order to determine the optimal cluster evolution. We demonstrate our model over simulated data by using it to build an infinite dynamic mixture of Gaussian factors, and over a real dataset by using it to build a simple non-parametric dynamic clustering-topic model and apply it to analyze the NIPS12 document collection.", "title": "" }, { "docid": "ab23f66295574368ccd8fc4e1b166ecc", "text": "Although the educational level of the Portuguese population has improved in the last decades, the statistics keep Portugal at Europe’s tail end due to its high student failure rates. In particular, lack of success in the core classes of Mathematics and the Portuguese language is extremely serious. On the other hand, the fields of Business Intelligence (BI)/Data Mining (DM), which aim at extracting high-level knowledge from raw data, offer interesting automated tools that can aid the education domain. The present work intends to approach student achievement in secondary education using BI/DM techniques. Recent real-world data (e.g. student grades, demographic, social and school related features) was collected by using school reports and questionnaires. The two core classes (i.e.
Mathematics and Portuguese) were modeled under binary/five-level classification and regression tasks. Also, four DM models (i.e. Decision Trees, Random Forest, Neural Networks and Support Vector Machines) and three input selections (e.g. with and without previous grades) were tested. The results show that a good predictive accuracy can be achieved, provided that the first and/or second school period grades are available. Although student achievement is highly influenced by past evaluations, an explanatory analysis has shown that there are also other relevant features (e.g. number of absences, parent’s job and education, alcohol consumption). As a direct outcome of this research, more efficient student prediction tools can be developed, improving the quality of education and enhancing school resource management.", "title": "" }, { "docid": "7bb17491cb10db67db09bc98aba71391", "text": "This paper presents a constrained backpropagation (CPROP) methodology for solving nonlinear elliptic and parabolic partial differential equations (PDEs) adaptively, subject to changes in the PDE parameters or external forcing. Unlike existing methods based on penalty functions or Lagrange multipliers, CPROP solves the constrained optimization problem associated with training a neural network to approximate the PDE solution by means of direct elimination. As a result, CPROP reduces the dimensionality of the optimization problem, while satisfying the equality constraints associated with the boundary and initial conditions exactly, at every iteration of the algorithm. The effectiveness of this method is demonstrated through several examples, including nonlinear elliptic and parabolic PDEs with changing parameters and nonhomogeneous terms.", "title": "" } ]
scidocsrr
a0e24031f03b66cf7151caa726854b22
Individual differences in executive control relate to metaphor processing: an eye movement study of sentence reading
[ { "docid": "8230ddd7174a2562c0fe0f83b1bf7cf7", "text": "Metaphors are fundamental to creative thought and expression. Newly coined metaphors regularly infiltrate our collective vocabulary and gradually become familiar, but it is unclear how this shift from novel to conventionalized meaning happens in the brain. We investigated the neural career of metaphors in a functional magnetic resonance imaging study using extensively normed new metaphors and simulated the ordinary, gradual experience of metaphor conventionalization by manipulating participants' exposure to these metaphors. Results showed that the conventionalization of novel metaphors specifically tunes activity within bilateral inferior prefrontal cortex, left posterior middle temporal gyrus, and right postero-lateral occipital cortex. These results support theoretical accounts attributing a role for the right hemisphere in processing novel, low salience figurative meanings, but also show that conventionalization of metaphoric meaning is a bilaterally-mediated process. Metaphor conventionalization entails a decreased neural load within semantic networks rather than a hemispheric or regional shift across brain areas.", "title": "" }, { "docid": "8feb5dce809acf0efb63d322f0526fcf", "text": "Recent studies of eye movements in reading and other information processing tasks, such as music reading, typing, visual search, and scene perception, are reviewed. The major emphasis of the review is on reading as a specific example of cognitive processing. Basic topics discussed with respect to reading are (a) the characteristics of eye movements, (b) the perceptual span, (c) integration of information across saccades, (d) eye movement control, and (e) individual differences (including dyslexia). Similar topics are discussed with respect to the other tasks examined. The basic theme of the review is that eye movement data reflect moment-to-moment cognitive processes in the various tasks examined. Theoretical and practical considerations concerning the use of eye movement data are also discussed.", "title": "" } ]
[ { "docid": "b403f37f0c27d4fe2b0f398c4c72f7a6", "text": "In this work we present a novel approach to predict the function of proteins in protein-protein interaction (PPI) networks. We classify existing approaches into inductive and transductive approaches, and into local and global approaches. As of yet, among the group of inductive approaches, only local ones have been proposed for protein function prediction. We here introduce a protein description formalism that also includes global information, namely information that locates a protein relative to specific important proteins in the network. We analyze the effect on function prediction accuracy of selecting a different number of important proteins. With around 70 important proteins, even in large graphs, our method makes good and stable predictions. Furthermore, we investigate whether our method also classifies proteins accurately on more detailed function levels. We examined up to five different function levels. The method is benchmarked on four datasets where we found classification performance according to F-measure values indeed improves by 9 percent over the benchmark methods employed.", "title": "" }, { "docid": "bdc8cf5c66c4e0c29de33d3d1fcb5234", "text": "In order to fully understand the sensory, perceptual, and cognitive issues associated with helmet-/head-mounted displays (HMDs), it is essential to possess an understanding of exactly what constitutes an HMD, the various design types, their advantages and limitations, and their applications. It also is useful to explore the developmental history of these systems. Such an exploration can reveal the major engineering, human factors, and ergonomic issues encountered in the development cycle. These identified issues usually are indicators of where the most attention needs to be placed when evaluating the usefulness of such systems. New HMD systems are implemented because they are intended to provide some specific capability or performance enhancement. However, these improvements always come at a cost. In reality, the introduction of technology is a tradeoff endeavor. It is necessary to identify and assess the tradeoffs that impact overall system and user sensory systems performance. HMD developers have often and incorrectly assumed that the human visual and auditory systems are fully capable of accepting the added sensory and cognitive demands of an HMD system without incurring performance degradation or introducing perceptual illusions. Situation awareness (SA), essential in preventing actions or inactions that lead to catastrophic outcomes, may be degraded if the HMD interferes with normal perceptual processes, resulting in misinterpretations or misperceptions (illusions). As HMD applications increase, it is important to maintain an awareness of both current and future programs. Unfortunately, in these developmental programs, one factor still is often minimized. This factor is how the user accepts and eventually uses the HMD. In the demanding rigors of warfare, the user rapidly decides whether using a new HMD, intended to provide tactical and other information, outweighs the impact the HMD has on survival and immediate mission success. If the system requires an unacceptable compromise in any aspect of mission completion deemed critical to the Warfighter, the HMD will not be used. 
Technology in which the Warfighter does not have confidence or determines to be a liability will go unused.", "title": "" }, { "docid": "095dd4efbb23bc91b72dea1cd1c627ab", "text": "Cell-cell communication is critical across an assortment of physiological and pathological processes. Extracellular vesicles (EVs) represent an integral facet of intercellular communication largely through the transfer of functional cargo such as proteins, messenger RNAs (mRNAs), microRNA (miRNAs), DNAs and lipids. EVs, especially exosomes and shed microvesicles, represent an important delivery medium in the tumour micro-environment through the reciprocal dissemination of signals between cancer and resident stromal cells to facilitate tumorigenesis and metastasis. An important step of the metastatic cascade is the reprogramming of cancer cells from an epithelial to mesenchymal phenotype (epithelial-mesenchymal transition, EMT), which is associated with increased aggressiveness, invasiveness and metastatic potential. There is now increasing evidence demonstrating that EVs released by cells undergoing EMT are reprogrammed (protein and RNA content) during this process. This review summarises current knowledge of EV-mediated functional transfer of proteins and RNA species (mRNA, miRNA, long non-coding RNA) between cells in cancer biology and the EMT process. An in-depth understanding of EVs associated with EMT, with emphasis on molecular composition (proteins and RNA species), will provide fundamental insights into cancer biology.", "title": "" }, { "docid": "f1df8b69dfec944b474b9b26de135f55", "text": "Background: There are currently two million cancer survivors in the United Kingdom, and in recent years this number has grown by 3% per annum. The aim of this paper is to provide long-term projections of cancer prevalence in the United Kingdom. Methods: National cancer registry data for England were used to estimate cancer prevalence in the United Kingdom in 2009. Using a model of prevalence as a function of incidence, survival and population demographics, projections were made to 2040. Different scenarios of future incidence and survival, and their effects on cancer prevalence, were also considered. Colorectal, lung, prostate, female breast and all cancers combined (excluding non-melanoma skin cancer) were analysed separately. Results: Assuming that existing trends in incidence and survival continue, the number of cancer survivors in the United Kingdom is projected to increase by approximately one million per decade from 2010 to 2040. Particularly large increases are anticipated in the oldest age groups, and in the number of long-term survivors. By 2040, almost a quarter of people aged at least 65 will be cancer survivors. Conclusion: Increasing cancer survival and the growing/ageing population of the United Kingdom mean that the population of survivors is likely to grow substantially in the coming decades, as are the related demands upon the health service. Plans must, therefore, be laid to ensure that the varied needs of cancer survivors can be met in the future.", "title": "" }, { "docid": "56179ddce0ba91184cca226d482a2da4", "text": "An original differential structure using exclusively MOS devices working in the saturation region is presented. Offering the great advantage of excellent linearity, obtained by a proper biasing of the differential core (using original translation and arithmetical mean blocks), the proposed circuit is designed for low-voltage low-power operation.
The estimated linearity is obtained for an extended range of the differential input voltage and in the worst case of considering second-order effects that affect MOS transistors operation. The frequency response of the new differential structure is strongly increased by operating all MOS devices in the saturation region. The circuit is implemented in 0.35 mum CMOS technology, SPICE simulations confirming the theoretical estimated results.", "title": "" }, { "docid": "e5f2a33ef8952e1b8c5129e8aa65045c", "text": "This paper addresses the problem of learning similarity-preserving binary codes for efficient similarity search in large-scale image collections. We formulate this problem in terms of finding a rotation of zero-centered data so as to minimize the quantization error of mapping this data to the vertices of a zero-centered binary hypercube, and propose a simple and efficient alternating minimization algorithm to accomplish this task. This algorithm, dubbed iterative quantization (ITQ), has connections to multiclass spectral clustering and to the orthogonal Procrustes problem, and it can be used both with unsupervised data embeddings such as PCA and supervised embeddings such as canonical correlation analysis (CCA). The resulting binary codes significantly outperform several other state-of-the-art methods. We also show that further performance improvements can result from transforming the data with a nonlinear kernel mapping prior to PCA or CCA. Finally, we demonstrate an application of ITQ to learning binary attributes or \"classemes\" on the ImageNet data set.", "title": "" }, { "docid": "dfa51004b99bce29e644fbcca4b833a5", "text": "This paper presents Latent Sampling-based Motion Planning (L-SBMP), a methodology towards computing motion plans for complex robotic systems by learning a plannable latent representation. Recent works in control of robotic systems have effectively leveraged local, low-dimensional embeddings of high-dimensional dynamics. In this paper we combine these recent advances with techniques from samplingbased motion planning (SBMP) in order to design a methodology capable of planning for high-dimensional robotic systems beyond the reach of traditional approaches (e.g., humanoids, or even systems where planning occurs in the visual space). Specifically, the learned latent space is constructed through an autoencoding network, a dynamics network, and a collision checking network, which mirror the three main algorithmic primitives of SBMP, namely state sampling, local steering, and collision checking. Notably, these networks can be trained through only raw data of the system’s states and actions along with a supervising collision checker. Building upon these networks, an RRT-based algorithm is used to plan motions directly in the latent space – we refer to this exploration algorithm as Learned Latent RRT (L2RRT). This algorithm globally explores the latent space and is capable of generalizing to new environments. The overall methodology is demonstrated on two planning problems, namely a visual planning problem, whereby planning happens in the visual (pixel) space, and a humanoid robot planning problem.", "title": "" }, { "docid": "d06dc916942498014f9d00498c1d1d1f", "text": "In this paper we propose a state space modeling approach for trust evaluation in wireless sensor networks. 
In our state space trust model (SSTM), each sensor node is associated with a trust metric, which measures to what extent the data transmitted from this node should be trusted by the server node. Given the SSTM, we translate the trust evaluation problem into a nonlinear state filtering problem. To estimate the state based on the SSTM, a component-wise iterative state inference procedure is proposed to work in tandem with the particle filter, and thus the resulting algorithm is termed the iterative particle filter (IPF). The computational complexity of the IPF algorithm is theoretically linearly related to the dimension of the state. This property is desirable especially for high dimensional trust evaluation and state filtering problems. The performance of the proposed algorithm is evaluated by both simulations and real data analysis. Index Terms: state space trust model, wireless sensor network, trust evaluation, particle filter, high dimensional.", "title": "" }, { "docid": "0be273eb8dfec6a6f71a44f38e8207ba", "text": "Clustering is a powerful tool which has been used in several forecasting works, such as time series forecasting, real time storm detection, flood forecasting and so on. In this paper, a generic methodology for weather forecasting is proposed with the help of an incremental K-means clustering algorithm. Weather forecasting plays an important role in day to day applications. The weather forecasting in this paper is based on the incremental air pollution database of West Bengal in the years 2009 and 2010. This paper first uses typical K-means clustering on the main air pollution database, and a list of weather categories is developed based on the maximum mean values of the clusters. Then, when new data arrive, incremental K-means is used to group those data into the clusters whose weather categories have already been defined. Thus it builds up a strategy to predict the weather of the upcoming data of the upcoming days. This forecasting database is based entirely on the weather of West Bengal, and the forecasting methodology is developed to mitigate the impacts of air pollution and to launch focused modeling computations for prediction and forecasts of weather events. The accuracy of this approach is also measured.", "title": "" }, { "docid": "8c007238a61730cc2fb20d091d561aea", "text": "The Class II division 2 (Class II/2) malocclusion as originally defined by E.H. Angle is relatively rare. The orthodontic literature does not agree on the skeletal characteristics of this malocclusion. Several researchers claim that it is characterized by an orthognathic facial pattern and that the malocclusion is dentoalveolar per se. Others claim that the Class II/2 malocclusion has unique skeletal and dentoalveolar characteristics. The present study describes the skeletal and dentoalveolar cephalometric characteristics of 50 patients clinically diagnosed as having Class II/2 malocclusion according to Angle's original criteria. The study compares the findings with those of both a control group of 54 subjects with Class II division I (Class II/1) malocclusion and a second control group of 34 subjects with Class I (Class I) malocclusion.
The findings demonstrate definite skeletal and dentoalveolar patterns with the following characteristics: (1) the maxilla is orthognathic, (2) the mandible has relatively short and retrognathic parameters, (3) the chin is relatively prominent, (4) the facial pattern is hypodivergent, (5) the upper central incisors are retroclined, and (6) the overbite is deep. The results demonstrate that, in a sagittal direction, the entity of Angle Class II/2 malocclusion might actually be located between the Angle Class I and the Angle Class II/1 malocclusions. with unique vertical skeletal characteristics.", "title": "" }, { "docid": "29c32c8c447b498f43ec215633305923", "text": "A growing body of evidence suggests that empathy for pain is underpinned by neural structures that are also involved in the direct experience of pain. In order to assess the consistency of this finding, an image-based meta-analysis of nine independent functional magnetic resonance imaging (fMRI) investigations and a coordinate-based meta-analysis of 32 studies that had investigated empathy for pain using fMRI were conducted. The results indicate that a core network consisting of bilateral anterior insular cortex and medial/anterior cingulate cortex is associated with empathy for pain. Activation in these areas overlaps with activation during directly experienced pain, and we link their involvement to representing global feeling states and the guidance of adaptive behavior for both self- and other-related experiences. Moreover, the image-based analysis demonstrates that depending on the type of experimental paradigm this core network was co-activated with distinct brain regions: While viewing pictures of body parts in painful situations recruited areas underpinning action understanding (inferior parietal/ventral premotor cortices) to a stronger extent, eliciting empathy by means of abstract visual information about the other's affective state more strongly engaged areas associated with inferring and representing mental states of self and other (precuneus, ventral medial prefrontal cortex, superior temporal cortex, and temporo-parietal junction). In addition, only the picture-based paradigms activated somatosensory areas, indicating that previous discrepancies concerning somatosensory activity during empathy for pain might have resulted from differences in experimental paradigms. We conclude that social neuroscience paradigms provide reliable and accurate insights into complex social phenomena such as empathy and that meta-analyses of previous studies are a valuable tool in this endeavor.", "title": "" }, { "docid": "ed13193df5db458d0673ccee69700bc0", "text": "Interest in meat fatty acid composition stems mainly from the need to find ways to produce healthier meat, i.e. with a higher ratio of polyunsaturated (PUFA) to saturated fatty acids and a more favourable balance between n-6 and n-3 PUFA. In pigs, the drive has been to increase n-3 PUFA in meat and this can be achieved by feeding sources such as linseed in the diet. Only when concentrations of α-linolenic acid (18:3) approach 3% of neutral lipids or phospholipids are there any adverse effects on meat quality, defined in terms of shelf life (lipid and myoglobin oxidation) and flavour. Ruminant meats are a relatively good source of n-3 PUFA due to the presence of 18:3 in grass. Further increases can be achieved with animals fed grain-based diets by including whole linseed or linseed oil, especially if this is \"protected\" from rumen biohydrogenation. 
Long-chain (C20-C22) n-3 PUFA are synthesised from 18:3 in the animal although docosahexaenoic acid (DHA, 22:6) is not increased when diets are supplemented with 18:3. DHA can be increased by feeding sources such as fish oil although too-high levels cause adverse flavour and colour changes. Grass-fed beef and lamb have naturally high levels of 18:3 and long chain n-3 PUFA. These impact on flavour to produce a 'grass fed' taste in which other components of grass are also involved. Grazing also provides antioxidants including vitamin E which maintain PUFA levels in meat and prevent quality deterioration during processing and display. In pork, beef and lamb the melting point of lipid and the firmness/hardness of carcass fat is closely related to the concentration of stearic acid (18:0).", "title": "" }, { "docid": "54fc5bc85ef8022d099fff14ab1b7ce0", "text": "Automatic inspection of Mura defects is a challenging task in thin-film transistor liquid crystal display (TFT-LCD) defect detection, which is critical for LCD manufacturers to guarantee high standard quality control. In this paper, we propose a set of automatic procedures to detect mura defects by using image processing and computer vision techniques. Singular Value Decomposition (SVD) and Discrete Cosine Transformation(DCT) techniques are employed to conduct image reconstruction, based on which we are able to obtain the differential image of LCD Cells. In order to detect different types of mura defects accurately, we then design a method that employs different detection modules adaptively, which can overcome the disadvantage of simply using a single threshold value. Finally, we provide the experimental results to validate the effectiveness of the proposed method in mura detection.", "title": "" }, { "docid": "e964d88be0270bc6ee7eb7748868dd3c", "text": "The standard serial algorithm for strongly connected components is based on depth rst search, which is di cult to parallelize. We describe a divide-and-conquer algorithm for this problem which has signi cantly greater potential for parallelization. For a graph with n vertices in which degrees are bounded by a constant, we show the expected serial running time of our algorithm to be O(n log n).", "title": "" }, { "docid": "18ffa160ffce386993b5c2da5070b364", "text": "This paper presents a new approach for facial attribute classification using a multi-task learning approach. Unlike other approaches that uses hand engineered features, our model learns a shared feature representation that is wellsuited for multiple attribute classification. Learning a joint feature representation enables interaction between different tasks. For learning this shared feature representation we use a Restricted Boltzmann Machine (RBM) based model, enhanced with a factored multi-task component to become Multi-Task Restricted Boltzmann Machine (MT-RBM). Our approach operates directly on faces and facial landmark points to learn a joint feature representation over all the available attributes. We use an iterative learning approach consisting of a bottom-up/top-down pass to learn the shared representation of our multi-task model and at inference we use a bottom-up pass to predict the different tasks. Our approach is not restricted to any type of attributes, however, for this paper we focus only on facial attributes. We evaluate our approach on three publicly available datasets, the Celebrity Faces (CelebA), the Multi-task Facial Landmarks (MTFL), and the ChaLearn challenge dataset. 
We show superior classification performance improvement over the state-of-the-art.", "title": "" }, { "docid": "16b64bf865bae192b604faaf6f916ff1", "text": "Recurrent Neural Networks (RNNs) have obtained excellent result in many natural language processing (NLP) tasks. However, understanding and interpreting the source of this success remains a challenge. In this paper, we propose Recurrent Memory Network (RMN), a novel RNN architecture, that not only amplifies the power of RNN but also facilitates our understanding of its internal functioning and allows us to discover underlying patterns in data. We demonstrate the power of RMN on language modeling and sentence completion tasks. On language modeling, RMN outperforms Long Short-Term Memory (LSTM) network on three large German, Italian, and English dataset. Additionally we perform indepth analysis of various linguistic dimensions that RMN captures. On Sentence Completion Challenge, for which it is essential to capture sentence coherence, our RMN obtains 69.2% accuracy, surpassing the previous state of the art by a large margin.1", "title": "" }, { "docid": "b9300a58c4b55bfb0f57b36e5054e5c6", "text": "The problem of designing, coordinating, and managing complex systems has been central to the management and organizations literature. Recent writings have tended to offer modularity as, at least, a partial solution to this design problem. However, little attention has been paid to the problem of identifying what constitutes an appropriate modularization of a complex system. We develop a formal simulation model that allows us to carefully examine the dynamics of innovation and performance in complex systems. The model points to the trade-off between the destabilizing effects of overly refined modularization and the modest levels of search and a premature fixation on inferior designs that can result from excessive levels of integration. The analysis highlights an asymmetry in this trade-off, with excessively refined modules leading to cycling behavior and a lack of performance improvement. We discuss the implications of these arguments for product and organization design.", "title": "" }, { "docid": "5ca886592c6bb484bf04847ecfb3469d", "text": "In power transistor switching circuits, shunt snubbers (dv/dt limiting capacitors) are often used to reduce the turn-off switching loss or prevent reverse-biased second breakdown. Similarly, series snubbers (di/dt limiting inductors) are used to reduce the turn-on switching loss or prevent forward-biased second breakdown. In both cases energy is stored in the reactive element of the snubber and is dissipated during its discharge. If the circuit includes a transformer, a voltage clamp across the transistor may be needed to absorb the energy trapped in the leakage inductance. The action of these typical snubber and clamp arrangements is analyzed and applied to optimize the design of a flyback converter used as a battery charger.", "title": "" }, { "docid": "fee50f8ab87f2b97b83ca4ef92f57410", "text": "Ontologies now play an important role for many knowledge-intensive applications for which they provide a source of precisely defined terms. However, with their wide-spread usage there come problems concerning their proliferation. Ontology engineers or users frequently have a core ontology that they use, e.g., for browsing or querying data, but they need to extend it with, adapt it to, or compare it with the large set of other ontologies. 
For the task of detecting and retrieving relevant ontologies, one needs means for measuring the similarity between ontologies. We present a set of ontology similarity measures and a multiple-phase empirical evaluation.", "title": "" } ]
scidocsrr
799d0d9f3135a816fa864421c1a62204
Towards Creation of a Corpus for Argumentation Mining the Biomedical Genetics Research Literature
[ { "docid": "5f7adc28fab008d93a968b6a1e5ad061", "text": "This paper describes recent approaches using text-mining to automatically profile and extract arguments from legal cases. We outline some of the background context and motivations. We then turn to consider issues related to the construction and composition of a corpora of legal cases. We show how a Context-Free Grammar can be used to extract arguments, and how ontologies and Natural Language Processing can identify complex information such as case factors and participant roles. Together the results bring us closer to automatic identification of legal arguments.", "title": "" } ]
[ { "docid": "85e4a8dc8f27c5b73d147a36cace80d4", "text": "REQUIRED) In this paper, we present a social/behavioral study of individual information security practices of internet users in Latin America, specifically presenting the case of Bolivia. The research model uses social cognitive theory in order to explain the individual cognitive factors that influence information security behavior. The model includes individuals’ beliefs about their abilities to competently use computer information security tools and information security awareness in the determination of effective information security practices. The operationalization of constructs that are part of our research model, such as information security practice as the dependent variable, self-efficacy and information security awareness as independent variables , are presented both in Spanish and English. In this study, we offer the analysis of a survey of 255 Internet users from Bolivia who replied to our survey and provided responses about their information security behavior. A discussion about information security awareness and practices is presented.", "title": "" }, { "docid": "fdfea6d3a5160c591863351395929a99", "text": "Deep networks have recently enjoyed enormous success when applied to recognition and classification problems in computer vision [22, 33], but their use in graphics problems has been limited ([23, 7] are notable recent exceptions). In this work, we present a novel deep architecture that performs new view synthesis directly from pixels, trained from a large number of posed image sets. In contrast to traditional approaches, which consist of multiple complex stages of processing, each of which requires careful tuning and can fail in unexpected ways, our system is trained end-to-end. The pixels from neighboring views of a scene are presented to the network, which then directly produces the pixels of the unseen view. The benefits of our approach include generality (we only require posed image sets and can easily apply our method to different domains), and high quality results on traditionally difficult scenes. We believe this is due to the end-to-end nature of our system, which is able to plausibly generate pixels according to color, depth, and texture priors learnt automatically from the training data. We show view interpolation results on imagery from the KITTI dataset [12], from data from [1] as well as on Google Street View images. To our knowledge, our work is the first to apply deep learning to the problem of new view synthesis from sets of real-world, natural imagery.", "title": "" }, { "docid": "f2707d7fcd5d8d9200d4cc8de8ff1042", "text": "This paper describes recent work on the “Crosswatch” project, which is a computer vision-based smartphone system developed for providing guidance to blind and visually impaired travelers at traffic intersections. A key function of Crosswatch is self-localization - the estimation of the user's location relative to the crosswalks in the current traffic intersection. Such information may be vital to users with low or no vision to ensure that they know which crosswalk they are about to enter, and are properly aligned and positioned relative to the crosswalk. 
However, while computer vision-based methods have been used for finding crosswalks and helping blind travelers align themselves to them, these methods assume that the entire crosswalk pattern can be imaged in a single frame of video, which poses a significant challenge for a user who lacks enough vision to know where to point the camera so as to properly frame the crosswalk. In this paper we describe work in progress that tackles the problem of crosswalk detection and self-localization, building on recent work describing techniques enabling blind and visually impaired users to acquire 360° image panoramas while turning in place on a sidewalk. The image panorama is converted to an aerial (overhead) view of the nearby intersection, centered on the location that the user is standing at, so as to facilitate matching with a template of the intersection obtained from Google Maps satellite imagery. The matching process allows crosswalk features to be detected and permits the estimation of the user's precise location relative to the crosswalk of interest. We demonstrate our approach on intersection imagery acquired by blind users, thereby establishing the feasibility of the approach.", "title": "" }, { "docid": "1de1631bb0da37f2c3ddd856fcdbb0f1", "text": "J.E. Dietrich (ed.), Female Puberty: A Comprehensive Guide for Clinicians, DOI 10.1007/978-1-4939-0912-4_2, © Springer Science+Business Media New York 2014 Abstract The development of a female child into an adult woman is a complex process. Puberty, and the hormones that fuel the physical and psychological changes which are its hallmarks, is generally viewed as a rough and often unpredictable storm that must be weathered by the surrounding adults. The more we learn, however, about the intricate interplay between the endocrine regulators and the endorgan responses to this hormonal symphony, puberty seems less like chaos, and more of an incredible metamorphosis that leads to reproductive capacity and psychosocial maturation. Physically, female puberty is marked by accelerated growth and the development of secondary sexual characteristics. Secondary sexual characteristics are those that distinguish two different sexes in a species, but are not directly part of the reproductive system. Analogies from the animal kingdom include manes in male lions and the elaborate tails of male peacocks. The visible/external sequence of events is generally: breast budding (thelarche), onset of pubic hair (pubarche), maximal growth velocity, menarche, development of axillary hair, attainment of the adult breast type, adult pubic hair pattern. Underlying these external developments is the endocrine axis orchestrating the increase in gonadal steroid production (gonadarche), the increase in adrenal androgen production (adrenarche) and the associated changes in the reproductive tract that allow fertility. Meanwhile, the brain is rapidly adapting to the new hormonal milieu. The extent of variation in this scenario is enormous. On average, the process from accelerated growth and breast budding to menarche is approximately 4.5 years with a range from 1.5 to 6 years. There are differences in timing and expression of maturation based on ethnicity, geography, and genetics. Being familiar with the spectrum that encompasses normal development is Chapter 2 Normal Pubertal Physiology in Females", "title": "" }, { "docid": "3564941b9e2bcbd43a464bd8a2385311", "text": "Adult patients seeking orthodontic treatment are increasingly motivated by esthetic considerations. 
The majority of these patients reject wearing labial fixed appliances and are looking instead to more esthetic treatment options, including lingual orthodontics and Invisalign appliances. Since Align Technology introduced the Invisalign appliance in 1999 in an extensive public campaign, the appliance has gained tremendous attention from adult patients and dental professionals. The transparency of the Invisalign appliance enhances its esthetic appeal for those adult patients who are averse to wearing conventional labial fixed orthodontic appliances. Although guidelines about the types of malocclusions that this technique can treat exist, few clinical studies have assessed the effectiveness of the appliance. A few recent studies have outlined some of the limitations associated with this technique that clinicians should recognize early before choosing treatment options.", "title": "" }, { "docid": "3b903b284e6a7bfb54113242b1143ddc", "text": "Hypertension — the chronic elevation of blood pressure — is a major human health problem. In most cases, the root cause of the disease remains unknown, but there is mounting evidence that many forms of hypertension are initiated and maintained by an elevated sympathetic tone. This review examines how the sympathetic tone to cardiovascular organs is generated, and discusses how elevated sympathetic tone can contribute to hypertension.", "title": "" }, { "docid": "92ae99edf23f41ffcf2f1b091132ac3c", "text": "Restricted Boltzmann machines (RBMs) are powerful machine learning models, but learning and some kinds of inference in the model require sampling-based approximations, which, in classical digital computers, are implemented using expensive MCMC. Physical computation offers the opportunity to reduce the cost of sampling by building physical systems whose natural dynamics correspond to drawing samples from the desired RBM distribution. Such a system avoids the burn-in and mixing cost of a Markov chain. However, hardware implementations of this variety usually entail limitations such as low-precision and limited range of the parameters and restrictions on the size and topology of the RBM. We conduct software simulations to determine how harmful each of these restrictions is. Our simulations are based on the D-Wave Two computer, but the issues we investigate arise in most forms of physical computation. Our findings suggest that designers of new physical computing hardware and algorithms for physical computers should focus their efforts on overcoming the limitations imposed by the topology restrictions of currently existing physical computers.", "title": "" }, { "docid": "76cef1b6d0703127c3ae33bcf71cdef8", "text": "Risks have a significant impact on a construction project’s performance in terms of cost, time and quality. As the size and complexity of the projects have increased, an ability to manage risks throughout the construction process has become a central element preventing unwanted consequences. How risks are shared between the project actors is to a large extent governed by the procurement option and the content of the related contract documents. Therefore, selecting an appropriate project procurement option is a key issue for project actors. The overall aim of this research is to increase the understanding of risk management in the different procurement options: design-bid-build contracts, designbuild contracts and collaborative form of partnering. 
Deeper understanding is expected to contribute to a more effective risk management and, therefore, a better project output and better value for both clients and contractors. The study involves nine construction projects recently performed in Sweden and comprises a questionnaire survey and a series of interviews with clients, contractors and consultants involved in these construction projects. The findings of this work show a lack of an iterative approach to risk management, which is a weakness in current procurement practices. This aspect must be addressed if the risk management process is to serve projects and, thus, their clients. The absence of systematic risk management is especially noted in the programme phase, where it arguably has the greatest potential impact. The production phase is where most interest and activity are to be found. As a matter of practice, the communication of risks between the actors simply does not work to the extent that it must if projects are to be delivered with certainty, irrespective of the form of procurement. A clear connection between the procurement option and risk management in construction projects has been found. Traditional design-bid-build contracts do not create opportunities for open discussion of project risks and joint risk management. A number of drivers of and obstacles to effective risk management have been explored in the study. Every actor’s involvement in dialogue, effective communication and information exchange, open attitudes and trustful relationship are the factors that support open discussion of project risks and, therefore, contribute to successful risk management. Based on the findings, a number of recommendations facilitating more effective risk management have been developed for the industry practitioners. Keywords--Risk Management, Risk Allocation, Construction Project, Construction Contract, Design-BidBuild, Design-Build, Partnering", "title": "" }, { "docid": "fb09d91b8e572cc9d0179f14bdd74b53", "text": "Being grateful has been associated with many positive outcomes, including greater happiness, positive affect, optimism, and self-esteem. There is limited research, however, on the associations between gratitude and different domains of life satisfaction across cultures. The current study examined the associations between gratitude and three domains of life satisfaction, including satisfaction in relationships, work, and health, and overall life satisfaction, in the United States and Japan. A total of 945 participants were drawn from two samples of middle aged and older adults, the Midlife Development in the United States and the Midlife Development in Japan. There were significant positive bivariate associations between gratitude and all four measures of life satisfaction. In addition, after adjusting for demographics, neuroticism, extraversion, and the other measures of satisfaction, gratitude was uniquely and positively associated with satisfaction with relationships and life overall but not with satisfaction with work or health. Furthermore, results indicated that women and individuals who were more extraverted and lived in the United States were more grateful and individuals with less than a high school degree were less grateful. The findings from this study suggest that gratitude is uniquely associated with specific domains of life satisfaction. 
Results are discussed with respect to future research and the design and implementation of gratitude interventions, particularly when including individuals from different cultures.", "title": "" }, { "docid": "6f370d729b8e8172b218071af89af7ad", "text": "In this article, we present an image-based modeling and rendering system, which we call pop-up light field, that models a sparse light field using a set of coherent layers. In our system, the user specifies how many coherent layers should be modeled or popped up according to the scene complexity. A coherent layer is defined as a collection of corresponding planar regions in the light field images. A coherent layer can be rendered free of aliasing all by itself, or against other background layers. To construct coherent layers, we introduce a Bayesian approach, coherence matting, to estimate alpha matting around segmented layer boundaries by incorporating a coherence prior in order to maintain coherence across images.We have developed an intuitive and easy-to-use user interface (UI) to facilitate pop-up light field construction. The key to our UI is the concept of human-in-the-loop where the user specifies where aliasing occurs in the rendered image. The user input is reflected in the input light field images where pop-up layers can be modified. The user feedback is instant through a hardware-accelerated real-time pop-up light field renderer. Experimental results demonstrate that our system is capable of rendering anti-aliased novel views from a sparse light field.", "title": "" }, { "docid": "e4000835f1870399c4270492fb81694b", "text": "In this paper, a new design of mm-Wave phased array 5G antenna for multiple-input multiple-output (MIMO) applications has been introduced. Two identical linear phased arrays with eight leaf-shaped bow-tie antenna elements have been used at different sides of the mobile-phone PCB. An Arlon AR 350 dielectric with properties of h=0.5 mm, ε=3.5, and δ=0.0026 has been used as a substrate of the proposed design. The antenna is working in the frequency range of 25 to 40 GHz (more than 45% FBW) and can be easily fit into current handheld devices. The proposed MIMO antenna has good radiation performances at 28 and 38 GHz which both are powerful candidates to be the carrier frequency of the future 5G cellular networks.", "title": "" }, { "docid": "cbe3a584e8fcabbd42f732b5fe247736", "text": "Wall‐climbing welding robots (WCWRs) can replace workers in manufacturing and maintaining large unstructured equipment, such as ships. The adhesion mechanism is the key component of WCWRs. As it is directly related to the robot’s ability in relation to adsorbing, moving flexibly and obstacle‐passing. In this paper, a novel non‐contact adjustably magnetic adhesion mechanism is proposed. The magnet suckers are mounted under the robot’s axils and the sucker and wall are in non‐contact. In order to pass obstacles, the sucker and the wheel unit can be pulled up and pushed down by a lifting mechanism. The magnetic adhesion force can be adjusted by changing the height of the gap between the sucker and the wall by the lifting mechanism. In order to increase the adhesion force, the value of the sucker’s magnetic energy density (MED) is maximized by optimizing the magnet sucker’s structure parameters with a finite element method. 
Experiments prove that the magnetic adhesion mechanism has enough adhesion force and that the WCWR can complete wall‐climbing work within a large unstructured environment.", "title": "" }, { "docid": "c61877099eddc31a281fa82fd942072e", "text": "The trend of bring your own device (BYOD) has been rapidly adopted by organizations. Despite the pros and cons of BYOD adoption, this trend is expected to inevitably keep increasing. Yet, BYOD has raised significant concerns about information system security as employees use their personal devices to access organizational resources. This study aims to examine employees' intention to comply with an organization’s IS security policy in the context of BYOD. We derived our research model from reactance, protection motivation and organizational justice theories. The results of this study demonstrate that an employee’s perceived response efficacy and perceived justice positively affect an employee’s intention to comply with BYOD security policy. Perceived security threat appraisal was found to marginally promote the intention to comply. Conversely, perceived freedom threat due to imposed security policy negatively affects an employee’s intention to comply with the security policy. We also found that an employee’s perceived cost associated with compliance behavior positively affects an employee’s perceptions of threat to an individual freedom. An interesting double-edged sword effect of a security awareness program was confirmed by the results. BYOD security awareness program increases an employee’s response efficacy (a positive effect) and response cost (a negative effect). The study also demonstrates the importance of having an IT support team for BYOD, as it increases an employee’s response-efficacy and perceived justice.", "title": "" }, { "docid": "96c1da4e4b52014e4a9c5df098938c98", "text": "Deep learning models have lately shown great performance in various fields such as computer vision, speech recognition, speech translation, and natural language processing. However, alongside their state-of-the-art performance, it is still generally unclear what is the source of their generalization ability. Thus, an important question is what makes deep neural networks able to generalize well from the training set to new data. In this article, we provide an overview of the existing theory and bounds for the characterization of the generalization error of deep neural networks, combining both classical and more recent theoretical and empirical results.", "title": "" }, { "docid": "faca51b6762e4d7c3306208ad800abd3", "text": "Two images of a single scene/object are related by the epipolar geometry, which can be described by a 3×3 singular matrix called the essential matrix if images' internal parameters are known, or the fundamental matrix otherwise. It captures all geometric information contained in two images, and its determination is very important in many applications such as scene modeling and vehicle navigation. This paper gives an introduction to the epipolar geometry, and provides a complete review of the current techniques for estimating the fundamental matrix and its uncertainty. A well-founded measure is proposed to compare these techniques. Projective reconstruction is also reviewed. The software which we have developed for this review is available on the Internet.", "title": "" }, { "docid": "0e6fd08318cf94ea683892d737ae645a", "text": "We present simulations and demonstrate experimentally a new concept in winding a planar induction heater. 
The winding results in minimal ac magnetic field below the plane of the heater, while concentrating the flux above. Ferrites and other types of magnetic shielding are typically not required. The concept of a one-sided ac field can generalized to other geometries as well.", "title": "" }, { "docid": "6893ce06d616d08cf0a9053dc9ea493d", "text": "Hope is the sum of goal thoughts as tapped by pathways and agency. Pathways reflect the perceived capability to produce goal routes; agency reflects the perception that one can initiate action along these pathways. Using trait and state hope scales, studies explored hope in college student athletes. In Study 1, male and female athletes were higher in trait hope than nonathletes; moreover, hope significantly predicted semester grade averages beyond cumulative grade point average and overall self-worth. In Study 2, with female cross-country athletes, trait hope predicted athletic outcomes; further, weekly state hope tended to predict athletic outcomes beyond dispositional hope, training, and self-esteem, confidence, and mood. In Study 3, with female track athletes, dispositional hope significantly predicted athletic outcomes beyond variance related to athletic abilities and affectivity; moreover, athletes had higher hope than nonathletes.", "title": "" }, { "docid": "36d79b2b2640d1b2ac7f8ef057abc75c", "text": "Published scientific articles are linked together into a graph, the citation graph, through their citations. This paper explores the notion of similarity based on connectivity alone, and proposes several algorithms to quantify it. Our metrics take advantage of the local neighborhoods of the nodes in the citation graph. Two variants of link-based similarity estimation between two nodes are described, one based on the separate local neighborhoods of the nodes, and another based on the joint local neighborhood expanded from both nodes at the same time. The algorithms are implemented and evaluated on a subgraph of the citation graph of computer science in a retrieval context. The results are compared with text-based similarity, and demonstrate the complementarity of link-based and text-based retrieval.", "title": "" }, { "docid": "e82681b5140f3a9b283bbd02870f18d5", "text": "Employee turnover has been identified as a key issue for organizations because of its adverse impact on work place productivity and long term growth strategies. To solve this problem, organizations use machine learning techniques to predict employee turnover. Accurate predictions enable organizations to take action for retention or succession planning of employees. However, the data for this modeling problem comes from HR Information Systems (HRIS); these are typically under-funded compared to the Information Systems of other domains in the organization which are directly related to its priorities. This leads to the prevalence of noise in the data that renders predictive models prone to over-fitting and hence inaccurate. This is the key challenge that is the focus of this paper, and one that has not been addressed historically. The novel contribution of this paper is to explore the application of Extreme Gradient Boosting (XGBoost) technique which is more robust because of its regularization formulation. Data from the HRIS of a global retailer is used to compare XGBoost against six historically used supervised classifiers and demonstrate its significantly higher accuracy for predicting employee turnover. 
Keywords—turnover prediction; machine learning; extreme gradient boosting; supervised classification; regularization", "title": "" }, { "docid": "4d99090b874776b89092f63f21c8ea93", "text": "Object viewpoint classification aims at predicting an approximate 3D pose of objects in a scene and is receiving increasing attention. State-of-the-art approaches to viewpoint classification use generative models to capture relations between object parts. In this work we propose to use a mixture of holistic templates (e.g. HOG) and discriminative learning for joint viewpoint classification and category detection. Inspired by the work of Felzenszwalb et al 2009, we discriminatively train multiple components simultaneously for each object category. A large number of components are learned in the mixture and they are associated with canonical viewpoints of the object through different levels of supervision, being fully supervised, semi-supervised, or unsupervised. We show that discriminative learning is capable of producing mixture components that directly provide robust viewpoint classification, significantly outperforming the state of the art: we improve the viewpoint accuracy on the Savarese et al 3D Object database from 57% to 74%, and that on the VOC 2006 car database from 73% to 86%. In addition, the mixture-of-templates approach to object viewpoint/pose has a natural extension to the continuous case by discriminatively learning a linear appearance model locally at each discrete view. We evaluate continuous viewpoint estimation on a dataset of everyday objects collected using IMUs for groundtruth annotation: our mixture model shows great promise comparing to a number of baselines including discrete nearest neighbor and linear regression.", "title": "" } ]
scidocsrr
a857e42a4a0e2239a01c6dbf6af91f14
Multi-task, Multi-Kernel Learning for Estimating Individual Wellbeing
[ { "docid": "c8b1a0d5956ced6deaefe603efc523ba", "text": "What can wearable sensors and usage of smart phones tell us about academic performance, self-reported sleep quality, stress and mental health condition? To answer this question, we collected extensive subjective and objective data using mobile phones, surveys, and wearable sensors worn day and night from 66 participants, for 30 days each, totaling 1,980 days of data. We analyzed daily and monthly behavioral and physiological patterns and identified factors that affect academic performance (GPA), Pittsburg Sleep Quality Index (PSQI) score, perceived stress scale (PSS), and mental health composite score (MCS) from SF-12, using these month-long data. We also examined how accurately the collected data classified the participants into groups of high/low GPA, good/poor sleep quality, high/low self-reported stress, high/low MCS using feature selection and machine learning techniques. We found associations among PSQI, PSS, MCS, and GPA and personality types. Classification accuracies using the objective data from wearable sensors and mobile phones ranged from 67-92%.", "title": "" }, { "docid": "3bb905351ce1ea2150f37059ed256a90", "text": "A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much expensive data-labeling efforts. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research.", "title": "" } ]
[ { "docid": "c77d76834c3aa8ace82cb15b6f882365", "text": "A multidatabase system provides integrated access to heterogeneous, autonomous local databases in a distributed system. An important problem in current multidatabase systems is identification of semantically similar data in different local databases. The Summary Schemas Model (SSM) is proposed as an extension to multidatabase systems to aid in semantic identification. The SSM uses a global data structure to abstract the information available in a multidatabase system. This abstracted form allows users to use their own terms (imprecise queries) when accessing data rather than being forced to use system-specified terms. The system uses the global data structure to match the user's terms to the semantically closest available system terms. A simulation of the SSM is presented to compare imprecise-query processing with corresponding query-processing costs in a standard multidatabase system. The costs and benefits of the SSM are discussed, and future research directions are presented.", "title": "" }, { "docid": "7021db9b0e77b2df2576f0cc5eda8d7d", "text": "Provides an abstract of the tutorial presentation and a brief professional biography of the presenter. The complete presentation was not made available for publication as part of the conference proceedings.", "title": "" }, { "docid": "8ad57ca3fa0063033fae25e4bad0a90e", "text": "The neural network, using an unsupervised generalized Hebbian algorithm (GHA), is adopted to find the principal eigenvectors of a covariance matrix in different kinds of seismograms. We have shown that the extensive computer results of the principal components analysis (PCA) using the neural net of GHA can extract the information of seismic reflection layers and uniform neighboring traces. The analyzed seismic data are the seismic traces with 20-, 25-, and 30-Hz Ricker wavelets, the fault, the reflection and diffraction patterns after normal moveout (NMO) correction, the bright spot pattern, and the real seismogram at Mississippi Canyon. The properties of high amplitude, low frequency, and polarity reversal can be shown from the projections on the principal eigenvectors. For PCA, a theorem is proposed, which states that adding an extra point along the direction of the existing eigenvector can enhance that eigenvector. The theorem is applied to the interpretation of a fault seismogram and the uniform property of other seismograms. The PCA also provides a significant seismic data compression.", "title": "" }, { "docid": "e8f3dd4d2758da22d54114ec021b56dd", "text": "Social networks allow rapid spread of ideas and innovations while the negative information can also propagate widely. When the cascades with different opinions reaching the same user, the cascade arriving first is the most likely to be taken by the user. Therefore, once misinformation or rumor is detected, a natural containment method is to introduce a positive cascade competing against the rumor. Given a budget k, the rumor blocking problem asks for k seed users to trigger the spread of the positive cascade such that the number of the users who are not influenced by rumor can be maximized. The prior works have shown that the rumor blocking problem can be approximated within a factor of (1 − 1/e− δ) by a classic greedy algorithm combined with Monte Carlo simulation with the running time of O(k3 mn ln n/δ2), where n and m are the number of users and edges, respectively. 
Unfortunately, the Monte-Carlo-simulation-based methods are extremely time consuming and the existing algorithms either trade performance guarantees for practical efficiency or vice versa. In this paper, we present a randomized algorithm which runs in O(km ln n/δ2) expected time and provides a (1 − 1/e − δ)-approximation with a high probability. The experimentally results on both the real-world and synthetic social networks have shown that the proposed randomized rumor blocking algorithm is much more efficient than the state-of-the-art method and it is able to find the seed nodes which are effective in limiting the spread of rumor.", "title": "" }, { "docid": "6b6f82399472a6f019c506a549f5ffe6", "text": "T. Ribot's (1881) law of retrograde amnesia states that brain damage impairs recently formed memories to a greater extent than older memories, which is generally taken to imply that memories need time to consolidate. A. Jost's (1897) law of forgetting states that if 2 memories are of the same strength but different ages, the older will decay more slowly than the younger. The main theoretical implication of this venerable law has never been worked out, but it may be the same as that implied by Ribot's law. A consolidation interpretation of Jost's law implies an interference theory of forgetting that is altogether different from the cue-overload view that has dominated thinking in the field of psychology for decades.", "title": "" }, { "docid": "3ccc5fd5bbf570a361b40afca37cec92", "text": "Face detection techniques have been developed for decades, and one of remaining open challenges is detecting small faces in unconstrained conditions. The reason is that tiny faces are often lacking detailed information and blurring. In this paper, we proposed an algorithm to directly generate a clear high-resolution face from a blurry small one by adopting a generative adversarial network (GAN). Toward this end, the basic GAN formulation achieves it by super-resolving and refining sequentially (e.g. SR-GAN and cycle-GAN). However, we design a novel network to address the problem of super-resolving and refining jointly. We also introduce new training losses to guide the generator network to recover fine details and to promote the discriminator network to distinguish real vs. fake and face vs. non-face simultaneously. Extensive experiments on the challenging dataset WIDER FACE demonstrate the effectiveness of our proposed method in restoring a clear high-resolution face from a blurry small one, and show that the detection performance outperforms other state-of-the-art methods.", "title": "" }, { "docid": "d311bfc22c30e860c529b2aeb16b6d40", "text": "We study the emergence of communication in multiagent adversarial settings inspired by the classic Imitation game. A class of three player games is used to explore how agents based on sequence to sequence (Seq2Seq) models can learn to communicate information in adversarial settings. We propose a modeling approach, an initial set of experiments and use signaling theory to support our analysis. In addition, we describe how we operationalize the learning process of actor-critic Seq2Seq based agents in these communicational games.", "title": "" }, { "docid": "8d7cb4e8fd243f3cd091c1866a18fc5c", "text": "We develop graphene-based devices fabricated by alternating current dielectrophoresis (ac-DEP) for highly sensitive nitric oxide (NO) gas detection. 
The novel device comprises the sensitive channels of palladium-decorated reduced graphene oxide (Pd-RGO) and the electrodes covered with chemical vapor deposition (CVD)-grown graphene. The highly sensitive, recoverable, and reliable detection of NO gas ranging from 2 to 420 ppb with response time of several hundred seconds has been achieved at room temperature. The facile and scalable route for high performance suggests a promising application of graphene devices toward the human exhaled NO and environmental pollutant detections.", "title": "" }, { "docid": "99381ce7535bb8e654b276c0a4e06432", "text": "Steganography, coming from the Greek words stegos, meaning roof or covered and graphia which means writing, is the art and science of hiding the fact that communication is taking place. Using steganography, you can embed a secret message inside a piece of unsuspicious information and send it without anyone knowing of the existence of the secret message. Steganography and cryptography are closely related. Cryptography scrambles messages so they cannot be understood. Steganography on the other hand, will hide the message so there is no knowledge of the existence of the message in the first place. In some situations, sending an encrypted message will arouse suspicion while an ”invisible” message will not do so. Both sciences can be combined to produce better protection of the message. In this case, when the steganography fails and the message can be detected, it is still of no use as it is encrypted using cryptography techniques. Therefore, the principle defined once by Kerckhoffs for cryptography, also stands for steganography: the quality of a cryptographic system should only depend on a small part of information, namely the secret key. The same is valid for good steganographic systems: knowledge of the system that is used, should not give any information about the existence of hidden messages. Finding a message should only be possible with knowledge of the key that is required to uncover it.", "title": "" }, { "docid": "080f76412f283fb236c28678bf9dada8", "text": "We describe a new algorithm for robot localization, efficient both in terms of memory and processing time. It transforms a stream of laser range sensor data into a probabilistic calculation of the robot's position, using a bidirectional Long Short-Term Memory (LSTM) recurrent neural network (RNN) to learn the structure of the environment and to answer queries such as: in which room is the robot? To achieve this, the RNN builds an implicit map of the environment.", "title": "" }, { "docid": "d79688b7906c34e7b74a9e93ee3f639e", "text": "We explore different approaches to integrating a simple convolutional neural network (CNN) with the Lucene search engine in a multi-stage ranking architecture. Our models are trained using the PyTorch deep learning toolkit, which is implemented in C/C++ with a Python frontend. One obvious integration strategy is to expose the neural network directly as a service. For this, we use Apache Thrift, a software framework for building scalable cross-language services. In exploring alternative architectures, we observe that once trained, the feedforward evaluation of neural networks is quite straightforward. Therefore, we can extract the parameters of a trained CNN from PyTorch and import the model into Java, taking advantage of the Java Deeplearning4J library for feedforward evaluation. This has the advantage that the entire end-to-end system can be implemented in Java. 
As a third approach, we can extract the neural network from PyTorch and “compile” it into a C++ program that exposes a Thrift service. We evaluate these alternatives in terms of performance (latency and throughput) as well as ease of integration. Experiments show that feedforward evaluation of the convolutional neural network is significantly slower in Java, while the performance of the compiled C++ network does not consistently beat the PyTorch implementation.", "title": "" }, { "docid": "fb0fa5f3b6d2391495eb1a6a7c63b0fc", "text": "The demographic change towards an ageing population is introducing significant impact and drastic challenge to our society. We therefore need to find ways to assist older people to stay independently and prevent social isolation of these population. Information and Communication Technologies (ICT) can provide various solutions to help older adults to improve their quality of life, stay healthier, and live independently for longer time. The term of Ambient Assist Living (AAL) becomes a field to investigate innovative technologies to provide assistance as well as healthcare and rehabilitation to senior people with impairment. The paper provides a review of research background and technologies of AAL.", "title": "" }, { "docid": "472605bc322f1fd2c90ad50baf19fffb", "text": "Wireless sensor networks (WSNs) use the unlicensed industrial, scientific, and medical (ISM) band for transmissions. However, with the increasing usage and demand of these networks, the currently available ISM band does not suffice for their transmissions. This spectrum insufficiency problem has been overcome by incorporating the opportunistic spectrum access capability of cognitive radio (CR) into the existing WSN, thus giving birth to CR sensor networks (CRSNs). The sensor nodes in CRSNs depend on power sources that have limited power supply capabilities. Therefore, advanced and intelligent radio resource allocation schemes are very essential to perform dynamic and efficient spectrum allocation among sensor nodes and to optimize the energy consumption of each individual node in the network. Radio resource allocation schemes aim to ensure QoS guarantee, maximize the network lifetime, reduce the internode and internetwork interferences, etc. In this paper, we present a survey of the recent advances in radio resource allocation in CRSNs. Radio resource allocation schemes in CRSNs are classified into three major categories, i.e., centralized, cluster-based, and distributed. The schemes are further divided into several classes on the basis of performance optimization criteria that include energy efficiency, throughput maximization, QoS assurance, interference avoidance, fairness and priority consideration, and hand-off reduction. An insight into the related issues and challenges is provided, and future research directions are clearly identified.", "title": "" }, { "docid": "6e67329e4f678ae9dc04395ae0a5b832", "text": "This review covers recent developments in the social influence literature, focusing primarily on compliance and conformity research published between 1997 and 2002. The principles and processes underlying a target's susceptibility to outside influences are considered in light of three goals fundamental to rewarding human functioning. Specifically, targets are motivated to form accurate perceptions of reality and react accordingly, to develop and preserve meaningful social relationships, and to maintain a favorable self-concept. 
Consistent with the current movement in compliance and conformity research, this review emphasizes the ways in which these goals interact with external forces to engender social influence processes that are subtle, indirect, and outside of awareness.", "title": "" }, { "docid": "63cc929e358746526b157ded5ff4b2c8", "text": "This paper asks how internet use, citizen satisfaction with e-government and citizen trust in government are interrelated. Prior research has found that agencies stress information and service provision on the Web (oneway e-government strategy), but have generally ignore applications that would enhance citizen-government interaction (two-way e-government strategy). Based on a review of the literature, we develop hypotheses about how two facets of e-democracy – transparency and interactivity – may affect citizen trust in government. Using data obtained from the Council on Excellence in Government, we apply a two stage multiple equation model. Findings indicate that internet use is positively associated with transparency satisfaction but negatively associated with interactivity satisfaction, and that both interactivity and transparency are positively associated with citizen trust in government. We conclude that the one-way e-transparency strategy may be insufficient, and that in the future agencies should make and effort to enhance e-interactivity.", "title": "" }, { "docid": "a1c859b44c46ebf4d2d413f4303cb4f7", "text": "We study the parsing complexity of Combinatory Categorial Grammar (CCG) in the formalism of Vijay-Shanker and Weir (1994). As our main result, we prove that any parsing algorithm for this formalism will take in the worst case exponential time when the size of the grammar, and not only the length of the input sentence, is included in the analysis. This sets the formalism of Vijay-Shanker andWeir (1994) apart from weakly equivalent formalisms such as Tree-Adjoining Grammar (TAG), for which parsing can be performed in time polynomial in the combined size of grammar and input sentence. Our results contribute to a refined understanding of the class of mildly context-sensitive grammars, and inform the search for new, mildly context-sensitive versions of CCG.", "title": "" }, { "docid": "8fb10190ba586026ff5235432c438c47", "text": "This paper presents the various crop yield prediction methods using data mining techniques. Agricultural system is very complex since it deals with large data situation which comes from a number of factors. Crop yield prediction has been a topic of interest for producers, consultants, and agricultural related organizations. In this paper our focus is on the applications of data mining techniques in agricultural field. Different Data Mining techniques such as K-Means, K-Nearest Neighbor(KNN), Artificial Neural Networks(ANN) and Support Vector Machines(SVM) for very recent applications of data mining techniques in agriculture field. Data mining technology has received a great progress with the rapid development of computer science, artificial intelligence. Data Mining is an emerging research field in agriculture crop yield analysis. Data Mining is the process of identifying the hidden patterns from large amount of data. Yield prediction is a very important agricultural problem that remains to be solved based on the available data. 
The problem of yield prediction can be solved by employing data mining techniques.", "title": "" }, { "docid": "69de2f8098a0618c75baeb259cb94ca1", "text": "Medicine may stand at the cusp of a mobile transformation. Mobile health, or “mHealth,” is the use of portable devices such as smartphones and tablets for medical purposes, including diagnosis, treatment, or support of general health and well-being. Users can interface with mobile devices through software applications (“apps”) that typically gather input from interactive questionnaires, separate medical devices connected to the mobile device, or functionalities of the device itself, such as its camera, motion sensor, or microphone. Apps may even process these data with the use of medical algorithms or calculators to generate customized diagnoses and treatment recommendations. Mobile devices make it possible to collect more granular patient data than can be collected from devices that are typically used in hospitals or physicians’ offices. The experiences of a single patient can then be measured against large data sets to provide timely recommendations about managing both acute symptoms and chronic conditions.1,2 To give but a few examples: One app allows users who have diabetes to plug glucometers into their iPhones as it tracks insulin doses and sends alerts for abnormally high or low blood sugar levels.3,4 Another app allows patients to use their smartphones to record electrocardiograms,5 using a single lead that snaps to the back of the phone. Users can hold the phone against their chests, record cardiac events, and transmit results to their cardiologists.6 An imaging app allows users to analyze diagnostic images in multiple modalities, including positronemission tomography, computed tomography, magnetic resonance imaging, and ultrasonography.7 An even greater number of mHealth products perform health-management functions, such as medication reminders and symptom checkers, or administrative functions, such as patient scheduling and billing. The volume and variety of mHealth products are already immense and defy any strict taxonomy. More than 97,000 mHealth apps were available as of March 2013, according to one estimate.8 The number of mHealth apps, downloads, and users almost doubles every year.9 Some observers predict that by 2018 there could be 1.7 billion mHealth users worldwide.8 Thus, mHealth technologies could have a profound effect on patient care. However, mHealth has also become a challenge for the Food and Drug Administration (FDA), the regulator responsible for ensuring that medical devices are safe and effective. The FDA’s oversight of mHealth devices has been controversial to members of Congress and industry,10 who worry that “applying a complex regulatory framework could inhibit future growth and innovation in this promising market.”11 But such oversight has become increasingly important. A bewildering array of mHealth products can make it difficult for individual patients or physicians to evaluate their quality or utility. 
In recent years, a number of bills have been proposed in Congress to change FDA jurisdiction over mHealth products, and in April 2014, a key federal advisory committee laid out its recommendations for regulating mHealth and other health-information technologies.12 With momentum toward legislation building, this article focuses on the public health benefits and risks of mHealth devices under FDA jurisdiction and considers how to best use the FDA’s authority.", "title": "" }, { "docid": "bb8ca605a714d71be903d46bf6e1fa40", "text": "Several methods have been proposed for automatic and objective monitoring of food intake, but their performance suffers in the presence of speech and motion artifacts. This paper presents a novel sensor system and algorithms for detection and characterization of chewing bouts from a piezoelectric strain sensor placed on the temporalis muscle. The proposed data acquisition device was incorporated into the temple of eyeglasses. The system was tested by ten participants in two part experiments, one under controlled laboratory conditions and the other in unrestricted free-living. The proposed food intake recognition method first performed an energy-based segmentation to isolate candidate chewing segments (instead of using epochs of fixed duration commonly reported in research literature), with the subsequent classification of the segments by linear support vector machine models. On participant level (combining data from both laboratory and free-living experiments), with ten-fold leave-one-out cross-validation, chewing were recognized with average F-score of 96.28% and the resultant area under the curve was 0.97, which are higher than any of the previously reported results. A multivariate regression model was used to estimate chew counts from segments classified as chewing with an average mean absolute error of 3.83% on participant level. These results suggest that the proposed system is able to identify chewing segments in the presence of speech and motion artifacts, as well as automatically and accurately quantify chewing behavior, both under controlled laboratory conditions and unrestricted free-living.", "title": "" }, { "docid": "acafc9d077d48511ea351ded56527df9", "text": "The problem of testing programs without test oracles is well known. A commonly used approach is to use special values in testing but this is often insufficient to ensure program correctness. This paper demonstrates the use of metamorphic testing to uncover faults in programs, which could not be detected by special test values. Metamorphic testing can be used as a complementary test method to special value testing. In this paper, the sine function and a search function are used as examples to demonstrate the usefulness of metamorphic testing. This paper also examines metamorphic relationships and the extent of their usefulness in program testing.", "title": "" } ]
scidocsrr
0a013908ff4b03b4a5a3c690be904efe
Sensing and coverage for a network of heterogeneous robots
[ { "docid": "45d496fe8762fa52bbf6430eda2b7cfd", "text": "This paper presents deployment algorithms for multiple mobile robots with line-of-sight sensing and communication capabilities in a simple nonconvex polygonal environment. The objective of the proposed algorithms is to achieve full visibility of the environment. We solve the problem by constructing a novel data structure called the vertex-induced tree and designing schemes to deploy over the nodes of this tree by means of distributed algorithms. The agents are assumed to have access to a local memory and their operation is partially asynchronous", "title": "" } ]
[ { "docid": "f0285873e91d0470e8fbd8ce4430742f", "text": "Copying an element from a photo and pasting it into a painting is a challenging task. Applying photo compositing techniques in this context yields subpar results that look like a collage — and existing painterly stylization algorithms, which are global, perform poorly when applied locally. We address these issues with a dedicated algorithm that carefully determines the local statistics to be transferred. We ensure both spatial and inter-scale statistical consistency and demonstrate that both aspects are key to generating quality results. To cope with the diversity of abstraction levels and types of paintings, we introduce a technique to adjust the parameters of the transfer depending on the painting. We show that our algorithm produces significantly better results than photo compositing or global stylization techniques and that it enables creative painterly edits that would be otherwise difficult to achieve. CCS Concepts •Computing methodologies → Image processing;", "title": "" }, { "docid": "21ec8a3ea14829c0c21b4caaad08d508", "text": "OBJECTIVE\nWe investigated the effect of low-fat (2.5%) dahi containing probiotic Lactobacillus acidophilus and Lactobacillus casei on progression of high fructose-induced type 2 diabetes in rats.\n\n\nMETHODS\nDiabetes was induced in male albino Wistar rats by feeding 21% fructose in water. The body weight, food and water intakes, fasting blood glucose, glycosylated hemoglobin, oral glucose tolerance test, plasma insulin, liver glycogen content, and blood lipid profile were recorded. The oxidative status in terms of thiobarbituric acid-reactive substances and reduced glutathione contents in liver and pancreatic tissues were also measured.\n\n\nRESULTS\nValues for blood glucose, glycosylated hemoglobin, glucose intolerance, plasma insulin, liver glycogen, plasma total cholesterol, triacylglycerol, low-density lipoprotein cholesterol, very low-density lipoprotein cholesterol, and blood free fatty acids were increased significantly after 8 wk of high fructose feeding; however, the dahi-supplemented diet restricted the elevation of these parameters in comparison with the high fructose-fed control group. In contrast, high-density lipoprotein cholesterol decreased slightly and was retained in the dahi-fed group. The dahi-fed group also exhibited lower values of thiobarbituric acid-reactive substances and higher values of reduced glutathione in liver and pancreatic tissues compared with the high fructose-fed control group.\n\n\nCONCLUSION\nThe probiotic dahi-supplemented diet significantly delayed the onset of glucose intolerance, hyperglycemia, hyperinsulinemia, dyslipidemia, and oxidative stress in high fructose-induced diabetic rats, indicating a lower risk of diabetes and its complications.", "title": "" }, { "docid": "df02dafb455e2b68035cf8c150e28a0a", "text": "Blueberry, raspberry and strawberry may have evolved strategies for survival due to the different soil conditions available in their natural environment. Since this might be reflected in their response to rhizosphere pH and N form supplied, investigations were carried out in order to compare effects of nitrate and ammonium nutrition (the latter at two different pH regimes) on growth, CO2 gas exchange, and on the activity of key enzymes of the nitrogen metabolism of these plant species. Highbush blueberry (Vaccinium corymbosum L. cv. 13–16–A), raspberry (Rubus idaeus L. cv. Zeva II) and strawberry (Fragaria × ananassa Duch. cv. 
Senga Sengana) were grown in 10 L black polyethylene pots in quartz sand with and without 1% CaCO3 (w: v), respectively. Nutrient solutions supplied contained nitrate (6 mM) or ammonium (6 mM) as the sole nitrogen source. Compared with strawberries fed with nitrate nitrogen, supply of ammonium nitrogen caused a decrease in net photosynthesis and dry matter production when plants were grown in quartz sand without added CaCO3. In contrast, net photosynthesis and dry matter production increased in blueberries fed with ammonium nitrogen, while dry matter production of raspberries was not affected by the N form supplied. In quartz sand with CaCO3, ammonium nutrition caused less deleterious effects on strawberries, and net photosynthesis in raspberries increased as compared to plants grown in quartz sand without CaCO3 addition. Activity of nitrate reductase (NR) was low in blueberries and could only be detected in the roots of plants supplied with nitrate nitrogen. In contrast, NR activity was high in leaves, but low in roots of raspberry and strawberry plants. Ammonium nutrition caused a decrease in NR level in leaves. Activity of glutamine synthetase (GS) was high in leaves but lower in roots of blueberry, raspberry and strawberry plants. The GS level was not significantly affected by the nitrogen source supplied. The effects of nitrate or ammonium nitrogen on net photosynthesis, growth, and activity of enzymes in blueberry, raspberry and strawberry cultivars appear to reflect their different adaptability to soil pH and N form due to the conditions of their natural environment.", "title": "" }, { "docid": "0418d5ce9f15a91aeaacd65c683f529d", "text": "We propose a novel cancelable biometric approach, known as PalmHashing, to solve the non-revocable biometric proposed method hashes palmprint templates with a set of pseudo-random keys to obtain a unique code called palmhash. The palmhash code can be stored in portable devices such tokens and smartcards for verification. Multiple sets of palmha can be maintained in multiple applications. Thus the privacy and security of the applications can be greatly enhance compromised, revocation can also be achieved via direct replacement of a new set of palmhash code. In addition, PalmHashin offers several advantages over contemporary biometric approaches such as clear separation of the genuine-imposter and zero EER occurrences. In this paper, we outline the implementation details of this method and also highlight its p in security-critical applications.  2004 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "3e5312f6d3c02d8df2903ea80c1bbae5", "text": "Stroke has now become the leading cause of severe disability. Rehabilitation robots are gradually becoming popular for stroke rehabilitation to improve motor recovery, as robotic technology can assist, enhance, and further quantify rehabilitation training for stroke patients. However, most of the available rehabilitation robots are complex and involve multiple degrees-of-freedom (DOFs) causing it to be very expensive and huge in size. Rehabilitation robots should be useful but also need to be affordable and portable enabling more patients to afford and train independently at home. 
This paper presents a development of an affordable, portable and compact rehabilitation robot that implements different rehabilitation strategies for stroke patient to train forearm and wrist movement in an enhanced virtual reality environment with haptic feedback.", "title": "" }, { "docid": "03d1ffa6be8d26dc03a95fc89ea61943", "text": "Recent years have witnessed an increasing interest in image-based question-answering (QA) tasks. However, due to data limitations, there has been much less work on video-based QA. In this paper, we present TVQA, a largescale video QA dataset based on 6 popular TV shows. TVQA consists of 152,545 QA pairs from 21,793 clips, spanning over 460 hours of video. Questions are designed to be compositional in nature, requiring systems to jointly localize relevant moments within a clip, comprehend subtitle-based dialogue, and recognize relevant visual concepts. We provide analyses of this new dataset as well as several baselines and a multi-stream end-to-end trainable neural network framework for the TVQA task. The dataset is publicly available at http://tvqa.cs.unc.edu.", "title": "" }, { "docid": "298df39e9b415bc1eed95ed56d3f32df", "text": "In this work, we present a true 3D 128 Gb 2 bit/cell vertical-NAND (V-NAND) Flash product for the first time. The use of barrier-engineered materials and gate all-around structure in the 3D V-NAND cell exhibits advantages over 1 × nm planar NAND, such as small Vth shift due to small cell coupling and narrow natural Vth distribution. Also, a negative counter-pulse scheme realizes a tightly programmed cell distribution. In order to reduce the effect of a large WL coupling, a glitch-canceling discharge scheme and a pre-offset control scheme is implemented. Furthermore, an external high-voltage supply scheme along with the proper protection scheme for a high-voltage failure is used to achieve low power consumption. The chip accomplishes 50 MB/s write throughput with 3 K endurance for typical embedded applications. Also, extended endurance of 35 K is achieved with 36 MB/s of write throughput for data center and enterprise SSD applications.", "title": "" }, { "docid": "ed9b027bafedfa9305d11dca49ecc930", "text": "This paper announces and discusses the experimental results from the Noisy Iris Challenge Evaluation (NICE), an iris biometric evaluation initiative that received worldwide participation and whose main innovation is the use of heavily degraded data acquired in the visible wavelength and uncontrolled setups, with subjects moving and at widely varying distances. The NICE contest included two separate phases: 1) the NICE.I evaluated iris segmentation and noise detection techniques and 2) the NICE:II evaluated encoding and matching strategies for biometric signatures. Further, we give the performance values observed when fusing recognition methods at the score level, which was observed to outperform any isolated recognition strategy. These results provide an objective estimate of the potential of such recognition systems and should be regarded as reference values for further improvements of this technology, which-if successful-may significantly broaden the applicability of iris biometric systems to domains where the subjects cannot be expected to cooperate.", "title": "" }, { "docid": "6a94bd02742b43102c25f874ba309bc9", "text": "Reward models have become an important method for specifying performability models for many types of systems. 
Many methods have been proposed for solving reward models, but no method has proven itself to be applicable over all system classes and sizes. Furthermore, specification of reward models has usually been done at the state level, which can be extremely cumbersome for realistic models. We describe a method to specify reward models as stochastic activity networks (SANs) with impulse and rate rewards, and a method by which to solve these models via uniformization. The method is an extension of one proposed by de Souza e Silva and Gail in which impulse and rate rewards are specified at the SAN level, and solved in a single model. Furthermore, we propose a new technique for discarding paths in the uniformized process whose contribution to the reward variable is minimal, which greatly reduces the time and space required for a solution. A bound is calculated on the error introduced by this discarding, and its effectiveness is illustrated through the study of the performability and availability of a degradable multi-processor system.", "title": "" }, { "docid": "bc9fcd462ad5c0519731380a2729c0b6", "text": "We extend the reach of functional encryption schemes that are provably secure under simple assumptions against unbounded collusion to include function-hiding inner product schemes. Our scheme is a private key functional encryption scheme, where ciphertexts correspond to vectors ~x, secret keys correspond to vectors ~y, and a decryptor learns 〈~x, ~y〉. Our scheme employs asymmetric bilinear maps and relies only on the SXDH assumption to satisfy a natural indistinguishability-based security notion where arbitrarily many key and ciphertext vectors can be simultaneously changed as long as the key-ciphertext dot product relationships are all preserved.", "title": "" }, { "docid": "13bd6515467934ba7855f981fd4f1efd", "text": "The flourishing synergy arising between organized crimes and the Internet has increased the insecurity of the digital world. How hackers frame their actions? What factors encourage and energize their behavior? These are very important but highly underresearched questions. We draw upon literatures on psychology, economics, international relation and warfare to propose a framework that addresses these questions. We found that countries across the world differ in terms of regulative, normative and cognitive legitimacy to different types of web attacks. Cyber wars and crimes are also functions of the stocks of hacking skills relative to the availability of economic opportunities. An attacking unit's selection criteria for the target network include symbolic significance and criticalness, degree of digitization of values and weakness in defense mechanisms. Managerial and policy implications are discussed and directions for future research are suggested.", "title": "" }, { "docid": "26032527ca18ef5a8cdeff7988c6389c", "text": "This paper aims to develop a load forecasting method for short-term load forecasting, based on an adaptive two-stage hybrid network with self-organized map (SOM) and support vector machine (SVM). In the first stage, a SOM network is applied to cluster the input data set into several subsets in an unsupervised manner. Then, groups of 24 SVMs for the next day's load profile are used to fit the training data of each subset in the second stage in a supervised way. The proposed structure is robust with different data types and can deal well with the nonstationarity of load series. 
In particular, our method has the ability to adapt to different models automatically for the regular days and anomalous days at the same time. With the trained network, we can straightforwardly predict the next-day hourly electricity load. To confirm the effectiveness, the proposed model has been trained and tested on the data of the historical energy load from New York Independent System Operator.", "title": "" }, { "docid": "dae9d92671b2379837a9bcd16bb57098", "text": "Natural locomotion in room-scale virtual reality (VR) is constrained by the user's immediate physical space. To overcome this obstacle, researchers have established the use of the impossible space design mechanic. This game illustrates the applied use of impossible spaces for enhancing the aesthetics of, and presence within, a room-scale VR game. This is done by creating impossible spaces with a gaming narrative intent. First, locomotion and impossible spaces in VR are surveyed; second, a VR game called Ares is put forth as a prototype; and third, a user study is briefly explored.", "title": "" }, { "docid": "40a87654ac33c46f948204fd5c7ef4c1", "text": "We introduce a novel scheme to train binary convolutional neural networks (CNNs) – CNNs with weights and activations constrained to {-1,+1} at run-time. It has been known that using binary weights and activations drastically reduce memory size and accesses, and can replace arithmetic operations with more efficient bitwise operations, leading to much faster test-time inference and lower power consumption. However, previous works on binarizing CNNs usually result in severe prediction accuracy degradation. In this paper, we address this issue with two major innovations: (1) approximating full-precision weights with the linear combination of multiple binary weight bases; (2) employing multiple binary activations to alleviate information loss. The implementation of the resulting binary CNN, denoted as ABC-Net, is shown to achieve much closer performance to its full-precision counterpart, and even reach the comparable prediction accuracy on ImageNet and forest trail datasets, given adequate binary weight bases and activations.", "title": "" }, { "docid": "58873aa177cc69d13afa70c413af9efa", "text": "In vitro drug metabolism studies, which are inexpensive and readily carried out, serve as an adequate screening mechanism to characterize drug metabolites, elucidate their pathways, and make suggestions for further in vivo testing. This publication is a sequel to part I in a series and aims at providing a general framework to guide designs and protocols of the in vitro drug metabolism studies considered good practice in an efficient manner such that it would help researchers avoid common pitfalls and misleading results. The in vitro models include hepatic and non-hepatic microsomes, cDNA-expressed recombinant human CYPs expressed in insect cells or human B lymphoblastoid, chemical P450 inhibitors, S9 fraction, hepatocytes and liver slices. Important conditions for conducting the in vitro drug metabolism studies using these models are stated, including relevant concentrations of enzymes, co-factors, inhibitors and test drugs; time of incubation and sampling in order to establish kinetics of reactions; appropriate control settings, buffer selection and method validation. Separate in vitro data should be logically integrated to explain results from animal and human studies and to provide insights into the nature and consequences of in vivo drug metabolism. 
This article offers technical information and data and addresses scientific rationales and practical skills related to in vitro evaluation of drug metabolism to meet regulatory requirements for drug development.", "title": "" }, { "docid": "861c78c3886af55657cc21cb9dc8d8f7", "text": "According the universal serial cyclic redundancy check (CRC) technology, one of the new CRC algorithm based on matrix is referred, which describe an new parallel CRC coding circuit structure with r matrix transformation and pipeline technology. According to the method of parallel CRC coding in high-speed data transmitting, it requires a lot of artificial calculation. Due to the large amount of calculation, it is easy to produce some calculation error. According to the traditional thought of the serial CRC, the algorithm of parallel CRC based on the thought of matrix transformation and iterative has been deduced and expressed. The improved algorithm by pipeline technology has been applied in other systems which require high timing requirements of problem, The design has been implemented through Verilog hardware description language in FPGA device, which has achieved a good validation. It has become a very good method for high-speed CRC coding and decoding.", "title": "" }, { "docid": "70a293a975ec358f48c1b2fda1dfa3eb", "text": "This paper presents a novel approach for inducing lexical taxonomies automatically from text. We recast the learning problem as that of inferring a hierarchy from a graph whose nodes represent taxonomic terms and edges their degree of relatedness. Our model takes this graph representation as input and fits a taxonomy to it via combination of a maximum likelihood approach with a Monte Carlo Sampling algorithm. Essentially, the method works by sampling hierarchical structures with probability proportional to the likelihood with which they produce the input graph. We use our model to infer a taxonomy over 541 nouns and show that it outperforms popular flat and hierarchical clustering algorithms.", "title": "" }, { "docid": "98f76e0ea0f028a1423e1838bdebdccb", "text": "An operational-transconductance-amplifier (OTA) design for ultra-low voltage ultra-low power applications is proposed. The input stage of the proposed OTA utilizes a bulk-driven pseudo-differential pair to allow minimum supply voltage while achieving a rail-to-rail input range. All the transistors in the proposed OTA operate in the subthreshold region. Using a novel self-biasing technique to bias the OTA obviates the need for extra biasing circuitry and enhances the performance of the OTA. The proposed technique ensures the OTA robustness to process variations and increases design feasibility under ultra-low-voltage conditions. Moreover, the proposed biasing technique significantly improves the common-mode and power-supply rejection of the OTA. To further enhance the bandwidth and allow the use of smaller compensation capacitors, a compensation network based on a damping-factor control circuit is exploited. The OTA is fabricated in a 65 nm CMOS technology. Measurement results show that the OTA provides a low-frequency gain of 46 dB and rail-to-rail input common-mode range with a supply voltage as low as 0.5 V. The dc gain of the OTA is greater than 42 dB for supply voltage as low as 0.35 V. 
The power dissipation is 182 μW at VDD=0.5 V and 17 μW at VDD=0.35 V.", "title": "" }, { "docid": "bd18a2a92781344dc9821f98559a9c69", "text": "The increasing complexity of Database Management Systems (DBMSs) and the dearth of their experienced administrators make an urgent call for an Autonomic DBMS that is capable of managing and maintaining itself. In this paper, we examine the characteristics that a DBMS should have in order to be considered autonomic and assess the position of today’s commercial DBMSs such as DB2, SQL Server, and Oracle.", "title": "" }, { "docid": "1bdd050958754ef19dd35f53dd055b5a", "text": "We present a method for isotropic remeshing of arbitrary genus surfaces. The method is based on a mesh adaptation process, namely, a sequence of local modifications performed on a copy of the original mesh, while referring to the original mesh geometry. The algorithm has three stages. In the first stage the required number or vertices are generated by iterative simplification or refinement. The second stage performs an initial vertex partition using an area-based relaxation method. The third stage achieves precise isotropic vertex sampling prescribed by a given density function on the mesh. We use a modification of Lloyd’s relaxation method to construct a weighted centroidal Voronoi tessellation of the mesh. We apply these iterations locally on small patches of the mesh that are parameterized into the 2D plane. This allows us to handle arbitrary complex meshes with any genus and any number of boundaries. The efficiency and the accuracy of the remeshing process is achieved using a patch-wise parameterization technique. Key-words: Surface mesh generation, isotropic triangle meshing, centroidal Voronoi tessellation, local parameterization. ∗ Technion, Haifa, Israel † INRIA Sophia-Antipolis ‡ Technion, Haifa, Israel Remaillage isotrope de surfaces utilisant une paramétrisation locale Résumé : Cet article décrit une méthode de remaillage isotrope de surfaces triangulées. L’approche repose sur une technique d’adaptation locale du maillage. L’idée consiste à opérer une séquence d’opérations élémentaires sur une copie du maillage original, tout en faisant référence au maillage original pour la géométrie. L’algorithme comporte trois étapes. La première étape ramène la complexité du maillage au nombre de sommets désiré par raffinement ou décimation itérative. La seconde étape opère une première répartition des sommets via une technique de relaxation optimisant un équilibrage local des aires sur les triangles. La troisième étape opère un placement isotrope des sommets via une relaxation de Lloyd pour construire une tessellation de Voronoi centrée. Les itérations de relaxation de Lloyd sont appliquées localement dans un espace paramétrique 2D calculé à la volée sur un sous-ensemble de la triangulation originale de telle que sorte que les triangulations de complexité et de genre arbitraire puissent être efficacement remaillées. Mots-clés : Maillage de surfaces, maillage triangulaire isotrope, diagrammes de Voronoi centrés, paramétrisation locale. Isotropic Remeshing of Surfaces", "title": "" } ]
scidocsrr
390f817ebe88bff3be540c4282ffbc25
Automatic Facial Expression Recognition Using Gabor Filter and Expression Analysis
[ { "docid": "7ab87738e0dc081d26a8cf223b957833", "text": "We present a systematic comparison of machine learning methods applied to the problem of fully automatic recognition of facial expressions. We report results on a series of experiments comparing recognition engines, including AdaBoost, support vector machines, linear discriminant analysis. We also explored feature selection techniques, including the use of AdaBoost for feature selection prior to classification by SVM or LDA. Best results were obtained by selecting a subset of Gabor filters using AdaBoost followed by classification with support vector machines. The system operates in real-time, and obtained 93% correct generalization to novel subjects for a 7-way forced choice on the Cohn-Kanade expression dataset. The outputs of the classifiers change smoothly as a function of time and thus can be used to measure facial expression dynamics. We applied the system to fully automated recognition of facial actions (FACS). The present system classifies 17 action units, whether they occur singly or in combination with other actions, with a mean accuracy of 94.8%. We present preliminary results for applying this system to spontaneous facial expressions.", "title": "" } ]
[ { "docid": "8aa92d178ff383742c1f3cc12d2d8539", "text": "Hypertext documents, such as web pages and academic papers, are of great importance in delivering information in our daily life. Although being effective on plain documents, conventional text embedding methods suffer from information loss if directly adapted to hyper-documents. In this paper, we propose a general embedding approach for hyper-documents, namely, hyperdoc2vec, along with four criteria characterizing necessary information that hyper-document embedding models should preserve. Systematic comparisons are conducted between hyperdoc2vec and several competitors on two tasks, i.e., paper classification and citation recommendation, in the academic paper domain. Analyses and experiments both validate the superiority of hyperdoc2vec to other models w.r.t. the four criteria.", "title": "" }, { "docid": "9d7a441731e9d0c62dd452ccb3d19f7b", "text": " In many countries, especially in under developed and developing countries proper health care service is a major concern. The health centers are far and even the medical personnel are deficient when compared to the requirement of the people. For this reason, health services for people who are unhealthy and need health monitoring on regular basis is like impossible. This makes the health monitoring of healthy people left far more behind. In order for citizens not to be deprived of the primary care it is always desirable to implement some system to solve this issue. The application of Internet of Things (IoT) is wide and has been implemented in various areas like security, intelligent transport system, smart cities, smart factories and health. This paper focuses on the application of IoT in health care system and proposes a novel architecture of making use of an IoT concept under fog computing. The proposed architecture can be used to acknowledge the underlying problem of deficient clinic-centric health system and change it to smart patientcentric health system.", "title": "" }, { "docid": "1ef1e20f24fa75b40bcc88a40a544c5b", "text": "Monitoring is the act of collecting information concerning the characteristics and status of resources of interest. Monitoring grid resources is a lively research area given the challenges and manifold applications. The aim of this paper is to advance the understanding of grid monitoring by introducing the involved concepts, requirements, phases, and related standardisation activities, including Global Grid Forum’s Grid Monitoring Architecture. Based on a refinement of the latter, the paper proposes a taxonomy of grid monitoring systems, which is employed to classify a wide range of projects and frameworks. The value of the offered taxonomy lies in that it captures a given system’s scope, scalability, generality and flexibility. The paper concludes with, among others, a discussion of the considered systems, as well as directions for future research. © 2004 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "2936f8e1f9a6dcf2ba4fdbaee73684e2", "text": "Recently the world of the web has become more social and more real-time. Facebook and Twitter are perhaps the exemplars of a new generation of social, real-time web services and we believe these types of service provide a fertile ground for recommender systems research. In this paper we focus on one of the key features of the social web, namely the creation of relationships between users. 
Like recent research, we view this as an important recommendation problem -- for a given user, UT which other users might be recommended as followers/followees -- but unlike other researchers we attempt to harness the real-time web as the basis for profiling and recommendation. To this end we evaluate a range of different profiling and recommendation strategies, based on a large dataset of Twitter users and their tweets, to demonstrate the potential for effective and efficient followee recommendation.", "title": "" }, { "docid": "36a6c72e049ce551fcf302e19eb5063b", "text": "We propose a complete probabilistic discriminative framework for performing sentencelevel discourse analysis. Our framework comprises a discourse segmenter, based on a binary classifier, and a discourse parser, which applies an optimal CKY-like parsing algorithm to probabilities inferred from a Dynamic Conditional Random Field. We show on two corpora that our approach outperforms the state-of-the-art, often by a wide margin.", "title": "" }, { "docid": "4f1111b33789e25ed896ad366f0d98de", "text": "As an ubiquitous method in natural language processing, word embeddings are extensively employed to map semantic properties of words into a dense vector representation. They capture semantic and syntactic relations among words but the vector corresponding to the words are only meaningful relative to each other. Neither the vector nor its dimensions have any absolute, interpretable meaning. We introduce an additive modification to the objective function of the embedding learning algorithm that encourages the embedding vectors of words that are semantically related a predefined concept to take larger values along a specified dimension, while leaving the original semantic learning mechanism mostly unaffected. In other words, we align words that are already determined to be related, along predefined concepts. Therefore, we impart interpretability to the word embedding by assigning meaning to its vector dimensions. The predefined concepts are derived from an external lexical resource, which in this paper is chosen as Roget’s Thesaurus. We observe that alignment along the chosen concepts is not limited to words in the Thesaurus and extends to other related words as well. We quantify the extent of interpretability and assignment of meaning from our experimental results. We also demonstrate the preservation of semantic coherence of the resulting vector space by using word-analogy and word-similarity tests. These tests show that the interpretability-imparted word embeddings that are obtained by the proposed framework do not sacrifice performances in common benchmark tests.", "title": "" }, { "docid": "ff91ed2072c93eeae5f254fb3de0d780", "text": "Machine learning requires access to all the data used for training. Recently, Google Research proposed Federated Learning as an alternative, where the training data is distributed over a federation of clients that each only access their own training data; the partially trained model is updated in a distributed fashion to maintain a situation where the data from all participating clients remains unknown. In this research we construct different distributions of the DMOZ dataset over the clients in the network and compare the resulting performance of Federated Averaging when learning a classifier. 
We find that the difference in spread of topics for each client has a strong correlation with the performance of the Federated Averaging algorithm.", "title": "" }, { "docid": "d5fbbd249842b40f3a81f1229213c528", "text": "In recent years, spatial applications have become more and more important in both scientific research and industry. Spatial query processing is the fundamental functioning component to support spatial applications. However, the state-of-the-art techniques of spatial query processing are facing significant challenges as the data expand and user accesses increase. In this paper we propose and implement a novel scheme (named VegaGiStore) to provide efficient spatial query processing over big spatial data and numerous concurrent user queries. Firstly, a geography-aware approach is proposed to organize spatial data in terms of geographic proximity, and this approach can achieve high aggregate I/O throughput. Secondly, in order to improve data retrieval efficiency, we design a two-tier distributed spatial index for efficient pruning of the search space. Thirdly, we propose an \"indexing + MapReduce'' data processing architecture to improve the computation capability of spatial query. Performance evaluations of the real-deployed VegaGiStore system confirm its effectiveness.", "title": "" }, { "docid": "670556463e3204a98b1e407ea0619a1f", "text": "1 Ekaterina Prasolova-Forland, IDI, NTNU, Sem Salandsv 7-9, N-7491 Trondheim, Norway ekaterip@idi.ntnu.no Abstract  This paper discusses awareness support in educational context, focusing on the support offered by collaborative virtual environments. Awareness plays an important role in everyday educational activities, especially in engineering courses where projects and group work is an integral part of the curriculum. In this paper we will provide a general overview of awareness in computer supported cooperative work and then focus on the awareness mechanisms offered by CVEs. We will also discuss the role and importance of these mechanisms in educational context and make some comparisons between awareness support in CVEs and in more traditional tools.", "title": "" }, { "docid": "f205f1760e33faebf2ded8065ff3c717", "text": "An audience effect arises when a person's behaviour changes because they believe someone else is watching them. Though these effects have been known about for over 110 years, the cognitive mechanisms of the audience effect and how it might vary across different populations and cultures remains unclear. In this review, we examine the hypothesis that the audience effect draws on implicit mentalising abilities. Behavioural and neuroimaging data from a number of tasks are consistent with this hypothesis. We further review data suggest that how people respond to audiences may vary over development, personality factors, cultural background and clinical diagnosis including autism and anxiety disorder. Overall, understanding and exploring the audience effect may contribute to our models of social interaction, including reputation management and mentalising.", "title": "" }, { "docid": "98689a2f03193a2fb5cc5195ef735483", "text": "Darknet markets are online services behind Tor where cybercriminals trade illegal goods and stolen datasets. In recent years, security analysts and law enforcement start to investigate the darknet markets to study the cybercriminal networks and predict future incidents. 
However, vendors in these markets often create multiple accounts (\\em i.e., Sybils), making it challenging to infer the relationships between cybercriminals and identify coordinated crimes. In this paper, we present a novel approach to link the multiple accounts of the same darknet vendors through photo analytics. The core idea is that darknet vendors often have to take their own product photos to prove the possession of the illegal goods, which can reveal their distinct photography styles. To fingerprint vendors, we construct a series deep neural networks to model the photography styles. We apply transfer learning to the model training, which allows us to accurately fingerprint vendors with a limited number of photos. We evaluate the system using real-world datasets from 3 large darknet markets (7,641 vendors and 197,682 product photos). A ground-truth evaluation shows that the system achieves an accuracy of 97.5%, outperforming existing stylometry-based methods in both accuracy and coverage. In addition, our system identifies previously unknown Sybil accounts within the same markets (23) and across different markets (715 pairs). Further case studies reveal new insights into the coordinated Sybil activities such as price manipulation, buyer scam, and product stocking and reselling.", "title": "" }, { "docid": "9326b7c1bd16e7db931131f77aaad687", "text": "We argue in this article that many common adverbial phrases generally taken to signal a discourse relation between syntactically connected units within discourse structure instead work anaphorically to contribute relational meaning, with only indirect dependence on discourse structure. This allows a simpler discourse structure to provide scaffolding for compositional semantics and reveals multiple ways in which the relational meaning conveyed by adverbial connectives can interact with that associated with discourse structure. We conclude by sketching out a lexicalized grammar for discourse that facilitates discourse interpretation as a product of compositional rules, anaphor resolution, and inference.", "title": "" }, { "docid": "3b9af99b33c15188a8ec50c7decd3b28", "text": "The recent advances in deep neural networks have convincingly demonstrated high capability in learning vision models on large datasets. Nevertheless, collecting expert labeled datasets especially with pixel-level annotations is an extremely expensive process. An appealing alternative is to render synthetic data (e.g., computer games) and generate ground truth automatically. However, simply applying the models learnt on synthetic images may lead to high generalization error on real images due to domain shift. In this paper, we facilitate this issue from the perspectives of both visual appearance-level and representation-level domain adaptation. The former adapts source-domain images to appear as if drawn from the \"style\" in the target domain and the latter attempts to learn domain-invariant representations. Specifically, we present Fully Convolutional Adaptation Networks (FCAN), a novel deep architecture for semantic segmentation which combines Appearance Adaptation Networks (AAN) and Representation Adaptation Networks (RAN). AAN learns a transformation from one domain to the other in the pixel space and RAN is optimized in an adversarial learning manner to maximally fool the domain discriminator with the learnt source and target representations. 
Extensive experiments are conducted on the transfer from GTA5 (game videos) to Cityscapes (urban street scenes) on semantic segmentation and our proposal achieves superior results when comparing to state-of-the-art unsupervised adaptation techniques. More remarkably, we obtain a new record: mIoU of 47.5% on BDDS (drive-cam videos) in an unsupervised setting.", "title": "" }, { "docid": "df97ff54b80a096670c7771de1f49b6d", "text": "In recent times, Bitcoin has gained special attention both from industry and academia. The underlying technology that enables Bitcoin (or more generally crypto-currency) is called blockchain. At the core of the blockchain technology is a data structure that keeps record of the transactions in the network. The special feature that distinguishes it from existing technology is its immutability of the stored records. To achieve immutability, it uses consensus and cryptographic mechanisms. As the data is stored in distributed nodes this technology is also termed as \"Distributed Ledger Technology (DLT)\". As many researchers and practitioners are joining the hype of blockchain, some of them are raising the question about the fundamental difference between blockchain and traditional database and its real value or potential. In this paper, we present a critical analysis of both technologies based on a survey of the research literature where blockchain solutions are applied to various scenarios. Based on this analysis, we further develop a decision tree diagram that will help both practitioners and researchers to choose the appropriate technology for their use cases. Using our proposed decision tree we evaluate a sample of the existing works to see to what extent the blockchain solutions have been used appropriately in the relevant problem domains.", "title": "" }, { "docid": "06518637c2b44779da3479854fdbb84d", "text": "OBJECTIVE\nThe relative short-term efficacy and long-term benefits of pharmacologic versus psychotherapeutic interventions have not been studied for posttraumatic stress disorder (PTSD). This study compared the efficacy of a selective serotonin reup-take inhibitor (SSRI), fluoxetine, with a psychotherapeutic treatment, eye movement desensitization and reprocessing (EMDR), and pill placebo and measured maintenance of treatment gains at 6-month follow-up.\n\n\nMETHOD\nEighty-eight PTSD subjects diagnosed according to DSM-IV criteria were randomly assigned to EMDR, fluoxetine, or pill placebo. They received 8 weeks of treatment and were assessed by blind raters posttreatment and at 6-month follow-up. The primary outcome measure was the Clinician-Administered PTSD Scale, DSM-IV version, and the secondary outcome measure was the Beck Depression Inventory-II. The study ran from July 2000 through July 2003.\n\n\nRESULTS\nThe psychotherapy intervention was more successful than pharmacotherapy in achieving sustained reductions in PTSD and depression symptoms, but this benefit accrued primarily for adult-onset trauma survivors. At 6-month follow-up, 75.0% of adult-onset versus 33.3% of child-onset trauma subjects receiving EMDR achieved asymptomatic end-state functioning compared with none in the fluoxetine group. For most childhood-onset trauma patients, neither treatment produced complete symptom remission.\n\n\nCONCLUSIONS\nThis study supports the efficacy of brief EMDR treatment to produce substantial and sustained reduction of PTSD and depression in most victims of adult-onset trauma. 
It suggests a role for SSRIs as a reliable first-line intervention to achieve moderate symptom relief for adult victims of childhood-onset trauma. Future research should assess the impact of lengthier intervention, combination treatments, and treatment sequencing on the resolution of PTSD in adults with childhood-onset trauma.", "title": "" }, { "docid": "f2239ebff484962c302b00faf24374e4", "text": "In this paper, a methodology for the automated detection and classification of transient events in electroencephalographic (EEG) recordings is presented. It is based on association rule mining and classifies transient events into four categories: epileptic spikes, muscle activity, eye blinking activity, and sharp alpha activity. The methodology involves four stages: 1) transient event detection; 2) clustering of transient events and feature extraction; 3) feature discretization and feature subset selection; and 4) association rule mining and classification of transient events. The methodology is evaluated using 25 EEG recordings, and the best obtained accuracy was 87.38%. The proposed approach combines high accuracy with the ability to provide interpretation for the decisions made, since it is based on a set of association rules", "title": "" }, { "docid": "cd6fce2e64ba8933339dd59491b9ef1d", "text": "The first micrometer-sized graphene flakes extracted from graphite demonstrated outstanding electrical, mechanical and chemical properties, but they were too small for practical applications. However, the recent advances in graphene synthesis and transfer techniques have enabled various macroscopic applications such as transparent electrodes for touch screens and light-emitting diodes (LEDs) and thin-film transistors for flexible electronics in particular. With such exciting potential, a great deal of effort has been put towards producing larger size graphene in the hopes of industrializing graphene production. Little less than a decade after the first discovery, graphene now can be synthesized up to 30 inches in its diagonal size using chemical vapour deposition methods. In making this possible, it was not only the advances in the synthesis techniques but also the transfer methods that deliver graphene onto target substrates without significant mechanical damage. In this article, the recent advancements in transferring graphene to arbitrary substrates will be extensively reviewed. The methods are categorized into mechanical exfoliation, polymer-assisted transfer, continuous transfer by roll-to-roll process, and transfer-free techniques including direct synthesis on insulating substrates.", "title": "" }, { "docid": "02e961880a7925eb9d41c372498cb8d0", "text": "Since debt is typically riskier in recessions, transfers from equity holders to debt holders associated with each investment also tend to concentrate in recessions. Such systematic risk exposure of debt overhang has important implications for the investment and financing decisions of firms and on the ex ante costs of debt overhang. Using a calibrated dynamic capital structure/real option model, we show that the costs of debt overhang become significantly higher in the presence of macroeconomic risk. We also provide several new predictions that relate the cyclicality of a firm’s assets in place and growth options to its investment and capital structure decisions. 
We are grateful to Santiago Bazdresch, Bob Goldstein, David Mauer (WFA discussant), Erwan Morellec, Stew Myers, Chris Parsons, Michael Roberts, Antoinette Schoar, Neng Wang, Ivo Welch, and seminar participants at MIT, Federal Reserve Bank of Boston, Boston University, Dartmouth, University of Lausanne, University of Minnesota, the Third Risk Management Conference at Mont Tremblant, the Minnesota Corporate Finance Conference, and the WFA for their comments. MIT Sloan School of Management and NBER. Email: huichen@mit.edu. Tel. 617-324-3896. MIT Sloan School of Management. Email: manso@mit.edu. Tel. 617-253-7218.", "title": "" }, { "docid": "40beda0d1e99f4cc5a15a3f7f6438ede", "text": "One of the major challenges with electric shipboard power systems (SPS) is preserving the survivability of the system under fault situations. Some minor faults in SPS can result in catastrophic consequences. Therefore, it is essential to investigate available fault management techniques for SPS applications that can enhance SPS robustness and reliability. Many recent studies in this area take different approaches to address fault tolerance in SPSs. This paper provides an overview of the concepts and methodologies that are utilized to deal with faults in the electric SPS. First, a taxonomy of the types of faults and their sources in SPS is presented; then, the methods that are used to detect, identify, isolate, and manage faults are reviewed. Furthermore, common techniques for designing a fault management system in SPS are analyzed and compared. This paper also highlights several possible future research directions.", "title": "" }, { "docid": "1d5a91029960f267b49831bee80e348f", "text": "Deep neural networks (DNNs) have become the dominant technique for acoustic-phonetic modeling due to their markedly improved performance over other models. Despite this, little is understood about the computation they implement in creating phonemic categories from highly variable acoustic signals. In this paper, we analyzed a DNN trained for phoneme recognition and characterized its representational properties, both at the single node and population level in each layer. At the single node level, we found strong selectivity to distinct phonetic features in all layers. Node selectivity to specific manners and places of articulation appeared from the first hidden layer and became more explicit in deeper layers. Furthermore, we found that nodes with similar phonetic feature selectivity were differentially activated to different exemplars of these features. Thus, each node becomes tuned to a particular acoustic manifestation of the same feature, providing an effective representational basis for the formation of invariant phonemic categories. This study reveals that phonetic features organize the activations in different layers of a DNN, a result that mirrors the recent findings of feature encoding in the human auditory system. These insights may provide better understanding of the limitations of current models, leading to new strategies to improve their performance.", "title": "" } ]
scidocsrr
44f0de3b4bb4c34188a380aad7efbf34
Effect of Iyengar yoga therapy for chronic low back pain
[ { "docid": "9876e4298f674a617f065f348417982a", "text": "On the basis of medical officers' diagnosis, thirty three (N = 33) hypertensives, aged 35-65 years, from Govt. General Hospital, Pondicherry, were examined on four variables, viz. systolic and diastolic blood pressure, pulse rate and body weight. The subjects were randomly assigned into three groups. The exp. group-I underwent selected yoga practices, exp. group-II received medical treatment by the physician of the said hospital and the control group did not participate in any of the treatment stimuli. Yoga was imparted in the morning and in the evening, with 1 hr per session per day, for a total period of 11 weeks. Medical treatment comprised drug intake every day for the whole experimental period. The results of the pre-post test with ANCOVA revealed that both the treatment stimuli (i.e., yoga and drug) were effective in controlling the variables of hypertension.", "title": "" } ]
[ { "docid": "80ed0585f1b040f2af895f1067502899", "text": "In this paper, we present the concept of transmitting power without using wires i.e., transmitting power as microwaves from one place to another is in order to reduce the cost, transmission and distribution losses. This concept is known as Microwave Power transmission (MPT). We also discussed the technological developments in Wireless Power Transmission (WPT) which are required for the improment .The components which are requiredfor the development of Microwave Power transmission(MPT)are also mentioned along with the performance when they are connected to various devices at different frequency levels . The advantages, disadvantages, biological impacts and applications of WPT are also presented.", "title": "" }, { "docid": "a799bba2a5d56d45e3b0569119ee8ad2", "text": "There has been much research investigating team cognition, naturalistic decision making, and collaborative technology as it relates to real world, complex domains of practice. However, there has been limited work in incorporating naturalistic decision making models for supporting distributed team decision making. The aim of this research is to support human decision making teams using cognitive agents empowered by a collaborative Recognition-Primed Decision model. In this paper, we first describe an RPD-enabled agent architecture (R-CAST), in which we have implemented an internal mechanism of decision-making adaptation based on collaborative expectancy monitoring, and an information exchange mechanism driven by relevant cue analysis. We have evaluated R-CAST agents in a real-time simulation environment, feeding teams with frequent decision-making tasks under different tempo situations. While the result conforms to psychological findings that human team members are extremely sensitive to their workload in high-tempo situations, it clearly indicates that human teams, when supported by R-CAST agents, can perform better in the sense that they can maintain team performance at acceptable levels in high time pressure situations.", "title": "" }, { "docid": "9607eff43d60837e407d7fa07eb4650f", "text": "Given a network with node attributes, how can we identify communities and spot anomalies? How can we characterize, describe, or summarize the network in a succinct way? Community extraction requires a measure of quality for connected subgraphs (e.g., social circles). Existing subgraph measures, however, either consider only the connectedness of nodes inside the community and ignore the cross-edges at the boundary (e.g., density) or only quantify the structure of the community and ignore the node attributes (e.g., conductance). In this work, we focus on node-attributed networks and introduce: (1) a new measure of subgraph quality for attributed communities called normality, (2) a community extraction algorithm that uses normality to extract communities and a few characterizing attributes per community, and (3) a summarization and interactive visualization approach for attributed graph exploration. More specifically, (1) we first introduce a new measure to quantify the normality of an attributed subgraph. Our normality measure carefully utilizes structure and attributes together to quantify both the internal consistency and external separability. We then formulate an objective function to automatically infer a few attributes (called the “focus”) and respective attribute weights, so as to maximize the normality  score of a given subgraph. 
Most notably, unlike many other approaches, our measure allows for many cross-edges as long as they can be “exonerated;” i.e., either (i) are expected under a null graph model, and/or (ii) their boundary nodes do not exhibit the focus attributes. Next, (2) we propose AMEN (for Attributed Mining of Entity Networks), an algorithm that simultaneously discovers the communities and their respective focus in a given graph, with a goal to maximize the total normality. Communities for which a focus that yields high normality  cannot be found are considered low quality or anomalous. Last, (3) we formulate a summarization task with a multi-criteria objective, which selects a subset of the communities that (i) cover the entire graph well, are (ii) high quality and (iii) diverse in their focus attributes. We further design an interactive visualization interface that presents the communities to a user in an interpretable, user-friendly fashion. The user can explore all the communities, analyze various algorithm-generated summaries, as well as devise their own summaries interactively to characterize the network in a succinct way. As the experiments on real-world attributed graphs show, our proposed approaches effectively find anomalous communities and outperform several existing measures and methods, such as conductance, density, OddBall, and SODA. We also conduct extensive user studies to measure the capability and efficiency that our approach provides to the users toward network summarization, exploration, and sensemaking.", "title": "" }, { "docid": "b540cb8f0f0825662d21a5e2ed100012", "text": "Social media platforms are popular venues for fashion brand marketing and advertising. With the introduction of native advertising, users don’t have to endure banner ads that hold very little saliency and are unattractive. Using images and subtle text overlays, even in a world of ever-depreciating attention span, brands can retain their audience and have a capacious creative potential. While an assortment of marketing strategies are conjectured, the subtle distinctions between various types of marketing strategies remain under-explored. This paper presents a qualitative analysis on the influence of social media platforms on different behaviors of fashion brand marketing. We employ both linguistic and computer vision techniques while comparing and contrasting strategic idiosyncrasies. We also analyze brand audience retention and social engagement hence providing suggestions in adapting advertising and marketing strategies over Twitter and Instagram.", "title": "" }, { "docid": "2eba831751ae88cfb69b7c4463df438a", "text": "ÐSoftware engineers use a number of different types of software development technical review (SDTR) for the purpose of detecting defects in software products. This paper applies the behavioral theory of group performance to explain the outcomes of software reviews. A program of empirical research is developed, including propositions to both explain review performance and identify ways of improving review performance based on the specific strengths of individuals and groups. Its contributions are to clarify our understanding of what drives defect detection performance in SDTRs and to set an agenda for future research. In identifying individuals' task expertise as the primary driver of review performance, the research program suggests specific points of leverage for substantially improving review performance. 
It points to the importance of understanding software reading expertise and implies the need for a reconsideration of existing approaches to managing reviews. Index TermsÐInspections, walkthroughs, technical reviews, defects, defect detection, groups, group process, group size, expertise, reading, training, behavioral research, theory, research program.", "title": "" }, { "docid": "a8b26d719b7512634383c71c1e57c960", "text": "The method of finding high-quality answers has significant impact on user satisfaction in community question answering systems. However, due to the lexical gap between questions and answers as well as spam typically existing in user-generated content, filtering and ranking answers is very challenging. Previous solutions mainly focus on generating redundant features, or finding textual clues using machine learning techniques; none of them ever consider questions and their answers as relational data but instead model them as independent information. Moreover, they only consider the answers of the current question, and ignore any previous knowledge that would be helpful to bridge the lexical and semantic gap. We assume that answers are connected to their questions with various types of latent links, i.e. positive indicating high-quality answers, negative links indicating incorrect answers or user-generated spam, and propose an analogical reasoning-based approach which measures the analogy between the new question-answer linkages and those of relevant knowledge which contains only positive links; the candidate answer which has the most analogous link is assumed to be the best answer. We conducted experiments based on 29.8 million Yahoo!Answer question-answer threads and showed the effectiveness of our approach.", "title": "" }, { "docid": "5fafb56408b75344fe7e55260a758180", "text": "This paper presents a new conversion method to automatically transform a constituent-based Vietnamese Treebank into dependency trees. On a dependency Treebank created according to our new approach, we examine two stateof-the-art dependency parsers: the MSTParser and the MaltParser. Experiments show that the MSTParser outperforms the MaltParser. To the best of our knowledge, we report the highest performances published to date in the task of dependency parsing for Vietnamese. Particularly, on gold standard POS tags, we get an unlabeled attachment score of 79.08% and a labeled attachment score of 71.66%.", "title": "" }, { "docid": "db26d71ec62388e5367eb0f2bb45ad40", "text": "The linear programming (LP) is one of the most popular necessary optimization tool used for data analytics as well as in various scientific fields. However, the current state-of-art algorithms suffer from scalability issues when processing Big Data. For example, the commercial optimization software IBM CPLEX cannot handle an LP with more than hundreds of thousands variables or constraints. Existing algorithms are fundamentally hard to scale because they are inevitably too complex to parallelize. To address the issue, we study the possibility of using the Belief Propagation (BP) algorithm as an LP solver. BP has shown remarkable performances on various machine learning tasks and it naturally lends itself to fast parallel implementations. Despite this, very little work has been done in this area. In particular, while it is generally believed that BP implicitly solves an optimization problem, it is not well understood under what conditions the solution to a BP converges to that of a corresponding LP formulation. 
Our efforts consist of two main parts. First, we perform a theoretic study and establish the conditions in which BP can solve LP [1,2]. Although there have been several works studying the relation between BP and LP for certain instances, our work provides a generic condition unifying all prior works for generic LP. Second, utilizing our theoretical results, we develop a practical BP-based parallel algorithm for solving generic LPs, and it shows a 71x speed up while sacrificing only 0.1% accuracy compared to the state-of-the-art exact algorithm [3, 4]. As a result of the study, the PIs have published two conference papers [1,3], and two follow-up journal papers [3,4] are under submission. We refer the readers to our published work [1,3] for details. Introduction: The main goal of our research is to develop a distributed and parallel algorithm for large-scale linear optimization (or programming). Considering the popularity and importance of linear optimization in various fields, the proposed method has great potential for application to various big data analytics. Our approach is based on the Belief Propagation (BP) algorithm, which has shown remarkable performance on various machine learning tasks and naturally lends itself to fast parallel implementations. Our key contributions are summarized below: 1) We establish key theoretic foundations in the area of Belief Propagation. In particular, we show that BP converges to the solution of LP if some sufficient conditions are satisfied. Our conditions not only cover various prior studies including maximum weight matching, min-cost network flow, shortest path, etc., but also discover new applications such as vertex cover and traveling salesman. 2) While the theoretic study provides understanding of the nature of BP, it falls short due to slow convergence speed, oscillation and wrong convergence. To make BP-based algorithms more practical, we design a BP-based framework which uses BP as a ‘weight transformer’ to resolve the convergence issue of BP. We refer the readers to our published work [1, 3] for details. The rest of the report contains a summary of our work that appeared in UAI (Uncertainty in Artificial Intelligence) and the IEEE Conference on Big Data [1,3], and follow-up work [2,4] under submission to major journals. Experiment: We first establish theoretical conditions under which Belief Propagation (BP) can solve Linear Programming (LP), and second provide a practical distributed/parallel BP-based framework solving generic optimizations. We demonstrate the wide applicability of our approach via popular combinatorial optimizations including maximum weight matching, shortest path, traveling salesman, cycle packing and vertex cover. Results and Discussion: Our contribution consists of two parts: Study 1 [1,2] looks at the theoretical conditions under which BP converges to the solution of LP. Our theoretical results unify almost all prior results about BP for combinatorial optimization. Furthermore, our conditions provide a guideline for designing distributed algorithms for combinatorial optimization problems. Study 2 [3,4] focuses on building an optimal framework based on the theory of Study 1 for boosting the practical performance of BP. Our framework is generic; thus, it can be easily extended to various optimization problems. We also compare the empirical performance of our framework to other heuristics and state-of-the-art algorithms for several combinatorial optimization problems.
-------------------------------------------------------Study 1 -------------------------------------------------------We first introduce the background for our contributions. A joint distribution of n (binary) variables x = [x_i] ∈ {0,1}^n is called a graphical model (GM) if it factorizes as Pr[x] ∝ ∏_{α∈F} ψ_α(x_α), where the ψ_α are some non-negative functions, so-called factors; F is a collection of subsets (each α ∈ F is a subset of {1,⋯,n} with |α| ≥ 2) and x_α is the projection of x onto the dimensions included in α. An assignment x* is called a maximum-a-posteriori (MAP) assignment if x* maximizes the probability. The following figure depicts the graphical relation between factors and variables. Figure 1: Factor graph for the graphical model with factors α1 = {1,3}, α2 = {1,2,4}, α3 = {2,3,4}. Now we introduce the algorithm, (max-product) BP, for approximating the MAP assignment in a graphical model. BP is an iterative procedure; at each iteration t, there are four messages between each variable x_i and every associated factor α ∈ F_i, where F_i := {α ∈ F : i ∈ α}. The messages are updated according to the max-product rules, BP marginal beliefs are then computed from the messages, and BP outputs the approximated MAP assignment x^BP = [x_i^BP] by a decision rule on these beliefs (the explicit update, belief and decision equations are omitted in this summary). Now, we are ready to introduce the main result of Study 1. Consider the following GM defined over x = [x_i] ∈ {0,1}^n with weights w = [w_i] ∈ ℝ^n, where the factor function ψ_α for α ∈ F is defined as an indicator of linear constraints specified by some matrices and vectors (its explicit definition is omitted here). Consider the Linear Programming (LP) problem corresponding to the above GM. One can easily observe that the MAP assignment for the GM corresponds to the (optimal) solution of the above LP if the LP has an integral solution x* ∈ {0,1}^n. The following theorem is our main result of Study 1, which provides sufficient conditions so that BP can indeed find the LP solution. Theorem 1 can be applied to several combinatorial optimization problems including matching, network flow, shortest path, vertex cover, etc. See [1,2] for the detailed proof of Theorem 1 and its applications to various combinatorial optimizations including maximum weight matching, min-cost network flow, shortest path, vertex cover and traveling salesman. -------------------------------------------------------Study 2 -------------------------------------------------------Study 2 mainly focuses on providing a distributed generic BP-based combinatorial optimization solver which has high accuracy and low computational complexity. In summary, the key contributions of Study 2 are as follows: 1) Practical BP-based algorithm design: To the best of our knowledge, this paper is the first to propose a generic concept for designing BP-based algorithms that solve large-scale combinatorial optimization problems. 2) Parallel implementation: We also demonstrate that the algorithm is easily parallelizable. For the maximum weighted matching problem, this translates to a 71x speed up while sacrificing only 0.1% accuracy compared to the state-of-the-art exact algorithm. 3) Extensive empirical evaluation: We evaluate our algorithms on three different combinatorial optimization problems on diverse synthetic and real-world data-sets. Our evaluation shows that the framework achieves higher accuracy compared to other known heuristics. Designing a BP-based algorithm for some problem is easy in general.
However (a) it might diverge or converge very slowly, (b) even if it converges quickly, the BP decision might not be correct, and (c) even worse, BP might produce an infeasible solution, i.e., it does not satisfy the constraints of the problem. Figure 2: Overview of our generic BP-based framework. To address these issues, we propose a generic BP-based framework that provides highly accurate approximate solutions for combinatorial optimization problems. The framework has two steps, as shown in Figure 2. In the first phase, it runs a BP algorithm for a fixed number of iterations without waiting for convergence. Then, the second phase runs a known heuristic using BP beliefs instead of the original weights to output a feasible solution. Namely, the first and second phases are respectively designed for ‘BP weight transforming’ and ‘post-processing’. Note that our evaluation mainly uses the maximum weight matching problem. The formal description of the maximum weight matching (MWM) problem is as follows: Given a graph G = (V, E) and edge weights w = [w_e] ∈ ℝ^|E|, it finds a set of edges such that each vertex is connected to at most one edge in the set and the sum of edge weights in the set is maximized. The problem is formulated as the following IP (Integer Programming), where δ(v) is the set of edges incident to vertex v ∈ V (the explicit IP formulation is omitted here). In the following paragraphs, we describe the two phases in more detail in reverse order. We first describe the post-processing phase. As we mentioned, one of the main issues of a BP-based algorithm is that the decision on BP beliefs might give an infeasible solution. To resolve the issue, we use post-processing by applying existing heuristics for the given problem that find a feasible solution. Applying post-processing ensures that the solution is at least feasible. In addition, our key idea is to replace the original weights by the logarithm of BP beliefs, i.e., a function of (3). After th", "title": "" }, { "docid": "e8e8e6d288491e715177a03601500073", "text": "Protein–protein interactions constitute the regulatory network that coordinates diverse cellular functions. Co-immunoprecipitation (co-IP) is a widely used and effective technique to study protein–protein interactions in living cells. However, the time and cost for the preparation of a highly specific antibody are the major disadvantages associated with this technique. In the present study, a co-IP system was developed to detect protein–protein interactions based on an improved protoplast transient expression system by using commercially available antibodies. This co-IP system eliminates the need for specific antibody preparation and transgenic plant production. Leaf sheaths of rice green seedlings were used for the protoplast transient expression system, which demonstrated high transformation and co-transformation efficiencies of plasmids. The transient expression system developed by this study is suitable for subcellular localization and protein detection. This work provides a rapid, reliable, and cost-effective system to study transient gene expression, protein subcellular localization, and characterization of protein–protein interactions in vivo.", "title": "" }, { "docid": "8793b4ed20f6edce8cb61af1ff0aee55", "text": "This paper addresses the topic of real-time decision making for autonomous city vehicles, i.e., the autonomous vehicles' ability to make appropriate driving decisions in city road traffic situations.
The paper explains the overall controls system architecture, the decision making task decomposition, and focuses on how Multiple Criteria Decision Making (MCDM) is used in the process of selecting the most appropriate driving maneuver from the set of feasible ones. Experimental tests show that MCDM is suitable for this new application area.", "title": "" }, { "docid": "ce7175f868e2805e9e08e96a1c9738f4", "text": "The development of the Semantic Web, with machine-readable content, has the potential to revolutionize the World Wide Web and its use. In A Semantic Web Primer Grigoris Antoniou and Frank van Harmelen provide an introduction and guide to this emerging field, describing its key ideas, languages, and technologies. Suitable for use as a textbook or for self-study by professionals, the book concentrates on undergraduate-level fundamental concepts and techniques that will enable readers to proceed with building applications on their own and includes exercises, project descriptions, and annotated references to relevant online materials. A Semantic Web Primer is the only available book on the Semantic Web to include a systematic treatment of the different languages (XML, RDF, OWL, and rules) and technologies (explicit metadata, ontologies, and logic and inference) that are central to Semantic Web development. The book also examines such crucial related topics as ontology engineering and application scenarios. After an introductory chapter, topics covered in succeeding chapters include XML and related technologies that support semantic interoperability; RDF and RDF Schema, the standard data model for machine-processible semantics; and OWL, the W3C-approved standard for a Web ontology language that is more extensive than RDF Schema; rules, both monotonic and nonmonotonic, in the framework of the Semantic Web; selected application domains and how the Semantic Web would benefit them; the development of ontology-based systems; and current debates on key issues and predictions for the future.", "title": "" }, { "docid": "da4d3534f0f8cf463d4dfff9760b68f4", "text": "While recommendation approaches exploiting different input sources have started to proliferate in the literature, an explicit study of the effect of the combination of heterogeneous inputs is still missing. On the other hand, in this context there are sides to recommendation quality requiring further characterisation and methodological research –a gap that is acknowledged in the field. We present a comparative study on the influence that different types of information available in social systems have on item recommendation. Aiming to identify which sources of user interest evidence –tags, social contacts, and user-item interaction data– are more effective to achieve useful recommendations, and in what aspect, we evaluate a number of content-based, collaborative filtering, and social recommenders on three datasets obtained from Delicious, Last.fm, and MovieLens. Aiming to determine whether and how combining such information sources may enhance over individual recommendation approaches, we extend the common accuracy-oriented evaluation practice with various metrics to measure further recommendation quality dimensions, namely coverage, diversity, novelty, overlap, and relative diversity between ranked item recommendations. 
We report empiric observations showing that exploiting tagging information by content-based recommenders provides high coverage and novelty, and combining social networking and collaborative filtering information by hybrid recommenders results in high accuracy and diversity. This, along with the fact that recommendation lists from the evaluated approaches had low overlap and relative diversity values between them, gives insights that meta-hybrid recommenders combining the above strategies may provide valuable, balanced item suggestions in terms of performance and non-performance metrics.", "title": "" }, { "docid": "802eb80255cf85991260da72b87238e1", "text": "This paper describes the vision-based control of a small autonomous aircraft following a road. The computer vision system detects natural features of the scene and tracks the roadway in order to determine relative yaw and lateral displacement between the aircraft and the road. Using only the vision measurements and onboard inertial sensors, a control strategy stabilizes the aircraft and follows the road. The road detection and aircraft control strategies have been verified by hardware in the loop (HIL) simulations over long stretches (several kilometers) of straight roads and in conditions of up to 5 m/s of prevailing wind. Hardware experiments have also been conducted using a modified radio-controlled aircraft. Successful road following was demonstrated over an airfield runway under variable lighting and wind conditions. The development of vision-based control strategies for unmanned aerial vehicles (UAVs), such as the ones presented here, enables complex autonomous missions in environments where typical navigation sensor like GPS are unavailable.", "title": "" }, { "docid": "8da939b67039eddb24db213337a65958", "text": "Alistair S. Jump* and Josep Peñuelas Unitat d’Ecofisiologia CSICCEAB-CREAF, Centre de Recerca Ecològica i Aplicacions Forestals, Universitat Autònoma de Barcelona, E-08193, Bellaterra, Barcelona, Spain *Correspondence: E-mail: a.s.jump@creaf.uab.es Abstract Climate is a potent selective force in natural populations, yet the importance of adaptation in the response of plant species to past climate change has been questioned. As many species are unlikely to migrate fast enough to track the rapidly changing climate of the future, adaptation must play an increasingly important role in their response. In this paper we review recent work that has documented climate-related genetic diversity within populations or on the microgeographical scale. We then describe studies that have looked at the potential evolutionary responses of plant populations to future climate change. We argue that in fragmented landscapes, rapid climate change has the potential to overwhelm the capacity for adaptation in many plant populations and dramatically alter their genetic composition. The consequences are likely to include unpredictable changes in the presence and abundance of species within communities and a reduction in their ability to resist and recover from further environmental perturbations, such as pest and disease outbreaks and extreme climatic events. Overall, a range-wide increase in extinction risk is likely to result. 
We call for further research into understanding the causes and consequences of the maintenance and loss of climate-related genetic diversity within populations.", "title": "" }, { "docid": "ddcf9180119dfa0b26d7b6d4c0ed958e", "text": "BACKGROUND\nHandling of upper lateral cartilages (ULCs) is of prime importance in rhinoplasty. This study presents the experiences among 2500 cases of rhinoplasty in the past 10 years for managing of ULCs to minimize unwilling results of the shape and functional problems of the nose.\n\n\nMETHODS\nAll cases of rhinoplasties were done by the same surgeon from 2002 to 2013. Management of ULCs changed from resection to preserving the ULCs and to enhance their structural and functional roles. The techniques were spreader grafts, suturing of ULC together at the level or above the septum, using ULCs as auto-spreader flaps and very rarely trimming of ULCs unilaterally or bilaterally for making symmetric dorsal aesthetic lines. Fifty cases were operated based on this classification. Most cases were in type II and III. There were 7 cases in type I and 8 cases in type IV.\n\n\nRESULTS\nAmong most cases, the results were satisfactory although there were 8 cases for revision and among them, 2 cases had some fullness on dorsum and supra-tip because of inappropriate judgment on keeping the relationship between dorsum and tip. The problems in the shape and airways role of the nose reduced dramatically and a useful algorithm was presented.\n\n\nCONCLUSION\nULCs have great important roles in shape and function of nose. Preserving methods to keep these structures are of importance in surgical treatments of primary rhinoplasties. The presented algorithm helps to manage the ULCs in different anatomic types of the noses especially for surgeons who are in learning curve period.", "title": "" }, { "docid": "7731315bb30b1888caf4be87aa38a108", "text": "The problem of scheduling is concerned with searching for optimal (or near-optimal) schedules subject to a number of constraints. A variety of approaches have been developed to solve the problem of scheduling. However, many of these approaches are often impractical in dynamic real-world environments where there are complex constraints and a variety of unexpected disruptions. In most real-world environments, scheduling is an ongoing reactive process where the presence of real-time information continually forces reconsideration and revision of pre-established schedules. Scheduling research has largely ignored this problem, focusing instead on optimisation of static schedules. This paper outlines the limitations of static approaches to scheduling in the presence of real-time information and presents a number of issues that have come up in recent years on dynamic scheduling. The paper defines the problem of dynamic scheduling and provides a review of the state of the art of currently developing research on dynamic scheduling. The principles of several dynamic scheduling techniques, namely, dispatching rules, heuristics, meta-heuristics, artificial intelligence techniques, and multi-agent systems are described in detail, followed by a discussion and comparison of their potential.", "title": "" }, { "docid": "f935bdde9d4571f50e47e48f13bfc4b8", "text": "BACKGROUND\nThe incidence of microcephaly in Brazil in 2015 was 20 times higher than in previous years. Congenital microcephaly is associated with genetic factors and several causative agents. 
Epidemiological data suggest that microcephaly cases in Brazil might be associated with the introduction of Zika virus. We aimed to detect and sequence the Zika virus genome in amniotic fluid samples of two pregnant women in Brazil whose fetuses were diagnosed with microcephaly.\n\n\nMETHODS\nIn this case study, amniotic fluid samples from two pregnant women from the state of Paraíba in Brazil whose fetuses had been diagnosed with microcephaly were obtained, on the recommendation of the Brazilian health authorities, by ultrasound-guided transabdominal amniocentesis at 28 weeks' gestation. The women had presented at 18 weeks' and 10 weeks' gestation, respectively, with clinical manifestations that could have been symptoms of Zika virus infection, including fever, myalgia, and rash. After the amniotic fluid samples were centrifuged, DNA and RNA were extracted from the purified virus particles before the viral genome was identified by quantitative reverse transcription PCR and viral metagenomic next-generation sequencing. Phylogenetic reconstruction and investigation of recombination events were done by comparing the Brazilian Zika virus genome with sequences from other Zika strains and from flaviviruses that occur in similar regions in Brazil.\n\n\nFINDINGS\nWe detected the Zika virus genome in the amniotic fluid of both pregnant women. The virus was not detected in their urine or serum. Tests for dengue virus, chikungunya virus, Toxoplasma gondii, rubella virus, cytomegalovirus, herpes simplex virus, HIV, Treponema pallidum, and parvovirus B19 were all negative. After sequencing of the complete genome of the Brazilian Zika virus isolated from patient 1, phylogenetic analyses showed that the virus shares 97-100% of its genomic identity with lineages isolated during an outbreak in French Polynesia in 2013, and that in both envelope and NS5 genomic regions, it clustered with sequences from North and South America, southeast Asia, and the Pacific. After assessing the possibility of recombination events between the Zika virus and other flaviviruses, we ruled out the hypothesis that the Brazilian Zika virus genome is a recombinant strain with other mosquito-borne flaviviruses.\n\n\nINTERPRETATION\nThese findings strengthen the putative association between Zika virus and cases of microcephaly in neonates in Brazil. Moreover, our results suggest that the virus can cross the placental barrier. As a result, Zika virus should be considered as a potential infectious agent for human fetuses. Pathogenesis studies that confirm the tropism of Zika virus for neuronal cells are warranted.\n\n\nFUNDING\nConsellho Nacional de Desenvolvimento e Pesquisa (CNPq), Fundação de Amparo a Pesquisa do Estado do Rio de Janeiro (FAPERJ).", "title": "" }, { "docid": "dd7ab988d8a40e6181cd37f8a1b1acfa", "text": "In areas approaching malaria elimination, human mobility patterns are important in determining the proportion of malaria cases that are imported or the result of low-level, endemic transmission. A convenience sample of participants enrolled in a longitudinal cohort study in the catchment area of Macha Hospital in Choma District, Southern Province, Zambia, was selected to carry a GPS data logger for one month from October 2013 to August 2014. Density maps and activity space plots were created to evaluate seasonal movement patterns. Time spent outside the household compound during anopheline biting times, and time spent in malaria high- and low-risk areas, were calculated. 
There was evidence of seasonal movement patterns, with increased long-distance movement during the dry season. A median of 10.6% (interquartile range (IQR): 5.8-23.8) of time was spent away from the household, which decreased during anopheline biting times to 5.6% (IQR: 1.7-14.9). The per cent of time spent in malaria high-risk areas for participants residing in high-risk areas ranged from 83.2% to 100%, but ranged from only 0.0% to 36.7% for participants residing in low-risk areas. Interventions targeted at the household may be more effective because of restricted movement during the rainy season, with limited movement between high- and low-risk areas.", "title": "" }, { "docid": "eef1e51e4127ed481254f97963496f48", "text": "-Vehicular ad hoc networks (VANETs) are wireless networks that do not require any fixed infrastructure. Regarding traffic safety applications for VANETs, warning messages have to be quickly and smartly disseminated in order to reduce the required dissemination time and to increase the number of vehicles receiving the traffic warning information. Adaptive techniques for VANETs usually consider features related to the vehicles in the scenario, such as their density, speed, and position, to adapt the performance of the dissemination process. These approaches are not useful when trying to warn the highest number of vehicles about dangerous situations in realistic vehicular environments. The Profile-driven Adaptive Warning Dissemination Scheme (PAWDS) designed to improve the warning message dissemination process. PAWDS system that dynamically modifies some of the key parameters of the propagation process and it cannot detect the vehicles which are in the dangerous position. Proposed system identifies the vehicles which are in the dangerous position and to send warning messages immediately. The vehicles must make use of all the available information efficiently to predict the position of nearby vehicles. Keywords— PAWDS, VANET, Ad hoc network , OBU , RSU, GPS.", "title": "" }, { "docid": "5c11736439fe488b389e400141ccfdb0", "text": "We propose a hierarchical model for sequential data that learns a tree on-thefly, i.e. while reading the sequence. In the model, a recurrent network adapts its structure and reuses recurrent weights in a recursive manner. This creates adaptive skip-connections that ease the learning of long-term dependencies. The tree structure can either be inferred without supervision through reinforcement learning, or learned in a supervised manner. We provide preliminary experiments in a novel Math Expression Evaluation (MEE) task, which is explicitly crafted to have a hierarchical tree structure that can be used to study the effectiveness of our model. Additionally, we test our model in a wellknown propositional logic and language modelling tasks. Experimental results show the potential of our approach.", "title": "" } ]
scidocsrr
3f85ab24763b17b0e940da68b34bb844
Computational personality traits assessment: A review
[ { "docid": "1378ab6b9a77dba00beb63c27b1addf6", "text": "Whenever we listen to or meet a new person we try to predict personality attributes of the person. Our behavior towards the person is hugely influenced by the predictions we make. Personality is made up of the characteristic patterns of thoughts, feelings and behaviors that make a person unique. Your personality affects your success in a role. Learning about yourself and reflecting on your personality can help you to understand how you might shape your future. Various approaches like personality prediction through speech, facial expression, video, and text have been proposed in the literature to recognize personality. Personality predictions can be made from one’s handwriting as well. The objective of this paper is to discuss the methodology used to identify personality through handwriting analysis and present the current state of the art related to it.", "title": "" }, { "docid": "c0d794e7275e7410998115303bf0cf79", "text": "We present a hierarchical model that learns image decompositions via alternating layers of convolutional sparse coding and max pooling. When trained on natural images, the layers of our model capture image information in a variety of forms: low-level edges, mid-level edge junctions, high-level object parts and complete objects. To build our model we rely on a novel inference scheme that ensures each layer reconstructs the input, rather than just the output of the layer directly beneath, as is common with existing hierarchical approaches. This makes it possible to learn multiple layers of representation and we show models with 4 layers, trained on images from the Caltech-101 and 256 datasets. When combined with a standard classifier, features extracted from these models outperform SIFT, as well as representations from other feature learning methods.", "title": "" } ]
[ { "docid": "7ebf04cde2f938787dac4718e768efe1", "text": "With the proliferation of mobile demands and increasingly multifarious services and applications, mobile Internet has been an irreversible trend. Unfortunately, the current mobile and wireless network (MWN) faces a series of pressing challenges caused by the inherent design. In this paper, we extend two latest and promising innovations of Internet, software-defined networking and network virtualization, to mobile and wireless scenarios. We first describe the challenges and expectations of MWN, and analyze the opportunities provided by the software-defined wireless network (SDWN) and wireless network virtualization (WNV). Then, this paper focuses on SDWN and WNV by presenting the main ideas, advantages, ongoing researches and key technologies, and open issues respectively. Moreover, we interpret that these two technologies highly complement each other, and further investigate efficient joint design between them. This paper confirms that SDWN and WNV may efficiently address the crucial challenges of This work is supported by National Basic Research Program of China (973 Program Grant No. 2013CB329105), National Natural Science Foundation of China (Grants No. 61301080 and No. 61171065), Chinese National Major Scientific and Technological Specialized Project (No. 2013ZX03002001), Chinas Next Generation Internet (No. CNGI-12-03-007), and ZTE Corporation. M. Yang School of Electronics and Information, Northwestern Polytechnical University, Xi’an 710072, P. R. China E-mail: yangmao210@gmail.com Y. Li · D. Jin · L. Zeng Department of Electronic Engineering, Tsinghua University, Beijing 100084, P. R. China Y. Li E-mail: liyong07@tsinghua.edu.cn D. Jin, L. Zeng E-mail: {jindp, zenglg}@mail.tsinghua.edu.cn Xin Wu Big Switch, USA E-mail: xin.wu@bigswitch.com A. V. Vasilakos Department of Computer and Telecommunications Engineering,University of Western Macedonia, Greece Electrical and Computer Engineering, National Technical University of Athens (NTUA), Greece E-mail: vasilako@ath.forthnet.gr MWN and significantly benefit the future mobile and wireless network.", "title": "" }, { "docid": "e708fc43b5ac8abf8cc2707195e8a45e", "text": "We develop analytical models for predicting the magnetic field distribution in Halbach magnetized machines. They are formulated in polar coordinates and account for the relative recoil permeability of the magnets. They are applicable to both internal and external rotor permanent-magnet machines with either an iron-cored or air-cored stator and/or rotor. We compare predicted results with those obtained by finite-element analyses and measurements. We show that the air-gap flux density varies significantly with the pole number and that an optimal combination of the magnet thickness and the pole number exists for maximum air-gap flux density, while the back iron can enhance the air-gap field and electromagnetic torque when the radial thickness of the magnet is small.", "title": "" }, { "docid": "ac1f2a1a96ab424d9b69276efd4f1ed4", "text": "This paper describes various systems from the University of Minnesota, Duluth that participated in the CLPsych 2015 shared task. These systems learned decision lists based on lexical features found in training data. 
These systems typically had average precision in the range of .70 – .76, whereas a random baseline attained .47 – .49.", "title": "" }, { "docid": "19e09b1c0eb3646e5ae6484524f82e10", "text": "Results from 12 switchback field trials involving 1216 cows were combined to assess the effects of a protected B vitamin blend (BVB) upon milk yield (kg), fat percentage (%), protein %, fat yield (kg) and protein yield (kg) in primiparous and multiparous cows. Trials consisted of 3 test periods executed in the order control-test-control. No diet changes other than the inclusion of 3 grams/cow/ day of the BVB during the test period occurred. Means from the two control periods were compared to results obtained during the test period using a paired T test. Cows include in the analysis were between 45 and 300 days in milk (DIM) at the start of the experiment and were continuously available for all periods. The provision of the BVB resulted in increased (P < 0.05) milk, fat %, protein %, fat yield and protein yield. Regression models showed that the amount of milk produced had no effect upon the magnitude of the increase in milk components. The increase in milk was greatest in early lactation and declined with DIM. Protein and fat % increased with DIM in mature cows, but not in first lactation cows. Differences in fat yields between test and control feeding periods did not change with DIM, but the improvement in protein yield in mature cows declined with DIM. These results indicate that the BVB provided economically important advantages throughout lactation, but expected results would vary with cow age and stage of lactation.", "title": "" }, { "docid": "66c218bddb0bce210f8e0efa7bb457a7", "text": "The aim of Active and Assisted Living is to develop tools to promote the ageing in place of elderly people, and human activity recognition algorithms can help to monitor aged people in home environments. Different types of sensors can be used to address this task and the RGBD sensors, especially the ones used for gaming, are cost-effective and provide much information about the environment. This work aims to propose an activity recognition algorithm exploiting skeleton data extracted by RGBD sensors. The system is based on the extraction of key poses to compose a feature vector, and a multiclass Support Vector Machine to perform classification. Computation and association of key poses are carried out using a clustering algorithm, without the need of a learning algorithm. The proposed approach is evaluated on five publicly available datasets for activity recognition, showing promising results especially when applied for the recognition of AAL related actions. Finally, the current applicability of this solution in AAL scenarios and the future improvements needed are discussed.", "title": "" }, { "docid": "7a055093ac92c7d2fa7aa8dcbe47a8b8", "text": "In this paper, we present the design process of a smart bracelet that aims at enhancing the life of elderly people. The bracelet acts as a personal assistant during the user's everyday life, monitoring the health status and alerting him or her about abnormal conditions, reminding medications and facilitating the everyday life in many outdoor and indoor activities.", "title": "" }, { "docid": "c7a32821699ebafadb4c59e99fb3aa9e", "text": "According to the trend towards high-resolution CMOS image sensors, pixel sizes are continuously shrinking, towards and below 1.0μm, and sizes are now reaching a technological limit to meet required SNR performance [1-2]. 
SNR at low-light conditions, which is a key performance metric, is determined by the sensitivity and crosstalk in pixels. To improve sensitivity, pixel technology has migrated from frontside illumination (FSI) to backside illumination (BSI) as pixel size shrinks down. In BSI technology, it is very difficult to further increase the sensitivity in a pixel of near-1.0μm size because there are no structural obstacles for incident light from micro-lens to photodiode. Therefore the only way to improve low-light SNR is to reduce crosstalk, which makes the non-diagonal elements of the color-correction matrix (CCM) close to zero and thus reduces color noise [3]. The best way to improve crosstalk is to introduce a complete physical isolation between neighboring pixels, e.g., using deep-trench isolation (DTI). So far, a few attempts using DTI have been made to suppress silicon crosstalk. A backside DTI in as small as 1.12μm-pixel, which is formed in the BSI process, is reported in [4], but it is just an intermediate step in the DTI-related technology because it cannot completely prevent silicon crosstalk, especially for long wavelengths of light. On the other hand, front-side DTIs for FSI pixels [5] and BSI pixels [6] are reported. In [5], however, DTI is present not only along the periphery of each pixel, but also invades into the pixel so that it is inefficient in terms of gathering incident light and providing sufficient amount of photodiode area. In [6], the pixel size is as large as 2.0μm and it is hard to scale down with this technology for near 1.0μm pitch because DTI width imposes a critical limit on the sufficient amount of photodiode area for full-well capacity. Thus, a new technological advance is necessary to realize the ideal front DTI in a small size pixel near 1.0μm.", "title": "" }, { "docid": "60094e041c1be864ba8a636308b7ee12", "text": "This paper presents two chatbot systems, ALICE and Elizabeth, illustrating the dialogue knowledge representation and pattern matching techniques of each. We discuss the problems which arise when using the Dialogue Diversity Corpus to retrain a chatbot system with human dialogue examples. A Java program to convert from dialog transcript to AIML format provides a basic implementation of corpus-based chatbot training. We conclude that dialogue researchers should adopt clearer standards for transcription and markup format in dialogue corpora to be used in training a chatbot system more effectively.", "title": "" }, { "docid": "5591d4842507a097e353c67c7d56262d", "text": "Reasoning about entities and their relationships from multimodal data is a key goal of Artificial General Intelligence. The visual question answering (VQA) problem is an excellent way to test such reasoning capabilities of an AI model and its multimodal representation learning. However, the current VQA models are oversimplified deep neural networks, comprised of a long short-term memory (LSTM) unit for question comprehension and a convolutional neural network (CNN) for learning single image representation. We argue that the single visual representation contains a limited and general information about the image contents and thus limits the model reasoning capabilities. In this work we introduce a modular neural network model that learns a multimodal and multifaceted representation of the image and the question. 
The proposed model learns to use the multimodal representation to reason about the image entities and achieves a new state-of-the-art performance on both VQA benchmark datasets, VQA v1.0 and v2.0, by a wide margin.", "title": "" }, { "docid": "ce5fc5fbb3cb0fb6e65ca530bfc097b1", "text": "The Bulgarian electricity market rules require from the transmission system operator, to procure electricity for covering transmission grid losses on hourly base before day-ahead gate closure. In this paper is presented a software solution for day-ahead forecasting of hourly transmission losses that is based on statistical approach of the impacting factors correlations and uses as inputs numerical weather predictions.", "title": "" }, { "docid": "8e2006ca72dbc6be6592e21418b7f3ba", "text": "In this paper, we survey the techniques for image-based rendering. Unlike traditional 3D computer graphics in which 3D geometry of the scene is known, image-based rendering techniques render novel views directly from input images. Previous image-based rendering techniques can be classified into three categories according to how much geometric information is used: rendering without geometry, rendering with implicit geometry (i.e., correspondence), and rendering with explicit geometry (either with approximate or accurate geometry). We discuss the characteristics of these categories and their representative methods. The continuum between images and geometry used in image-based rendering techniques suggests that image-based rendering with traditional 3D graphics can be united in a joint image and geometry space.", "title": "" }, { "docid": "0bc0e621c58a79a7455f0849ccf41a02", "text": "With the adoption of power electronic converters in shipboard power systems and associated novel fault management concepts, the ability to isolate electric faults quickly from the power system is becoming more important than breaking high magnitude fault currents and the corresponding arcing between opening contacts within a switch. This allows for the design of substantially faster, as well as potentially lighter and more compact, mechanical disconnect switches. Herein, we are proposing a new class of mechanical disconnect switches that utilize piezoelectric actuators to isolate within less than one millisecond. This technology may become a key enabler for future all-electric ships.", "title": "" }, { "docid": "14fb71b01f86008f0772eabd52ea747a", "text": "This paper introduces a positioning system for walking persons, called \"Personal Dead-reckoning\" (PDR) system. The PDR system does not require GPS, beacons, or landmarks. The system is therefore useful in GPS-denied environments, such as inside buildings, tunnels, or dense forests. Potential users of the system are military and security personnel as well as emergency responders. The PDR system uses a 6-DOF inertial measurement unit (IMU) attached to the user's boot. The IMU provides rate-of-rotation and acceleration measurements that are used in real-time to estimate the location of the user relative to a known starting point. In order to reduce the most significant errors of this IMU-based system-caused by the bias drift of the accelerometers-we implemented a technique known as \"Zero Velocity Update\" (ZUPT). With the ZUPT technique and related signal processing algorithms, typical errors of our system are about 2% of distance traveled for short walks. This typical PDR system error is largely independent of the gait or speed of the user. 
When walking continuously for several minutes, the error increases gradually beyond 2%. The PDR system works in both 2-dimensional (2-D) and 3-D environments, although errors in Z-direction are usually larger than 2% of distance traveled. Earlier versions of our system used an impractically large IMU. In the most recent version we implemented a much smaller IMU. This paper discussed specific problems of this small IMU, our measures for eliminating these problems, and our first experimental results with the small IMU under different conditions.", "title": "" }, { "docid": "d041a5fc5f788b1abd8abf35a26cb5d2", "text": "In this paper, we analyze several neural network designs (and their variations) for sentence pair modeling and compare their performance extensively across eight datasets, including paraphrase identification, semantic textual similarity, natural language inference, and question answering tasks. Although most of these models have claimed state-of-the-art performance, the original papers often reported on only one or two selected datasets. We provide a systematic study and show that (i) encoding contextual information by LSTM and inter-sentence interactions are critical, (ii) Tree-LSTM does not help as much as previously claimed but surprisingly improves performance on Twitter datasets, (iii) the Enhanced Sequential Inference Model (Chen et al., 2017) is the best so far for larger datasets, while the Pairwise Word Interaction Model (He and Lin, 2016) achieves the best performance when less data is available. We release our implementations as an open-source toolkit.", "title": "" }, { "docid": "602077b20a691854102946757da4b287", "text": "For three-dimensional (3D) ultrasound imaging, connecting elements of a two-dimensional (2D) transducer array to the imaging system's front-end electronics is a challenge because of the large number of array elements and the small element size. To compactly connect the transducer array with electronics, we flip-chip bond a 2D 16 × 16-element capacitive micromachined ultrasonic transducer (CMUT) array to a custom-designed integrated circuit (IC). Through-wafer interconnects are used to connect the CMUT elements on the top side of the array with flip-chip bond pads on the back side. The IC provides a 25-V pulser and a transimpedance preamplifier to each element of the array. For each of three characterized devices, the element yield is excellent (99 to 100% of the elements are functional). Center frequencies range from 2.6 MHz to 5.1 MHz. For pulse-echo operation, the average -6-dB fractional bandwidth is as high as 125%. Transmit pressures normalized to the face of the transducer are as high as 339 kPa and input-referred receiver noise is typically 1.2 to 2.1 mPa/√Hz. The flip-chip bonded devices were used to acquire 3D synthetic aperture images of a wire-target phantom. Combining the transducer array and IC, as shown in this paper, allows for better utilization of large arrays, improves receive sensitivity, and may lead to new imaging techniques that depend on transducer arrays that are closely coupled to IC electronics.", "title": "" }, { "docid": "427c5f5825ca06350986a311957c6322", "text": "Machine learning based systems are increasingly being used for sensitive tasks such as security surveillance, guiding autonomous vehicles, taking investment decisions, detecting and blocking network intrusion and malware etc. 
However, recent research has shown that machine learning models are vulnerable to attacks by adversaries at all phases of machine learning (e.g., training data collection, training, operation). All model classes of machine learning systems can be misled by providing carefully crafted inputs making them wrongly classify inputs. Maliciously created input samples can affect the learning process of an ML system by either slowing the learning process, or affecting the performance of the learned model, or causing the system to make errors only in the attacker’s planned scenario. Because of these developments, understanding the security of machine learning algorithms and systems is emerging as an important research area among computer security and machine learning researchers and practitioners. We present a survey of this emerging area.", "title": "" }, { "docid": "b7ca3a123963bb2f0bfbe586b3bc63d0", "text": "Objective In symptom-dependent diseases such as functional dyspepsia (FD), matching the pattern of epigastric symptoms, including severity, kind, and perception site, between patients and physicians is critical. Additionally, a comprehensive examination of the stomach, duodenum, and pancreas is important for evaluating the origin of such symptoms. Methods FD-specific symptoms (epigastric pain, epigastric burning, early satiety, and postprandial fullness) and other symptoms (regurgitation, nausea, belching, and abdominal bloating) as well as the perception site of the above symptoms were investigated in healthy subjects using a new questionnaire with an illustration of the human body. A total of 114 patients with treatment-resistant dyspeptic symptoms were evaluated for their pancreatic exocrine function using N-benzoyl-L-tyrosyl-p-aminobenzoic acid. Results A total of 323 subjects (men:women, 216:107; mean age, 52.1 years old) were initially enrolled. Most of the subjects felt the FD-specific symptoms at the epigastrium, while about 20% felt them at other abdominal sites. About 30% of the symptoms expressed as epigastric symptoms were FD-nonspecific symptoms. At the epigastrium, epigastric pain and epigastric burning were mainly felt at the upper part, and postprandial fullness and early satiety were felt at the lower part. The prevalence of patients with pancreatic exocrine dysfunction was 71% in the postprandial fullness group, 68% in the epigastric pain group, and 82% in the diarrhea group. Conclusion We observed a mismatch in the perception site and expression between the epigastric symptoms of healthy subjects and FD-specific symptoms. Postprandial symptoms were often felt at the lower part of the epigastrium, and pancreatic exocrine dysfunction may be involved in the FD symptoms, especially for treatment-resistant dyspepsia patients.", "title": "" }, { "docid": "6ab5678d7f4bcb0d686ca3f384381134", "text": "We present a TTS neural network that is able to produce speech in multiple languages. The proposed network is able to transfer a voice, which was presented as a sample in a source language, into one of several target languages. Training is done without using matching or parallel data, i.e., without samples of the same speaker in multiple languages, making the method much more applicable. The conversion is based on learning a polyglot network that has multiple per-language sub-networks and adding loss terms that preserve the speaker’s identity in multiple languages. 
We evaluate the proposed polyglot neural network for three languages with a total of more than 400 speakers and demonstrate convincing conversion capabilities.", "title": "" }, { "docid": "e2f2961ab8c527914c3d23f8aa03e4bf", "text": "Pedestrian detection based on the combination of convolutional neural network (CNN) and traditional handcrafted features (i.e., HOG+LUV) has achieved great success. In general, HOG+LUV are used to generate the candidate proposals and then CNN classifies these proposals. Despite its success, there is still room for improvement. For example, CNN classifies these proposals by the fully connected layer features, while proposal scores and the features in the inner-layers of CNN are ignored. In this paper, we propose a unifying framework called multi-layer channel features (MCF) to overcome the drawback. It first integrates HOG+LUV with each layer of CNN into a multi-layer image channels. Based on the multi-layer image channels, a multi-stage cascade AdaBoost is then learned. The weak classifiers in each stage of the multi-stage cascade are learned from the image channels of corresponding layer. Experiments on Caltech data set, INRIA data set, ETH data set, TUD-Brussels data set, and KITTI data set are conducted. With more abundant features, an MCF achieves the state of the art on Caltech pedestrian data set (i.e., 10.40% miss rate). Using new and accurate annotations, an MCF achieves 7.98% miss rate. As many non-pedestrian detection windows can be quickly rejected by the first few stages, it accelerates detection speed by 1.43 times. By eliminating the highly overlapped detection windows with lower scores after the first stage, it is 4.07 times faster than negligible performance loss.", "title": "" }, { "docid": "796625110c6e97f4ff834cfe04c784fe", "text": "This paper addresses the large-scale visual font recognition (VFR) problem, which aims at automatic identification of the typeface, weight, and slope of the text in an image or photo without any knowledge of content. Although visual font recognition has many practical applications, it has largely been neglected by the vision community. To address the VFR problem, we construct a large-scale dataset containing 2,420 font classes, which easily exceeds the scale of most image categorization datasets in computer vision. As font recognition is inherently dynamic and open-ended, i.e., new classes and data for existing categories are constantly added to the database over time, we propose a scalable solution based on the nearest class mean classifier (NCM). The core algorithm is built on local feature embedding, local feature metric learning and max-margin template selection, which is naturally amenable to NCM and thus to such open-ended classification problems. The new algorithm can generalize to new classes and new data at little added cost. Extensive experiments demonstrate that our approach is very effective on our synthetic test images, and achieves promising results on real world test images.", "title": "" } ]
scidocsrr
d365c393d9a4dafe5cafa0a7cbe7a523
Using hidden Markov models for topic segmentation of meeting transcripts
[ { "docid": "0b0614f88f849aa5ecf135dcee55528a", "text": "This paper introduces a new statistical approach to automatically partitioning text into coherent segments. The approach is based on a technique that incrementally builds an exponential model to extract features that are correlated with the presence of boundaries in labeled training text. The models use two classes of features: topicality features that use adaptive language models in a novel way to detect broad changes of topic, and cue-word features that detect occurrences of specific words, which may be domain-specific, that tend to be used near segment boundaries. Assessment of our approach on quantitative and qualitative grounds demonstrates its effectiveness in two very different domains, Wall Street Journal news articles and television broadcast news story transcripts. Quantitative results on these domains are presented using a new probabilistically motivated error metric, which combines precision and recall in a natural and flexible way. This metric is used to make a quantitative assessment of the relative contributions of the different feature types, as well as a comparison with decision trees and previously proposed text segmentation algorithms.", "title": "" }, { "docid": "f4380a5acaba5b534d13e1a4f09afe4f", "text": "Several approaches to automatic speech summarization are discussed below, using the ICSI Meetings corpus. We contrast feature-based approaches using prosodic and lexical features with maximal marginal relevance and latent semantic analysis approaches to summarization. While the latter two techniques are borrowed directly from the field of text summarization, feature-based approaches using prosodic information are able to utilize characteristics unique to speech data. We also investigate how the summarization results might deteriorate when carried out on ASR output as opposed to manual transcripts. All of the summaries are of an extractive variety, and are compared using the software ROUGE.", "title": "" } ]
[ { "docid": "579333c5b2532b0ad04d0e3d14968a54", "text": "We present a learning to rank approach to classify folktales, such as fairy tales and urban legends, according to their story type, a concept that is widely used by folktale researchers to organize and classify folktales. A story type represents a collection of similar stories often with recurring plot and themes. Our work is guided by two frequently used story type classification schemes. Contrary to most information retrieval problems, the text similarity in this problem goes beyond topical similarity. We experiment with approaches inspired by distributed information retrieval and features that compare subject-verb-object triplets. Our system was found to be highly effective compared with a baseline system.", "title": "" }, { "docid": "869f52723b215ba8dc5c4c614b2c79a6", "text": "Cellular systems are becoming more heterogeneous with the introduction of low power nodes including femtocells, relays, and distributed antennas. Unfortunately, the resulting interference environment is also becoming more complicated, making evaluation of different communication strategies challenging in both analysis and simulation. Leveraging recent applications of stochastic geometry to analyze cellular systems, this paper proposes to analyze downlink performance in a fixed-size cell, which is inscribed within a weighted Voronoi cell in a Poisson field of interferers. A nearest out-of-cell interferer, out-of-cell interferers outside a guard region, and cross-tier interferers are included in the interference calculations. Bounding the interference power as a function of distance from the cell center, the total interference is characterized through its Laplace transform. An equivalent marked process is proposed for the out-of-cell interference under additional assumptions. To facilitate simplified calculations, the interference distribution is approximated using the Gamma distribution with second order moment matching. The Gamma approximation simplifies calculation of the success probability and average rate, incorporates small-scale and large-scale fading, and works with co-tier and cross-tier interference. Simulations show that the proposed model provides a flexible way to characterize outage probability and rate as a function of the distance to the cell edge.", "title": "" }, { "docid": "482ff6c78f7b203125781f5947990845", "text": "TH1 and TH17 cells mediate neuroinflammation in experimental autoimmune encephalomyelitis (EAE), a mouse model of multiple sclerosis. Pathogenic TH cells in EAE must produce the pro-inflammatory cytokine granulocyte-macrophage colony stimulating factor (GM-CSF). TH cell pathogenicity in EAE is also regulated by cell-intrinsic production of the immunosuppressive cytokine interleukin 10 (IL-10). Here we demonstrate that mice deficient for the basic helix-loop-helix (bHLH) transcription factor Bhlhe40 (Bhlhe40(-/-)) are resistant to the induction of EAE. Bhlhe40 is required in vivo in a T cell-intrinsic manner, where it positively regulates the production of GM-CSF and negatively regulates the production of IL-10. In vitro, GM-CSF secretion is selectively abrogated in polarized Bhlhe40(-/-) TH1 and TH17 cells, and these cells show increased production of IL-10. Blockade of IL-10 receptor in Bhlhe40(-/-) mice renders them susceptible to EAE. 
These findings identify Bhlhe40 as a critical regulator of autoreactive T-cell pathogenicity.", "title": "" }, { "docid": "5b88a7f862eab6fc632a506bbb99be70", "text": "In this paper we propose a methodology to control a novel class of actuators that we called passive noise rejection variable stiffness actuators (pnrVSA). Differently from nowadays classical VSA designs, this novel class of actuators mimics the human musculoskeletal ability to increase noise rejection without relying on feedback. To fully highlight the potentialities behind these actuators we consider movement planning under two constraints: (1) absence of feedback, i.e. purely open-loop planning1; (2) uncertain dynamic model. Under these constraints, movement planning can be formalized as an open-loop stochastic optimal control. Due to the lack of classical methods forcing the open-loop nature of the computed solution, we used here a slight modification of available methodologies based on importance sampling of trajectories using forward diffusion processes. Simulations show that the proposed algorithm can be effectively used to plan open-loop movements with pnrVSA. In particular, two different scenarios are considered: the control of a single joint pnrVSA and the control of a two degrees of freedom planar arm equipped with antagonist pnrVSAs at each joint. In both cases, movement has to be planned in presence of uncertain dynamics for unstable tasks. It is shown that open-loop stochastic optimal control can modulate the intrinsic stiffness of the system to cope with both instability and noise.", "title": "" }, { "docid": "33f86056827e1e8958ab17e11d7e4136", "text": "The successful integration of Information and Communications Technology (ICT) into the teaching and learning of English Language is largely dependent on the level of teacher’s ICT competence, the actual utilization of ICT in the language classroom and factors that challenge teachers to use it in language teaching. The study therefore assessed the Secondary School English language teachers’ ICT literacy, the extent of ICT utilization in English language teaching and the challenges that prevent language teachers to integrate ICT in teaching. To answer the problems, three sets of survey questionnaires were distributed to 30 English teachers in the 11 schools of Cluster 1 (CarCanMadCarLan). Data gathered were analyzed using descriptive statistics and frequency count. The results revealed that the teachers’ ICT literacy was moderate. The findings provided evidence that there was only a limited use of ICT in language teaching. Feedback gathered from questionnaires show that teachers faced many challenges that demotivate them from using ICT in language activities. Based on these findings, it is recommended the teachers must be provided with intensive ICT-based trainings to equip them with knowledge of ICT and its utilization in language teaching. School administrators as well as stakeholders may look for interventions to upgrade school’s ICTbased resources for its optimum use in teaching and learning. Most importantly, a larger school-wide ICT development plan may be implemented to ensure coherence of ICT implementation in the teaching-learning activities. 
‘ICT & Innovations in Education’ International Journal International Electronic Journal | ISSN 2321 – 7189 | www.ictejournal.com Volume 2, Issue 1 | February 2014", "title": "" }, { "docid": "34d6b5908b68bcba17edac3abaa1fe8e", "text": "This paper provides a survey of modern LIght Detection And Ranging (LIDAR) sensors from a perspective of how they can be used for spacecraft relative navigation. In addition to LIDAR technology commonly used in space applications today (e.g. scanning, flash), this paper reviews emerging LIDAR technologies gaining traction in other non-aerospace fields. The discussion will include an overview of sensor operating principles and specific pros/cons for each type of LIDAR. This paper provides a comprehensive review of LIDAR technology as applied specifically to spacecraft relative navigation.", "title": "" }, { "docid": "e2b3001513059a02cf053cadab6abb85", "text": "Data mining is the process of discovering meaningful new correlation, patterns and trends by sifting through large amounts of data, using pattern recognition technologies as well as statistical and mathematical techniques. Cluster analysis is often used as one of the major data analysis technique widely applied for many practical applications in emerging areas of data mining. Two of the most delegated, partition based clustering algorithms namely k-Means and Fuzzy C-Means are analyzed in this research work. These algorithms are implemented by means of practical approach to analyze its performance, based on their computational time. The telecommunication data is the source data for this analysis. The connection oriented broad band data is used to find the performance of the chosen algorithms. The distance (Euclidian distance) between the server locations and their connections are rearranged after processing the data. The computational complexity (execution time) of each algorithm is analyzed and the results are compared with one another. By comparing the result of this practical approach, it was found that the results obtained are more accurate, easy to understand and above all the time taken to process the data was substantially high in Fuzzy C-Means algorithm than the k-Means. © 2014 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "2dde5d26ab14ee6be365b23402cc13e1", "text": "Compressive sensing is a revolutionary idea proposed recently to achieve much lower sampling rate for sparse signals. For large wireless sensor networks, the events are relatively sparse compared with the number of sources. Because of deployment cost, the number of sensors is limited, and due to energy constraint, not all the sensors are turned on all the time. In this paper, the first contribution is to formulate the problem for sparse event detection in wireless sensor networks as a compressive sensing problem. The number of (wake-up) sensors can be greatly reduced to the similar level of the number of sparse events, which is much smaller than the total number of sources. Second, we suppose the event has the binary nature, and employ the Bayesian detection using this prior information. Finally, we analyze the performance of the compressive sensing algorithms under the Gaussian noise. From the simulation results, we show that the sampling rate can reduce to 25% without sacrificing performance. With further decreasing the sampling rate, the performance is gradually reduced until 10% of sampling rate. 
Our proposed detection algorithm has much better performance than the l1-magic algorithm proposed in the literature.", "title": "" }, { "docid": "20ecae219ecf21429fb7c2697339fe50", "text": "Massively multiplayer game holds a huge market in the digital entertainment industry. Companies invest heavily in the game and graphics development since a successful online game can attract million of users, and this translates to a huge investment payoff. However, multiplayer online game is also subjected to various forms of hacks and cheats. Hackers can alter the graphic rendering to reveal information otherwise be hidden in a normal game, or cheaters can use software robot to play the game automatically and gain an unfair advantage. Currently, some popular online games release software patches or incorporate anti-cheating software to detect known cheats. This not only creates deployment difficulty but new cheats will still be able to breach the normal game logic until software patches are available. Moreover, the anti-cheating software themselves are also vulnerable to hacks. In this paper, we propose a scalable and efficient method to detect whether a player is cheating or not. The methodology is based on the dynamic Bayesian network approach. The detection framework relies solely on the game states and runs in the game server only. Therefore it is invulnerable to hacks and it is a much more deployable solution. To demonstrate the effectiveness of the propose method, we implement a prototype multiplayer game system and to detect whether a player is using the “aiming robot” for cheating or not. Experiments show that not only we can effectively detect cheaters, but the false positive rate is extremely low. We believe the proposed methodology and the prototype system provide a first step toward a systematic study of cheating detection and security research in the area of online multiplayer games.", "title": "" }, { "docid": "d60deca88b46171ad940b9ee8964dc77", "text": "Established in 1987, the EuroQol Group initially comprised a network of international, multilingual and multidisciplinary researchers from seven centres in Finland, the Netherlands, Norway, Sweden and the UK. Nowadays, the Group comprises researchers from Canada, Denmark, Germany, Greece, Japan, New Zealand, Slovenia, Spain, the USA and Zimbabwe. The process of shared development and local experimentation resulted in EQ-5D, a generic measure of health status that provides a simple descriptive profile and a single index value that can be used in the clinical and economic evaluation of health care and in population health surveys. Currently, EQ-5D is being widely used in different countries by clinical researchers in a variety of clinical areas. EQ-5D is also being used by eight out of the first 10 of the top 50 pharmaceutical companies listed in the annual report of Pharma Business (November/December 1999). Furthermore, EQ-5D is one of the handful of measures recommended for use in cost-effectiveness analyses by the Washington Panel on Cost Effectiveness in Health and Medicine. EQ-5D has now been translated into most major languages with the EuroQol Group closely monitoring the process.", "title": "" }, { "docid": "1c17535a4f1edc36b698295136e9711a", "text": "Massive digital acquisition and preservation of deteriorating historical and artistic documents is of particular importance due to their value and fragile condition. 
The study and browsing of such digital libraries is invaluable for scholars in the Cultural Heritage field but requires automatic tools for analyzing and indexing these datasets. We present two completely automatic methods requiring no human intervention: text height estimation and text line extraction. Our proposed methods have been evaluated on a huge heterogeneous corpus of illuminated medieval manuscripts of different writing styles and with various problematic attributes, such as holes, spots, ink bleed-through, ornamentation, background noise, and overlapping text lines. Our experimental results demonstrate that these two new methods are efficient and reliable, even when applied to very noisy and damaged old handwritten manuscripts.", "title": "" }, { "docid": "cdc1e3b629659bf342def1f262d7aa0b", "text": "In educational contexts, understanding the student’s learning must take account of the student’s construction of reality. Reality as experienced by the student has an important additional value. This assumption also applies to a student’s perception of evaluation and assessment. Students’ study behaviour is not only determined by the examination or assessment modes that are used. Students’ perceptions about evaluation methods also play a significant role. This review aims to examine evaluation and assessment from the student’s point of view. Research findings reveal that students’ perceptions about assessment significantly influence their approaches to learning and studying. Conversely, students’ approaches to study influence the ways in which they perceive evaluation and assessment. Findings suggest that students hold strong views about different assessment and evaluation formats. In this respect students favour multiple-choice format exams to essay type questions. However, when compared with more innovative assessment methods, students call the ‘fairness’ of these well-known evaluation modes into question.", "title": "" }, { "docid": "645f49ff21d31bb99cce9f05449df0d7", "text": "The growing popularity of the JSON format has fueled increased interest in loading and processing JSON data within analytical data processing systems. However, in many applications, JSON parsing dominates performance and cost. In this paper, we present a new JSON parser called Mison that is particularly tailored to this class of applications, by pushing down both projection and filter operators of analytical queries into the parser. To achieve these features, we propose to deviate from the traditional approach of building parsers using finite state machines (FSMs). Instead, we follow a two-level approach that enables the parser to jump directly to the correct position of a queried field without having to perform expensive tokenizing steps to find the field. At the upper level, Mison speculatively predicts the logical locations of queried fields based on previously seen patterns in a dataset. At the lower level, Mison builds structural indices on JSON data to map logical locations to physical locations. Unlike all existing FSM-based parsers, building structural indices converts control flow into data flow, thereby largely eliminating inherently unpredictable branches in the program and exploiting the parallelism available in modern processors. 
We experimentally evaluate Mison using representative real-world JSON datasets and the TPC-H benchmark, and show that Mison produces significant performance benefits over the best existing JSON parsers; in some cases, the performance improvement is over one order of magnitude.", "title": "" }, { "docid": "25346cdef3e97173dab5b5499c4d4567", "text": "The key limiting factor in graphical model inference and learning is the complexity of the partition function. We thus ask the question: what are the most general conditions under which the partition function is tractable? The answer leads to a new kind of deep architecture, which we call sum-product networks (SPNs) and will present in this abstract.", "title": "" }, { "docid": "14276adf4f5b3538f95cfd10902825ef", "text": "Subband adaptive filtering (SAF) techniques play a prominent role in designing active noise control (ANC) systems. They reduce the computational complexity of ANC algorithms, particularly, when the acoustic noise is a broadband signal and the system models have long impulse responses. In the commonly used uniform-discrete Fourier transform (DFT)-modulated (UDFTM) filter banks, increasing the number of subbands decreases the computational burden but can introduce excessive distortion, degrading performance of the ANC system. In this paper, we propose a new UDFTM-based adaptive subband filtering method that alleviates the degrading effects of the delay and side-lobe distortion introduced by the prototype filter on the system performance. The delay in filter bank is reduced by prototype filter design and the side-lobe distortion is compensated for by oversampling and appropriate stacking of subband weights. Experimental results show the improvement of performance and computational complexity of the proposed method in comparison to two commonly used subband and block adaptive filtering algorithms.", "title": "" }, { "docid": "4d8cc4d8a79f3d35ccc800c9f4f3dfdc", "text": "Many common events in our daily life affect us in positive and negative ways. For example, going on vacation is typically an enjoyable event, while being rushed to the hospital is an undesirable event. In narrative stories and personal conversations, recognizing that some events have a strong affective polarity is essential to understand the discourse and the emotional states of the affected people. However, current NLP systems mainly depend on sentiment analysis tools, which fail to recognize many events that are implicitly affective based on human knowledge about the event itself and cultural norms. Our goal is to automatically acquire knowledge of stereotypically positive and negative events from personal blogs. Our research creates an event context graph from a large collection of blog posts and uses a sentiment classifier and semi-supervised label propagation algorithm to discover affective events. We explore several graph configurations that propagate affective polarity across edges using local context, discourse proximity, and event-event co-occurrence. We then harvest highly affective events from the graph and evaluate the agreement of the polarities with human judgements.", "title": "" }, { "docid": "4c563b09a10ce0b444edb645ce411d42", "text": "Privacy and security are two important but seemingly contradictory objectives in a pervasive computing environment (PCE). On one hand, service providers want to authenticate legitimate users and make sure they are accessing their authorized services in a legal way. 
On the other hand, users want to maintain the necessary privacy without being tracked down for wherever they are and whatever they are doing. In this paper, a novel privacy preserving authentication and access control scheme to secure the interactions between mobile users and services in PCEs is proposed. The proposed scheme seamlessly integrates two underlying cryptographic primitives, namely blind signature and hash chain, into a highly flexible and lightweight authentication and key establishment protocol. The scheme provides explicit mutual authentication between a user and a service while allowing the user to anonymously interact with the service. Differentiated service access control is also enabled in the proposed scheme by classifying mobile users into different service groups. The correctness of the proposed authentication and key establishment protocol is formally verified based on Burrows-Abadi-Needham logic", "title": "" }, { "docid": "28d1e4683ea4a3261f6a8a24f2870479", "text": "Memetic computation is a paradigm that uses the notion of meme(s) as units of information encoded in computational representations for the purpose of problem-solving. It covers a plethora of potentially rich meme-inspired computing methodologies, frameworks and operational algorithms including simple hybrids, adaptive hybrids and memetic automaton. In this paper, a comprehensive multi-facet survey of recent research in memetic computation is presented.", "title": "" }, { "docid": "13cbca0e2780a95c1e9d4928dc9d236c", "text": "Matching user accounts can help us build better users’ profiles and benefit many applications. It has attracted much attention from both industry and academia. Most of existing works are mainly based on rich user profile attributes. However, in many cases, user profile attributes are unavailable, incomplete or unreliable, either due to the privacy settings or just because users decline to share their information. This makes the existing schemes quite fragile. Users often share their activities on different social networks. This provides an opportunity to overcome the above problem. We aim to address the problem of user identification based on User Generated Content (UGC). We first formulate the problem of user identification based on UGCs and then propose a UGC-based user identification model. A supervised machine learning based solution is presented. It has three steps: firstly, we propose several algorithms to measure the spatial similarity, temporal similarity and content similarity of two UGCs; secondly, we extract the spatial, temporal and content features to exploit these similarities; afterwards, we employ the machine learning method to match user accounts, and conduct the experiments on three ground truth datasets. The results show that the proposed method has given excellent performance with F1 values reaching 89.79%, 86.78% and 86.24% on three ground truth datasets, respectively. This work presents the possibility of matching user accounts with high accessible online data. © 2018 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "2c3bdb3dc3bf4aedc36a49e82a2dca50", "text": "We report the implementation of a text input application (speller) based on the P300 event related potential. We obtain high accuracies by using an SVM classifier and a novel feature. These techniques enable us to maintain fast performance without sacrificing the accuracy, thus making the speller usable in an online mode. 
In order to further improve the usability, we perform various studies on the data with a view to minimizing the training time required. We present data collected from nine healthy subjects, along with the high accuracies (of the order of 95% or more) measured online. We show that the training time can be further reduced by a factor of two from its current value of about 20 min. High accuracy, fast learning, and online performance make this P300 speller a potential communication tool for severely disabled individuals, who have lost all other means of communication and are otherwise cut off from the world, provided their disability does not interfere with the performance of the speller.", "title": "" } ]
scidocsrr
1d789f197e86684157d68543178be045
Hotel reviews sentiment analysis based on word vector clustering
[ { "docid": "1434ac827bebb684682d527b92721354", "text": "Clustering is central to many image processing and remote sensing applications. ISODATA is one of the most popular and widely used clustering methods in geoscience applications, but it can run slowly, particularly with large data sets. We present a more efficient approach to ISODATA clustering, which achieves better running times by storing the points in a kd-tree and through a modification of the way in which the algorithm estimates the dispersion of each cluster. We also present an approximate version of the algorithm which allows the user to further improve the running time, at the expense of lower fidelity in computing the nearest cluster center to each point. We provide both theoretical and empirical justification that our modified approach produces clusterings that are very similar to those produced by the standard ISODATA approach. We also provide empirical studies on both synthetic data and remotely sensed Landsat and MODIS images that show that our approach has significantly lower running times. *A preliminary version of this paper appeared in Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS'O3), Toulouse, France, 2003, Vol. 111, 2057-2059. +NASA Goddard Space Flight Center, Architecture and Automation Branch, Greenbelt, MD 20771 and Department of Computer Science, University of Maryland, College Park, Maryland, 20742. Email: nargess@cs.umd.edu. $ ~ e ~ a r t m e n t of Computer Science, University df Maryland, College Park, Maryland, 20742. The work of this author was supported by the Science Foundation under grant CCR-0098151. Email: mount@cs.umd.edu. l ~ e ~ a r t m e n t of Computer Science, Bar-Ilan University, Ramat-Gan 52900, Israel, and Center for Automation Research, University of Maryland, College Park, Maryland, 20742. Email: nathanOcs.biu.ac.il. NASA Goddard Space Flight Center, previously Applied Information Sciences Branch, currently Advanced Architectures and Automation Branch, Greenbelt, MD 20771. Email: Jacqueline.LeMoigne@nasa.gov. https://ntrs.nasa.gov/search.jsp?R=20070038185 2017-12-21T21:41:30+00:00Z", "title": "" }, { "docid": "38aa324964214620c55eb4edfecf1bd2", "text": "This paper presents ROC curve, lift chart and calibration plot, three well known graphical techniques that are useful for evaluating the quality of classification models used in data mining and machine learning. Each technique, normally used and studied separately, defines its own measure of classification quality and its visualization. Here, we give a brief survey of the methods and establish a common mathematical framework which adds some new aspects, explanations and interrelations between these techniques. We conclude with an empirical evaluation and a few examples on how to use the presented techniques to boost classification accuracy.", "title": "" }, { "docid": "6651777a7843a59ef2365dfc811d7cde", "text": "As the widespread use of computers and the high-speed development of the Internet, E-Commerce has already penetrated as a part of our daily life. For a popular product, there are a large number of reviews. This makes it difficult for a potential customer to make an informed decision on purchasing the product, as well as for the manufacturer of the product to keep track and to manage customer opinions. 
In this paper, we pay attention to online hotel reviews, and propose a supervised machine learning approach using unigram feature with two types of information (frequency and TF-IDF) to realize polarity classification of documents. As shown in our experimental results, the information of TF-IDF is more effective than frequency.", "title": "" } ]
[ { "docid": "e3e75689d9425ea04db2de83bbfc9102", "text": "Recently, with the advent of location-based social networking services (LBSNs), travel planning and location-aware information recommendation based on LBSNs have attracted much research attention. In this paper, we study the impact of social relations hidden in LBSNs, i.e., The social influence of friends. We propose a new social influence-based user recommender framework (SIR) to discover the potential value from reliable users (i.e., Close friends and travel experts). Explicitly, our SIR framework is able to infer influential users from an LBSN. We claim to capture the interactions among virtual communities, physical mobility activities and time effects to infer the social influence between user pairs. Furthermore, we intend to model the propagation of influence using diffusion-based mechanism. Moreover, we have designed a dynamic fusion framework to integrate the features mined into a united follow probability score. Finally, our SIR framework provides personalized top-k user recommendations for individuals. To evaluate the recommendation results, we have conducted extensive experiments on real datasets (i.e., The Go Walla dataset). The experimental results show that the performance of our SIR framework is better than the state-of the-art user recommendation mechanisms in terms of accuracy and reliability.", "title": "" }, { "docid": "15884b99bf0f288377bd1fe01423bdfd", "text": "This is an innovative work for the field of web usage mining. The main feature of our work a complete framework and findings in mining Web usage patterns from Web log files of a real Web site that has all the difficult aspects of real-life Web usage mining, including developing user profiles and external data describing an ontology of the Web content. We are presenting a method for discovering and tracking evolving user profiles. Profiles are also enriched with other domain-specific information facets that give a panoramic view of the discovered mass usage modes. An objective validation plan is also used to assess the quality of the mined profiles, in particular their adaptability in the face of evolving user behaviour. Keywords— Web mining, Cookies, Session.", "title": "" }, { "docid": "3f807cb7e753ebd70558a0ce74b416b7", "text": "In this paper, we study the problem of recovering a tensor with missing data. We propose a new model combining the total variation regularization and low-rank matrix factorization. A block coordinate decent (BCD) algorithm is developed to efficiently solve the proposed optimization model. We theoretically show that under some mild conditions, the algorithm converges to the coordinatewise minimizers. Experimental results are reported to demonstrate the effectiveness of the proposed model and the efficiency of the numerical scheme. © 2015 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "e599fa394befb387f9148a840bfbe308", "text": "Social media is becoming a major and popular technological platform that allows users to express personal opinions toward the subjects with shared interests, opinion are good for decision making to People would want to know others' opinion before taking a decision, while corporate would like to monitor pulse of people in a social media about their products and services and take appropriate actions. This paper reviewed about world are realizing that e-commerce is not just buying and selling over Internet, rather it is improve the efficiency to compete with other giants in the market. 
Their opinions on specific topic are inevitably dependent on many social effects such as user preference on topics, peer influence, user profile information.", "title": "" }, { "docid": "09c5da2fbf8a160ba27221ff0c5417ac", "text": " The burst fracture of the spine was first described by Holdsworth in 1963 and redefined by Denis in 1983 as being a fracture of the anterior and middle columns of the spine with or without an associated posterior column fracture. This injury has received much attention in the literature as regards its radiological diagnosis and also its clinical managment. The purpose of this article is to review the way that imaging has been used both to diagnose the injury and to guide management. Current concepts of the stability of this fracture are presented and our experience in the use of magnetic resonance imaging in deciding treatment options is discussed.", "title": "" }, { "docid": "cebdedb344f2ba7efb95c2933470e738", "text": "To address this shortcoming, we propose a method for training binary neural networks with a mixture of bits, yielding effectively fractional bitwidths. We demonstrate that our method is not only effective in allowing finer tuning of the speed to accuracy trade-off, but also has inherent representational advantages. Middle-Out Algorithm Heterogeneous Bitwidth Binarization in Convolutional Neural Networks", "title": "" }, { "docid": "177f95dc300186f519bd3ac48081a6e0", "text": "TAI's multi-sensor fusion technology is accelerating the development of accurate MEMS sensor-based inertial navigation in situations where GPS does not operate reliably (GPS-denied environments). TAI has demonstrated that one inertial device per axis is not sufficient to produce low drift errors for long term accuracy needed for GPS-denied applications. TAI's technology uses arrays of off-the-shelf MEMS inertial sensors to create an inertial measurement unit (IMU) suitable for inertial navigation systems (INS) that require only occasional GPS updates. Compared to fiber optics gyros, properly combined MEMS gyro arrays are lower cost, fit into smaller volume, use less power and have equal or better performance. The patents TAI holds address this development for both gyro and accelerometer arrays. Existing inertial measurement units based on such array combinations, the backbone of TAI's inertial navigation system (INS) design, have demonstrated approximately 100 times lower sensor drift error to support very accurate angular rates, very accurate position measurements, and very low angle error for long durations. TAI's newest, fourth generation, product occupies small volume, has low weight, and consumes little power. The complete assembly can be potted in a protective sheath to form a rugged standalone product. An external exoskeleton case protects the electronic assembly for munitions and UAV applications. TAI's IMU/INS will provide the user with accurate real-time navigation information in difficult situations where GPS is not reliable. The key to such accurate performance is to achieve low sensor drift errors. The INS responds to quick movements without introducing delays while sharply reducing sensor drift errors that result in significant navigation errors. 
Discussed in the paper are the physical characteristics of the IMU, an overview of the system design, TAI's systematic approach to drift reduction and some early results of applying a sigma point Kalman filter to sustain low gyro drift.", "title": "" }, { "docid": "1d6e23fedc5fa51b5125b984e4741529", "text": "Human action recognition from well-segmented 3D skeleton data has been intensively studied and is attracting increasing attention. Online action detection goes one step further and is more challenging: it identifies the action type and localizes the action positions on the fly from an untrimmed stream. In this paper, we study the problem of online action detection from streaming skeleton data. We propose a multi-task end-to-end Joint Classification-Regression Recurrent Neural Network to better explore the action type and temporal localization information. By employing a joint classification and regression optimization objective, this network is capable of automatically localizing the start and end points of actions more accurately. Specifically, by leveraging the merits of the deep Long Short-Term Memory (LSTM) subnetwork, the proposed model automatically captures the complex long-range temporal dynamics, which naturally avoids the typical sliding window design and thus ensures high computational efficiency. Furthermore, the subtask of regression optimization provides the ability to forecast the action prior to its occurrence. To evaluate our proposed model, we build a large streaming video dataset with annotations. Experimental results on our dataset and the public G3D dataset both demonstrate very promising performance of our scheme.", "title": "" }, { "docid": "2c5cab6e37ad905e0e3576259c4357ff", "text": "Classification and regression as data mining techniques for predicting disease outbreaks have been adopted in health institutions, giving these institutions better opportunities for planning the treatment of diseases. However, there is a need to develop a strong model for predicting disease outbreaks in datasets from various countries by filling the gaps in existing data mining techniques, since the majority of models rely on a single data mining technique whose prediction accuracy is not maximized, and predictions are still few. This paper presents a survey and analysis of existing classification and regression techniques that have been applied to disease outbreak prediction in datasets.", "title": "" }, { "docid": "c956c6d99053b44557cfed93f12dc1bc", "text": "We present a device demonstrating a lithographically patterned transmon integrated with a micromachined cavity resonator. Our two-cavity, one-qubit device is a multilayer microwave-integrated quantum circuit (MMIQC), comprising a basic unit capable of performing circuit-QED operations. We describe the qubit-cavity coupling mechanism of a specialized geometry using an electric-field picture and a circuit model, and obtain specific system parameters using simulations. Fabrication of the MMIQC includes lithography, etching, and metallic bonding of silicon wafers. Superconducting wafer bonding is a critical capability that is demonstrated by a micromachined storage-cavity lifetime of 34.3 μs, corresponding to a quality factor of 2 × 10 at single-photon energies. The transmon coherence times are T1 = 6.4 μs and T2,echo = 11.7 μs. 
We measure the qubit-cavity dispersive coupling rate χqμ/2π = −1.17 MHz, constituting a Jaynes-Cummings system with an interaction strength g/2π = 49 MHz. With these parameters we are able to demonstrate circuit-QED operations in the strong dispersive regime with ease. Finally, we highlight several improvements and anticipated extensions of the technology to complex MMIQCs.", "title": "" }, { "docid": "ff4c2f1467a141894dbe76491bc06d3b", "text": "Railways are the major means of transport in most countries. Rails are the backbone of the track structure and should be protected from defects. Surface defects are irregularities in the rails caused by the shear stresses between the rails and the wheels of the trains. These defects should be detected to avoid rail fractures. The objective of this paper is to propose an innovative technique to detect surface defects on rail heads. In order to identify the defects, it is essential to extract the rails from the background and further enhance the image for thresholding. The proposed method uses the Binary Image Based Rail Extraction (BIBRE) algorithm to extract the rails from the background. The extracted rails are enhanced to achieve a uniform background with the help of a direct enhancement method, which improves the image by enhancing the brightness difference between objects and their backgrounds. Gabor filters are then applied to the enhanced rail image to identify the defects; they maximize the energy difference between defective and defect-free surfaces. Thresholding is done based on the energy of the defects. From the thresholded image the defects are identified, and a message box is generated when defects are present.", "title": "" }, { "docid": "026f146c87f4b2f4a63789b8c08a482a", "text": "This study aims to develop a comprehensive review on the issue of poor school performance for professionals in both the health and education areas. It discusses current aspects of education, learning and the main conditions involved in underachievement. It also presents updated data on key aspects of neurobiology, epidemiology, etiology, clinical presentation, comorbidities and diagnosis, early intervention and treatment of the major pathologies involved. It is a comprehensive, non-systematic literature review on learning, school performance, learning disorders (dyslexia, dyscalculia and dysgraphia), attention deficit/hyperactivity disorder (ADHD) and developmental coordination disorder (DCD). Poor school performance is a frequent problem faced by our children, causing serious emotional, social and economic issues. An updated view of the subject facilitates clinical reasoning, accurate diagnosis and appropriate treatment.", "title": "" }, { "docid": "38d1e06642f12138f8b0a90deeb96979", "text": "Research at the intersection of machine learning, programming languages, and software engineering has recently taken important steps in proposing learnable probabilistic models of source code that exploit the abundance of patterns of code. In this article, we survey this work. We contrast programming languages against natural languages and discuss how these similarities and differences drive the design of probabilistic models. We present a taxonomy based on the underlying design principles of each model and use it to navigate the literature. 
Then, we review how researchers have adapted these models to application areas and discuss cross-cutting and application-specific challenges and opportunities.", "title": "" }, { "docid": "b720df1467aade5dd1ba82602ba14591", "text": "Modern medical devices and equipment have become very complex and sophisticated and are expected to operate under stringent environments. Hospitals must ensure that their critical medical devices are safe, accurate, reliable and operating at the required level of performance. Even though the importance, the application of all inspection, maintenance and optimization models to medical devices is fairly new. In Canada, most, if not all healthcare organizations include all their medical equipment in their maintenance program and just follow manufacturers’ recommendations for preventative maintenance. Then, current maintenance strategies employed in hospitals and healthcare organizations have difficulty in identifying specific risks and applying optimal risk reduction activities. This paper addresses these gaps found in literature for medical equipment inspection and maintenance and reviews various important aspects including current policies applied in hospitals. Finally we suggest future research which will be the starting point to develop tools and policies for better medical devices management in the future.", "title": "" }, { "docid": "b8334d21af0d511b13dcaf27b6916dc5", "text": "Almost all of today’s knowledge is stored in databases and thus can only be accessed with the help of domain specific query languages, strongly limiting the number of people which can access the data. In this work, we demonstrate an end-to-end trainable question answering (QA) system that allows a user to query an external NoSQL database by using natural language. A major challenge of such a system is the non-differentiability of database operations which we overcome by applying policy-based reinforcement learning. We evaluate our approach on Facebook’s bAbI Movie Dialog dataset and achieve a competitive score of 84.2% compared to several benchmark models. We conclude that our approach excels with regard to real-world scenarios where knowledge resides in external databases and intermediate labels are too costly to gather for non-end-to-end trainable QA systems.", "title": "" }, { "docid": "99f93328d19ac240378c5cfe08cf9f9e", "text": "Email classification is still a mostly manual task. Consequently, most Web mail users never define a single folder. Recently however, automatic classification offering the same categories to all users has started to appear in some Web mail clients, such as AOL or Gmail. We adopt this approach, rather than previous (unsuccessful) personalized approaches because of the change in the nature of consumer email traffic, which is now dominated by (non-spam) machine-generated email. We propose here a novel approach for (1) automatically distinguishing between personal and machine-generated email and (2) classifying messages into latent categories, without requiring users to have defined any folder. We report how we have discovered that a set of 6 \"latent\" categories (one for human- and the others for machine-generated messages) can explain a significant portion of email traffic. We describe in details the steps involved in building a Web-scale email categorization system, from the collection of ground-truth labels, the selection of features to the training of models. 
Experimental evaluation was performed on more than 500 billion messages received during a period of six months by users of Yahoo mail service, who elected to be part of such research studies. Our system achieved precision and recall rates close to 90% and the latent categories we discovered were shown to cover 70% of both email traffic and email search queries. We believe that these results pave the way for a change of approach in the Web mail industry, and could support the invention of new large-scale email discovery paradigms that had not been possible before.", "title": "" }, { "docid": "2be043b09e6dd631b5fe6f9eed44e2ec", "text": "This article aims to contribute to a critical research agenda for investigating the democratic implications of citizen journalism and social news. The article calls for a broad conception of ‘citizen journalism’ which is (1) not an exclusively online phenomenon, (2) not confined to explicitly ‘alternative’ news sources, and (3) includes ‘metajournalism’ as well as the practices of journalism itself. A case is made for seeing democratic implications not simply in the horizontal or ‘peer-to-peer’ public sphere of citizen journalism networks, but also in the possibility of a more ‘reflexive’ culture of news consumption through citizen participation. The article calls for a research agenda that investigates new forms of gatekeeping and agendasetting power within social news and citizen journalism networks and, drawing on the example of three sites, highlights the importance of both formal and informal status differentials and of the software ‘code’ structuring these new modes of news", "title": "" }, { "docid": "6a763e49cdfd41b28922eb536d9404ed", "text": "With recent advances in computer vision and graphics, it is now possible to generate videos with extremely realistic synthetic faces, even in real time. Countless applications are possible, some of which raise a legitimate alarm, calling for reliable detectors of fake videos. In fact, distinguishing between original and manipulated video can be a challenge for humans and computers alike, especially when the videos are compressed or have low resolution, as it often happens on social networks. Research on the detection of face manipulations has been seriously hampered by the lack of adequate datasets. To this end, we introduce a novel face manipulation dataset of about half a million edited images (from over 1000 videos). The manipulations have been generated with a state-of-the-art face editing approach. It exceeds all existing video manipulation datasets by at least an order of magnitude. Using our new dataset, we introduce benchmarks for classical image forensic tasks, including classification and segmentation, considering videos compressed at various quality levels. In addition, we introduce a benchmark evaluation for creating indistinguishable forgeries with known ground truth; for instance with generative refinement models.", "title": "" }, { "docid": "3f9f01e3b3f5ab541cbe78fb210cf744", "text": "The reliable and effective localization system is the basis of Automatic Guided Vehicle (AGV) to complete given tasks automatically in warehouse environment. However, there are no obvious features that can be used for localization of AGV to be extracted in warehouse environment and it dose make it difficult to realize the localization of AGV. So in this paper, we concentrate on the problem of optimal landmarks placement in warehouse so as to improve the reliability of localization. 
Firstly, we take the practical warehouse environment into consideration and transform the problem of landmarks placement into an optimization problem which aims at maximizing the difference degree between each basic unit of localization. Then Genetic Algorithm (GA) is used to solve the optimization problem. Then we match the observed landmarks with the already known ones stored in the map and the Triangulation method is used to estimate the position of AGV after the matching has been done. Finally, experiments in a real warehouse environment validate the effectiveness and reliability of our method.", "title": "" }, { "docid": "8aeead40ab3112b0ef69c77c73885d46", "text": "We provide a new understanding of the fundamental nature of adversarially robust classifiers and how they differ from standard models. In particular, we show that there provably exists a trade-off between the standard accuracy of a model and its robustness to adversarial perturbations. We demonstrate an intriguing phenomenon at the root of this tension: a certain dichotomy between “robust” and “non-robust” features. We show that while robustness comes at a price, it also has some surprising benefits. Robust models turn out to have interpretable gradients and feature representations that align unusually well with salient data characteristics. In fact, they yield striking feature interpolations that have thus far been possible to obtain only using generative models such as GANs.", "title": "" } ]
scidocsrr
147a8f2b62ceea97cf02c011f6d8446f
Scaled Current Tracking Control for Doubly Fed Induction Generator to Ride-Through Serious Grid Faults
[ { "docid": "8066246656f6a9a3060e42efae3b197f", "text": "The paper describes the engineering and design of a doubly fed induction generator (DFIG), using back-to-back PWM voltage-source converters in the rotor circuit. A vector-control scheme for the supply-side PWM converter results in independent control of active and reactive power drawn from the supply, while ensuring sinusoidal supply currents. Vector control of the rotor-connected converter provides for wide speed-range operation; the vector scheme is embedded in control loops which enable optimal speed tracking for maximum energy capture from the wind. An experimental rig, which represents a 1.5 kW variable speed wind-energy generation system is described, and experimental results are given that illustrate the excellent performance characteristics of the system. The paper considers a grid-connected system; a further paper will describe a stand-alone system.", "title": "" } ]
[ { "docid": "f613a2ed6f64c469cf1180d1e8fe9e4a", "text": "We describe an estimation technique which, given a measurement of the depth of a target from a wide-fieldof-view (WFOV) stereo camera pair, produces a minimax risk fixed-size confidence interval estimate for the target depth. This work constitutes the first application to the computer vision domain of optimal fixed-size confidenceinterval decision theory. The approach is evaluated in terms of theoretical capture probability and empirical cap ture frequency during actual experiments with a target on an optical bench. The method is compared to several other procedures including the Kalman Filter. The minimax approach is found to dominate all the other methods in performance. In particular, for the minimax approach, a very close agreement is achieved between theoreticalcapture probability andempiricalcapture frequency. This allows performance to be accurately predicted, greatly facilitating the system design, and delineating the tasks that may be performed with a given system.", "title": "" }, { "docid": "4816d3c4ca52f2ba592b29636b4a3c35", "text": "In this paper, we describe a system that applies maximum entropy (ME) models to the task of named entity recognition (NER). Starting with an annotated corpus and a set of features which are easily obtainable for almost any language, we first build a baseline NE recognizer which is then used to extract the named entities and their context information from additional nonannotated data. In turn, these lists are incorporated into the final recognizer to further improve the recognition accuracy.", "title": "" }, { "docid": "1bf735fc91f375bd3c1d5a437aabf6eb", "text": "In any collaborative system, there are both symmetries and asymmetries present in the design of the technology and in the ways that technology is appropriated. Yet media space research tends to focus more on supporting and fostering the symmetries than the asymmetries. Throughout more than 20 years of media space research, the pursuit of increased symmetry, whether achieved through technical or social means, has been a recurrent theme. The research literature on the use of contemporary awareness systems, in contrast, displays little if any of this emphasis on symmetrical use; indeed, this body of research occasionally highlights the perceived value of asymmetry. In this paper, we unpack the different forms of asymmetry present in both media spaces and contemporary awareness systems. We argue that just as asymmetry has been demonstrated to have value in contemporary awareness systems, so might asymmetry have value in media spaces and in other CSCW systems, more generally. To illustrate, we present a media space that emphasizes and embodies multiple forms of asymmetry and does so in response to the needs of a particular work context.", "title": "" }, { "docid": "c7f0a749e38b3b7eba871fca80df9464", "text": "This paper presents QurAna: a large corpus created from the original Quranic text, where personal pronouns are tagged with their antecedence. These antecedents are maintained as an ontological list of concepts, which has proved helpful for information retrieval tasks. QurAna is characterized by: (a) comparatively large number of pronouns tagged with antecedent information (over 24,500 pronouns), and (b) maintenance of an ontological concept list out of these antecedents. We have shown useful applications of this corpus. 
This corpus is the first of its kind covering Classical Arabic text, and could be used for interesting applications for Modern Standard Arabic as well. This corpus will enable researchers to obtain empirical patterns and rules to build new anaphora resolution approaches. Also, this corpus can be used to train, optimize and evaluate existing approaches.", "title": "" }, { "docid": "46c4b4a68e0be453148779529f235e98", "text": "Received Feb 14, 2017 Revised Apr 14, 2017 Accepted Apr 28, 2017 This paper proposes maximum boost control for 7-level z-source cascaded h-bridge inverter and their affiliation between voltage boost gain and modulation index. Z-source network avoids the usage of external dc-dc boost converter and improves output voltage with minimised harmonic content. Z-source network utilises distinctive LC impedance combination with 7-level cascaded inverter and it conquers the conventional voltage source inverter. The maximum boost controller furnishes voltage boost and maintain constant voltage stress across power switches, which provides better output voltage with variation of duty cycles. Single phase 7-level z-source cascaded inverter simulated using matlab/simulink. Keyword:", "title": "" }, { "docid": "6b4a30948ed87cfc9f3a19a984d94994", "text": "In Ethernet-based time-triggered networks, like TTEthernet, a global communication scheme, for which the schedule synthesis is known to be an NP-complete problem, establishes contention-free windows for the exchange of messages with guaranteed low latency and minimal jitter. However, in order to achieve end-to-end determinism at the application level, software tasks running on the end-system nodes need to obey a similar execution scheme with tight dependencies towards the network domain. In this paper we address the simultaneous co-synthesis of network as well as application schedules for preemptive time-triggered tasks communicating in a switched multi-speed time-triggered network. We use Satisfiability Modulo Theories (SMT) to formulate the scheduling constraints and solve the resulting problem using a state-of-the-art SMT solver. Furthermore, we introduce a novel incremental scheduling approach, based on the demand bound test for asynchronous constrained-deadline periodic tasks, which significantly improves scalability for the average case without sacrificing schedulability. We demonstrate the performance of our approach using synthetic network topologies and system configurations.", "title": "" }, { "docid": "5f811c5f95c60c6edc48b1fedab07a2a", "text": "This paper discusses dexterous, within-hand manipulation with differential-type underactuated hands. We discuss the fact that not only can this class of hands, which to date have been considered almost exclusively for adaptive grasping, be utilized for precision manipulation, but also that the reduction of the number of actuators and constraints can make within-hand manipulation easier to implement and control. Next, we introduce an analytical framework for evaluating the dexterous workspace of objects held within the fingertips in a precision grasp. A set of design principles for underactuated fingers are developed that enable fingertip grasping and manipulation. Finally, we apply this framework to analyze the workspace of stable object configurations for an object held within a pinch grasp of a two-fingered underactuated planar hand, demonstrating a large and useful workspace despite only one actuator per finger. 
The in-hand manipulation workspace for the iRobot–Harvard–Yale Hand is experimentally measured and presented.", "title": "" }, { "docid": "5595102130b4c03c7f65f31207951f79", "text": "Being a leading location-based social network (LBSN), Foursquare’s Swarm app allows users to conduct checkins at a specified location and share their real-time locations with friends. This app records a massive set of spatio-temporal information of users around the world. In this paper, we track the evolution of user density of the Swarm app in New York City (NYC) for one entire week. We study the temporal patterns of different venue categories, and investigate how the function of venue categories affects the temporal behavior of visitors. Moreover, by applying time-series analysis, we validate that the temporal patterns can be effectively decomposed into regular parts which represent the regular human behavior and stochastic parts which represent the randomness of human behavior. Finally, we build a model to predict the evolution of the user density, and our results demonstrate an accurate prediction.", "title": "" }, { "docid": "ffbe1b8861515e0801da9cb514e490b7", "text": "A mathematical study is performed to assess how the arterial pressure-volume (P-V) relationship, blood pressure pulse amplitude and shape affect the results of non-invasive oscillometric finger mean blood pressure estimation by the maximum oscillation criterion (MOC). The exponential models for a relaxed finger artery and for a partly contracted artery are studied. A new modification of the error equation is suggested. This equation and the results of simulation demonstrate that the value of pressure estimated by the MOC does not exactly agree with the value of the true mean blood pressure (the latter being defined as pressure corresponding to maximum arterial compliance). The error depends on the arterial pressure pulse amplitude, as well as on the difference between the arterial pressure pulse shape index and the arterial P-V curve shape index. In the case of contracted finger arteries, the MOC can give an overestimation of up to 19 mmHg, the pressure pulse shape index being 0.21 and the pulse amplitude 60 mmHg. In the case of relaxed arteries, the error is less evident.", "title": "" }, { "docid": "1796b8d91de88303571cc6f3f66b580b", "text": "In this paper it is shown that bifilar of a Quadrifilar Helix Antenna (QHA) when designed in side-fed configuration at a given diameter and length of helical arm, effectively becomes equivalent to combination of a loop and a dipole antenna. The vertical and horizontal electric fields caused by these equivalent antennas can be made to vary by changing the turn angle of the bifilar. It is shown how the variation in horizontal and vertical electric field dominance is seen until perfect circular polarization is achieved when two fields are equal at a certain turn angle where area of the loop equals product of pitch of helix and radian length i.e. equivalent dipole length. The antenna is low profile and does not require ground plane and thus can be used in high speed aerodynamic and platform bodies made of composite material where metallic ground is unavailable. 
Additionally not requiring ground plane increases the isolation between the antennas with stable radiation pattern and hence can be used in MIMO systems.", "title": "" }, { "docid": "e34a61754ff8cfac053af5cbedadd9e0", "text": "An ongoing, annual survey of publications in systems and software engineering identifies the top 15 scholars and institutions in the field over a 5-year period. Each ranking is based on the weighted scores of the number of papers published in TSE, TOSEM, JSS, SPE, EMSE, IST, and Software of the corresponding period. This report summarizes the results for 2003–2007 and 2004–2008. The top-ranked institution is Korea Advanced Institute of Science and Technology, Korea for 2003–2007, and Simula Research Laboratory, Norway for 2004–2008, while Magne Jørgensen is the top-ranked scholar for both periods.", "title": "" }, { "docid": "1573020547c887b8f54948e99b87ca53", "text": "Supercomputing centers are seeing increasing demand for user-defined software stacks (UDSS), instead of or in addition to the stack provided by the center. These UDSS support user needs such as complex dependencies or build requirements, externally required configurations, portability, and consistency. The challenge for centers is to provide these services in a usable manner while minimizing the risks: security, support burden, missing functionality, and performance. We present Charliecloud, which uses the Linux user and mount namespaces to run industry-standard Docker containers with no privileged operations or daemons on center resources. Our simple approach avoids most security risks while maintaining access to the performance and functionality already on offer, doing so in just 800 lines of code. Charliecloud promises to bring an industry-standard UDSS user workflow to existing, minimally altered HPC resources.", "title": "" }, { "docid": "38d791ebe063bd58a04afd21e6d8f25a", "text": "The design of a Web search evaluation metric is closely related with how the user's interaction process is modeled. Each behavioral model results in a different metric used to evaluate search performance. In these models and the user behavior assumptions behind them, when a user ends a search session is one of the prime concerns because it is highly related to both benefit and cost estimation. Existing metric design usually adopts some simplified criteria to decide the stopping time point: (1) upper limit for benefit (e.g. RR, AP); (2) upper limit for cost (e.g. Precision@N, DCG@N). However, in many practical search sessions (e.g. exploratory search), the stopping criterion is more complex than the simplified case. Analyzing benefit and cost of actual users' search sessions, we find that the stopping criteria vary with search tasks and are usually combination effects of both benefit and cost factors. Inspired by a popular computer game named Bejeweled, we propose a Bejeweled Player Model (BPM) to simulate users' search interaction processes and evaluate their search performances. In the BPM, a user stops when he/she either has found sufficient useful information or has no more patience to continue. Given this assumption, a new evaluation framework based on upper limits (either fixed or changeable as search proceeds) for both benefit and cost is proposed. We show how to derive a new metric from the framework and demonstrate that it can be adopted to revise traditional metrics like Discounted Cumulative Gain (DCG), Expected Reciprocal Rank (ERR) and Average Precision (AP). 
To show effectiveness of the proposed framework, we compare it with a number of existing metrics in terms of correlation between user satisfaction and the metrics based on a dataset that collects users' explicit satisfaction feedbacks and assessors' relevance judgements. Experiment results show that the framework is better correlated with user satisfaction feedbacks.", "title": "" }, { "docid": "b4cadd9179150203638ff9b045a4145d", "text": "Interpenetrating network (IPN) hydrogel membranes of sodium alginate (SA) and poly(vinyl alcohol) (PVA) were prepared by solvent casting method for transdermal delivery of an anti-hypertensive drug, prazosin hydrochloride. The prepared membranes were thin, flexible and smooth. The X-ray diffraction studies indicated the amorphous dispersion of drug in the membranes. Differential scanning calorimetric analysis confirmed the IPN formation and suggests that the membrane stiffness increases with increased concentration of glutaraldehyde (GA) in the membranes. All the membranes were permeable to water vapors depending upon the extent of cross-linking. The in vitro drug release study was performed through excised rat abdominal skin; drug release depends on the concentrations of GA in membranes. The IPN membranes extended drug release up to 24 h, while SA and PVA membranes discharged the drug quickly. The primary skin irritation and skin histopathology study indicated that the prepared IPN membranes were less irritant and safe for skin application.", "title": "" }, { "docid": "e75df6ff31c9840712cf1a4d7f6582cd", "text": "Endotoxin, a constituent of Gram-negative bacteria, stimulates macrophages to release large quantities of tumor necrosis factor (TNF) and interleukin-1 (IL-1), which can precipitate tissue injury and lethal shock (endotoxemia). Antagonists of TNF and IL-1 have shown limited efficacy in clinical trials, possibly because these cytokines are early mediators in pathogenesis. Here a potential late mediator of lethality is identified and characterized in a mouse model. High mobility group-1 (HMG-1) protein was found to be released by cultured macrophages more than 8 hours after stimulation with endotoxin, TNF, or IL-1. Mice showed increased serum levels of HMG-1 from 8 to 32 hours after endotoxin exposure. Delayed administration of antibodies to HMG-1 attenuated endotoxin lethality in mice, and administration of HMG-1 itself was lethal. Septic patients who succumbed to infection had increased serum HMG-1 levels, suggesting that this protein warrants investigation as a therapeutic target.", "title": "" }, { "docid": "835f004b55534f051a5dc98dc8852e12", "text": "The focus of this paper is on presentation attack detection for the iris biometrics, which measures the pattern within the colored concentric circle of the subjects' eyes, to authenticate an individual to a generic user verification system. Unlike previous deep learning methods that use single convolutional neural network architectures, this paper develops a framework built upon triplet convolutional networks that takes as input two real iris patches and a fake patch or two fake patches and a genuine patch. The aim is to increase the number of training samples and to generate a representation that separates the real from the fake iris patches. The smaller architecture provides a way to do early stopping based on the liveness of single patches rather than the whole image. The matching is performed by computing the distance with respect to a reference set of real and fake examples. 
The proposed approach allows for real-time processing using a smaller network and provides equal or better than state-of-the-art performance on three benchmark datasets of photo-based and contact lens presentation attacks.", "title": "" }, { "docid": "cfaf2c04cd06103489ac60d00a70cd2c", "text": "BACKGROUND\nΔ(9)-Tetrahydrocannabinol (THC), 11-nor-9-carboxy-THC (THCCOOH), and cannabinol (CBN) were measured in breath following controlled cannabis smoking to characterize the time course and windows of detection of breath cannabinoids.\n\n\nMETHODS\nExhaled breath was collected from chronic (≥4 times per week) and occasional (<twice per week) smokers before and after smoking a 6.8% THC cigarette. Sample analysis included methanol extraction from breath pads, solid-phase extraction, and liquid chromatography-tandem mass spectrometry quantification.\n\n\nRESULTS\nTHC was the major cannabinoid in breath; no sample contained THCCOOH and only 1 contained CBN. Among chronic smokers (n = 13), all breath samples were positive for THC at 0.89 h, 76.9% at 1.38 h, and 53.8% at 2.38 h, and only 1 sample was positive at 4.2 h after smoking. Among occasional smokers (n = 11), 90.9% of breath samples were THC-positive at 0.95 h and 63.6% at 1.49 h. One occasional smoker had no detectable THC. Analyte recovery from breath pads by methanolic extraction was 84.2%-97.4%. Limits of quantification were 50 pg/pad for THC and CBN and 100 pg/pad for THCCOOH. Solid-phase extraction efficiency was 46.6%-52.1% (THC) and 76.3%-83.8% (THCCOOH, CBN). Matrix effects were -34.6% to 12.3%. Cannabinoids fortified onto breath pads were stable (≤18.2% concentration change) for 8 h at room temperature and -20°C storage for 6 months.\n\n\nCONCLUSIONS\nBreath may offer an alternative matrix for identifying recent driving under the influence of cannabis, but currently sensitivity is limited to a short detection window (0.5-2 h).", "title": "" }, { "docid": "cded40190ef8cc022adeb97c2e77ce36", "text": "Question classification is very important for question answering. This paper present our research work on question classification through machine learning approach. In order to train the learning model, we designed a rich set of features that are predictive of question categories. An important component of question answering systems is question classification. The task of question classification is to predict the entity type of the answer of a natural language question. Question classification is typically done using machine learning techniques. Different lexical, syntactical and semantic features can be extracted from a question. In this work we combined lexical, syntactic and semantic features which improve the accuracy of classification. Furthermore, we adopted three different classifiers: Nearest Neighbors (NN), Naïve Bayes (NB), and Support Vector Machines (SVM) using two kinds of features: bag-of-words and bag-of n grams. Furthermore, we discovered that when we take SVM classifier and combine the semantic, syntactic, lexical feature we found that it will improve the accuracy of classification. We tested our proposed approaches on the well-known UIUC dataset and succeeded to achieve a new record on the accuracy of classification on this dataset.", "title": "" }, { "docid": "a45be66a54403701a8271c3063dd24d8", "text": "This paper highlights the role of humans in the next generation of driver assistance and intelligent vehicles. 
Understanding, modeling, and predicting human agents are discussed in three domains where humans and highly automated or self-driving vehicles interact: 1) inside the vehicle cabin, 2) around the vehicle, and 3) inside surrounding vehicles. Efforts within each domain, integrative frameworks across domains, and scientific tools required for future developments are discussed to provide a human-centered perspective on research in intelligent vehicles.", "title": "" } ]
scidocsrr
6cf17f7076502c1c982b5c3f6ae43bd3
Gaussian Processes for Rumour Stance Classification in Social Media
[ { "docid": "9ae491c47c20a746eb13f3370217a8fa", "text": "The open structure of online social networks and their uncurated nature give rise to problems of user credibility and influence. In this paper, we address the task of predicting the impact of Twitter users based only on features under their direct control, such as usage statistics and the text posted in their tweets. We approach the problem as regression and apply linear as well as nonlinear learning methods to predict a user impact score, estimated by combining the numbers of the user’s followers, followees and listings. The experimental results point out that a strong prediction performance is achieved, especially for models based on the Gaussian Processes framework. Hence, we can interpret various modelling components, transforming them into indirect ‘suggestions’ for impact boosting.", "title": "" } ]
[ { "docid": "fe2b8921623f3bcf7b8789853b45e912", "text": "OBJECTIVE\nTo establish the psychosexual outcome of gender-dysphoric children at 16 years or older and to examine childhood characteristics related to psychosexual outcome.\n\n\nMETHOD\nWe studied 77 children who had been referred in childhood to our clinic because of gender dysphoria (59 boys, 18 girls; mean age 8.4 years, age range 5-12 years). In childhood, we measured the children's cross-gender identification and discomfort with their own sex and gender roles. At follow-up 10.4 +/- 3.4 years later, 54 children (mean age 18.9 years, age range 16-28 years) agreed to participate. In this group, we assessed gender dysphoria and sexual orientation.\n\n\nRESULTS\nAt follow-up, 30% of the 77 participants (19 boys and 4 girls) did not respond to our recruiting letter or were not traceable; 27% (12 boys and 9 girls) were still gender dysphoric (persistence group), and 43% (desistance group: 28 boys and 5 girls) were no longer gender dysphoric. Both boys and girls in the persistence group were more extremely cross-gendered in behavior and feelings and were more likely to fulfill gender identity disorder (GID) criteria in childhood than the children in the other two groups. At follow-up, nearly all male and female participants in the persistence group reported having a homosexual or bisexual sexual orientation. In the desistance group, all of the girls and half of the boys reported having a heterosexual orientation. The other half of the boys in the desistance group had a homosexual or bisexual sexual orientation.\n\n\nCONCLUSIONS\nMost children with gender dysphoria will not remain gender dysphoric after puberty. Children with persistent GID are characterized by more extreme gender dysphoria in childhood than children with desisting gender dysphoria. With regard to sexual orientation, the most likely outcome of childhood GID is homosexuality or bisexuality.", "title": "" }, { "docid": "dc23ec643882393b69adca86c944bef4", "text": "This memo describes a snapshot of the reasoning behind a proposed new namespace, the Host Identity namespace, and a new protocol layer, the Host Identity Protocol (HIP), between the internetworking and transport layers. Herein are presented the basics of the current namespaces, their strengths and weaknesses, and how a new namespace will add completeness to them. The roles of this new namespace in the protocols are defined. The memo describes the thinking of the authors as of Fall 2003. The architecture may have evolved since. This document represents one stable point in that evolution of understanding.", "title": "" }, { "docid": "8ea2dadd6024e2f1b757818e0c5d76fa", "text": "BACKGROUND\nLysergic acid diethylamide (LSD) is a potent serotonergic hallucinogen or psychedelic that modulates consciousness in a marked and novel way. This study sought to examine the acute and mid-term psychological effects of LSD in a controlled study.\n\n\nMETHOD\nA total of 20 healthy volunteers participated in this within-subjects study. Participants received LSD (75 µg, intravenously) on one occasion and placebo (saline, intravenously) on another, in a balanced order, with at least 2 weeks separating sessions. Acute subjective effects were measured using the Altered States of Consciousness questionnaire and the Psychotomimetic States Inventory (PSI). 
A measure of optimism (the Revised Life Orientation Test), the Revised NEO Personality Inventory, and the Peter's Delusions Inventory were issued at baseline and 2 weeks after each session.\n\n\nRESULTS\nLSD produced robust psychological effects; including heightened mood but also high scores on the PSI, an index of psychosis-like symptoms. Increased optimism and trait openness were observed 2 weeks after LSD (and not placebo) and there were no changes in delusional thinking.\n\n\nCONCLUSIONS\nThe present findings reinforce the view that psychedelics elicit psychosis-like symptoms acutely yet improve psychological wellbeing in the mid to long term. It is proposed that acute alterations in mood are secondary to a more fundamental modulation in the quality of cognition, and that increased cognitive flexibility subsequent to serotonin 2A receptor (5-HT2AR) stimulation promotes emotional lability during intoxication and leaves a residue of 'loosened cognition' in the mid to long term that is conducive to improved psychological wellbeing.", "title": "" }, { "docid": "05b362c5dd31decd8d0d33ba45a36783", "text": "Behavioral interventions preceded by a functional analysis have been proven efficacious in treating severe problem behavior associated with autism. There is, however, a lack of research showing socially validated outcomes when assessment and treatment procedures are conducted by ecologically relevant individuals in typical settings. In this study, interview-informed functional analyses and skill-based treatments (Hanley et al. in J Appl Behav Anal 47:16-36, 2014) were applied by a teacher and home-based provider in the classroom and home of two children with autism. The function-based treatments resulted in socially validated reductions in severe problem behavior (self-injury, aggression, property destruction). Furthermore, skills lacking in baseline-functional communication, denial and delay tolerance, and compliance with adult instructions-occurred with regularity following intervention. The generality and costs of the process are discussed.", "title": "" }, { "docid": "39cf15285321c7d56904c8c59b3e1373", "text": "J. Naidoo1*, D. B. Page2, B. T. Li3, L. C. Connell3, K. Schindler4, M. E. Lacouture5,6, M. A. Postow3,6 & J. D. Wolchok3,6 Sidney Kimmel Comprehensive Cancer Center, Johns Hopkins University, Baltimore; Providence Portland Medical Center and Earl A. Chiles Research Institute, Portland; Department of Medicine and Ludwig Center, Memorial Sloan Kettering Cancer Center, New York, USA; Department of Dermatology, Medical University of Vienna, Vienna, Austria; Dermatology Service, Memorial Sloan Kettering Cancer Center, New York; Department of Medicine, Weill Cornell Medical College, New York, USA", "title": "" }, { "docid": "711ad6f6641b916f25f08a32d4a78016", "text": "Information technology (IT) such as Electronic Data Interchange (EDI), Radio Frequency Identification Technology (RFID), wireless, the Internet and World Wide Web (WWW), and Information Systems (IS) such as Electronic Commerce (E-Commerce) systems and Enterprise Resource Planning (ERP) systems have had tremendous impact in education, healthcare, manufacturing, transportation, retailing, pure services, and even war. Many organizations turned to IT/IS to help them achieve their goals; however, many failed to achieve the full potential of IT/IS. These failures can be attributed at least in part to a weak link in the planning process. That weak link is the IT/IS justification process. 
The decision-making process has only grown more difficult in recent years with the increased complexity of business brought about by the rapid growth of supply chain management, the virtual enterprise and E-business. These are but three of the many changes in the business environment over the past 10–12 years. The complexities of this dynamic new business environment should be taken into account in IT/IS justification. We conducted a review of the current literature on IT/IS justification. The purpose of the literature review was to assemble meaningful information for the development of a framework for IT/IS evaluation that better reflects the new business environment. A suitable classification scheme has been proposed for organizing the literature reviewed. Directions for future research are indicated. 2005 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "20def85748f9d2f71cd34c4f0ca7f57c", "text": "Recent advances in artificial intelligence (AI) and machine learning, combined with developments in neuromorphic hardware technologies and ubiquitous computing, promote machines to emulate human perceptual and cognitive abilities in a way that will continue the trend of automation for several upcoming decades. Despite the gloomy scenario of automation as a job eliminator, we argue humans and machines can cross-fertilise in a way that forwards a cooperative coexistence. We build our argument on three pillars: (i) the economic mechanism of automation, (ii) the dichotomy of ‘experience’ that separates the first-person perspective of humans from artificial learning algorithms, and (iii) the interdependent relationship between humans and machines. To realise this vision, policy makers have to implement alternative educational approaches that support lifelong training and flexible job transitions.", "title": "" }, { "docid": "f5d8c506c9f25bff429cea1ed4c84089", "text": "Therabot is a robotic therapy support system designed to supplement a therapist and to provide support to patients diagnosed with conditions associated with trauma and adverse events. The system takes on the form factor of a floppy-eared dog which fits in a person»s lap and is designed for patients to provide support and encouragement for home therapy exercises and in counseling.", "title": "" }, { "docid": "100c152685655ad6865f740639dd7d57", "text": "Semantic image inpainting is a challenging task where large missing regions have to be filled based on the available visual data. Existing methods which extract information from only a single image generally produce unsatisfactory results due to the lack of high level context. In this paper, we propose a novel method for semantic image inpainting, which generates the missing content by conditioning on the available data. Given a trained generative model, we search for the closest encoding of the corrupted image in the latent image manifold using our context and prior losses. This encoding is then passed through the generative model to infer the missing content. In our method, inference is possible irrespective of how the missing content is structured, while the state-of-the-art learning based method requires specific information about the holes in the training phase. 
Experiments on three datasets show that our method successfully predicts information in large missing regions and achieves pixel-level photorealism, significantly outperforming the state-of-the-art methods.", "title": "" }, { "docid": "23a329c63f9a778e3ec38c25fa59748a", "text": "Expedia users who prefer the same types of hotels presumably share other commonalities (i.e., non-hotel commonalities) with each other. With this in mind, Kaggle challenged developers to recommend hotels to Expedia users. Armed with a training set containing data about 37 million Expedia users, we set out to do just that. Our machine-learning algorithms ranged from direct applications of material learned in class to multi-part algorithms with novel combinations of recommender system techniques. Kaggle’s benchmark for randomly guessing a user’s hotel cluster is 0.02260, and the mean average precision K = 5 value for näıve recommender systems is 0.05949. Our best combination of machine-learning algorithms achieved a figure just over 0.30. Our results provide insight into performing multi-class classification on data sets that lack linear structure.", "title": "" }, { "docid": "dc810b43c71ab591981454ad20e34b7a", "text": "This paper proposes a real-time variable-Q non-stationary Gabor transform (VQ-NSGT) system for speech pitch shifting. The system allows for time-frequency representations of speech on variable-Q (VQ) with perfect reconstruction and computational efficiency. The proposed VQ-NSGT phase vocoder can be used for pitch shifting by simple frequency translation (transposing partials along the frequency axis) instead of spectral stretching in frequency domain by the Fourier transform. In order to retain natural sounding pitch shifted speech, a hybrid of smoothly varying Q scheme is used to retain the formant structure of the original signal at both low and high frequencies. Moreover, the preservation of transients of speech are improved due to the high time resolution of VQ-NSGT at high frequencies. A sliced VQ-NSGT is used to retain inter-partials phase coherence by synchronized overlap-add method. Therefore, the proposed system lends itself to real-time processing while retaining the formant structure of the original signal and inter-partial phase coherence. The simulation results showed that the proposed approach is suitable for pitch shifting of both speech and music signals.", "title": "" }, { "docid": "f9c4f413618d94b78b96c8cb188e09c5", "text": "We propose to detect abnormal events via a sparse reconstruction over the normal bases. Given a collection of normal training examples, e.g., an image sequence or a collection of local spatio-temporal patches, we propose the sparse reconstruction cost (SRC) over the normal dictionary to measure the normalness of the testing sample. By introducing the prior weight of each basis during sparse reconstruction, the proposed SRC is more robust compared to other outlier detection criteria. To condense the over-completed normal bases into a compact dictionary, a novel dictionary selection method with group sparsity constraint is designed, which can be solved by standard convex optimization. Observing that the group sparsity also implies a low rank structure, we reformulate the problem using matrix decomposition, which can handle large scale training samples by reducing the memory requirement at each iteration from O(k2) to O(k) where k is the number of samples. 
We use the column wise coordinate descent to solve the matrix decomposition represented formulation, which empirically leads to a similar solution to the group sparsity formulation. By designing different types of spatio-temporal basis, our method can detect both local and global abnormal events. Meanwhile, as it does not rely on object detection and tracking, it can be applied to crowded video scenes. By updating the dictionary incrementally, our 1This work was supported in part by the Nanyang Assistant Professorship (M4080134), JSPSNTU joint project (M4080882), Natural Science Foundation of China (61105013), and National Science and Technology Pillar Program (2012BAI14B03). Part of this work was done when Yang Cong was a research fellow at NTU. Preprint submitted to Pattern Recognition January 30, 2013 method can be easily extended to online event detection. Experiments on three benchmark datasets and the comparison to the state-of-the-art methods validate the advantages of our method.", "title": "" }, { "docid": "7d32ed1dbd25e7845bf43f58f42be34a", "text": "ETHNOPHARMACOLOGICAL RELEVANCE\nSenna occidentalis, Leonotis ocymifolia, Leucas martinicensis, Rumex abyssinicus, and Albizia schimperiana are traditionally used for treatment of various ailments including helminth infection in Ethiopia.\n\n\nMATERIALS AND METHODS\nIn vitro egg hatch assay and larval development tests were conducted to determine the possible anthelmintic effects of crude aqueous and hydro-alcoholic extracts of the leaves of Senna occidentalis, aerial parts of Leonotis ocymifolia, Leucas martinicensis, Rumex abyssinicus, and stem bark of Albizia schimperiana on eggs and larvae of Haemonchus contortus.\n\n\nRESULTS\nBoth aqueous and hydro-alcoholic extracts of Leucas martinicensis, Leonotis ocymifolia and aqueous extract of Senna occidentalis and Albizia schimperiana induced complete inhibition of egg hatching at concentration less than or equal to 1mg/ml. Aqueous and hydro-alcoholic extracts of all tested medicinal plants have shown statistically significant and dose dependent egg hatching inhibition. Based on ED(50), the most potent extracts were aqueous and hydro-alcoholic extracts of Leucas martinicensis (0.09 mg/ml), aqueous extracts of Rumex abyssinicus (0.11 mg/ml) and Albizia schimperiana (0.11 mg/ml). Most of the tested plant extracts have shown remarkable larval development inhibition. Aqueous extracts of Leonotis ocymifolia, Leucas martinicensis, Albizia schimperiana and Senna occidentalis induced 100, 99.85, 99.31, and 96.36% inhibition of larval development, respectively; while hydro-alcoholic extracts of Albizia schimperiana induced 99.09 inhibition at the highest concentration tested (50mg/ml). Poor inhibition was recorded for hydro-alcoholic extracts of Senna occidentalis (9%) and Leonotis ocymifolia (37%) at 50mg/ml.\n\n\nCONCLUSIONS\nThe overall findings of the current study indicated that the evaluated medicinal plants have potential anthelmintic effect and further in vitro and in vivo evaluation is indispensable to make use of these plants.", "title": "" }, { "docid": "f97093a848329227f363a8a073a6334a", "text": "With the increasing in mobile application systems and a high competition between companies, that led to increase in the number of mobile application projects. Mobile software development is a group of process for creating software for mobile devices with limited resources like small screen, low-power. 
The development of mobile applications is a big challenge because of rapidly changing business requirements and technical constraints for mobile systems. Developers therefore face a dynamic environment and frequently changing mobile application requirements. Moreover, mobile applications should adopt appropriate software development methods that respond efficiently to these challenges. However, at the moment, there is limited knowledge about the suitability of different software practices for the development of mobile applications. According to many researchers, agile methodologies are the most suitable for mobile development projects, as they involve short development cycles, provide flexibility, and reduce waste and time to market. Finally, in this research we look for a suitable process model that conforms to the requirements of mobile applications; we investigate agile development methods to find a way of making mobile application development easy and compatible with mobile device features.", "title": "" }, { "docid": "bfde0c836406a25a08b7c95b330aaafa", "text": "The concept of agile process models has gained great popularity in the software (SW) development community in the past few years. Agile models promote fast development. This property has certain drawbacks, such as poor documentation and bad quality. Fast development promotes the use of agile process models in small-scale projects. This paper modifies and evaluates the extreme programming (XP) process model and proposes a novel adaptive process model based on these modifications.", "title": "" }, { "docid": "a8e665f8b7ea7473e5f7095d12db00ce", "text": "Although there has been considerable progress in reducing cancer incidence in the United States, the number of cancer survivors continues to increase due to the aging and growth of the population and improvements in survival rates. As a result, it is increasingly important to understand the unique medical and psychosocial needs of survivors and be aware of resources that can assist patients, caregivers, and health care providers in navigating the various phases of cancer survivorship. To highlight the challenges and opportunities to serve these survivors, the American Cancer Society and the National Cancer Institute estimated the prevalence of cancer survivors on January 1, 2012 and January 1, 2022, by cancer site. Data from Surveillance, Epidemiology, and End Results (SEER) registries were used to describe median age and stage at diagnosis and survival; data from the National Cancer Data Base and the SEER-Medicare Database were used to describe patterns of cancer treatment. An estimated 13.7 million Americans with a history of cancer were alive on January 1, 2012, and by January 1, 2022, that number will increase to nearly 18 million. The 3 most prevalent cancers among males are prostate (43%), colorectal (9%), and melanoma of the skin (7%), and those among females are breast (41%), uterine corpus (8%), and colorectal (8%). 
This article summarizes common cancer treatments, survival rates, and posttreatment concerns and introduces the new National Cancer Survivorship Resource Center, which has engaged more than 100 volunteer survivorship experts nationwide to develop tools for cancer survivors, caregivers, health care professionals, advocates, and policy makers.", "title": "" }, { "docid": "582b9c59e07922ae3d5b01309e030bba", "text": "This paper describes two digital implementations of a new mathematical transform, namely, the second generation curvelet transform in two and three dimensions. The first digital transformation is based on unequally spaced fast Fourier transforms, while the second is based on the wrapping of specially selected Fourier samples. The two implementations essentially differ by the choice of spatial grid used to translate curvelets at each scale and angle. Both digital transformations return a table of digital curvelet coefficients indexed by a scale parameter, an orientation parameter, and a spatial location parameter. And both implementations are fast in the sense that they run in O(n^2 log n) flops for n by n Cartesian arrays; in addition, they are also invertible, with rapid inversion algorithms of about the same complexity. Our digital transformations improve upon earlier implementations (based upon the first generation of curvelets) in the sense that they are conceptually simpler, faster, and far less redundant. The software CurveLab, which implements both transforms presented in this paper, is available at http://www.curvelet.org.", "title": "" }, { "docid": "00f8c6d7fd58f06fc2672443de9773b7", "text": "The utility industry has invested widely in the smart grid (SG) over the past decade. It is considered the future electrical grid, in which information and electricity are delivered in a two-way flow. SG has many Artificial Intelligence (AI) applications such as Artificial Neural Networks (ANN), Machine Learning (ML) and Deep Learning (DL). Recently, DL has been a hot topic for AI applications in many fields, such as time series load forecasting. This paper introduces the common DL algorithms in the literature that have been applied to load forecasting problems in the SG and power systems. The intention of this survey is to explore the different applications of DL used in power system and smart grid load forecasting. In addition, it compares the accuracy results (RMSE and MAE) for the reviewed applications and shows that the use of a convolutional neural network (CNN) with the k-means algorithm achieved a large reduction in RMSE.", "title": "" }, { "docid": "81537ba56a8f0b3beb29a03ed3c74425", "text": "About ten years ago, soon after the Web’s birth, Web “search engines” were first spread by word of mouth. Soon, however, automated search engines became a worldwide phenomenon, especially AltaVista at the beginning. I was pleasantly surprised by the amount and diversity of information made accessible by the Web search engines even in the mid-1990s. The growth in the number of available Web pages is beyond most, if not all, people’s imagination. The search engines enabled people to find information, facts, and references among these Web pages.", "title": "" }, { "docid": "04abe3f22084ab74ed3db8cbda680f62", "text": "Standard targets are typically used for structural (white-box) evaluation of fingerprint readers, e.g., for calibrating imaging components of a reader. 
However, there is no standard method for behavioral (black-box) evaluation of fingerprint readers in operational settings where variations in finger placement by the user are encountered. The goal of this research is to design and fabricate 3D targets for repeatable behavioral evaluation of fingerprint readers. 2D calibration patterns with known characteristics (e.g., sinusoidal gratings of pre-specified orientation and frequency, and fingerprints with known singular points and minutiae) are projected onto a generic 3D finger surface to create electronic 3D targets. A state-of-the-art 3D printer (Stratasys Objet350 Connex) is used to fabricate wearable 3D targets with materials similar in hardness and elasticity to the human finger skin. The 3D printed targets are cleaned using 2M NaOH solution to obtain evaluation-ready 3D targets. Our experimental results show that: 1) features present in the 2D calibration pattern are preserved during the creation of the electronic 3D target; 2) features engraved on the electronic 3D target are preserved during the physical 3D target fabrication; and 3) intra-class variability between multiple impressions of the physical 3D target is small. We also demonstrate that the generated 3D targets are suitable for behavioral evaluation of three different (500/1000 ppi) PIV/Appendix F certified optical fingerprint readers in the operational settings.", "title": "" } ]
scidocsrr